I re-used the "demo" conda environment from the Demo homework (which I did on 8-28-24), since I didn't want to wait so long for packages (specifically torch) to install again.
I liked this assignment quite a bit! It's a great introduction to XTuner, and the moment it started training was so coooooool. XTuner seems about 10% faster than LLaMA-Factory and avoids out-of-memory errors, so I think it's a pretty good learning experience!
Since I can't easily upload screenshots to this website, I'll post my full work on my github.io site at the link below:
L1 XTuner Homework for LLM Practical Camp Course (InternLM-1.8b)
(Very sorry, I originally gave the wrong link. The website generator serves my site at localhost:8000 so I can preview my changes in real time; once I'm happy with them, I push to the realharryhero.github.io GitHub repo. I accidentally copied the localhost preview link instead of the GitHub repo link.
Thankfully, localhost:8000 just points to a port on your own computer, so that previous link is essentially harmless.)