1. Setting up the environment. I wanted to take a small risk and try the default, latest versions of everything, so that if anything breaks, working through the fix will make the details stick.
conda create --name xtuner-env python=3.10 -y
conda activate xtuner-env
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=12.1 -c pytorch -c nvidia
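Before moving on it is worth confirming that this PyTorch build can actually see a CUDA device. A small defensive check (my own helper, not part of XTuner or PyTorch):

```python
def cuda_ok():
    """Report whether PyTorch is importable and can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return False, "torch not installed"
    return torch.cuda.is_available(), torch.__version__

available, info = cuda_ok()
print(available, info)  # on a working CUDA 12.1 setup: True and a 2.1.x version string
```

If this prints `False`, fix the driver/toolkit mismatch now rather than after a failed training run.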
cd ~
git clone https://github.com/InternLM/xtuner.git
cd xtuner
pip install -e '.[all]'
pip install -U xtuner  # alternative: install the released package from PyPI (this overrides the editable install above; pick one)
2. Let's get to work.
mkdir ~/ft-oasst1 && cd ~/ft-oasst1
xtuner list-cfg
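`xtuner list-cfg` prints every built-in config name, which is a long list; the one used below can be found by filtering on keywords (recent XTuner versions can also filter directly, e.g. `xtuner list-cfg -p internlm`, if your install supports `-p`). A stdlib sketch of the same idea, using a small illustrative subset of real config names:

```python
# A small, illustrative subset of the names `xtuner list-cfg` prints.
configs = [
    "internlm_chat_7b_qlora_oasst1_e3",
    "internlm_chat_7b_qlora_alpaca_e3",
    "llama2_7b_qlora_oasst1_e3",
]

# Keep only InternLM configs trained on the oasst1 dataset.
matches = [name for name in configs if "internlm" in name and "oasst1" in name]
print(matches)  # ['internlm_chat_7b_qlora_oasst1_e3']
```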
It was painfully slow, but the results finally came out.
cd ~/ft-oasst1
xtuner copy-cfg internlm_chat_7b_qlora_oasst1_e3 .
cp -r /share/temp/model_repos/internlm-chat-7b ~/ft-oasst1/
cd ~/ft-oasst1
cp -r /share/temp/datasets/openassistant-guanaco .
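The openassistant-guanaco data (as published by timdettmers/openassistant-guanaco) is JSON lines where each record holds a whole conversation in one "text" field, with turns marked by "### Human:" and "### Assistant:". A sketch of splitting one record into turns, using a made-up sample line:

```python
import json

# A made-up record in the openassistant-guanaco format: one JSON object per
# line, the whole conversation in a single "text" field.
line = json.dumps({
    "text": "### Human: What is XTuner?### Assistant: A fine-tuning toolkit."
})

record = json.loads(line)
# Split on the "###" turn markers and drop empty fragments.
turns = [t.strip() for t in record["text"].split("###") if t.strip()]
print(turns)
# ['Human: What is XTuner?', 'Assistant: A fine-tuning toolkit.']
```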
xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
cp -r /share/temp/model_repos/internlm-chat-7b /root/personal_assistant/model/Shanghai_AI_Laboratory
xtuner train /root/personal_assistant/config/internlm_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
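Before training, the copied config's model and data paths usually need to point at the local copies rather than the Hugging Face hub names. The config is plain Python, so a string replacement is enough. A hedged sketch on an illustrative two-line excerpt (real XTuner configs contain these two fields, but the file is much longer):

```python
# Illustrative excerpt of a copied XTuner config; the real file is much longer.
config_text = (
    "pretrained_model_name_or_path = 'internlm/internlm-chat-7b'\n"
    "data_path = 'timdettmers/openassistant-guanaco'\n"
)

# Point both fields at the local copies made above.
config_text = config_text.replace(
    "'internlm/internlm-chat-7b'",
    "'/root/personal_assistant/model/Shanghai_AI_Laboratory/internlm-chat-7b'",
)
config_text = config_text.replace(
    "'timdettmers/openassistant-guanaco'",
    "'./openassistant-guanaco'",
)
print(config_text)
```

In practice you would just open the copied `*_copy.py` in an editor and change those two lines by hand.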
ssh -CNg -L 6006:127.0.0.1:6006 root@ssh.intern-ai.org.cn -p 33693
streamlit run /root/personal_assistant/code/InternLM/web_demo.py --server.address 127.0.0.1 --server.port 6006
user_avator = "/root/personal_assistant/code/InternLM/doc/imgs/user.png"
robot_avator = "/root/personal_assistant/code/InternLM/doc/imgs/robot.png"
ssh -CNg -L 127.0.0.1:6006:0.0.0.0:6006 -o "StrictHostKeyChecking no" root@ssh.intern-ai.org.cn -p 33693
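The `-L` option forwards a local port to an address as seen from the remote machine; the two commands above use the two forms of its spec, `port:host:hostport` and `bind_address:port:host:hostport`. A tiny parser (a hypothetical helper, just to make the two forms explicit):

```python
def parse_forward(spec):
    """Split an ssh -L spec, [bind_address:]port:host:hostport, into parts."""
    parts = spec.split(":")
    if len(parts) == 3:                      # port:host:hostport
        bind_address = "localhost"
        local_port, remote_host, remote_port = parts
    else:                                    # bind_address:port:host:hostport
        bind_address, local_port, remote_host, remote_port = parts
    return bind_address, int(local_port), remote_host, int(remote_port)

print(parse_forward("6006:127.0.0.1:6006"))
# ('localhost', 6006, '127.0.0.1', 6006)
print(parse_forward("127.0.0.1:6006:0.0.0.0:6006"))
# ('127.0.0.1', 6006, '0.0.0.0', 6006)
```

So both commands make the dev machine's port 6006 reachable at 127.0.0.1:6006 on the laptop; the second additionally pins the local bind address and skips host-key confirmation.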
Those last three commands are crucial: the ssh tunnel forwards local port 6006 to port 6006 on the development machine, and streamlit serves the web demo on that port, so the demo becomes reachable at http://127.0.0.1:6006 in a local browser.