Basic assignment (required for camp completion)
Complete the following tasks and document the process with screenshots:
- Set up the LMDeploy runtime environment
- Chat with the InternLM2-Chat-1.8B model from the command line
Create a conda environment
studio-conda -t lmdeploy -o pytorch-2.1.2
conda activate lmdeploy
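Before installing anything else, it is worth sanity-checking the fresh environment. A minimal sketch (my addition; assumes the env was cloned from the pytorch-2.1.2 template named above):
import torch
print(torch.__version__)          # expect 2.1.2, matching the env template
print(torch.cuda.is_available())  # expect True on a GPU instance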
Install LMDeploy
pip install lmdeploy[all]==0.3.0
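To confirm the pinned version actually landed, check it from Python (the lmdeploy package exposes __version__):
import lmdeploy
print(lmdeploy.__version__)  # expect 0.3.0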
Download the model (here, by symlinking the shared copy on InternStudio)
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b /root/
# or copy the weights instead of symlinking:
# cp -r /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b /root/
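A quick check that the link resolves to real weight files (a small sketch, not part of the assignment):
import os
model_dir = "/root/internlm2-chat-1_8b"
print(os.path.isdir(model_dir))           # True if the symlink resolves
print(sorted(os.listdir(model_dir))[:5])  # config, tokenizer and weight files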
Run the model with Transformers
Open a terminal in VS Code.
Create a new file:
touch /root/pipeline_transformer.py
Write the following code:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inp = "hello"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=[])
print("[OUTPUT]", response)
inp = "please provide three suggestions about time management"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=history)
print("[OUTPUT]", response)
Run the script:
python /root/pipeline_transformer.py
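This Transformers run serves as the speed baseline for the LMDeploy comparison below. To put a number on it, the same model.chat call can be wrapped with a timer; a minimal sketch reusing the setup from pipeline_transformer.py:
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

# time a single chat turn end to end
start = time.perf_counter()
response, _ = model.chat(tokenizer, "please provide three suggestions about time management", history=[])
print(f"elapsed: {time.perf_counter() - start:.2f}s")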
Deploy the model with LMDeploy
lmdeploy chat /root/internlm2-chat-1_8b
Asking the same questions, inference with LMDeploy is noticeably faster than the Transformers baseline.
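The CLI is the quickest way to chat; the same engine is also available from Python through LMDeploy's pipeline API. A short sketch, assuming lmdeploy==0.3.0 as installed above (each returned Response carries the generated text in its .text field):
from lmdeploy import pipeline

pipe = pipeline("/root/internlm2-chat-1_8b")
responses = pipe(["hello", "please provide three suggestions about time management"])
for r in responses:
    print(r.text)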