I. Basic Assignments
1. Set up the LMDeploy runtime environment
Conda environment setup
Create a Python 3.10 environment.
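A minimal sketch of the conda setup (the environment name lmdeploy is an assumption, not given above):
conda create -n lmdeploy python=3.10 -y
conda activate lmdeploy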
Install LMDeploy
pip install lmdeploy[all]==0.3.0
Environment setup complete.
2. Chat with the InternLM2-Chat-1.8B model from the command line
Prepare the InternLM2-Chat-1.8B HF model
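If the weights are not already available locally, one way to fetch them is with huggingface_hub; a sketch, assuming the public repo id internlm/internlm2-chat-1_8b and the local path used by the scripts below:
from huggingface_hub import snapshot_download
# download the HF weights into the directory the scripts below expect
snapshot_download(repo_id="internlm/internlm2-chat-1_8b", local_dir="/root/internlm2-chat-1_8b")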
Run the model with the Transformers library
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("/root/internlm2-chat-1_8b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("/root/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inp = "hello"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=[])
print("[OUTPUT]", response)
inp = "please provide three suggestions about time management"
print("[INPUT]", inp)
response, history = model.chat(tokenizer, inp, history=history)
print("[OUTPUT]", response)
Run: python /root/pipeline_transformer.py
Result:
Run time: 12.32 s
Chat with LMDeploy
lmdeploy chat /root/internlm2-chat-1_8b
Continuous inference test:
LMDeploy generates roughly 337.6 words/s, about 8x the throughput of the Transformers library.
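A rough sketch of how such a words/s figure can be measured with the LMDeploy Python pipeline (counting whitespace-separated words is an assumption; the prompt is only illustrative):
import time
from lmdeploy import pipeline

pipe = pipeline('/root/internlm2-chat-1_8b')
start = time.time()
response = pipe(['please provide three suggestions about time management'])
elapsed = time.time() - start
# crude throughput estimate: whitespace-separated words per second
words = len(response[0].text.split())
print(f"{words / elapsed:.1f} words/s")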
II. Advanced Assignments
1. Set the maximum KV Cache occupancy ratio to 0.4, enable W4A16 quantization, and chat with the model from the command line.
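The 4-bit model used below first has to be produced with LMDeploy's W4A16 (AWQ) quantization; a sketch of that step, with calibration settings that are assumptions rather than part of the assignment:
lmdeploy lite auto_awq /root/internlm2-chat-1_8b \
--calib-dataset 'ptb' \
--calib-samples 128 \
--calib-seqlen 1024 \
--w-bits 4 \
--w-group-size 128 \
--work-dir /root/internlm2-chat-1_8b-4bit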
Command:
lmdeploy chat /root/internlm2-chat-1_8b-4bit \
--model-format awq --cache-max-entry-count 0.4
2. Start lmdeploy as an API server with W4A16 quantization enabled and the KV Cache occupancy ratio set to 0.4, then chat with the model using both the command-line client and the Gradio web client.
Command to start the server:
lmdeploy serve api_server \
/root/internlm2-chat-1_8b-4bit \
--model-format awq \
--quant-policy 0 \
--cache-max-entry-count 0.4 \
--server-name 0.0.0.0 \
--server-port 23333 \
--tp 1
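Once the server is up it exposes an OpenAI-compatible API; a minimal client sketch, assuming the openai Python package is installed and reading the served model name from /v1/models:
from openai import OpenAI

# any non-empty key works for a local lmdeploy server
client = OpenAI(api_key='none', base_url='http://localhost:23333/v1')
model_name = client.models.list().data[0].id
resp = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)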
GPU memory usage
Command-line chat
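Connect the command-line client to the API server started above:
lmdeploy serve api_client http://localhost:23333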
Web (Gradio) chat
lmdeploy serve gradio http://localhost:23333 \
--server-name 0.0.0.0 \
--server-port 6006
Error fix: downgrade Gradio with pip install gradio==3.50
3. Run the internlm2-chat-1_8b model through the Python API, with W4A16 quantization enabled and the KV Cache occupancy ratio set to 0.4.
pipeline_kv.py
from lmdeploy import pipeline, TurbomindEngineConfig
# the 4-bit model was quantized with W4A16 (AWQ), so specify model_format='awq';
# cap the KV Cache at 40% of free GPU memory
backend_config = TurbomindEngineConfig(model_format='awq',
                                       cache_max_entry_count=0.4)
pipe = pipeline('/root/internlm2-chat-1_8b-4bit',
                backend_config=backend_config)
response = pipe(['Hi, pls intro yourself', '上海是'])
print(response)
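Sampling parameters can also be passed per call through GenerationConfig; a short sketch appended to pipeline_kv.py (the parameter values are illustrative assumptions):
from lmdeploy import GenerationConfig

# reuse the pipe created above; the values below are only examples
gen_config = GenerationConfig(top_p=0.8, temperature=0.8, max_new_tokens=256)
response = pipe(['Hi, pls intro yourself'], gen_config=gen_config)
print(response)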
4. Run the LLaVA vision-language model with LMDeploy and build a Gradio demo
Run locally
pipeline_llava.py
from lmdeploy.vl import load_image
from lmdeploy import pipeline, TurbomindEngineConfig
backend_config = TurbomindEngineConfig(session_len=8192)  # increase session_len when the image resolution is high
# pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)  # use this line when not running on the dev machine
pipe = pipeline('/share/new_models/liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
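Run (assuming the script is saved as /root/pipeline_llava.py): python /root/pipeline_llava.py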
Result:
gradio_llava.py
import gradio as gr
from lmdeploy import pipeline, TurbomindEngineConfig

backend_config = TurbomindEngineConfig(session_len=8192)  # increase session_len when the image resolution is high
# pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)  # use this line when not running on the dev machine
pipe = pipeline('/share/new_models/liuhaotian/llava-v1.6-vicuna-7b', backend_config=backend_config)

def model(image, text):
    if image is None:
        return [(text, "Please upload an image.")]
    else:
        response = pipe((text, image)).text
        return [(text, response)]

demo = gr.Interface(fn=model, inputs=[gr.Image(type="pil"), gr.Textbox()], outputs=gr.Chatbot())
demo.launch()
Run the script; after forwarding the port, open http://127.0.0.1:7860 in a browser.
Result: