1. Tasks
- Deploy the InternLM2-Chat-1.8B model with the CLI demo and generate a 300-character short story.
- Deploy InternLM-XComposer2-VL-1.8B with LMDeploy and complete one image-text understanding conversation.
- Deploy InternVL2-2B with LMDeploy and complete one image-text understanding conversation.
2. Task 1: Deploying the InternLM2-Chat-1.8B model
The CLI demo code is as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # bfloat16 halves memory versus float32
    device_map="cuda:0",
)
model = model.eval()

system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
"""

messages = [(system_prompt, "")]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("\nUser >>> ")
    input_text = input_text.replace(" ", "")  # strip spaces from the input
    if input_text == "exit":
        break

    # stream_chat yields the progressively growing response string;
    # print only the newly generated suffix on each iteration
    length = 0
    for response, _ in model.stream_chat(tokenizer, input_text, messages):
        if response is not None:
            print(response[length:], flush=True, end="")
            length = len(response)
```
The actual test results are shown in the figure below: on a 10% vGPU slice of an A100, the model occupies 4876 MB of GPU memory. Inference is slow, with visibly character-by-character streaming output, and the generated content suffers from severe hallucination.
Deploying with Streamlit is similar:
After setting up port mapping, open the page in a local browser. The result is shown below; the output quality is still unsatisfactory:
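The Streamlit app itself is not listed in this note, but a minimal sketch of what it might look like follows. The file name `app.py`, the port `6006`, and the chat-history handling are my illustrative assumptions, not the tutorial's exact code:

```python
# Minimal Streamlit chat sketch (ASSUMPTION: illustrative, not the tutorial's exact app).
# Launch with: streamlit run app.py --server.port 6006  (port is an assumption)
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"

@st.cache_resource  # load the model once; reuse it across Streamlit reruns
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
        device_map="cuda:0",
    ).eval()
    return tokenizer, model

tokenizer, model = load_model()

if "history" not in st.session_state:
    st.session_state.history = []  # list of (query, response) pairs

# replay the conversation so far
for query, reply in st.session_state.history:
    st.chat_message("user").write(query)
    st.chat_message("assistant").write(reply)

if prompt := st.chat_input("User >>>"):
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        placeholder = st.empty()
        response = ""
        # stream_chat yields the progressively growing response string
        for response, _ in model.stream_chat(tokenizer, prompt, st.session_state.history):
            placeholder.write(response)
    st.session_state.history.append((prompt, response))
```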
3. Task 2: Deploying InternLM-XComposer2-VL-1.8B with LMDeploy
Run the following command; `--cache-max-entry-count 0.1` caps the KV cache at 10% of free GPU memory (the default is 0.8), trading throughput for a smaller memory footprint:

```bash
lmdeploy serve gradio /share/new_models/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-1_8b --cache-max-entry-count 0.1
```
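As an aside, the same model can also be queried without the browser UI through LMDeploy's Python pipeline API. A minimal sketch follows; the sample image URL is the tiger picture used in the LMDeploy docs, and the whole block is an optional alternative to the Gradio route above, not part of the assignment:

```python
# Minimal LMDeploy pipeline sketch for an image-text chat
# (ASSUMPTION: an alternative to `lmdeploy serve gradio`, not the assignment's route).
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline(
    "/share/new_models/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-1_8b",
    # same memory cap as the CLI flag --cache-max-entry-count 0.1
    backend_config=TurbomindEngineConfig(cache_max_entry_count=0.1),
)

# sample image from the LMDeploy documentation
image = load_image("https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg")
response = pipe(("describe this image", image))
print(response.text)
```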
4. Task 3: Deploying InternVL2-2B with LMDeploy
Run the following command:

```bash
lmdeploy serve gradio /share/new_models/OpenGVLab/InternVL2-2B --cache-max-entry-count 0.1
```
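The pipeline route works identically for InternVL2-2B; the sketch below additionally passes explicit sampling parameters. The local image path `/root/demo.jpg` is hypothetical:

```python
# Minimal LMDeploy pipeline sketch for InternVL2-2B
# (ASSUMPTION: optional alternative to the Gradio deployment above).
from lmdeploy import GenerationConfig, TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image

pipe = pipeline(
    "/share/new_models/OpenGVLab/InternVL2-2B",
    backend_config=TurbomindEngineConfig(cache_max_entry_count=0.1),
)

image = load_image("/root/demo.jpg")  # HYPOTHETICAL local image path
response = pipe(
    ("Describe this image in detail.", image),
    gen_config=GenerationConfig(max_new_tokens=512, temperature=0.8),
)
print(response.text)
```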