Environment Setup
We have already prepared a prebuilt environment, xtuner0.1.17, under /root/share/pre_envs.
It can be activated with the following command:
conda activate xtuner0.1.17
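Once the environment is active, a quick sanity check like the one below can confirm that PyTorch, transformers, and a GPU are visible. This is optional, and the exact version numbers printed will depend on the prebuilt environment.
# Optional sanity check for the activated environment (versions will vary).
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())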
Deploying the InternLM2-Chat-1.8B Model with a CLI Demo
First, create a directory to hold our code and create a cli_demo.py inside it:
mkdir -p /root/demo
touch /root/demo/cli_demo.py
Then copy the following code into cli_demo.py:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"

# Load the tokenizer and the model (bfloat16, placed on the first GPU).
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='cuda:0')
model = model.eval()

system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
"""

# Seed the chat history with the system prompt.
messages = [(system_prompt, '')]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("\nUser >>> ")
    # Trim surrounding whitespace so that "exit " still quits.
    input_text = input_text.strip()
    if input_text == "exit":
        break

    # Stream the reply and print only the newly generated part each step.
    length = 0
    for response, _ in model.stream_chat(tokenizer, input_text, messages):
        if response is not None:
            print(response[length:], flush=True, end="")
            length = len(response)
Next, we can start the demo by running python /root/demo/cli_demo.py.
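If you just want to verify that the weights load correctly before using the interactive loop, the model's remote code also provides a non-streaming chat() helper. The snippet below is a minimal single-turn sketch; the question text is only an example.
# Minimal single-turn sketch using the non-streaming chat() helper
# exposed by the model's remote code; the question is just an example.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True,
                                             torch_dtype=torch.bfloat16, device_map='cuda:0').eval()

response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=[])
print(response)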
Deploying the InternLM2-Chat-1.8B Model with a Streamlit Web Demo
Run the following commands to clone this tutorial repository locally, so we can use its code in the following steps:
cd /root/demo
git clone https://github.com/InternLM/Tutorial.git
Then run the following command to start a Streamlit service:
cd /root/demo
streamlit run /root/demo/Tutorial/tools/streamlit_demo.py --server.address 127.0.0.1 --server.port 6006
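Streamlit serves the page on 127.0.0.1:6006. If you want a quick, optional check that the service is up (run it on the same machine, or locally after the kind of port mapping described in the LMDeploy section below), a request like the one below should return HTTP status 200.
# Optional check that the Streamlit service answers on port 6006.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:6006", timeout=5) as resp:
    print("HTTP status:", resp.status)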
Deploying the InternVL2-2B Model with LMDeploy
LMDeploy also supports the InternVL2 series of models, so let's use it to deploy InternVL2-2B.
We can start a Gradio service for the InternVL2-2B model with the commands below:
conda activate demo
lmdeploy serve gradio /share/new_models/OpenGVLab/InternVL2-2B --cache-max-entry-count 0.1
After completing the port mapping, we can open http://localhost:6006 in a browser to reach the demo.
Upload a picture with Upload Image, type an Instruction, and press Enter to see the model's output.
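If you prefer to query the model from a script instead of the Gradio UI, LMDeploy's Python pipeline API can run a single image-text turn. The sketch below assumes the same local model path and cache setting as the command above; the image URL is only a placeholder example.
# Sketch of a single image-text query through LMDeploy's pipeline API.
# The image URL is only a placeholder; substitute any local or remote image.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline('/share/new_models/OpenGVLab/InternVL2-2B',
                backend_config=TurbomindEngineConfig(cache_max_entry_count=0.1))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('Describe this image.', image))
print(response.text)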