InternLM Practical Camp (Season 3) assignment check-in: running InternLM demos on 8 GB of VRAM

Docs: https://github.com/InternLM/Tutorial/blob/camp3/docs/L1/Demo/readme.md

Video: https://www.bilibili.com/video/BV18x4y147SU/

Assignment: https://github.com/InternLM/Tutorial/blob/camp3/docs/L1/Demo/task.md

Log in to the dev machine and create the demo environment.

# Create the environment
conda create -n demo python=3.10 -y
# Activate it
conda activate demo
# Install torch
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install the other dependencies
pip install transformers==4.38
pip install sentencepiece==0.1.99
pip install einops==0.8.0
pip install protobuf==5.27.2
pip install accelerate==0.33.0
pip install streamlit==1.37.0

Installing the environment takes quite a while and calls for some patience. Eventually it finishes.
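Once it finishes, a quick sanity check (my own habit, not a tutorial step) confirms that this torch build can see the GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# expected: 2.1.2 True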

CLI Demo: Deploying the InternLM2-Chat-1.8B Model

mkdir -p /root/8G-demo
touch /root/8G-demo/cli_demo.py

This creates the folder 8G-demo under /root and an empty cli_demo.py inside it; paste the following code into cli_demo.py:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"

# trust_remote_code=True is required because InternLM2 ships its own modeling code
# (the tokenizer simply ignores the device_map argument)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True, device_map='cuda:0')
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='cuda:0')
model = model.eval()

system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
"""

# The history is a list of (query, response) tuples; the system prompt goes in as the first "query"
messages = [(system_prompt, '')]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("\nUser  >>> ")
    input_text = input_text.replace(' ', '')  # strips all spaces (harmless for Chinese, mangles English input)
    if input_text == "exit":
        break

    length = 0
    # stream_chat yields the full response decoded so far; print only the new suffix each time
    for response, _ in model.stream_chat(tokenizer, input_text, messages):
        if response is not None:
            print(response[length:], flush=True, end="")
            length = len(response)
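One caveat with this loop: it never appends the finished exchange back into messages, so every turn sees only the system prompt rather than the earlier conversation. stream_chat already yields the updated history (the discarded _ above), so a minimal multi-turn variant (my own tweak, not from the tutorial) would be:

    length = 0
    for response, history in model.stream_chat(tokenizer, input_text, messages):
        if response is not None:
            print(response[length:], flush=True, end="")
            length = len(response)
    messages = history  # carry this turn into the next one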

 

Save the file cli_demo.py.

 

Run cli_demo.py from the command line.
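With the demo environment active, the invocation is simply:

conda activate demo
python /root/8G-demo/cli_demo.py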

 

Streamlit Web Demo: Deploying the InternLM2-Chat-1.8B Model

Clone the Tutorial code repository onto the dev machine so the later steps can run its scripts:

cd /root/8G-demo
git clone https://github.com/InternLM/Tutorial.git

 

Start the Streamlit service:

streamlit run /root/8G-demo/Tutorial/tools/streamlit_demo.py --server.address 127.0.0.1 --server.port 6006

 

  You can now view your Streamlit app in your browser.

  URL: http://127.0.0.1:6006

This output indicates the service is up.

 

Look up my dev machine's SSH port number.

In a local PowerShell, run the following command to forward the dev machine's port 6006 to local port 6006:

ssh -CNg -L 6006:127.0.0.1:6006 root@ssh.intern-ai.org.cn -p 40830

Because a public key is set up between the local machine and this dev machine, the connection needs no password.

 

Open http://127.0.0.1:6006/ in a local browser, wait a moment, and the demo appears.

LMDeploy: Deploying the InternLM-XComposer2-VL-1.8B Model

 

Activate the demo environment and install LMDeploy and the other dependencies.
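The exact pins below are my recollection of the camp3 tutorial, so treat the version numbers as assumptions and double-check against the docs:

conda activate demo
pip install lmdeploy[all]==0.5.1
pip install timm==1.0.7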

 

Use LMDeploy to launch a Gradio service for chatting with the InternLM-XComposer2-VL-1.8B model; --cache-max-entry-count 0.1 caps the KV cache at roughly 10% of GPU memory, which matters on an 8 GB card:

lmdeploy serve gradio /share/new_models/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-1_8b --cache-max-entry-count 0.1

The dev machine's current configuration:

The launch fails with: RuntimeError: [TM][ERROR] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/memory_utils.cu:32

Fixing the out-of-memory runtime error:

Restart the dev machine; this releases the GPU memory.
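A restart is the blunt fix; a lighter alternative (plain CUDA tooling, not a tutorial step) is to find and kill whatever process is holding the memory:

nvidia-smi     # lists the processes currently holding GPU memory
kill -9 <PID>  # <PID> taken from the nvidia-smi output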

Then activate the demo environment again; lmdeploy and timm were already installed, so even after a restart they do not need reinstalling.

Use LMDeploy to launch the Gradio service for InternLM-XComposer2-VL-1.8B again:

conda activate demo

lmdeploy serve gradio /share/new_models/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-1_8b --cache-max-entry-count 0.1

Another error appears:

Could not create share link. Missing file: /root/.conda/envs/demo/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2.

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64

2. Rename the downloaded file to: frpc_linux_amd64_v0.2

3. Move the file to this location: /root/.conda/envs/demo/lib/python3.10/site-packages/gradio

Following the hint, download the frpc_linux_amd64 file from Hugging Face, rename it to frpc_linux_amd64_v0.2, and upload it to the dev machine.

In a terminal, copy frpc_linux_amd64_v0.2 into the /root/.conda/envs/demo/lib/python3.10/site-packages/gradio directory:

cp /root/8G-demo/frpc_linux_amd64_v0.2 /root/.conda/envs/demo/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2
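If the share link still fails after the copy, the binary may also need execute permission; newer Gradio versions include this in their instructions, so I assume it applies here as well:

chmod +x /root/.conda/envs/demo/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2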

Restart the dev machine, then use LMDeploy to launch the Gradio service for InternLM-XComposer2-VL-1.8B once more.

Enter the following command in the local PowerShell to set up the port forwarding:

ssh -CNg -L 6006:0.0.0.0:6006 root@ssh.intern-ai.org.cn -p 40830  

(replace 40830 with your own SSH port number)

Open http://0.0.0.0:6006/ in a local browser.

Upload an image:

Image-and-text conversations now work.

LMDeploy: Deploying the InternVL2-2B Model

Close the terminal running on the dev machine to release the GPU memory.

Launch the Gradio service for the InternVL2-2B model:

conda activate demo

lmdeploy serve gradio /share/new_models/OpenGVLab/InternVL2-2B --cache-max-entry-count 0.1

The port forwarding from earlier is still in place, so open http://0.0.0.0:6006/ in a local browser.

Upload an image:
