This article describes how to create a venv with the virtualenv library and then use it to deploy and run GLM-4-9B-Chat, an open-source bilingual chat language model.
1. Online Demo
The code from this article has been deployed to the Baidu PaddlePaddle AI Studio platform so you can try it online.
Project link: GLM-4 Online Demo
Note: GLM-4-9B-Chat (fp16) uses more than 18 GB of GPU memory.
2. Environment Setup
Python version: 3.10.10
The virtualenv setup commands are as follows:
git clone https://github.com/THUDM/GLM-4.git
cd GLM-4
pip install -U virtualenv
python -m virtualenv venv
source venv/bin/activate
pip install --upgrade pip
pip install torch torchvision transformers huggingface-hub sentencepiece pydantic timm tiktoken accelerate sentence_transformers peft gradio
In practice, installing only torch, torchvision, transformers, huggingface-hub, sentencepiece, pydantic, timm, tiktoken, accelerate, sentence_transformers, peft, and gradio is enough to run the Gradio web UI for chatting.
If you have other needs, install additional dependencies based on the requirements.txt file.
Contents of requirements.txt:
# use vllm
# vllm>=0.4.3
torch>=2.3.0
torchvision>=0.18.0
transformers==4.40.0
huggingface-hub>=0.23.1
sentencepiece>=0.2.0
pydantic>=2.7.1
timm>=0.9.16
tiktoken>=0.7.0
accelerate>=0.30.1
sentence_transformers>=2.7.0
# web demo
gradio>=4.33.0
# openai demo
openai>=1.31.1
einops>=0.7.0
sse-starlette>=2.1.0
# INT4
bitsandbytes>=0.43.1
# PEFT model, not need if you don't use PEFT finetune model.
# peft>=0.11.0
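To sanity-check which of these libraries are actually present in the active venv, a small standard-library-only sketch (the package list below is illustrative, not exhaustive):

```python
# Check installed dependency versions using only the standard library.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string of a package, or None if missing."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ["torch", "transformers", "gradio"]:
    v = installed_version(pkg)
    print(f"{pkg}: {v or 'NOT INSTALLED'}")
```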
3. Model Download
Original link: https://huggingface.co/THUDM/glm-4-9b-chat
Mirror link: https://hf-mirror.com/THUDM/glm-4-9b-chat
Here wget is used to download the model files (23 files in total, about 18 GB).
Download commands:
mkdir glm-4-9b-chat
cd glm-4-9b-chat
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/.gitattributes
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/LICENSE
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/README.md
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/README_en.md
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/config.json
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/configuration.json
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/configuration_chatglm.py
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/generation_config.json
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00001-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00002-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00003-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00004-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00005-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00006-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00007-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00008-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00009-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model-00010-of-00010.safetensors
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/model.safetensors.index.json
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/modeling_chatglm.py
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/tokenization_chatglm.py
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/tokenizer.model
wget https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main/tokenizer_config.json
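The ten repeated wget lines for the weight shards can also be generated with a loop; a minimal sketch, assuming the mirror URL layout shown above:

```shell
# Generate the shard URLs instead of writing ten wget lines by hand.
BASE=https://hf-mirror.com/THUDM/glm-4-9b-chat/resolve/main
for i in $(seq -f "%05g" 1 10); do
  echo "$BASE/model-$i-of-00010.safetensors"
done > shard_urls.txt
# Then download with resume support:
#   wget -c -i shard_urls.txt
wc -l shard_urls.txt
```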
Model file list (from du -ah):
1.9G ./model-00006-of-00010.safetensors
2.5K ./configuration_chatglm.py
1.8G ./model-00007-of-00010.safetensors
512 ./generation_config.json
1.8G ./model-00004-of-00010.safetensors
1.9G ./model-00003-of-00010.safetensors
1.5K ./config.json
1.9G ./model-00001-of-00010.safetensors
1.6G ./model-00010-of-00010.safetensors
52K ./modeling_chatglm.py
6.5K ./LICENSE
1.7G ./model-00002-of-00010.safetensors
1.5K ./.gitattributes
8.0K ./README_en.md
2.6M ./tokenizer.model
29K ./model.safetensors.index.json
7.5K ./README.md
16K ./tokenization_chatglm.py
3.5K ./tokenizer_config.json
1.7G ./model-00008-of-00010.safetensors
1.7G ./model-00005-of-00010.safetensors
1.9G ./model-00009-of-00010.safetensors
512 ./configuration.json
18G .
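After downloading, it is worth verifying that no weight shard is missing. A minimal sketch using only the standard library: it reads the weight_map in model.safetensors.index.json (one of the files downloaded above) and reports any shard files absent from the directory:

```python
# Verify that every weight shard referenced by the index file exists on disk.
import json
import os

def missing_shards(model_dir: str):
    """Return the sorted list of shard filenames referenced by
    model.safetensors.index.json but not present in model_dir."""
    with open(os.path.join(model_dir, "model.safetensors.index.json")) as f:
        index = json.load(f)
    shards = set(index["weight_map"].values())  # e.g. model-00001-of-00010.safetensors
    return sorted(s for s in shards
                  if not os.path.exists(os.path.join(model_dir, s)))
```

Usage: `missing_shards("glm-4-9b-chat")` should return an empty list when the download is complete.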
4. Run
Running GLM-4 is straightforward: activate the virtual environment and run basic_demo/trans_web_demo.py (you can load the model from a custom path by setting the MODEL_PATH environment variable).
cd GLM-4
source venv/bin/activate
# export MODEL_PATH=/path/to/your/model
python basic_demo/trans_web_demo.py
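The MODEL_PATH mechanism can be sketched as follows (an assumption about how scripts like trans_web_demo.py typically resolve the model location, not the demo's exact code):

```python
# Resolve the model path: prefer the MODEL_PATH environment variable,
# otherwise fall back to the Hugging Face Hub repo id.
import os

def resolve_model_path(default: str = "THUDM/glm-4-9b-chat") -> str:
    return os.environ.get("MODEL_PATH", default)

print(resolve_model_path())
```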