Updated January 14, 2025
The steps below are now outdated; the code can be run directly without importing torch-npu.
My current environment, for reference:
Server: Atlas 800T A2 inference server with 8× 910B1 NPUs.
If you need the commercial edition of MindIE, feel free to message me.
Ascend-hdk-910b-npu-driver_24.1.rc3_linux-aarch64.run
Ascend-hdk-910b-npu-firmware_7.5.0.1.129.run
Ascend-cann-toolkit_8.0.RC3.alpha003_linux-aarch64.run
Ascend-cann-kernels-910b_8.0.RC3.alpha003_linux-aarch64.run
mindspore-2.4.1
torch_npu-2.4.0
transformers-4.47.1
tokenizers-0.19.1
mindnlp-0.4.0
Preparation
- Companion packages: Ascend-cann-toolkit and Ascend-cann-nnae
- Ascend-adapted PyTorch (torch_npu)
- Ascend-adapted Torchvision Adapter
- Download the ChatGLM3 code
- Download the chatglm3-6b model weights (also available on ModelScope)
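The two download steps above can be sketched as follows. The repository URL and the ModelScope model id `ZhipuAI/chatglm3-6b` are assumptions based on the official sources, not given in this post:

```shell
# Clone the ChatGLM3 demo code (official THUDM repo assumed)
git clone https://github.com/THUDM/ChatGLM3

# Fetch the chatglm3-6b weights via ModelScope (model id assumed)
pip install modelscope
python3 -c "from modelscope import snapshot_download; print(snapshot_download('ZhipuAI/chatglm3-6b'))"
```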
My environment
pip install transformers==4.39.2
pip3 install torch==2.1.0
pip3 install torch-npu==2.1.0.post4
pip3 install accelerate==0.24.1
pip3 install transformers-stream-generator==0.0.5
Driver: Ascend-hdk-910-npu-driver_24.1.rc1_linux-aarch64.run
Firmware: Ascend-hdk-910-npu-firmware_7.1.0.6.220.run
Toolkit: Ascend-cann-toolkit_8.0.RC1_linux-aarch64.run
Kernels: Ascend-cann-kernels-910_8.0.RC1_linux.run
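For reference, the packages above are typically installed in this order. This is a sketch using the usual Ascend `.run` installer flags; the exact flags and the `set_env.sh` path may differ on your system:

```shell
# Run as root; order matters: driver, firmware, then CANN toolkit and kernels
chmod +x ./*.run
./Ascend-hdk-910-npu-driver_24.1.rc1_linux-aarch64.run --full
./Ascend-hdk-910-npu-firmware_7.1.0.6.220.run --full
./Ascend-cann-toolkit_8.0.RC1_linux-aarch64.run --install
./Ascend-cann-kernels-910_8.0.RC1_linux.run --install

# Load the CANN environment variables before running anything
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```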
Pitfalls
- Every server is different: in the ChatGLM3 issues, others only needed to change the specified driver, but that was not enough on my machine.
- Delete model.safetensors.index.json from the model directory; otherwise, loading the model will try to load the safetensors files instead of the .bin files, which fails as shown below.
/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it, if you need to enable torch.jit.script, please do not use transfer_to_npu.
warnings.warn(msg, RuntimeWarning)
Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/HwHiAiUser/work/ChatGLM3/basic_demo/cli_demo.py", line 22, in <module>
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).npu().eval()
File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3187, in from_pretrained
) = cls._load_pretrained_model(
File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3560, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 467, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
FileNotFoundError: No such file or directory: "/home/HwHiAiUser/models/chatglm3-6b/model-00001-of-00007.safetensors"
/home/anaconda3/envs/sakura/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp1ygjyx3i'>
_warnings.warn(warn_message, ResourceWarning)
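The error above occurs because the index file points at safetensors shards that are not present, so transformers never falls back to the .bin shards. The fix from the pitfalls list can be scripted as a small helper (`prefer_bin_weights` is a hypothetical name; the index filename is the standard transformers shard-index layout):

```python
import os

def prefer_bin_weights(model_dir):
    """Remove model.safetensors.index.json so that transformers falls back
    to the pytorch_model*.bin shards when loading. Returns True if the
    index file existed and was removed."""
    index_path = os.path.join(model_dir, "model.safetensors.index.json")
    if os.path.exists(index_path):
        os.remove(index_path)
        return True
    return False
```

For example, `prefer_bin_weights("/home/HwHiAiUser/models/chatglm3-6b")` before calling `from_pretrained` would avoid the FileNotFoundError shown above.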
Adding code
Open ChatGLM3/basic_demo/cli_demo.py.
Add the following at the top of the file:
import os
import platform
import torch
import torch_npu
from torch_npu.contrib import transfer_to_npu  # redirects CUDA calls to the NPU

torch_device = "npu:3"  # pick a card index, 0~7
torch.npu.set_device(torch.device(torch_device))
torch.npu.set_compile_mode(jit_compile=False)  # disable operator JIT compilation

# Exclude the Tril operator from fuzzy compilation
option = {}
option["NPU_FUZZY_COMPILE_BLACKLIST"] = "Tril"
torch.npu.set_option(option)
print("torch && torch_npu import successfully")
Change the model-loading line to:
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).npu().eval()