Deploying Qwen-7B-Chat on the Ascend 910B for Streaming Output [PyTorch Framework, NPU Inference]

This article walks through an error hit when running the Qwen-7B-Chat model with PyTorch on an Ascend NPU: a missing attribute on the `torch_npu._C._NPUDeviceProperties` object. The fix is to change the supported-dtype flags in modeling_qwen.py and adjust how the model is loaded in cli_demo.py. The result is slow but working streaming output, with some differences compared to the MindSpore framework.

Preparation

Refer to my previous article, Running ChatGLM3-6B on the 910B for Streaming Output [PyTorch Framework], for environment setup.
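
For reference, the NPU enablement from that article boils down to roughly the following (a minimal sketch; transfer_to_npu is torch_npu's compatibility shim that redirects torch.cuda.* calls to the NPU, which is also why the CUDA probes in modeling_qwen.py end up inside torch_npu below):

import torch
import torch_npu  # Ascend PyTorch adapter
# compatibility shim: transparently maps torch.cuda.* / .cuda() calls onto the NPU
from torch_npu.contrib import transfer_to_npu

torch.npu.set_device("npu:0")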

Pitfalls

  • Under the MindSpore framework, following the official docs, the output of both qwen-7b-base and qwen-7b-chat was badly broken:

[Screenshot: qwen-7b-base output]
[Screenshot: qwen-7b-chat output]

  • Adding the same code as I did for ChatGLM3 raises the error 'torch_npu._C._NPUDeviceProperties' object has no attribute 'major':
Traceback (most recent call last):
  File "/home/HwHiAiUser/Qwen/cli_demo.py", line 228, in <module>
    main()
  File "/home/HwHiAiUser/Qwen/cli_demo.py", line 130, in main
    model, tokenizer, config = _load_model_tokenizer(args)
  File "/home/HwHiAiUser/Qwen/cli_demo.py", line 68, in _load_model_tokenizer
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 553, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 500, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module.replace(".py", ""))
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 200, in get_class_in_module
    module = importlib.import_module(module_path)
  File "/home/anaconda3/envs/sakura/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat/modeling_qwen.py", line 38, in <module>
    SUPPORT_BF16 = SUPPORT_CUDA and torch.cuda.is_bf16_supported()
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/cuda/__init__.py", line 154, in is_bf16_supported
    torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8
AttributeError: 'torch_npu._C._NPUDeviceProperties' object has no attribute 'major'

Searching for this error turns up the results below. The traceback itself points at the root cause: modeling_qwen.py probes bf16 support via torch.cuda.is_bf16_supported(), which reads the CUDA-only major device property that torch_npu's _NPUDeviceProperties does not provide.
[Screenshot: search results for the error]
The answers on the Huawei official forum are mostly evasive, or the threads simply go unanswered.
[Screenshot: Huawei forum thread]
The Gitee issue shows the same error message, but it arises from a different code path, so it is of no use as a reference.
[Screenshot: Gitee issue]
Everything I found on GitHub runs inference under the MindSpore framework, so it is of no use either.

Solution

1. modeling_qwen.py

Find lines 38 and 39 and replace them with:

SUPPORT_BF16 = False  # the NPU's aclnn ops reject bfloat16, see the error below
SUPPORT_FP16 = True   # run the model in fp16 instead

I first tried SUPPORT_BF16 = True, which fails with:

Traceback (most recent call last):
  File "/home/HwHiAiUser/Qwen/cli_demo.py", line 228, in <module>
    main()
  File "/home/HwHiAiUser/Qwen/cli_demo.py", line 212, in main
    for response in model.chat_stream(tokenizer, query, history=history, generation_config=config):
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat/modeling_qwen.py", line 1216, in stream_generator
    for token in self.generate_stream(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers_stream_generator/main.py", line 931, in sample_stream
    outputs = self(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat/modeling_qwen.py", line 1045, in forward
    transformer_outputs = self.transformer(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat/modeling_qwen.py", line 824, in forward
    inputs_embeds = self.wte(input_ids)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch/nn/functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: call aclnnEmbedding failed, detail:EZ1001: weight not implemented for DT_BFLOAT16, should be in dtype support list [DT_DOUBLE,DT_FLOAT,DT_FLOAT16,DT_INT64,DT_INT32,DT_INT16,DT_INT8,DT_UINT8,DT_BOOL,DT_COMPLEX128,DT_COMPLEX64,].
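
Instead of hardcoding the two flags, a slightly more defensive version of the same patch (my own sketch, not taken from any of the sources above) guards the capability probe, so the file also keeps working on CUDA machines:

# modeling_qwen.py, around the original line 38 (sketch)
try:
    SUPPORT_BF16 = SUPPORT_CUDA and torch.cuda.is_bf16_supported()
except AttributeError:
    # torch_npu's device properties lack the CUDA-only 'major' attribute,
    # and the 910B's aclnn ops reject bfloat16 anyway
    SUPPORT_BF16 = False
SUPPORT_FP16 = True  # fp16 is the dtype that actually runs on the 910B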

2. cli_demo.py

Comment out the resume_download argument in the model-loading call:

model = AutoModelForCausalLM.from_pretrained(
    args.checkpoint_path,
    device_map=device_map,
    trust_remote_code=True,
    # resume_download=True,
).npu().eval()  # move the model to the NPU and switch to inference mode
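
With both changes in place, streaming can be smoke-tested outside the full CLI loop. A minimal sketch, assuming model and tokenizer are loaded as above; as far as I can tell, chat_stream yields the cumulative partial response on each iteration, so only the new suffix is printed:

query = "你好,请介绍一下你自己"
history = []
printed = 0
for response in model.chat_stream(tokenizer, query, history=history):
    # each yield is the full response so far; print only the newly added tail
    print(response[printed:], end="", flush=True)
    printed = len(response)
print()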

Results

[Screenshot: Qwen-7B-Chat streaming output on the NPU after the fixes]

  • Generation is slow, about half the speed of MindSpore, but at least the model now streams normal output, even though the answer itself is still wrong.
  • Amusingly, when asked to recite the Chu Shi Biao (出师表), ChatGLM3 under MindSpore recites only the first half, Qwen here recites only the second half, while ChatGLM3 under the Torch framework can recite the whole text.