ChatGLM error: Library cudart is not initialized

  File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/quantization.py", line 157, in quantize
    layer.attention.query_key_value = QuantizedLinear(
  File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/quantization.py", line 137, in __init__
    self.weight = compress_int4_weight(self.weight)
  File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/quantization.py", line 78, in compress_int4_weight
    kernels.int4WeightCompression(
  File "/root/miniconda3/envs/glm/lib/python3.10/site-packages/cpm_kernels/kernels/base.py", line 48, in __call__
    func = self._prepare_func()
  File "/root/miniconda3/envs/glm/lib/python3.10/site-packages/cpm_kernels/kernels/base.py", line 36, in _prepare_func
    curr_device = cudart.cudaGetDevice()
  File "/root/miniconda3/envs/glm/lib/python3.10/site-packages/cpm_kernels/library/base.py", line 72, in wrapper
    raise RuntimeError("Library %s is not initialized" % self.__name)
RuntimeError: Library cudart is not initialized


The traceback above is raised when loading the quantized model.

The likely cause is that the CUDA toolkit (which provides cudart) is not installed (Ubuntu 22.04).
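Before reinstalling anything, you can check whether the dynamic loader can locate the CUDA runtime library at all, since cpm_kernels initializes cudart by loading it dynamically. A minimal standalone probe (not part of ChatGLM or cpm_kernels); a `None` result is a strong hint that the toolkit libraries are missing or not on a standard search path:

```python
import ctypes.util

def find_cudart():
    """Return the libcudart library name if the loader can see it, else None."""
    return ctypes.util.find_library("cudart")

lib = find_cudart()
if lib is None:
    print("libcudart not found - install the CUDA toolkit or fix LD_LIBRARY_PATH")
else:
    print("found CUDA runtime:", lib)
```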


    # Ubuntu 22.04
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
    sudo dpkg -i cuda-keyring_1.0-1_all.deb
    sudo apt-get update
    sudo apt-get -y install cuda
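After the install finishes, the `cuda` metapackage should have left a `/usr/local/cuda` symlink with `nvcc` inside. A quick sanity check (`check_cuda_dir` is a throwaway helper for this post, not an NVIDIA tool):

```shell
# Report whether a CUDA install root looks complete (nvcc present and executable).
check_cuda_dir() {
    if [ -x "$1/bin/nvcc" ]; then
        echo present
    else
        echo missing
    fi
}

check_cuda_dir /usr/local/cuda
```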

Environment configuration

Open `~/.bashrc`:

    sudo gedit ~/.bashrc

Append the following at the end of the file:

    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
    export CUDA_HOME=/usr/local/cuda
    export PATH=$PATH:$CUDA_HOME/bin

Verify that CUDA installed successfully: close the current shell, then run

    source ~/.bashrc
    nvcc -V

`nvcc` reports the installed CUDA version as 11.4, so the installation succeeded.
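As a final check from Python, you can confirm the deep-learning stack actually sees the GPU. This sketch assumes PyTorch is installed in the same `glm` conda env and degrades gracefully if it is not:

```python
import importlib.util

def cuda_available():
    """True if torch is importable and reports a usable CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

print("CUDA available:", cuda_available())
```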

Verify that CUDA works correctly by building and running one of the bundled samples:

    cd ~/NVIDIA_CUDA-11.4_Samples/1_Utilities/bandwidthTest/
    make
    ./bandwidthTest
