While deploying GLM-4-9B-Chat WebDemo by following the deployment document in the self-llm project (the open-source LLM cookbook), the following error occurs:

ValueError: too many values to unpack (expected 2)

Traceback:
File "/root/miniconda3/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
File "/root/autodl-tmp/ChatBot.py", line 51, in <module>
    generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
    result = self._sample(
File "/root/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 2651, in _sample
    outputs = self(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 1005, in forward
    transformer_outputs = self.transformer(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 901, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 726, in forward
    layer_ret = layer(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 629, in forward
    attention_output, kv_cache = self.self_attention(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 494, in forward
    cache_k, cache_v = kv_cache

After investigation, the error is caused by a bug in the latest transformers package: judging from the final frame of the traceback, its generate() path now hands the per-layer KV cache to the model in a new format, while the custom modeling_chatglm.py bundled with glm-4-9b-chat still unpacks it as a legacy (cache_k, cache_v) tuple, hence the "too many values to unpack (expected 2)". Reinstalling an older transformers release and restarting the demo resolves the issue:

pip install transformers==4.40.2
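
Before relaunching Streamlit, it is worth confirming that the pinned version is the one actually being imported; a minimal sanity check:

import transformers
# should print 4.40.2 after the downgrade
print(transformers.__version__)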

Notes:

1. The import statement on the first line of the model download script is written incorrectly in the doc and needs to be fixed manually. The corrected code is as follows:

import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer
import os
model_dir = snapshot_download('ZhipuAI/glm-4-9b-chat', cache_dir='/root/autodl-tmp', revision='master')
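
For reference, with cache_dir='/root/autodl-tmp' ModelScope stores the snapshot under the model ID, so model_dir should resolve to the absolute path used in note 2 below; a quick way to confirm (assuming the default ModelScope layout):

# snapshot_download returns the local directory of the downloaded model
print(model_dir)  # expected: /root/autodl-tmp/ZhipuAI/glm-4-9b-chat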

2. Mind the model path: set it to the absolute path of the downloaded snapshot.

mode_name_or_path = '/root/autodl-tmp/ZhipuAI/glm-4-9b-chat'
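
This path is what the WebDemo script passes to the tokenizer/model loaders. A minimal loading sketch consistent with the traceback above (the exact arguments in ChatBot.py may differ; trust_remote_code=True is required because glm-4-9b-chat ships its own modeling_chatglm.py):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

mode_name_or_path = '/root/autodl-tmp/ZhipuAI/glm-4-9b-chat'

# trust_remote_code=True loads the custom modeling code from the snapshot
tokenizer = AutoTokenizer.from_pretrained(mode_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    mode_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map='auto',
    trust_remote_code=True,
).eval()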

The complete deployment document is available at:

https://github.com/datawhalechina/self-llm/blob/master/GLM-4/03-GLM-4-9B-Chat%20WebDemo.md
