Running inference with the THUDM/chatglm3-6b model raises: ValueError: too many values to unpack (expected 2)

The full traceback:

```
Traceback (most recent call last):
  File "/data/lxj/LLM_chatGLM/huggingface/basemodel.py", line 11, in <module>
    response = model.chat(tokenizer, "你好", history=[])
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 1042, in chat
    outputs = self.generate(**inputs, **gen_kwargs, eos_token_id=eos_token_id)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/transformers/generation/utils.py", line 1989, in generate
    result = self._sample(
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/transformers/generation/utils.py", line 2932, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 941, in forward
    transformer_outputs = self.transformer(
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 834, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 641, in forward
    layer_ret = layer(
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 544, in forward
    attention_output, kv_cache = self.self_attention(
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdust_nlp0/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 413, in forward
    cache_k, cache_v = kv_cache
ValueError: too many values to unpack (expected 2)
```
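For reference, the `basemodel.py` named at the top of the traceback corresponds to a standard ChatGLM3 quick-start script along the following lines (a sketch: the model id, dtype and device placement are assumptions; only the `model.chat(...)` call is taken from the traceback):

```python
from transformers import AutoModel, AutoTokenizer

# Assumed model id; the traceback only shows a locally cached chatglm3-6b copy.
MODEL_PATH = "THUDM/chatglm3-6b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
# half()/cuda() are assumptions about the original setup, not taken from the traceback.
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).half().cuda()
model = model.eval()

# This is the call that fails inside the model's custom chat()/generate() path.
response = model.chat(tokenizer, "你好", history=[])
print(response)
```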

Solution:

Downgrade transformers to 4.40.2:

```bash
pip uninstall transformers
pip install transformers==4.40.2
```

Tested, and it works.
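To confirm the environment actually picked up the downgraded package before re-running the chat script, a quick version check (a minimal sketch) can be used:

```python
import transformers

# After the downgrade this should report 4.40.2; if it still shows a newer
# version, the pip commands were run against a different environment.
print("transformers version:", transformers.__version__)
```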

Reference: https://huggingface.co/THUDM/glm-4-9b/discussions/9

### Resolving `ValueError: too many values to unpack (expected 2)` in Python GLM

#### Cause analysis

`ValueError: too many values to unpack (expected 2)` means the object being unpacked contains more elements than the assignment expects, in this case two. It typically occurs when a function's return value, or an item produced during iteration, has a length that does not match the destructuring expression.

For the GLM model specifically, the error is usually caused by a compatibility problem with the installed library versions: an updated library may change its internal implementation, breaking code that depends on the old layout. In this traceback, the model's custom `modeling_chatglm.py` expects each layer's `kv_cache` to be a plain `(key, value)` pair, so any change in how `transformers` passes the key/value cache makes the `cache_k, cache_v = kv_cache` unpack fail.

#### Option 1: reinstall the latest transformers

If the currently installed version is the defective one, uninstall it and install the latest stable release:

```bash
pip uninstall transformers
pip install --upgrade transformers
```

Rerun the program afterwards to check whether the exception is gone.

#### Option 2: downgrade to a known-good version

Newly released versions sometimes ship with regressions that have not yet been fixed; in that case, fall back to an older version that community feedback reports as working. For example, downgrading `transformers` to 4.40.2 has been reported to resolve this exact error:

```bash
pip uninstall transformers
pip install transformers==4.40.2
```

Run the application again after these commands to verify that the problem is resolved.

#### Debugging suggestions

To pinpoint exactly which piece of code triggers the error, set a breakpoint or add print statements near the line that raises the exception, and inspect the actual value and structure of the variable being unpacked to see whether unexpected data reaches the destructuring expression. A debugger such as PyCharm's or pdb can also be used to step through the execution until the precise cause is found.
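As a concrete illustration of the cause analysis and the debugging suggestion above, the following self-contained sketch (the values are placeholders, not the model's real tensors) reproduces the unpack failure and shows the kind of inspection print one could temporarily add next to the failing line in `modeling_chatglm.py`:

```python
def inspect_kv_cache(kv_cache):
    """Print the structure of a kv_cache-like object, mimicking a temporary
    print statement placed just above the failing unpack."""
    length = len(kv_cache) if hasattr(kv_cache, "__len__") else "n/a"
    print(f"type={type(kv_cache).__name__} len={length}")

# What the ChatGLM3 code expects: a plain (key, value) pair.
expected = ("key_tensor", "value_tensor")
inspect_kv_cache(expected)
cache_k, cache_v = expected          # unpacks cleanly

# What an incompatible library version might hand over: a different layout.
unexpected = ("key_tensor", "value_tensor", "extra_entry")
inspect_kv_cache(unexpected)
try:
    cache_k, cache_v = unexpected    # reproduces the reported ValueError
except ValueError as exc:
    print(exc)                       # too many values to unpack (expected 2)
```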