Resolving the error: TypeError: GenerationMixin got an unexpected keyword argument 'standardize_cache_format'

1. Problem description

While running inference with glm4-9b-chat, I hit the following error:

Traceback (most recent call last):
  File "/data/BMMLU/src/instruct/glm4.py", line 41, in <module>
    main(engine_name=os.path.basename(args.model_path), mode=args.mode, args=args)
  File "/data/BMMLU/src/instruct/glm4.py", line 30, in main
    initialization(engine_name, mode, inference, args)
  File "/data/BMMLU/src/instruct/mp_utils.py", line 261, in initialization
    instruct(question_file_path, None, output_file_path, with_E, mode, inference, args)
  File "/data/BMMLU/src/instruct/mp_utils.py", line 307, in instruct
    response_content = inference(content, args)
  File "/data/BMMLU/src/instruct/glm4.py", line 20, in inference
    outputs = model.generate(**inputs, **gen_kwargs)
  File "/data/zwh_llm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/zwh_llm/lib/python3.10/site-packages/transformers/generation/utils.py", line 2015, in generate
    result = self._sample(
  File "/data/zwh_llm/lib/python3.10/site-packages/transformers/generation/utils.py", line 3010, in _sample
    model_kwargs = self._update_model_kwargs_for_generation(
  File "/home/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 930, in _update_model_kwargs_for_generation
    cache_name, cache = self._extract_past_from_model_output(
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
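The traceback shows the shape of the conflict: the model repo's custom `modeling_chatglm.py` (loaded via `trust_remote_code`) still passes `standardize_cache_format=` to `_extract_past_from_model_output`, while newer transformers releases no longer accept that keyword. The sketch below reproduces just the failure mode with simplified stand-in classes (none of these names are the real library code):

```python
# Minimal reproduction of the failure mode, with simplified stand-ins:
# newer transformers dropped the `standardize_cache_format` parameter,
# but the model repo's downloaded custom code still passes it.

class NewGenerationMixin:
    # Sketch of the updated method: the keyword parameter no longer exists.
    def _extract_past_from_model_output(self, outputs):
        return "past_key_values", outputs.get("past_key_values")

class OldChatGLMCode(NewGenerationMixin):
    def _update_model_kwargs_for_generation(self, outputs):
        # The cached modeling_chatglm.py still uses the removed keyword.
        return self._extract_past_from_model_output(
            outputs, standardize_cache_format=False
        )

try:
    OldChatGLMCode()._update_model_kwargs_for_generation({"past_key_values": None})
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'standardize_cache_format'
```

Because the offending call lives in the model's own cached module rather than in transformers, pinning the library version (below) is the practical fix.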

2. Solution

Recent transformers releases conflict with the GLM-4 inference code; downgrading to version 4.43.0 resolves the issue:

pip install transformers==4.43.0
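If you want to catch the mismatch before loading the model, a small version check works. The helpers below (`parse_version` and `compatible_with_glm4` are my own, not part of transformers) compare dotted version strings numerically, assuming versions at or below 4.43.0 work:

```python
# Helper (my own, not part of transformers) to compare dotted version
# strings, e.g. against transformers.__version__ at runtime.

def parse_version(v: str) -> tuple:
    """Turn '4.43.0' into (4, 43, 0) so versions compare numerically."""
    parts = []
    for piece in v.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def compatible_with_glm4(version_string: str) -> bool:
    """Assumption: releases at or below 4.43.0 work with GLM-4's custom code."""
    return parse_version(version_string) <= parse_version("4.43.0")
```

At script startup you could then do `assert compatible_with_glm4(transformers.__version__)` to fail fast with a clear message instead of a deep traceback mid-generation.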

Reference: https://github.com/THUDM/GLM-4/issues/439
