Problem description
Environment:
linux
transformers 4.41.2
tokenizers 0.19.1
torch 2.3.0
vllm 0.4.3
When running xverse/XVERSE-13B-256K with vLLM (code below):
from vllm import LLM, SamplingParams

llm = LLM(
    model=args.pretrain,     # local path or hub id for xverse/XVERSE-13B-256K
    trust_remote_code=True,  # the XVERSE repo ships custom model/tokenizer code
    seed=args.seed,
)
The following error was raised:
File "/cfs/xxx/xxx.py", line 155, in <module>
vllm_main(args)
File "/cfs/xxx/xxx.py", line 84, in vllm_main
llm = LLM(
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 144, in __init__
self.llm_engine = LLMEngine.from_engine_args(
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 359, in from_engine_args
engine = cls(
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 212, in __init__
self.tokenizer = self._init_tokenizer()
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 408, in _init_tokenizer
return get_tokenizer_group(self.parallel_config.tokenizer_pool_config,
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 20, in get_tokenizer_group
return TokenizerGroup(**init_kwargs)
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in __init__
self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/vllm/transformers_utils/tokenizer.py", line 92, in get_tokenizer
tokenizer = AutoTokenizer.from_pretrained(
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2110, in from_pretrained
return cls._from_pretrained(
File "/data/miniconda3/envs/xxx/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2336, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/data/miniconda3/envs/openrlhf/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 78 column 3
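Note that the traceback bottoms out in tokenizers' TokenizerFast.from_file, so vLLM is only relaying the failure. As an isolation step, the tokenizer can be loaded directly through transformers with no vLLM involved; the sketch below assumes the model files are reachable under the same hub id or local path, and under the versions above it should raise the same exception:

from transformers import AutoTokenizer

# Load only the tokenizer, bypassing vLLM entirely. Under transformers
# 4.41.2 / tokenizers 0.19.1 this is expected to fail with the same
# "untagged enum PyPreTokenizerTypeWrapper" error for this model.
tokenizer = AutoTokenizer.from_pretrained(
    "xverse/XVERSE-13B-256K",
    trust_remote_code=True,
)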
Troubleshooting and resolution
Swapping xverse/XVERSE-13B-256K for other models worked fine, which suggested an incompatibility between XVERSE-13B-256K and certain library versions. The "data did not match any variant of untagged enum PyPreTokenizerTypeWrapper" message is raised by the Rust tokenizers library when it cannot deserialize an entry (here, the pre_tokenizer section of the model's tokenizer.json, at line 78 of that file). This typically means the file was serialized by a tokenizers version whose format the installed one does not understand. Since the failure clearly sits in the tokenizer stack, downgrade transformers and tokenizers (transformers 4.38.x itself pins tokenizers below 0.19, so the two downgrades go together):
pip install transformers==4.38.2
pip install tokenizers==0.15.2
That fixed it!
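As a quick sanity check after the downgrade (a sketch; swap the hub id for a local path if the weights are stored locally):

import tokenizers
import transformers
from transformers import AutoTokenizer

# Confirm the pinned versions are the ones actually imported.
print("transformers:", transformers.__version__)  # expect 4.38.2
print("tokenizers:", tokenizers.__version__)      # expect 0.15.2

# With tokenizers 0.15.2, the model's tokenizer.json should deserialize cleanly.
tok = AutoTokenizer.from_pretrained("xverse/XVERSE-13B-256K", trust_remote_code=True)
print(tok.encode("hello"))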