Solution:
input_ids = tokenizer.encode(text, truncation=True, padding='max_length', max_length=1000)
Pass the `truncation`, `padding`, and `max_length` arguments to the tokenizer so every encoded sequence has the same length. `max_length=1000` is a user-chosen value; set it to whatever fits your model's maximum input size.
Reference: https://discuss.huggingface.co/t/error-when-fine-tuning-pretrained-masked-language-model/5386/6
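To illustrate what these arguments do, here is a minimal pure-Python sketch of the truncate-then-pad behavior that `truncation=True, padding='max_length'` produces (this is an illustration of the logic, not the actual Hugging Face implementation; the function name and `pad_id=0` default are assumptions for the example):

```python
def pad_or_truncate(token_ids, max_length, pad_id=0):
    """Mimic truncation=True, padding='max_length':
    cut sequences longer than max_length, pad shorter ones with pad_id."""
    if len(token_ids) > max_length:
        return token_ids[:max_length]      # truncation
    # padding to the fixed length
    return token_ids + [pad_id] * (max_length - len(token_ids))

# A sequence longer than the limit is cut down to max_length.
long_seq = list(range(1200))
print(len(pad_or_truncate(long_seq, 1000)))   # 1000

# A short sequence is padded up to max_length with pad_id.
short_seq = [101, 2023, 102]
print(pad_or_truncate(short_seq, 5))          # [101, 2023, 102, 0, 0]
```

With both options set, every example in the batch ends up exactly `max_length` tokens long, which avoids shape-mismatch errors when the examples are stacked into a tensor.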