Problem description

When you encode text with a fast tokenizer and then call pad() separately, transformers logs this warning:

You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Solution

The best fix is the one the warning itself suggests: pass the raw texts to the tokenizer's `__call__` method with padding=True instead of encoding first and padding afterwards. If the separate pad() call is intentional, you can silence the warning by marking it as already emitted:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(...)
# A truthy entry in this dict tells transformers the warning has already
# been shown, so it is skipped on subsequent pad() calls:
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
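Why setting the entry to True suppresses the message: transformers uses the deprecation_warnings dict as a warn-once guard, logging only when the key is still falsy. A minimal stdlib-only sketch of that pattern (the Tokenizer class here is a stand-in, not the real transformers implementation):

```python
import logging

logger = logging.getLogger("tokenizer_demo")

class Tokenizer:
    def __init__(self):
        # Mirrors transformers: dict keyed by warning name; a truthy
        # value means "already warned" (or deliberately suppressed).
        self.deprecation_warnings = {}

    def pad(self, batch):
        if not self.deprecation_warnings.get("Asking-to-pad-a-fast-tokenizer"):
            logger.warning("Asking to pad a fast tokenizer is slower than __call__.")
            self.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
        return batch  # real padding logic omitted

# Capture emitted warnings so we can count them.
messages = []
handler = logging.Handler()
handler.emit = lambda record: messages.append(record.getMessage())
logger.addHandler(handler)

tok = Tokenizer()
tok.pad([1])
tok.pad([2])
print(len(messages))  # warning fired only once

tok2 = Tokenizer()
# Pre-setting the key to True suppresses the warning entirely.
tok2.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
tok2.pad([3])
print(len(messages))  # still only the one warning from tok
```

This is why the one-line fix above works: pre-populating the key makes the guard condition false before the first pad() call ever runs.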