An error that came up while running a TensorFlow-based BERT model, and that puzzled me for a long time. Since the original author could run the code, the cause was not an input-format problem, as other troubleshooting guides suggest. It turned out to be a transformers version mismatch: the input parameters changed between versions, so the input sequences were not padded to a uniform length and could not be converted into a single tensor.
The error is raised at the tokenizer.encode_plus call:
def convert_example_to_feature(review):
    # combine step for tokenization, WordPiece vector mapping, adding special
    # tokens as well as truncating reviews longer than the max length
    return tokenizer.encode_plus(
        review,
        add_special_tokens=True,      # add [CLS], [SEP]
        max_length=max_length,        # max length of the text that can go to BERT
        pad_to_max_length=True,       # add [PAD] tokens
        return_attention_mask=True,   # add attention mask to not focus on pad tokens
        truncation=True,
    )
Fix: add the parameter

    padding="max_length",

so that every sequence is padded to the same maximum length (in newer transformers versions, pad_to_max_length=True is deprecated in favor of the padding argument). This resolves the error.
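A minimal sketch, in plain Python with no transformers dependency, of why the error appears and what padding="max_length" guarantees: token-id lists of different lengths cannot be stacked into one rectangular tensor, while lists padded to a fixed max_length can. The pad_to_max helper and the example token ids below are illustrative stand-ins for the tokenizer's output, not part of the transformers API.

```python
def pad_to_max(ids, max_length, pad_id=0):
    """Mimic padding="max_length": truncate, then right-pad with [PAD] ids."""
    ids = ids[:max_length]                            # truncation=True
    return ids + [pad_id] * (max_length - len(ids))   # add [PAD] tokens

# Hypothetical tokenized reviews of unequal length.
reviews = [[101, 2023, 102], [101, 2023, 3185, 2001, 2307, 102]]

# Without uniform padding, the rows have different lengths,
# so they cannot be stacked into a single rectangular tensor.
assert len(set(len(r) for r in reviews)) != 1

# With padding="max_length" semantics, every row is exactly max_length long.
padded = [pad_to_max(r, max_length=8) for r in reviews]
assert all(len(r) == 8 for r in padded)
```

With every row the same length, the batch can be converted to a single tensor, which is exactly what the failing code could not do.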