Text classification with albert_chinese_small via the BERT interface
import torch
from transformers import BertTokenizer, BertModel, BertConfig
import numpy as np
from torch.utils import data
from sklearn.model_selection import train_test_split
import pandas as pd

# Path to the locally downloaded albert_chinese_small checkpoint;
# its tokenizer is BERT-compatible, so BertTokenizer can load it.
pretrained = r'albert_chinese_small'
tokenizer = BertTokenizer.from_pretrained(pretrained)
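The snippet above ends right after the tokenizer is created. A minimal sketch of the usual next step, building a classifier on top of the encoder's pooled [CLS] output, is shown below. To keep the example self-contained and runnable without downloading the albert_chinese_small weights, it uses a tiny randomly initialized `BertModel` as a stand-in; the class name `TextClassifier` and all configuration sizes are illustrative assumptions, not from the original post.

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

# Hypothetical stand-in: a tiny randomly initialized BERT replaces
# albert_chinese_small so the sketch runs without any checkpoint download.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)
encoder = BertModel(config)

class TextClassifier(nn.Module):
    """Encoder plus a linear head over the pooled [CLS] vector."""
    def __init__(self, encoder, num_labels=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # pooler_output is the [CLS] hidden state passed through a
        # tanh-activated dense layer (recent transformers versions).
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output)  # shape: (batch, num_labels)

model = TextClassifier(encoder)
ids = torch.randint(0, 100, (4, 16))  # 4 fake tokenized sentences, length 16
logits = model(ids)
print(logits.shape)  # torch.Size([4, 2])
```

In a real run, `encoder` would instead be `BertModel.from_pretrained(pretrained)` and `input_ids` would come from `tokenizer(texts, padding=True, return_tensors='pt')`, with the logits fed to `nn.CrossEntropyLoss` during training.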
Original post · 2021-08-20 17:10:07