🚩🚩🚩 Hugging Face Hands-On Series: Table of Contents
Questions are welcome in the comments below.
All code in this article is run in a Jupyter Notebook.
The companion code for this article has been uploaded.
Hugging Face Hands-On Series, Tutorial 11: Building a Text Pretraining Model, Part 2
Hugging Face Hands-On Series, Tutorial 12: Building a Text Pretraining Model, Part 3
In NLP, BERT is the most commonly used model; here we work with a distilled version (DistilBERT), whose parameter count and compute cost are much smaller than the original. BERT-style pretraining needs no labels: it is all cloze-style fill-in-the-blank. Take some text, randomly mask out a few tokens, and have the model guess what they were; if it can guess them, its command of the language is already decent.
Step one: take the pretrained model as-is and see how well it does.
Step two: fine-tune. The pretrained model was trained on massive amounts of general data, so its general ability may be strong while its domain-specific ability is weak; fine-tuning makes it focus on a narrow domain.
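To make the cloze objective concrete, here is a minimal sketch (not from this article's code) of how the random masking is usually done, using Hugging Face's DataCollatorForLanguageModeling with the distilbert-base-uncased checkpoint that appears later in this article; the 15% masking rate follows BERT's convention, and the example sentence is just a placeholder.

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,               # enable masked-language-model masking
    mlm_probability=0.15,   # mask roughly 15% of tokens, as in BERT
)

sample = tokenizer("The quick brown fox jumps over the lazy dog.")
batch = collator([sample])

# Masked positions get the [MASK] id in input_ids; labels keeps the original
# ids at those positions and -100 everywhere else (ignored by the loss).
print(tokenizer.decode(batch["input_ids"][0]))
print(batch["labels"][0])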
1. Masked Language Model
1.1 Import the model
- First grab a small BERT to play with (a distilled one)
- The task we are targeting today differs quite a bit from what the pretrained model was trained for
- So we start from the open-source model and continue training it on our own task
- And we can see how much the results differ depending on the pretrained starting point
import warnings
warnings.filterwarnings("ignore")
from transformers import AutoModelForMaskedLM
model_checkpoint = "distilbert-base-uncased"
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
- Imports
- Suppress warning messages
- Import the auto class for masked language modeling
- Name of the model checkpoint to use (the distilled model)
- Load the model
The first execution will definitely take some time to download and load.
1.2 Check the model's parameter count
distilbert_num_parameters = model.num_parameters() / 1_000_000
print(f"'>>> DistilBERT number of parameters: {round(distilbert_num_parameters)}M'")
print(f"'>>> BERT number of parameters: 110M'")
- Compute the distilled BERT's parameter count in millions
- Print DistilBERT's parameter count
- Print BERT's parameter count directly (a known figure)
Output:
'>>> DistilBERT number of parameters: 67M'
'>>> BERT number of parameters: 110M'
The two actually perform quite similarly, but the distilled model is much smaller. Today's task is to predict what the [MASK] is.
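If you want to verify the count yourself, num_parameters() is roughly a sum of numel() over model.parameters(); a quick sketch:

total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"total: {total / 1e6:.0f}M, trainable: {trainable / 1e6:.0f}M")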
1.3 Load the model and tokenizer
text = "This is a great [MASK]."
[MASK] is a special token defined by BERT. Try not to change this format, because the tokenizer recognizes it as-is.
Load the tokenizer that matches the pretrained model:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
With the tokenizer loaded alongside the pretrained model, we can tokenize the text and convert it into input_ids.
The tokenizer files are downloaded the first time this is run:
inputs = tokenizer(text, return_tensors="pt")
inputs
Tokenize the text, asking for PyTorch tensors ("pt"), and print the result:
{'input_ids': tensor([[ 101, 2023, 2003, 1037, 2307, 103, 1012, 102]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])}
Because we asked for "pt", the result is in tensor form. The input text is converted into input_ids; for example, 101 is the start token ([CLS]) and 102 is the end token ([SEP]).
The mask token also has its own id:
tokenizer.mask_token_id
103
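To double-check what each id stands for, the ids can be converted back into tokens (a quick sanity check, not part of the original code):

print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()))
# Expected: ['[CLS]', 'this', 'is', 'a', 'great', '[MASK]', '.', '[SEP]']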
1.4 Print the model
model
DistilBertForMaskedLM(
(activation): GELUActivation()
(distilbert): DistilBertModel(
(embeddings): Embeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(transformer): Transformer(
(layer): ModuleList(
(0): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(1): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(2): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(3): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(4): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(5): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
(vocab_transform): Linear(in_features=768, out_features=768, bias=True)
(vocab_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(vocab_projector): Linear(in_features=768, out_features=30522, bias=True)
(mlm_loss_fct): CrossEntropyLoss()
)
This is a perfectly standard model: a stack of Transformer blocks (self-attention) produces 768-dimensional vectors, and the final head is a 30522-way classification over the vocabulary. With thirty-odd thousand tokens to choose from, the model has to guess which one the mask is.
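These key dimensions can also be read straight off the config object; a quick check (the field names below are those of DistilBertConfig):

cfg = model.config
print(cfg.n_layers, cfg.dim, cfg.vocab_size)  # 6 Transformer blocks, 768-dim hidden states, 30522-token vocabulary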
2. Pretrained Model Results
2.1 Load the model and data
A model's predictions are, of course, highly correlated with its training data.
Original training data: wikipedia · Datasets at Hugging Face
import torch
inputs = tokenizer(text, return_tensors="pt")
token_logits = model(**inputs).logits
print(token_logits.shape)
- Import torch
- Tokenize the text into PyTorch tensors (same call as above)
- .logits extracts the raw prediction scores from the model output
- Print the shape of the result
Output:
torch.Size([1, 8, 30522])
The 1 is a single sentence, i.e. the batch size; the 8 means every token gets its own prediction over what word it is; and 30522 is the per-token score for each vocabulary entry (the final probabilities come from a softmax).
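If you want actual probabilities rather than raw logits, apply a softmax over the last dimension; a small sketch:

probs = torch.softmax(token_logits, dim=-1)
print(probs.shape)            # still torch.Size([1, 8, 30522])
print(probs[0].sum(dim=-1))   # each position's probabilities sum to 1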
2.2 Find the mask
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
mask_token_logits = token_logits[0, mask_token_index, :]
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
for token in top_5_tokens:
    print(f"'>>> {text.replace(tokenizer.mask_token, tokenizer.decode([token]))}'")
- Find the index of the position where the mask token sits
- Select the logits for the masked position: 0 is the first (and only) sentence in the batch, mask_token_index is the mask's position, and : means all 30522 vocabulary scores
- Take the top 5 highest-scoring predictions
- Loop over the 5 final predictions
- decode([token]) converts each predicted index back into a word
Output:
'>>> This is a great deal.'
'>>> This is a great success.'
'>>> This is a great adventure.'
'>>> This is a great idea.'
'>>> This is a great feat.'
Unsurprisingly, the guesses are all fairly generic words; the model has not yet been specialized to any particular domain.
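For reference, the same top-5 predictions can be obtained more compactly with the fill-mask pipeline (a sketch using the same checkpoint, not the approach this series follows):

from transformers import pipeline

mask_filler = pipeline("fill-mask", model=model_checkpoint)
for pred in mask_filler(text, top_k=5):
    print(f"'>>> {pred['sequence']}'")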