BERT学习笔记

BERT全称Bidirectional Enoceder Representations from Transformers,即双向的Transformers的Encoder

BERT结构

Bert的内部结构有两种Size,其中L表示的是transformer的层数,H表示输出的维度,A表示mutil-head attention的个数。

BERT的输入

bert的输入可以是单一的一个句子或者是句子对,实际的输入值是segment embedding与position embedding相加。

BERT的输入词向量是三个向量之和:

Token Embeddings:词向量,第一个单词是CLS标志,可以用于之后的分类任务
Segment Embeddings:表明这个词属于哪个句子(NSP需要两个句子)
Position Embedding:学习出来的embedding向量。这与Transformer不同,Transformer中是预先设定好的值。

BERT的预训练

BERT的预训练阶段包括两个任务,一个是Masked Language Model (Masked LM),还有一个是下句预测 Next Sentence Prediction(NSP)

#1 Masked Language Model

Masked LM 可以形象地称为完形填空问题,随机掩盖掉每一个句子中15%的词,用其上下文来去判断被盖住的词原本应该是什么。

举例来说,有这样一个未标注句子 my dog is hairy ,我们可能随机选择了hairy进行遮掩,就变成 my dog is [mask] ,训练模型去预测 [mask] 位置的词,使预测出 hairy的可能性最大。

但是因为是mask15%的词,其数量已经很高了,这样就会导致某些词在fine-tuning阶段从未见过,为了解决这个问题,作者做了如下的处理:

80%的时间是采用[mask],my dog is hairymy dog is [MASK]

10%的时间是随机取一个词来代替mask的词,my dog is hairy -> my dog is apple

10%的时间保持不变,my dog is hairy -> my dog is hairy

为什么以一定的概率使用随机词呢?

因为transformer要保持对每个输入token分布式的表征,否则Transformer很可能会记住这个[MASK]就是"hairy",从而导致若训练样本和微调的样本mask不一致的情况下,模型预测出现很大的偏差。

#2 Next Sentence Prediction

MLM任务倾向于抽取token层次的表征,因此不能直接获取句子层次的表征。为了使模型能够有能力理解句子间的关系,BERT使用了NSP任务来预训练,预测两个句子是否连在一起。

对于每一个训练样例,我们在语料库中挑选出句子A和句子B来组成,50%的时候句子B就是句子A的下一句(标注为IsNext),剩下50%的时候句子B是语料库中的随机句子(标注为NotNext)。

其输入形式是,开头是一个特殊符号[CLS],然后两个句子之间用[SEP]隔断:

训练样例如下:

Input = [CLS] the man went to [MASK] store [SEP]he bought a gallon [MASK] milk [SEP]
Label = IsNext
Input = [CLS] the man [MASK] to the store [SEP]penguin [MASK] are flight ##less birds[SEP]
Label = NotNext

BERT的输出

BERT最终输出的就是句子中每个token的768维的向量,第一个位置是[CLS],它的向量表示蕴含了这个句子整体的信息,用于做文本分类等句子级任务;对于序列标注等token级任务,就需要使用到每一个token的向量表示。

代码实现

"""
Task: BERT模型的实现
Date: 2023/12/5
Reference: https://github.com/graykode/nlp-tutorial/blob/master/5-2.BERT/BERT.py
"""

import math
import re
from random import *
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim


def make_batch():
    batch = []
    positive = negative = 0  # 为了记录NSP任务中的正样本和负样本的个数,比例最好是在一个batch中接近1:1
    while positive != batch_size / 2 or negative != batch_size / 2:
        # 从整个样本中随机抽取对应的样本的索引;比如tokens_a_index=3,tokens_b_index=1;randrange() 方法返回指定递增基数集合中的一个随机数,基数默认值为1
        tokens_a_index, tokens_b_index = randrange(len(sentences)), randrange(len(sentences))
        # 根据索引获取对应样本:tokens_a=[5, 23, 26, 20, 9, 13, 18] tokens_b=[27, 11, 23, 8, 17, 28, 12, 22, 16, 25]
        tokens_a, tokens_b = token_list[tokens_a_index], token_list[tokens_b_index]
        # 加上特殊符号,CLS符号是1,sep符号是2:[1, 5, 23, 26, 20, 9, 13, 18, 2, 27, 11, 23, 8, 17, 28, 12, 22, 16, 25, 2]
        input_ids = [word_dict['[CLS]']] + tokens_a + [word_dict['[SEP]']] + tokens_b + [word_dict['[SEP]']]
        # 分割句子符号:[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]。0表示第一个句子,1表示第二个句子
        segment_ids = [0] * (1 + len(tokens_a) + 1) + [1] * (len(tokens_b) + 1)

        '''
        MASK LM:有很多种做法,这里只是其中一种,一些做法是没有这个max_pred的。
        n_pred=3:整个句子的15%的字符可以被mask掉,这里取和max_pred中的最小值,确保每次计算损失的时候没有那么多字符以及信息充足
        '''
        n_pred = min(max_pred, max(1, int(round(len(input_ids) * 0.15))))
        # 整个句子input_ids中可以被mask的符号必须是非cls和sep符号的
        cand_maked_pos = [i for i, token in enumerate(input_ids)
                          if token != word_dict['[CLS]'] and token != word_dict['[SEP]']]  # cand_maked_pos=[1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18];
        shuffle(cand_maked_pos)  # 打乱顺序
        masked_tokens, masked_pos = [], []
        # 取其中的三个;masked_pos=[6, 5, 17] 这里对应的是position信息;
        # masked_tokens=[13, 9, 16]:真实标签,这里是被mask的元素之前对应的原始单字数字;
        for pos in cand_maked_pos[:n_pred]:
            masked_pos.append(pos)
            masked_tokens.append(input_ids[pos])
            if random() < 0.8:  # 80%的概率是将它替换成这个特殊的掩码符号
                input_ids[pos] = word_dict['[MASK]']  # make mask
            elif random() < 0.5:  # 10%的概率将它替换成一个随机的词元
                index = randint(0, vocab_size - 1)  # random index in vocabulary
                input_ids[pos] = word_dict[number_dict[index]]  # replace
            # 最后有10 % 的概率不改变,用来做预测

        # Zero Paddings
        n_pad = maxlen - len(input_ids)  # maxlen=30;n_pad=10
        input_ids.extend([0] * n_pad)  # 在input_ids后面补零
        segment_ids.extend([0] * n_pad)  # 在segment_ids 后面补零

        # Zero Padding (100% - 15%) tokens 是为了计算一个batch中句子的mlm损失的时候可以组成一个有效矩阵放进去
        if max_pred > n_pred:
            n_pad = max_pred - n_pred
            masked_tokens.extend(
                [0] * n_pad)  # masked_tokens= [13, 9, 16, 0, 0] masked_tokens 对应的是被mask的元素的原始真实标签groundtruth
            masked_pos.extend([0] * n_pad)  # masked_pos= [6, 5, 17,0,0] 记录哪些位置被mask了
        # a句子和b句子是相邻句子,那么就是正样例。
        if tokens_a_index + 1 == tokens_b_index and positive < batch_size / 2:
            batch.append([input_ids, segment_ids, masked_tokens, masked_pos, True])  # IsNext
            positive += 1
        elif tokens_a_index + 1 != tokens_b_index and negative < batch_size / 2:
            batch.append([input_ids, segment_ids, masked_tokens, masked_pos, False])  # NotNext
            negative += 1
    return batch
# Proprecessing Finished

def get_attn_pad_mask(seq_q, seq_k):
    batch_size, len_q = seq_q.size()
    batch_size, len_k = seq_k.size()
    # eq(zero) is PAD token
    pad_attn_mask = seq_k.data.eq(0).unsqueeze(1)  # batch_size x 1 x len_k(=len_q), one is masking
    return pad_attn_mask.expand(batch_size, len_q, len_k)  # batch_size x len_q x len_k

def gelu(x):
    "Implementation of the gelu activation function by Hugging Face"
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

class Embedding(nn.Module):
    def __init__(self):
        super(Embedding, self).__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # token embedding
        self.pos_embed = nn.Embedding(maxlen, d_model)  # position embedding
        self.seg_embed = nn.Embedding(n_segments, d_model)  # segment(token type) embedding
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, seg):
        seq_len = x.size(1)
        pos = torch.arange(seq_len, dtype=torch.long)
        pos = pos.unsqueeze(0).expand_as(x)  # (seq_len,) -> (batch_size, seq_len)
        embedding = self.tok_embed(x) + self.pos_embed(pos) + self.seg_embed(seg)
        return self.norm(embedding)

"""点积自注意力"""
class ScaledDotProductAttention(nn.Module):
    def __init__(self):
        super(ScaledDotProductAttention, self).__init__()

    def forward(self, Q, K, V, attn_mask):
        scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k) # scores : [batch_size x n_heads x len_q(=len_k) x len_k(=len_q)]
        scores.masked_fill_(attn_mask, -1e9) # Fills elements of self tensor with value where mask is one.
        attn = nn.Softmax(dim=-1)(scores)
        context = torch.matmul(attn, V)
        return context, attn

"""多头注意力机制,类似于Transformer"""
class MultiHeadAttention(nn.Module):
    def __init__(self):
        super(MultiHeadAttention, self).__init__()
        self.W_Q = nn.Linear(d_model, d_k * n_heads)
        self.W_K = nn.Linear(d_model, d_k * n_heads)
        self.W_V = nn.Linear(d_model, d_v * n_heads)
    def forward(self, Q, K, V, attn_mask):
        # q: [batch_size x len_q x d_model], k: [batch_size x len_k x d_model], v: [batch_size x len_k x d_model]
        residual, batch_size = Q, Q.size(0)
        # (B, S, D) -proj-> (B, S, D) -split-> (B, S, H, W) -trans-> (B, H, S, W)
        q_s = self.W_Q(Q).view(batch_size, -1, n_heads, d_k).transpose(1,2)  # q_s: [batch_size x n_heads x len_q x d_k]
        k_s = self.W_K(K).view(batch_size, -1, n_heads, d_k).transpose(1,2)  # k_s: [batch_size x n_heads x len_k x d_k]
        v_s = self.W_V(V).view(batch_size, -1, n_heads, d_v).transpose(1,2)  # v_s: [batch_size x n_heads x len_k x d_v]

        attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1) # attn_mask : [batch_size x n_heads x len_q x len_k]

        # context: [batch_size x n_heads x len_q x d_v], attn: [batch_size x n_heads x len_q(=len_k) x len_k(=len_q)]
        context, attn = ScaledDotProductAttention()(q_s, k_s, v_s, attn_mask)
        context = context.transpose(1, 2).contiguous().view(batch_size, -1, n_heads * d_v) # context: [batch_size x len_q x n_heads * d_v]
        output = nn.Linear(n_heads * d_v, d_model)(context)
        return nn.LayerNorm(d_model)(output + residual), attn # output: [batch_size x len_q x d_model]

"""前馈神经网络"""
class PoswiseFeedForwardNet(nn.Module):
    def __init__(self):
        super(PoswiseFeedForwardNet, self).__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)

    def forward(self, x):
        # (batch_size, len_seq, d_model) -> (batch_size, len_seq, d_ff) -> (batch_size, len_seq, d_model)
        return self.fc2(gelu(self.fc1(x)))


"""编码层,包括多头自注意力和前馈神经网络"""
class EncoderLayer(nn.Module):
    def __init__(self):
        super(EncoderLayer, self).__init__()
        self.enc_self_attn = MultiHeadAttention()
        self.pos_ffn = PoswiseFeedForwardNet()

    def forward(self, enc_inputs, enc_self_attn_mask):
        enc_outputs, attn = self.enc_self_attn(enc_inputs, enc_inputs, enc_inputs, enc_self_attn_mask) # enc_inputs to same Q,K,V
        enc_outputs = self.pos_ffn(enc_outputs) # enc_outputs: [batch_size x len_q x d_model]
        return enc_outputs, attn


"""BERT模型整体架构"""
class BERT(nn.Module):
    def __init__(self):
        super(BERT, self).__init__()
        self.embedding = Embedding()  # 词向量层,构建词表矩阵
        # 把N个Encoder堆叠起来
        self.layers = nn.ModuleList([EncoderLayer() for _ in range(n_layers)])
        # 前馈神经网络
        self.fc = nn.Linear(d_model, d_model)
        self.activ1 = nn.Tanh()
        # mlm
        self.linear = nn.Linear(d_model, d_model)
        self.activ2 = gelu
        self.norm = nn.LayerNorm(d_model)  # 层归一化
        # cls 这是一个分类层,维度是从d_model到2
        self.classifier = nn.Linear(d_model, 2)
        # decoder is shared with embedding layer
        embed_weight = self.embedding.tok_embed.weight
        n_vocab, n_dim = embed_weight.size()
        self.decoder = nn.Linear(n_dim, n_vocab, bias=False)
        self.decoder.weight = embed_weight
        self.decoder_bias = nn.Parameter(torch.zeros(n_vocab))

    def forward(self, input_ids, segment_ids, masked_pos):
        # 生成input_ids对应的embdding;和segment_ids对应的embedding
        output = self.embedding(input_ids, segment_ids)
        # 和Transformer的get_attn_pad_mask一样
        enc_self_attn_mask = get_attn_pad_mask(input_ids, input_ids)
        for layer in self.layers:
            output, enc_self_attn = layer(output, enc_self_attn_mask)
        # output : [batch_size, len, d_model], attn : [batch_size, n_heads, d_model, d_model]
        # it will be decided by first token(CLS)
        h_pooled = self.activ1(self.fc(output[:, 0]))  # [batch_size, d_model]
        logits_clsf = self.classifier(h_pooled)  # [batch_size, 2],cls分类的任务
        # [batch_size, max_pred, d_model]  其中一个 masked_pos= [6, 5, 17,0,0]
        masked_pos = masked_pos[:, :, None].expand(-1, -1, output.size(-1))
        # get masked position from final output of transformer.
        h_masked = torch.gather(output, 1, masked_pos)  # masking position [batch_size, max_pred, d_model]
        h_masked = self.norm(self.activ2(self.linear(h_masked)))
        logits_lm = self.decoder(h_masked) + self.decoder_bias  # [batch_size, max_pred, n_vocab]

        return logits_lm, logits_clsf


if __name__ == '__main__':
    # BERT Parameters
    maxlen = 30  # 句子的最大长度
    batch_size = 6  # 每一组有多少个句子
    max_pred = 5  # max tokens of prediction
    n_layers = 6  # number of Encoder of Encoder Layer
    n_heads = 12  # number of heads in Multi-Head Attention
    d_model = 768  # Embedding Size
    d_ff = 3072  # 4*d_model, FeedForward dimension
    d_k = d_v = 64  # dimension of K(=Q), V
    n_segments = 2

    text = (
        'Hello, how are you? I am Romeo.\n'
        'Hello, Romeo My name is Juliet. Nice to meet you.\n'
        'Nice meet you too. How are you today?\n'
        'Great. My baseball team won the competition.\n'
        'Oh Congratulations, Juliet\n'
        'Thanks you Romeo'
    )
    # 过滤掉'.', ',', '?', '!',以\n为分隔符。
    sentences = re.sub("[.,!?\\-]", '', text.lower()).split('\n')
    # 句子之间使用分隔符(空格)连接。然后使用空格为分隔符进行分割
    word_list = list(set(" ".join(sentences).split()))
    # 特殊字符对应的数字
    word_dict = {'[PAD]': 0, '[CLS]': 1, '[SEP]': 2, '[MASK]': 3}
    for i, w in enumerate(word_list):
        word_dict[w] = i + 4  # 0,1,2,3为特殊字符,所以从4开始赋值给句子单词
    # 转换为【索引-单词】词典
    number_dict = {i: w for i, w in enumerate(word_dict)}
    # 词汇大小
    vocab_size = len(word_dict)

    token_list = list()
    # 通过词典,把单词转换为数字,存储在token_list中
    for sentence in sentences:
        arr = [word_dict[s] for s in sentence.split()]
        token_list.append(arr)

    # 获取NSP任务中的正样本和负样本
    batch = make_batch()
    input_ids, segment_ids, masked_tokens, masked_pos, isNext = map(torch.LongTensor, zip(*batch))

    model = BERT()
    # mask的位置才计算损失,不被mask的不计算。
    criterion = nn.CrossEntropyLoss(ignore_index=0)  # 注意这里的参数ignore_index=0,表示忽略真实标签为0的样本。
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    for epoch in range(100):
        optimizer.zero_grad()
        logits_lm, logits_clsf = model(input_ids, segment_ids,
                                       masked_pos)  # logits_lm 【6,5,29】 bs*max_pred*voca  logits_clsf:[6*2]
        loss_lm = criterion(logits_lm.transpose(1, 2), masked_tokens)  # for masked LM ;masked_tokens [6,5]
        loss_lm = (loss_lm.float()).mean()
        loss_clsf = criterion(logits_clsf, isNext)  # for sentence classification
        loss = loss_lm + loss_clsf
        if (epoch + 1) % 10 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
        loss.backward()
        optimizer.step()

    # Predict mask tokens ans isNext
    input_ids, segment_ids, masked_tokens, masked_pos, isNext = map(torch.LongTensor, zip(batch[0]))
    print(text)
    print([number_dict[w.item()] for w in input_ids[0] if number_dict[w.item()] != '[PAD]'])

    logits_lm, logits_clsf = model(input_ids, segment_ids, masked_pos)
    logits_lm = logits_lm.data.max(2)[1][0].data.numpy()
    print('masked tokens list : ', [pos.item() for pos in masked_tokens[0] if pos.item() != 0])
    print('predict masked tokens list : ', [pos for pos in logits_lm if pos != 0])

    logits_clsf = logits_clsf.data.max(1)[1].data.numpy()[0]
    print('isNext : ', True if isNext else False)
    print('predict isNext : ', True if logits_clsf else False)

参考 

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

一文读懂BERT(原理篇) -CSDN博客

nlp-tutorial/5-2.BERT/BERT.py at master · graykode/nlp-tutorial · GitHub

BERT代码(源码)从零解读【Pytorch-手把手教你从零实现一个BERT源码模型】_哔哩哔哩_bilibili

  • 6
    点赞
  • 0
    收藏
    觉得还不错? 一键收藏
  • 1
    评论
BERT linear是指在BERT模型的基础上添加一个线性层,并通过softmax函数将输出转化为类别。具体步骤如下:首先,预训练好BERT模型,可以使用填空题的数据进行预训练。然后,将训练样本输入整个网络,训练线性层的参数(BERT模型的参数固定)。这个过程可以看作是一个半监督问题,因为只有部分数据有标注。输出的类别与输入的长度一致,比如可以用于词性判断或判断两个句子之间的关系。通过BERT linear模型,可以进行文本分类任务,并取得比基础的BERT模型更好的效果。\[1\] \[2\] \[3\] #### 引用[.reference_title] - *1* [BERT(李宏毅课程)](https://blog.csdn.net/qq_40438523/article/details/121110628)[target="_blank" data-report-click={"spm":"1018.2226.3001.9630","extra":{"utm_source":"vip_chatgpt_common_search_pc_result","utm_medium":"distribute.pc_search_result.none-task-cask-2~all~insert_cask~default-1-null.142^v91^control,239^v3^insert_chatgpt"}} ] [.reference_item] - *2* [NLP实战 | BERT文本分类及其魔改(附代码)](https://blog.csdn.net/qq_27590277/article/details/127236825)[target="_blank" data-report-click={"spm":"1018.2226.3001.9630","extra":{"utm_source":"vip_chatgpt_common_search_pc_result","utm_medium":"distribute.pc_search_result.none-task-cask-2~all~insert_cask~default-1-null.142^v91^control,239^v3^insert_chatgpt"}} ] [.reference_item] - *3* [《深度学习学习笔记(六)](https://blog.csdn.net/weixin_43135165/article/details/124288068)[target="_blank" data-report-click={"spm":"1018.2226.3001.9630","extra":{"utm_source":"vip_chatgpt_common_search_pc_result","utm_medium":"distribute.pc_search_result.none-task-cask-2~all~insert_cask~default-1-null.142^v91^control,239^v3^insert_chatgpt"}} ] [.reference_item] [ .reference_list ]
评论 1
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值