Mofan NLP - GPT: a Unidirectional Language Model

Video link: https://mofanpy.com/tutorials/machine-learning/nlp/gpt/

Why study GPT:

  1. GPT is more efficient to train than BERT.
  2. In Mofan's code, BERT inherits from GPT, so learning GPT first is faster.
  3. In knowledge tracing, we predict the next question from the previous one and must not leak information about later questions, so it is also a unidirectional-model setting.

So let's get started.

The model: Generative Pre-Training (GPT)

  The benefit of ever larger models is obvious: more nonlinear capacity to handle more complex problems. But this brings another difficulty: they are hard to train. Every time we train a huge model, we burn more compute and more time.

  GPT's main goal is still to do what a pre-trained model should do: train a model on unsupervised human language data, then finetune it, and it will generally perform well on other tasks too. Since the downstream tasks to finetune on vary wildly, this tutorial focuses on the GPT model itself: what it looks like and what properties it has. The finetuning that follows is actually much easier than the model itself.

  Some people say GPT is the Transformer's Decoder, but I think that is not quite accurate. It is more like a blend of the Transformer's Decoder and Encoder: it uses the Decoder's future mask (look-ahead mask), but structurally it is closer to the Encoder.

This design exists to make GPT easy to train: it predicts later text from earlier text, hence the future mask.

Without the future mask, unsupervised training on a large corpus would very likely let the model see A's information while it is predicting A, a kind of information leakage from the future. Concretely: in the Transformer's multi-head attention every head sees all of the text, so if we predict later tokens from earlier ones without a future mask, the model can already see exactly what it is supposed to predict, and such training is useless. The future mask simply hides that "time-travelled" information, an invisible hand covering the model's x-ray eyes.
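To see what the future mask actually does, here is a minimal numeric sketch (my own illustration, not Mofan's code): the mask adds a large negative number to the attention scores of future positions, so after the softmax those positions get (almost) zero weight.

import tensorflow as tf

# Toy attention scores for 4 query tokens over 4 key tokens: [step, step].
scores = tf.random.normal((4, 4))

# Look-ahead mask: 1 where the key position lies in the future of the query position.
look_ahead = 1 - tf.linalg.band_part(tf.ones((4, 4)), -1, 0)

# Push masked scores towards -inf, then softmax.
weights = tf.nn.softmax(scores + look_ahead * -1e9, axis=-1)
print(weights.numpy().round(2))  # the upper triangle is ~0: no peeking at future tokens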

Another difference from the Transformer Decoder is that GPT does not use the attention information provided by an Encoder. So GPT's decoder has fewer sublayers than the Transformer's: it is all self-attention, with no vanilla (cross) attention. At first glance the final model does look a lot like one part of the Transformer, with just two differences (a minimal structural sketch follows the list below).

  1. The Decoder drops the sublayers that connect to the Encoder;
  2. Attention uses only the future mask (look-ahead mask).
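To make those two differences concrete, here is a minimal sketch of one GPT block written with standard Keras layers. It is an illustration under my own assumptions, not Mofan's Encoder code (which also uses the opposite mask convention): only masked self-attention plus a feed-forward sublayer, and no cross-attention to an encoder.

import tensorflow as tf
from tensorflow import keras

class GPTBlock(keras.layers.Layer):
    """One GPT block: masked self-attention + feed-forward, no encoder-decoder attention."""
    def __init__(self, model_dim, n_head, drop_rate):
        super().__init__()
        self.attn = keras.layers.MultiHeadAttention(num_heads=n_head, key_dim=model_dim // n_head)
        self.ffn = keras.Sequential([
            keras.layers.Dense(model_dim * 4, activation="relu"),
            keras.layers.Dense(model_dim),
        ])
        self.ln1 = keras.layers.LayerNormalization()
        self.ln2 = keras.layers.LayerNormalization()
        self.drop = keras.layers.Dropout(drop_rate)

    def call(self, x, training=False):
        step = tf.shape(x)[1]
        # Keras' attention_mask uses 1 = "may attend", so the lower triangle is kept directly.
        causal = tf.linalg.band_part(tf.ones((step, step)), -1, 0)
        a = self.attn(x, x, attention_mask=causal, training=training)
        x = self.ln1(x + self.drop(a, training=training))
        x = self.ln2(x + self.drop(self.ffn(x), training=training))
        return x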

For a paper walkthrough of "Attention Is All You Need", this write-up is concise and its content fairly accurate.


Tasks: how to train the model

Of course there can be many more tasks; it depends on what tasks your data supports. Training one model on several tasks at once gives it stronger generalization across more tasks.

Here the model is trained on two tasks: 1. unsupervised prediction of the following text; 2. whether the second sentence is the next sentence (NSP).

Correction: the string1/string2 pairs in this dataset (MRPC) are not actually consecutive sentences; the corpus ships "along with human annotations indicating whether each pair captures a paraphrase/semantic equivalence relationship. Last published: March 3, 2005."

Result analysis

  1. Because of the future mask, GPT cannot predict the first half of a sentence well: at that point there is simply too little preceding information. This is exactly why we call GPT a unidirectional language model.

ELMo's forward LSTM has the same problem, and so do recommender systems and knowledge tracing; it is a kind of cold-start problem, and worth thinking about how to solve.

  2. Mofan's observation: many heads put their attention on the very first token. Quite possibly the model does not need to attend to anything at that point, and in order to "attend to nothing" it dumps its attention onto unimportant information, which is what shows up here.

Ordinary NLP players can simply watch this debate from the sidelines.

Data processing

utils.MRPCData():

  • seqs[:, :-1] is the sentence part of the X input: [ [ string1, string2 ] ].
  • segs[:, :-1] is the segment information of the X input, telling the model whether a token belongs to the first or the second sentence. Since both sentences are fed to the model together inside seqs, the model needs to know which sentence each token comes from.
  • seqs[:, 1:] is the Y of the unsupervised task, i.e. the labels: predict the following tokens from the preceding ones.
  • nsp_labels says whether the two input sentences are in a preceding/following relationship.
    Same as in BERT. A toy illustration of these slices follows below.

The GPT framework

For the model architecture we reuse the Encoder code from the Transformer post, since it is generic.

  1. We only need to swap out the mask rule used inside the Encoder. The GPT class was already annotated in the BERT post.
  2. Define the word embedding word_emb, the segment embedding segment_emb and the position embedding position_emb. With these three embeddings the input side is done; after that we plug in the Transformer Encoder directly.
  3. In the forward pass, call() runs X through all the embeddings and then straight into the Transformer Encoder to get the attended result, which finally goes through the two output heads, mlm (the unsupervised language model) and nsp (is it the next sentence), to make the two task predictions.
  4. The effect of the future mask is shown as a figure in the original post; the mask() docstring below reproduces the same pattern.

Code

import tensorflow as tf
from tensorflow import keras
from transformer import Encoder  # the Encoder from the Transformer post (module name as in Mofan's repo)

class GPT(keras.Model):
    def __init__(self, model_dim, max_len, n_layer, n_head, n_vocab, lr, max_seg=3, drop_rate=0.1, padding_idx=0):
        super().__init__()
        self.padding_idx = padding_idx  # pad_id = 0
        self.n_vocab = n_vocab  # len(self.v2i)
        self.max_len = max_len  # d.max_len - 1 = 72 - 1

        # I think task emb is not necessary for pretraining,
        # because the aim of all tasks is to train a universal sentence embedding
        # the body encoder is the same across all tasks,
        # and different output layer defines different task just like transfer learning.
        # finetuning replaces output layer and leaves the body encoder unchanged.

        # self.task_emb = keras.layers.Embedding(
        #     input_dim=n_task, output_dim=model_dim,  # [n_task, dim]
        #     embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        # )

        self.word_emb = keras.layers.Embedding(
            input_dim=n_vocab, output_dim=model_dim,  # [n_vocab, dim]
            embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        )  # word embedding
        self.segment_emb = keras.layers.Embedding(
            input_dim=max_seg, output_dim=model_dim,  # [max_seg, dim]
            embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        )  # segment embedding; seg values: 0 = sentence 1, 1 = sentence 2, 2 = padding
        self.position_emb = self.add_weight(
            name="pos", shape=[1, max_len, model_dim], dtype=tf.float32,  # [1, step, dim]; broadcasts over the batch dim when added
            initializer=keras.initializers.RandomNormal(0., 0.01))
        # position embedding, learned as a parameter here; the original paper uses a fixed sinusoidal formula
        self.encoder = Encoder(n_head, model_dim, drop_rate, n_layer)  # reused directly from the Transformer post
        self.task_mlm = keras.layers.Dense(n_vocab)  # task 1: predict the next word
        self.task_nsp = keras.layers.Dense(2)  # task 2: are the two sentences consecutive (NSP)

        self.cross_entropy = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction="none")
        # reduction="auto" would average the loss at the end; with "none" the per-token losses are kept (shape [n, step])
        self.opt = keras.optimizers.Adam(lr)

    def call(self, seqs, segs, training=False):
        # the training flag controls dropout; the mask matrix controls what attention may see
        embed = self.input_emb(seqs, segs)  # [n, step, dim]
        z = self.encoder(embed, training=training, mask=self.mask(seqs))     # [n, step, dim]
        mlm_logits = self.task_mlm(z)  # [n, step, n_vocab]
        nsp_logits = self.task_nsp(tf.reshape(z, [z.shape[0], -1]))  # [n, n_cls]
        return mlm_logits, nsp_logits

    def step(self, seqs, segs, seqs_, nsp_labels):
    ...
    
    def input_emb(self, seqs, segs):
        return self.word_emb(seqs) + self.segment_emb(segs) + self.position_emb  # [n, step, dim]

    def mask(self, seqs):
	...
	
    @property
    def attentions(self):
        attentions = {
            "encoder": [l.mh.attention.numpy() for l in self.encoder.ls],
        }
        return attentions

m = GPT(model_dim=MODEL_DIM, max_len=d.max_len - 1, n_layer=N_LAYER, n_head=4, n_vocab=d.num_word,
        lr=LEARNING_RATE, max_seg=d.num_seg, drop_rate=0.2, padding_idx=d.pad_id)

See the code comments; most of this was already explained in the BERT post. BERT overrides GPT's step and mask functions, so let's look at what GPT's own versions do:

- The step function

tf.math.not_equal performs a broadcast with its arguments and then an element-wise inequality comparison, returning a Tensor of boolean values.
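A quick toy illustration of how not_equal and boolean_mask work together in the loss below (values are made up, padding_idx = 0):

import tensorflow as tf

seqs_ = tf.constant([[5, 6, 2, 0, 0]])                     # targets, 0 = padding
pad_mask = tf.math.not_equal(seqs_, 0)                     # [[True, True, True, False, False]]

per_token_loss = tf.constant([[0.7, 0.3, 0.2, 9.0, 9.0]])  # shape [n, step], like self.cross_entropy's output
kept = tf.boolean_mask(per_token_loss, pad_mask)           # [0.7, 0.3, 0.2]: padding positions dropped
print(tf.reduce_mean(kept).numpy())                        # 0.4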

    def step(self, seqs, segs, seqs_, nsp_labels):
        with tf.GradientTape() as tape:
            mlm_logits, nsp_logits = self.call(seqs, segs, training=True)
            pad_mask = tf.math.not_equal(seqs_, self.padding_idx)  # True at non-padding positions
            # mlm_logits: [n, step, n_vocab]
            pred_loss = tf.reduce_mean(tf.boolean_mask(self.cross_entropy(seqs_, mlm_logits), pad_mask))  # cross-entropy averaged over non-padding positions only
            # nsp_logits: [n, n_cls]
            nsp_loss = tf.reduce_mean(self.cross_entropy(nsp_labels, nsp_logits))
            loss = pred_loss + 0.2 * nsp_loss
        grads = tape.gradient(loss, self.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.trainable_variables))
        return loss, mlm_logits

- The mask function

tf.linalg.band_part keeps a band of a matrix; with num_lower=-1 and num_upper=0 it keeps the lower triangle:

tf.linalg.band_part(tf.ones((5, 5)), -1, 0)
Out[14]: 
<tf.Tensor: shape=(5, 5), dtype=float32, numpy=
array([[1., 0., 0., 0., 0.],
       [1., 1., 0., 0., 0.],
       [1., 1., 1., 0., 0.],
       [1., 1., 1., 1., 0.],
       [1., 1., 1., 1., 1.]], dtype=float32)>

The Transformer (part 1) post has a worked example of how this masking is done.

    def mask(self, seqs):
        """
         abcd--
        a011111
        b001111
        c000111
        d000011
        -000011
        -000011
        force the heads not to see later tokens: masked (1) positions are filled with a
        large negative number before the softmax.
        a is an embedding for a---
        b is an embedding for ab--
        c is an embedding for abc-
        later, b's embedding (plus the residual from the previous layer) is used to predict c
        """
        mask = 1 - tf.linalg.band_part(tf.ones((self.max_len, self.max_len)), -1, 0)
        pad = tf.math.equal(seqs, self.padding_idx)
        # e.g. 3 sentences, step = 5:
        # pad [3,5] -> [3,1,1,5]; future mask [5,5] -> [1,1,5,5]; broadcast -> [3,1,5,5]
        mask = tf.where(pad[:, tf.newaxis, tf.newaxis, :], 1, mask[tf.newaxis, tf.newaxis, :, :])
        return mask  # [n, 1, step, step]

The mask shape is therefore (batch, 1, step, step), while the attention weights have shape [batch, num_heads, q_step, step]; here q_step = step because the self-attention matrix is square.
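A small shape check of that broadcast, replicated with toy values (a sketch of the same logic as mask(), with step = 5):

import tensorflow as tf

step, pad_id = 5, 0
seqs = tf.constant([[3, 4, 5, 0, 0],   # 3 toy sequences, trailing zeros are padding
                    [6, 7, 0, 0, 0],
                    [8, 9, 1, 2, 0]])

future = 1 - tf.linalg.band_part(tf.ones((step, step)), -1, 0)  # [step, step], 1 = blocked
pad = tf.math.equal(seqs, pad_id)                               # [3, step]
mask = tf.where(pad[:, tf.newaxis, tf.newaxis, :], 1.0, future[tf.newaxis, tf.newaxis, :, :])
print(mask.shape)  # (3, 1, 5, 5): broadcasts against attention of shape [batch, num_heads, step, step]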

- The train function

GPT's labels are easy to build, the same as in knowledge tracing (use the previous question to predict the next).

def train(model, data, step=10000, name="gpt"):
    t0 = time.time()
    for t in range(step):
        seqs, segs, xlen, nsp_labels = data.sample(16)
        loss, pred = model.step(seqs[:, :-1], segs[:, :-1], seqs[:, 1:], nsp_labels)
        if t % 100 == 0:
            pred = pred[0].numpy().argmax(axis=1)
            t1 = time.time()
            print(
                "\n\nstep: ", t,
                "| time: %.2f" % (t1 - t0),
                "| loss: %.3f" % loss.numpy(),
                "\n| tgt: ", " ".join([data.i2v[i] for i in seqs[0, 1:][:xlen[0].sum()+2]]),#二次筛选长度 应该+2:到<sep>结束符
                "\n| prd: ", " ".join([data.i2v[i] for i in pred[:xlen[0].sum()+2]]),
                )
            t0 = t1
    os.makedirs("./visual/models/%s" % name, exist_ok=True)
    model.save_weights("./visual/models/%s/model.ckpt" % name)

Run results


num word:  12880 max_len:  72

step:  100 | time: 13.26 | loss: 7.495 
| tgt:  the unions also staged a five-day strike in march that forced all but one of yale 's dining halls to close . <SEP> the unions also staged a five-day strike in march ; strikes have preceded eight of the last <NUM> contracts . <SEP> 
| prd:  the . the the the the the . . the . . the the the the . . the the the . . the the the . . . the . the the . . . . . the . the . . the

step:  4900 | time: 13.55 | loss: 1.047 
| tgt:  they were held under section <NUM> of the terrorism act <NUM> on suspicion of involvement in the commission , preparation or instigation of acts of terrorism . <SEP> badat was arrested under section <NUM> of the terrorism act “ on suspicion of involvement in the commission , preparation or instigation of acts of terrorism , ” scotland yard confirmed . <SEP> 
| prd:  the were not today section <NUM> of the terrorism act <NUM> for suspicion of terrorism in the commission , preparation or instigation of terrorism of terrorism 's <SEP> badat was arrested under section <NUM> of the terrorism act “ on suspicion of acts in the commission , preparation or instigation of acts of terrorism , ” scotland yard confirmed . <SEP>

step:  5000 | time: 13.63 | loss: 0.937 
| tgt:  michael mitchell , the chief public defender in baton rouge who is representing lee , did not answer his phone wednesday afternoon . <SEP> michael mitchell , the chief public defender in baton rouge who is representing lee , was not available for comment . <SEP> 
| prd:  the mitchell , the chief justice defender in baton rouge who is representing lee , did not attempt his lawyer the afternoon . <SEP> michael mitchell , the chief justice defender in baton rouge who is representing lee , was not available for comment . <SEP>

step:  9800 | time: 13.45 | loss: 0.211 
| tgt:  in <NUM> , president bush named kathy gregg to the student loan marketing association board of directors . <SEP> in <NUM> , president bush named her to the student loan marketing association , the largest u.s. lender for students . <SEP> 
| prd:  the the , for bush named kathy gregg to the student loan marketing association board of directors . <SEP> in <NUM> , president bush named her to the student loan marketing association , the largest president lender for students . <SEP>

step:  9900 | time: 13.28 | loss: 0.210 
| tgt:  the product also features an updated release of the apache web server , as well as apache tomcat and apache axis . <SEP> panther server also includes an updated release of apache , along with apache tomcat and apache axis for creating powerful web services . <SEP> 
| prd:  <quote> first also features an updated release of the apache web server , as well as apache tomcat and apache axis . <SEP> panther server also includes an updated release of apache , along with apache tomcat and apache axis for creating powerful web services . <SEP>

total time: 22 min 28 seconds