Transformer-XL Explained (Paper + PyTorch Source Code)

Preface

In NLP, there are currently two state-of-the-art architectures for language modeling: the RNN and the Transformer. An RNN learns the relationships between input words or characters one step at a time in sequence order, while the Transformer receives a whole segment at once and uses self-attention to learn the dependencies among its elements. Both architectures have achieved impressive results, but both are limited in how well they capture long-term dependencies.

To address this, CMU and Google Brain released a paper in January 2019, "Transformer-XL: Attentive Language Models beyond a Fixed-Length Context", which combines the strengths of RNN-style sequence modeling and the Transformer's self-attention: it applies the Transformer's attention modules to each segment of the input and uses a recurrence mechanism to learn the dependencies between consecutive segments. Transformer-XL achieves SoTA results on several language modeling datasets (such as word-level WikiText-103 and character-level enwik8 and text8), and it is also much faster at inference, 300~1800 times faster than the previous best Transformer-based language models. The authors also released the accompanying source code (both TensorFlow and PyTorch), pretrained models, and the hyperparameters used on each dataset, which is very generous and a real gift for copy-paste practitioners like us!

This post walks through the model's principles side by side with their PyTorch implementation. My understanding is limited, so if anything is not covered in enough detail, please follow the links at the end of the post for further reading, and corrections are welcome.

I. A quick review of the Transformer

In NLP, the most common model for language modeling used to be the RNN, which can capture dependencies between words. But RNNs are hard to train because of vanishing and exploding gradients, and LSTM cells plus gradient clipping are not enough to fully solve this. RNNs are also slow to compute, and their ability to learn long-term dependencies is limited (the paper notes that LSTM language models only use about 200 context words on average).

In June 2017, Google Brain proposed the Transformer architecture in "Attention Is All You Need", which discards the RNN's recurrence entirely and processes the whole sequence globally with self-attention. It receives an entire sequence and uses three trainable weight matrices, Query, Key, and Value, to learn the dependencies among all parts of the input in one shot. The Transformer network is a stack of layers, each consisting of multi-head attention and a feed-forward network. Because attention is computed globally, the crucial position information of the sequence would otherwise be lost, so the Transformer adds a Positional Encoding to its input: sinusoidal, non-learned position vectors that help the network recover where each element sits. The architecture is illustrated below:
[Figure: the Transformer architecture]
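
For reference, the sinusoidal encoding from "Attention Is All You Need" assigns position $pos$ and dimension index $i$:

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{model}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{model}}}\right)$$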
For a more in-depth discussion of the Transformer, see my earlier post:

Transformer (paper + PyTorch source code walkthrough)

II. Vanilla Transformer

Why bring up this model? Because Transformer-XL is an improvement built on top of it.

Al-Rfou et al. proposed a method for training language models on top of the Transformer ( https://arxiv.org/abs/1808.04444 ) that predicts the next character in a segment from the preceding ones. For example, it uses $x_1, x_2, ..., x_{n-1}$ to predict the character $x_n$, while the sequence after $x_n$ is masked out. The paper uses a 64-layer model limited to relatively short inputs of 512 characters, so it splits the input into segments and learns from each segment separately, as shown below. At evaluation time, to handle longer inputs, the model shifts the input one character to the right at each step, producing a single character prediction per step.
[Figure: vanilla Transformer illustration]
This model outperforms RNN models on the usual datasets such as enwik8 and text8, but it still has the following three shortcomings:

a. Limited context length: the maximum dependency distance between characters is bounded by the input length, so the model cannot see words that appeared a few sentences earlier.
b. Context fragmentation: text longer than 512 characters is split into segments that are each trained from scratch. There is no contextual dependency between segments, which makes training inefficient and hurts model performance.
c. Slow inference: at evaluation time, the context is rebuilt and recomputed from scratch for every single next-character prediction, which is very slow (see the sketch below).
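
To make point (c) concrete, here is a schematic sketch of vanilla Transformer evaluation (not the paper's code; `model` and `corpus` are hypothetical names): the 512-character window slides right by one character, and the entire forward pass is redone for every prediction.

    seg_len = 512
    for t in range(seg_len, len(corpus)):
        context = corpus[t - seg_len:t]       # rebuild the fixed-length context at every step
        next_char_logits = model(context)     # a full forward pass yields a single prediction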

III. Transformer-XL

Transformer-XL adds two innovations to the vanilla Transformer, a recurrence mechanism (Recurrence Mechanism) and relative positional encoding (Relative Positional Encoding), to overcome its shortcomings. Compared with the vanilla Transformer, another advantage of Transformer-XL is that it can be used for both word-level and character-level language modeling.

1. Recurrence mechanism

Like the vanilla Transformer, Transformer-XL still models the input segment by segment, but the essential difference is the recurrence it introduces between segments: when modeling the current segment, the model can use information from previous segments, which enables long-term dependencies. As shown below:
[Figure: Transformer-XL illustration]
During training, when processing a later segment, each hidden layer receives two inputs:

1. The output of the previous hidden layer of the same segment, as in the vanilla Transformer (the grey lines in the figure above).
2. The output of the hidden layers of the previous segment (the green lines in the figure above), which lets the model build long-term dependencies.

These two inputs are concatenated and then used to compute the Key and Value matrices of the current segment. For layer n of segment τ+1 the computation is:
[Figure: computation with the recurrence mechanism]
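
Written out (reconstructed from the paper), the formulas in the figure above are:

$$\tilde{h}_{\tau+1}^{n-1} = \left[\mathrm{SG}(h_{\tau}^{n-1}) \circ h_{\tau+1}^{n-1}\right]$$

$$q_{\tau+1}^{n},\ k_{\tau+1}^{n},\ v_{\tau+1}^{n} = h_{\tau+1}^{n-1} W_{q}^{\top},\ \tilde{h}_{\tau+1}^{n-1} W_{k}^{\top},\ \tilde{h}_{\tau+1}^{n-1} W_{v}^{\top}$$

$$h_{\tau+1}^{n} = \text{Transformer-Layer}\left(q_{\tau+1}^{n},\ k_{\tau+1}^{n},\ v_{\tau+1}^{n}\right)$$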
Here $\tau$ indexes the segment, $n$ indexes the layer, and $h$ denotes hidden-layer outputs. $SG(\cdot)$ means stop-gradient, $[h_u \circ h_v]$ denotes concatenating two hidden states along the length dimension, and $W_\cdot$ are model parameters. At first glance this looks just like the standard Transformer computation; the only key difference is in the Key and Value matrices, i.e. $k_{\tau+1}^{n}$ and $v_{\tau+1}^{n}$: they are computed from the extended context hidden state $\tilde{h}_{\tau+1}^{n-1}$, where $h_{\tau}^{n-1}$ is the cached hidden state of the previous segment.

In principle, as long as GPU memory allows, the method can use information from as many previous segments as desired, and at test time it likewise gains access to longer dependencies.

At evaluation time it is also faster than the vanilla Transformer. The vanilla Transformer can only advance one step at a time and must rebuild the segment and recompute everything from scratch, whereas Transformer-XL advances a whole segment at a time and reuses the cached previous segments to predict the current segment's outputs.

2. Relative positional encoding

An important ingredient of the Transformer is that it accounts for the positions in the sequence. With segmented inputs, simply reusing the Transformer's positional encoding within each segment, so that the same position in different segments gets the same encoding, leads to a problem. For example, the first positions of segment $i-2$ and segment $i-1$ would get identical positional encodings, yet their importance for modeling segment $i$ is clearly different (the first position of segment $i-2$ presumably matters less). We therefore need to distinguish such positions.

To solve this, the paper proposes a new positional encoding scheme that encodes the relative distance between words instead of the absolute positions used in the Transformer. In the Transformer, the attention score between the query $q_i^{\top}$ and the key $k_j$ in the first layer is computed as:
[Figure: decomposition of the Transformer attention score]
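
Reconstructed from the paper, the expansion shown in the figure is:

$$A_{i,j}^{\mathrm{abs}} = \underbrace{E_{x_i}^{\top} W_q^{\top} W_k E_{x_j}}_{(a)} + \underbrace{E_{x_i}^{\top} W_q^{\top} W_k U_j}_{(b)} + \underbrace{U_i^{\top} W_q^{\top} W_k E_{x_j}}_{(c)} + \underbrace{U_i^{\top} W_q^{\top} W_k U_j}_{(d)}$$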
Here $E_{x_i}$ is the embedding of word $i$, $E_{x_j}$ is the embedding of word $j$, and $U_i$, $U_j$ are the position vectors. The expression is simply the expansion of $(W_q(E_{x_i}+U_i))^{\top} \cdot (W_k(E_{x_j}+U_j))$, which is the standard Transformer form.

Transformer-XL changes this attention computation to a relative-position form, and does so not only in the first layer but in every layer:
[Figure: decomposition of the Transformer-XL attention score]
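
Again reconstructed from the paper, the relative-position form is:

$$A_{i,j}^{\mathrm{rel}} = \underbrace{E_{x_i}^{\top} W_q^{\top} W_{k,E}\, E_{x_j}}_{(a)} + \underbrace{E_{x_i}^{\top} W_q^{\top} W_{k,R}\, R_{i-j}}_{(b)} + \underbrace{u^{\top} W_{k,E}\, E_{x_j}}_{(c)} + \underbrace{v^{\top} W_{k,R}\, R_{i-j}}_{(d)}$$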
Comparing the two, there are three main changes:

1. In terms (b) and (d), every absolute position vector $U_j$ is replaced by a relative position vector $R_{i-j}$. As in the Transformer, this is a fixed (sinusoidal) encoding and is not learned.
2. In term (c), the query's $U_i^{\top} W_q^{\top}$ is replaced by a learnable parameter vector $u$: once positions are relative, the query's absolute position $i$ no longer matters, so the same vector can be shared across all $i$. Likewise, in term (d), the query's $U_i^{\top} W_q^{\top}$ is replaced by another learnable parameter vector $v$.
3. The key projection matrix $W_k$ is split into $W_{k,E}$ and $W_{k,R}$, producing the content-based key vectors and the location-based key vectors respectively.

Read from another angle, the attention computation splits into four parts:

a. Content-based addressing: the raw score without any positional encoding.
b. Content-dependent positional bias: a positional offset relative to the current content.
c. A global content bias, measuring the importance of the key.
d. A global positional bias, adjusting importance according to the distance between query and key.

3. Overall computation

Combining the two innovations above, the overall computation of Transformer-XL, for an N-layer model with a single attention head, is as follows:
[Figure: Transformer-XL overall computation]
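
For reference, the per-layer computation shown in the figure (reconstructed from the paper, for n = 1, ..., N) is:

$$\tilde{h}_{\tau}^{n-1} = \left[\mathrm{SG}(m_{\tau}^{n-1}) \circ h_{\tau}^{n-1}\right]$$

$$q_{\tau}^{n},\ k_{\tau}^{n},\ v_{\tau}^{n} = h_{\tau}^{n-1} {W_{q}^{n}}^{\top},\ \tilde{h}_{\tau}^{n-1} {W_{k,E}^{n}}^{\top},\ \tilde{h}_{\tau}^{n-1} {W_{v}^{n}}^{\top}$$

$$A_{\tau,i,j}^{n} = {q_{\tau,i}^{n}}^{\top} k_{\tau,j}^{n} + {q_{\tau,i}^{n}}^{\top} W_{k,R}^{n} R_{i-j} + u^{\top} k_{\tau,j}^{n} + v^{\top} W_{k,R}^{n} R_{i-j}$$

$$a_{\tau}^{n} = \text{Masked-Softmax}(A_{\tau}^{n})\, v_{\tau}^{n}$$

$$o_{\tau}^{n} = \text{LayerNorm}\left(\text{Linear}(a_{\tau}^{n}) + h_{\tau}^{n-1}\right)$$

$$h_{\tau}^{n} = \text{Positionwise-Feed-Forward}(o_{\tau}^{n})$$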
Here $\tau$ indexes the segment, $n$ indexes the layer, and $h_{\tau}^{0} := E_{s_{\tau}}$ is defined as the word embedding sequence of segment $\tau$. One thing worth noting: computing the matrix $A$ requires $W_{k,R}^{n} R_{i-j}$ for every $(i, j)$. Done naively from the formula this takes $O(\text{length}^2)$ time, but $i-j$ only ranges over 0 ~ length, so these length vectors can be precomputed once and simply looked up when building $A$.

Concretely, let $M$ and $L$ be the lengths of the memory and of the current segment respectively; then $i-j$ ranges over 0 ~ $M+L-1$. Each row of the matrix $Q$ below corresponds to one possible value of $i-j$ in $W_{k,R} R_{i-j}$, i.e. $Q_k = W_{k,R}\, R_{M+L-1-k}$:
[Figure: the Q matrix]
For term (b) above, i.e. $q_i^{\top} W_{k,R} R_{i-j}$, the matrix of all possible values is the matrix $B$, of shape $L \times (M+L)$; this is the attention contribution of term (b) that we ultimately need:
[Figure: the B matrix]
We further define the matrix $\tilde{B}$ as follows:
[Figure: the transformed matrix B̃]
As can be seen, each row of the required matrix $B$ is just a left shift of the corresponding row of $\tilde{B}$, so we can simply compute $\tilde{B}$ with one matrix multiplication. Let $R_{i-j}$ have dimension $d_R$, $q_i$ have dimension $d_q$, and $W_{k,R}$ have shape $d_q \times d_R$. Computing $B$ directly costs $2 \cdot d_q \cdot d_R \cdot L \cdot (M+L)$, whereas computing $\tilde{B}$ costs $L \cdot d_q \cdot (M+L) + d_q \cdot d_R \cdot (M+L)$; the two are clearly not of the same order (the latter is much faster).

Similarly, for term (d) we can define the required $L \times (M+L)$ matrix $D$ over all values of $i-j$:
[Figure: the D matrix]
which can be obtained by shifting the following $\tilde{d}$:
[Figure: the transformed d̃]
The matrix $Q$ here has already been computed above, so this step saves computation as well.
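
As a toy illustration (not the repository's code; the sizes and names below are made up), $\tilde{B}$ can be obtained with a single matmul against the precomputed $Q$, after which row $i$ of $B$ is recovered by shifting row $i$ of $\tilde{B}$ left by $L-1-i$:

    import torch

    L_, M_, d = 3, 2, 4                      # toy sizes: segment length, memory length, head dim
    q = torch.randn(L_, d)                   # the already-projected queries q_i
    Q = torch.randn(M_ + L_, d)              # Q_k = W_{k,R} R_{M+L-1-k}, precomputed once

    B_tilde = q @ Q.t()                      # one matmul: B_tilde[i, k] = q_i^T W_{k,R} R_{M+L-1-k}
    B = torch.zeros_like(B_tilde)
    for i in range(L_):
        shift = L_ - 1 - i                   # row i of B is row i of B_tilde shifted left by L-1-i
        B[i, :M_ + L_ - shift] = B_tilde[i, shift:]
    # entries with j > M + i (future positions) stay zero here and are masked in attention anyway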

IV. PyTorch Implementation

Here I focus on the core model and walk through the key implementation details; readers who want the complete code can follow the repository link at the end of this post.

1. First, the PositionalEmbedding (relative positional embedding) module.
    class PositionalEmbedding(nn.Module):
        def __init__(self, demb):
            super(PositionalEmbedding, self).__init__()
            self.demb = demb
            inv_freq = 1 / (10000 ** (torch.arange(0.0, demb, 2.0) / demb))
            # register as a buffer so forward() can access self.inv_freq (fixed, not a learned parameter)
            self.register_buffer('inv_freq', inv_freq)

        def forward(self, pos_seq):
            sinusoid_inp = torch.ger(pos_seq, self.inv_freq)
            pos_emb = torch.cat([sinusoid_inp.sin(), sinusoid_inp.cos()], dim=-1)
            return pos_emb[:, None, :]
    

Here demb is the dimension of the relative positional embedding, and pos_seq is the position sequence, which in the code is torch.arange(klen-1, -1, -1.0), with klen being mlen+qlen. From the names and the earlier discussion, mlen is the memory length and qlen is the query length, and together they make up the key length. What is returned is exactly the matrix of $R$ vectors, which, as you can see, requires no learning.
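
A minimal usage sketch (the values of demb, mlen and qlen are made up for illustration):

    import torch

    pos_emb_layer = PositionalEmbedding(demb=8)
    mlen, qlen = 4, 3
    klen = mlen + qlen
    pos_seq = torch.arange(klen - 1, -1, -1.0)   # relative positions klen-1, ..., 1, 0
    r = pos_emb_layer(pos_seq)
    print(r.shape)                               # torch.Size([7, 1, 8]), i.e. (klen, 1, demb)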

2. Next, the MultiHeadAttention part. For ease of exposition, the MultiHeadAttn here is a merge of RelMultiHeadAttn and RelPartialLearnableMultiHeadAttn from the source code, i.e. the computation of one self-attention layer.
    
    class MultiHeadAttn(nn.Module):
        def __init__(self, n_head, d_model, d_head, dropout, dropatt=0,
                     tgt_len=None, ext_len=None, mem_len=None, pre_lnorm=False):
            super(MultiHeadAttn, self).__init__()

            self.n_head = n_head
            self.d_model = d_model
            self.d_head = d_head
            self.dropout = dropout

            self.qkv_net = nn.Linear(d_model, 3 * n_head * d_head, bias=False)

            self.drop = nn.Dropout(dropout)
            self.dropatt = nn.Dropout(dropatt)
            self.o_net = nn.Linear(n_head * d_head, d_model, bias=False)

            self.layer_norm = nn.LayerNorm(d_model)

            self.scale = 1 / (d_head ** 0.5)

            self.pre_lnorm = pre_lnorm

            self.r_net = nn.Linear(self.d_model, self.n_head * self.d_head, bias=False)

        def _rel_shift(self, x, zero_triu=False):
            zero_pad = torch.zeros((x.size(0), 1, *x.size()[2:]),
                                   device=x.device, dtype=x.dtype)
            x_padded = torch.cat([zero_pad, x], dim=1)

            x_padded = x_padded.view(x.size(1) + 1, x.size(0), *x.size()[2:])

            x = x_padded[1:].view_as(x)

            if zero_triu:
                ones = torch.ones((x.size(0), x.size(1)))
                x = x * torch.tril(ones, x.size(1) - x.size(0))[:, :, None, None]

            return x

        def forward(self, w, r, r_w_bias, r_r_bias, attn_mask=None, mems=None):
            qlen, rlen, bsz = w.size(0), r.size(0), w.size(1)

            if mems is not None:
                cat = torch.cat([mems, w], 0)
                if self.pre_lnorm:
                    w_heads = self.qkv_net(self.layer_norm(cat))
                else:
                    w_heads = self.qkv_net(cat)
                r_head_k = self.r_net(r)

                w_head_q, w_head_k, w_head_v = torch.chunk(w_heads, 3, dim=-1)
                w_head_q = w_head_q[-qlen:]
            else:
                if self.pre_lnorm:
                    w_heads = self.qkv_net(self.layer_norm(w))
                else:
                    w_heads = self.qkv_net(w)
                r_head_k = self.r_net(r)

                w_head_q, w_head_k, w_head_v = torch.chunk(w_heads, 3, dim=-1)

            klen = w_head_k.size(0)

            w_head_q = w_head_q.view(qlen, bsz, self.n_head, self.d_head)           # qlen x bsz x n_head x d_head
            w_head_k = w_head_k.view(klen, bsz, self.n_head, self.d_head)           # klen x bsz x n_head x d_head
            w_head_v = w_head_v.view(klen, bsz, self.n_head, self.d_head)           # klen x bsz x n_head x d_head

            r_head_k = r_head_k.view(rlen, self.n_head, self.d_head)                # rlen x n_head x d_head

            #### compute attention score
            rw_head_q = w_head_q + r_w_bias                                         # qlen x bsz x n_head x d_head
            AC = torch.einsum('ibnd,jbnd->ijbn', (rw_head_q, w_head_k))             # qlen x klen x bsz x n_head

            rr_head_q = w_head_q + r_r_bias
            BD = torch.einsum('ibnd,jnd->ijbn', (rr_head_q, r_head_k))              # qlen x klen x bsz x n_head
            BD = self._rel_shift(BD)

            # [qlen x klen x bsz x n_head]
            attn_score = AC + BD
            attn_score.mul_(self.scale)

            #### compute attention probability
            if attn_mask is not None and attn_mask.any().item():
                if attn_mask.dim() == 2:
                    attn_score = attn_score.float().masked_fill(
                        attn_mask[None, :, :, None], -float('inf')).type_as(attn_score)
                elif attn_mask.dim() == 3:
                    attn_score = attn_score.float().masked_fill(
                        attn_mask[:, :, :, None], -float('inf')).type_as(attn_score)

            # [qlen x klen x bsz x n_head]
            attn_prob = F.softmax(attn_score, dim=1)
            attn_prob = self.dropatt(attn_prob)

            #### compute attention vector
            attn_vec = torch.einsum('ijbn,jbnd->ibnd', (attn_prob, w_head_v))

            # [qlen x bsz x n_head x d_head]
            attn_vec = attn_vec.contiguous().view(
                attn_vec.size(0), attn_vec.size(1), self.n_head * self.d_head)

            ##### linear projection
            attn_out = self.o_net(attn_vec)
            attn_out = self.drop(attn_out)

            if self.pre_lnorm:
                ##### residual connection
                output = w + attn_out
            else:
                ##### residual connection + layer normalization
                output = self.layer_norm(w + attn_out)

            return output

Here n_head, d_model, and d_head are the number of attention heads, the model's hidden dimension, and the hidden dimension per head, respectively. qkv_net is the parameter matrix used to compute the query, key and value projections $W_{q}, W_{k,E}, W_{v}$, the same as in the standard Transformer; o_net is the parameter matrix that maps the concatenated heads back to the model dimension; layer_norm is the LayerNormalization layer; and r_net is the parameter matrix $W_{k,R}$ used to project the relative position embeddings.

In the forward pass, w and r are the previous layer's output and the RelativePositionEmbedding, and r_w_bias and r_r_bias are the $u$ vector and the $v$ vector. AC corresponds to terms (a) and (c) of the earlier formula, and BD to terms (b) and (d). Following the fast computation of the relative-position terms described above, BD has to be shifted, which is what _rel_shift does. Working through it by hand, I found that BD after this function is not exactly the desired matrix $B$: it still contains elements above the (M+1)-th diagonal of $B$ (taking the main diagonal as 0, with positive values meaning an offset toward the upper right), but these are masked out right afterwards. The attn_mask here is torch.triu(word_emb.new_ones(qlen, klen), diagonal=1+mlen).byte()[:,:,None]. What follows is the standard add & norm step of the Transformer, so I will not repeat it.
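
A toy run of _rel_shift (sizes made up, batch and head dimensions kept at 1 for readability, and assuming the MultiHeadAttn class above) shows the per-row left shift as well as the wrapped-around values that the mask removes afterwards:

    import torch

    qlen, klen = 3, 5                                    # klen = mlen + qlen, with mlen = 2
    attn = MultiHeadAttn(n_head=1, d_model=4, d_head=4, dropout=0.0)
    x = torch.arange(1.0, qlen * klen + 1).view(qlen, klen, 1, 1)
    print(attn._rel_shift(x)[:, :, 0, 0])
    # tensor([[ 3.,  4.,  5.,  0.,  6.],    row 0: shifted left by 2; the tail is leftover garbage
    #         [ 7.,  8.,  9., 10.,  0.],    row 1: shifted left by 1
    #         [11., 12., 13., 14., 15.]])   row 2: not shifted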

3. Finally, the memory update:
    def _update_mems(self, hids, mems, qlen, mlen):
        # does not deal with None
        if mems is None: return None

        # mems is not None
        assert len(hids) == len(mems), 'len(hids) != len(mems)'

        # There are `mlen + qlen` steps that can be cached into mems
        # For the next step, the last `ext_len` of the `qlen` tokens
        # will be used as the extended context. Hence, we only cache
        # the tokens from `mlen + qlen - self.ext_len - self.mem_len`
        # to `mlen + qlen - self.ext_len`.
        with torch.no_grad():
            new_mems = []
            end_idx = mlen + max(0, qlen - 0 - self.ext_len)
            beg_idx = max(0, end_idx - self.mem_len)
            for i in range(len(hids)):

                cat = torch.cat([mems[i], hids[i]], dim=0)
                new_mems.append(cat[beg_idx:end_idx].detach())

        return new_mems
    

Here hids holds the outputs of every layer for the current segment, mems is the memory each layer of the current segment depends on, qlen is the length of the current sequence, and mlen is the length of the memory the current segment uses.
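
As a quick sanity check of the indexing (toy numbers, with ext_len assumed to be 0):

    mlen, qlen, ext_len, mem_len = 4, 3, 0, 4
    end_idx = mlen + max(0, qlen - 0 - ext_len)   # 7: cache up to the last of the mlen + qlen steps
    beg_idx = max(0, end_idx - mem_len)           # 3: keep only the most recent mem_len steps
    # so new_mems[i] = torch.cat([mems[i], hids[i]])[3:7].detach()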

Judging from the code, the earlier recurrence diagram seems slightly off: during training, every position in a segment from the second one onward should presumably attend back to the same earliest memory position that the first position reaches, since all positions use a memory of the same length.

V. Experimental Results

1. Language modeling metrics

On the language modeling metrics we care about most, the paper evaluates the model on word-level and character-level datasets and compares it against both RNNs and the (vanilla) Transformer. Transformer-XL achieves SoTA on all of them: on the large word-level WikiText-103 it lowers perplexity from 20.5 to 18.3; on enwik8 a 12-layer Transformer-XL reaches 1.06 bpc, matching the bpc of the Al-Rfou model ( https://arxiv.org/abs/1808.04444 ) that uses 6 times as many parameters, while a 24-layer Transformer-XL reaches 0.99 bpc; it also sets SoTA on One Billion Word (short sentences only) and on Penn Treebank (small, only about 1M tokens), improving perplexity from 23.7 to 21.8 and from 55.3 to 54.5 respectively. All of this shows Transformer-XL's strong competitiveness across datasets.

2. Benefits of the two innovations

The figure below compares perplexity for different context lengths (i.e. memory lengths), with and without the recurrence mechanism and with and without the new positional encoding. Transformer-XL with both recurrence and relative positional encoding clearly outperforms the other variants and makes effective use of long-term dependencies: it captures dependencies about 80% longer than RNNs and about 450% longer than the vanilla Transformer.
[Figure: Transformer-XL ablation comparison]

3. Evaluation speed

Transformer-XL is also clearly faster at inference than the vanilla Transformer, especially for long contexts: with a context length of 800 it is 363 times faster, and with a context length of 3800 it is 1874 times faster!

VI. Summary

1. Model highlights

Two innovations on top of the vanilla Transformer of Al-Rfou et al.:

1. A recurrence mechanism (Recurrence Mechanism)
2. Relative positional encoding (Relative Positional Encoding)

2. Strengths

1. Achieves state-of-the-art language modeling results on several datasets (large/small, character-level/word-level, etc.).
2. Combines two important ideas of deep learning, recurrence and attention, allowing the model to learn long-term dependencies; this may extend to other deep learning domains that need the same capability, such as audio analysis (e.g. speech data at 16k samples per second).
3. Very fast at inference, 300~1800 times faster than the previous best Transformer-based language modeling approach.
4. Comes with thorough source code in both TensorFlow and PyTorch, TensorFlow pretrained models, and detailed hyperparameter settings for every dataset.

3. Limitations

1. It has not yet been applied to concrete NLP tasks such as sentiment analysis or QA.
2. No comparison is given with other Transformer-based models such as BERT, so it is unclear what advantages it has over them.
3. The GitHub repository mentions that the current SoTA results were obtained by training on large TPU clusters, so those of us with modest machines can only play with the base configurations.

Links

Paper: https://arxiv.org/pdf/1901.02860.pdf
Code: https://github.com/kimiyoung/transformer-xl
Reference: https://www.lyrn.ai/2019/01/16/transformer-xl-sota-language-model

            <dt>评论</dt>
            <dd><span class="count">25</span></dd>
        </dl>
    </div>
    <div class="grade-box clearfix">
        <dl>
            <dt>等级:</dt>
            <dd>
                <a href="https://blog.csdn.net/home/help.html#level" title="2级,点击查看等级说明" target="_blank">
                    <svg class="icon icon-level" aria-hidden="true">
                        <use xlink:href="#csdnc-bloglevel-2"></use>
                    </svg>
                </a>
            </dd>
        </dl>
        <dl>
            <dt>访问:</dt>
            <dd title="8976">
                8976            </dd>
        </dl>
        <dl>
            <dt>积分:</dt>
            <dd title="374">
                374            </dd>
        </dl>
        <dl title="263545">
            <dt>排名:</dt>
            <dd>26万+</dd>
        </dl>
    </div>
        <div class="badge-box d-flex">
        <span>勋章:</span>
        <div class="badge d-flex">
                                                        <div class="icon-badge" title="持之以恒">
                       <div class="mouse-box">
                          <img src="https://g.csdnimg.cn/static/user-medal/chizhiyiheng.svg" alt="">
                          <div class="icon-arrow"></div>
                       </div>
                       <div class="grade-detail-box">
                           <div class="pos-box">
                               <div class="left-box d-flex justify-content-center align-items-center flex-column">
                                    <img src="https://g.csdnimg.cn/static/user-medal/chizhiyiheng.svg" alt="">
                                   <p>持之以恒</p>
                               </div>
                               <div class="right-box">
                                   授予每个自然月内发布4篇或4篇以上原创或翻译IT博文的用户。不积跬步无以至千里,不积小流无以成江海,程序人生的精彩需要坚持不懈地积累!                               </div>
                           </div>
                       </div>
                   </div>
                                                             <div class="icon-badge" title="勤写标兵Lv3">
                       <div class="mouse-box">
                          <img src="https://g.csdnimg.cn/static/user-medal/qinxiebiaobing_l3_t1.svg" alt="">
                          <div class="icon-arrow"></div>
                       </div>
                       <div class="grade-detail-box">
                           <div class="pos-box">
                               <div class="left-box d-flex justify-content-center align-items-center flex-column">
                                    <img src="https://g.csdnimg.cn/static/user-medal/qinxiebiaobing_l3_t1.svg" alt="">
                                   <p>勤写标兵Lv3</p>
                               </div>
                               <div class="right-box">
                                   授予每个自然周发布7篇到8篇原创IT博文的用户。本勋章将于次周上午根据用户上周的博文发布情况由系统自动颁发。                               </div>
                           </div>
                       </div>
                   </div>
                                             </div>
        <script>
            (function ($) {
                setTimeout(function(){
                    $('div.icon-badge.show-moment').removeClass('show-moment');
                }, 5000);
            })(window.jQuery)
        </script>
    </div>
    </div>
    
    • 1
      点赞
    • 2
      收藏
      觉得还不错? 一键收藏
    • 0
      评论

    “相关推荐”对你有帮助么?

    • 非常没帮助
    • 没帮助
    • 一般
    • 有帮助
    • 非常有帮助
    提交
    评论
    添加红包

    请填写红包祝福语或标题

    红包个数最小为10个

    红包金额最低5元

    当前余额3.43前往充值 >
    需支付:10.00
    成就一亿技术人!
    领取后你会自动成为博主和红包主的粉丝 规则
    hope_wisdom
    发出的红包
    实付
    使用余额支付
    点击重新获取
    扫码支付
    钱包余额 0

    抵扣说明:

    1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
    2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

    余额充值