Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation

Abstract

Supervised learning is limited by the amount of available data, and common data augmentation methods fail to generate samples that are both diverse and faithful. This paper proposes a new data augmentation paradigm, Continuous Semantic Augmentation (CSANMT), which augments each training instance with an adjacency semantic region that covers enough literal variants of the same meaning.

The method is evaluated on WMT14 English→{German, French}, NIST Chinese→English, and several low-resource IWSLT translation tasks, and achieves substantial improvements over existing data augmentation methods.

Introduction

  1. Augmenting training samples in discrete space lacks diversity. Take back-translation as an example: the translation model usually decodes with beam search or greedy search, both of which are approximate algorithms that maximize the posterior probability of the output, so in ambiguous cases they favor the most frequent candidate. Edunov et al. (2018) proposed sampling from the output distribution to alleviate this, but the resulting synthetic data is typically of low quality. Other methods likewise fail to generate enough variants that cover the same meaning.
  2. Data augmentation in discrete space struggles to preserve the original meaning. The common discrete operations in NLP are insertion, deletion, reordering, and replacement, all of which can easily cause significant semantic shifts. To mitigate this, some work interpolates word embeddings: a language model predicts a distribution over the vocabulary for the current position, and those probabilities are used as weights to mix the candidate embeddings into a "soft" word that replaces the original one (see the sketch after this list). These techniques are effective, but they remain word-level operations and cannot transform an entire sentence, e.g., rephrasing it into a different sentence with the same meaning.
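
As a rough illustration of that word-level mixing, here is a minimal PyTorch sketch; the function name and tensor shapes are my own assumptions, not code from any of the cited papers:

```python
import torch

def soft_word_embedding(lm_logits_at_t: torch.Tensor, embedding_matrix: torch.Tensor) -> torch.Tensor:
    """Replace the word at position t by a probability-weighted mix of embeddings.

    lm_logits_at_t:   (vocab_size,) language-model logits for position t
    embedding_matrix: (vocab_size, d_model) word embedding table
    """
    probs = torch.softmax(lm_logits_at_t, dim=-1)  # distribution over the vocabulary
    return probs @ embedding_matrix                # (d_model,) "soft" word embedding
```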

CSANMT works as follows:

  1. Train a semantic encoder via tangential contrastive learning; for each training example it supports a neighborhood (adjacency region) in a continuous space, with the tangent points of that region taken as the critical states of semantic equivalence. Vectors in this continuous space can easily cover enough variants of the same meaning.
  2. Introduce a Mixed Gaussian Recursive Chain (MGRC) algorithm to sample a cluster of vectors from the adjacency semantic region.
  3. Incorporate each sampled vector into the decoder through a model-architecture-agnostic broadcasting integration network. Converting discrete sentences into a continuous space thus effectively enlarges the training data space and improves the generalization ability of NMT models.

Framework

Problem Definition

If you are already familiar with NLP, and MT in particular, you can skip this problem definition.

The goal is to maximize the log-likelihood, where $\mathcal{C}=\{(x^{(n)},y^{(n)})\}_{n=1}^N$ denotes a training set of size $N$ and $x$, $y$ are source- and target-language sentences. Training maximizes the following expectation:
$$\mathcal{J}_{mle}(\Theta)=\mathbb{E}_{(x,y)\sim\mathcal{C}}\left[\log P(y\mid x;\Theta)\right]$$
where $\Theta$ are the model parameters. The loss of a sentence is the sum of the losses over its time steps:
$$\log P(y\mid x;\Theta)=\sum_{t=1}^{T'}\log P(y_t\mid y_{<t},x;\Theta)$$
Training uses teacher forcing: even if the decoder produces a wrong token, the gold tokens for steps $[1,t-1]$ are fed in instead. In the formula above, $y_t$ is the gold token at step $t$ in the training set. At each step the decoder outputs a distribution over the whole vocabulary, and we read off the probability at position $y_t$; the larger that probability, the larger its log, so the sum approaches zero (since $P\in(0,1]$, the maximum of $\log P$ is 0). The overall objective is therefore to make the probability of the correct token as large as possible at every position of every sentence.

We usually talk about minimizing a loss function instead; that is easy here: just negate the objective above, which gives the negative log-likelihood.
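
A minimal PyTorch sketch of this teacher-forced negative log-likelihood for a single sentence; the function name and the padding handling are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def sentence_nll(logits: torch.Tensor, gold_ids: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    """Teacher-forced negative log-likelihood -log P(y|x) of one target sentence.

    logits:   (T, vocab_size) decoder outputs; step t is conditioned on the gold
              prefix y_{<t} (teacher forcing) and on the source sentence x.
    gold_ids: (T,) reference token ids y_t from the training set.
    """
    log_probs = F.log_softmax(logits, dim=-1)                       # (T, V)
    token_ll = log_probs[torch.arange(gold_ids.size(0)), gold_ids]  # log P(y_t | y_<t, x)
    mask = (gold_ids != pad_id).float()                             # ignore padding positions (assumed pad id)
    return -(token_ll * mask).sum()
```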

Continuous Semantic Augmentation

[Figure: model architecture with the additional semantic encoder (original image unavailable)]

Compared with the standard Transformer, this architecture introduces a semantic encoder that maps the source-side input $x$ and the target-side input $y$ of the training set into real-valued vectors $r_x$ and $r_y$; this semantic encoder is presumably multilingual. The authors use it to build a semantic space shared by the source and target languages, mapping discrete sentences into continuous vectors. The next claim is a bit odd: **the authors state that for any sentence pair in the training set, $r_x=r_y$. As I understand it, this should be a training objective; it is hard to guarantee that two different sentences map to exactly the same vector.** The region formed by $r_x$ and $r_y$ in the figure is the adjacency region (i.e., their intersection) $v(r_x,r_y)$, which covers enough semantic variants of the sentence pair $(x,y)$ in this space.

The authors sample a set of vectors $\mathcal{R}=\{\hat{r}^{(1)},\hat{r}^{(2)},\hat{r}^{(3)},\dots,\hat{r}^{(K)}\}$ from the adjacency semantic region, where $K$ is a hyperparameter controlling the number of samples, and then integrate each sample into the generation process through a broadcasting integration network:
$$\hat{o}_t=W_1\hat{r}^{(k)}+W_2 o_t+b$$
where $o_t$ is the output of the decoder self-attention layer at position $t$; in effect, one extra linear transformation is added. The maximum-likelihood objective then becomes:
$$\mathcal{J}_{mle}(\Theta)=\mathbb{E}_{(x,y)\sim\mathcal{C},\,\hat{r}^{(k)}\sim\mathcal{R}}\left[\log P(y\mid x,\hat{r}^{(k)};\Theta)\right]$$
Two questions remain: first, how to optimize the semantic encoder so that the adjacency semantic region is meaningful; second, how to design a sampling algorithm that efficiently obtains variant vectors.
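
The broadcasting integration step is simple enough to sketch directly. Below is a minimal PyTorch module implementing $\hat{o}_t=W_1\hat{r}^{(k)}+W_2 o_t+b$; the class name and tensor shapes are my assumptions rather than the paper's code:

```python
import torch
import torch.nn as nn

class BroadcastIntegration(nn.Module):
    """Fuse one sampled semantic vector r_hat into every decoder position t:
    o_hat_t = W1 @ r_hat + W2 @ o_t + b."""

    def __init__(self, d_model: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_model, bias=False)  # W1
        self.w2 = nn.Linear(d_model, d_model, bias=True)   # W2 plus the bias b

    def forward(self, o: torch.Tensor, r_hat: torch.Tensor) -> torch.Tensor:
        # o:     (batch, T, d_model) decoder self-attention outputs o_t
        # r_hat: (batch, d_model)    one vector sampled from the adjacency region
        return self.w1(r_hat).unsqueeze(1) + self.w2(o)    # broadcast r_hat over the T positions
```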

Tangential Contrastive Learning

The solution to problem (1) is called tangential contrastive learning.

[Figure: illustration of tangential contrastive learning and the adjacency region (original image unavailable)]

Take any pair $(x_i,y_i)$ from the training samples and define $d=\lVert r_{x_i}-r_{y_i}\rVert_2$. Using $d$ as the radius and $r_{x_i}$ and $r_{y_i}$ as the two centers, draw two closed balls; the overlap of these two balls is the adjacency region. $d$ also acts as a slack variable: any vector within distance $d$ of both $r_{x_i}$ and $r_{y_i}$ is considered semantically equivalent to $(x_i,y_i)$.
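
A tiny sketch of the membership test implied by this definition (two closed balls of radius $d$, keep the overlap); purely illustrative, not code from the paper:

```python
import torch

def in_adjacency_region(v: torch.Tensor, r_x: torch.Tensor, r_y: torch.Tensor) -> bool:
    """True if v lies in the adjacency region of (x, y): within distance
    d = ||r_x - r_y||_2 of both centers r_x and r_y."""
    d = torch.norm(r_x - r_y, p=2)
    return bool((torch.norm(v - r_x, p=2) <= d) and (torch.norm(v - r_y, p=2) <= d))
```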

The following loss function aims to maximize the similarity score between $x_i$ and $y_i$ while reducing the similarity between $x_i$ and the negatives built from the other source sentences in the batch, and likewise between $y_i$ and the negatives built from the other target sentences. [If this is hard to follow, the ICLR 2022 paper "ON LEARNING UNIVERSAL REPRESENTATIONS ACROSS LANGUAGES" explains the construction in detail.]

$$\mathcal{J}_{ctl}=\mathbb{E}_{(x^{(i)},y^{(i)})\sim\mathcal{B}}\left[\log\frac{e^{s(r_{x_i},r_{y_i})}}{e^{s(r_{x_i},r_{y_i})}+\xi}\right]$$
Here $\mathcal{B}$ denotes a batch from the training set, $s(\cdot,\cdot)$ is cosine similarity, and $\xi$ is defined as:
$$\xi=\sum_{j\ne i}^{|\mathcal{B}|}\left(e^{s(r_{x_i},r'_{x_j})}+e^{s(r_{y_i},r'_{y_j})}\right)$$
This definition differs slightly from the ICLR 2022 paper: it only pushes apart samples within the source side and within the target side, and does not push apart $x_i$ and $y_j$.
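
Putting the two formulas together, here is a minimal PyTorch sketch of the batch objective (negated so that minimizing it maximizes $\mathcal{J}_{ctl}$); the shapes of the pre-built negatives are my assumption:

```python
import torch
import torch.nn.functional as F

def ctl_loss(r_x, r_y, r_x_neg, r_y_neg):
    """Tangential contrastive objective over one batch (negated for minimization).

    r_x, r_y:         (B, d) semantic vectors of the aligned sentence pairs
    r_x_neg, r_y_neg: (B, B, d) interpolated negatives r'_{x_j}, r'_{y_j} built for
                      each anchor i from the other in-batch samples j != i
    """
    pos = torch.exp(F.cosine_similarity(r_x, r_y, dim=-1))                     # e^{s(r_xi, r_yi)}, (B,)
    neg_x = torch.exp(F.cosine_similarity(r_x.unsqueeze(1), r_x_neg, dim=-1))  # (B, B)
    neg_y = torch.exp(F.cosine_similarity(r_y.unsqueeze(1), r_y_neg, dim=-1))  # (B, B)
    off_diag = 1.0 - torch.eye(r_x.size(0), device=r_x.device)                 # drop the j == i terms
    xi = ((neg_x + neg_y) * off_diag).sum(dim=1)                               # the xi term, (B,)
    return -torch.log(pos / (pos + xi)).mean()
```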

The negative examples $x'_j$ and $y'_j$ are constructed as follows (the motivation: simply pushing apart the non-aligned sentences of a batch is too easy a task; smooth interpolation can create harder negatives, i.e., push apart examples that are similar but not semantically equivalent):

[Figure: interpolation formulas for constructing the negative examples $x'_j$ and $y'_j$ (original image unavailable)]

The ICLR paper compares the distance of the positive pair with the distance of the negative pair to decide how hard the negative is: if the positive distance already exceeds the negative distance, the negative is extremely similar (very hard), so it is kept as-is and not processed further; otherwise smooth interpolation is applied. The formulas of this ACL paper do not show that conditional, but the text mentions it. We can regard a smaller $\lambda$ as a harder negative.

[Figure: formula for the adaptive interpolation coefficient $\lambda$ (original image unavailable)]

Smoothly interpolating two vectors produces a new point that moves along the segment AB; if the math is not obvious (it wasn't to me), dragging the points around in GeoGebra makes it clear.

The value of $\lambda$ is adjusted adaptively: compute the ratio of the positive-pair distance to the negative-pair distance, and raise this ratio to the power $\xi\times p^{+}_{avg}$.

where $p^{+}_{avg}=\frac{1}{100}\sum_{j=-100}^{-1}e^{-L_s^{(j)}}$, i.e., the average of $e^{-L_s}$ over the contrastive losses of the last 100 batches. During pre-training, when the model can easily tell the positive samples apart, the negatives are no longer informative; $L_s$ then shrinks, $e^{-L_s}$ grows, and so $p^{+}_{avg}$ grows. At the same time $\frac{d^{+}}{d^{-}}$ shrinks, which makes $\lambda$ smaller and thus constructs harder negatives. The $\xi$ in the exponent is just a hyperparameter; the ICLR paper sets it to 0.9.
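
Since the exact formulas live in the two missing figures, the following is only a rough reconstruction from the surrounding description (the ratio raised to the power $\xi\times p^{+}_{avg}$; a smaller $\lambda$ pulls the negative closer to the anchor and thus makes it harder); every name here is my own guess, not the paper's code:

```python
import torch

def adaptive_lambda(d_pos: float, d_neg: float, recent_ctl_losses: list, xi: float = 0.9) -> torch.Tensor:
    """Adaptive interpolation coefficient as described in the text above.

    d_pos, d_neg:       distance of the positive pair / of the candidate negative pair
    recent_ctl_losses:  contrastive losses of (roughly) the last 100 batches
    xi:                 hyperparameter (0.9 in the ICLR paper)
    """
    p_avg = torch.exp(-torch.tensor(recent_ctl_losses)).mean()  # p^+_avg
    return torch.tensor(d_pos / d_neg) ** (xi * p_avg)          # smaller lambda => harder negative

def interpolate_negative(r_anchor: torch.Tensor, r_other: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Smooth interpolation: move r_other toward r_anchor along the segment between them."""
    return r_anchor + lam * (r_other - r_anchor)
```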

MGRC Sampling (Mixed Gaussian Recursive Chain)

This is the algorithm for sampling from the adjacency region.
