Attention Is All You Need
Paper link
Source code
PyTorch version
The paper proposes a machine translation model built entirely on attention, discarding the RNN structure and convolutions in favor of a distinctive multi-head attention mechanism; this architecture later inspired BERT.
Abstract
The paper introduces the Transformer, a new sequence transduction model based solely on attention mechanisms rather than recurrence or convolutions. In machine translation experiments the model is easier to parallelize and trains faster, reaching 28.4 BLEU on the WMT 2014 English-to-German translation task.
Introduction
Prior work was based on recurrent models, with attention mechanisms added to connect the networks; this paper eschews recurrence entirely and relies on attention alone.
Background
Compared with ByteNet and ConvS2S, which both use convolutional blocks in which the amount of computation needed to relate two positions grows with the distance between them (linearly for ConvS2S, logarithmically for ByteNet), in this model the cost is constant. Although averaging over attention-weighted positions reduces the effective resolution, multi-head attention is used to counteract this effect.
Self-attention
An attention mechanism that relates different positions of a single sequence in order to compute a representation of the entire sequence.
Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
Model Architecture
The common structure of sequence transduction models: the input token sequence is fed into an encoder to obtain a sequence of continuous representations, which the decoder then maps to the final output. The process is auto-regressive: at each step the previously generated outputs are consumed as additional input.
The model stacks self-attention and point-wise, fully connected layers in both the encoder and the decoder.
# Imports shared by all of the code snippets in this note.
import copy
import math

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderDecoder(nn.Module):
    """
    A standard Encoder-Decoder architecture. Base for this and many
    other models.
    """
    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)


class Generator(nn.Module):
    "Define standard linear + softmax generation step."
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return F.log_softmax(self.proj(x), dim=-1)
Encoder and Decoder Stacks
Encoder: a stack of 6 identical layers, each with two sub-layers: a multi-head self-attention mechanism and a position-wise fully connected feed-forward network. A residual connection is applied around each sub-layer, followed by layer normalization, i.e. $LayerNorm(x + Sublayer(x))$. All sub-layers produce outputs of dimension $d_{model} = 512$.
def clones(module, N):
    "Produce N identical layers."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])


class Encoder(nn.Module):
    "Core encoder is a stack of N layers"
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        "Pass the input (and mask) through each layer in turn."
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)


class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"
    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        "Follow Figure 1 (left) for connections."
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)
Decoder: also a stack of 6 layers, with one more sub-layer than the encoder: multi-head attention over the output of the encoder. In addition, the decoder's self-attention is masked so that a position cannot attend to subsequent positions.
class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)


class DecoderLayer(nn.Module):
    "Decoder is made of self-attn, src-attn, and feed forward (defined below)"
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        "Follow Figure 1 (right) for connections."
        m = memory
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)


def subsequent_mask(size):
    "Mask out subsequent positions."
    attn_shape = (1, size, size)
    subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
    return torch.from_numpy(subsequent_mask) == 0
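For example, on recent PyTorch versions (where the comparison yields a boolean tensor) the mask allows each position to attend only to itself and earlier positions:

print(subsequent_mask(3))
# tensor([[[ True, False, False],
#          [ True,  True, False],
#          [ True,  True,  True]]])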
Attention
Scaled Dot-Product Attention
Queries and keys have dimension $d_k$, values have dimension $d_v$:

$Attention(Q,K,V) = softmax(\frac{QK^T}{\sqrt{d_k}})V$
Compared with additive attention, which computes the compatibility function with a feed-forward network, the paper uses the more efficient dot-product attention and scales it by $\frac{1}{\sqrt{d_k}}$: for large $d_k$ the dot products grow large in magnitude, pushing the softmax into regions with very small gradients.
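The MultiHeadedAttention code further down calls an attention helper that is not reproduced in this note; here is a minimal sketch of it, consistent with the formula above and with the call signature used later:

def attention(query, key, value, mask=None, dropout=None):
    "Compute 'Scaled Dot Product Attention'."
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Masked positions receive a very large negative score before the softmax.
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = F.softmax(scores, dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn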
Multi-Head Attention
Q, K and V of dimension $d_{model}$ are first linearly projected to the lower dimensions $d_k$, $d_k$ and $d_v$; scaled dot-product attention is applied in each projected subspace, and the results are concatenated.
Multi-head attention lets the model use information from different representation subspaces at different positions, which a single attention head would inhibit (by averaging).
The model uses $h = 8$ heads, with $d_k = d_v = d_{model}/h = 512/8 = 64$.
In the code this amounts to linearly transforming q, k and v and splitting them into 8 segments before applying attention:
class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model, dropout=0.1):
        "Take in model size and number of heads."
        super(MultiHeadedAttention, self).__init__()
        assert d_model % h == 0
        # We assume d_v always equals d_k
        self.d_k = d_model // h
        self.h = h
        self.linears = clones(nn.Linear(d_model, d_model), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        "Implements Figure 2"
        if mask is not None:
            # Same mask applied to all h heads.
            mask = mask.unsqueeze(1)
        nbatches = query.size(0)
        # 1) Do all the linear projections in batch from d_model => h x d_k
        query, key, value = [
            lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
            for lin, x in zip(self.linears, (query, key, value))
        ]
        # 2) Apply attention on all the projected vectors in batch.
        x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout)
        # 3) "Concat" using a view and apply a final linear.
        x = x.transpose(1, 2).contiguous().view(nbatches, -1, self.h * self.d_k)
        return self.linears[-1](x)
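A quick shape check of the block above (toy batch and sequence sizes, not from the original note): 8 heads of size 64 are concatenated back to $d_{model} = 512$.

mha = MultiHeadedAttention(h=8, d_model=512)
x = torch.randn(2, 10, 512)   # (batch, seq_len, d_model)
out = mha(x, x, x)            # self-attention: query = key = value
print(out.shape)              # torch.Size([2, 10, 512])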
Applications of Attention in our Model
- encoder-decoder attention: the keys and values come from the output of the encoder, and the queries come from the previous decoder layer
- self-attention in the encoder
- self-attention in the decoder, where subsequent tokens are masked out to preserve the auto-regressive property
Position-wise Feed-Forward Networks
The input and output have dimension $d_{model} = 512$ and the inner layer has dimension $d_{ff} = 2048$. The same network is applied to each position separately and identically: $FFN(x) = \max(0, xW_1 + b_1)W_2 + b_2$.
class PositionwiseFeedForward(nn.Module):
    "Implements FFN equation."
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(F.relu(self.w_1(x))))
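A toy shape check (illustrative sizes): the feed-forward network is applied to every position independently, so the sequence dimension is unchanged.

ffn = PositionwiseFeedForward(d_model=512, d_ff=2048)
x = torch.randn(2, 10, 512)
print(ffn(x).shape)   # torch.Size([2, 10, 512])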
Embeddings and Softmax
Tokens are embedded into $d_{model}$-dimensional vectors; the input and output embedding layers share the same weight matrix with the pre-softmax linear transformation, and predictions are obtained by a linear transformation followed by a softmax. In the embedding layers the weights are multiplied by $\sqrt{d_{model}}$.
class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        # Scale the embeddings by sqrt(d_model), as described in the paper.
        return self.lut(x) * math.sqrt(self.d_model)
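The code in this note keeps separate matrices; a minimal sketch of the weight sharing described above, using the Embeddings and Generator classes already defined (toy sizes, and it assumes a shared source/target vocabulary as in the paper's BPE setup):

d_model, vocab = 512, 11                    # illustrative sizes
embed = Embeddings(d_model, vocab)
generator = Generator(d_model, vocab)
generator.proj.weight = embed.lut.weight    # both weights are (vocab, d_model), so they can share storage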
Positional Encoding
At the bottom of both the encoder and decoder stacks, positional encodings with the same dimension $d_{model} = 512$ are added to the embeddings:

$PE_{(pos, 2i)} = \sin(pos / 10000^{2i/d_{model}}), \quad PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i/d_{model}})$

where $pos$ is the position and $i$ the dimension. Each dimension corresponds to a sinusoid, and because of the properties of sine and cosine, $PE_{pos+k}$ can be expressed as a linear function of $PE_{pos}$.
class PositionalEncoding(nn.Module):
    "Implement the PE function."
    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) *
                             -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # The encodings are stored in a buffer, not a parameter, so they need no gradient.
        x = x + self.pe[:, :x.size(1)]
        return self.dropout(x)
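A toy check of the block above (a small $d_{model}$ chosen only so the tensor is easy to inspect):

pe = PositionalEncoding(d_model=20, dropout=0.0)
y = pe(torch.zeros(1, 100, 20))   # add sinusoidal encodings to a batch of zero embeddings
print(y.shape)                    # torch.Size([1, 100, 20])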
Why Self-Attention
Three aspects are considered:
- computational complexity per layer
- amount of computation that can be parallelized
- path length between long-range dependencies
Beyond these advantages over convolutional and recurrent layers, individual attention heads clearly learn to perform different tasks, and many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
The full model:
def make_model(src_vocab, tgt_vocab, N=6,
               d_model=512, d_ff=2048, h=8, dropout=0.1):
    "Helper: Construct a model from hyperparameters."
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab))
    # This was important from their code.
    # Initialize parameters with Glorot / fan_avg.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model
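A quick smoke test of the assembled (untrained) model, plus a tiny greedy decoding loop that illustrates the auto-regressive behavior described above; the vocabulary size, start symbol and sequence length are arbitrary toy values.

test_model = make_model(src_vocab=11, tgt_vocab=11, N=2)
test_model.eval()                                  # disable dropout for the test
src = torch.LongTensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
src_mask = torch.ones(1, 1, 10)

memory = test_model.encode(src, src_mask)
ys = torch.zeros(1, 1).type_as(src)                # start symbol = index 0 (illustrative)
for _ in range(9):
    out = test_model.decode(memory, src_mask, ys,
                            subsequent_mask(ys.size(1)).type_as(src))
    prob = test_model.generator(out[:, -1])        # log-probabilities for the next position only
    next_word = prob.argmax(dim=1)
    ys = torch.cat([ys, next_word.unsqueeze(0)], dim=1)
print(ys)  # the model is untrained, so the generated ids are essentially arbitrary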
Train
- WMT 2014 English-German and WMT 2014 English-French datasets
- Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, $\epsilon = 10^{-9}$ and warmup_steps = 4000 (see the schedule sketch below)
- Regularization
  - Dropout = 0.1, applied to the output of each sub-layer and to the sums of the embeddings and positional encodings
  - Residual connections
  - Label smoothing $\epsilon_{ls} = 0.1$
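The warmup follows the paper's schedule: the learning rate grows linearly for the first warmup_steps steps and then decays with the inverse square root of the step number. A minimal sketch (the function name rate is illustrative, not from the original code):

def rate(step, d_model=512, warmup=4000):
    "lrate = d_model^(-0.5) * min(step^(-0.5), step * warmup^(-1.5))"
    step = max(step, 1)   # avoid raising 0 to a negative power
    return d_model ** (-0.5) * min(step ** (-0.5), step * warmup ** (-1.5))

# For example, with a torch optimizer whose base learning rate is 1.0:
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=rate)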
Residual:
class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))
LayerNorm:
class LayerNorm(nn.Module):
    "Construct a layernorm module (See citation for details)."
    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
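A toy check (illustrative numbers): with the default $a_2 = 1$ and $b_2 = 0$ the output is just the standardized input along the last dimension.

ln = LayerNorm(4)
t = torch.tensor([[1.0, 2.0, 3.0, 4.0]])
print(ln(t))   # roughly [[-1.16, -0.39, 0.39, 1.16]]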
LabelSmoothing:
class LabelSmoothing(nn.Module):
    "Implement label smoothing."
    def __init__(self, size, padding_idx, smoothing=0.0):
        super(LabelSmoothing, self).__init__()
        self.criterion = nn.KLDivLoss(reduction='sum')
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.true_dist = None

    def forward(self, x, target):
        assert x.size(1) == self.size
        true_dist = x.data.clone()
        true_dist.fill_(self.smoothing / (self.size - 2))
        true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
        true_dist[:, self.padding_idx] = 0
        mask = torch.nonzero(target.data == self.padding_idx)
        if mask.dim() > 0:
            true_dist.index_fill_(0, mask.squeeze(), 0.0)
        self.true_dist = true_dist
        return self.criterion(x, true_dist.clone().detach())
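A toy example of the class above (illustrative numbers, not from the original note): with size = 5, padding_idx = 0 and smoothing = 0.4, the target distribution puts 0.6 on the gold token, 0 on the padding index, and spreads the remaining mass over the other tokens.

crit = LabelSmoothing(size=5, padding_idx=0, smoothing=0.4)
predict = torch.log(torch.tensor([[1e-9, 0.2, 0.7, 0.1, 1e-9]]))   # log-probabilities
crit(predict, torch.tensor([2]))
print(crit.true_dist)
# tensor([[0.0000, 0.1333, 0.6000, 0.1333, 0.1333]])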
Results
Model hyperparameter tuning: