Abstract
Transformer: no recurrence or convolutions; based entirely on attention mechanisms
Introduction
Recurrent models are seq2seq models that compute hidden states sequentially, h_t = f(h_{t-1}, input at position t); this sequential dependency prevents parallelization within a sequence
RNNs struggle with long-range dependencies; the Transformer instead relates positions by averaging attention-weighted positions, so the number of operations between any two positions is constant
Self-attention: an attention mechanism relating different positions of a single sequence in order to compute a representation of that sequence
Model Architecture
- The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers: multi-head self-attention and a position-wise fully connected feed-forward network
- The decoder is also composed of a stack of N = 6 identical layers, but with 3 sub-layers: (masked) multi-head self-attention, a fully connected feed-forward network, and multi-head attention over the encoder output
- position-wise feed-forward: FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
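The FFN above can be sketched in NumPy; d_model = 512 and d_ff = 2048 follow the paper, but the random weights here are placeholders, not trained parameters:

```python
import numpy as np

# Position-wise feed-forward network: FFN(x) = max(0, x W1 + b1) W2 + b2
rng = np.random.default_rng(0)
d_model, d_ff = 512, 2048  # dimensions from the paper
W1, b1 = rng.standard_normal((d_model, d_ff)) * 0.02, np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)) * 0.02, np.zeros(d_model)

def ffn(x):
    # Applied identically and independently at every position (row of x).
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

x = rng.standard_normal((10, d_model))  # 10 positions
print(ffn(x).shape)  # (10, 512)
```

Because the same weights are applied at every position, this is equivalent to two 1x1 convolutions with a ReLU in between.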
- a learned linear transformation followed by softmax converts the decoder output to predicted next-token probabilities
- to make use of the order of the sequence: add positional information (positional encodings are added to the input embeddings)
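A minimal sketch of the sinusoidal positional encoding the paper uses (assuming an even d_model):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

pe = positional_encoding(50, 512)
print(pe.shape)  # (50, 512)
```

The resulting matrix is simply added to the embedding matrix, injecting absolute position information without any learned parameters.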
Attention
maps a query and a set of key-value pairs to an output; a compatibility (relevance) score between the query and each key is computed, either by dot product or by other functions (e.g. additive attention)
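The dot-product variant used by the Transformer, scaled by sqrt(d_k), can be sketched as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # query-key compatibility
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # attention-weighted average of values

Q = np.random.default_rng(0).standard_normal((2, 4))  # 2 queries, d_k = 4
K = np.random.default_rng(1).standard_normal((3, 4))  # 3 keys
V = np.random.default_rng(2).standard_normal((3, 5))  # 3 values, d_v = 5
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 5)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing large with d_k, which would push the softmax into regions of tiny gradients.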
multi-head: linearly project the queries, keys, and values h times, each with different learned linear projections, to d_k, d_k, and d_v dimensions respectively; attention is performed in parallel on each projection, and the outputs are concatenated and projected again
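A minimal sketch of multi-head attention under the paper's dimensions (h = 8, d_model = 512, d_k = d_v = 64); the loop over heads is kept explicit for clarity rather than batched:

```python
import numpy as np

rng = np.random.default_rng(1)
h, d_model = 8, 512
d_k = d_v = d_model // h  # 64

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    heads = []
    for i in range(h):
        # Project to a lower-dimensional subspace per head, then attend.
        q, k, v = Q @ Wq[i], K @ Wk[i], V @ Wv[i]
        heads.append(softmax(q @ k.T / np.sqrt(d_k)) @ v)
    # Concatenate the h heads and apply the output projection W^O.
    return np.concatenate(heads, axis=-1) @ Wo

# Placeholder (untrained) projection matrices.
Wq = rng.standard_normal((h, d_model, d_k)) * 0.02
Wk = rng.standard_normal((h, d_model, d_k)) * 0.02
Wv = rng.standard_normal((h, d_model, d_v)) * 0.02
Wo = rng.standard_normal((h * d_v, d_model)) * 0.02

x = rng.standard_normal((10, d_model))  # self-attention: Q = K = V = x
print(multi_head_attention(x, x, x, Wq, Wk, Wv, Wo).shape)  # (10, 512)
```

Because each head works in a d_k = 64 subspace, the total cost is similar to single-head attention at full dimensionality, while letting different heads attend to different positions.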
## TODO diagram: self-attention && multi-head && adding positional information
## TODO: parallel computation && understanding the Transformer