一、Self-Attention
- Self-attention applies attention within a single RNN, rather than between an encoder and a decoder.
- Self-attention can be applied to any RNN.
- Self-Attention [2]: attention [1] beyond Seq2Seq models.
- The original self-attention paper applies attention to an LSTM.
- To make teaching easy, I replace the LSTM with a SimpleRNN.
Original papers:
- [1] Bahdanau, Cho, & Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
- [2] Cheng, Dong, & Lapata. Long Short-Term Memory-Networks for Machine Reading. In EMNLP, 2016.
二、SimpleRNN + Self-Attention
- Initially, the context vector C0 and the state vector h0 are both all-zero vectors.
- The RNN reads the first input X1 and updates its state, compressing the information of X1 into the new state h1.
- Next, compute C1, the weighted average of the existing states.
- To compute C2, we first compute the second set of weights: αi = align(hi, h2), one weight for each existing state hi.
- C2 is then the weighted average of the existing states h1 and h2. Since h0 is an all-zero vector, h0 is ignored from here on.
- The same procedure is repeated for every subsequent input (see the sketch after this list).
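To make the walkthrough concrete, here is a minimal NumPy sketch of SimpleRNN + self-attention. It is a sketch under assumptions: the additive alignment score, the randomly initialized parameter names (Wx, Wh, Wc, W_align, v_align), and the exact form of the state update are illustrative choices, not the formulation of the original papers.

```python
import numpy as np

def align(h_i, h_t, v, W):
    """Additive alignment score between a stored state h_i and the current
    state h_t. This score function is an assumption; the papers above use
    their own parameterizations."""
    return float(v @ np.tanh(W @ np.concatenate([h_i, h_t])))

def simple_rnn_self_attention(xs, hidden_dim, rng=np.random.default_rng(0)):
    """Run a SimpleRNN with self-attention over inputs xs of shape (T, input_dim).
    Parameters are random here instead of learned."""
    input_dim = xs.shape[1]
    Wx = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
    Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    Wc = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    W_align = rng.normal(scale=0.1, size=(hidden_dim, 2 * hidden_dim))
    v_align = rng.normal(scale=0.1, size=hidden_dim)

    h = np.zeros(hidden_dim)   # h0: all-zero initial state
    c = np.zeros(hidden_dim)   # C0: all-zero initial context vector
    states = []                # stores h1, h2, ... (h0 is ignored, as in the notes)

    for x in xs:
        # New state from the current input, previous state, and previous context.
        # (Whether the previous state, the context, or both feed the update is a
        # modeling choice; the notes above do not pin it down.)
        h = np.tanh(Wx @ x + Wh @ h + Wc @ c)
        states.append(h)
        # Weights alpha_i = align(hi, h_t) over all existing states, softmax-normalized.
        scores = np.array([align(h_i, h, v_align, W_align) for h_i in states])
        alphas = np.exp(scores - scores.max())
        alphas /= alphas.sum()
        # The context vector is the weighted average of the existing states.
        c = sum(a * h_i for a, h_i in zip(alphas, states))
    return np.stack(states), c

# Usage: 5 time steps of 3-dimensional inputs, hidden size 4.
xs = np.random.default_rng(1).normal(size=(5, 3))
states, c_last = simple_rnn_self_attention(xs, hidden_dim=4)
print(states.shape, c_last.shape)  # (5, 4) (4,)
```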
三、Summary
- Self-attention is not limited to Seq2Seq models; it can be applied to any RNN.
- With self-attention, the RNN is less likely to forget.
- Besides mitigating forgetting, self-attention helps the RNN pay attention to the context relevant to the new input.