Problem Statement
- Networks built entirely on attention have so far been applied only to Seq2Seq problems such as NMT
- Attention-only models discard word-order information, which is why a positional encoding is needed (see the sketch below)
Goal of this paper: without using RNNs or CNNs, build a unified attention network for sentence encoding that can serve a variety of NLP problems
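To make the positional-encoding point above concrete, here is a minimal NumPy sketch of one common scheme, the sinusoidal encoding from the Transformer paper; the choice of this scheme, the function name, and the toy sizes are illustrative assumptions, not something specified by the paper discussed here.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings."""
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                 # (1, d_model)
    # Each pair of dimensions shares one frequency: 1 / 10000^(2i / d_model).
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                   # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])              # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])              # odd dims: cosine
    return pe

# Added to token embeddings so attention can distinguish positions.
pe = sinusoidal_positional_encoding(seq_len=6, d_model=8)
print(pe.shape)  # (6, 8)
```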
Background
- Sentence Encoding
- Attention: $\mathrm{Attention}(Query, Source) = \sum_{i=1}^{L_x} \mathrm{Similarity}(Query, Key_i) \cdot Value_i$
  - Additive (multi-layer perceptron): $f(x_i, q) = w^T \sigma(W_1 x_i + W_2 q)$
  - Multiplicative (dot-product): $f(x_i, q) = \langle W_1 x_i, W_2 q \rangle$
  - $p = \mathrm{Similarity} = \mathrm{softmax}(f(x_i, q))$
  - $s = \mathrm{Attention} = \sum_{i=1}^{L_x} p_i \, x_i$
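A minimal NumPy sketch of the pipeline above, assuming each $x_i$ serves as both key and value (so $s = \sum_i p_i x_i$): both scoring functions, a softmax to get the similarity distribution $p$, and the weighted sum $s$. The shapes, the choice of tanh for $\sigma$, and the random toy inputs are illustrative assumptions.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()                               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def additive_score(x, q, W1, W2, w):
    """f(x_i, q) = w^T sigma(W1 x_i + W2 q), with sigma = tanh, for every token i."""
    # x: (L, d), q: (d,), W1/W2: (h, d), w: (h,)
    hidden = np.tanh(x @ W1.T + q @ W2.T)         # (L, h)
    return hidden @ w                             # (L,) one score per token

def multiplicative_score(x, q, W1, W2):
    """f(x_i, q) = <W1 x_i, W2 q> for every token i."""
    return (x @ W1.T) @ (W2 @ q)                  # (L,)

def attend(x, scores):
    """s = sum_i p_i * x_i with p = softmax(f(x_i, q))."""
    p = softmax(scores)                           # similarity distribution over tokens
    return p @ x                                  # (d,) weighted sum of values

rng = np.random.default_rng(0)
L, d, h = 5, 4, 8                                 # toy sizes: tokens, embed dim, hidden dim
x, q = rng.normal(size=(L, d)), rng.normal(size=d)
W1a, W2a, w = rng.normal(size=(h, d)), rng.normal(size=(h, d)), rng.normal(size=h)
W1m, W2m = rng.normal(size=(d, d)), rng.normal(size=(d, d))

s_add = attend(x, additive_score(x, q, W1a, W2a, w))
s_mul = attend(x, multiplicative_score(x, q, W1m, W2m))
print(s_add.shape, s_mul.shape)                   # (4,) (4,)
```

The two scorers differ only in how $f(x_i, q)$ is computed; the additive form passes through a hidden nonlinearity, while the dot-product form is cheaper and maps directly onto matrix multiplication.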