The neural machine translation reading list compiled by Tsinghua's NLP group (THUNLP-MT) names ten must-read papers:
https://github.com/THUNLP-MT/MT-Reading-List
Back to neural machine translation: this paper presents "a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure". It uses two LSTMs as the encoder and the decoder (without attention) and replaces out-of-vocabulary words with UNK; on the WMT'14 English-French dataset it beats a traditional phrase-based statistical machine translation baseline by nearly 2 BLEU points.
The overall idea of the paper is the same as the RNN Encoder-Decoder; the difference is that it is implemented with LSTMs.
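To make the architecture concrete, here is a minimal PyTorch sketch of an attention-free LSTM encoder-decoder (my own illustration under simplified assumptions, not the authors' implementation; the class and parameter names such as `Seq2Seq` are hypothetical):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal LSTM encoder-decoder without attention (illustrative sketch)."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden_dim=256, num_layers=1):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # The encoder and decoder are two separate LSTMs with no shared
        # parameters (the paper's first point, listed below).
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Without attention, the whole source sentence is compressed into
        # the encoder's final hidden/cell state -- one fixed-size vector.
        _, state = self.encoder(self.src_emb(src_ids))
        # The decoder starts from that state and reads the shifted target
        # sequence (teacher forcing during training).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.proj(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=50_000, tgt_vocab=50_000)
src = torch.randint(0, 50_000, (2, 7))  # a toy batch of source token ids
tgt = torch.randint(0, 50_000, (2, 9))  # teacher-forced target inputs
logits = model(src, tgt)                # torch.Size([2, 9, 50000])
```

Since there is no attention, the encoder's final (h, c) state is the only channel through which source information reaches the decoder.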
The paper makes three important points:
1) The encoder and the decoder are two separate LSTMs with no shared parameters;
2) Deep LSTMs perform better than shallow ones, so a 4-layer LSTM is used. Quoting the paper:
"We found that the LSTM models are fairly easy to train. We used deep LSTMs with 4 layers, with 1000 cells at each layer and 1000 dimensional word embeddings."
3) Reversing the order of the words in the source sentence (leaving the target untouched) markedly improves translation quality, because it introduces many short-term dependencies that make optimization easier (both this trick and the sizes above are shown in the sketch below).
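Plugging the paper's reported sizes (4 layers, 1000 cells per layer, 1000-dimensional embeddings, a 160k-word source and 80k-word target vocabulary) into the hypothetical `Seq2Seq` sketch above, together with the source-reversal trick, might look like this; the real system additionally used ensembling and beam-search decoding:

```python
import torch

# Assumes the hypothetical Seq2Seq class sketched earlier.
model = Seq2Seq(src_vocab=160_000, tgt_vocab=80_000,
                emb_dim=1000, hidden_dim=1000, num_layers=4)

src = torch.randint(0, 160_000, (2, 7))
tgt = torch.randint(0, 80_000, (2, 9))

# Point 3: reverse the word order of the source sentence only.
# (With padded batches you would reverse just the non-pad tokens.)
src_reversed = torch.flip(src, dims=[1])
logits = model(src_reversed, tgt)  # (2, 9, 80000)
```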