A seq2seq example: machine translation with Keras (a minimal sketch follows the link list below)
Playing with Keras: automatic headline generation with seq2seq
https://baijiahao.baidu.com/s?id=1627587324043258333&wfr=spider&for=pc
https://www.jianshu.com/p/923c8b489604
https://www.cnblogs.com/DLlearning/p/7834018.html
https://mp.weixin.qq.com/s/QwVImqc66GP3_KBwSaU2CQ
https://yq.aliyun.com/articles/669616
https://zhuanlan.zhihu.com/p/40920384
https://www.zhihu.com/people/cheshengyuan/posts
https://zhuanlan.zhihu.com/p/36361833
https://zhuanlan.zhihu.com/p/39034683
https://spaces.ac.cn/archives/5861/comment-page-2#comments
https://github.com/bojone/seq2seq
https://cloud.tencent.com/developer/news/46171
https://github.com/chenjiayu0808/ML-AI-experiments/tree/master/AI/Neural Machine Translation
https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py
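For reference, a minimal sketch in the spirit of the official lstm_seq2seq.py example linked above: an LSTM encoder-decoder trained with teacher forcing. The token counts and latent dimension below are placeholder values, not taken from any of the articles in this list.

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

NUM_ENC_TOKENS, NUM_DEC_TOKENS, LATENT_DIM = 71, 94, 256  # placeholder sizes

# Encoder: read the source sequence and keep only its final LSTM states,
# i.e. the fixed-size vector that will condition the decoder.
encoder_inputs = Input(shape=(None, NUM_ENC_TOKENS))
_, state_h, state_c = LSTM(LATENT_DIM, return_state=True)(encoder_inputs)

# Decoder: during training it receives the target sequence shifted by one
# step (teacher forcing) and starts from the encoder's final states.
decoder_inputs = Input(shape=(None, NUM_DEC_TOKENS))
decoder_lstm = LSTM(LATENT_DIM, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=[state_h, state_c])
decoder_outputs = Dense(NUM_DEC_TOKENS, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()
# model.fit([encoder_input_data, decoder_input_data], decoder_target_data, ...)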
Attention is by now essentially a standard component of seq2seq models. The idea: at each decoding step, the decoder relies not only on the fixed-size vector produced by the encoder (a read of the whole text), but also looks back at each of the original source words (a close read of the relevant parts); the two together determine the output at the current step.
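As a concrete illustration of that "overview plus look-back" idea, here is a minimal sketch of a Luong-style dot-product attention decoder built with tf.keras's built-in Attention layer. The vocabulary sizes and dimensions are assumptions for the sketch; this is not the code from any of the articles linked here.

from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense,
                                     Attention, Concatenate)
from tensorflow.keras.models import Model

VOCAB_SRC, VOCAB_TGT, EMB_DIM, HID_DIM = 5000, 5000, 128, 256  # assumed sizes

# Encoder: keep the per-step outputs (so the decoder can look back at every
# source word) as well as the final states (the fixed-size summary).
enc_tokens = Input(shape=(None,), name="encoder_tokens")
enc_emb = Embedding(VOCAB_SRC, EMB_DIM)(enc_tokens)
enc_seq, state_h, state_c = LSTM(HID_DIM, return_sequences=True,
                                 return_state=True)(enc_emb)

# Decoder: initialised with the encoder's final states (the "whole text" view).
dec_tokens = Input(shape=(None,), name="decoder_tokens")
dec_emb = Embedding(VOCAB_TGT, EMB_DIM)(dec_tokens)
dec_seq = LSTM(HID_DIM, return_sequences=True)(dec_emb,
                                               initial_state=[state_h, state_c])

# Attention: at each decoding step, score every encoder output against the
# current decoder state and take a weighted sum (the "look back" context),
# then combine both views to predict the current output token.
context = Attention()([dec_seq, enc_seq])   # query = decoder, value = encoder
merged = Concatenate()([dec_seq, context])
probs = Dense(VOCAB_TGT, activation="softmax")(merged)

model = Model([enc_tokens, dec_tokens], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()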
Practical guide | A comprehensive survey of the Attention mechanism
Alibaba Cloud community: the Attention model
A complete illustrated guide to RNNs, RNN variants, Seq2Seq, and the Attention mechanism
Su Jianlin: a light read of "Attention is All You Need" (overview + Keras code on GitHub)
A truly complete illustrated guide to the Seq2Seq Attention model