NLP
Topics: language models, machine translation, text classification, reading comprehension, dialogue generation, sequence labeling, relation extraction, modeling relational data
Within NLP, sequence labeling is more central than text classification.
1. Language Models
2. Sequence Models
3. Machine Translation
4. Text Classification
5. Reading Comprehension
6. Dialogue Generation
7. Sequence Labeling
8. Relation Extraction
9. Modeling Relational Data
10. Memory Networks
11. Sequence Generation
12. Meta Learning
13. Speech Emotion Recognition
14. Others
15. NLP Supplement
Text Summarization
1. Model: CopyNet
Paper: Incorporating Copying Mechanism in Sequence-to-Sequence Learning
2. Model: SummaRuNNer
Paper: SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents
3. Model: SeqGAN
Paper: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
4. Model: Latent Extractive
Paper: Neural Latent Extractive Document Summarization
5. Model: NEUSUM
Paper: Neural Document Summarization by Jointly Learning to Score and Select Sentences
6. Model: BERTSUM
Paper: Text Summarization with Pretrained Encoders
7. Model: BRIO
Paper: BRIO: Bringing Order to Abstractive Summarization
8. Model: NAM
Paper: A Neural Attention Model for Abstractive Sentence Summarization
9. Model: RAS
Paper: Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
10. Model: PGN (the copy/pointer mechanism it shares with CopyNet is sketched after this list)
Paper: Get To The Point: Summarization with Pointer-Generator Networks
11. Model: Re3Sum
Paper: Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization
12. Model: MTLSum
Paper: Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation
13. Model: KGSum
Paper: Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization
14. Model: PEGASUS
Paper: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
15. Model: FASum
Paper: Enhancing Factual Consistency of Abstractive Summarization
16. Model: RNN(ext) + ABS + RL + Rerank
Paper: Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
17. Model: BottleSUM
Paper: BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle
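The copy mechanism behind CopyNet and the Pointer-Generator Network (PGN) mixes a generation distribution over a fixed vocabulary with an attention-based copy distribution over the source tokens: P_final(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention mass on source positions holding w. The snippet below is a minimal, illustrative sketch of that mixing step; the function name, shapes, and toy inputs are assumptions for demonstration, not code from either paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_generator_step(vocab_logits, attn_scores, src_ids, p_gen):
    """One decoding step of a pointer-generator style output layer (illustrative).

    vocab_logits: (vocab_size,) decoder logits over the fixed vocabulary
    attn_scores:  (src_len,)    unnormalized attention scores over source positions
    src_ids:      (src_len,)    vocabulary ids of the source tokens
    p_gen:        scalar in [0, 1], gate between generating and copying
    """
    p_vocab = softmax(vocab_logits)   # generation distribution
    attn = softmax(attn_scores)       # copy (attention) distribution
    p_final = p_gen * p_vocab
    # scatter-add the copy probability mass onto the ids of the source tokens
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attn)
    return p_final

# toy usage: a 6-word vocabulary and a 4-token source sentence
rng = np.random.default_rng(0)
dist = pointer_generator_step(
    vocab_logits=rng.normal(size=6),
    attn_scores=rng.normal(size=4),
    src_ids=np.array([2, 5, 2, 0]),
    p_gen=0.7,
)
print(dist, dist.sum())  # a valid probability distribution that sums to 1.0
```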
Text Generation
Sequence to Sequence Learning with Neural Networks
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
Neural Machine Translation by Jointly Learning to Align and Translate
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Attention Is All You Need (scaled dot-product attention is sketched after this list)
Improving Language Understanding by Generative Pre-Training
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Cross-lingual Language Model Pretraining
Language Models are Unsupervised Multitask Learners
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
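For the Transformer paper listed above ("Attention Is All You Need"), the core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal single-head, unmasked sketch; the shapes and random toy inputs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (single head, no mask)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (len_q, len_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V                                     # (len_q, d_v)

# toy usage: 3 query positions attending over 4 key/value positions
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```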