Commonly Used Networks in NLP

Topics: language models, machine translation, text classification, reading comprehension, dialogue generation, sequence labeling, relation extraction, and modeling relational data

Within NLP, sequence labeling is a more central task than text classification.
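
To make the sequence labeling setting concrete, here is a minimal BiLSTM tagger sketch in PyTorch: it produces one tag score per token, rather than one label per document as in text classification. All names and sizes (vocab_size, tag_size, the toy batch) are illustrative assumptions, not taken from any specific paper referenced below.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal sequence labeling sketch: embed -> BiLSTM -> per-token tag logits."""
    def __init__(self, vocab_size=5000, tag_size=10, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM reads the sentence left-to-right and right-to-left.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # One score vector per token position.
        self.out = nn.Linear(2 * hidden_dim, tag_size)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))   # h: (batch, seq_len, 2*hidden_dim)
        return self.out(h)                        # logits: (batch, seq_len, tag_size)

# Toy usage: random token ids stand in for a real tokenized batch.
model = BiLSTMTagger()
tokens = torch.randint(0, 5000, (2, 7))           # 2 sentences, 7 tokens each
logits = model(tokens)
print(logits.shape)                               # torch.Size([2, 7, 10])
```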


1. Language Models


2. Sequence Models


3. Machine Translation


4. Text Classification


5. Reading Comprehension


6. Dialogue Generation


7. Sequence Labeling


8. Relation Extraction


9. Modeling Relational Data


10. Memory Networks


11. Sequence Generation


12. Meta-Learning


13. Speech Emotion Recognition


14. Others


15. NLP Supplement


Text Summarization

1. Model: CopyNet

Paper: Incorporating Copying Mechanism in Sequence-to-Sequence Learning

2. Model: SummaRuNNer

Paper: SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents

3. Model: SeqGAN

Paper: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

4. Model: Latent Extractive

Paper: Neural Latent Extractive Document Summarization

5. Model: NEUSUM

Paper: Neural Document Summarization by Jointly Learning to Score and Select Sentences

6. Model: BERTSUM

Paper: Text Summarization with Pretrained Encoders

7. Model: BRIO

Paper: BRIO: Bringing Order to Abstractive Summarization

8. Model: NAM

Paper: A Neural Attention Model for Abstractive Sentence Summarization

9. Model: RAS

Paper: Abstractive Sentence Summarization with Attentive Recurrent Neural Networks

10. Model: PGN (pointer-generator; a minimal copy-mechanism sketch follows this list)

Paper: Get To The Point: Summarization with Pointer-Generator Networks

11. Model: Re3Sum

Paper: Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization

12. Model: MTLSum

Paper: Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation

13. Model: KGSum

Paper: Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization

14. Model: PEGASUS

Paper: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

15. Model: FASum

Paper: Enhancing Factual Consistency of Abstractive Summarization

16. Model: RNN(ext) + ABS + RL + Rerank

Paper: Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

17. Model: BottleSUM

Paper: BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle
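
The copy mechanism behind entries such as CopyNet (1) and PGN (10) mixes a generation distribution over a fixed vocabulary with a copy distribution over source tokens. The sketch below shows only that mixing step in PyTorch, under assumed tensor names and shapes (vocab_logits, attn_weights, src_ids, p_gen); it is an illustration of the idea, not the papers' exact architectures.

```python
import torch

def mix_copy_and_generate(vocab_logits, attn_weights, src_ids, p_gen):
    """
    vocab_logits : (batch, vocab_size)  decoder scores over the fixed vocabulary
    attn_weights : (batch, src_len)     attention over source tokens (sums to 1)
    src_ids      : (batch, src_len)     vocabulary ids of the source tokens
    p_gen        : (batch, 1)           probability of generating vs. copying
    Returns the final distribution (batch, vocab_size), pointer-generator style.
    """
    gen_dist = torch.softmax(vocab_logits, dim=-1) * p_gen
    # Scatter the attention mass of each source position onto its vocabulary id.
    copy_dist = torch.zeros_like(gen_dist).scatter_add_(
        1, src_ids, attn_weights * (1 - p_gen)
    )
    return gen_dist + copy_dist

# Toy usage with hypothetical sizes.
batch, vocab, src_len = 2, 50, 6
vocab_logits = torch.randn(batch, vocab)
attn = torch.softmax(torch.randn(batch, src_len), dim=-1)
src_ids = torch.randint(0, vocab, (batch, src_len))
p_gen = torch.sigmoid(torch.randn(batch, 1))
final = mix_copy_and_generate(vocab_logits, attn, src_ids, p_gen)
print(final.shape, final.sum(dim=-1))  # each row sums to ~1
```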


Text Generation

Sequence to Sequence Learning with Neural Networks

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

Neural Machine Translation by Jointly Learning to Align and Translate

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

Attention Is All You Need (a minimal scaled dot-product attention sketch follows this list)

Improving Language Understanding by Generative Pre-Training

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Cross-lingual Language Model Pretraining

Language Models are Unsupervised Multitask Learners

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
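
Several of the papers above, starting with "Attention Is All You Need", are built on scaled dot-product attention. Below is a minimal sketch of that single operation with made-up shapes; it is not a full Transformer, and the function name is only illustrative.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq_len, d_k). Returns attended values and weights."""
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled by sqrt(d_k).
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (batch, heads, q_len, k_len)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

# Toy usage with hypothetical sizes: 1 sentence, 2 heads, 5 tokens, d_k = 8.
q = k = v = torch.randn(1, 2, 5, 8)
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, w.shape)   # torch.Size([1, 2, 5, 8]) torch.Size([1, 2, 5, 5])
```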

