EMNLP 2020 Paper List by Topic: Event Extraction / Relation Extraction / NER / Few-Shot / MRC / Summarization

**The EMNLP 2020 acceptance list is out!** Based on my own research interests, I have sorted the main-conference and Findings papers into the categories below. (●'◡'●) If you spot any errors or omissions, please point them out!

Main-conference paper list: https://2020.emnlp.org/papers/main

Findings paper list: https://2020.emnlp.org/papers/findings

P.S. This post is also published on Zhihu. Feel free to follow me there: Carrie

Event Extraction

Main Conference

  • Affective Event Classification with Discourse-enhanced Self-training

  • Biomedical Event Extraction as Sequence Labeling

  • Connecting the Dots: Event Graph Schema Induction with Path Language Modeling

  • Event Extraction as Machine Reading Comprehension

  • Event Extraction by Answering (Almost) Natural Questions

  • Incremental Event Detection via Knowledge Consolidation Networks

  • MAVEN: A Massive General Domain Event Detection Dataset

  • Weakly Supervised Subevent Knowledge Acquisition

  • Event Detection: Gate Diversity and Syntactic Importance Scores for Graph Convolution Neural Networks

  • Introducing a New Dataset for Event Detection in Cybersecurity Texts

  • Semi-supervised New Event Type Induction and Event Detection

Findings

  • Edge-Enhanced Graph Convolution Networks for Event Detection with Syntactic Relation

  • Event Extraction as Multi-turn Question Answering

  • How Does Context Matter? On the Robustness of Event Detection with Context-Selective Mask Generalization

  • Graph Transformer Networks with Syntactic and Semantic Structures for Event Argument Extraction

  • Biomedical Event Extraction on Graph Edge-conditioned Attention Networks with Hierarchical Knowledge Graphs

  • Resource-Enhanced Neural Model for Event Argument Extraction

  • Learning to Classify Events from Human Needs Category Descriptions

Relation Extraction

Main Conference

  • Double Graph Based Reasoning for Document-level Relation Extraction

  • FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction

  • Global-to-Local Neural Networks for Document-Level Relation Extraction

  • Learning from Context or Names? An Empirical Study on Neural Relation Extraction

  • Let’s Stop Error Propagation in the End-to-End Relation Extraction Literature!

  • Pre-training Entity Relation Encoder with Intra-span and Inter-span Information

  • Recurrent Interaction Network for Jointly Extracting Entities and Classifying Relations

  • Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations

  • SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction

  • Two are Better Than One: Joint Entity and Relation Extraction with Table-Sequence Encoders

  • Denoising Relation Extraction from Document-level Distant Supervision

  • Distilling Structured Knowledge for Text-Based Relational Reasoning

  • Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data

  • Joint Constrained Learning for Event-Event Relation Extraction

  • Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction

Findings

  • Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction

  • Minimize Exposure Bias of Seq2Seq Models in Joint Entity and Relation Extraction

  • The RELX Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification

  • Unsupervised Relation Extraction from Language Models using Constrained Cloze Completion

  • The Dots Have Their Values: Exploiting the Node-Edge Connections in Graph-based Neural Models for Document-level Relation Extraction

NER

Main Conference

  • Coarse-to-Fine Pre-training for Named Entity Recognition

  • A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?

  • Counterfactual Generator: A Weakly-Supervised Method for Named Entity Recognition

  • Entity Enhanced BERT Pre-training for Chinese NER

  • Frustratingly Simple Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning

  • HIT: Nested Named Entity Recognition via Head-Tail Pair and Token Interaction

  • Interpretable Multi-dataset Evaluation for Named Entity Recognition

  • Named Entity Recognition Only from Word Embeddings

  • Named Entity Recognition for Social Media Texts with Semantic Augmentation

Findings

  • Toward Recognizing More Entity Types in NER: An Efficient Implementation using Only Entity Lexicons

  • A Dual-Attention Network for Joint Named Entity Recognition and Sentence Classification of Adverse Drug Events

  • Improving Named Entity Recognition with Attentive Ensemble of Syntactic Information

  • Constrained Decoding for Computationally Efficient Named Entity Recognition Taggers

  • Hierarchical Region Learning for Nested Named Entity Recognition

Few-Shot & Zero-Shot

Main Conference

  • Adaptive Attentional Network for Few-Shot Knowledge Graph Completion

  • An Empirical Study on Large-Scale Multi-Label Text Classification including Few and Zero-Shot Labels

  • Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference

  • Few-shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning

  • Few-Shot Learning for Opinion Summarization

  • Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks

  • Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

  • Frustratingly Simple Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning

  • Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start

  • Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs

  • Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing

  • From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers

  • Grounded Adaptation for Zero-shot Executable Semantic Parsing

  • MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale

  • Scalable Zero-shot Entity Linking with Dense Entity Retrieval

  • Self-Supervised Knowledge Triplet Learning for Zero-shot Question Answering

  • Zero-Shot Cross-Lingual Transfer with Meta Learning

  • Zero-Shot Crosslingual Sentence Simplification

  • Language Adapters for Zero Shot Neural Machine Translation

  • On the Evaluation of Contextual Embeddings for Zero-Shot Cross-Lingual Transfer Learning

  • SLEDGE: A Simple Yet Effective Zero-Shot Baseline for Coronavirus Scientific Knowledge Search

Findings

  • Few-shot Natural Language Generation for Task-Oriented Dialog

  • Few-Shot Multi-Hop Relation Reasoning over Knowledge Bases

  • Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection

  • Composed Variational Natural Language Generation for Few-shot Intents

  • Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation

  • Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines

  • Hybrid Emoji-Based Masked Language Models for Zero-Shot Abusive Language Detection

  • Zero-shot Entity Linking with Efficient Long Range Sequence Modeling

  • ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity and Visual Summarization

  • Zero-Shot Rationalization by Multi-Task Transfer Learning from Question Answering

  • Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning

  • Sparse and Decorrelated Representations for Stable Zero-shot NMT

MRC & QA

Main Conference

  • Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

  • Discern: Discourse-Aware Entailment Reasoning Network for Conversational Machine Reading

  • IIRC: A Dataset of Incomplete Information Reading Comprehension Questions

  • Interactive Fiction Game Playing as Multi-Paragraph Reading Comprehension with Reinforcement Learning

  • MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

  • Scene Restoring for Narrative Machine Reading Comprehension

  • TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

  • Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text

  • Reading Between the Lines: Exploring Infilling in Visual Narratives

  • Towards Interpreting BERT for Reading Comprehension Based QA

  • Neural Conversational QA: Learning to Reason vs Exploiting Patterns

  • A Simple Yet Strong Pipeline for HotpotQA

  • SubjQA: A Dataset for Subjectivity and Review Comprehension

  • STL-CQA: Structure-based Transformers with Localization and Encoding for Chart Question Answering

  • QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines

  • ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning

  • LAReQA: Language-agnostic answer retrieval from a multilingual pool

  • Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning

  • AutoQA: From Databases To Q&A Semantic Parsers With Only Synthetic Training Data

  • AmbigQA: Answering Ambiguous Open-domain Questions

  • Hierarchical Graph Network for Multi-hop Question Answering

  • Is Graph Structure Necessary for Multi-hop Question Answering?

Findings

  • Undersensitivity in Neural Reading Comprehension

  • Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension

  • No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension

  • PolicyQA: A Reading Comprehension Dataset for Privacy Policies

  • Answer Span Correction in Machine Reading Comprehension

  • Open-Ended Visual Question Answering by Multi-Modal Domain Adaptation

  • ConceptBert: Concept-Aware Representation for Visual Question Answering

  • HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data

  • FQuAD: French Question Answering Dataset

  • Question Answering with Long Multiple-Span Answers

  • Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

  • Multi-hop Question Generation with Graph Convolutional Network

  • MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering

  • Open Domain Question Answering based on Text Enhanced Knowledge Graph with Hyperedge Infusion

Summarization

Main Conference

  • A Spectral Method for Unsupervised Multi-Document Summarization

  • Coarse-to-Fine Query Focused Multi-Document Summarization

  • Compressive Summarization with Plausibility and Salience Modeling

  • Evaluating the Factual Consistency of Abstractive Text Summarization

  • Friendly Topic Assistant for Transformer Based Abstractive Summarization

  • Intrinsic Evaluation of Summarization Datasets

  • MLSUM: The Multilingual Summarization Corpus

  • Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning

  • Multi-Fact Correction in Abstractive Text Summarization

  • Multi-hop Inference for Question-driven Summarization

  • Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization

  • Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network

  • On Extractive and Abstractive Neural Document Summarization with Transformer Language Models

  • Pre-training for Abstractive Document Summarization by Reinstating Source Text

  • Q-learning with Language Model for Edit-based Unsupervised Summarization

  • Re-evaluating Evaluation in Text Summarization

  • Stepwise Extractive Summarization and Planning with Structured Transformers

  • TESA: A Task in Entity Semantic Aggregation for Abstractive Summarization

  • What Have We Achieved on Text Summarization?

  • Factual Error Correction for Abstractive Summarization Models

  • Learning to Fuse Sentences with Transformers for Summarization

  • Modeling Content Importance for Summarization with Pre-trained Language Models

  • Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles

  • Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach

  • Understanding Neural Abstractive Summarization Models via Uncertainty

Findings

  • A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining

  • Conditional Neural Generation using Sub-Aspect Functions for Extractive News Summarization

  • Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers

  • TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising

  • Reducing Quantity Hallucinations in Abstractive Summarization

  • Abstractive Multi-Document Summarization via Joint Learning with Single-Document Summarization

  • Corpora Evaluation and System Bias Detection in Multi-document Summarization

  • Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning

  • Literature Retrieval for Precision Medicine with Neural Matching and Faceted Summarization

  • CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

  • Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures

  • WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization

  • SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy

  • TLDR: Extreme Summarization of Scientific Documents

Others

Main Conference

  • QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines

  • Dynamic Anticipation and Completion for Multi-Hop Reasoning over Sparse Knowledge Graph

  • Language Generation with Multi-hop Reasoning on Commonsense Knowledge Graph

  • Multi-hop Inference for Question-driven Summarization


For more deep learning knowledge and news, follow the WeChat official account 深度学习的知识小屋.
