2021-07-09

Reposted from the Zhihu column NLP工作站, by 刘聪NLP.

The papers are grouped into 10 categories: (1) pre-trained language models and applications (58 papers); (2) representation learning (9); (3) question answering and retrieval (42); (4) text generation (29); (5) summarization (23); (6) few-shot learning (16); (7) dialogue (32); (8) sentiment and emotion analysis (15); (9) information extraction (60); (10) other (21).

Update: see also 刘聪NLP's companion post, "ACL 2021 Findings paper roundup and classification".

1. Pre-trained Language Models and Applications
Long

(1)How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models

https://arxiv.org/abs/2012.15613

(2)Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

https://arxiv.org/abs/2012.01266

(3)How is BERT surprised? Layerwise detection of linguistic anomalies

https://arxiv.org/abs/2105.07452

(4)Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization

https://arxiv.org/abs/2105.12002

(5)R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling

(6)IrEne: Interpretable Energy Prediction for Transformers

https://arxiv.org/abs/2106.01199

(7)GhostBERT: Generate More Features with Cheap Operations for BERT

(8)Syntax-Enhanced Pre-trained Model

https://arxiv.org/abs/2012.14116

(9)PLOME: Pre-training with Misspelled Knowledge for Chinese Spelling Correction

(10)EnsLM: Ensemble Language Model for Data Diversity by Semantic Clustering

(11)StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling

https://arxiv.org/abs/2012.00857

(12)Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models

https://arxiv.org/abs/2106.05505

(13)Implicit Representations of Meaning in Neural Language Models

https://arxiv.org/abs/2106.00737

(14)ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning

https://arxiv.org/abs/2012.15022

(15)Improving Formality Style Transfer with Context-Aware Rule Injection

https://arxiv.org/abs/2106.00210

(16)BinaryBERT: Pushing the Limit of BERT Quantization

https://arxiv.org/abs/2012.15701

(17)Shortformer: Better Language Modeling using Shorter Inputs

https://arxiv.org/abs/2012.15832

(18)Making Pre-trained Language Models Better Few-shot Learners

https://arxiv.org/abs/2012.15723

(19)ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information

(20)Are Pretrained Convolutions Better than Pretrained Transformers?

https://arxiv.org/abs/2105.03322

(21)ERNIE-Doc: A Retrospective Long-Document Modeling Transformer

https://arxiv.org/abs/2012.15688

(22)LeeBERT: Learned Early Exit for BERT with cross-level optimization

(23)Positional Artefacts Propagate Through Masked Language Model Embeddings

https://arxiv.org/abs/2011.04393

(24)Optimizing Deeper Transformers on Small Datasets

https://arxiv.org/abs/2012.15355

(25)When Do You Need Billions of Words of Pretraining Data?

https://arxiv.org/abs/2011.04946

(26)Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

https://arxiv.org/abs/2106.09231

(27)EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

https://arxiv.org/abs/2101.00063

(28)SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

(29)Structural Guidance for Transformer Language Models

(30)MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding

https://arxiv.org/abs/2106.01541

(31)Language Model Evaluation Beyond Perplexity

https://arxiv.org/abs/2106.00085

(32)BERTGen: Multi-task Generation through BERT

https://arxiv.org/abs/2106.03484

(33)Pre-training Universal Language Representation

https://arxiv.org/abs/2105.14478

(34)Cascaded Head-colliding Attention

https://arxiv.org/abs/2105.14850

(35)Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

https://arxiv.org/abs/2106.04489

(36)Accelerating BERT Inference for Sequence Labeling via Early-Exit

https://arxiv.org/abs/2105.13878

(37)AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models

(38)Lexicon Enhanced Chinese Sequence Labeling Using BERT Adapter

https://arxiv.org/abs/2105.07148

(39)On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

https://arxiv.org/abs/2106.03164

(40)Taming Pre-trained Language Models with N-gram Representations for Low-Resource Domain Adaptation

(41)Marginal Utility Diminishes: Exploring the Minimum Knowledge for BERT Knowledge Distillation

https://arxiv.org/abs/2106.05691

(42)Obtaining Better Static Word Embeddings Using Contextual Embedding Models

https://arxiv.org/abs/2106.04302

(43)Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

https://arxiv.org/abs/2010.08566

(44)Reservoir Transformers

https://arxiv.org/abs/2012.15045

(45)LexFit: Lexical Fine-Tuning of Pretrained Language Models

(46)Selecting Informative Contexts Improves Language Model Fine-tuning

https://arxiv.org/abs/2005.00175

(47)BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?

https://arxiv.org/abs/2105.04949

(48)Examining the Inductive Bias of Neural Language Models with Artificial Languages

https://arxiv.org/abs/2106.01044

(49)An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models

https://arxiv.org/abs/2106.09204

(50)BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks

(51)Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

https://arxiv.org/abs/2106.02205

(52)Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search

https://arxiv.org/abs/2010.07003

(53)H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences

Short

(54)Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling

https://arxiv.org/abs/2106.01040

(55)Is Sparse Attention more Interpretable?

https://arxiv.org/abs/2106.01087

(56)Learning to Generate Task-Specific Adapters from Task Description

https://arxiv.org/abs/2101.00420

(57)Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

https://arxiv.org/abs/2105.06947?context=cs

(58)Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence

https://arxiv.org/abs/2004.03974

2. Representation Learning
Long

(1)DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations

https://arxiv.org/abs/2006.03659 (contrastive learning)

(2)Automated Concatenation of Embeddings for Structured Prediction

https://arxiv.org/abs/2010.05006

(3)Lightweight Cross-Lingual Sentence Representation Learning

https://arxiv.org/abs/2105.13856

(4)ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer

https://arxiv.org/abs/2105.11741

(5)Dynamic Contextualized Word Embeddings

https://arxiv.org/abs/2010.12684

(6)Self-Guided Contrastive Learning for BERT Sentence Representations

https://arxiv.org/abs/2106.07345

(7)Bootstrapped Unsupervised Sentence Representation Learning

Short

(8)Attentive Multiview Text Representation for Differential Diagnosis

(9)DefSent: Sentence Embeddings using Definition Sentences

https://arxiv.org/abs/2105.04339

3. Question Answering and Retrieval
Long

(1)Evaluating Evaluation Measures for Ordinal Classification and Ordinal Quantification

(2)Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering

https://www.amazon.science/publications/dual-reader-parser-on-hybrid-textual-and-tabular-evidence-for-open-domain-question-answering

(3)Explanations for CommonsenseQA: New Dataset and Models

https://zenodo.org/record/4784281#.YNngJvkzZsY

(4)Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction

https://arxiv.org/abs/2011.13137

(5)Improving Document Representations by Generating Pseudo Query Embeddings for Dense Retrieval

https://arxiv.org/abs/2105.03599

(6)CoSQA: 20,000+ Web Queries for Code Search and Question Answering

https://arxiv.org/abs/2105.13239

(7)Coreference Reasoning in Machine Reading Comprehension

https://arxiv.org/pdf/2012.15573.pdf

(8)End-to-End Training of Neural Retrievers for Open-Domain Question Answering

https://arxiv.org/abs/2101.00408

(9)Few-Shot Question Answering by Pretraining Span Selection

https://arxiv.org/abs/2101.00438

(10)Integrating Semantics and Neighborhood Information with Graph-Driven Generative Models for Document Retrieval

https://arxiv.org/abs/2105.13066

(11)Robustifying Multi-hop QA through Pseudo-Evidentiality Training

(12)Learning Dense Representations of Phrases at Scale

https://arxiv.org/abs/2012.12624

(13)Generation-Augmented Retrieval for Open-Domain Question Answering

https://arxiv.org/abs/2009.08553

(14)xMoCo: Cross Momentum Contrastive Learning for Open-Domain Question Answering

(15)TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance

https://arxiv.org/abs/2105.07624

(16)A Semantic-based Method for Unsupervised Commonsense Question Answering

https://arxiv.org/abs/2105.14781

(17)A Neural Model for Joint Document and Snippet Ranking in Question Answering for Large Document Collections

https://arxiv.org/abs/2106.08908

(18)Challenges in Information-Seeking QA: Unanswerable Questions and Paragraph Retrieval

https://arxiv.org/abs/2010.11915

(19)Question Answering Over Temporal Knowledge Graphs

https://arxiv.org/abs/2106.01515

(20)Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?

https://arxiv.org/abs/2106.01561

(21)Article Reranking by Memory-Enhanced Key Sentence Matching for Detecting Previously Fact-Checked Claims

(22)UnitedQA: A Hybrid Approach for Open Domain Question Answering

https://arxiv.org/abs/2101.00178

(23)ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data

https://arxiv.org/abs/2005.00792

(24)On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study

https://arxiv.org/abs/2106.00872

(25)Multi-task Retrieval for Knowledge-Intensive Tasks

https://arxiv.org/abs/2101.00117

(26)Joint Models for Answer Verification in Question Answering Systems

(27)Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering

https://arxiv.org/abs/2101.00391

(28)Modeling Transitions of Focal Entities for Conversational Knowledge Base Question Answering

(29)A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering

https://arxiv.org/abs/2106.07174

(30)Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering

(31)Learning to Perturb Word Embeddings for Out-of-distribution QA

https://arxiv.org/abs/2105.02692

Short

(32)The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes

https://arxiv.org/abs/2012.14210

(33)DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications

https://arxiv.org/abs/2004.11142

(34)Towards a more Robust Evaluation for Conversational Question Answering

(35)Training Adaptive Computation for Open-Domain Question Answering with Computational Constraints

(36)Efficient Passage Retrieval with Hashing for Open-domain Question Answering

https://arxiv.org/abs/2106.00882

(37)Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models

https://arxiv.org/abs/2105.11136

(38)VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension

https://arxiv.org/abs/2105.03229

(39)Towards more equitable question answering systems: How much more data do you need?

https://arxiv.org/abs/2105.14115

(40)A Semantics-aware Transformer Model of Relation Linking for Knowledge Base Question Answering

(41)Neural Retrieval for Question Answering with Cross-Attention Supervised Data Augmentation

https://arxiv.org/abs/2009.13815

(42)Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction

4. Text Generation
Long

(1)Generalising Multilingual Concept-to-Text NLG with Language Agnostic Delexicalisation

https://arxiv.org/abs/2105.03432

(2)Prefix-Tuning: Optimizing Continuous Prompts for Generation

https://arxiv.org/abs/2101.00190

(3)Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models

https://arxiv.org/abs/2101.00288

(4)Competence-based Multimodal Curriculum Learning for Medical Report Generation

(5)BACO: A Background Knowledge- and Content-Based Framework for Citing Sentence Generation

(6)Mention Flags (MF): Constraining Transformer-based Text Generators

(7)Guiding the Growth: Difficulty-Controllable Question Generation through Step-by-Step Rewriting

https://arxiv.org/abs/2105.11698

(8)Improving Encoder by Auxiliary Supervision Tasks for Table-to-Text Generation

(9)Writing by Memorizing: Hierarchical Retrieval-based Medical Report Generation

https://arxiv.org/abs/2106.06471

(10)Data Augmentation for Text Generation Without Any Augmented Data

https://arxiv.org/abs/2105.13650

(11)Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence

https://arxiv.org/abs/2105.08963

(12)PENS: A Dataset and Generic Framework for Personalized News Headline Generation

https://www.microsoft.com/en-us/research/uploads/prod/2021/06/ACL2021_PENS_Camera_Ready_1862_Paper.pdf

(13)De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation

(14)Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation

https://arxiv.org/abs/2106.06125

(15)Employing Argumentation Knowledge Graphs for Neural Argument Generation

(16)Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention

https://arxiv.org/abs/2008.01739

(17)DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions

https://arxiv.org/abs/2106.05365

(18)GTM: A Generative Triple-wise Model for Conversational Question Generation

https://arxiv.org/abs/2106.03635

(19)All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text

(20)A Hierarchical VAE for Calibrating Attributes while Generating Text using Normalizing Flow

(21)DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation

https://arxiv.org/abs/2106.00791

(22)Controllable Open-ended Question Generation with A New Question Type Ontology

https://web.eecs.umich.edu/~wangluxy/papers/ACL2021_cao_wang.pdf

(23)DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts

https://arxiv.org/abs/2105.03023

(24)Towards Table-to-Text Generation with Numerical Reasoning

(25)TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models

Short

(26)On Training Instance Selection for Few-Shot Neural Text Generation

(27)How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation?

(28)Question Generation for Adaptive Education

https://arxiv.org/abs/2106.04262

(29)Avoiding Overlap in Data Augmentation for AMR-to-Text Generation

5. Summarization
Long

(1)Cross-Lingual Abstractive Summarization with Limited Parallel Resources

https://arxiv.org/abs/2105.13648

(2)Unsupervised Extractive Summarization-Based Representations for Accurate and Explainable Collaborative Filtering

(3)Improving Factual Consistency of Abstractive Summarization via Question Answering

https://arxiv.org/abs/2105.04623

(4)Long-Span Summarization via Local Attention and Content Selection

https://arxiv.org/abs/2105.03801

(5)RepSum: Unsupervised Dialogue Summarization based on Replacement Strategy

(6)TWAG: A Topic-Guided Wikipedia Abstract Generator

(7)Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization

https://arxiv.org/abs/2105.12544

(8)BASS: Boosting Abstractive Summarization with Unified Semantic Graph

https://arxiv.org/abs/2105.12041

(9)Focus Attention: Promoting Faithfulness and Diversity in Summarization

https://arxiv.org/abs/2105.11921

(10)Deep Differential Amplifier for Extractive Summarization

(11)Generating Query Focused Summaries from Query-Free Resources

https://arxiv.org/abs/2012.14774

(12)PASS: Perturb-and-Select Summarizer for Product Reviews

https://www.amazon.science/publications/pass-perturb-and-select-summarizer-for-product-reviews

(13)ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining

https://arxiv.org/abs/2106.00829

(14)Multi-TimeLine Summarization (MTLS): Improving Timeline Summarization by Generating Multiple Summaries

(15)EmailSum: Abstractive Email Thread Summarization

(16)Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution

https://arxiv.org/abs/2106.01518

(17)A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance and Self-referenced Redundancy

https://arxiv.org/abs/2106.13945

(18)Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques

https://arxiv.org/abs/2005.01795

Short

(19)WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation

https://registry.opendata.aws/wikisum/

(20)Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents

https://arxiv.org/abs/2106.00130

(21)Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards

(22)Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning

https://arxiv.org/abs/2105.14241

(23)SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization

https://arxiv.org/abs/2106.01890

6. Few-shot Learning
Long

(1)Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

https://arxiv.org/abs/2012.01266

(2)Multi-Label Few-Shot Learning for Aspect Category Detection

https://arxiv.org/abs/2105.14174

(3)ProtAugment: Intent Detection Meta-Learning through Unsupervised Diverse Paraphrasing

https://arxiv.org/abs/2105.12995

(4)Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision

https://arxiv.org/abs/2012.14862

(5)AugNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation

https://arxiv.org/abs/2106.05589

(6)A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations

(7)Evaluating morphological typology in zero-shot cross-lingual transfer

(8)Lexicon Learning for Few Shot Sequence Modeling

https://arxiv.org/abs/2106.03993

(9)To POS Tag or Not to POS Tag: The Impact of POS Tags on Morphological Learning in Low-Resource Settings

(10)Meta-Learning to Compositionally Generalize

https://arxiv.org/abs/2106.04252

(11)Risk Minimization for Zero-shot Sequence Labeling

http://faculty.sist.shanghaitech.edu.cn/faculty/tukw/acl21rm.pdf

Short

(12)QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining

(13)Zero-shot Fact Verification by Claim Generation

https://arxiv.org/abs/2105.14682

(14)Distinct Label Representations for Few-Shot Text Classification

(15)Zero-shot Event Extraction via Transfer Learning: Challenges and Insights

(16)Issues with Entailment-based Zero-shot Text Classification

7. Dialogue
Long

(1)TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems

https://arxiv.org/abs/2012.12458

(2)SocAoG: Incremental Graph Parsing for Social Relation Inference in Dialogues

https://arxiv.org/abs/2106.01006

(3)HERALD: An Annotation Efficient Method to Detect User Disengagement in Social Conversations

https://arxiv.org/abs/2106.00162

(4)Comprehensive Study: How the Context Information of Different Granularity Affects Dialogue State Tracking?

https://arxiv.org/abs/2105.03571

(5)Discovering Dialog Structure Graph for Coherent Dialog Generation

(6)Dialogue Response Selection with Hierarchical Curriculum Learning

https://arxiv.org/abs/2012.14756

(7)Diversifying Dialog Generation via Adaptive Label Smoothing

https://arxiv.org/abs/2105.14556

(8)BoB: BERT Over BERT for Training Persona-based Dialogue Models from Limited Personalized Data

https://arxiv.org/abs/2106.06169

(9)I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling

https://arxiv.org/abs/2012.13391

(10)Towards Quantifiable Dialogue Coherence Evaluation

https://arxiv.org/abs/2106.00507

(11)A Sequence-to-Sequence Approach to Dialogue State Tracking

https://arxiv.org/abs/2011.09553

(12)Dual Slot Selector via Local Reliability Verification for Dialogue State Tracking

(13)Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training

https://arxiv.org/abs/2105.15171

(14)Novel Slot Detection: A Benchmark for Discovering Unknown Slot Types in the Task-Oriented Dialogue System

https://arxiv.org/abs/2105.14313

(15)RADDLE: An Evaluation Benchmark and Analysis Platform for Robust Task-oriented Dialog Systems

https://arxiv.org/abs/2012.14666

(16)Learning to Ask Conversational Questions by Optimizing Levenshtein Distance

(17)Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances

https://arxiv.org/abs/2106.02227

(18)Semantic Representation for Dialogue Modeling

https://arxiv.org/abs/2105.10188

(19)Towards Emotional Support Dialog Systems

https://arxiv.org/abs/2106.01144

(20)Discovering Dialogue Slots with Weak Supervision

(21)Structural Pre-training for Dialogue Comprehension

https://arxiv.org/abs/2105.10956

(22)Transferable Dialogue Systems and User Simulators

(23)Improving Dialog Systems for Negotiation with Personality Modeling

(24)TIMEDIAL: Temporal Commonsense Reasoning in Dialog

https://arxiv.org/abs/2106.04571

(25)Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features

(26)GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling

https://arxiv.org/abs/2106.01925

(27)DynaEval: Unifying Turn and Dialogue Level Evaluation

https://arxiv.org/abs/2106.01112

Short

(28)Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries

https://arxiv.org/abs/2012.01873

(29)Preview, Attend and Review: Schema-Aware Curriculum Learning for Multi-Domain Dialogue State Tracking

https://arxiv.org/abs/2106.00291

(30)Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking

(31)Domain-Adaptive Pretraining Methods for Dialogue Understanding

https://arxiv.org/abs/2105.13665

(32)PRAL: A Tailored Pre-Training Model for Task-Oriented Dialog Generation

https://arxiv.org/abs/2004.13835

8. Sentiment and Emotion Analysis
Long

(1)Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis

(2)Directed Acyclic Graph Network for Conversational Emotion Recognition

https://arxiv.org/abs/2105.12907

(3)DynaSent: A Dynamic Benchmark for Sentiment Analysis

https://arxiv.org/abs/2012.15349

(4)Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction

https://arxiv.org/abs/2106.03518

(5)Topic-Driven and Knowledge-Aware Transformer for Dialogue Emotion Detection

https://arxiv.org/abs/2106.01071

(6)Distributed Representations of Emotion Categories in Emotion Space

(7)DialogueCRN: Contextual Reasoning Networks for Emotion Recognition in Conversations

https://arxiv.org/abs/2106.01978

(8)Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities

(9)A Unified Generative Framework for Aspect-based Sentiment Analysis

https://arxiv.org/abs/2106.04300

(10)Exploring the Efficacy of Automatically Generated Counterfactuals for Sentiment Analysis

(11)Structured Sentiment Analysis as Dependency Graph Parsing

https://arxiv.org/abs/2105.14504

(12)Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions

Short

(13)Deep Context- and Relation-Aware Learning for Aspect-based Sentiment Analysis

https://arxiv.org/abs/2106.03806

(14)Towards Generative Aspect-Based Sentiment Analysis

(15)eMLM: A New Pre-training Objective for Emotion Related Tasks

9. Information Extraction
Long

(1)Named Entity Recognition with Small Strongly Labeled and Large Weakly Labeled Data

https://arxiv.org/abs/2106.08977

(2)Competence-based Multimodal Curriculum Learning for Medical Report Generation

https://arxiv.org/abs/2105.06804

(3)OntoED: Low-resource Event Detection with Ontology Embedding

https://arxiv.org/abs/2105.10922

(4)Subsequence Based Deep Active Learning for Named Entity Recognition

(5)BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised Named Entity Recognition

https://arxiv.org/abs/2105.12848

(6)Knowledge-Enriched Event Causality Identification via Latent Structure Induction Networks

(7)Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker

https://arxiv.org/abs/2105.14924

(8)A Large-Scale Chinese Multimodal NER Dataset with Speech Clues

(9)LearnDA: Learnable Knowledge-Guided Data Augmentation for Event Causality Identification

https://arxiv.org/abs/2106.01649

(10)CIL: Contrastive Instance Learning Framework for Distantly Supervised Relation Extraction

https://arxiv.org/abs/2106.10855

(11)Few-NERD: A Few-shot Named Entity Recognition Dataset

https://arxiv.org/abs/2105.07464

(12)SENT: Sentence-level Distant Relation Extraction via Negative Training

https://arxiv.org/abs/2106.11566?context=cs

(13)Modularized Interaction Network for Named Entity Recognition

(14)Capturing Event Argument Interaction via A Bi-Directional Entity-Level Recurrent Decoder

(15)A Span-Based Model for Joint Overlapped and Discontinuous Named Entity Recognition

(16)An End-to-End Progressive Multi-Task Learning Framework for Medical Named Entity Recognition and Normalization

(17)MLBiNet: A Cross-Sentence Collective Event Detection Network

https://arxiv.org/abs/2105.09458

(18)PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction

https://arxiv.org/abs/2106.09895

(20)Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning

https://arxiv.org/abs/2105.03654

(21)Leveraging Type Descriptions for Zero-shot Named Entity Recognition and Classification

(22)Revisiting the Negative Data of Distantly Supervised Relation Extraction

https://arxiv.org/abs/2105.10158

(23)Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition

(24)Joint Biomedical Entity and Relation Extraction with Knowledge-Enhanced Collective Inference

https://arxiv.org/abs/2105.13456

(25)Nested Named Entity Recognition via Explicitly Excluding the Influence of the Best Path

(27)How Knowledge Graph and Attention Help? A Qualitative Analysis into Bag-level Relation Extraction

(28)From Discourse to Narrative: Knowledge Projection for Event Relation Extraction

https://arxiv.org/abs/2106.08629

(29)Fine-grained Information Extraction from Biomedical Literature based on Knowledge-enriched Abstract Meaning Representation

(30)A Unified Generative Framework for Various NER Subtasks

https://arxiv.org/abs/2106.01223

(31)MECT: Multi-Metadata Embedding based Cross-Transformer for Chinese Named Entity Recognition

(32)Unleash GPT-2 Power for Event Detection

(33)Trigger is Not Sufficient: Exploiting Frame-aware Knowledge for Implicit Event Argument Extraction

(34)Element Intervention for Open Relation Extraction

https://arxiv.org/abs/2106.09558

(35)Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction

https://arxiv.org/abs/2106.09232

(36)CLEVE: Contrastive Pre-training for Event Extraction

https://arxiv.org/abs/2105.14485

(37)MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER

https://raihanjoty.github.io/papers/linlin-et-al-acl-21.html

(38)De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention

https://arxiv.org/abs/2106.09233

(39)UniRE: A Unified Label Space for Entity Relation Extraction

(40)Crowdsourcing Learning as Domain Adaptation: A Case Study on Named Entity Recognition

https://arxiv.org/abs/2105.14980

(41)Modeling Fine-Grained Entity Types with Box Embeddings

https://arxiv.org/abs/2101.00345

(42)CoRI: Collective Relation Integration with Data Augmentation for Open Information Extraction

https://arxiv.org/abs/2106.00793

(43)CitationIE: Leveraging the Citation Graph for Scientific Information Extraction

https://arxiv.org/abs/2106.01560

(44)Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks

(45)Discontinuous Named Entity Recognition as Maximal Clique Discovery

https://arxiv.org/abs/2106.00218

(46)Weakly Supervised Named Entity Tagging with Learnable Logical Rules

(47)SpanNER: Named Entity Re-/Recognition as Span Prediction

https://arxiv.org/abs/2106.00641

(48)Refining Sample Embeddings with Relation Prototypes to Enhance Continual Relation Extraction

https://www.researchgate.net/publication/352257560_Refining_Sample_Embeddings_with_Relation_Prototypes_to_Enhance_Continual_Relation_Extraction

(49)Document-level Event Extraction via Parallel Prediction Networks

(50)Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

(51)The Possible, the Plausible, and the Desirable: Event-Based Modality Detection for Language Processing

https://arxiv.org/abs/2106.08037

(52)A Neural Transition-based Joint Model for Disease Named Entity Recognition and Normalization

Short

(53)TIMERS: Document-level Temporal Relation Extraction

(54)ROPE: Reading Order Equivariant Positional Encoding for Graph-based Document Information Extraction

https://arxiv.org/abs/2106.10786

(55)Enhancing Entity Boundary Detection for Better Chinese Named Entity Recognition

(56)Entity Enhancement for Implicit Discourse Relation Classification in the Biomedical Domain

(57)Entity Concept-enhanced Few-shot Relation Extraction

https://arxiv.org/abs/2106.02401

(58)Improving Model Generalization: A Chinese Named Entity Recognition Case Study

(59)Explicitly Capturing Relations between Entity Mentions via Graph Neural Networks for Domain-specific Named Entity Recognition

(60)Three Sentences Are All You Need: Local Path Enhanced Document Relation Extraction

https://arxiv.org/abs/2106.01793

10. Other
(1)Semi-Supervised Text Classification with Balanced Deep Representation Distributions

(2)Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

https://arxiv.org/abs/2006.11627

(3)Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning

https://arxiv.org/abs/2105.04165

(4)Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification

https://arxiv.org/abs/2105.02657

(5)Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification

(6)Joint Verification and Reranking for Open Fact Checking Over Tables

https://arxiv.org/abs/2012.15115

(7)Structural Knowledge Distillation: Tractably Distilling Information for Structured Predictor

https://arxiv.org/abs/2010.05010

(8)UnNatural Language Inference

https://arxiv.org/abs/2101.00010

(9)OoMMix: Out-of-manifold Regularization in Contextual Embedding Space for Text Classification

https://arxiv.org/abs/2105.06750

(10)Database Reasoning Over Text

https://arxiv.org/abs/2106.01074

(11)Towards Robustness of Text-to-SQL Models against Synonym Substitution

https://arxiv.org/abs/2106.01065

(12)Determinantal Beam Search

https://arxiv.org/abs/2106.07400

(13)POS-Constrained Parallel Decoding for Non-autoregressive Generation

(14)Hierarchy-aware Label Semantics Matching Network for Hierarchical Text Classification

(15)Multi-View Cross-Lingual Structured Prediction with Minimum Supervision

http://faculty.sist.shanghaitech.edu.cn/faculty/tukw/acl21mv.pdf

(16)Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL

(17)Factoring Statutory Reasoning as Language Understanding Challenges

https://arxiv.org/abs/2105.07903

(18)HiddenCut: Simple Data Augmentation for Natural Language Understanding with Better Generalization

https://arxiv.org/abs/2106.00149

(19)KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers

https://arxiv.org/abs/2106.11455

(20)Automatic ICD Coding via Interactive Shared Representation Networks with Self-distillation Mechanism

(21)Alignment Rationale for Natural Language Inference
