ACL 2020 Paper Roundup

ACL 2020 Paper Roundup (Main Conference)

Lately I have come to feel that my own paper-writing skills fall short, so I plan to learn from the experts. As a first step, here is a rough overview of ACL 2020 for later study; the content will be expanded over time, either here or in follow-up posts.

ACL 2020 Accepted Papers

Full list of ACL 2020 accepted papers: https://acl2020.org/program/accepted/
ACL 2020 best papers: https://acl2020.org/blog/ACL-2020-best-papers/
ACL Anthology (the one-stop permanent archive of ACL papers): https://www.aclweb.org/anthology/

Best Paper

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin and Sameer Singh

Honorable Mention Papers – Main Conference

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith
Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
Nitika Mathur, Timothy Baldwin and Trevor Cohn

Best Theme Paper

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
Emily M. Bender and Alexander Koller

Honorable Mention Paper – Theme

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Tal Linzen

Best Demonstration Paper

GAIA: A Fine-grained Multimedia Knowledge Extraction System
Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski and Marjorie Freedman

Honorable Mention Papers – Demonstrations

Torch-Struct: Deep Structured Prediction Library
Alexander Rush
Prta: A System to Support the Analysis of Propaganda Techniques in the News
Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño and Preslav Nakov

Papers by Category (topics I am interested in, grouped by title; work in progress)

Pretraining / Language Models

Adaptive Compression of Word Embeddings
Yeachan Kim, Kang-Min Kim and SangKeun Lee
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov and Luke Zettlemoyer
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
Timo Schick and Hinrich Schütze
CluBERT: A Cluster-Based Approach for Learning Sense Distributions in Multiple Languages
Tommaso Pasini, Federico Scozzafava and Bianca Scarlini
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith
Emerging Cross-lingual Structure in Pretrained Language Models
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer and Veselin Stoyanov
Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
Wenyu Du, Zhouhan Lin, Yikang Shen, Timothy J. O’Donnell, Yoshua Bengio and Yue Zhang
Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning
Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon and Kyomin Jung
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han and Chenliang Li
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
Dan Iter, Kelvin Guu, Larry Lansing and Dan Jurafsky
Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
Forrest Davis and Marten van Schijndel
Roles and Utilization of Attention Heads in Transformer-based Neural Language Models
Jae-young Jo and Sung-Hyon Myaeng
Unsupervised Domain Clusters in Pretrained Language Models
Roee Aharoni and Yoav Goldberg
A Two-Stage Masked LM Method for Term Set Expansion
Guy Kushilevitz, Shaul Markovitch and Yoav Goldberg
Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods
Ning Miao, Yuxuan Song, Hao Zhou and Lei Li
Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention
Yanzeng Li, Bowen Yu, Xue Mengge and Tingwen Liu
Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs
Hong-You Chen, Sz-Han Yu and Shou-de Lin
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Nora Kassner and Hinrich Schütze
Overestimation of Syntactic Representation in Neural Language Models
Jordan Kodner and Nitish Gupta
Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan and Dawn Song
Stolen Probability: A Structural Weakness of Neural Language Models
David Demeter, Gregory Kimmel and Doug Downey
