小皮肚鼓嘟嘟
ACL-IJCNLP 2021: Sentiment Analysis Paper Roundup
Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis. GitHub: https://github.com/CCChenhao997/DualGCN-ABSA
Multi-Label Few-Shot Learning for Aspect Category Detection. Abstract: Aspect category detection (ACD) in sentiment analysis aims to identify…
Original post · 2021-07-09 16:11:42 · 2622 views · 1 comment
NAACL 2021: Sentiment Analysis Paper Roundup
North American Chapter of the Association for Computational Linguistics (2021)
Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa. Junqi Dai | Hang Yan | Tianxiang Sun | Pengfei Liu | Xipeng Qiu
ASAP: A Chinese Review Dataset…
Original post · 2021-06-21 18:45:17 · 633 views · 1 comment
Reading notes: UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost
Skipping the preliminaries and getting straight to the point: the paper adds three kinds of dropout to the Transformer: feature dropout, structure dropout, and data dropout. The authors argue these play different roles in preventing the Transformer from overfitting and in improving model robustness. Feature Dropout (FD): besides the dropout already present in each Transformer layer…
Original post · 2021-05-28 13:52:12 · 440 views · 0 comments
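The preview cuts off before the details of Feature Dropout, but the general idea of dropping intermediate features can be sketched in PyTorch. A minimal illustration, not the paper's exact configuration: the module name and the placement of the extra nn.Dropout call are assumptions.

```python
import torch.nn as nn

class FeatureDropoutFFN(nn.Module):
    """Position-wise FFN with an extra dropout on the hidden features.

    Illustrative sketch only: UniDrop defines several feature-dropout
    variants; here we just show one additional nn.Dropout applied to
    an intermediate feature map.
    """
    def __init__(self, d_model=512, d_ff=2048, p_feature=0.1):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.act = nn.ReLU()
        self.feature_drop = nn.Dropout(p_feature)  # the "feature dropout"

    def forward(self, x):
        # drop hidden features after the activation, then project back
        return self.fc2(self.feature_drop(self.act(self.fc1(x))))
```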
Reading notes: A Frustratingly Easy Approach for Entity and Relation Extraction (NAACL 2021)
A Frustratingly Easy Approach for Entity and Relation Extraction (since when is there a title-length limit? ?_? The full title would not even fit). Skipping the background and related work, straight to the method. Ideas: different entity pairs get different contextual representations. NER and RE are encoded by two separate models, i.e., RE does not share the representations produced by the NER encoder. Using additional markers to highlight the…
Original post · 2021-05-27 18:21:37 · 337 views · 0 comments
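The preview is cut off, but the "additional markers" idea can be sketched concretely: wrap the candidate subject and object spans in typed marker tokens before feeding the sentence to the relation encoder. The marker strings and span format below are assumptions for illustration; the paper ties the markers to the entities' predicted types.

```python
# Sketch: insert typed entity markers around a subject/object span pair.

def insert_markers(tokens, subj_span, obj_span, subj_type="PER", obj_type="ORG"):
    """tokens: list of wordpiece strings.
    subj_span, obj_span: (start, end) inclusive indices into tokens."""
    s0, s1 = subj_span
    o0, o1 = obj_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s0:
            out.append(f"[S:{subj_type}]")   # open subject marker
        if i == o0:
            out.append(f"[O:{obj_type}]")    # open object marker
        out.append(tok)
        if i == s1:
            out.append(f"[/S:{subj_type}]")  # close subject marker
        if i == o1:
            out.append(f"[/O:{obj_type}]")   # close object marker
    return out

print(insert_markers(["Bill", "works", "at", "Acme"], (0, 0), (3, 3)))
# ['[S:PER]', 'Bill', '[/S:PER]', 'works', 'at', '[O:ORG]', 'Acme', '[/O:ORG]']
```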
Reading notes: Improving BERT with Syntax-aware Local Attention
Li, Z., Zhou, Q., Li, C., Xu, K., & Cao, Y. (2020). Improving BERT with Syntax-aware Local Attention.
Several studies have found that self-attention can be strengthened with local attention, i.e., by restricting the attention span to important local regions. Some work computes local attention over dynamic or fixed windows; other work uses syntactic constraints. This paper proposes a syntax-aware local attention (SLA) method. Approach: starting from the model figure, the syntactic structure is derived from the dependency parse and treated as an undirected tree. Each token x…
Original post · 2021-01-03 17:13:51 · 541 views · 0 comments
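The preview stops before SLA's definition, but the core mechanic of restricting attention to syntactic neighborhoods can be sketched: treat the dependency tree as an undirected graph and let a token attend only to tokens within tree distance d. The threshold d and the mask convention (True = may attend) are assumptions for illustration.

```python
from collections import deque

# Sketch: build a syntax-aware local attention mask from a dependency tree.

def syntax_mask(heads, d=2):
    """heads[i] = index of token i's head (-1 for the root).

    Treat the dependency tree as an undirected graph; token i may
    attend to token j only if their tree distance is at most d."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    mask = [[False] * n for _ in range(n)]
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:                      # BFS up to depth d
            u = q.popleft()
            if dist[u] == d:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for j in dist:
            mask[src][j] = True
    return mask

# "He eats food": heads = [1, -1, 1]; with d=1 each word may attend
# to the verb, and the verb to everything.
print(syntax_mask([1, -1, 1], d=1))
```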
Reading notes: Utilizing BERT Intermediate Layers for ABSA and NLI
Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference
Paper: https://arxiv.org/pdf/2002.04815.pdf
The paper points out that BERT fine-tuning usually just adds an extra output layer on top of BERT, ignoring the semantic knowledge contained in the intermediate layers, even though each layer captures a different level of representation. The authors design two pooling strategies, LSTM and attention, to aggregate the [CLS] token representation of every layer. Exper…
Original post · 2020-07-13 10:44:50 · 359 views · 0 comments
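A minimal sketch of the LSTM pooling strategy over per-layer [CLS] vectors, using the huggingface transformers API. The head hyperparameters (LSTM size, number of classes) are assumptions; the paper's exact head may differ.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
lstm = nn.LSTM(input_size=768, hidden_size=768, batch_first=True)
classifier = nn.Linear(768, 3)  # e.g. 3 sentiment classes (assumption)

inputs = tokenizer("the battery life is great", return_tensors="pt")
with torch.no_grad():
    outputs = bert(**inputs)

# hidden_states is a tuple of 13 tensors (embeddings + 12 layers), each
# (batch, seq_len, 768); take the [CLS] vector (position 0) of each layer.
cls_per_layer = torch.stack(
    [h[:, 0] for h in outputs.hidden_states[1:]], dim=1
)  # (batch, 12, 768)

_, (h_n, _) = lstm(cls_per_layer)  # treat the 12 layers as a sequence
logits = classifier(h_n[-1])       # classify from the final LSTM state
print(logits.shape)                # torch.Size([1, 3])
```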
Reading notes: Enriching BERT with Knowledge Graph Embeddings for Document Classification
Code: https://github.com/malteos/pytorch-bert-document-classification
Sub-task A: classify a book into one of eight classes; sub-task B: classify over the 93 second-level and 242 third-level labels (343 labels in total). Architecture: Title/Text: a short descriptive text (blurb) wi…
Original post · 2020-05-10 12:49:08 · 582 views · 0 comments
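The preview cuts off at the architecture description, but the enrichment idea can be sketched: concatenate the BERT representation of the title/blurb with a knowledge-graph author embedding and metadata features before the classifier. Dimensions and feature names below are assumptions, not the repository's exact configuration.

```python
import torch
import torch.nn as nn

class EnrichedClassifier(nn.Module):
    """Sketch: BERT output + KG author embedding + metadata -> classifier."""
    def __init__(self, bert_dim=768, kg_dim=200, meta_dim=10, n_classes=8):
        super().__init__()
        self.classifier = nn.Linear(bert_dim + kg_dim + meta_dim, n_classes)

    def forward(self, bert_cls, author_kg_emb, metadata):
        # bert_cls: (batch, 768) pooled BERT output for title + blurb
        # author_kg_emb: (batch, 200) pre-trained author KG embedding
        # metadata: (batch, 10) numeric features (assumed for illustration)
        features = torch.cat([bert_cls, author_kg_emb, metadata], dim=-1)
        return self.classifier(features)

model = EnrichedClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 200), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 8]) -- sub-task A's eight classes
```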
Share: ERNIE converted to a PyTorch pre-trained model
Source: https://github.com/nghuyong/ERNIE-Pytorch
The converted files are hosted on Google Drive, which is inconvenient to download, so they are shared here. ERNIE converted to the huggingface format: ERNIE 1.0 Base for Chinese (pre-train step, max-seq-len 128) with params, config and vocabs. Extraction code: ugzs...
Original post · 2020-04-11 10:31:40 · 1609 views · 3 comments
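Once downloaded, the converted checkpoint can be loaded like any huggingface-format model; since ERNIE 1.0 shares BERT's architecture, the standard BERT classes work. A minimal sketch, assuming the directory with params, config and vocab is saved locally at a placeholder path.

```python
import torch
from transformers import BertModel, BertTokenizer

path = "./ernie-1.0-base-chinese"  # placeholder: local directory of the download
tokenizer = BertTokenizer.from_pretrained(path)
model = BertModel.from_pretrained(path)

# Encode a Chinese sentence and inspect the output shape.
inputs = tokenizer("今天天气真好", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```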