ACL Literature Notes
[1] Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions (Chung et al., ACL 2023)
https://aclanthology.org/2023.acl-long.34
[2] Pruning Pre-trained Language Models Without Fine-Tuning (Jiang et al., ACL 2023)
https://aclanthology.org/2023.acl-long.35
[3] Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models (Zhang et al., ACL 2023)
https://aclanthology.org/2023.acl-long.95
[4] In-Context Analogical Reasoning with Pre-Trained Language Models (Hu et al., ACL 2023)
https://aclanthology.org/2023.acl-long.109
[5] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (Wang et al., ACL 2023)
https://aclanthology.org/2023.acl-long.147
[6] Causal-Debias: Unifying Debiasing in Pretrained Language Models and Fine-tuning via Causal Invariant Learning (Zhou et al., ACL 2023)
https://aclanthology.org/2023.acl-long.232
[7] KILM: Knowledge Injection into Encoder-Decoder Language Models (Xu et al., ACL 2023)
https://aclanthology.org/2023.acl-long.275
[8] Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models’ Memories (Diao et al., ACL 2023)
https://aclanthology.org/2023.acl-long.280
Pre-trained language models (PLMs) demonstrate excellent ability to understand text in the general domain, but struggle in specific domains. Although continued pre-training on large domain-specific corpora is effective, tuning all of the parameters for each domain is expensive. In this paper, the authors explore whether PLMs can be adapted effectively and efficiently by tuning only a small number of parameters. Specifically, they decompose the feed-forward networks (FFNs) of the Transformer architecture into two parts: the original pre-trained FFNs, which preserve old-domain knowledge, and new domain-specific adapters, which inject domain-specific knowledge in parallel. They then adopt a mixture-of-adapters gate that dynamically fuses knowledge from different domain adapters. The proposed Mixture-of-Domain-Adapters (MixDA) uses a two-stage adapter-tuning strategy that leverages both unlabeled and labeled data to help domain adaptation: i) domain-specific adapters trained on unlabeled data, followed by ii) task-specific adapters trained on labeled data. MixDA can be seamlessly plugged into the pretrain-finetune paradigm, and the experiments show that MixDA achieves superior performance on in-domain tasks (GLUE), out-of-domain tasks (ChemProt, RCT, IMDB, Amazon), and knowledge-intensive tasks (KILT). Further analyses demonstrate the reliability, scalability, and efficiency of the method.
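The core architectural idea in the abstract (a frozen pre-trained FFN in parallel with several trainable domain adapters, fused by a gate) can be sketched in a few lines of PyTorch. The sketch below is only an illustration under assumed details (bottleneck size, GELU activations, a token-level softmax gate, the class names used here); it is not the authors' released MixDA implementation.

```python
# Illustrative sketch of a "mixture of domain adapters" FFN layer.
# All names, sizes, and the gating scheme are assumptions for illustration.
import torch
import torch.nn as nn


class DomainAdapter(nn.Module):
    """Bottleneck adapter intended to hold one domain's injected knowledge."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))


class MixtureOfDomainAdaptersFFN(nn.Module):
    """Frozen pre-trained FFN in parallel with domain adapters, fused by a gate."""

    def __init__(self, d_model: int, d_ff: int, num_domains: int = 2):
        super().__init__()
        # Original pre-trained FFN: frozen so old-domain knowledge is preserved.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        for p in self.ffn.parameters():
            p.requires_grad = False
        # New domain-specific adapters (the only trainable parameters here).
        self.adapters = nn.ModuleList(
            [DomainAdapter(d_model) for _ in range(num_domains)]
        )
        # Gate that mixes adapter outputs token by token (assumed softmax gating).
        self.gate = nn.Linear(d_model, num_domains)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen = self.ffn(x)                                    # old-domain path
        adapter_out = torch.stack([a(x) for a in self.adapters], dim=-1)
        weights = torch.softmax(self.gate(x), dim=-1)           # (..., num_domains)
        mixed = (adapter_out * weights.unsqueeze(-2)).sum(-1)   # fuse adapter outputs
        return frozen + mixed                                   # parallel injection


if __name__ == "__main__":
    layer = MixtureOfDomainAdaptersFFN(d_model=768, d_ff=3072, num_domains=2)
    hidden = torch.randn(4, 16, 768)   # (batch, seq_len, d_model)
    print(layer(hidden).shape)         # torch.Size([4, 16, 768])
```

In this reading, the two-stage strategy would train the `DomainAdapter` modules on unlabeled in-domain data first, then keep them fixed while task-specific adapters are tuned on labeled data.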
[9] Reasoning with Language Model Prompting: A Survey (Qiao et al., ACL 2023)
https://aclanthology.org/2023.acl-long.294
[10] Two-Stage Fine-Tuning for Improved Bias and Variance for Large Pretrained Language Models (Wang et al., ACL 2023)
https://aclanthology.org/2023.acl-long.877
[11] LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings (Xie et al., IJCNLP-AACL 2023)