1 Culture and Morality in Large Language Models
- Knowledge of cultural moral norms in large language models
- Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
2 Long-Text Inference
- Open-ended Long Text Generation via Masked Language Modeling
- Efficient Streaming Language Models with Attention Sinks
3 LLM Reasoning
- In-Context Analogical Reasoning with Pre-Trained Language Models
- Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
4 LLM Evaluation
- A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models
- Do language models have coherent mental models of everyday things?
- We Understand Elliptical Sentences, and Language Models should Too: A New Dataset for Studying Ellipsis and its Interaction with Thematic Fit
- url:https://aclanthology.org/2023.acl-long.188/
- code:https://github.com/Caput97/ELLie-ellipsis_and_thematic_fit_with_LMs
- Can ChatGPT Understand Causal Language in Science Claims?
- url:https://aclanthology.org/2023.wassa-1.33/
    - code: null
- ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models
- Examining the Causal Impact of First Names on Language Models: The Case of Social Commonsense Reasoning
5 Data Generation
- Self-Instruct: Aligning Language Models with Self-Generated Instructions
- Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions
- url:https://aclanthology.org/2023.acl-long.34/
    - code: null
- Instruction Induction: From Few Examples to Natural Language Task Descriptions
- How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
- url:https://aclanthology.org/2023.sustainlp-1.13/
- code:https://github.com/zjunlp/DeepKE/tree/main/example/llm
6 LLM Fine-tuning
- Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models
- url:https://aclanthology.org/2023.acl-long.95/
    - code: null
- ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models
- url:https://aclanthology.org/2023.sustainlp-1.8/
- code:https://github.com/Aditya-shahh/ADEPT
7 LLMs and Security
- GPTs Don’t Keep Secrets: Searching for Backdoor Watermark Triggers in Autoregressive Language Models
8 Miscellaneous
- Decoding Symbolism in Language Models
- What is Wrong with Language Models that Can Not Tell a Story?
- url:https://aclanthology.org/2023.wnu-1.8/
    - code: null
- YNU-HPCC at WASSA-2023 Shared Task 1: Large-scale Language Model with LoRA Fine-Tuning for Empathy Detection and Emotion Classification
- url:https://aclanthology.org/2023.wassa-1.45/
    - code: null