A Collection of Hot Prompt-Related Papers (Including the Latest)

Everyone knows how remarkable ChatGPT is, but in the hands of someone with mediocre prompting skills it never reaches its full potential. As a result, research on prompts has become a hot topic...

And since it's a hot topic, what's a researcher's first instinct? Publish, of course!

So far I've compiled a batch of prompt-related papers, covering both classics and recent work (I'll keep updating the list, so bookmark it).

I've already downloaded all of them; if you're interested, come grab them for free. See the end of this post for how to get them.

Surveys

  • Natural Language Reasoning, A Survey (2023)
  • Augmented Language Models: a Survey (2023)
  • A Survey for In-context Learning (2022)
  • Towards Reasoning in Large Language Models: A Survey (2022)
  • Reasoning with Language Model Prompting: A Survey (2022)
  • ......

Methods

  • Scaling Laws for Neural Language Models (2020), 439 citations
  • How Can We Know What Language Models Know? (2020), 551 citations
  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys (2021), 962 citations
  • The Power of Scale for Parameter-Efficient Prompt Tuning. EMNLP (2021), 739 citations
  • Finetuned Language Models are Zero-Shot Learners. ICLR (2022), 399 citations
  • Self-Refine: Iterative Refinement with Self-Feedback (2023), 7 citations
  • kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference (2023)
  • Context-faithful Prompting for Large Language Models (2023), 1 citation
  • Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning (2023), 1 citation
  • Larger language models do in-context learning differently (2023), 2 citations
  • OpenICL: An Open-Source Framework for In-context Learning (2023), 1 citation
  • Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (2023), 2 citations
  • Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (2023), 6 citations
  • How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (2023), 2 citations
  • Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT (2023), 15 citations
  • EvoPrompting: Language Models for Code-Level Neural Architecture Search (2023), 1 citation
  • Chain of Hindsight Aligns Language Models with Feedback (2023), 2 citations
  • ......
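Several of the papers above (the in-context learning and kNN Prompting lines in particular) revolve around one core pattern: prepending worked examples to a query so the model infers the task from context. As a minimal sketch of that pattern, here is a few-shot prompt builder; the sentiment task, example strings, and formatting are illustrative assumptions, not taken from any specific paper.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Concatenate an instruction, worked examples, and the new query
    into a single few-shot (in-context learning) prompt string."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")  # blank line between demonstrations
    # The query is formatted exactly like the demonstrations,
    # with the output left blank for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("The movie was fantastic.", "positive"),
    ("Worst purchase I have ever made.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    examples,
    "I can't stop recommending this book.",
)
print(prompt)
```

The resulting string would be sent as-is to a language model; the papers above study how the choice, order, and calibration of those demonstrations change the model's answers.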

Applications

  • Survey of Hallucination in Natural Language Generation (2022), 75 citations
  • Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language (2022), 10 citations
  • Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering (2022), 24 citations
  • Investigating Prompt Engineering in Diffusion Models (2022), 4 citations
  • Legal Prompt Engineering for Multilingual Legal Judgement Prediction (2022), 3 citations
  • BloombergGPT: A Large Language Model for Finance (2023), 7 citations
  • TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs (2023), 7 citations
  • Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses (2023), 1 citation
  • SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization (2023), 1 citation
  • Large Language Models and Simple, Stupid Bugs (2023), 1 citation
  • Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses? (2023), 2 citations
  • SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (2023), 2 citations
  • ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction (2023), 1 citation
  • MathPrompter: Mathematical Reasoning using Large Language Models (2023), 2 citations
  • Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting (2023), 1 citation
  • Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering (2023), 2 citations
  • PLACES: Prompting Language Models for Social Conversation Synthesis (2023), 2 citations
  • The Capacity for Moral Self-Correction in Large Language Models (2023), 4 citations
  • ......

Follow 《学姐带你玩AI》 below 🚀🚀🚀

Reply "prompt论文" to receive the PDF collection of the papers in this post

Writing this up took effort; likes, comments, and bookmarks are all appreciated!
