Paper Reading
Average article quality score: 93
Related papers are organized at: https://github.com/ShiyuNee/Awesome-Conversation-Clarifying-Questions-for-Information-Retrieval
Author: 长命百岁
【SIGIR-AP 2023】A Comparative Study of Training Objectives for Clarification Facet Generation
Original post: 2023-10-20
【Paper Reading】The Evolution of Retrieval Augmentation and a Summary of Related Papers
A summary of retrieval-augmentation papers: `Knn-LM` -> `REALM` -> `DPR` -> `RAG` -> `FID` -> `COG` -> `GenRead` -> `REPLUG` -> `Adaptive retrieval`. Original post: 2023-09-19
【Paper Reading】Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with LLMs
Proposes a method for evaluating open-domain dialogue with LLMs: a single prompt instructs the model to output scores for multiple evaluation metrics in one pass. Original post: 2023-07-19
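The single-prompt, multi-metric idea can be sketched in a few lines. This is only an illustration of the pattern: the dimension names, prompt wording, and JSON output format below are my assumptions, not the paper's actual setup, and a mock reply stands in for a real LLM call.

```python
import json

# Hypothetical evaluation dimensions -- the paper's exact dimensions may differ.
DIMENSIONS = ["coherence", "engagingness", "naturalness"]

def build_prompt(context: str, response: str) -> str:
    """Build one prompt asking the LLM to score every dimension at once."""
    dims = ", ".join(DIMENSIONS)
    return (
        f"Rate the following dialogue response on these dimensions ({dims}), "
        f"each on a 1-5 scale. Reply with a JSON object mapping each "
        f"dimension to its score.\n\nContext: {context}\nResponse: {response}"
    )

def parse_scores(llm_reply: str) -> dict:
    """Parse the model's JSON reply into {dimension: score}."""
    scores = json.loads(llm_reply)
    return {d: scores[d] for d in DIMENSIONS}

# Mock reply standing in for a real LLM call.
mock_reply = '{"coherence": 4, "engagingness": 3, "naturalness": 5}'
print(parse_scores(mock_reply))
# → {'coherence': 4, 'engagingness': 3, 'naturalness': 5}
```

The point of the single prompt is that one model call yields all metric scores together, instead of issuing one call per metric.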
【Paper Reading】Takeaways from Several Multi-turn Dialogue Papers (ACL 2023)
Notes on three ACL 2023 multi-turn dialogue papers. All three control generation via additional attributes, and their evaluation setups are similar enough to be worth borrowing. Original post: 2023-07-18
【Paper Reading】Scaling Laws for Neural Language Models
A brief overview of the paper's main conclusions. In my view, the exact values of the symbols in the formulas matter less than the relationships and ratios between the different factors. Original post: 2023-07-13
【Paper Reading】Learning to Summarize from Human Feedback
The companion repository is continuously updated. Original post: 2023-06-16
【Paper Reading】REPLUG: Retrieval-Augmented Black-Box Language Models
Original post: 2023-05-19
【Paper Reading】MIMICS: A Large-Scale Data Collection for Search Clarification
Original post: 2023-03-17
【Paper Reading SIGIR'19】Asking Clarifying Questions in Open-Domain Information-Seeking Conversations
Original post: 2023-03-11
【Paper Reading】Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information
Original post: 2023-03-10
【Paper Reading WWW'23】Zero-shot Clarifying Question Generation for Conversational Search
Original post: 2023-03-05
【Paper Reading T5】Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Original post: 2023-01-16
【Paper Reading CIKM 2011】Finding Dimensions for Queries
Original post: 2023-01-15
【Paper Reading CIKM 2014】Extending Faceted Search to the General Web
Original post: 2023-01-14
【Paper Reading CIKM 2022】Stochastic Optimization of Text Set Generation for Learning Multiple Query Intent Representations
Original post: 2023-01-13
The Three Most Common Datasets for Clarifying Questions
The most widely used datasets in the clarifying-question field. Original post: 2023-01-12
【Paper Reading CIKM'2021】Learning Multiple Intent Representations for Search Queries
Original post: 2022-12-05
【Paper Reading ICTIR'2022】Revisiting Open Domain Query Facet Extraction and Generation
Original post: 2022-11-30
【Paper Reading WSDM'2022】Evaluating Mixed-initiative Conversational Search Systems via User Simulation
Original post: 2022-11-28
A Survey of Dialog and Clarifying Questions
A literature review on Dialog and Clarifying Question research. Original post: 2022-11-19
【Paper Reading WSDM'21】PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval
Original post: 2022-11-01
【Paper Reading NeurIPS 2022】A Large Scale Search Dataset for Unbiased Learning to Rank
Original post: 2022-10-25
【Paper Reading FnTIR 2022】Pre-training Methods in Information Retrieval
Original post: 2022-10-23
【Paper Reading】A Detailed Look at the GPT Series of Papers
Original post: 2022-10-09
【Paper Reading ICLR 2022】Multitask Prompted Training Enables Zero-shot Task Generalization
Original post: 2022-10-06
【Paper Reading】Masked Autoencoders Are Scalable Vision Learners (MAE)
Notes covering the approach (masking, MAE encoder, MAE decoder, reconstruction target, simple implementation), the ImageNet experiments, and ablations on masking ratio, decoder design, mask token, reconstruction target, data augmentation, and mask sampling strategy. Original post: 2022-04-08
【Paper Reading】Attention Is All You Need (Transformer)
Notes covering the model architecture: encoder and decoder stacks, scaled dot-product attention, multi-head attention, applications of attention in the model, and position-wise feed-forward networks. Original post: 2022-05-21
【Paper Reading】Semantic Models for the First-stage Retrieval: A Comprehensive Review
A survey of the first stage (retrieval) of information retrieval. Original post: 2022-09-13
【Paper Reading】A Deep Look into Neural Ranking Models for Information Retrieval
A survey of the second stage (ranking) of information retrieval. Original post: 2022-09-21
【Paper Reading ICLR 2022】Finetuned Language Models Are Zero-Shot Learners
Original post: 2022-10-01
How to Read a Paper
The three-pass approach to reading a paper. Original post: 2022-10-01