[Paper Reading, WWW '23] Zero-shot Clarifying Question Generation for Conversational Search

Preface

Motivation

Generate clarifying questions in a zero-shot setting to overcome the cold-start problem and data bias.

Cold-start problem: a lack of data makes the system hard to deploy, and the lack of deployment in turn yields no new data.

Data bias: collecting supervised data that covers every possible topic is unrealistic, and training on such incomplete data introduces bias.

Contributions

  • We are the first to propose a zero-shot clarifying question generation system, which attempts to address the cold-start challenge of asking clarifying questions in conversational search.
  • We are the first to cast clarifying question generation as a constrained language generation task, and we show the advantage of this configuration.
  • We propose an auxiliary evaluation strategy for generated clarifying questions, which removes the information-scarce question templates from both generations and references (a minimal sketch follows the list).
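As an illustration of this evaluation idea, here is a minimal sketch, assuming the templates are fixed question prefixes; the template list is illustrative, not the paper's exact implementation:

```python
# Minimal sketch of the template-removal evaluation: strip a known question
# template from both generation and reference, then score only the remaining
# question bodies. The template list here is illustrative.
TEMPLATES = ["would you like to know", "are you looking for"]

def strip_template(question: str) -> str:
    """Remove a leading clarifying-question template, if present."""
    q = question.lower().strip()
    for t in TEMPLATES:
        if q.startswith(t):
            return q[len(t):].strip()
    return q

# Overlap metrics (e.g., BLEU) are then computed between the stripped bodies.
```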

Method

Backbone: a checkpoint of GPT-2

  • Its original inference objective is to predict the next token given all previous tokens (a minimal sketch follows).

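As a concrete illustration, here is a minimal sketch of the next-token objective using the HuggingFace transformers API; the "gpt2" checkpoint name and the greedy argmax step are illustrative choices, not details from the paper:

```python
# Minimal sketch of GPT-2's next-token objective: the model outputs a
# distribution p(x_t | x_{<t}) over the vocabulary at every position.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer.encode("I am looking for information about", return_tensors="pt")
with torch.no_grad():
    logits = model(ids).logits       # shape: (1, seq_len, vocab_size)
next_token = logits[0, -1].argmax()  # most probable next token given the prefix
print(tokenizer.decode([int(next_token)]))
```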

Directly appending the query $q$ and facet $f$ as input and letting GPT-2 generate $cq$ causes two challenges:

  • the generation does not necessarily cover the facet words.
  • the generated sentences are not necessarily in the tone of a clarifying question.

We divide our system into two parts:

  • facet-constrained question generation (tackles the first challenge)
  • multi-form question prompting and ranking (tackles the second challenge: ranking the clarifying questions generated from different templates)

Facet-constrained Question Generation

Our model uses the facet words not as input but as decoding constraints, through an algorithm called Neurologic Decoding, which is based on beam search (a simplified sketch of one decoding step follows the list below).

  • At decoding step $t$, assume the candidates already in the beam are $C = \{c_{1:k}\}$, where $k$ is the beam size, $c_i = x^i_{1:(t-1)}$ is the $i$-th candidate, and $x^i_{1:(t-1)}$ are the tokens generated from decoding steps $1$ to $(t-1)$.

    (figure: the Neurologic Decoding selection procedure; steps (2) and (3) are discussed below)

    • Why this method better constrains the decoder to generate facet-related questions:
      • Step (2), the top-$\beta$ filtering, is the main mechanism that promotes facet words in generations. Because of this filtering, Neurologic Decoding tends to discard candidates containing fewer facet words, regardless of their generation probability.
      • Step (3), the grouping, is the key that lets Neurologic Decoding explore as many branches as possible. The grouping keeps the largest number of facet-word inclusion cases ($2^{|f|}$ possible subsets), allowing the decoder to cover the most orderings of the constraints during generation.
        • If we instead chose the top-$k$ candidates directly, several of them might contain the same facets, leaving fewer candidates with diverse facet coverage. By first choosing the best candidate within each group and then taking the top $k$ across groups, every kept candidate covers a different facet subset.
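The following is a simplified sketch of one Neurologic Decoding step under the rules above; `score_fn` stands in for the language model's next-token log probability, and the actual algorithm includes further pruning details:

```python
# Simplified sketch of one Neurologic Decoding step.
def neurologic_step(candidates, vocab, score_fn, facets, k, beta):
    # (1) Expand every beam candidate (tokens, logprob) with every vocabulary token.
    expanded = [
        (tokens + [tok], logp + score_fn(tokens, tok))
        for tokens, logp in candidates
        for tok in vocab
    ]

    def satisfied(tokens):
        """The subset of facet constraints this candidate already satisfies."""
        return frozenset(f for f in facets if f in tokens)

    # (2) Top-beta filtering: keep the candidates satisfying the most facet
    # constraints, regardless of their generation probability.
    expanded.sort(key=lambda c: len(satisfied(c[0])), reverse=True)
    expanded = expanded[:beta]

    # (3) Group candidates by *which* facet subset they satisfy (up to 2^|f|
    # groups) and keep the highest-probability candidate per group, so the
    # surviving beam covers diverse facet subsets.
    groups = {}
    for tokens, logp in expanded:
        key = satisfied(tokens)
        if key not in groups or logp > groups[key][1]:
            groups[key] = (tokens, logp)

    # Return the top-k group representatives as the next beam.
    return sorted(groups.values(), key=lambda c: c[1], reverse=True)[:k]
```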

Multiform Question Prompting and Ranking

Use clarifying question templates as the starting text of the generation and let the decoder generate the rest of the question body.

  • For example, if $q$ is "I am looking for information about South Africa.", we give the decoder "I am looking for information about South Africa. [SEP] would you like to know" as input and let it generate the rest.
  • We use multiple prompts (templates) both to cover more ways of clarification and to avoid boring users with repetitive phrasing.

For each query, we append each of the eight prompts to the query, forming eight inputs, and generate eight candidate questions.

  • use ranking methods to choose the best one as the returned question (sketched below)
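Below is a sketch of the prompt-and-rank loop, assuming a generic `generate_fn` for decoding and a generic `score_fn` for ranking; the second template shown is a hypothetical example, not from the paper's list of eight:

```python
# Sketch of multi-form question prompting and ranking: append each template to
# the query, let the decoder finish the question, and keep the best candidate.
TEMPLATES = [
    "would you like to know",   # template from the example above
    "are you looking for",      # hypothetical; the paper uses eight templates
]

def generate_candidates(query, generate_fn):
    """One prompted input per template; the decoder completes the question body."""
    return [generate_fn(f"{query} [SEP] {template}") for template in TEMPLATES]

def pick_best(candidates, score_fn):
    """Rank the candidate questions and return the best one."""
    return max(candidates, key=score_fn)
```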

Experiments

Zero-shot clarifying question generation with existing baselines

  • Q-GPT-0
    • input: query
  • QF-GPT-0:
    • input: facet + query
  • Prompt-based GPT-0: includes a special instructional prompt as input
    • input: q “Ask a question that contains words in the list [f]”
  • Template-0: a template-guided approach using GPT-2
    • input: add each of the eight question templates as a decoding prefix and generate the rest of the question (input constructions for all four baselines are sketched below)
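For concreteness, here is how the four zero-shot baseline inputs could be constructed; the string layouts are inferred from the descriptions above, and the exact separators are assumptions:

```python
# Input construction for the four zero-shot baselines (layouts are inferred).
def q_gpt_0(q):
    return q                                 # Q-GPT-0: query only

def qf_gpt_0(q, f):
    return f"{f} {q}"                        # QF-GPT-0: facet + query

def prompt_gpt_0(q, f):
    # Prompt-based GPT-0: query followed by an instructional prompt
    return f'{q} "Ask a question that contains words in the list [{f}]"'

def template_0(q, template):
    return f"{q} {template}"                 # Template-0: template as decoding prefix
```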

Existing facet-driven baselines (finetuned):

  • Template-facet: append the facet word right after the question template


  • QF-GPT: a finetuned GPT-2 version of QF-GPT-0.
    • finetuned on a set of tuples of the form f [SEP] q [BOS] cq [EOS] (see the sketch after this list)
  • Prompt-based finetuned GPT: a finetuned version of Prompt-based GPT-0
    • finetunes GPT-2 with the structure: q "Ask a question that contains words in the list [f]." cq
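A small sketch of how such finetuning examples could be assembled; the special tokens are written literally here and may differ from the authors' actual tokenization:

```python
# Sketch of the QF-GPT finetuning example format described above.
def make_finetuning_example(f: str, q: str, cq: str) -> str:
    return f"{f} [SEP] {q} [BOS] {cq} [EOS]"

# e.g. make_finetuning_example(
#     "history",                                         # hypothetical facet
#     "I am looking for information about South Africa.",
#     "Would you like to know its history?")             # hypothetical cq
```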

Note: simply feeding facets as finetuning input is highly inefficient at steering the decoder toward facet-related questions: the observed facet coverage rate is only 20% (a sketch of this metric follows).
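One way the facet coverage rate could be computed; the all-facets-present criterion is an assumption, since this note does not record the paper's exact definition:

```python
# Fraction of generated questions containing all of their facet words.
def facet_coverage(generations, facet_lists):
    covered = sum(
        all(f.lower() in g.lower() for f in facets)
        for g, facets in zip(generations, facet_lists)
    )
    return covered / len(generations)
```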

Dataset

ClariQ-FKw: consists of rows of (q, f, cq) tuples.

  • q is an open-domain search query, f is a search facet, cq is a human-generated clarifying question
  • The facets in ClariQ take the form of faceted search queries. ClariQ-FKw extracts the keyword of each faceted query as its facet column, and samples a dataset with 1756 training examples and 425 evaluation examples.

Our proposed system never accesses the training set, while the supervised baselines use it for finetuning.

Result

Auto-metric evaluation

(figure: auto-metric evaluation results)

RQ1: How well can existing baselines do on zero-shot clarifying question generation?

  • all these baselines (the first four rows) struggle to produce any reasonable generations, except Template-0, though its question bodies are still poor
  • existing zero-shot GPT-2-based approaches therefore cannot solve the clarifying question generation task effectively

RQ2: How effective is facet information for facet-specific clarifying question generation?

  • We compare our proposed zero-shot facet-constrained (ZSFC) methods with a facet-free variant of ZSFC named Subject-constrained, which uses the subject of the query as the constraint.
  • The study shows that adequate use of facet information significantly improves clarifying question generation quality.

RQ3: Can our proposed zero-shot approach perform as well as, or better than, existing facet-driven baselines?

  • Both tables show that our zero-shot facet-driven approaches consistently outperform the finetuned baselines.

Note: Template-facet rewriting is a simple yet strong baseline; both finetuning-based methods actually perform worse than it.

Human evaluation

(figure: human evaluation results)

Knowledge

Approaches to clarifying query ambiguity can be roughly divided into three categories:

  • Query Reformulation: iteratively refine the query
    • more efficient in context-rich situations
  • Query Suggestion: offer related queries to the user
    • good for steering search topics and discovering user needs
  • Asking Clarifying Questions: proactively engage users to provide additional context
    • especially helpful for clarifying ambiguous queries when no context is available