Notes on "A Deep Generative Framework for Paraphrase Generation" -- Abstract and Introduction

Full paper translation and other related material: https://ldzhangyx.github.io/2018/09/26/deep-para-generation/

(The English passages below are quoted from the paper; the lines after each quote are the note-taker's summaries.)

Title:

Paraphrase Generation: generating paraphrases (restatements) of a given sentence

I. Abstract:

1. Applications:

question answering, information retrieval, information extraction, conversation systems...


2. Proposed method:

Our proposed method is based on a combination of deep generative models (VAE) with sequence-to-sequence models (LSTM) to generate paraphrases, given an input sentence. 

Combines a VAE with an LSTM.

3. Advantages of the proposed method:

our model is simple, modular and can generate multiple paraphrases, for a given sentence.

Simple, modular, and able to generate multiple paraphrases.

II. Introduction

1. Applications:

Paraphrase generation is an important problem in many NLP applications such as question answering, information retrieval, information extraction, and summarization. 


paraphrase generation is also important for generating training data for various learning tasks

Useful for augmenting training data for various learning tasks.

2. Proposed approach:

In this paper, we present a deep generative framework for automatically generating paraphrases, given a sentence. Our framework combines the power of sequence to sequence models, specifically the long short-term memory (LSTM) (Hochreiter and Schmidhuber 1997), and deep generative models, specifically the variational autoencoder (VAE) (Kingma and Welling 2013; Rezende, Mohamed, and Wierstra 2014), to develop a novel, end-to-end deep learning architecture for the task of paraphrase generation.

A deep generative framework for automatically generating paraphrases: LSTM (a sequence-to-sequence model) + VAE (a deep generative model).
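The VAE half of such a framework relies on the reparameterization trick: sampling the latent code as a deterministic function of the encoder's outputs plus external noise, so gradients can flow through the sampling step. A minimal NumPy sketch (names and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(4)       # encoder-predicted mean (illustrative)
log_var = np.zeros(4)  # encoder-predicted log-variance, so sigma = 1
z = reparameterize(mu, log_var, rng)  # latent code fed to the decoder
```

Because `z` is differentiable with respect to `mu` and `log_var`, the encoder and decoder can be trained end-to-end, which is what makes the VAE usable inside a sequence-to-sequence pipeline.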

3. Challenges:

(1) Contrast with prior VAE usage:

In contrast to the recent usage of VAE for sentence generation (Bowman et al. 2015), a key differentiating aspect of our proposed VAE based architecture is that it needs to generate paraphrases, given an original sentence as input.

Given an input sentence, the proposed method must generate a paraphrase of it.

【Supplement】:

【 That is, the generated paraphrased version of the sentence should capture the essence of the original sentence. Therefore, unconditional sentence generation models, such as (Bowman et al. 2015), are not suited for this task. 

A paraphrase must capture the essence of the original sentence, so unconditional sentence-generation models such as (Bowman et al. 2015) are not suited to this task.】

(2) Contrast with conditional generative models:

Unlike these methods where number of classes are finite, and do not require any intermediate representation, our method conditions both the sides (i.e. encoder and decoder) of VAE on the intermediate representation of the input question obtained through LSTM. 

Conditional generative models: the number of classes is finite, and no intermediate representation is required.

This paper: conditions both sides of the VAE (i.e., the encoder and the decoder) on the intermediate representation of the input question obtained through an LSTM.

【Supplement】:

【 In the past, conditional generative models (Sohn, Lee, and Yan 2015; Kingma et al. 2014) have been applied in computer vision to generate images conditioned on the given class label.

Conditional generative models have been applied in computer vision to generate images conditioned on a given class label.】
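The conditioning described in point (2) can be pictured as feeding both the VAE's encoder and decoder the concatenation of the latent code with the LSTM's sentence representation, rather than a discrete class label. A toy sketch (all names and dimensions are assumptions for illustration, not the paper's):

```python
import numpy as np

def condition_on_sentence(z, c):
    """Both VAE sides receive [z ; c], where c is the (assumed) LSTM
    encoding of the original sentence and z is the latent sample."""
    return np.concatenate([z, c], axis=-1)

z = np.ones(8)    # latent code sampled by the VAE (illustrative size)
c = np.zeros(16)  # intermediate sentence representation from an LSTM (illustrative)
h = condition_on_sentence(z, c)  # conditioned input, dimension 8 + 16 = 24
```

The design point is that `c` is a continuous intermediate representation, which is why methods built around a finite set of class labels do not directly apply here.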

4. Solutions proposed by other researchers:

(1)One potential approach to solve the paraphrase generation problem could be to use existing sequence-to-sequence models (Sutskever, Vinyals, and Le 2014)

Use existing sequence-to-sequence models.

(2)one variation of sequence-to-sequence model using stacked residual LSTM (Prakash et al. 2016) is the current state of the art for this task. 

A variant of the sequence-to-sequence model: stacked residual LSTM.
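The "residual" idea in a stacked residual LSTM is that each stacked layer adds its input back to its output, easing optimization of deep stacks. A minimal sketch with a stand-in layer function (the real model uses LSTM layers; everything here is illustrative only):

```python
import numpy as np

def layer(x, W):
    """Stand-in for one recurrent layer's transform (illustrative, not an LSTM)."""
    return np.tanh(x @ W)

def stacked_residual(x, weights):
    """Stack layers with residual (skip) connections: each layer's
    output is layer(h) + h, as in stacked residual LSTM."""
    h = x
    for W in weights:
        h = layer(h, W) + h  # residual connection around each layer
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))                     # one input vector
weights = [rng.standard_normal((4, 4)) for _ in range(2)]  # two stacked layers
out = stacked_residual(x, weights)
```

The skip connections keep gradients from vanishing as layers are stacked, which is the motivation Prakash et al. (2016) give for the architecture.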

5. Strengths and weaknesses of these prior methods:

Strength:

having sophisticated model architectures

Sophisticated model architectures.

Weakness:

lack a principled generative framework

They lack a principled generative framework.

6. Advantages of the authors' method:

In contrast, our deep generative model enjoys a simple, modular architecture, and can generate not just a single but multiple, semantically sensible, paraphrases for any given sentence.

Simple, modular, and can generate multiple semantically sensible paraphrases for any given sentence.

【Supplement】:

【Although a sequence-to-sequence model with beam search (https://zhuanlan.zhihu.com/p/82829880) can also produce K paraphrases, the accuracy of the K-th paraphrase is low (worse than with the authors' method).】
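For reference, beam search keeps the top-K partial sequences ranked by cumulative log-probability at each decoding step; this is how a seq2seq decoder yields K candidate paraphrases. A toy sketch where, for simplicity, each step's token probabilities ignore the prefix (an assumption; a real decoder conditions on it):

```python
import math

def beam_search(step_probs, beam_width=2):
    """Keep the top-`beam_width` sequences by summed log-probability.
    `step_probs` is a list of dicts {token: probability}, one per step."""
    beams = [([], 0.0)]  # list of (token sequence, cumulative log-prob)
    for probs in step_probs:
        candidates = []
        for tokens, score in beams:
            for tok, p in probs.items():
                candidates.append((tokens + [tok], score + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # prune to the best beam_width
    return beams

steps = [{"a": 0.6, "b": 0.4}, {"c": 0.7, "d": 0.3}]
best = beam_search(steps, beam_width=2)
# best[0] is the highest-scoring sequence: ["a", "c"]
```

The note's point is that the K-th beam is ranked lower by the model itself, so its quality degrades quickly, whereas sampling different latent codes in the VAE framework gives multiple paraphrases without that ranking penalty.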
