This paper was published at IJCAI (International Joint Conference on Artificial Intelligence) 2018, a CCF Rank A venue.
This column focuses on learning how to write a paper abstract. We walk through the abstract using a seven-step method, covering what to write at each step and how to write it.
The abstract (original English): Multi-view data is common in real-world datasets, where different views describe distinct perspectives. To better summarize the consistent and complementary information in multi-view data, researchers have proposed various multi-view representation learning algorithms, typically based on factorization models. However, most previous methods were focused on shallow factorization models which cannot capture the complex hierarchical information. Although a deep multiview factorization model has been proposed recently, it fails to explicitly discern consistent and complementary information in multi-view data and does not consider conceptual labels. In this work we present a semi-supervised deep multi-view factorization method, named Deep Multi-view Concept Learning (DMCL). DMCL performs nonnegative factorization of the data hierarchically, and tries to capture semantic structures and explicitly model consistent and complementary information in multi-view data at the highest abstraction level. We develop a block coordinate descent algorithm for DMCL. Experiments conducted on image and document datasets show that DMCL performs well and outperforms baseline methods.
Step 1: State the research background |
Multi-view data is common in real-world datasets, where different views describe distinct perspectives. |
Step 2: Summarize existing methods |
To better summarize the consistent and complementary information in multi-view data, researchers have proposed various multi-view representation learning algorithms, typically based on factorization models. |
Step 3: Briefly describe the shortcomings of existing methods and the solutions the paper offers. |
However, most previous methods were focused on shallow factorization models which cannot capture the complex hierarchical information. Although a deep multiview factorization model has been proposed recently, it fails to explicitly discern consistent and complementary information in multi-view data and does not consider conceptual labels. |
Step 4: Present the proposed method | In this work we present a semi-supervised deep multi-view factorization method, named Deep Multi-view Concept Learning (DMCL).
Step 5: After presenting the method, give a high-level description of what it does | DMCL performs nonnegative factorization of the data hierarchically, and tries to capture semantic structures and explicitly model consistent and complementary information in multi-view data at the highest abstraction level.
Step 6: Step 5 gave the conceptual description; this step is usually one or two sentences on how the proposed algorithm is optimized. Keep it short, since abstracts have word limits. (Optional, depending on the paper.) | We develop a block coordinate descent algorithm for DMCL.
Step 7: Briefly describe the experiments; this is the standard pattern. | Experiments conducted on image and document datasets show that DMCL performs well and outperforms baseline methods.
Abstract breakdown
Step 1: State the background: the ubiquity and importance of multi-view data
Step 2: Summarize existing methods.
Step 3: Briefly describe the shortcomings of existing methods
Step 4: Present the proposed method
Step 5: After presenting the method, give a high-level description of what it does
Step 6: Step 5 gave the conceptual description; this step is usually one or two sentences on how the proposed algorithm is optimized. Keep it short, since abstracts have word limits.
Step 7: Briefly describe the experiments; this is the standard pattern.
That is the rough workflow. I am still learning myself, so if anything is lacking, please kindly point it out. Many thanks.
Most abstracts follow these seven steps, though adjacent steps may be merged into a single passage. When writing our own abstracts, we can use these steps as a guide; if you truly cannot come up with anything for a given step, leave it blank for the time being.