Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data

CIRCLE is an end-to-end model designed for partially aligned multi-view learning. It consists of view-specific autoencoders and a cross-view graph contrastive learning module, and strengthens representation learning through cluster-level contrastive learning that enforces intra-cluster and inter-view consistency. Evaluated on real-world datasets, CIRCLE outperforms existing methods on clustering and classification tasks, demonstrating its effectiveness on partially aligned multi-view data.

Basic Information:

  • Title: Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data
  • Authors: Yiming Wang, Dongxia Chang, Zhiqiang Fu, Jie Wen, Yao Zhao
  • Affiliation: Yiming Wang - Xiamen University
  • Keywords: Multi-view Representation Learning, Partially Aligned Multi-view Learning, Contrastive Learning
  • Urls: None

Summary:

  • (1): The paper proposes a novel framework called Cross-vIew gRaph Contrastive representation LEarning (CIRCLE) for partially aligned multi-view learning.
  • (2): Previous multi-view learning methods assumed complete and aligned views, which resulted in performance degradation when presented with practical problems such as missing or unaligned views in real-world applications. CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency.
  • (3): CIRCLE consists of two main modules: view-specific autoencoders and a cross-view graph contrastive learning (CGC) module. The view-specific autoencoders form the backbone of CIRCLE and capture the features of each individual view. CIRCLE uses relation graphs, constructed from sample distances in the original space, to explore the characteristics of multiple views. During training, the representation loss maximizes the similarity of positive pairs while minimizing the similarity of negative pairs.
  • (4): CIRCLE outperformed state-of-the-art methods on clustering and classification tasks when evaluated on several real datasets, demonstrating its effectiveness on partially aligned multi-view data.
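The relation graphs mentioned above are built from sample distances in the original feature space. As a minimal NumPy sketch (not the paper's implementation; the k-nearest-neighbour rule and the value of `k` are illustrative assumptions), one relation graph per view could be constructed like this:

```python
import numpy as np

def knn_relation_graph(X, k=5):
    """Build a k-nearest-neighbour relation graph for one view.

    X : (n_samples, n_features) array in the original feature space.
    Returns a binary (n, n) adjacency matrix A with A[i, j] = 1
    when j is among the k nearest neighbours of i (illustrative rule).
    """
    # Pairwise squared Euclidean distances via the expansion
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j.
    sq = np.sum(X ** 2, axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(dist, np.inf)  # a sample is not its own neighbour

    n = X.shape[0]
    A = np.zeros((n, n))
    # Mark the k closest samples of each point as neighbours.
    idx = np.argsort(dist, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1.0
    return A

# One relation graph per view on toy data (two views, 8 samples each).
rng = np.random.default_rng(0)
views = [rng.normal(size=(8, 4)), rng.normal(size=(8, 6))]
graphs = [knn_relation_graph(V, k=3) for V in views]
```

Each row of the resulting adjacency matrix then has exactly `k` neighbours, and the per-view graphs share sample indices, which is what lets cross-view consistency be expressed at the graph level.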

Background:

  • a. Subject and characteristics:
    • The paper proposes a novel framework called Cross-vIew gRaph Contrastive representation LEarning (CIRCLE) for partially aligned multi-view learning.
  • b. Historical development:
    • In the past, many multi-view learning methods have assumed that each view is complete and aligned, leading to performance degradation when presented with practical problems such as missing or unaligned views in real-world applications.
  • c. Past methods:
    • Previous multi-view learning methods have focused on extracting helpful information from multiple views to learn common metrics or representations for downstream tasks and have generally assumed that all views are complete and aligned.
  • d. Past research shortcomings:
    • The common assumption for most multi-view representation learning methods is that all views are complete and aligned, resulting in performance degradation when presented with practical problems such as missing or unaligned views in real-world applications.
  • e. Current issues to address:
    • Partially aligned multi-view data is a challenging issue in multi-view representation learning, as multi-view data may be partially lost during transmission and storage, resulting in incomplete multi-view data. Additionally, spatial, temporal, or spatiotemporal asynchronism can cause some data to remain unaligned across views, resulting in partially aligned multi-view data.

Methods:

  • a. Study's theoretical basis:
    • The proposed method, CIRCLE, utilizes cross-view graph contrastive learning to lift instance-level contrastive learning strategies to cluster-level contrastive learning on multi-view data.
  • b. Article's technical route (step by step):
    • CIRCLE consists of two main modules: view-specific autoencoders and a cross-view graph contrastive learning (CGC) module. The view-specific autoencoders form the backbone of CIRCLE and capture the features of each individual view. CIRCLE uses relation graphs, constructed from sample distances in the original space, to explore the characteristics of multiple views. During training, the representation loss maximizes the similarity of positive pairs while minimizing the similarity of negative pairs.
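The training objective described above — maximizing the similarity of positive pairs while minimizing that of negative pairs — can be sketched with a standard NT-Xent-style contrastive loss. This is a generic illustration of that loss family, not the paper's exact formulation; the temperature `tau` and the assumption that aligned samples at the same index form the positive pair are illustrative:

```python
import numpy as np

def contrastive_loss(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss between two views' representations.

    z1[i] and z2[i] (the same sample observed in two views) form the
    positive pair; all other cross-view pairs serve as negatives.
    """
    # L2-normalise so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) temperature-scaled similarities

    # Softmax cross-entropy with the diagonal (positive pair) as target:
    # pushes positive-pair similarity up, negative-pair similarity down.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))

rng = np.random.default_rng(1)
z = rng.normal(size=(16, 32))
# Aligned views (nearly identical representations) vs. random pairing.
loss_aligned = contrastive_loss(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = contrastive_loss(z, rng.normal(size=(16, 32)))
```

As expected, aligned representations yield a much lower loss than randomly paired ones, which is the signal that drives the cross-view alignment during training.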

Conclusion:

  • a. Work significance:
    • The proposed method, CIRCLE, provides a new partially aligned multi-view representation learning method based on cross-view graph contrastive learning.
  • b. Innovation, performance, and workload:
    • CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency. CIRCLE outperformed state-of-the-art methods on clustering and classification tasks when evaluated on several real datasets, demonstrating its effectiveness on partially aligned multi-view data.
  • c. Research conclusions (list points):
    • (1) CIRCLE is an effective method for addressing the problem of representation learning on partially aligned multi-view data.
    • (2) CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency.
    • (3) CIRCLE utilizes cross-view graph contrastive learning to lift instance-level contrastive learning strategies to cluster-level contrastive learning on multi-view data.
    • (4) CIRCLE outperformed state-of-the-art methods on clustering and classification tasks when evaluated on several real datasets.
    • (5) CIRCLE is the first deep network that can handle partially aligned multi-view data with more than two views.