Basic Information:
- Title: Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data
- Authors: Yiming Wang, Dongxia Chang, Zhiqiang Fu, Jie Wen, Yao Zhao
- Affiliation: Yiming Wang - Xiamen University
- Keywords: Multi-view Representation Learning, Partially Aligned Multi-view Learning, Contrastive Learning
- Urls: None
Summary:
- (1): The paper proposes a novel framework called Cross-vIew gRaph Contrastive representation LEarning (CIRCLE) for partially aligned multi-view learning.
- (2): Most previous multi-view learning methods assume that all views are complete and aligned, so their performance degrades on real-world data with missing or unaligned views. CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency.
- (3): CIRCLE consists of two main modules: view-specific autoencoders and a cross-view graph contrastive learning (CGC) module. The view-specific autoencoders form the backbone and capture the view-specific features of each view. CIRCLE explores characteristics across multiple views via relation graphs constructed from the distances of samples in the original space. During training, the representation loss maximizes the similarity of positive pairs while minimizing the similarity of negative pairs.
- (4): CIRCLE outperforms state-of-the-art methods on clustering and classification tasks across several real-world datasets, demonstrating its effectiveness on partially aligned multi-view data.
Background:
- a. Subject and characteristics:
- The paper proposes a novel framework called Cross-vIew gRaph Contrastive representation LEarning (CIRCLE) for partially aligned multi-view learning.
- b. Historical development:
- Most existing multi-view learning methods assume that every view is complete and aligned; their performance therefore degrades on real-world data where views are missing or unaligned.
- c. Past methods:
- Previous multi-view learning methods focus on extracting helpful information from multiple views to learn common metrics or representations for downstream tasks, generally under the assumption that all views are complete and aligned.
- d. Past research shortcomings:
- Because of this completeness-and-alignment assumption, prior methods cannot handle missing or unaligned views, and their performance degrades sharply in such settings.
- e. Current issues to address:
- Partially aligned multi-view data is a challenging issue in multi-view representation learning, as multi-view data may be partially lost during transmission and storage, resulting in incomplete multi-view data. Additionally, spatial, temporal, or spatiotemporal asynchronism can cause some data to remain unaligned across views, resulting in partially aligned multi-view data.
Methods:
- a. Study's theoretical basis:
- The proposed method, CIRCLE, utilizes cross-view graph contrastive learning to lift instance-level contrastive learning strategies to cluster-level contrastive learning on multi-view data.
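The cluster-level contrast described above can be sketched as follows. This is a minimal illustration, assuming that positive pairs are defined by shared (pseudo-)cluster labels across two views; the function name and label-based pairing rule are illustrative assumptions, not the paper's exact formulation:

```python
def cluster_level_pairs(labels_view1, labels_view2):
    """Form cross-view pairs at the cluster level: a pair (i, j) is
    positive when sample i in view 1 and sample j in view 2 carry the
    same (pseudo-)cluster label, and negative otherwise."""
    positives, negatives = [], []
    for i, li in enumerate(labels_view1):
        for j, lj in enumerate(labels_view2):
            (positives if li == lj else negatives).append((i, j))
    return positives, negatives

# Two views, two samples each; cluster labels assigned per view.
pos, neg = cluster_level_pairs([0, 1], [1, 0])
# pos pairs share a cluster label across views; neg pairs do not.
```

Contrasting at the cluster level rather than the instance level means unaligned samples can still contribute: they only need a matching cluster assignment in the other view, not a one-to-one correspondence.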
- b. Article's technical route (step by step):
- CIRCLE consists of two main modules: view-specific autoencoders and a cross-view graph contrastive learning (CGC) module. The view-specific autoencoders form the backbone and capture the view-specific features of each view. CIRCLE explores characteristics across multiple views via relation graphs constructed from the distances of samples in the original space. During training, the representation loss maximizes the similarity of positive pairs while minimizing the similarity of negative pairs.
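The two training ingredients described above (a distance-based relation graph and a contrastive loss over positive and negative pairs) can be sketched as follows. This assumes a k-nearest-neighbour graph and a cosine-similarity InfoNCE-style objective, both standard choices; the paper's exact graph construction and loss may differ:

```python
import math

def knn_relation_graph(points, k=2):
    """Relation graph: link each sample to its k nearest neighbours
    by Euclidean distance in the original feature space."""
    n = len(points)
    graph = {}
    for i in range(n):
        neighbours = sorted(
            (j for j in range(n) if j != i),
            key=lambda j: math.dist(points[i], points[j]),
        )[:k]
        graph[i] = neighbours
    return graph

def cosine(a, b):
    """Cosine similarity between two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: pull the positive pair together and push
    the negatives away, with temperature tau."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# Two tight groups of points: each sample's nearest neighbour is its
# group-mate, so the relation graph recovers the group structure.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
graph = knn_relation_graph(points, k=1)
```

The loss approaches zero when the anchor is far more similar to its positive than to any negative, which is exactly the "maximize positive-pair similarity, minimize negative-pair similarity" behaviour described above.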
Conclusion:
- a. Work significance:
- The proposed method, CIRCLE, provides a new partially aligned multi-view representation learning method based on cross-view graph contrastive learning.
- b. Innovation, performance, and workload:
- CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency. It outperforms state-of-the-art methods on clustering and classification tasks across several real-world datasets, demonstrating its effectiveness on partially aligned multi-view data.
- c. Research conclusions (list points):
- (1) CIRCLE is an effective method for addressing the problem of representation learning on partially aligned multi-view data.
- (2) CIRCLE is an end-to-end model that enables cluster-level alignment and representation learning based on intra-cluster and inter-view consistency.
- (3) CIRCLE utilizes cross-view graph contrastive learning to lift instance-level contrastive learning strategies to cluster-level contrastive learning on multi-view data.
- (4): CIRCLE outperforms state-of-the-art methods on clustering and classification tasks across several real-world datasets.
- (5) CIRCLE is the first deep network that can handle partially aligned multi-view data with more than two views.