Fine-grained IP geolocation, reference [32] (Adversarial Graph Contrastive Learning, ARIEL): Adversarial Graph Contrastive Learning with Information Regularization (2022)

ARIEL is a graph contrastive learning method based on adversarial training and information regularization, designed to address the challenges of data augmentation on graphs. By introducing adversarial sample generation and information regularization, it outperforms existing graph contrastive learning models in both performance and robustness on node classification tasks.

Paper link: https://arxiv.org/pdf/2202.06491v1.pdf

[32] S. Feng, B. Jing, Y. Zhu, and H. Tong, “Adversarial graph contrastive learning with information regularization,” in WWW, 2022.


Abstract

Contrastive learning is an effective unsupervised method for graph representation learning. In recent years, data-augmentation-based contrastive learning methods have been extended from images to graphs. However, most prior work is directly adapted from models designed for images. Unlike data augmentation on images, data augmentation on graphs is far less intuitive and much harder to use for providing high-quality contrastive samples, which are the key to the performance of contrastive learning models. This leaves much room for improvement over existing graph contrastive learning frameworks. In this paper, by introducing an adversarial graph view and an information regularizer, we propose a simple but effective method, Adversarial gRaph contrastIvE Learning (ARIEL), to extract informative contrastive samples within a reasonable constraint. It consistently outperforms current graph contrastive learning methods on node classification tasks over various real-world datasets, and it further improves the robustness of graph contrastive learning.

Keywords

graph representation learning, contrastive learning, adversarial training, mutual information

1. Introduction

Contrastive learning is a technique widely used in various graph representation learning tasks. In contrastive learning, the model tries to minimize the distance between positive pairs and maximize the distance between negative pairs in the embedding space. The definition of positive and negative pairs is a key component of contrastive learning. Early methods such as DeepWalk [24] and node2vec [6] define positive and negative pairs based on the co-occurrence of node pairs in random walks. For knowledge graph embedding, it is common practice to define positive and negative pairs based on translation [2, 11, 18, 33, 34, 36].
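The objective described above is commonly instantiated as an InfoNCE-style loss. Below is a minimal NumPy sketch (not the paper's implementation; the function name and temperature value are illustrative) that treats matching rows of two embedding matrices as positive pairs and all other cross-view rows as negatives:

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Minimal InfoNCE: rows of z1 and z2 are embeddings of the same
    nodes under two views; matching rows form positive pairs and all
    other cross-view rows are negatives."""
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(sim)
    # the positive pair sits on the diagonal; negatives are off-diagonal
    loss = -np.log(np.diag(exp) / exp.sum(axis=1))
    return loss.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# identical views: positives dominate, so the loss is small
aligned = info_nce_loss(z, z)
# unrelated views: the loss approaches log(n)
random = info_nce_loss(z, rng.normal(size=(8, 16)))
assert aligned < random
```

Minimizing this loss pulls the two views of the same node together while pushing views of different nodes apart, which is exactly the positive/negative-pair objective described above.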

[24] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (New York, New York, USA) (KDD ’14). ACM, New York, NY, USA, 701–710. https://doi.org/10.1145/2623330.2623732

[6] Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. arXiv:1607.00653 [cs.SI]

[2] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.), Vol. 26. Curran Associates, Inc., 2787–2795. https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf

[11] Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, 687–696. https://doi.org/10.3115/v1/P15-1067

[18] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (Austin, Texas) (AAAI’15). AAAI Press, 2181–2187.

[33] Ruijie Wang, Yuchen Yan, Jialu Wang, Yuting Jia, Ye Zhang, Weinan Zhang, and Xinbing Wang. 2018. Acekg: A large-scale knowledge graph for academic data mining. In Proceedings of the 27th ACM international conference on information and knowledge management. 1487–1490.

[34] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (Québec City, Québec, Canada) (AAAI’14). AAAI Press, 1112–1119.

[36] Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, and Hanghang Tong. 2021. Dynamic Knowledge Graph Alignment. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 4564–4572.

In recent years, breakthroughs of contrastive learning in computer vision have inspired work that applies similar ideas from visual representation learning to graph representation learning. For example, Deep Graph Infomax (DGI) [32] extends Deep InfoMax [9] and achieves significant improvements over previous random-walk-based methods. Graphical Mutual Information (GMI) [23] uses the same framework as DGI but generalizes the notion of mutual information from vector space to the graph domain. Contrastive multi-view graph representation learning (referred to as MVGRL in this paper) [7] further improves DGI by introducing graph diffusion into the contrastive learning framework. More recent work usually follows the data-augmentation-based contrastive learning approach [4, 8], i.e., treating augmented samples from the same instance as positive pairs and samples from different instances as negative pairs. Graph Contrastive Coding (GCC) [25] uses random walk with restart [29] to generate two subgraphs for each node as its two augmented samples. Graph Contrastive learning with Adaptive augmentation (GCA) [41] introduces an adaptive augmentation method that perturbs node features and edges according to their importance, and its training follows the well-known visual contrastive learning framework SimCLR [4]. Its preliminary version adopts uniform random sampling instead of adaptive sampling and is referred to as GRACE [40] in this paper. Robinson et al. [26] propose selecting hard negative samples based on distances in the embedding space and use this to obtain high-quality graph embeddings. There are also works that systematically study data augmentation on graphs [38, 39].
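As a concrete illustration of the uniform random augmentation used by GRACE-style methods, here is a small NumPy sketch (function name and drop rates are illustrative, not taken from any of the papers above) that produces one augmented view via edge dropping and feature-dimension masking:

```python
import numpy as np

def grace_style_view(adj, feat, p_edge=0.2, p_feat=0.2, seed=None):
    """One uniformly random augmented view, in the spirit of GRACE:
    drop each edge with probability p_edge and mask each feature
    dimension with probability p_feat (rates are illustrative)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # drop edges on the upper triangle, then mirror to keep A symmetric
    keep = rng.random((n, n)) >= p_edge
    keep = np.triu(keep, 1)
    keep = keep | keep.T
    adj_aug = adj * keep
    # mask whole feature dimensions (columns) at random
    mask = rng.random(feat.shape[1]) >= p_feat
    feat_aug = feat * mask
    return adj_aug, feat_aug

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
feat = np.ones((3, 4))
a1, f1 = grace_style_view(adj, feat, seed=0)
assert (a1 <= adj).all()   # augmentation only removes edges
assert (a1 == a1.T).all()  # the view stays undirected
```

Two such views of the same graph are then encoded and contrasted against each other; GCA's adaptive variant replaces the uniform probabilities here with importance-based ones.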

[32] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. 2018. Deep Graph Infomax. arXiv:1809.10341 [stat.ML]

[9] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670 [stat.ML]

[23] Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, and Junzhou Huang. 2020. Graph Representation Learning via Graphical Mutual Information Maximization. arXiv:2002.01169 [cs.LG]

[7] Kaveh Hassani and Amir Hosein Khasahmadi. 2020. Contrastive Multi-View Representation Learning on Graphs. arXiv:2006.05582 [cs.LG]

[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A Simple Framework for Contrastive Learning of Visual Representations. arXiv:2002.05709 [cs.LG]

[8] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum Contrast for Unsupervised Visual Representation Learning. arXiv:1911.05722 [cs.CV]

[25] Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. 2020. Gcc: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1150–1160.

[29] Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan. 2006. Fast random walk with restart and its applications. In Sixth international conference on data mining (ICDM’06). IEEE, 613–622.

[41] Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. 2021. Graph Contrastive Learning with Adaptive Augmentation. Proceedings of the Web Conference 2021 (Apr 2021). https://doi.org/10.1145/3442381.3449802

[40] Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. 2020. Deep Graph Contrastive Representation Learning. arXiv:2006.04131 [cs.LG]

[26] Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021. Contrastive Learning with Hard Negative Samples. In International Conference on Learning Representations. https://openreview.net/forum?id=CR1XOQ0UTh-

[38] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph Contrastive Learning with Augmentations. arXiv:2010.13902 [cs.LG]

[39] Tong Zhao, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, and Neil Shah. 2020. Data Augmentation for Graph Neural Networks. arXiv:2006.06830 [cs.LG]

However, unlike transformations on images, transformations on graphs are far less intuitive to humans. An augmented graph can easily end up either too similar to or completely different from the original one. This in turn raises a key question: how can we generate a new graph that is hard for the model to distinguish from the original graph while still preserving the desired properties?

Inspired by recent works [10, 12, 14, 16, 28], we introduce adversarial training into graph contrastive learning and propose a new framework, called Adversarial gRaph contrastIvE Learning (ARIEL). Through adversarial attacks on both the topology and the node features, we generate an adversarial view from the original graph. On the one hand, since the perturbation is constrained, the adversarial view stays close enough to the original graph. On the other hand, the adversarial attack increases the contrastive loss, ensuring that the adversarial view is hard to distinguish from the other views. On top of this, we propose a new constraint, called information regularization, which stabilizes the training of ARIEL and prevents collapse. ARIEL outperforms existing graph contrastive learning frameworks on node classification tasks over both real-world graphs and adversarially attacked graphs.
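The adversarial-view idea above can be sketched as a projected-gradient loop that perturbs node features to *increase* a loss under a norm budget. The NumPy code below is only schematic: the gradient is approximated numerically on a toy loss, whereas ARIEL itself backpropagates through the encoder's contrastive loss and also perturbs the topology.

```python
import numpy as np

def numerical_grad(f, x, h=1e-4):
    """Central-difference gradient, for illustration only."""
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def pgd_feature_attack(feat, loss_fn, eps=0.1, alpha=0.05, steps=3):
    """Gradient *ascent* on the loss, projected back to an
    L-infinity ball of radius eps around the original features."""
    delta = np.zeros_like(feat)
    for _ in range(steps):
        grad = numerical_grad(lambda d: loss_fn(feat + d), delta)
        delta = delta + alpha * np.sign(grad)  # ascend on the loss
        delta = np.clip(delta, -eps, eps)      # project to the budget
    return feat + delta

# toy loss: the adversary pushes features away from a fixed anchor
anchor = np.ones((2, 3))
loss = lambda z: np.sum((z - anchor) ** 2)
x = np.zeros((2, 3))
x_adv = pgd_feature_attack(x, loss)
assert loss(x_adv) > loss(x)                  # loss went up
assert np.abs(x_adv - x).max() <= 0.1 + 1e-9  # stayed within the budget
```

The two assertions mirror the two properties claimed above: the adversarial view is harder (higher loss) yet remains close to the original (bounded perturbation).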

[10] Chih-Hui Ho and Nuno Vasconcelos. 2020. Contrastive Learning with Adversarial Examples. arXiv:2010.12050 [cs.CV]

[12] Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. 2020. Robust Pre-Training by Adversarial Contrastive Learning. arXiv:2010.13337 [cs.CV]

[14] Nikola Jovanović, Zhao Meng, Lukas Faber, and Roger Wattenhofer. 2021. Towards Robust Graph Contrastive Learning. arXiv:2102.13085 [cs.LG]

[15] Jian Kang, Meijia Wang, Nan Cao, Yinglong Xia, Wei Fan, and Hanghang Tong. 2018. Aurora: Auditing pagerank on large graphs. In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 713–722.

[16] Minseon Kim, Jihoon Tack, and Sung Ju Hwang. 2020. Adversarial Self-Supervised Contrastive Learning. arXiv:2006.07589 [cs.LG]

[28] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. 2021. Adversarial Graph Augmentation to Improve Graph Contrastive Learning. arXiv:2106.05819 [cs.LG]

In summary, we make the following contributions:

(1) We introduce the adversarial view as a new form of data augmentation in graph contrastive learning.

(2) We propose an information regularization method to stabilize adversarial graph contrastive learning.

(3) We empirically demonstrate that ARIEL achieves better performance and higher robustness than previous graph contrastive learning methods.

The rest of this paper is organized as follows. Section 2 gives the problem definition and preliminaries of graph representation learning. Section 3 describes the proposed algorithm. Experimental results are presented in Section 4. After reviewing related work in Section 5, we conclude the paper in Section 6.

2. Problem Definition


In this section, we introduce all the notation used in the paper and give a formal definition of our problem. We also briefly introduce the preliminaries of our method.

2.1 Graph Representation Learning

For graph representation learning, let G = {V, E, X} be an attributed graph, where V = {v_1, v_2, …, v_n} is the node set, E ⊆ V × V is the edge set, and X ∈ R^{n×d} is the feature matrix. Each node v_i has a d-dimensional feature X[i, :], and all edges are assumed to be unweighted and undirected. We use a binary adjacency matrix A ∈ {0, 1}^{n×n} to encode the node and edge information, where A[i, j] = 1 if and only if (v_i, v_j) ∈ E. In the text below, we will use
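Under this notation, building A from an edge list is straightforward. A small NumPy example (toy graph, illustrative only):

```python
import numpy as np

# Build the binary adjacency matrix A from an edge set E, following
# the notation above: A[i, j] = 1 iff (v_i, v_j) is an edge, and the
# graph is unweighted and undirected, so A is symmetric.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: mirror each edge

assert (A == A.T).all()          # symmetry from undirectedness
assert A.sum() == 2 * len(edges)  # each edge appears twice in A
```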
