Graph Structure Learning for Robust GNNs

On graph structure learning for improving the robustness of graph neural networks in adversarial settings.

The following are study notes on the paper Graph Structure Learning for Robust Graph Neural Networks. Code link: pro-GNN

Pre-knowledge

1. An adversarial attack is a technique aimed specifically at machine learning models; its core purpose is to disturb a model's normal behavior through small, targeted modifications so that it makes wrong judgments or predictions. This technique has drawn wide attention in the AI community in recent years because it exposes the fragility of machine learning models under certain conditions.

Adversarial attacks are typically realized by making carefully designed, tiny modifications to the input data. Since the input to a machine learning model is usually a numeric vector, an attacker constructs a targeted numeric vector to fool the model. These modifications can be imperceptible to a human observer: in an image recognition system, for example, they may be pixel-level changes, yet to the model they are enough to cause a misclassification (see the sketch below).
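
As an illustration of such pixel-level perturbations, here is a minimal sketch of the classic fast gradient sign method (FGSM); the model, inputs, and epsilon are placeholder assumptions for illustration, not something from the Pro-GNN paper:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping each input entry a tiny
    amount in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The perturbation is imperceptibly small but targeted.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```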

Adversarial attacks fall into two types: white-box and black-box. In a white-box attack, the attacker knows the algorithm the model uses and its parameters, which allows adversarial inputs to be designed more precisely. In a black-box attack, the attacker cannot directly access the model's internal structure and parameters, but can still construct adversarial inputs by observing the relationship between the model's inputs and outputs.

Adversarial attacks apply far beyond image recognition, extending to speech recognition, natural language processing, and other domains. For example, in an image recognition system an attacker may modify a picture's pixel values so that the model misidentifies it as another object; in a speech recognition system, the attacker may add slight noise or alter certain features of the audio so that the model can no longer recognize the speech correctly.

The existence of adversarial attacks poses a serious challenge to the security of machine learning models. To defend against them, researchers have proposed a variety of methods, such as modifying the network architecture, adding more training data, and using defensive models.

2. Graph Structure Learning is a family of methods that jointly learn an optimized graph structure and the corresponding representations. It is motivated by the noise and incompleteness present in graph data, which can lead graph neural networks (GNNs) to learn poor representations that hurt downstream tasks. To obtain a better graph structure, many methods now jointly optimize the graph structure and the graph representations during training; these methods are collectively called graph structure learning.

The goal of graph structure learning is to find the optimal graph structure: one containing exactly the most concise information relevant to the downstream task, no more and no less, so that the most accurate label predictions can be made. This involves two main challenges: ensuring the minimality of the final view (i.e., restricting the flow of information from the base view to the final view) and its sufficiency (i.e., the final view should be guided by the labels so that it retains as much label-relevant information as possible).

Abstract

1. Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.

2. GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.

3. Developing robust algorithms to defend against adversarial attacks is of great significance.

4. IDEA SOURCE: real-world graphs are often low-rank and sparse, and the features of two adjacent nodes tend to be similar; adversarial attacks are likely to violate these graph properties.

5. Pro-GNN: jointly learn a structural graph and a robust graph neural network model from the perturbed graph, guided by these properties.

Introduction

1. GNN: GNNs follow a message-passing scheme, where a node's embedding is obtained by aggregating and transforming the embeddings of its neighbors (see the sketch after this list).

2. Analytical tasks: node classification, link prediction, and recommender systems.

3. We apply the state-of-the-art graph poisoning attack, metattack, to perturb the graph data and visualize the graph properties before and after metattack.
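
To make the message-passing scheme in point 1 concrete, here is a minimal single-layer sketch in the GCN style; the dense tensors and the class name are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One message-passing step: aggregate neighbor embeddings with the
    symmetrically normalized adjacency, then transform them linearly."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.size(0), device=A.device)  # add self-loops
        d = A_hat.sum(dim=1)                               # node degrees
        D_inv_sqrt = torch.diag(d.pow(-0.5))               # D^{-1/2}
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # normalize adjacency
        return torch.relu(self.linear(A_norm @ H))         # aggregate + transform
```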

Related work

1. GNNs: two main families of GNNs have been proposed, i.e., spectral methods and spatial methods.

2. Adversarial Attacks and Defense for GNNs.

The above covers the current related methods and papers on attacks against and defenses for graph neural networks.

See DeepRobust (https://github.com/DSE-MSU/DeepRobust) for details; a usage sketch follows below.
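
For reference, a sketch of perturbing Cora with metattack via DeepRobust, closely following the usage example in the repository's README; argument names and defaults may differ across versions, so treat the details as assumptions to check against the docs:

```python
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

# Load a clean graph dataset.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a surrogate GCN that the attacker uses to compute meta-gradients.
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, dropout=0, with_relu=False, with_bias=False,
                device='cpu').to('cpu')
surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)

# Poison the graph structure with a small budget of edge perturbations.
attacker = Metattack(surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu').to('cpu')
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations=10, ll_constraint=False)
modified_adj = attacker.modified_adj  # the perturbed adjacency matrix
```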

The Proposed Framework

To defend against the attacks, Pro-GNN iteratively reconstructs the clean graph by preserving the low-rank, sparsity, and feature-smoothness properties of a graph, so as to reduce the negative effects of the adversarial structure.

Pro-GNN: Modelling

• Low rank and sparsity

One potential way is to learn a clean adjacency matrix S close to the adjacency matrix of the poisoned graph while enforcing the properties of low rank and sparsity on the new adjacency matrix (the corresponding objective term is given below).
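
In the paper's notation (A the poisoned adjacency matrix, S the learned one), this part of the objective reads as follows, reconstructed here from the paper's description:

$$
\min_{S \in \mathcal{S}} \; \mathcal{L}_0 = \|A - S\|_F^2 + \alpha \|S\|_1 + \beta \|S\|_*,
$$

where the $\ell_1$ norm $\|S\|_1$ encourages sparsity and the nuclear norm $\|S\|_*$ (the sum of singular values) encourages low rank.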

• Feature smoothness

If the features of two connected nodes are quite different, Ls will be very large. Therefore, the smaller Ls is, the smoother the features X are on the graph S (see the formula below).
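
In formula form, following the paper's definition with $\hat{L} = \hat{D}^{-1/2}(\hat{D} - S)\hat{D}^{-1/2}$ the normalized graph Laplacian of S and $d_i$ the degree of node $i$:

$$
\mathcal{L}_s = \operatorname{tr}\!\left(X^\top \hat{L} X\right) = \frac{1}{2} \sum_{i,j=1}^{N} S_{ij} \left\| \frac{x_i}{\sqrt{d_i}} - \frac{x_j}{\sqrt{d_j}} \right\|_2^2 .
$$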

The final objective function combines these terms:
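
Putting the pieces together (again reconstructed from the paper's description), Pro-GNN jointly optimizes the structure S and the GNN parameters θ:

$$
\min_{S \in \mathcal{S},\, \theta} \; \mathcal{L} = \|A - S\|_F^2 + \alpha \|S\|_1 + \beta \|S\|_* + \gamma\, \mathcal{L}_{GNN}(\theta, S, X, \mathcal{Y}_L) + \lambda\, \mathcal{L}_s ,
$$

with S constrained to be a valid adjacency matrix (symmetric, entries in [0, 1]).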

Pro-GNN: Optimization

The objective is optimized with alternating gradient descent: the GNN parameters θ and the structure S are updated in turn, with the non-smooth ℓ1 and nuclear-norm terms on S handled by proximal (soft-thresholding) steps, as sketched below.
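
A minimal sketch of one structure-update step under this scheme; the smooth-term gradient `grad_smooth_gnn`, the step size `eta`, and the function names are illustrative assumptions, and the actual implementation lives in the pro-GNN repository:

```python
import torch

def prox_l1(S, t):
    """Proximal step for t*||S||_1: entrywise soft-thresholding."""
    return torch.sign(S) * torch.clamp(S.abs() - t, min=0)

def prox_nuclear(S, t):
    """Proximal step for t*||S||_*: soft-threshold the singular values."""
    U, sigma, Vh = torch.linalg.svd(S)
    return U @ torch.diag(torch.clamp(sigma - t, min=0)) @ Vh

def update_structure(S, A, grad_smooth_gnn, alpha, beta, eta):
    """One forward-backward step on S: gradient step on the smooth terms
    (reconstruction + GNN loss + smoothness), then proximal steps for the
    l1 and nuclear-norm penalties, then projection back to a valid
    (symmetric, [0,1]-valued) adjacency matrix."""
    S = S - eta * (2 * (S - A) + grad_smooth_gnn)  # smooth part
    S = prox_l1(S, eta * alpha)
    S = prox_nuclear(S, eta * beta)
    S = (S + S.T) / 2                              # symmetrize
    return S.clamp(0, 1)                           # project entries to [0, 1]
```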

Experiments

The experiments answer three research questions the authors pose.

RQ1 How does Pro-GNN perform compared to the state-of-the-art defense methods under different adversarial attacks?

The results show how pro-GNN performs on node classification on the Cora dataset.

RQ2 Does the learned graph work as expected?

RQ3 How do different properties affect the performance of Pro-GNN?

An ablation over the hyperparameters, varying one at a time.

Conclusion

• We found that graph adversarial attacks can break important graph properties.

• We introduced a novel defense approach Pro-GNN that learns the graph structure and GNN parameters simultaneously.

• Our experiments show that our model consistently improves the overall robustness under various adversarial attacks.

Paper link: paper link
