[Paper Close Reading] Line Graph Neural Networks for Link Prediction

Paper: Line Graph Neural Networks for Link Prediction | IEEE Journals & Magazine | IEEE Xplore

Code: GitHub - divelab/LGLP

The English is typed entirely by hand and is my summarizing and paraphrasing of the original paper. Unavoidable spelling and grammar mistakes may appear; if you spot any, corrections in the comments are welcome! This post is closer to personal notes, so read with that in mind.

Table of Contents

1. TL;DR

1.1. Takeaways

1.2. Paper Summary Figure

2. Section-by-Section Close Reading

2.1. Abstract

2.2. Introduction

2.3. Related Work

2.4. The Proposed Methods

2.4.1. Problem Formulation

2.4.2. Overall Framework

2.4.3. Line Graph Neural Networks

2.4.4. The Proposed Algorithm

2.5. Experiments

2.5.1. Datasets and Baseline Models

2.5.2. Experimental Setup

2.5.3. Results and Analysis

2.6. Conclusion

3. Reference


1. TL;DR

1.1. Takeaways

(1) The paper spends so much space praising SEAL, and never even names its own model in the abstract or the intro, that I almost thought SEAL was the proposed model -,-

(2) The paper keeps emphasizing the closed subgraph around a target link, i.e., the "enclosing subgraph"

1.2. Paper Summary Figure

2. Section-by-Section Close Reading

2.1. Abstract

        ①⭐Link (edge) prediction in the original graph is equivalent to node classification in the line graph

2.2. Introduction

        ①⭐Heuristic methods for link prediction are limited in generality; e.g., heuristics designed for social networks do not fit tasks on molecular graphs

        ②⭐Link prediction can be converted to graph classification by regarding the enclosing subgraph as a graph

metabolic  adj. relating to metabolism

2.3. Related Work

        ①Categories of link prediction methods: heuristic methods, embedding methods, deep learning methods

        ②Heuristic methods: first-order, second-order, and high-order methods

        ③Embedding methods: matrix factorization and stochastic block models

        ④Deep learning methods: SEAL

2.4. The Proposed Methods

2.4.1. Problem Formulation

        ①For a graph G=\left ( V, E \right ), V=\left \{ v_1, v_2,...,v_n \right \} is the node set, E\subseteq V \times V is the edge set, and A is the adjacency matrix

2.4.2. Overall Framework

        ①The original graph for link prediction in social networks (the method mainly focuses on 1-hop neighbors when extracting subgraphs; 1-hop can be extended to h-hop).

extracted subgraph:

The common connections determine the relation between A and B

        ②Steps of link prediction: enclosing subgraph extraction, node labeling, feature learning and link prediction

        ③Overall framework:

where the double circles are the center nodes, and the solid yellow circle in the fourth graph is the target link

2.4.3. Line Graph Neural Networks

(1)Line graph space transformation

        ①⭐I think this differs from the classical line graph: the classical transformation maps an undirected graph to a directed one, whereas this one stays undirected → undirected. The authors define it so that whenever two edges in the original graph share an endpoint, those two edges become two nodes in the line graph joined by an edge

        ②Example:

        ③⭐The number of edges in L(G) is \frac{1}{2}\sum_{i=1}^{n}d^2_i-m, where d_i denotes the degree of node i in the original graph, n is the number of nodes, and m is the number of edges in the original graph
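To make the construction concrete, here is a minimal sketch using networkx, whose nx.line_graph implements exactly this undirected construction, together with a check of the edge-count formula on a toy graph:

```python
import networkx as nx

# Toy original graph: 4 nodes, 4 edges
G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4)])

# Nodes of L(G) are the edges of G; two nodes are adjacent
# iff the corresponding edges share an endpoint in G
L = nx.line_graph(G)
print(sorted(L.nodes()))  # [(1, 2), (1, 3), (2, 3), (3, 4)]

# Check |E(L(G))| = 1/2 * sum_i d_i^2 - m on this toy graph
m = G.number_of_edges()
expected = sum(d * d for _, d in G.degree()) // 2 - m
assert L.number_of_edges() == expected  # both equal 5 here
```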

(2)Node label transformation

        ①Node label generation:

l_{(v_1,v_2)}=\mathrm{concat}(\min(f_l(v_1),f_l(v_2)),\max(f_l(v_1),f_l(v_2)))

where f_l denotes the node labeling function and v_1 and v_2 are the end nodes of the edge. In an undirected graph, edge (v_1,v_2) is the same as edge (v_2,v_1)

        ②The authors' point: ① does work for undirected graphs, because l comes out the same no matter the order or numbering of v_1 and v_2. But concatenating the two node attributes would reintroduce an ordering problem, so the node attributes are summed instead:

l_{(v_1,v_2)}=\mathrm{concat}(\min(f_l(v_1),f_l(v_2)),\max(f_l(v_1),f_l(v_2)),X_{v_1}+X_{v_2})

where X_{v_1} and X_{v_2} are the node attributes of v_1 and v_2. This l then serves as the attribute of the corresponding node (the original edge) in the line graph
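As a minimal sketch (my own illustration, not the authors' code; line_node_attr is a hypothetical helper name), the order-invariant label-plus-attribute construction can be written as:

```python
import numpy as np

def line_node_attr(f_l, X, v1, v2):
    """Attribute of the line-graph node for edge (v1, v2).

    f_l: dict mapping node -> integer label; X: dict mapping node -> feature vector.
    min/max on the labels and the attribute sum make the result symmetric in (v1, v2).
    """
    labels = [min(f_l[v1], f_l[v2]), max(f_l[v1], f_l[v2])]
    return np.concatenate([labels, X[v1] + X[v2]])

f_l = {1: 1, 2: 1, 3: 2}
X = {v: np.random.rand(4) for v in f_l}
# Swapping the endpoints gives the same attribute, as required for undirected graphs
assert np.allclose(line_node_attr(f_l, X, 1, 2), line_node_attr(f_l, X, 2, 1))
```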

(3)Feature Learning by Graph Neural Networks

        ①Updating of GNN:

Z_{(v_i,v_j)}^{(k+1)}=\left(Z_{(v_i,v_j)}^{(k)}+\beta\sum_{d\in\mathcal{N}_{(v_i,v_j)}}Z_d^{(k)}\right)W^{(k)}

where \mathcal{N}_{(v_i,v_j)} is the set of neighbors of node (v_i,v_j) in the line graph, W^{(k)} is a weight matrix, and \beta denotes the normalization coefficient; the initial features are Z_{(v_i,v_j)}^{(0)}=l_{(v_i,v_j)}
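A dense PyTorch sketch of this propagation rule, assuming A_L is the adjacency matrix of the line graph and a tanh nonlinearity between layers (my assumption; the update rule itself leaves the nonlinearity implicit, and lgnn_layer is a hypothetical name):

```python
import torch

def lgnn_layer(Z, A_L, W, beta):
    # Z^{(k+1)} = (Z^{(k)} + beta * sum of neighbor features) W^{(k)};
    # A_L @ Z computes the neighbor sum for every line-graph node at once.
    return (Z + beta * (A_L @ Z)) @ W

n, d_in, d_out = 6, 8, 16
Z0 = torch.randn(n, d_in)                      # rows are the labels l_{(v_i, v_j)}
A = (torch.rand(n, n) > 0.6).float().triu(1)   # random upper triangle
A_L = A + A.T                                  # symmetric adjacency, no self-loops
W0 = torch.randn(d_in, d_out)
Z1 = torch.tanh(lgnn_layer(Z0, A_L, W0, beta=0.5))
```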

        ②Cross entropy loss:

\mathcal{L}_{CE}=-\sum_{l\in L_{t}}(y_{l}\mathrm{log} (p_{l})+(1-y_{l})\mathrm{log} (1-p_{l}))

where L_t is the set of links to be predicted, p_l is the predicted probability that link l exists, and y_{l}\in\{0,1\} indicates whether the link actually exists
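This is ordinary binary cross-entropy over the candidate links; in PyTorch it reduces to a one-liner (reduction='sum' matches the summation in the formula):

```python
import torch
import torch.nn.functional as F

p = torch.tensor([0.9, 0.2, 0.7])  # predicted existence probabilities p_l
y = torch.tensor([1.0, 0.0, 1.0])  # ground-truth labels y_l
loss = F.binary_cross_entropy(p, y, reduction='sum')
```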

(4)Connection With Learning on Original Graphs

        ①Edge feature:

f_e=g(f_l(v_1),f_l(v_2))

where g\left ( \cdot \right ) denotes the GNN

        ②They rewrite the updating function:

Z_{(v_1,v_2)}^{(1)}=\left(l_{(v_1,v_2)}+\beta\sum_{d_1\in\mathcal{N}_{v_1}}\sum_{d_2\in\mathcal{N}_{d_1}}l_{(d_1,d_2)}+\beta\sum_{d_3\in\mathcal{N}_{v_2}}\sum_{d_4\in\mathcal{N}_{d_3}}l_{(d_3,d_4)}\right)W^{(0)}

⭐The authors' point: a node of the line graph (an edge of the original graph) already contains the two endpoints that edge connects, and one round of aggregation in the line graph pulls in the neighboring line-graph nodes, each of which likewise comes from two endpoints elsewhere in the original graph. So one hop in the line graph is roughly equivalent to combining two hops in the original graph. This still depends, though, on how the line-graph node features are defined
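A quick way to see the claim: in L(G), the 1-hop neighbors of the node (v_1, v_2) are exactly the edges incident to v_1 or v_2, whose far endpoints sit two hops away in G. A tiny networkx check:

```python
import networkx as nx

G = nx.path_graph(5)                 # 0-1-2-3-4
L = nx.line_graph(G)
print(sorted(L.neighbors((1, 2))))   # [(0, 1), (2, 3)]
# Aggregating over these line-graph neighbors already touches nodes 0 and 3,
# i.e., information two hops away from the target edge (1, 2) in G.
```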

2.4.4. The Proposed Algorithm

(1)Enclosing Subgraph Extraction

        ①Constructing 2-hop enclosing subgraph:

G_{(v_i,v_j)}^2=\{v\mid\min(d(v,v_i),d(v,v_j))\leq2\}
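A minimal sketch of this extraction (my own helper, not the authors' code): keep every node whose distance to either target node is at most h, then take the induced subgraph.

```python
import networkx as nx

def enclosing_subgraph(G, v_i, v_j, h=2):
    # Nodes with min(d(v, v_i), d(v, v_j)) <= h, via two bounded BFS runs
    near_i = nx.single_source_shortest_path_length(G, v_i, cutoff=h)
    near_j = nx.single_source_shortest_path_length(G, v_j, cutoff=h)
    nodes = set(near_i) | set(near_j)
    return G.subgraph(nodes).copy()
```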

(2)Node Labeling

        ①First identify the two target nodes

        ②Then encode how important each node is to the target nodes

        ③Node labeling function:

f_l(v)=1+\min(d(v,v_1),d(v,v_2))+(d_s/2)\left[(d_s/2)+(d_s\%2)-1\right]

where d_s=d(v,v_1)+d(v,v_2)

        ④Label of the target nodes: f_l(v_1)=1 and f_l(v_2)=1

        ⑤Label of unreachable nodes: if d(v,v_{1})=\infty or d(v,v_{2})=\infty, then f_l(v)=0
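Putting ③-⑤ together, a small sketch of the labeling function (this is the double-radius node labeling from SEAL, which this paper reuses; drnl_label is a hypothetical name, and the two target nodes are assigned label 1 directly before this function is applied to the remaining nodes):

```python
import math

def drnl_label(d1, d2):
    """Label a non-target node from its distances d1 = d(v, v1), d2 = d(v, v2)."""
    if math.isinf(d1) or math.isinf(d2):
        return 0                       # unreachable nodes get label 0
    ds = d1 + d2
    return 1 + min(d1, d2) + (ds // 2) * ((ds // 2) + (ds % 2) - 1)

assert drnl_label(1, 1) == 2           # a common neighbor of the two targets
assert drnl_label(1, 2) == 3
```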

2.5. Experiments

2.5.1. Datasets and Baseline Models

        ①Dataset:

2.5.2. Experimental Setup

        ①Sample selection: 50% for training and 50% for testing

        ②Parameter settings of the baseline models are described

        ③Epoch: 15

2.5.3. Results and Analysis

        ①Each experiment is run 10 times with different random splits

        ②AUC comparison table with 80% for training:

        ③AP comparison table with 80% for training:

        ④AUC comparison table with 50% for training:

        ⑤AP comparison table with 50% for training:

        ⑥Loss table:

        ⑦AUC comparison under different data splitting ratios:

        ⑧t-SNE visualization on different datasets:

2.6. Conclusion

        The line graph transformation might overcome the disadvantages of graph pooling

3. Reference

Cai, L., Li, J., Wang, J., & Ji, S. (2021) 'Line Graph Neural Networks for Link Prediction', IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9). doi: 10.1109/TPAMI.2021.3080635
