[Paper Notes] Supervised Community Detection with Line Graph Neural Networks

This post is a detailed reading of a paper on supervised community detection with Line Graph Neural Networks (LGNN). LGNN augments a graph neural network (GNN) with the non-backtracking operator on the adjacency structure of edges, so as to capture directed information flow in undirected graphs. The post covers LGNN's loss function and its experimental results, including performance on stochastic block models and on real SNAP datasets, and notes LGNN's potential for community structure and node classification tasks.


Paper link: [1705.08415] Supervised Community Detection with Line Graph Neural Networks (arxiv.org)

The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Some spelling and grammar mistakes are hard to avoid; if you spot any, corrections in the comments are welcome! This post is written as study notes, so read with care.

⭐ Contains extensive visual derivations

Table of Contents

1. TL;DR

1.1. Takeaways

2. Section-by-section reading

2.1. Abstract

2.2. Introduction

2.3. Problem setup

2.4. Related works

2.5. Line Graph Neural Networks

2.5.1. Graph neural networks using a family of multiscale graph operators

2.5.2. LGNN: GNN on line graphs with the non-backtraking operator

2.5.3. A loss function invariant under label permutation

2.6. Loss landscape of linear GNN optimization

2.7. Experiments

2.7.1. Stochastic block models

2.7.2. Probing the computational-to-statistical threshold in 5-class SBM

2.7.3. Real datasets from SNAP

2.8. Conclusion

3. Background

3.1. Belief propagation

4. Reference List


1. TL;DR

1.1. Takeaways

(1) This paper was published fairly early, so many of its formulas do not follow today's standard GNN notation; throughout this post, matrices are uniformly written as capital letters.

(2) That said, the claimed capture of directed information flow still seems debatable to me; it feels somewhat like pseudo-information.

2. Section-by-section reading

2.1. Abstract

        ①They proposed a family of Graph Neural Networks (GNNs)

        ②Tasks: supervised community detection

        ③GNN augmentation: non-backtracking operator on the line graph of edge adjacencies

2.2. Introduction

        LGNN can capture directed information flow from undirected graphs
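The directed flow comes from the non-backtracking operator: on the set of directed edges (each undirected edge split into two directions), B has a 1 between edges (i→j) and (k→l) exactly when j = k and l ≠ i, so a walk may continue through j but not immediately turn back. A minimal sketch (not the authors' code; the dense-matrix representation and function name are illustrative choices):

```python
import numpy as np

def non_backtracking(edges):
    """Build the non-backtracking operator B on directed edges.

    `edges` lists undirected pairs (i, j); each yields two directed edges.
    B[(i->j), (k->l)] = 1 iff j == k and l != i, i.e. the walk continues
    through j without immediately reversing direction.
    """
    directed = []
    for i, j in edges:
        directed.append((i, j))
        directed.append((j, i))
    m = len(directed)
    B = np.zeros((m, m))
    for a, (i, j) in enumerate(directed):
        for b, (k, l) in enumerate(directed):
            if j == k and l != i:
                B[a, b] = 1.0
    return directed, B

directed, B = non_backtracking([(0, 1), (1, 2), (2, 0)])
```

On a triangle, every directed edge has exactly one non-backtracking continuation, so each row of B sums to 1; note B is not symmetric in general, which is how an undirected graph acquires directed structure.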

2.3. Problem setup

        ①Task: node classification

        ②Graph: G=\left ( V,E \right )

        ③Label of nodes: y:V\to\{1,\ldots,C\}, where C denotes the number of communities

        ④Training set: \{(G_{t},y_{t})\}_{t\leq T}

        ⑤The minimized loss function: L(\theta)=\frac{1}{T}\sum_{t\leq T}\ell(\Phi_\theta(G_t),y_t), where the predicted label is \hat{y}=\Phi_{\theta}(G), \ell is a loss function, and \Phi represents the model
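Since community labels are only defined up to relabeling, the per-graph loss ℓ (Section 2.5.3's topic) must be invariant under label permutation. A minimal sketch of such a loss, assuming a cross-entropy base loss minimized over all C! relabelings (only feasible for small C; function name is mine):

```python
import itertools
import numpy as np

def perm_invariant_loss(logits, y, C):
    """Cross-entropy minimized over all C! relabelings of the communities."""
    # Softmax over the C class logits, row by row.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    best = np.inf
    for perm in itertools.permutations(range(C)):
        yp = np.array([perm[c] for c in y])  # relabeled ground truth
        nll = -np.log(p[np.arange(len(y)), yp]).mean()
        best = min(best, nll)
    return best
```

A prediction that recovers the communities with the labels swapped then incurs (almost) zero loss, as desired.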

2.4. Related works

         Briefly introduces GNNs and their variants

2.5. Line Graph Neural Networks

2.5.1. Graph neural networks using a family of multiscale graph operators

(1)Line graph construction

        ①Node features: x_i \in \mathbb{R}^{1\times b}, stacked into X \in \mathbb{R}^{\left | V \right | \times b}

        ②Adjacency matrix: A \in \{0,1\}^{\left | V \right | \times \left | V \right |}

        ③Degree matrix: D=\mathrm{diag}(A\mathbf{1}), i.e. D_{ii}=\sum_{j}A_{ij}

        ④Power graph adjacency matrices: A_{j}=\operatorname*{min}(1,A^{2^{j}}) for j=1,\ldots,J, where the power 2^{j} gives the number of hops
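A rough sketch of computing these operators (not the authors' code): repeated squaring of the binarized matrix, clipping entries back to {0, 1} at each step, which preserves 2^j-hop reachability.

```python
import numpy as np

def power_adjacencies(A, J):
    """Return [A_1, ..., A_J] with A_j = min(1, A^(2^j)).

    Squaring the clipped matrix at each step keeps entries in {0, 1}
    while preserving which node pairs are reachable in 2^j hops.
    """
    mats = []
    P = A.astype(float).copy()
    for _ in range(J):
        P = np.minimum(1, P @ P)  # square, then binarize
        mats.append(P.copy())
    return mats

# Path graph 0-1-2: A_1 connects the endpoints, which are 2 hops apart.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
mats = power_adjacencies(A, 2)
```

Note that A^{2} (and hence each A_j) picks up nonzero diagonal entries, since every node reaches itself in an even number of hops.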

        ⑤Identity matrix: I

        ⑥Family of operators: \mathcal{F}=\{I,D,A,A_{2},...,A_{J}\}

        ⑦One GNN layer:

X^{(k+1)}=GNN\left ( X^{(k)} \right ), where X^{(k)} \in \mathbb{R}^{\left | V \right | \times b_{k}} and X^{(k+1)} \in \mathbb{R}^{\left | V \right | \times b_{k+1}}

        ⑧GNN methods:

Z^{(k+1)}=\rho\left[\sum_{O_i\in\mathcal{F}}O_iX^{(k)}\theta_i\right],\quad\overline{Z}^{(k+1)}=\sum_{O_i\in\mathcal{F}}O_iX^{(k)}\bar{\theta}_i

are the two terms of one layer, where \theta_{i},\bar{\theta}_{i}\in\mathbb{R}^{b_{k}\times\frac{b_{k+1}}{2}} denote learnable parameters and \rho \left ( \cdot \right ) here is ReLU; X^{(k)} is then updated by concatenating the two halves along the feature dimension, X^{(k+1)}=\left [ Z^{(k+1)},\overline{Z}^{(k+1)} \right ], keeping a linear (non-activated) half alongside the ReLU half.
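The layer above can be sketched as follows (a minimal dense-NumPy illustration, not the authors' implementation; the separate parameter sets for the ReLU half and the linear half are my reading of the b_{k+1}/2 dimensions):

```python
import numpy as np

def gnn_layer(X, ops, thetas, theta_bars):
    """One layer: Z = relu(sum_i O_i X theta_i), Zbar = sum_i O_i X tbar_i;
    the output concatenates the two halves along the feature axis."""
    Z = sum(O @ X @ th for O, th in zip(ops, thetas))
    Zbar = sum(O @ X @ tb for O, tb in zip(ops, theta_bars))
    return np.concatenate([np.maximum(Z, 0), Zbar], axis=1)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))            # |V| = 4 nodes, b_k = 3
A = np.ones((4, 4)) - np.eye(4)            # complete graph as a toy example
ops = [np.eye(4), A]                       # a small operator family {I, A}
thetas = [rng.standard_normal((3, 3)) for _ in ops]      # b_{k+1}/2 = 3
theta_bars = [rng.standard_normal((3, 3)) for _ in ops]
out = gnn_layer(X, ops, thetas, theta_bars)  # shape (4, 6)
```

Keeping the linear half \overline{Z} lets information pass through the layer without the ReLU discarding sign information.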
