The English here is typed entirely by hand! It is my summarising and paraphrasing of the original paper. Unavoidable spelling and grammar mistakes may slip in; if you spot any, feel free to point them out in the comments! This post leans towards reading notes, so take it with a grain of salt.
Contents
1. TL;DR
1.1. Takeaways
1.2. Paper summary figure
2. Paragraph-by-paragraph close reading
2.1. Abstract
2.2. Introduction
2.3. Related work
2.4. Background: Graph convolutional network
2.5. HyperGCN: Hypergraph Convolutional Network
2.5.1. Hypergraph Laplacian
2.5.2. 1-HyperGCN
2.5.3. HyperGCN: Enhancing 1-HyperGCN with mediators
2.5.4. FastHyperGCN
2.6. Experiments for semi-supervised learning
2.6.1. Baselines
2.7. Analysis of results
2.8. HyperGCN for combinatorial optimisation
2.9. Comparison of training time
2.10. Conclusion
3. Reference List
1. TL;DR
1.1. Takeaways
1.2. Paper summary figure
2. Paragraph-by-paragraph close reading
2.1. Abstract
①Hypergraphs characterise complex relationships that ordinary graphs cannot, which is why the authors propose HyperGCN
2.2. Introduction
①⭐The authors reckon that Laplacian regularisation is explicit and forces similarity between arbitrary pairs of connected nodes, whereas the implicit regularisation of GCN avoids this. (?????)
②⭐They approximate each hyperedge by a small set of pairwise edges, which costs less computing time.
2.3. Related work
①Deep learning on graphs: DNN and GCN
②Learning on hypergraphs
③Graph-based SSL
④Graph neural networks for combinatorial optimisation
2.4. Background: Graph convolutional network
①The graph is defined with an adjacency matrix $A$ and a feature matrix $X$ (a sketch of one GCN layer follows below)
②Provenance of the original GCN
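To make this concrete, below is a minimal numpy sketch of one GCN layer in the usual form $H^{(l+1)}=\sigma(\bar{A}H^{(l)}\Theta^{(l)})$ with $\bar{A}=\tilde{D}^{-1/2}(A+I)\tilde{D}^{-1/2}$; this is my own illustration, not code from the paper, and all names are hypothetical.

```python
import numpy as np

def gcn_layer(A, H, Theta):
    """One GCN layer: H' = ReLU(A_hat @ H @ Theta).

    A: (n, n) adjacency matrix; H: (n, d) node features; Theta: (d, d') weights.
    """
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)            # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalisation
    return np.maximum(0.0, A_hat @ H @ Theta)  # ReLU non-linearity
```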
2.5. HyperGCN: Hypergraph Convolutional Network
①They define a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ together with $\mathcal{V}_L\subseteq\mathcal{V}$, where $\mathcal{V}_L$ denotes the small number of labelled hypernodes. Each hypernode has its own feature vector
②The hypernodes of a hyperedge are considered close to each other only when the maximum discrepancy within the hyperedge, $\max_{i,j\in e}(S_i-S_j)^2$, is small
③"Simulated" HyperGCN:
2.5.1. Hypergraph Laplacian
①The hypergraph Laplacian can be regarded as a function of the real-valued hypernode signal $S\in\mathbb{R}^n$ (a non-linear one, unlike the ordinary graph Laplacian)
②For each hyperedge $e$, a representative pair is chosen as $(i_e, j_e) := \operatorname{argmax}_{i,j\in e}\,|S_i-S_j|$, where $S_v$ denotes the real-valued signal of hypernode $v$
③Each chosen edge $\{i_e, j_e\}$ is given the hyperedge weight $w(e)$, and self-loops are added to all hypernodes, yielding the signal-dependent adjacency matrix $A_S$
④Their symmetrically normalised hypergraph Laplacian is $\mathbb{L}(S)=\big(I-D^{-1/2}A_S D^{-1/2}\big)S$
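Below is a sketch of how I understand this construction for a scalar signal $S$; the function and variable names are mine, and picking $(i_e, j_e)$ as the max/min of $S$ is just the scalar special case of the argmax pair.

```python
import numpy as np

def signal_adjacency(hyperedges, S, w=None):
    """Build the signal-dependent adjacency A_S: for each hyperedge,
    connect the two hypernodes whose signals differ the most."""
    n = len(S)
    A = np.zeros((n, n))
    for t, e in enumerate(hyperedges):
        e = list(e)
        i_e = max(e, key=lambda v: S[v])  # argmax_{i,j in e} |S_i - S_j|
        j_e = min(e, key=lambda v: S[v])  # reduces to (max, min) for scalar S
        weight = 1.0 if w is None else w[t]
        A[i_e, j_e] += weight
        A[j_e, i_e] += weight
    return A + np.eye(n)                  # self-loops on every hypernode

def hypergraph_laplacian(hyperedges, S, w=None):
    """Symmetrically normalised: L(S) = (I - D^{-1/2} A_S D^{-1/2}) S."""
    A_S = signal_adjacency(hyperedges, S, w)
    d = A_S.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return (np.eye(len(S)) - D_inv_sqrt @ A_S @ D_inv_sqrt) @ S
```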
2.5.2. 1-HyperGCN
①Convolution of 1-HyperGCN:
$h_v^{(\tau+1)}=\sigma\Big((\Theta^{(\tau)})^{\top}\sum_{u\in\mathcal{N}(v)}\bar{A}_{v,u}\,h_u^{(\tau)}\Big)$
where $v$ denotes one hypernode and $u\in\mathcal{N}(v)$ denotes a neighbour connected to it by the same hyperedge;
$\bar{A}_{v,u}$ is the (normalised) edge weight between node $v$ and node $u$, a scalar entry rather than a whole adjacency matrix;
$\Theta^{(\tau)}$ can change the dimensionality of $h^{(\tau)}$.
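Combining the pieces, one 1-HyperGCN layer might look like the sketch below; it reuses `signal_adjacency` from the previous snippet, and collapsing the hidden representation to a scalar proxy signal for picking $(i_e, j_e)$ is my simplification, not necessarily the paper's exact procedure.

```python
import numpy as np

def one_hypergcn_layer(hyperedges, H, Theta):
    """1-HyperGCN layer: rebuild the pairwise graph from the current
    hidden signal, then apply a GCN-style convolution on that graph."""
    S = (H @ Theta).sum(axis=1)                # scalar proxy signal (assumption)
    A_S = signal_adjacency(hyperedges, S)      # one representative edge per hyperedge
    d = A_S.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A_S @ D_inv_sqrt      # symmetric normalisation
    return np.maximum(0.0, A_hat @ H @ Theta)  # ReLU(A_hat H Theta)
```

Because $A_S$ depends on the signal, this graph is re-computed in every forward pass, which is exactly what FastHyperGCN later avoids.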
2.5.3. HyperGCN: Enhancing 1-HyperGCN with mediators
①For hyperedges expanded into (fully connected) pairwise edges:
My reading: for a hyperedge $e$ connecting three or more vertices, besides the edge between $i_e$ and $j_e$, every remaining hypernode acts as a mediator connected to both $i_e$ and $j_e$; treating $e$ as pairwise edges in a simple graph, this gives $2(|e|-1)-1 = 2|e|-3$ edges in total, each weighted $1/(2|e|-3)$ so that one hyperedge's weights sum to 1 (see the sketch below)
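A sketch of this expansion as I read it (my own code; for $|e|=2$ it degenerates to a single edge of weight 1):

```python
def expand_with_mediators(e, S):
    """Expand one hyperedge e into 2|e|-3 weighted pairwise edges."""
    e = list(e)
    i_e = max(e, key=lambda v: S[v])       # the representative pair
    j_e = min(e, key=lambda v: S[v])
    mediators = [k for k in e if k not in (i_e, j_e)]
    edges = [(i_e, j_e)]
    for k in mediators:                    # each mediator links to both ends
        edges += [(i_e, k), (j_e, k)]
    weight = 1.0 / (2 * len(e) - 3)        # one hyperedge's weights sum to 1
    return [(i, j, weight) for i, j in edges]
```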
2.5.4. FastHyperGCN
They call the variant of HyperGCN whose expanded graph is computed only once, from the initial features rather than from the weighted hidden signal at every epoch, FastHyperGCN
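My understanding of the difference, as a sketch (reusing `signal_adjacency` from section 2.5.1; the toy data and the omitted training details are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, hyperedges = 6, [(0, 1, 2), (2, 3, 4, 5)]  # toy hypergraph
X = rng.normal(size=(n, 16))                  # initial features (dummy)
Theta = rng.normal(size=(16, 2))              # weights (dummy, 2 classes)

# FastHyperGCN: build the expanded graph a single time from X ...
A_fast = signal_adjacency(hyperedges, X.sum(axis=1))
d = A_fast.sum(axis=1)
A_hat = np.diag(d ** -0.5) @ A_fast @ np.diag(d ** -0.5)

for epoch in range(200):
    H = np.maximum(0.0, A_hat @ X @ Theta)    # ... and never rebuild it
    # full HyperGCN would recompute A_S from the hidden signal here;
    # the loss on labelled hypernodes and the Theta update are omitted
```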
2.6. Experiments for semi-supervised learning
2.6.1. Baselines
①Baselines: Hypergraph neural networks (HGNN), Multi-layer perceptron (MLP), Multi-layer perceptron + explicit hypergraph Laplacian regularisation (MLP + HLR), and Confidence Interval-based method (CI)
②Task: topic prediction (predicting which topic each document belongs to)
③Epochs: 200
2.7. Analysis of results
①Comparison of SSL methods by test error:
②For every hyperedge $e$ they define (contrasted in the sketch below):
$E_{\mathrm{HGNN}}(e)=\{\{i,j\}: i,j\in e\}$, which is the definition of the edges;
$w_{\mathrm{HGNN}}(\{i,j\})=1/|e|$, which is the definition of the edge weights;
$E_{\mathrm{HyperGCN}}(e)=\{\{i_e,j_e\}\}\cup\{\{i_e,k\},\{j_e,k\}: k\in e\setminus\{i_e,j_e\}\}$ with $w_{\mathrm{HyperGCN}}(\{i,j\})=1/(2|e|-3)$;
the first two are the representation used in HGNN and the latter two are used in HyperGCN and FastHyperGCN
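A sketch contrasting the two constructions (`expand_with_mediators` is the helper from section 2.5.3; everything else is my own illustration):

```python
from itertools import combinations

def hgnn_expansion(e):
    """Clique expansion used by HGNN: every pair in e, weight 1/|e|."""
    return [(i, j, 1.0 / len(e)) for i, j in combinations(e, 2)]

def hypergcn_expansion(e, S):
    """Mediator expansion used by (Fast)HyperGCN: 2|e|-3 edges,
    each of weight 1/(2|e|-3)."""
    return expand_with_mediators(e, S)

# For |e| = 3 both produce the same triangle with the same weight 1/3 per
# edge (see point ④ below); for larger hyperedges HGNN keeps all
# C(|e|, 2) pairs while HyperGCN keeps only 2|e|-3 of them.
```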
③The authors argue that the pairwise approximation works best when each hyperedge connects only two or three hypernodes.
④When the maximum hyperedge size is 3, HGNN, FastHyperGCN, and HyperGCN construct exactly the same graph, hence they achieve similar results
⑤Hypergraphs in real-world datasets contain noise, which eliminates the statistical significance of the differences between the methods
⑥Comparison table on synthetic data and a subset of DBLP:
where each synthetic hypergraph contains 1000 hypernodes, 500 randomly generated hyperedges, and 2 classes with 500 hypernodes each. In each hypergraph, 100 hyperedges of size 5 connect hypernodes of the same class, and the remaining 400 hyperedges of size 20 connect hypernodes of different classes. For the hyperedges containing both classes, the proportion of hypernodes a hyperedge draws from one class is chosen by grid search from 0.5 (noisy) to 0.75 (less noisy) with a step of 0.05. The attributes of the hypernodes are generated from a random Gaussian distribution. (A generation sketch follows below.)
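A sketch of how such a synthetic hypergraph could be generated following the recipe above (the feature dimension and the fixed ratio value are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, ratio = 1000, 50, 0.75        # d and this ratio value are assumptions
y = np.repeat([0, 1], 500)          # 2 classes, 500 hypernodes each
X = rng.normal(size=(n, d))         # Gaussian hypernode attributes

hyperedges = []
for _ in range(100):                # 100 size-5 hyperedges within one class
    c = rng.integers(2)
    hyperedges.append(rng.choice(np.flatnonzero(y == c), 5, replace=False))
for _ in range(400):                # 400 size-20 hyperedges mixing both classes
    c = rng.integers(2)
    k = int(20 * ratio)             # class ratio grid-searched from 0.5 to 0.75
    hyperedges.append(np.concatenate([
        rng.choice(np.flatnonzero(y == c), k, replace=False),
        rng.choice(np.flatnonzero(y == 1 - c), 20 - k, replace=False),
    ]))
```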
2.8. HyperGCN for combinatorial optimisation
①They aim to solve the densest $k$-subhypergraph problem: choose $k$ hypernodes that maximise the number of hyperedges fully contained in the induced subhypergraph (see the sketch after this list)
②The densest k-subhypergraph of each dataset:
③Visualization:
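The objective from ① is easy to state in code; below is a sketch, where using a trained model's scores to pick the top-$k$ hypernodes is my simplification:

```python
import numpy as np

def density(hyperedges, selected):
    """Count the hyperedges fully contained in the selected hypernode set."""
    sel = set(selected)
    return sum(1 for e in hyperedges if set(e) <= sel)

def topk_subhypergraph(hyperedges, scores, k):
    """Keep the k highest-scoring hypernodes and evaluate the objective."""
    chosen = np.argsort(-np.asarray(scores))[:k]
    return chosen, density(hyperedges, chosen)
```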
2.9. Comparison of training time
Omitted.
2.10. Conclusion
They obtain strong results both on semi-supervised learning and on combinatorial optimisation over hypergraphs
3. Reference List
Yadati, N. et al. (2019) 'HyperGCN: a new method of training graph convolutional networks on hypergraphs', NeurIPS, pp. 1511–1522. doi: https://doi.org/10.48550/arXiv.1809.02589