Graph Representation Learning study notes - Chapter 2

Chapter 2 Background and Traditional Approaches

2.1 Graph Statistics and Kernel Methods

Node-level

degree : considers how many neighbors a node has

  • $d_u$ : node u’s degree
    • take into account how many neighbors a node has

centrality : considers both the number of neighbors and how important they are

  • $e_u$ : eigenvector centrality

    • proportional to the average centrality of its neighbors
    • take into account how important a node’s neighbors are
    • it ranks the likelihood that a node is visited on a random walk of infinite length on the graph
    • $x_i = c \sum_{j=1}^{n} a_{ij} x_j$
      • x : the vector of node centrality scores, initialized to the node degrees
      • $a_{ij}$ : the corresponding entry of the adjacency matrix
      • c : a constant

    betweenness centrality

    • measures how often a node lies on the shortest path between two other nodes

    closeness centrality

    • measures the average shortest path length between a node and all other nodes
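The eigenvector-centrality recurrence above can be sketched with plain power iteration. The 4-node adjacency list here is a made-up example, not from the book:

```python
# Power iteration for eigenvector centrality on a small undirected graph.
# The centrality vector satisfies x_i = c * sum_j a_ij x_j, i.e. A x = (1/c) x.
adj = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1],
    3: [1],
}

def eigenvector_centrality(adj, iters=100):
    # Initialize with node degrees, as in the notes.
    x = {u: float(len(nbrs)) for u, nbrs in adj.items()}
    for _ in range(iters):
        # One aggregation step: x_i <- sum of neighbor scores.
        new_x = {u: sum(x[v] for v in adj[u]) for u in adj}
        # Normalize so the values stay bounded (this plays the role of c).
        norm = sum(val * val for val in new_x.values()) ** 0.5
        x = {u: val / norm for u, val in new_x.items()}
    return x

cent = eigenvector_centrality(adj)
```

Node 1 has the most (and best-connected) neighbors, so it should come out with the highest score.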

clustering coefficient : considers how tightly connected a node’s neighbors are to each other

  • $c_u$ : clustering coefficient (0~1)
    • measures the proportion of closed triangles in a node’s local neighborhood
    • measures how tightly clustered a node’s neighborhood is
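As a sketch, the closed-triangle proportion can be computed directly from neighbor sets; the toy graph is my own example:

```python
# Local clustering coefficient: fraction of possible neighbor pairs
# that are themselves connected (i.e. closed triangles around u).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}

def clustering_coefficient(adj, u):
    nbrs = adj[u]
    d = len(nbrs)
    if d < 2:
        return 0.0  # no neighbor pairs exist, treat as 0
    # Count edges among u's neighbors (each unordered pair once).
    links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
    return links / (d * (d - 1) / 2)

# Node 0's neighbors are {1, 2, 3}; only the pair (1, 2) is connected,
# so c_0 = 1/3.
c0 = clustering_coefficient(adj, 0)
```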

Graph-level

Bag of nodes

  • just aggregate node-level statistics (e.g., degree, centrality, clustering coefficient) and use the aggregated information as a graph-level representation
  • Drawbacks :
    • entirely based upon local node-level information
    • can miss important global properties in the graph

iterative neighborhood aggregation

  • extract node-level features that contain more information than just their local ego graph, and then aggregate these richer features into a graph-level representation.

  • example : Weisfeiler Lehman (WL) algorithm and kernel

    • label initialization : typically the node degree
    • iterate : in each round, aggregate the labels of a node’s neighbors (via a hash) to obtain the node’s new label

    the WL kernel is computed by measuring the difference between the resultant label sets for two graphs.

One way to test graphs for isomorphism : run the WL algorithm for K rounds and check whether the two graphs end up with identical label sets.

WL algorithm
My understanding : after K rounds of aggregation, the WL labels do a good job of keeping different nodes’ features distinct (because the aggregation uses hashing rather than, say, averaging or taking a maximum), so the resulting label sets can be compared to check whether two graphs are isomorphic.
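A minimal sketch of the 1-WL relabeling described above, assuming graphs are given as adjacency dicts: labels start from degrees, each round re-hashes a node's label together with the sorted multiset of its neighbors' labels, and two graphs are compared by their final label multisets.

```python
from collections import Counter

def wl_labels(adj, rounds=3):
    # Initialize labels with node degrees.
    labels = {u: len(nbrs) for u, nbrs in adj.items()}
    for _ in range(rounds):
        # New label = hash of (own label, sorted neighbor labels).
        labels = {
            u: hash((labels[u], tuple(sorted(labels[v] for v in adj[u]))))
            for u in adj
        }
    # Return the multiset of labels, ignoring node identity.
    return Counter(labels.values())

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
relabeled = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}  # isomorphic copy
path3 = {0: [1], 1: [0, 2], 2: [1]}
```

Isomorphic graphs always get identical label multisets; differing multisets prove non-isomorphism (the converse does not hold in general).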

graphlets

  • simply count the occurrence of different small subgraph structures
  • graphlet kernel involves
    • enumerating all possible graph structures of a particular size
    • counting how many times they occur in the full graph.
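A brute-force sketch of a size-3 graphlet count (closed triangles vs. open wedges); the diamond graph used here is my own toy example:

```python
from itertools import combinations

# Count connected 3-node graphlets by checking every node triple.
def count_3graphlets(adj):
    triangles = wedges = 0
    for u, v, w in combinations(sorted(adj), 3):
        edges = (v in adj[u]) + (w in adj[u]) + (w in adj[v])
        if edges == 3:
            triangles += 1      # closed triangle
        elif edges == 2:
            wedges += 1         # open path on 3 nodes
    return {"triangle": triangles, "wedge": wedges}

# A 4-cycle with one diagonal: edges (0,1),(1,2),(2,3),(3,0),(0,2).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
counts = count_3graphlets(adj)
```

Real graphlet kernels enumerate larger sizes and handle isomorphism classes; this only illustrates the counting idea.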

path-based methods

  • rather than enumerating all possible graphlets, examines the different kinds of paths that occur in the graph
    • random walk kernel
      • run random walks
      • count the occurrence of different degree sequences
    • shortest-path kernel
      • only use the shortest paths between nodes
      • advs : extract rich structural information & avoid many of the combinatorial pitfalls of graph data
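As a sketch of the shortest-path idea: represent each graph by the histogram of its pairwise shortest-path lengths (via BFS), then compare two graphs by the dot product of their histograms. The toy graphs are my own examples, and this omits the label information a full shortest-path kernel would also use.

```python
from collections import Counter, deque

def sp_histogram(adj):
    # Histogram of shortest-path lengths over unordered node pairs.
    hist = Counter()
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:  # BFS from s (unweighted graph)
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if s < t:  # count each pair once (assumes comparable node ids)
                hist[d] += 1
    return hist

def sp_kernel(adj1, adj2):
    h1, h2 = sp_histogram(adj1), sp_histogram(adj2)
    return sum(h1[k] * h2[k] for k in h1)

path3 = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```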

2.2 Neighborhood Overlap Detection

The statistics discussed above do not quantify the relationships between nodes, so they are not very useful for the task of relation prediction.

My understanding : neighborhood overlap is really about similarity between nodes. In a graph, the more two nodes’ neighborhoods overlap (e.g., the more common neighbors they share), the more similar they are, and the more likely an edge exists between them.

Local overlap statistics

  • functions of the number of common neighbors two nodes share
  • count the number of common neighbors
    • quantify the overlap between node neighborhoods while reducing the bias caused by node degree
    • for example : Sorensen index, Salton index, Jaccard overlap
  • count the common neighbors while also weighting them by importance
    • for example
      • Resource Allocation (RA) index : counts the inverse degrees of the common neighbors
      • Adamic-Adar (AA) index : use the inverse logarithm of the degrees
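The local overlap indices listed above can be sketched in a few lines over neighbor sets. The graph and node pair are my own example; note that AA is undefined when a common neighbor has degree 1 (log 1 = 0), which does not occur here.

```python
import math

# Local overlap scores for a candidate edge (u, v), given neighbor sets.
def local_overlap(adj, u, v):
    common = adj[u] & adj[v]
    du, dv = len(adj[u]), len(adj[v])
    return {
        "common": len(common),
        "jaccard": len(common) / len(adj[u] | adj[v]),
        "sorensen": 2 * len(common) / (du + dv),
        "salton": len(common) / math.sqrt(du * dv),
        # RA index: sum of inverse degrees of the common neighbors.
        "ra": sum(1 / len(adj[w]) for w in common),
        # AA index: sum of inverse log-degrees of the common neighbors.
        "aa": sum(1 / math.log(len(adj[w])) for w in common),
    }

adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1},
    3: {0, 1},
}
scores = local_overlap(adj, 0, 1)  # common neighbors of 0 and 1: {2, 3}
```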

Global overlap statistics

  • extends the local view to the case where two nodes share no immediate neighbors but still belong to the same community in the graph

Katz index

  • counts the number of paths of every length between a pair of nodes, with paths of different lengths weighted differently
  • $S_{Katz}[u,v] = \sum_{i=1}^{\infty}\beta^i A^i[u,v]$
    • $\beta$ : a user-defined parameter that controls the weighting of short vs. long paths; generally shorter paths receive larger weight
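When $\beta$ is smaller than the inverse of the largest eigenvalue of A, the geometric series converges and has a closed form, $(I - \beta A)^{-1} - I$. This numpy sketch checks the closed form against a truncated sum on a toy matrix of my own:

```python
import numpy as np

# Katz index: sum_{i>=1} beta^i A^i = (I - beta*A)^{-1} - I,
# valid when beta < 1 / (largest eigenvalue of A).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
beta = 0.1  # well below 1 / spectral radius, so the series converges

S_katz = np.linalg.inv(np.eye(4) - beta * A) - np.eye(4)

# Truncated version of the defining sum, for comparison.
S_trunc = sum(beta ** i * np.linalg.matrix_power(A, i) for i in range(1, 50))
```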

Leicht, Holme, and Newman (LHN) similarity

  • the Katz index is strongly biased by node degree: high-degree nodes lie on many more paths, so their summed scores are inflated; the LHN index addresses this problem

I don’t fully understand the LHN derivation.

Random walk methods

  • consider random walks rather than exact counts of paths over the graph
  • Personalized PageRank algorithm
    • similarity between two nodes is proportional to the probability of reaching one node via a random walk starting from the other
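A minimal power-iteration sketch of Personalized PageRank: with probability $1-\alpha$ the walk follows a random outgoing edge, with probability $\alpha$ it teleports back to the seed node. The graph and parameter values are my own toy choices.

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    nodes = list(adj)
    # Start with all probability mass on the seed.
    p = {u: 1.0 if u == seed else 0.0 for u in nodes}
    for _ in range(iters):
        # Restart mass goes back to the seed each step.
        new_p = {u: alpha * (1.0 if u == seed else 0.0) for u in nodes}
        for u in nodes:
            share = (1 - alpha) * p[u] / len(adj[u])
            for v in adj[u]:  # spread remaining mass over neighbors
                new_p[v] += share
        p = new_p
    return p

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
scores = personalized_pagerank(adj, seed=0)
```

Nodes closer to the seed receive higher stationary probability, which is exactly the similarity score used for relation prediction.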

2.3 Graph Laplacians and Spectral Methods

Graph Laplacians

  • unnormalized laplacian
    • definition : L = D - A
      • L : Laplacian matrix
      • D : degree matrix
      • A : adjacency matrix


  • properties of L
    • symmetric and positive semi-definite
    • $x^TLx=\sum_{(u,v)\in \mathcal{E}}(x[u]-x[v])^2$
    • has $|V|$ non-negative eigenvalues
    • theorem : the multiplicity of the eigenvalue 0 of the Laplacian equals the number of connected components in the graph
  • normalized laplacian
    • symmetric normalized laplacian
      • $L_{sym}=D^{-\frac{1}{2}}LD^{-\frac{1}{2}}$
    • random walk laplacian
      • $L_{RW}=D^{-1}L$
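The two key properties of the unnormalized Laplacian above (the quadratic form and the zero-eigenvalue theorem) can be verified numerically on a small graph with two connected components; the graph and test vector are my own example:

```python
import numpy as np

# Unnormalized Laplacian L = D - A for a graph with two components:
# a triangle {0, 1, 2} and a single edge {3, 4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A

# Property 1: x^T L x = sum over edges of (x[u] - x[v])^2, for any x.
x = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
quad = x @ L @ x
edge_sum = sum((x[u] - x[v]) ** 2
               for u in range(5) for v in range(u + 1, 5) if A[u, v])

# Property 2: multiplicity of eigenvalue 0 = number of components.
eigvals = np.linalg.eigvalsh(L)
num_zero = int(np.sum(np.isclose(eigvals, 0.0)))
```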

Graph Cuts and Clustering

Graph cuts

  • partition a graph into K non-overlapping subsets $A_1,...,A_K$

Method 1 : minimize the cut value, i.e. the number of edges crossing between different subsets

  • it tends to simply make clusters that consist of a single node

Method 2 (RatioCut) - fixes the isolated-node problem by normalizing each cut term by the number of nodes in the cluster

  • enforces that the partitions are all reasonably large

Method 3 (NCut) - normalizes each cut term by the volume (total degree) of the cluster

  • enforces that all clusters have a similar number of edges incident to their nodes

Generalized spectral clustering

  • steps
    1. find the K smallest eigenvectors of L (excluding the smallest) :
      $e_{|V|-1}, e_{|V|-2}, ..., e_{|V|-K}$
    2. form the matrix $U \in \mathbb{R}^{|V| \times (K-1)}$ with the eigenvectors from step 1 as columns
    3. represent each node by its corresponding row in the matrix U,
      i.e. $Z_u = U[u], \forall u \in V$
    4. run K-means clustering on the embeddings $Z_u, \forall u \in V$
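The steps above can be sketched for the special case K = 2, where the 1-D embedding is the second-smallest eigenvector of L (the Fiedler vector) and K-means reduces to splitting nodes by its sign. The two-triangles-joined-by-a-bridge graph is my own toy example:

```python
import numpy as np

# Graph: two triangles {0,1,2} and {3,4,5} joined by a bridge edge (2, 3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = vecs[:, 1]             # skip the trivial constant eigenvector
labels = (fiedler > 0).astype(int)  # sign split = 2-way spectral clustering
```

The sign split recovers the two triangles, since cutting the single bridge edge is the cheapest balanced cut.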