- A perceptron is just a linear transformation plus a nonlinear function
- $x^{l+1} = \sigma(W^l x^l + b^l)$. A key statement in the lecture: a graph is permutation invariant.
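A minimal numpy sketch of this layer; the ReLU nonlinearity and the concrete shapes are illustrative assumptions, not prescribed by the lecture:

```python
import numpy as np

def mlp_layer(x, W, b):
    # One perceptron layer: linear transformation + nonlinearity,
    # x^{l+1} = sigma(W^l x^l + b^l); ReLU is an assumed choice of sigma.
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # hypothetical weights mapping 3 -> 4 features
b = rng.normal(size=4)       # hypothetical bias
x = rng.normal(size=3)       # input feature vector
h = mlp_layer(x, W, b)
print(h.shape)  # (4,)
```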
- Graphs have no canonical node ordering
- There are many possible node order plans
- permutation invariant function
- Consider learning a function $f$ that maps a graph $G = (A, X)$ to a vector in $\mathbb{R}^d$
- Then, if $f(A_i, X_i) = f(A_j, X_j)$ for any order plans $i$ and $j$, we formally say $f$ is a permutation invariant function
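A quick numerical check of invariance. Here $f$ is taken to be a sum-pooling graph readout (an assumed example, not the lecture's specific function); an order plan corresponds to a permutation matrix $P$ acting as $(PAP^\top, PX)$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 5, 3
A = rng.integers(0, 2, size=(m, m))
A = np.triu(A, 1)
A = A + A.T                      # symmetric adjacency matrix, no self-loops
X = rng.normal(size=(m, d))      # node feature matrix

def f(A, X):
    # Graph-level readout: sum the node features, so row order cannot matter.
    return X.sum(axis=0)

# An order plan corresponds to a permutation matrix P acting as (P A P^T, P X).
P = np.eye(m)[rng.permutation(m)]
assert np.allclose(f(A, X), f(P @ A @ P.T, P @ X))
print("f is permutation invariant under this P")
```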
- permutation equivariance
- Consider learning a function $f$ that maps a graph $G = (A, X)$ to a matrix in $\mathbb{R}^{m \times d}$
- The graph has $m$ nodes, and each row of the output is the embedding of a node.
- Similarly, if this property holds for any pair of order plans $i$ and $j$ (permuting the input nodes permutes the output rows in the same way), we say $f$ is a permutation equivariant function.
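Equivariance can be checked the same way. Here $f$ is a toy one-step message-passing map $f(A, X) = \mathrm{ReLU}(A X W)$; the specific form is an illustrative assumption, not the lecture's exact layer:

```python
import numpy as np

rng = np.random.default_rng(2)
m, d = 5, 3
A = rng.integers(0, 2, size=(m, m)).astype(float)
A = np.triu(A, 1)
A = A + A.T                   # symmetric adjacency matrix
X = rng.normal(size=(m, d))   # node feature matrix
W = rng.normal(size=(d, d))   # hypothetical shared weight matrix

def f(A, X):
    # One message-passing step: row v aggregates its neighbors' features.
    return np.maximum(0.0, A @ X @ W)

P = np.eye(m)[rng.permutation(m)]
# Permuting the input nodes permutes the output rows the same way:
assert np.allclose(f(P @ A @ P.T, P @ X), P @ f(A, X))
print("f is permutation equivariant under this P")
```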
- A graph neural network is built from multiple permutation equivariant/invariant functions
- The computation graph has the equivariance property
- What is the function inside the black box in the diagram?
- Here $A_v$ denotes the row vector given by row $v$ of the adjacency matrix
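One common instantiation of the black-box function is mean aggregation over neighbors followed by a linear map and a nonlinearity. The sketch below assumes that form; the weight names `W` and `B` and the ReLU choice are illustrative:

```python
import numpy as np

def gnn_layer(A, H, W, B):
    # One GNN layer with mean aggregation:
    #   h_v' = ReLU(W * mean_{u in N(v)} h_u + B * h_v)
    # Row A_v of the adjacency matrix selects v's neighbors.
    deg = A.sum(axis=1, keepdims=True)           # |N(v)| for each node
    neigh_mean = (A @ H) / np.maximum(deg, 1)    # average neighbor embeddings
    return np.maximum(0.0, neigh_mean @ W.T + H @ B.T)

rng = np.random.default_rng(3)
m, d = 6, 4
A = rng.integers(0, 2, size=(m, m)).astype(float)
A = np.triu(A, 1)
A = A + A.T
H = rng.normal(size=(m, d))                      # current node embeddings
W = rng.normal(size=(d, d))
B = rng.normal(size=(d, d))
print(gnn_layer(A, H, W, B).shape)  # (6, 4)
```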
- How do we train the model? Tasks fall into two classes: supervised and unsupervised. Supervised tasks use the usual approach (a loss against labels); unsupervised tasks rely on the node-similarity principle: nodes that are close in the graph should receive similar embeddings.
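The unsupervised similarity principle can be sketched as a pairwise objective: treat adjacency as the similarity label and score each node pair by the sigmoid of the embedding dot product. This specific loss is an illustrative assumption, not the lecture's exact objective:

```python
import numpy as np

def similarity_loss(Z, A):
    # Binary cross-entropy over all node pairs (u, v):
    # label = A[u, v], predicted score = sigmoid(z_u . z_v).
    scores = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))   # pairwise sigmoid similarities
    eps = 1e-9                                  # numerical safety for log
    bce = -(A * np.log(scores + eps) + (1 - A) * np.log(1 - scores + eps))
    return bce.mean()

rng = np.random.default_rng(4)
m, d = 5, 3
A = rng.integers(0, 2, size=(m, m)).astype(float)
A = np.triu(A, 1)
A = A + A.T
Z = rng.normal(size=(m, d))                     # node embeddings to train
print(similarity_loss(Z, A))  # scalar loss; lower means embeddings match A better
```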
- A CNN can be viewed as a special GNN with a fixed neighbor size and ordering; a CNN is not permutation equivariant. A Transformer can be viewed as a special GNN over a fully connected word graph.
Viewed through this formula, the CNN case is easy to understand: in a CNN, $N(v)$ is fixed, so it is directly replaced by $W^l$; and $u \in N(v) \cup \{v\}$ splits into $v$'s neighbors plus the node $v$ itself.
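The comparison above can be written out explicitly. This is a reconstruction consistent with the notation in this note, not a verbatim copy of the slide:

```latex
% GNN layer: neighbors are unordered, so one shared weight matrix W^l;
% splitting N(v) \cup \{v\} separates the neighbor term from the self term
h_v^{(l+1)} = \sigma\Big(\sum_{u \in N(v) \cup \{v\}} W^l h_u^{(l)}\Big)
            = \sigma\Big(\sum_{u \in N(v)} W^l h_u^{(l)} + W^l h_v^{(l)}\Big)
% CNN layer: N(v) has fixed size and order, so each relative position u
% can carry its own weight matrix W_u^l, which breaks permutation equivariance
h_v^{(l+1)} = \sigma\Big(\sum_{u \in N(v)} W_u^l h_u^{(l)} + B^l h_v^{(l)}\Big)
```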
CS224W course notes - 06 - GNN1