class GCNConv(MessagePassing):
r"""The graph convolutional operator from the `"Semi-Supervised
Classification with Graph Convolutional Networks"
<https://arxiv.org/abs/1609.02907>`_ paper
.. math::
    \mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}}
    \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},
where :math:`\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}` denotes the
adjacency matrix with inserted self-loops and
:math:`\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}` its diagonal degree matrix.
The adjacency matrix can include other values than :obj:`1` representing
edge weights via the optional :obj:`edge_weight` tensor.
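The symmetric normalization above can be sketched in a few lines of NumPy (an illustrative toy graph; variable names are not from the library):

```python
import numpy as np

# Toy undirected graph on 3 nodes: edges (0, 1) and (1, 2).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

A_hat = A + np.eye(3)                 # \hat{A} = A + I (insert self-loops)
d_hat = A_hat.sum(axis=1)             # \hat{D}_{ii} = sum_j \hat{A}_{ij}
D_inv_sqrt = np.diag(d_hat ** -0.5)   # \hat{D}^{-1/2}

A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetrically normalized adjacency

X = np.random.randn(3, 4)             # node features
Theta = np.random.randn(4, 2)         # weight matrix \Theta
X_prime = A_norm @ X @ Theta          # X' = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X \Theta
```

Because :math:`\mathbf{\hat{A}}` is symmetric and :math:`\mathbf{\hat{D}}` diagonal, the normalized adjacency stays symmetric.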
Its node-wise formulation is given by:
.. math::
    \mathbf{x}^{\prime}_i = \mathbf{\Theta} \sum_{j \in \mathcal{N}(v) \cup
    \{ i \}} \frac{e_{j,i}}{\sqrt{\hat{d}_j \hat{d}_i}} \mathbf{x}_j

with :math:`\hat{d}_i = 1 + \sum_{j \in \mathcal{N}(i)} e_{j,i}`, where
:math:`e_{j,i}` denotes the edge weight from source node :obj:`j` to target
node :obj:`i` (default: :obj:`1.0`)
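A NumPy check (illustrative, unit edge weights) that this node-wise formulation agrees with the matrix form above:

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
e = A.copy()                      # unit edge weights e_{j,i}
d_hat = 1.0 + e.sum(axis=0)       # \hat{d}_i = 1 + sum_{j in N(i)} e_{j,i}

X = np.random.randn(3, 4)
Theta = np.random.randn(4, 2)

# Node-wise: x'_i = Theta * sum_{j in N(i) ∪ {i}} e_{j,i} / sqrt(d_j d_i) * x_j
X_prime = np.zeros((3, 2))
for i in range(3):
    acc = np.zeros(4)
    for j in range(3):
        if e[j, i] > 0 or j == i:
            w = (e[j, i] if j != i else 1.0) / np.sqrt(d_hat[j] * d_hat[i])
            acc += w * X[j]
    X_prime[i] = acc @ Theta

# Matrix form for comparison.
A_hat = A + np.eye(3)
D_inv_sqrt = np.diag(d_hat ** -0.5)
X_prime_mat = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Theta
```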
in_channels (int): Size of each input sample.
out_channels (int): Size of each output sample.
improved (bool, optional): If set to :obj:`True`, the layer computes
    :math:`\mathbf{\hat{A}}` as :math:`\mathbf{A} + 2\mathbf{I}`.
    (default: :obj:`False`)
cached (bool, optional): If set to :obj:`True`, the layer will cache
    the computation of :math:`\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}}
    \mathbf{\hat{D}}^{-1/2}` on first execution, and will use the
    cached version for further executions.
    This parameter should only be set to :obj:`True` in transductive
    learning scenarios. (default: :obj:`False`)
add_self_loops (bool, optional): If set to :obj:`False`, will not add
    self-loops to the input graph. (default: :obj:`True`)
normalize (bool, optional): Whether to add self-loops and compute
    symmetric normalization coefficients on the fly.
    (default: :obj:`True`)
bias (bool, optional): If set to :obj:`False`, the layer will not learn
    an additive bias. (default: :obj:`True`)
**kwargs (optional): Additional arguments of
    :class:`torch_geometric.nn.conv.MessagePassing`.
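Putting the pieces together, the propagation rule and the :obj:`improved` and :obj:`bias` options can be sketched as a plain-NumPy layer (illustrative only; the real operator is a :class:`torch.nn.Module` built on :class:`MessagePassing`, and the class/attribute names below are made up):

```python
import numpy as np

class TinyGCNConv:
    """NumPy sketch of the GCN propagation rule (dense, no autograd)."""

    def __init__(self, in_channels, out_channels, improved=False, bias=True, seed=0):
        rng = np.random.default_rng(seed)
        self.theta = rng.standard_normal((in_channels, out_channels)) * 0.1
        self.b = np.zeros(out_channels) if bias else None
        self.improved = improved

    def __call__(self, x, adj):
        n = adj.shape[0]
        # \hat{A} = A + I, or A + 2I when improved=True
        a_hat = adj + (2.0 if self.improved else 1.0) * np.eye(n)
        d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
        out = d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ self.theta
        return out + self.b if self.b is not None else out

conv = TinyGCNConv(in_channels=4, out_channels=2)
x = np.ones((3, 4))
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
out = conv(x, adj)
```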
GAT:

.. math::
    \mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
    \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j}
where the attention coefficients :math:`\alpha_{i,j}` are computed as
.. math::
    \alpha_{i,j} =
    \frac{
    \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
    [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
    \right)\right)}
    {\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
    \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
    [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
    \right)\right)}
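A single-head NumPy sketch of this attention computation (the graph, shapes, and the explicit neighborhood dict are illustrative):

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 3, 4, 2
X = rng.standard_normal((num_nodes, in_dim))
Theta = rng.standard_normal((in_dim, out_dim))
a = rng.standard_normal(2 * out_dim)      # attention vector a

H = X @ Theta                             # \Theta x_i for every node
# N(i) ∪ {i} for a path graph 0 - 1 - 2 (self-loops included):
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}

alpha = np.zeros((num_nodes, num_nodes))
for i, nbrs in neighbors.items():
    # LeakyReLU(a^T [Theta x_i || Theta x_j]) for each j in N(i) ∪ {i}
    logits = np.array([leaky_relu(a @ np.concatenate([H[i], H[j]]))
                       for j in nbrs])
    weights = np.exp(logits) / np.exp(logits).sum()  # softmax over N(i) ∪ {i}
    for j, w in zip(nbrs, weights):
        alpha[i, j] = w

# x'_i = alpha_{i,i} Theta x_i + sum_{j in N(i)} alpha_{i,j} Theta x_j
X_prime = alpha @ H
```

Each row of :obj:`alpha` is a softmax over a node's neighborhood plus itself, so the coefficients per node sum to one.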
in_channels (int or tuple): Size of each input sample. A tuple
corresponds to the sizes of source and target dimensionalities.
out_channels (int): Size of each output sample.
heads (int, optional): Number of multi-head-attentions.
(default: :obj:`1`)
concat (bool, optional): If set to :obj:`False`, the multi-head
attentions are averaged instead of concatenated.
(default: :obj:`True`)
negative_slope (float, optional): LeakyReLU angle of the negative
slope. (default: :obj:`0.2`)
dropout (float, optional): Dropout probability of the normalized
attention coefficients which exposes each node to a stochastically
sampled neighborhood during training. (default: :obj:`0`)
add_self_loops (bool, optional): If set to :obj:`False`, will not add
self-loops to the input graph. (default: :obj:`True`)
bias (bool, optional): If set to :obj:`False`, the layer will not learn
an additive bias. (default: :obj:`True`)
**kwargs (optional): Additional arguments of
:class:`torch_geometric.nn.conv.MessagePassing`.
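The :obj:`heads` and :obj:`concat` options amount to a choice between stacking and averaging per-head outputs; a minimal NumPy sketch (random stand-in values for the head outputs):

```python
import numpy as np

heads, num_nodes, out_channels = 4, 3, 2
rng = np.random.default_rng(0)
# Stand-in for the per-head attention outputs.
head_outs = rng.standard_normal((heads, num_nodes, out_channels))

# concat=True: feature dimension becomes heads * out_channels.
concat = np.concatenate([head_outs[h] for h in range(heads)], axis=-1)
# concat=False: heads are averaged, feature dimension stays out_channels.
avg = head_outs.mean(axis=0)
```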