GCNConv

class GCNConv(MessagePassing):
r"""The graph convolutional operator from the "Semi-supervised Classification with Graph Convolutional Networks" <https://arxiv.org/abs/1609.02907>_ paper

.. math::
    \mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}}
    \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},

where :math:`\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}` denotes the
adjacency matrix with inserted self-loops and
:math:`\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}` its diagonal degree matrix.
The adjacency matrix can include values other than :obj:`1`, representing
edge weights via the optional :obj:`edge_weight` tensor.
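
To make the normalization concrete, here is a minimal sketch that builds
:math:`\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}`
densely; the 3-node toy graph is made up for illustration:

```python
import torch

# Toy 3-node graph with edges 0->1, 1->0, 1->2, 2->1 (made up for illustration).
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
num_nodes = 3

# \hat{A} = A + I: adjacency matrix with inserted self-loops.
A = torch.zeros(num_nodes, num_nodes)
A[edge_index[0], edge_index[1]] = 1.0
A_hat = A + torch.eye(num_nodes)

# \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}: symmetric normalization.
deg = A_hat.sum(dim=1)
D_inv_sqrt = torch.diag(deg.pow(-0.5))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
# X' would then be A_norm @ X @ Theta for a weight matrix Theta.
```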

Its node-wise formulation is given by:

.. math::
    \mathbf{x}^{\prime}_i = \mathbf{\Theta} \sum_{j \in \mathcal{N}(i) \cup
    \{ i \}} \frac{e_{j,i}}{\sqrt{\hat{d}_j \hat{d}_i}} \mathbf{x}_j,

with :math:`\hat{d}_i = 1 + \sum_{j \in \mathcal{N}(i)} e_{j,i}`,
where :math:`e_{j,i}` denotes the edge weight from source node :obj:`j` to
target node :obj:`i` (default: :obj:`1.0`).
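
In code, the edge weights :math:`e_{j,i}` are passed through the optional
third argument of the layer's forward call. A minimal sketch, with a
made-up toy graph and arbitrary channel sizes:

```python
import torch
from torch_geometric.nn import GCNConv

edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
# Optional per-edge weights e_{j,i}; if omitted, every edge counts as 1.0.
edge_weight = torch.tensor([0.5, 0.5, 2.0, 2.0])

x = torch.randn(3, 16)                  # 3 nodes, 16 input features
conv = GCNConv(16, 32)
out = conv(x, edge_index, edge_weight)  # shape: [3, 32]
```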

Args:
    in_channels (int): Size of each input sample.
    out_channels (int): Size of each output sample.
    improved (bool, optional): If set to :obj:`True`, the layer computes
        :math:`\mathbf{\hat{A}}` as :math:`\mathbf{A} + 2\mathbf{I}`.
        (default: :obj:`False`)
    cached (bool, optional): If set to :obj:`True`, the layer will cache
        the computation of :math:`\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}}
        \mathbf{\hat{D}}^{-1/2}` on first execution, and will use the
        cached version for further executions.
        This parameter should only be set to :obj:`True` in transductive
        learning scenarios. (default: :obj:`False`)
    add_self_loops (bool, optional): If set to :obj:`False`, will not add
        self-loops to the input graph. (default: :obj:`True`)
    normalize (bool, optional): Whether to add self-loops and compute
        symmetric normalization coefficients on the fly.
        (default: :obj:`True`)
    bias (bool, optional): If set to :obj:`False`, the layer will not learn
        an additive bias. (default: :obj:`True`)
    **kwargs (optional): Additional arguments of
        :class:`torch_geometric.nn.conv.MessagePassing`.
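
As a quick illustration of these arguments, a layer configured for a fixed
(transductive) graph might be instantiated like this; the channel sizes are
made up:

```python
from torch_geometric.nn import GCNConv

# improved=True uses \hat{A} = A + 2I; cached=True is safe here because
# the graph does not change between forward passes (transductive setting).
conv = GCNConv(in_channels=16, out_channels=32,
               improved=True, cached=True,
               add_self_loops=True, normalize=True, bias=True)
```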
GATConv

The graph attentional operator from the `"Graph Attention Networks" <https://arxiv.org/abs/1710.10903>`_ paper:

.. math::
    \mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
    \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},

where the attention coefficients :math:`\alpha_{i,j}` are computed as
.. math::
    \alpha_{i,j} =
    \frac{
    \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
    [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
    \right)\right)}
    {\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
    \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
    [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
    \right)\right)}.

Args:
    in_channels (int or tuple): Size of each input sample. A tuple
        corresponds to the sizes of source and target dimensionalities.
    out_channels (int): Size of each output sample.
    heads (int, optional): Number of multi-head-attentions.
        (default: :obj:`1`)
    concat (bool, optional): If set to :obj:`False`, the multi-head
        attentions are averaged instead of concatenated.
        (default: :obj:`True`)
    negative_slope (float, optional): LeakyReLU angle of the negative
        slope. (default: :obj:`0.2`)
    dropout (float, optional): Dropout probability of the normalized
        attention coefficients which exposes each node to a stochastically
        sampled neighborhood during training. (default: :obj:`0`)
    add_self_loops (bool, optional): If set to :obj:`False`, will not add
        self-loops to the input graph. (default: :obj:`True`)
    bias (bool, optional): If set to :obj:`False`, the layer will not learn
        an additive bias. (default: :obj:`True`)
    **kwargs (optional): Additional arguments of
        :class:`torch_geometric.nn.conv.MessagePassing`.
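
A short usage sketch of these arguments, with toy sizes made up for
illustration; with :obj:`concat=True` the per-head outputs are concatenated,
so the output dimension is `heads * out_channels`:

```python
import torch
from torch_geometric.nn import GATConv

x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])

# 4 attention heads with 8 output channels each; concatenated -> 32 channels.
conv = GATConv(in_channels=16, out_channels=8, heads=4,
               concat=True, negative_slope=0.2, dropout=0.2)
out = conv(x, edge_index)  # shape: [3, 4 * 8] = [3, 32]
```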
PyTorch Geometric (PyG) is a graph neural network library built on top of PyTorch that provides tools and models for working with graph data. GCNConv is a graph convolution layer in PyG used to perform convolution over graph data.

GCNConv follows the idea of the Graph Convolutional Network (GCN): it updates each node's representation by aggregating the features of its neighboring nodes. GCNConv takes a node feature matrix and the graph connectivity as input and outputs the updated node feature matrix.

Using GCNConv in PyG involves the following steps:

1. Import the required libraries and modules:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
```

2. Create the node features and graph connectivity:

```python
x = torch.tensor(...)           # node feature matrix
edge_index = torch.tensor(...)  # graph connectivity in COO format
```

3. Define the GCNConv layer, passing the dimensions of the input data:

```python
conv = GCNConv(in_channels, out_channels)
```

Here `in_channels` is the input feature dimension and `out_channels` is the output feature dimension.

4. Apply the GCNConv layer in your model:

```python
x = conv(x, edge_index)  # perform the graph convolution
```

`x` is now the updated node feature matrix.

5. Post-process the updated node features as needed, for example by applying a nonlinear activation:

```python
x = F.relu(x)  # apply the ReLU activation
```

This is a simple example of using GCNConv. You can adjust and extend it to fit your data and task; see the PyG documentation or related tutorials for more detail.
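
Putting the steps together, here is a minimal end-to-end sketch of a two-layer GCN; the toy graph, feature sizes, and output dimension are made up for illustration:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)       # first graph convolution
        x = F.relu(x)                       # nonlinearity between layers
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)       # second graph convolution
        return x


# Toy input: 4 nodes with 16 features each, a small directed edge list.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
model = GCN(in_channels=16, hidden_channels=32, out_channels=7)
out = model(x, edge_index)  # shape: [4, 7]
```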
