Writing GNN Modules with DGL

This article shows how to implement a custom graph neural network (GNN) module with the DGL library, including an implementation of the GraphSAGE convolution and a weighted SAGEConv built from DGL's message passing APIs. It also demonstrates user-defined message and reduce functions and shares practical advice for customizing GNN modules.


Sometimes the model we need is more than a simple stack of existing GNN modules; we want to build a GNN module tailored to our own task. For example, we may want to design a new way of aggregating neighbor information that takes node importance or edge weights into account.

In this tutorial, you will learn to:

  • Understand DGL's message passing APIs
  • Implement the GraphSAGE convolution module yourself

Before reading this tutorial, you should know the basics of training a GNN for node classification, i.e., how to complete a node classification task with DGL.

GNNs and Message Passing

DGL follows the message passing paradigm proposed by Gilmer et al., who observed that, in essence, many GNN models fit the following framework:

$$m_{u\to v}^{(l)} = M^{(l)}\left(h_v^{(l-1)}, h_u^{(l-1)}, e_{u\to v}^{(l-1)}\right)$$

$$m_{v}^{(l)} = \sum_{u\in\mathcal{N}(v)} m_{u\to v}^{(l)}$$

$$h_v^{(l)} = U^{(l)}\left(h_v^{(l-1)}, m_v^{(l)}\right)$$

In DGL, $M^{(l)}$ is called the message function, $\sum$ the reduce function, and $U^{(l)}$ the update function. Note that $\sum$ here can stand for any aggregation, not necessarily summation.

For example, the GraphSAGE convolution (Hamilton et al., 2017) can be written as:

$$h_{\mathcal{N}(v)}^k \leftarrow \text{Average}\{h_u^{k-1}, \forall u\in\mathcal{N}(v)\}$$

$$h_v^k \leftarrow \text{ReLU}\left(W^k \cdot \text{CONCAT}(h_v^{k-1}, h_{\mathcal{N}(v)}^k)\right)$$

Note that message passing is directional: the message sent from node u to node v is not necessarily the same as the message sent from v to u in the opposite direction.
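
To make this concrete, here is a minimal sketch (not part of the original tutorial) on a tiny directed graph: update_all aggregates messages along incoming edges only, so a node with no in-edges receives nothing.

import dgl
import torch
import dgl.function as fn

# A directed graph with two edges: 0 -> 1 and 0 -> 2.
g = dgl.graph((torch.tensor([0, 0]), torch.tensor([1, 2])))
g.ndata["h"] = torch.tensor([[1.0], [2.0], [3.0]])

# Sum the features of each node's in-neighbors.
g.update_all(fn.copy_u("h", "m"), fn.sum("m", "h_sum"))
print(g.ndata["h_sum"])  # tensor([[0.], [1.], [1.]]) -- node 0 has no in-edges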

DGL provides an implementation of GraphSAGE as dgl.nn.SAGEConv. Below, we implement GraphSAGE ourselves using DGL's message passing APIs.
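
For reference, a short usage sketch of the built-in dgl.nn.SAGEConv module (the graph and layer sizes here are arbitrary):

import dgl
import torch
from dgl.nn import SAGEConv

g = dgl.rand_graph(100, 500)  # random graph with 100 nodes and 500 edges
feat = torch.randn(100, 10)
conv = SAGEConv(10, 16, aggregator_type="mean")
out = conv(g, feat)  # shape: (100, 16)

Now for our own implementation: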

import dgl
import torch
import torch.nn as nn
import dgl.function as fn

class GraphSAGE(nn.Module):
    """Graph convolution module used by the GraphSAGE model.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(GraphSAGE, self).__init__()
        self.linear = nn.Linear(in_feat * 2, out_feat)
        
    def forward(self, g, features):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        features : Tensor
            The input node feature.
        """
        with g.local_scope():
            g.ndata["h"] = features
            # Copy each node's feature "h" onto its out-edges as message "m",
            # then average the incoming messages into a new feature "h_mean".
            g.update_all(fn.copy_u("h", "m"), fn.mean("m", "h_mean"))
            h_mean = g.ndata["h_mean"]
            # Concatenate each node's own feature with its neighbor average.
            h_total = torch.cat([features, h_mean], dim=1)
            return self.linear(h_total)

The core of the code above is the g.update_all call, which gathers and averages the neighbor features. It involves three concepts (a small sanity check follows this list):

  • The message function fn.copy_u('h', 'm') copies the node feature named 'h' as the message 'm' sent to neighbors
  • The reduce function fn.mean('m', 'h_mean') averages all received messages named 'm' and saves the result as a new node feature 'h_mean'
  • update_all tells DGL to trigger the message function on all edges and the reduce function on all nodes
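
As a quick sanity check (a sketch, not from the original tutorial), we can verify on a tiny graph that update_all with fn.copy_u and fn.mean reproduces a manual neighbor average:

import dgl
import torch
import dgl.function as fn

# Cycle graph 0 -> 1 -> 2 -> 0: every node has exactly one in-neighbor.
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
g.ndata["h"] = torch.randn(3, 4)

g.update_all(fn.copy_u("h", "m"), fn.mean("m", "h_mean"))

# Node 1's only in-neighbor is node 0, so its "mean" is node 0's feature.
assert torch.allclose(g.ndata["h_mean"][1], g.ndata["h"][0])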

We can then stack our own GraphSAGE convolution layers to build a multi-layer GraphSAGE network:

import torch.nn.functional as F

class Model(nn.Module):
    
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = GraphSAGE(in_feats, h_feats)
        self.conv2 = GraphSAGE(h_feats, num_classes)
    
    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat)
        h = F.relu(h)
        h = self.conv2(g, h)
        return h

Training the GNN

Loading the Dataset

from dgl.data import CoraGraphDataset

dataset = CoraGraphDataset()
g = dataset[0]
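
# A quick inspection of what the graph carries (assuming the standard
# DGL Cora fields used by train() below):
#   g.ndata["feat"]       -> (2708, 1433) float node features
#   g.ndata["label"]      -> (2708,) class labels (7 classes)
#   g.ndata["train_mask"] -> (2708,) boolean mask with 140 True entries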

def train(g, model, num_epoch=200, learning_rate=0.002):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    all_result = []
    best_val_acc = 0
    best_test_acc = 0
    
    features = g.ndata["feat"]
    labels = g.ndata["label"]
    train_mask = g.ndata["train_mask"]
    test_mask = g.ndata["test_mask"]
    val_mask = g.ndata["val_mask"]
    
    for e in range(num_epoch):
        result = model(g, features)
        pred = result.argmax(dim=1)
        loss = F.cross_entropy(result[train_mask], labels[train_mask])
        
        train_acc = (pred[train_mask]==labels[train_mask]).float().mean()
        test_acc = (pred[test_mask]==labels[test_mask]).float().mean()
        val_acc = (pred[val_mask]==labels[val_mask]).float().mean()
        
        if best_val_acc < val_acc:
            best_test_acc, best_val_acc = test_acc, val_acc
            
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        all_result.append(result.detach())
        
        if e % 5 == 0:
            print('In epoch {}, loss: {:.3f}, val acc: {:.3f} (best {:.3f}), test acc: {:.3f} (best {:.3f})'.format(
                e, loss, val_acc, best_val_acc, test_acc, best_test_acc))
            
in_feat = g.ndata["feat"].shape[1]
h_feat = 16
num_classes = dataset.num_classes

model = Model(in_feat, h_feat, num_classes)

train(g, model)
  NumNodes: 2708
  NumEdges: 10556
  NumFeats: 1433
  NumClasses: 7
  NumTrainingSamples: 140
  NumValidationSamples: 500
  NumTestSamples: 1000
Done loading data from cached files.
In epoch 0, loss: 1.950, val acc: 0.162 (best 0.162), test acc: 0.149 (best 0.149)
In epoch 5, loss: 1.937, val acc: 0.162 (best 0.162), test acc: 0.149 (best 0.149)
In epoch 10, loss: 1.918, val acc: 0.166 (best 0.166), test acc: 0.151 (best 0.150)
In epoch 15, loss: 1.895, val acc: 0.182 (best 0.182), test acc: 0.170 (best 0.165)
In epoch 20, loss: 1.868, val acc: 0.272 (best 0.272), test acc: 0.260 (best 0.260)
In epoch 25, loss: 1.836, val acc: 0.368 (best 0.368), test acc: 0.376 (best 0.376)
In epoch 30, loss: 1.798, val acc: 0.448 (best 0.448), test acc: 0.443 (best 0.443)
In epoch 35, loss: 1.756, val acc: 0.510 (best 0.510), test acc: 0.511 (best 0.511)
In epoch 40, loss: 1.708, val acc: 0.546 (best 0.546), test acc: 0.552 (best 0.552)
In epoch 45, loss: 1.654, val acc: 0.582 (best 0.582), test acc: 0.572 (best 0.572)
In epoch 50, loss: 1.595, val acc: 0.602 (best 0.602), test acc: 0.596 (best 0.596)
In epoch 55, loss: 1.531, val acc: 0.622 (best 0.622), test acc: 0.612 (best 0.607)
In epoch 60, loss: 1.462, val acc: 0.634 (best 0.636), test acc: 0.629 (best 0.625)
In epoch 65, loss: 1.389, val acc: 0.644 (best 0.644), test acc: 0.643 (best 0.643)
In epoch 70, loss: 1.313, val acc: 0.654 (best 0.654), test acc: 0.657 (best 0.657)
In epoch 75, loss: 1.234, val acc: 0.662 (best 0.664), test acc: 0.668 (best 0.662)
In epoch 80, loss: 1.154, val acc: 0.676 (best 0.676), test acc: 0.678 (best 0.674)
In epoch 85, loss: 1.073, val acc: 0.680 (best 0.680), test acc: 0.691 (best 0.687)
In epoch 90, loss: 0.993, val acc: 0.688 (best 0.688), test acc: 0.696 (best 0.696)
In epoch 95, loss: 0.914, val acc: 0.684 (best 0.688), test acc: 0.706 (best 0.696)
In epoch 100, loss: 0.838, val acc: 0.694 (best 0.694), test acc: 0.720 (best 0.720)
In epoch 105, loss: 0.765, val acc: 0.698 (best 0.698), test acc: 0.726 (best 0.724)
In epoch 110, loss: 0.696, val acc: 0.698 (best 0.700), test acc: 0.734 (best 0.733)
In epoch 115, loss: 0.631, val acc: 0.700 (best 0.700), test acc: 0.740 (best 0.733)
In epoch 120, loss: 0.571, val acc: 0.706 (best 0.706), test acc: 0.745 (best 0.745)
In epoch 125, loss: 0.516, val acc: 0.708 (best 0.708), test acc: 0.750 (best 0.750)
In epoch 130, loss: 0.465, val acc: 0.716 (best 0.716), test acc: 0.753 (best 0.752)
In epoch 135, loss: 0.420, val acc: 0.720 (best 0.720), test acc: 0.755 (best 0.756)
In epoch 140, loss: 0.379, val acc: 0.722 (best 0.722), test acc: 0.758 (best 0.757)
In epoch 145, loss: 0.342, val acc: 0.724 (best 0.724), test acc: 0.759 (best 0.759)
In epoch 150, loss: 0.309, val acc: 0.726 (best 0.726), test acc: 0.761 (best 0.761)
In epoch 155, loss: 0.280, val acc: 0.726 (best 0.726), test acc: 0.762 (best 0.761)
In epoch 160, loss: 0.254, val acc: 0.730 (best 0.730), test acc: 0.761 (best 0.761)
In epoch 165, loss: 0.230, val acc: 0.730 (best 0.730), test acc: 0.763 (best 0.761)
In epoch 170, loss: 0.210, val acc: 0.732 (best 0.732), test acc: 0.765 (best 0.765)
In epoch 175, loss: 0.192, val acc: 0.732 (best 0.732), test acc: 0.768 (best 0.765)
In epoch 180, loss: 0.176, val acc: 0.732 (best 0.732), test acc: 0.769 (best 0.765)
In epoch 185, loss: 0.161, val acc: 0.730 (best 0.732), test acc: 0.769 (best 0.765)
In epoch 190, loss: 0.148, val acc: 0.730 (best 0.732), test acc: 0.769 (best 0.765)
In epoch 195, loss: 0.137, val acc: 0.730 (best 0.732), test acc: 0.770 (best 0.765)

Full Code

import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.function as fn
from dgl.data import CoraGraphDataset

class SAGEConv(nn.Module):
    """Graph convolution module used by the GraphSAGE model.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(SAGEConv, self).__init__()
        self.linear = nn.Linear(in_feat * 2, out_feat)
        
    def forward(self, g, h):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        h : Tensor
            The input node feature.
        """
        with g.local_scope():
            g.ndata["h"] = h
            g.update_all(message_func=fn.copy_u("h", "m"), reduce_func=fn.mean("m", "h_N"))
            h_N = g.ndata["h_N"]
            h_total = torch.cat([h, h_N], dim=1)
            return self.linear(h_total)

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = SAGEConv(in_feats, h_feats)
        self.conv2 = SAGEConv(h_feats, num_classes)
    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat)
        h = F.relu(h)
        h = self.conv2(g, h)
        return h
    
def train(g, model, num_epoch=100, learning_rate=0.002, num_limit=5):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    all_logits = list()
    best_test_acc = 0.0
    best_val_acc = 0.0
    
    features = g.ndata["feat"]
    labels = g.ndata["label"]
    train_mask = g.ndata["train_mask"]
    test_mask = g.ndata["test_mask"]
    val_mask = g.ndata["val_mask"]
    
    for e in range(num_epoch):
        logits = model(g, features)
        
        pred = logits.argmax(dim=1)
        
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        
        train_acc = (pred[train_mask]==labels[train_mask]).float().mean()
        test_acc = (pred[test_mask]==labels[test_mask]).float().mean()
        val_acc = (pred[val_mask]==labels[val_mask]).float().mean()
        
        if best_val_acc < val_acc:
            best_test_acc = test_acc
            best_val_acc = val_acc
            
        # backward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        all_logits.append(logits.detach())
        if e % num_limit == 0:
            print('In epoch {}, loss: {:.3f}, val acc: {:.3f} (best {:.3f}), test acc: {:.3f} (best {:.3f})'.format(
                e, loss, val_acc, best_val_acc, test_acc, best_test_acc))

def main():
    dataset = CoraGraphDataset()
    g = dataset[0]
    in_feats = g.ndata["feat"].shape[1]
    h_feats = 16
    num_classes = dataset.num_classes
    
    model = Model(in_feats, h_feats, num_classes)
    train(g, model, num_epoch=300, num_limit=20)

if __name__ == "__main__":
    main()
  NumNodes: 2708
  NumEdges: 10556
  NumFeats: 1433
  NumClasses: 7
  NumTrainingSamples: 140
  NumValidationSamples: 500
  NumTestSamples: 1000
Done loading data from cached files.
In epoch 0, loss: 1.952, val acc: 0.162 (best 0.162), test acc: 0.149 (best 0.149)
In epoch 20, loss: 1.874, val acc: 0.212 (best 0.212), test acc: 0.201 (best 0.201)
In epoch 40, loss: 1.722, val acc: 0.664 (best 0.664), test acc: 0.642 (best 0.642)
In epoch 60, loss: 1.484, val acc: 0.682 (best 0.682), test acc: 0.674 (best 0.672)
In epoch 80, loss: 1.184, val acc: 0.724 (best 0.724), test acc: 0.713 (best 0.713)
In epoch 100, loss: 0.874, val acc: 0.758 (best 0.760), test acc: 0.739 (best 0.730)
In epoch 120, loss: 0.607, val acc: 0.758 (best 0.764), test acc: 0.754 (best 0.749)
In epoch 140, loss: 0.407, val acc: 0.766 (best 0.768), test acc: 0.761 (best 0.751)
In epoch 160, loss: 0.274, val acc: 0.770 (best 0.770), test acc: 0.769 (best 0.764)
In epoch 180, loss: 0.190, val acc: 0.770 (best 0.770), test acc: 0.765 (best 0.764)
In epoch 200, loss: 0.137, val acc: 0.770 (best 0.772), test acc: 0.764 (best 0.764)
In epoch 220, loss: 0.103, val acc: 0.766 (best 0.772), test acc: 0.767 (best 0.764)
In epoch 240, loss: 0.080, val acc: 0.764 (best 0.772), test acc: 0.772 (best 0.764)
In epoch 260, loss: 0.063, val acc: 0.766 (best 0.772), test acc: 0.774 (best 0.764)
In epoch 280, loss: 0.052, val acc: 0.768 (best 0.772), test acc: 0.775 (best 0.764)

SAGEConv with Edge Weights

The dgl.function package provides many built-in message and reduce functions; you can find more details in the API documentation.

These APIs allow new graph convolution modules to be implemented quickly. For example, the following implements a new SAGEConv that aggregates neighbor representations with a weighted average. Note that edata can hold edge features, which can also take part in message passing.

import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.function as fn
from dgl.data import CoraGraphDataset

class WeightedSAGEConv(nn.Module):
    """Graph convolution module used by the GraphSAGE model with edge weights.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(WeightedSAGEConv, self).__init__()
        # A linear submodule for projecting the input and neighbor feature to the output.
        self.linear = nn.Linear(in_feat * 2, out_feat)

    def forward(self, g, h, w):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        h : Tensor
            The input node feature.
        w : Tensor
            The edge weight.
        """
        with g.local_scope():
            g.ndata['h'] = h
            g.edata['w'] = w
            g.update_all(message_func=fn.u_mul_e('h', 'w', 'm'), reduce_func=fn.mean('m', 'h_N'))
            h_N = g.ndata['h_N']
            h_total = torch.cat([h, h_N], dim=1)
            return self.linear(h_total)

Because the edges in CoraGraphDataset have no weights, we manually assign every edge a weight of 1 (any other value could be used as well); see the sketch after the Model definition for non-uniform alternatives.

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = WeightedSAGEConv(in_feats, h_feats)
        self.conv2 = WeightedSAGEConv(h_feats, num_classes)

    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat, torch.ones(g.num_edges()).to(g.device))
        h = F.relu(h)
        h = self.conv2(g, h, torch.ones(g.num_edges()).to(g.device))
        return h
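
The uniform weights above are only a placeholder. As a sketch (not part of the original tutorial), any per-edge tensor can be passed instead; for example, a GCN-style degree normalization:

import dgl
import torch

g = dgl.rand_graph(50, 200)  # toy graph: 50 nodes, 200 edges
feat = torch.randn(50, 8)
conv = WeightedSAGEConv(8, 4)

# Hypothetical alternative weights: symmetric degree normalization.
src, dst = g.edges()
deg = g.in_degrees().float().clamp(min=1)
w = (deg[src] * deg[dst]).rsqrt()  # one weight per edge, shape (E,)
out = conv(g, feat, w)  # shape: (50, 4)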

Full Code

import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.function as fn
from dgl.data import CoraGraphDataset

class WeightedSAGEConv(nn.Module):
    """Graph convolution module used by the GraphSAGE model with edge weights.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(WeightedSAGEConv, self).__init__()
        # A linear submodule for projecting the input and neighbor feature to the output.
        self.linear = nn.Linear(in_feat * 2, out_feat)

    def forward(self, g, h, w):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        h : Tensor
            The input node feature.
        w : Tensor
            The edge weight.
        """
        with g.local_scope():
            g.ndata['h'] = h
            g.edata['w'] = w
            g.update_all(message_func=fn.u_mul_e('h', 'w', 'm'), reduce_func=fn.mean('m', 'h_N'))
            h_N = g.ndata['h_N']
            h_total = torch.cat([h, h_N], dim=1)
            return self.linear(h_total)
        
class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = WeightedSAGEConv(in_feats, h_feats)
        self.conv2 = WeightedSAGEConv(h_feats, num_classes)

    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat, torch.ones(g.num_edges()).to(g.device))
        h = F.relu(h)
        h = self.conv2(g, h, torch.ones(g.num_edges()).to(g.device))
        return h
    
def train(g, model, num_epoch=100, learning_rate=0.002, num_limit=5):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    all_logits = list()
    best_test_acc = 0.0
    best_val_acc = 0.0
    
    features = g.ndata["feat"]
    labels = g.ndata["label"]
    train_mask = g.ndata["train_mask"]
    test_mask = g.ndata["test_mask"]
    val_mask = g.ndata["val_mask"]
    
    for e in range(num_epoch):
        logits = model(g, features)
        
        pred = logits.argmax(dim=1)
        
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        
        train_acc = (pred[train_mask]==labels[train_mask]).float().mean()
        test_acc = (pred[test_mask]==labels[test_mask]).float().mean()
        val_acc = (pred[val_mask]==labels[val_mask]).float().mean()
        
        if best_val_acc < val_acc:
            best_test_acc = test_acc
            best_val_acc = val_acc
            
        # backward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        all_logits.append(logits.detach())
        if e % num_limit == 0:
            print('In epoch {}, loss: {:.3f}, val acc: {:.3f} (best {:.3f}), test acc: {:.3f} (best {:.3f})'.format(
                e, loss, val_acc, best_val_acc, test_acc, best_test_acc))

def main():
    dataset = CoraGraphDataset()
    g = dataset[0]
    in_feats = g.ndata['feat'].shape[1]
    h_feats = 16
    num_classes = dataset.num_classes
    
    model = Model(in_feats, h_feats, num_classes)
    train(g, model, num_epoch=300, num_limit=20)

if __name__ == "__main__":
    main()
  NumNodes: 2708
  NumEdges: 10556
  NumFeats: 1433
  NumClasses: 7
  NumTrainingSamples: 140
  NumValidationSamples: 500
  NumTestSamples: 1000
Done loading data from cached files.
In epoch 0, loss: 1.954, val acc: 0.116 (best 0.116), test acc: 0.104 (best 0.104)
In epoch 20, loss: 1.884, val acc: 0.208 (best 0.210), test acc: 0.224 (best 0.225)
In epoch 40, loss: 1.741, val acc: 0.482 (best 0.482), test acc: 0.479 (best 0.479)
In epoch 60, loss: 1.512, val acc: 0.654 (best 0.654), test acc: 0.667 (best 0.667)
In epoch 80, loss: 1.216, val acc: 0.722 (best 0.722), test acc: 0.721 (best 0.721)
In epoch 100, loss: 0.905, val acc: 0.728 (best 0.728), test acc: 0.741 (best 0.740)
In epoch 120, loss: 0.634, val acc: 0.730 (best 0.730), test acc: 0.749 (best 0.749)
In epoch 140, loss: 0.432, val acc: 0.728 (best 0.730), test acc: 0.747 (best 0.749)
In epoch 160, loss: 0.295, val acc: 0.732 (best 0.732), test acc: 0.755 (best 0.753)
In epoch 180, loss: 0.206, val acc: 0.738 (best 0.738), test acc: 0.757 (best 0.757)
In epoch 200, loss: 0.149, val acc: 0.736 (best 0.738), test acc: 0.758 (best 0.757)
In epoch 220, loss: 0.112, val acc: 0.738 (best 0.738), test acc: 0.759 (best 0.757)
In epoch 240, loss: 0.087, val acc: 0.742 (best 0.742), test acc: 0.760 (best 0.760)
In epoch 260, loss: 0.069, val acc: 0.744 (best 0.744), test acc: 0.759 (best 0.760)
In epoch 280, loss: 0.057, val acc: 0.744 (best 0.746), test acc: 0.757 (best 0.759)

User-Defined Functions

DGL allows user-defined message and reduce functions for maximal expressiveness. For example, the following user-defined message function is equivalent to fn.u_mul_e('h', 'w', 'm'):

def u_mul_e_udf(edges):
    return {"m": edges.src["h"] * edges.data["w"]}

The edges argument has three members: src, data, and dst, which hold the source node features, the edge features, and the destination node features of all edges, respectively.

We can also write our own reduce function. For example, the following is equivalent to the built-in fn.sum('m', 'h'), which sums up the incoming messages:

def sum_udf(nodes):
    # nodes.mailbox["m"] has shape (num_nodes, num_messages, feat_dim);
    # summing over dim=1 aggregates the incoming messages for each node.
    return {"h": nodes.mailbox["m"].sum(dim=1)}

In short, DGL groups the nodes by their in-degrees; for each group, DGL stacks the incoming messages along the second dimension and then reduces along that dimension to aggregate them.
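
As a sketch (not from the original tutorial), we can check on a toy graph that this pair of user-defined functions matches the built-ins:

import dgl
import torch
import dgl.function as fn

g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
h0, w0 = torch.randn(3, 4), torch.rand(3, 1)

with g.local_scope():
    g.ndata["h"], g.edata["w"] = h0, w0
    g.update_all(u_mul_e_udf, sum_udf)  # user-defined pair
    h_udf = g.ndata["h"]

with g.local_scope():
    g.ndata["h"], g.edata["w"] = h0, w0
    g.update_all(fn.u_mul_e("h", "w", "m"), fn.sum("m", "h"))  # built-ins
    h_builtin = g.ndata["h"]

assert torch.allclose(h_udf, h_builtin)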

For more details on customizing message and reduce functions with user-defined functions, please refer to the API documentation.

Best Practices for Customizing GNN Modules

DGL recommends the following approaches, ordered from highest to lowest priority (a short example of option 2 follows this list):

  1. Call the dgl.nn modules directly
  2. Use the dgl.nn.functional built-ins
  3. Use update_all with built-in message and reduce functions
  4. Use user-defined message and reduce functions
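
For example, dgl.nn.functional provides edge_softmax, which normalizes a per-edge score over each destination node's incoming edges (a minimal sketch with random scores):

import dgl
import torch
from dgl.nn.functional import edge_softmax

g = dgl.rand_graph(10, 30)
scores = torch.randn(g.num_edges(), 1)
attn = edge_softmax(g, scores)  # per destination node, incoming weights sum to 1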

References

This article is translated and adapted from the DGL tutorial Write your own GNN module.
