# A Dataset Class Whose Data Lives Entirely in Memory

## Introduction

In the previous section we studied node representation learning with graph neural networks and used a small, readily available dataset for a node classification task. In the first half of this Section 6, we will learn how to define a custom PyG dataset class whose data is stored entirely in memory.
## The `InMemoryDataset` Base Class

In PyG, we define a dataset class whose data can be stored entirely in memory by inheriting from the `InMemoryDataset` class.
```python
class InMemoryDataset(root: Optional[str] = None,
                      transform: Optional[Callable] = None,
                      pre_transform: Optional[Callable] = None,
                      pre_filter: Optional[Callable] = None)
```
Official documentation for `InMemoryDataset`: torch_geometric.data.InMemoryDataset
As the constructor interface of `InMemoryDataset` above shows, every dataset needs a root folder (`root`) indicating where the dataset should be stored. Under the root directory there are at least two folders:

- a `raw_dir` folder, which stores the unprocessed files; dataset files downloaded from the web are placed here;
- a `processed_dir` folder, where the processed dataset is saved.
In addition, each dataset class inheriting from `InMemoryDataset` can be passed a `transform` function, a `pre_transform` function, and a `pre_filter` function, all of which default to `None`.

- The `transform` function takes a `Data` object, transforms it, and returns it. It is called on every data access, so it should be used for data augmentation.
- The `pre_transform` function takes a `Data` object, transforms it, and returns it. It is called before sample `Data` objects are saved to file, so it is best used for heavy precomputation that only needs to run once.
- The `pre_filter` function can manually filter out data objects before they are saved. One use case is filtering samples by class.
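As a concrete illustration, here is a minimal sketch of what these three callables might look like. The function names and the specific augmentation, normalization, and filtering choices are hypothetical; we only assume `Data` objects carrying `x` (node features) and `y` (a label):

```python
import torch


def my_transform(data):
    # Called on every access: suited to data augmentation, e.g. adding
    # a little Gaussian noise to the node features (hypothetical choice).
    data.x = data.x + 0.01 * torch.randn_like(data.x)
    return data


def my_pre_transform(data):
    # Called once before samples are saved to disk: suited to heavy,
    # one-off precomputation, e.g. row-normalizing the node features.
    data.x = data.x / data.x.sum(dim=-1, keepdim=True).clamp(min=1e-12)
    return data


def my_pre_filter(data):
    # Called before saving: keep only samples whose label is 0 or 1.
    return int(data.y) in (0, 1)
```

These would be passed to a dataset constructor (such as the `MyOwnDataset` skeleton below) as `transform=my_transform`, `pre_transform=my_pre_transform`, and `pre_filter=my_pre_filter`.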
To create an `InMemoryDataset`, we need to implement four fundamental methods:

- `raw_file_names()`: a property method returning a list of file names. The files are expected to exist in the `raw_dir` folder; otherwise the `download()` method is called to fetch the files into `raw_dir`.
- `processed_file_names()`: a property method returning a list of file names. The files are expected to exist in the `processed_dir` folder; otherwise the `process()` method is called to preprocess the samples and save them into `processed_dir`.
- `download()`: downloads the raw data files into the `raw_dir` folder.
- `process()`: preprocesses the samples and saves them into the `processed_dir` folder.

The skeleton below, adapted from the PyG documentation, shows a typical implementation:
```python
import torch
from torch_geometric.data import InMemoryDataset, download_url


class MyOwnDataset(InMemoryDataset):
    def __init__(self, root, transform=None, pre_transform=None, pre_filter=None):
        super().__init__(root=root, transform=transform, pre_transform=pre_transform, pre_filter=pre_filter)
        self.data, self.slices = torch.load(self.processed_paths[0])

    @property
    def raw_file_names(self):
        return ['some_file_1', 'some_file_2', ...]

    @property
    def processed_file_names(self):
        return ['data.pt']

    def download(self):
        # Download to `self.raw_dir`.
        download_url(url, self.raw_dir)
        ...

    def process(self):
        # Read data into huge `Data` list.
        data_list = [...]

        if self.pre_filter is not None:
            data_list = [data for data in data_list if self.pre_filter(data)]

        if self.pre_transform is not None:
            data_list = [self.pre_transform(data) for data in data_list]

        data, slices = self.collate(data_list)
        torch.save((data, slices), self.processed_paths[0])
```
The conversion of samples from raw files into `Data` objects is defined in the `process` method. There we typically need to read and create a list of `Data` objects and save it into `processed_dir`. Because saving a huge Python list is quite slow, we collate the list into one huge `Data` object with the `collate()` method before saving. This method also returns a slices dictionary, which makes it possible to reconstruct individual samples from the collated object. Finally, in the constructor we load this `Data` object and the slices dictionary into the attributes `self.data` and `self.slices`, respectively. The example below walks through what the program does when an object of an `InMemoryDataset` subclass is instantiated.
## Defining an `InMemoryDataset` Subclass

Since we have no real application dataset at hand, we take the public `PubMed` dataset as our example. `PubMed` stores a citation network of scientific articles: articles correspond to the nodes of the graph, and an edge connects two articles whenever one cites the other (in either direction). The dataset comes from the paper Revisiting Semi-Supervised Learning with Graph Embeddings. The `PlanetoidPubMed` dataset class below is obtained by directly modifying PyG's `Planetoid` class.
```python
import os.path as osp

import torch
from torch_geometric.data import InMemoryDataset, download_url
from torch_geometric.io import read_planetoid_data


class PlanetoidPubMed(InMemoryDataset):
    r"""The citation network dataset "PubMed" from the
    `"Revisiting Semi-Supervised Learning with Graph Embeddings"
    <https://arxiv.org/abs/1603.08861>`_ paper.
    Nodes represent documents and edges represent citation links.
    Training, validation and test splits are given by binary masks.

    Args:
        root (string): Root directory where the dataset should be saved.
        split (string): The type of dataset split
            (:obj:`"public"`, :obj:`"full"`, :obj:`"random"`).
            If set to :obj:`"public"`, the split will be the public fixed split
            from the
            `"Revisiting Semi-Supervised Learning with Graph Embeddings"
            <https://arxiv.org/abs/1603.08861>`_ paper.
            If set to :obj:`"full"`, all nodes except those in the validation
            and test sets will be used for training (as in the
            `"FastGCN: Fast Learning with Graph Convolutional Networks via
            Importance Sampling" <https://arxiv.org/abs/1801.10247>`_ paper).
            If set to :obj:`"random"`, train, validation, and test sets will be
            randomly generated, according to :obj:`num_train_per_class`,
            :obj:`num_val` and :obj:`num_test`. (default: :obj:`"public"`)
        num_train_per_class (int, optional): The number of training samples
            per class in case of :obj:`"random"` split. (default: :obj:`20`)
        num_val (int, optional): The number of validation samples in case of
            :obj:`"random"` split. (default: :obj:`500`)
        num_test (int, optional): The number of test samples in case of
            :obj:`"random"` split. (default: :obj:`1000`)
        transform (callable, optional): A function/transform that takes in an
            :obj:`torch_geometric.data.Data` object and returns a transformed
            version. The data object will be transformed before every access.
            (default: :obj:`None`)
        pre_transform (callable, optional): A function/transform that takes in
            an :obj:`torch_geometric.data.Data` object and returns a
            transformed version. The data object will be transformed before
            being saved to disk. (default: :obj:`None`)
    """

    url = 'https://github.com/kimiyoung/planetoid/raw/master/data'

    def __init__(self, root, split="public", num_train_per_class=20,
                 num_val=500, num_test=1000, transform=None,
                 pre_transform=None):
        super(PlanetoidPubMed, self).__init__(root, transform, pre_transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

        self.split = split
        assert self.split in ['public', 'full', 'random']

        if split == 'full':
            data = self.get(0)
            data.train_mask.fill_(True)
            data.train_mask[data.val_mask | data.test_mask] = False
            self.data, self.slices = self.collate([data])
        elif split == 'random':
            data = self.get(0)
            data.train_mask.fill_(False)
            for c in range(self.num_classes):
                idx = (data.y == c).nonzero(as_tuple=False).view(-1)
                idx = idx[torch.randperm(idx.size(0))[:num_train_per_class]]
                data.train_mask[idx] = True

            remaining = (~data.train_mask).nonzero(as_tuple=False).view(-1)
            remaining = remaining[torch.randperm(remaining.size(0))]

            data.val_mask.fill_(False)
            data.val_mask[remaining[:num_val]] = True

            data.test_mask.fill_(False)
            data.test_mask[remaining[num_val:num_val + num_test]] = True

            self.data, self.slices = self.collate([data])

    @property
    def raw_dir(self):
        return osp.join(self.root, 'raw')

    @property
    def processed_dir(self):
        return osp.join(self.root, 'processed')

    @property
    def raw_file_names(self):
        names = ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']
        return ['ind.pubmed.{}'.format(name) for name in names]

    @property
    def processed_file_names(self):
        return 'data.pt'

    def download(self):
        for name in self.raw_file_names:
            download_url('{}/{}'.format(self.url, name), self.raw_dir)

    def process(self):
        data = read_planetoid_data(self.raw_dir, 'pubmed')
        data = data if self.pre_transform is None else self.pre_transform(data)
        torch.save(self.collate([data]), self.processed_paths[0])

    def __repr__(self):
        # Unlike Planetoid, this class defines no `name` attribute,
        # so we return the dataset name directly.
        return 'PubMed()'
```
When we instantiate an object of the `PlanetoidPubMed` class, the program proceeds as follows:

- First, it checks whether the raw data files have been downloaded:
  - It checks whether each file returned by the `raw_file_names()` property method exists in the `self.raw_dir` directory;
  - if any file is missing, it calls the `download()` method to download the raw files.
  - Here `self.raw_dir` is `osp.join(self.root, 'raw')`.
- Second, it checks whether the data has already been processed:
  - It first checks the data transform used previously: if a `pre_transform.pt` file exists in the `self.processed_dir` directory, a transform was applied before, so the file is loaded to recover that transform and compare it with the one specified by the current `pre_transform` argument; if they differ, a warning is raised, "The pre_transform argument differs from the one used in ...".
  - It then checks the sample filter used previously: if a `pre_filter.pt` file exists in the `self.processed_dir` directory, samples were filtered before, so the file is loaded to recover that filter and compare it with the one specified by the current `pre_filter` argument; if they differ, a warning is raised, "The pre_filter argument differs from the one used in ...". Here `self.processed_dir` is `osp.join(self.root, 'processed')`.
  - It then checks whether processed data already exists: it checks whether every file returned by `self.processed_paths` exists in the `self.processed_dir` directory. If any file is missing, no processed samples have been saved yet, and the following steps are executed:
    - The `process` method is called to process the data.
    - If the `pre_transform` argument is not `None`, the `pre_transform` function is applied.
    - If the `pre_filter` argument is not `None`, the samples are filtered (not needed in this example, where `pre_filter` remains `None`).
    - The processed data is saved to file, at the paths returned by the `processed_paths()` property method. If the data is stored across multiple files, multiple paths are returned; they all live under the `self.processed_dir` directory, with file names given by the return value of `processed_file_names()`.
    - Finally, new `pre_transform.pt` and `pre_filter.pt` files are saved, recording the data transform and the sample filter currently in use.
Now let us inspect the dataset:
```python
dataset = PlanetoidPubMed('../dataset/Planetoid/PubMed')
print(dataset.num_classes)
print(dataset[0].num_nodes)
print(dataset[0].num_edges)
print(dataset[0].num_features)

# 3
# 19717
# 88648
# 500
```
We can see that this dataset has three node classes, 19,717 nodes in total, 88,648 edges, and 500-dimensional node features.
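As a quick, hedged sketch of the `split` options documented in the class above (reusing the same placeholder path), the `"random"` split can be exercised like this:

```python
# Hypothetical usage of the "random" split implemented in PlanetoidPubMed above.
dataset = PlanetoidPubMed('../dataset/Planetoid/PubMed', split='random',
                          num_train_per_class=20, num_val=500, num_test=1000)
data = dataset[0]
print(int(data.train_mask.sum()))  # 20 per class × 3 classes = 60
print(int(data.val_mask.sum()))    # 500
print(int(data.test_mask.sum()))   # 1000
```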
## References

- Official documentation for `InMemoryDataset`: torch_geometric.data.InMemoryDataset
- Official documentation for `Data`: torch_geometric.data.Data
- The paper that introduced the PubMed dataset: Revisiting Semi-Supervised Learning with Graph Embeddings
- Official documentation for `Planetoid`: torch_geometric.datasets.Planetoid (pytorch-geometric.readthedocs.io)
# Node Prediction and Edge Prediction in Practice

## Introduction

In the first half of this section we learned how to define a custom PyG dataset class whose data is held entirely in memory. In this second half we put node prediction and edge prediction tasks into practice. By working through this chapter in full, you should come away able to tackle real-world node prediction and edge prediction problems.
## Node Prediction Task in Practice

Earlier we introduced a neural network composed of two `GATConv` layers. We now redefine a GAT network so that the number of `GATConv` layers and the `out_channels` of each layer can be configured via parameters. Our model is defined as follows:
```python
import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU
from torch_geometric.nn import GATConv, Sequential


class GAT(torch.nn.Module):
    def __init__(self, num_features, hidden_channels_list, num_classes):
        super(GAT, self).__init__()
        torch.manual_seed(12345)
        hns = [num_features] + hidden_channels_list
        conv_list = []
        for idx in range(len(hidden_channels_list)):
            conv_list.append((GATConv(hns[idx], hns[idx + 1]), 'x, edge_index -> x'))
            conv_list.append(ReLU(inplace=True))

        self.convseq = Sequential('x, edge_index', conv_list)
        self.linear = Linear(hidden_channels_list[-1], num_classes)

    def forward(self, x, edge_index):
        x = self.convseq(x, edge_index)
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.linear(x)
        return x
```
Since our network consists of multiple `GATConv` layers chained in sequence, we use the `torch_geometric.nn.Sequential` container; see the official documentation for details, and the minimal sketch below.
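For reference, here is a minimal sketch of the container's interface, adapted from the pattern in the official documentation; the channel sizes are arbitrary. The first argument declares the input signature, each graph convolution is paired with a string mapping its inputs to its output, and plain modules such as `ReLU` simply act on the previous output:

```python
from torch.nn import Linear, ReLU
from torch_geometric.nn import GCNConv, Sequential

model = Sequential('x, edge_index', [
    (GCNConv(16, 64), 'x, edge_index -> x'),  # consumes (x, edge_index), produces a new x
    ReLU(inplace=True),                       # applied to the previous output
    (GCNConv(64, 64), 'x, edge_index -> x'),
    ReLU(inplace=True),
    Linear(64, 7),
])
```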
The full code can be found in `codes/node_classification.py`.
## Edge Prediction Task in Practice

Edge prediction is the task of predicting whether an edge exists between two nodes. Given a graph dataset, we have the node feature matrix `x` and the connectivity information `edge_index`. What `edge_index` stores are precisely the positive samples; to set up an edge prediction task we also need to generate negative samples, i.e. sample node pairs with no edge between them as negative edges, keeping positives and negatives balanced. The samples must additionally be divided into training, validation, and test sets.

PyG provides a ready-made method for this, `train_test_split_edges(data, val_ratio=0.05, test_ratio=0.1)`, whose first argument is a `torch_geometric.data.Data` object, second argument is the fraction of edges used for validation, and third argument is the fraction used for testing. The function automatically samples negative edges and splits the positive and negative samples into training, validation, and test sets. It replaces the `edge_index` attribute with the attributes `train_pos_edge_index`, `train_neg_adj_mask`, `val_pos_edge_index`, `val_neg_edge_index`, `test_pos_edge_index`, and `test_neg_edge_index`. Note that `train_neg_adj_mask` has a different format from the other attributes; in fact it is never used afterwards, and we will still need to sample training negative edges ourselves later.
Below we use the Cora dataset as an example to walk through the edge prediction task. First, obtain and inspect the dataset:
```python
import os.path as osp

import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.utils import negative_sampling, train_test_split_edges

dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
data = dataset[0]
# Print the number of directed edges before splitting;
# train_test_split_edges replaces the edge_index attribute.
print(data.edge_index.shape)
# torch.Size([2, 10556])

data.train_mask = data.val_mask = data.test_mask = data.y = None
data = train_test_split_edges(data)

for key in data.keys:
    print(key, getattr(data, key).shape)

# x torch.Size([2708, 1433])
# val_pos_edge_index torch.Size([2, 263])
# test_pos_edge_index torch.Size([2, 527])
# train_pos_edge_index torch.Size([2, 8976])
# train_neg_adj_mask torch.Size([2708, 2708])
# val_neg_edge_index torch.Size([2, 263])
# test_neg_edge_index torch.Size([2, 527])
# 263 + 527 + 8976 = 9766 != 10556
# 263 + 527 + 8976 / 2 = 5278 = 10556 / 2
```
We observe that the positive edges in the three splits do not sum to the original edge count. This is because the original count tallies directed edges (each undirected edge counted in both directions): for validation and test positives, prediction accuracy only needs to be measured on one direction of each edge, since measuring the reverse direction would be a duplicate, whereas the training set still keeps both directions (strictly speaking, it does not have to; editor's note). A quick sanity check of this bookkeeping follows.
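```python
# Numbers taken from the printout above.
num_directed_edges = 10556                 # edge_index counts each edge in both directions
val_pos, test_pos, train_pos = 263, 527, 8976

# val/test keep one direction per edge; the training set keeps both.
assert val_pos + test_pos + train_pos // 2 == num_directed_edges // 2  # 5278
```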
Next, we build the neural network model:
```python
import torch
from torch_geometric.nn import GCNConv


class Net(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super(Net, self).__init__()
        self.conv1 = GCNConv(in_channels, 128)
        self.conv2 = GCNConv(128, out_channels)

    def encode(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        return self.conv2(x, edge_index)

    def decode(self, z, pos_edge_index, neg_edge_index):
        edge_index = torch.cat([pos_edge_index, neg_edge_index], dim=-1)
        return (z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1)

    def decode_all(self, z):
        prob_adj = z @ z.t()
        return (prob_adj > 0).nonzero(as_tuple=False).t()
```
A neural network for edge prediction consists of two main parts. The first is encoding (`encode`), which generates node representations exactly as introduced earlier. The second is decoding (`decode`), which takes the representations of an edge's two endpoint nodes and produces the logits (log-odds) that the edge exists. `decode_all(self, z)` is used at inference time, when we predict the existence probability of an edge for every pair of input nodes; a small sketch of the evaluation-time flow follows.
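As a small sketch of how these pieces fit together at evaluation time (`model`, `data`, and a sampled `neg_edge_index` are assumed from the surrounding code), the logits produced by `decode` are squashed with a sigmoid to obtain edge probabilities:

```python
# Encode all nodes using only the positive training edges, then score
# candidate edges (positives followed by negatives).
z = model.encode(data.x, data.train_pos_edge_index)
link_logits = model.decode(z, data.train_pos_edge_index, neg_edge_index)
link_probs = link_logits.sigmoid()  # probability that each candidate edge exists
```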
### Defining the Training Process for One Epoch
```python
import torch
import torch.nn.functional as F


def get_link_labels(pos_edge_index, neg_edge_index):
    num_links = pos_edge_index.size(1) + neg_edge_index.size(1)
    link_labels = torch.zeros(num_links, dtype=torch.float)
    link_labels[:pos_edge_index.size(1)] = 1.
    return link_labels


def train(data, model, optimizer):
    model.train()

    neg_edge_index = negative_sampling(
        edge_index=data.train_pos_edge_index,
        num_nodes=data.num_nodes,
        num_neg_samples=data.train_pos_edge_index.size(1))

    optimizer.zero_grad()
    z = model.encode(data.x, data.train_pos_edge_index)
    link_logits = model.decode(z, data.train_pos_edge_index, neg_edge_index)
    link_labels = get_link_labels(data.train_pos_edge_index, neg_edge_index).to(data.x.device)
    loss = F.binary_cross_entropy_with_logits(link_logits, link_labels)
    loss.backward()
    optimizer.step()

    return loss
```
On a graph, node pairs connected by an edge are usually far fewer than node pairs that are not. For class balance, each training epoch only needs as many negative samples as there are positive ones. For both of these reasons, in every epoch we sample exactly as many negative edges as positive ones; this keeps the classes balanced and, at the same time, increases the diversity of negative training samples. The `get_link_labels` function generates the labels for the full training set. When sampling negatives we pass `train_pos_edge_index` as the argument, so the `negative_sampling` function only samples among node pairs that have no edge in the training set.

During training only the training set should be visible, with the validation and test sets hidden; yet at this stage we must still produce encodings for all nodes. We therefore assume here that the positive training edges touch every node, which makes encoding all nodes possible.
### Defining the Validation and Test Process for One Epoch
```python
from sklearn.metrics import roc_auc_score


@torch.no_grad()
def test(data, model):
    model.eval()

    z = model.encode(data.x, data.train_pos_edge_index)

    results = []
    for prefix in ['val', 'test']:
        pos_edge_index = data[f'{prefix}_pos_edge_index']
        neg_edge_index = data[f'{prefix}_neg_edge_index']
        link_logits = model.decode(z, pos_edge_index, neg_edge_index)
        link_probs = link_logits.sigmoid()
        link_labels = get_link_labels(pos_edge_index, neg_edge_index)
        results.append(roc_auc_score(link_labels.cpu(), link_probs.cpu()))
    return results
```
Note that during validation and testing we still encode node features using only the positive training edges.
### Running the Full Training, Validation, and Testing Loop
```python
def main():
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    dataset = 'Cora'
    path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
    dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
    data = dataset[0]
    ground_truth_edge_index = data.edge_index.to(device)
    data.train_mask = data.val_mask = data.test_mask = data.y = None
    data = train_test_split_edges(data)
    data = data.to(device)

    model = Net(dataset.num_features, 64).to(device)
    optimizer = torch.optim.Adam(params=model.parameters(), lr=0.01)

    best_val_auc = test_auc = 0
    for epoch in range(1, 101):
        loss = train(data, model, optimizer)
        val_auc, tmp_test_auc = test(data, model)
        if val_auc > best_val_auc:
            best_val_auc = val_auc
            test_auc = tmp_test_auc
        print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, Val: {val_auc:.4f}, '
              f'Test: {test_auc:.4f}')

    z = model.encode(data.x, data.train_pos_edge_index)
    final_edge_index = model.decode_all(z)


if __name__ == "__main__":
    main()
```
The full code can be found in `codes/link_prediction.py`.
## Closing Remarks

In this article we introduced how to define a dataset class whose data can be held entirely in memory, and put both a node prediction task and an edge prediction task into practice. **Pay particular attention to the execution flow of an `InMemoryDataset` subclass and to the conventions for implementing its four methods.** When implementing a graph neural network, the `torch_geometric.nn.Sequential` container can be used to chain multiple modules in sequence.
## Exercises

- Exercise 1: For the node prediction task, try different PyG network layers in place of `GCNConv`, as well as different numbers of layers and different `out_channels`.

Node classification with `GCNConv`:
```python
import os.path as osp

import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU
from torch_geometric.data import InMemoryDataset, download_url
from torch_geometric.io import read_planetoid_data
from torch_geometric.transforms import NormalizeFeatures
from torch_geometric.nn import GCNConv, Sequential


class PlanetoidPubMed(InMemoryDataset):
    """Nodes represent documents; edges represent citation links.
    Training, validation and test splits are given by binary masks.

    Args:
        root (string): Path of the folder where the dataset is stored.
        transform (callable, optional): Data transform function,
            called on every data access.
        pre_transform (callable, optional): Data transform function,
            called before the data is saved to file.
    """

    url = 'https://github.com/kimiyoung/planetoid/raw/master/data'

    def __init__(self, root, split="public", num_train_per_class=20,
                 num_val=500, num_test=1000, transform=None,
                 pre_transform=None):
        super(PlanetoidPubMed, self).__init__(root, transform, pre_transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

        self.split = split
        assert self.split in ['public', 'full', 'random']

        if split == 'full':
            data = self.get(0)
            data.train_mask.fill_(True)
            data.train_mask[data.val_mask | data.test_mask] = False
            self.data, self.slices = self.collate([data])
        elif split == 'random':
            data = self.get(0)
            data.train_mask.fill_(False)
            for c in range(self.num_classes):
                idx = (data.y == c).nonzero(as_tuple=False).view(-1)
                idx = idx[torch.randperm(idx.size(0))[:num_train_per_class]]
                data.train_mask[idx] = True

            remaining = (~data.train_mask).nonzero(as_tuple=False).view(-1)
            remaining = remaining[torch.randperm(remaining.size(0))]

            data.val_mask.fill_(False)
            data.val_mask[remaining[:num_val]] = True

            data.test_mask.fill_(False)
            data.test_mask[remaining[num_val:num_val + num_test]] = True

            self.data, self.slices = self.collate([data])

    @property
    def raw_dir(self):
        return osp.join(self.root, 'raw')

    @property
    def processed_dir(self):
        return osp.join(self.root, 'processed')

    @property
    def raw_file_names(self):
        names = ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']
        return ['ind.pubmed.{}'.format(name) for name in names]

    @property
    def processed_file_names(self):
        return 'data.pt'

    def download(self):
        for name in self.raw_file_names:
            download_url('{}/{}'.format(self.url, name), self.raw_dir)

    def process(self):
        data = read_planetoid_data(self.raw_dir, 'pubmed')
        data = data if self.pre_transform is None else self.pre_transform(data)
        torch.save(self.collate([data]), self.processed_paths[0])

    def __repr__(self):
        return 'PubMed()'


class GCN(torch.nn.Module):
    def __init__(self, num_features, hidden_channels_list, num_classes):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        hns = [num_features] + hidden_channels_list
        conv_list = []
        for idx in range(len(hidden_channels_list)):
            conv_list.append((GCNConv(hns[idx], hns[idx + 1]), 'x, edge_index -> x'))
            conv_list.append(ReLU(inplace=True))  # inplace=True overwrites the input tensor with the result

        self.convseq = Sequential('x, edge_index', conv_list)
        self.linear = Linear(hidden_channels_list[-1], num_classes)

    def forward(self, x, edge_index):
        x = self.convseq(x, edge_index)
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.linear(x)
        return x


def train(data, model, optimizer, criterion):
    model.train()
    optimizer.zero_grad()  # clear the gradients
    out = model(data.x, data.edge_index)  # one forward pass
    # compute the loss on the training nodes only
    loss = criterion(out[data.train_mask], data.y[data.train_mask])
    loss.backward()  # backpropagation
    optimizer.step()  # update all parameters from the gradients
    return loss


def test(data, model):
    model.eval()
    out = model(data.x, data.edge_index)
    pred = out.argmax(dim=1)  # predict the class with the highest score
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # correctly predicted labels
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # accuracy
    return test_acc


def main():
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    dataset = PlanetoidPubMed('/Dataset/Planetoid/PubMed', transform=NormalizeFeatures())
    # print('data.num_features:', dataset.num_features)
    data = dataset[0].to(device)
    model = GCN(num_features=dataset.num_features, hidden_channels_list=[200, 100],
                num_classes=dataset.num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(1, 201):
        loss = train(data, model, optimizer, criterion)
        # print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
    test_acc = test(data, model)
    print(f'Test Accuracy: {test_acc:.4f}')


if __name__ == "__main__":
    main()
```
Node classification with a different number of `GATConv` layers:
```python
import os.path as osp

import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU
from torch_geometric.data import InMemoryDataset, download_url
from torch_geometric.io import read_planetoid_data
from torch_geometric.transforms import NormalizeFeatures
from torch_geometric.nn import GATConv, Sequential


# We again use our custom dataset class.
class PlanetoidPubMed(InMemoryDataset):
    """Nodes represent documents; edges represent citation links.
    Training, validation and test splits are given by binary masks.

    Args:
        root (string): Path of the folder where the dataset is stored.
        transform (callable, optional): Data transform function,
            called on every data access.
        pre_transform (callable, optional): Data transform function,
            called before the data is saved to file.
    """

    url = 'https://github.com/kimiyoung/planetoid/raw/master/data'

    def __init__(self, root, split="public", num_train_per_class=20,
                 num_val=500, num_test=1000, transform=None,
                 pre_transform=None):
        super(PlanetoidPubMed, self).__init__(root, transform, pre_transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

        self.split = split
        assert self.split in ['public', 'full', 'random']

        if split == 'full':
            data = self.get(0)
            data.train_mask.fill_(True)
            data.train_mask[data.val_mask | data.test_mask] = False
            self.data, self.slices = self.collate([data])
        elif split == 'random':
            data = self.get(0)
            data.train_mask.fill_(False)
            for c in range(self.num_classes):
                idx = (data.y == c).nonzero(as_tuple=False).view(-1)
                idx = idx[torch.randperm(idx.size(0))[:num_train_per_class]]
                data.train_mask[idx] = True

            remaining = (~data.train_mask).nonzero(as_tuple=False).view(-1)
            remaining = remaining[torch.randperm(remaining.size(0))]

            data.val_mask.fill_(False)
            data.val_mask[remaining[:num_val]] = True

            data.test_mask.fill_(False)
            data.test_mask[remaining[num_val:num_val + num_test]] = True

            self.data, self.slices = self.collate([data])

    @property
    def raw_dir(self):
        return osp.join(self.root, 'raw')

    @property
    def processed_dir(self):
        return osp.join(self.root, 'processed')

    @property
    def raw_file_names(self):
        names = ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']
        return ['ind.pubmed.{}'.format(name) for name in names]

    @property
    def processed_file_names(self):
        return 'data.pt'

    def download(self):
        for name in self.raw_file_names:
            download_url('{}/{}'.format(self.url, name), self.raw_dir)

    def process(self):
        data = read_planetoid_data(self.raw_dir, 'pubmed')
        data = data if self.pre_transform is None else self.pre_transform(data)
        torch.save(self.collate([data]), self.processed_paths[0])

    def __repr__(self):
        return 'PubMed()'


class GAT(torch.nn.Module):
    def __init__(self, num_features, hidden_channels_list, num_classes):
        super(GAT, self).__init__()
        torch.manual_seed(12345)
        hns = [num_features] + hidden_channels_list
        conv_list = []
        for idx in range(len(hidden_channels_list)):
            conv_list.append((GATConv(hns[idx], hns[idx + 1]), 'x, edge_index -> x'))
            conv_list.append(ReLU(inplace=True))  # inplace=True overwrites the input tensor with the result

        self.convseq = Sequential('x, edge_index', conv_list)
        self.linear = Linear(hidden_channels_list[-1], num_classes)

    def forward(self, x, edge_index):
        x = self.convseq(x, edge_index)
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.linear(x)
        return x


def train(data, model, optimizer, criterion):
    model.train()
    optimizer.zero_grad()  # clear the gradients
    out = model(data.x, data.edge_index)  # one forward pass
    # compute the loss on the training nodes only
    loss = criterion(out[data.train_mask], data.y[data.train_mask])
    loss.backward()  # backpropagation
    optimizer.step()  # update all parameters from the gradients
    return loss


def test(data, model):
    model.eval()
    out = model(data.x, data.edge_index)
    pred = out.argmax(dim=1)  # predict the class with the highest score
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # correctly predicted labels
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # accuracy
    return test_acc


def main():
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    dataset = PlanetoidPubMed('/Dataset/Planetoid/PubMed', transform=NormalizeFeatures())
    # print('data.num_features:', dataset.num_features)
    data = dataset[0].to(device)
    model = GAT(num_features=dataset.num_features, hidden_channels_list=[400, 200, 100],
                num_classes=dataset.num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(1, 201):
        loss = train(data, model, optimizer, criterion)
        # print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
    test_acc = test(data, model)
    print(f'Test Accuracy: {test_acc:.4f}')


if __name__ == "__main__":
    main()
```
- Exercise 2: For the edge prediction task, try building the graph neural network with the `torch_geometric.nn.Sequential` container.
```python
import os.path as osp

import torch
import torch.nn.functional as F
from torch.nn import ReLU
import torch_geometric.transforms as T
from sklearn.metrics import roc_auc_score
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv, Sequential
from torch_geometric.utils import negative_sampling, train_test_split_edges


class Net(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels_list, out_channels):
        super(Net, self).__init__()
        torch.manual_seed(12345)
        # Note: the original version never used out_channels; here the final
        # GCNConv maps hidden_channels_list[-1] -> out_channels so that the
        # argument takes effect.
        hns = [in_channels] + hidden_channels_list + [out_channels]
        conv_list = []
        for idx in range(len(hns) - 2):
            conv_list.append((GCNConv(hns[idx], hns[idx + 1]), 'x, edge_index -> x'))
            conv_list.append(ReLU(inplace=True))
        conv_list.append((GCNConv(hns[-2], hns[-1]), 'x, edge_index -> x'))
        self.convseq = Sequential('x, edge_index', conv_list)

    def encode(self, x, edge_index):
        return self.convseq(x, edge_index)

    def decode(self, z, pos_edge_index, neg_edge_index):
        edge_index = torch.cat([pos_edge_index, neg_edge_index], dim=-1)
        return (z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1)

    def decode_all(self, z):
        prob_adj = z @ z.t()
        return (prob_adj > 0).nonzero(as_tuple=False).t()


def get_link_labels(pos_edge_index, neg_edge_index):
    num_links = pos_edge_index.size(1) + neg_edge_index.size(1)
    link_labels = torch.zeros(num_links, dtype=torch.float)
    link_labels[:pos_edge_index.size(1)] = 1.
    return link_labels


def train(data, model, optimizer):
    model.train()

    neg_edge_index = negative_sampling(
        edge_index=data.train_pos_edge_index,
        num_nodes=data.num_nodes,
        num_neg_samples=data.train_pos_edge_index.size(1))

    # Check whether the sampled negative edges collide with validation or
    # test positive edges (see Question 3 below).
    train_neg_edge_set = set(map(tuple, neg_edge_index.T.tolist()))
    val_pos_edge_set = set(map(tuple, data.val_pos_edge_index.T.tolist()))
    test_pos_edge_set = set(map(tuple, data.test_pos_edge_index.T.tolist()))
    if (len(train_neg_edge_set & val_pos_edge_set) > 0) or (len(train_neg_edge_set & test_pos_edge_set) > 0):
        print('wrong!')

    optimizer.zero_grad()
    z = model.encode(data.x, data.train_pos_edge_index)
    link_logits = model.decode(z, data.train_pos_edge_index, neg_edge_index)
    link_labels = get_link_labels(data.train_pos_edge_index, neg_edge_index).to(data.x.device)
    loss = F.binary_cross_entropy_with_logits(link_logits, link_labels)
    loss.backward()
    optimizer.step()

    return loss


@torch.no_grad()
def test(data, model):
    model.eval()

    z = model.encode(data.x, data.train_pos_edge_index)

    results = []
    for prefix in ['val', 'test']:
        pos_edge_index = data[f'{prefix}_pos_edge_index']
        neg_edge_index = data[f'{prefix}_neg_edge_index']
        link_logits = model.decode(z, pos_edge_index, neg_edge_index)
        link_probs = link_logits.sigmoid()
        link_labels = get_link_labels(pos_edge_index, neg_edge_index)
        results.append(roc_auc_score(link_labels.cpu(), link_probs.cpu()))
    return results


def main():
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # dataset = 'Cora'
    # path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
    # dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
    dataset = Planetoid('/Dataset/Planetoid/Cora', 'Cora', transform=T.NormalizeFeatures())
    data = dataset[0]
    ground_truth_edge_index = data.edge_index.to(device)
    data.train_mask = data.val_mask = data.test_mask = data.y = None
    data = train_test_split_edges(data)
    data = data.to(device)

    model = Net(dataset.num_features, [128], 64).to(device)
    optimizer = torch.optim.Adam(params=model.parameters(), lr=0.01)

    best_val_auc = test_auc = 0
    for epoch in range(1, 101):
        loss = train(data, model, optimizer)
        val_auc, tmp_test_auc = test(data, model)
        if val_auc > best_val_auc:
            best_val_auc = val_auc
            test_auc = tmp_test_auc
        print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, Val: {val_auc:.4f}, '
              f'Test: {test_auc:.4f}')

    z = model.encode(data.x, data.train_pos_edge_index)
    final_edge_index = model.decode_all(z)


if __name__ == "__main__":
    main()
```
- Question 3: As shown in the code below, we pass `data.train_pos_edge_index` as the actual argument to negative sampling, so the sampled negatives may include validation-set or test-set positive edges; that is, genuine positive samples may be labeled as negatives, creating conflicts. We do it anyway; why? And why, during validation and testing, do we encode node representations from `data.train_pos_edge_index` alone?

```python
neg_edge_index = negative_sampling(
    edge_index=data.train_pos_edge_index,
    num_nodes=data.num_nodes,
    num_neg_samples=data.train_pos_edge_index.size(1))
```
## References

- Official documentation for `Sequential`: torch_geometric.nn.Sequential