Node Representation Learning on Very Large Graphs
Introduction
When the available hardware cannot meet the requirements of experiments on the full graph, node representation learning methods designed for very large graphs are needed, which makes them highly practical.
The Cluster-GCN Method
Basic Concepts
The node representations are computed layer by layer as follows:
$z^{(l+1)} = A'X^{(l)}W^{(l)}, \quad X^{(l+1)} = \sigma(z^{(l+1)})$
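To make the two formulas above concrete, here is a minimal sketch of a single propagation step on a made-up 3-node graph (the names A_norm, X and W and all values are illustrative only):

import torch

# A': normalized adjacency of a toy 3-node graph (values chosen arbitrarily).
A_norm = torch.tensor([[0.5, 0.5, 0.0],
                       [0.5, 0.0, 0.5],
                       [0.0, 0.5, 0.5]])
X = torch.randn(3, 4)      # X^{(l)}: 3 nodes, 4 input features
W = torch.randn(4, 2)      # W^{(l)}: maps 4 features to 2

z = A_norm @ X @ W         # z^{(l+1)} = A' X^{(l)} W^{(l)}
X_next = torch.relu(z)     # X^{(l+1)} = sigma(z^{(l+1)})
print(X_next.shape)        # torch.Size([3, 2])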
Here A' is the normalized adjacency matrix, X^{(l)} the node feature matrix at layer l, W^{(l)} the learnable weight matrix, and σ an activation function. From these formulas, the memory cost of propagation can be reduced simply by shrinking the feature matrix X involved in each step, i.e., by propagating over subgraphs instead of the whole graph.
Algorithm
- Use a graph clustering algorithm to partition the nodes of the graph into c clusters. In each step, select a few clusters; these nodes and the edges among them form a subgraph, and training is performed on that subgraph.
- In every iteration, several clusters are chosen at random and merged into one training batch (see the sketch after this list).
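Below is a minimal sketch of how such a batch subgraph could be assembled from a random choice of clusters, assuming a precomputed cluster_assignment tensor with one cluster id per node; the helper name sample_cluster_batch is hypothetical, and in practice PyG's ClusterData/ClusterLoader (used later in this section) do this work for you.

import torch

def sample_cluster_batch(edge_index, cluster_assignment, clusters_per_batch):
    # Randomly pick a few cluster ids for this batch.
    num_clusters = int(cluster_assignment.max()) + 1
    chosen = torch.randperm(num_clusters)[:clusters_per_batch]
    # Mask of nodes belonging to any chosen cluster.
    node_mask = torch.isin(cluster_assignment, chosen)
    node_idx = node_mask.nonzero(as_tuple=False).view(-1)
    # Keep only edges whose two endpoints both lie inside the batch.
    edge_mask = node_mask[edge_index[0]] & node_mask[edge_index[1]]
    sub_edge_index = edge_index[:, edge_mask]
    # Relabel node ids to a compact 0..N-1 range for the subgraph.
    relabel = torch.full((node_mask.size(0),), -1, dtype=torch.long)
    relabel[node_idx] = torch.arange(node_idx.size(0))
    return node_idx, relabel[sub_edge_index]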
Problem on very large graphs: neighborhood expansion
The figure in the Cluster-GCN paper contrasts the neighborhood expansion of earlier methods with that of Cluster-GCN; the red node is the node from which the expansion starts. Earlier methods need an exponentially growing neighborhood expansion (left panel), whereas Cluster-GCN restricts the expansion to the current cluster and thus avoids the huge neighborhood blow-up (right panel).
Problem on very large graphs: cluster label entropy
A small label entropy of a cluster means that the label distribution of its nodes is concentrated on a few classes; in other words, the label distributions of different clusters differ substantially, which harms the convergence of training. As the figure in the paper shows, most clusters produced by graph clustering have much smaller entropy than those produced by random partitioning.
Cluster-GCN therefore adopts a stochastic multiple-clusters scheme: each training batch is built from several randomly chosen clusters.
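The per-cluster label entropy can be measured directly; below is a minimal sketch (labels and cluster_assignment are assumed to be per-node tensors and are not part of the original notes).

import torch

def cluster_label_entropy(labels, cluster_assignment, num_classes):
    # Entropy of the label distribution inside each cluster;
    # a low value means the cluster is dominated by a few classes.
    entropies = []
    for c in cluster_assignment.unique():
        cluster_labels = labels[cluster_assignment == c]
        counts = torch.bincount(cluster_labels, minlength=num_classes).float()
        p = counts / counts.sum()
        p = p[p > 0]                      # drop empty classes to avoid log(0)
        entropies.append(-(p * p.log()).sum().item())
    return entropies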
Problem on very large graphs: training deeper models
A common remedy is residual connections. Cluster-GCN proposes an additional strategy based on the intuition that nearby neighbors should contribute more to a node than distant ones: amplify the diagonal part of the adjacency matrix A used in each GCN layer, as in the following formula:
$X^{(l+1)} = \sigma\big((\bar{A} + \lambda\,\mathrm{diag}(\bar{A}))\,X^{(l)}W^{(l)}\big)$
where $\bar{A}$ is the normalized adjacency matrix and λ controls the strength of the diagonal enhancement.
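A minimal sketch of one such diagonally enhanced propagation step, assuming A_norm already holds the normalized adjacency matrix Ā (all names here are illustrative, not taken from the original code):

import torch

def diag_enhanced_layer(A_norm, X, W, lam=1.0):
    # (A_bar + lambda * diag(A_bar)) X W, followed by a non-linearity.
    A_enh = A_norm + lam * torch.diag(torch.diagonal(A_norm))
    return torch.relu(A_enh @ X @ W)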
Code
PyG already provides ready-made utilities for Cluster-GCN; we only need to call them.
ClusterData and ClusterLoader
# Partition the graph into 1500 clusters (METIS) and cache the partition on disk.
cluster_data = ClusterData(data, num_parts=1500, recursive=False,
                           save_dir=dataset.processed_dir)
# Each training batch merges 20 randomly chosen clusters into one subgraph.
train_loader = ClusterLoader(cluster_data, batch_size=20, shuffle=True,
                             num_workers=0)
# Full-neighborhood sampler (sizes=[-1]) used for layer-wise inference on the whole graph.
subgraph_loader = NeighborSampler(data.edge_index, sizes=[-1], batch_size=64,
                                  shuffle=False, num_workers=0)
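As a quick sanity check (not part of the original notes), one can iterate over train_loader and inspect the size of each merged subgraph:

# Each `batch` is a Data object holding the subgraph induced by ~20 clusters.
for batch in train_loader:
    print(batch.num_nodes, batch.num_edges, int(batch.train_mask.sum()))
    break  # look at the first batch only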
Full experiment code
import torch
import torch.nn.functional as F
from torch.nn import ModuleList
from tqdm import tqdm
from torch_geometric.datasets import Reddit, Reddit2
from torch_geometric.data import ClusterData, ClusterLoader, NeighborSampler
from torch_geometric.nn import SAGEConv

# Download https://data.dgl.ai/dataset/reddit.zip into the data/Reddit folder first.
dataset = Reddit('data/Reddit')
# dataset = Reddit2('data/Reddit2')
data = dataset[0]

cluster_data = ClusterData(data, num_parts=1500, recursive=False,
                           save_dir=dataset.processed_dir)
train_loader = ClusterLoader(cluster_data, batch_size=20, shuffle=True,
                             num_workers=0)
subgraph_loader = NeighborSampler(data.edge_index, sizes=[-1], batch_size=64,
                                  shuffle=False, num_workers=0)


class Net(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super(Net, self).__init__()
        self.convs = ModuleList(
            [SAGEConv(in_channels, 128),
             SAGEConv(128, out_channels)])

    def forward(self, x, edge_index):
        for i, conv in enumerate(self.convs):
            x = conv(x, edge_index)
            if i != len(self.convs) - 1:
                x = F.relu(x)
                x = F.dropout(x, p=0.5, training=self.training)
        return F.log_softmax(x, dim=-1)

    def inference(self, x_all):
        pbar = tqdm(total=x_all.size(0) * len(self.convs))
        pbar.set_description('Evaluating')

        # Compute representations of nodes layer by layer, using *all*
        # available edges. This leads to faster computation in contrast to
        # immediately computing the final representations of each batch.
        for i, conv in enumerate(self.convs):
            xs = []
            for batch_size, n_id, adj in subgraph_loader:
                edge_index, _, size = adj.to(device)
                x = x_all[n_id].to(device)
                x_target = x[:size[1]]
                x = conv((x, x_target), edge_index)
                if i != len(self.convs) - 1:
                    x = F.relu(x)
                xs.append(x.cpu())

                pbar.update(batch_size)

            x_all = torch.cat(xs, dim=0)

        pbar.close()
        return x_all


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net(dataset.num_features, dataset.num_classes).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)


def train():
    model.train()
    total_loss = total_nodes = 0
    for batch in train_loader:
        batch = batch.to(device)
        optimizer.zero_grad()
        out = model(batch.x, batch.edge_index)
        loss = F.nll_loss(out[batch.train_mask], batch.y[batch.train_mask])
        loss.backward()
        optimizer.step()

        nodes = batch.train_mask.sum().item()
        total_loss += loss.item() * nodes
        total_nodes += nodes
    return total_loss / total_nodes


@torch.no_grad()
def test():  # Inference should be performed on the full graph.
    model.eval()
    out = model.inference(data.x)
    y_pred = out.argmax(dim=-1)
    accs = []
    for mask in [data.train_mask, data.val_mask, data.test_mask]:
        correct = y_pred[mask].eq(data.y[mask]).sum().item()
        accs.append(correct / mask.sum().item())
    return accs


for epoch in range(1, 31):
    loss = train()
    if epoch % 5 == 0:
        train_acc, val_acc, test_acc = test()
        print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Train: {train_acc:.4f}, '
              f'Val: {val_acc:.4f}, test: {test_acc:.4f}')
    else:
        print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}')
Homework
Try splitting the dataset into different numbers of clusters, run the experiments, and compare the results.
The PPI dataset was chosen; the results are shown in the table below.
| | num_parts=50 | num_parts=100 |
|---|---|---|
| Test accuracy | 0.9823 | 0.9574 |
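A minimal sketch of how the partitioning step could be repeated with different cluster counts (reusing the data object from the code above; save_dir is left as None so that every run recomputes its own partition). Note that PPI is a multi-label dataset spread over several graphs, so its training loop additionally needs a multi-label loss such as binary cross-entropy.

# Compare different numbers of clusters; the rest of the pipeline stays the same.
for num_parts in [50, 100]:
    cluster_data = ClusterData(data, num_parts=num_parts, recursive=False,
                               save_dir=None)
    train_loader = ClusterLoader(cluster_data, batch_size=20, shuffle=True,
                                 num_workers=0)
    # ...re-create the model, train, and evaluate as above, then record the test accuracy.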
References
[1] Datawhale team-learning-nlp: https://github.com/datawhalechina/team-learning-nlp
[2] PyTorch Geometric documentation: https://pytorch-geometric.readthedocs.io/en/latest/
[3] Chiang et al., Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks, KDD 2019.