Computing the Computational Complexity of Graph Neural Networks
Unlike conventional convolutional neural networks, the cost of graph convolutions is "unstable": the choice of graph representation and edge structure determines the complexity of the graph convolution. This section explains why.
Dense vs Sparse Graph Representation: expensive vs cheap?
Graph data for a GNN input can be represented in two ways:
A) sparse: As a list of nodes and a list of edge indices
B) dense: As a list of nodes and an adjacency matrix
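The two representations can be sketched for a small toy graph as follows (a minimal illustration using NumPy; the variable names are my own, not from any specific GNN library):

```python
import numpy as np

# A toy graph: N = 4 nodes with F = 2 features each, M = 3 directed edges.
N, F = 4, 2
nodes = np.random.rand(N, F)           # node feature matrix, shape (N, F)

# A) sparse: an edge-index list of shape (2, M);
#    each column is a (source, target) pair
edge_index = np.array([[0, 1, 2],
                       [1, 2, 3]])

# B) dense: an adjacency matrix of shape (N, N)
adj = np.zeros((N, N))
adj[edge_index[0], edge_index[1]] = 1.0

print(nodes.shape, edge_index.shape, adj.shape)  # (4, 2) (2, 3) (4, 4)
```

Both objects describe exactly the same graph; they differ only in how the edges are stored.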
For any graph G with N vertices, node features of length F, and M edges, the sparse version operates on a node feature matrix of size N*F and an edge-index list of size 2*M. The dense representation, in contrast, requires an adjacency matrix of size N*N.
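For sparse graphs (M much smaller than N²) the size gap between the two representations is dramatic. A quick back-of-the-envelope calculation (counting stored values and ignoring dtype size; the graph dimensions below are hypothetical):

```python
# Number of stored values for each representation.
def sparse_cost(N, F, M):
    return N * F + 2 * M      # node features + edge-index list

def dense_cost(N, F, M):
    return N * F + N * N      # node features + adjacency matrix

# Hypothetical sparse graph: 10k nodes, 64-dim features, 50k edges.
N, F, M = 10_000, 64, 50_000
print(sparse_cost(N, F, M))   # 740000
print(dense_cost(N, F, M))    # 100640000
```

Here the dense representation stores over a hundred times more values, and the gap grows quadratically with N.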
While the sparse representation is generally much cheaper to use inside a graph neural network for the forward and backward pass, edge-modification operations require searching for the right edge-node pairs and possibly resizing the list, which makes the network's RAM usage vary. In other words, sparse representations minimize memory usage on graphs with fixed edges.
While more expensive to use, the dense representation has the following advantages: edge weights are naturally included in the adjacency matrix; edge modification can be done smoothly and integrated into the network; and finding edges or changing edge values does not change the size of the matrix. These properties are crucial for graph neural networks that rely on in-network edge modifications.
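The contrast can be made concrete with a small sketch (illustrative only, not library code): updating an edge in the dense form is a constant-time write into a fixed-shape array, while in the sparse form the same operation means searching and possibly growing the edge list.

```python
import numpy as np

# Dense: updating an edge weight is an O(1) indexed write,
# and the tensor shape never changes.
N = 4
adj = np.zeros((N, N))
adj[0, 1] = 0.5            # set edge (0, 1) with weight 0.5
adj[0, 1] = 0.9            # update it in place; shape stays (N, N)
print(adj.shape)           # (4, 4)

# Sparse: to touch an edge we must scan the list for the right pair,
# and adding a new edge changes the list's size.
edges = [(0, 1), (1, 2)]
idx = edges.index((0, 1))  # linear search for the edge to modify
edges.append((2, 3))       # the list grows: now 3 edges
print(len(edges))          # 3
```

The fixed shape of the adjacency matrix is what lets edge changes flow through a differentiable network without any bookkeeping.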