Paper Title

GRAND+: Scalable Graph Random Neural Networks

Paper Authors

Wenzheng Feng, Yuxiao Dong, Tinglin Huang, Ziqi Yin, Xu Cheng, Evgeny Kharlamov, Jie Tang

Paper Abstract

Graph neural networks (GNNs) have been widely adopted for semi-supervised learning on graphs. A recent study shows that the graph random neural network (GRAND) model can generate state-of-the-art performance for this problem. However, it is difficult for GRAND to handle large-scale graphs since its effectiveness relies on computationally expensive data augmentation procedures. In this work, we present a scalable and high-performance GNN framework GRAND+ for semi-supervised graph learning. To address the above issue, we develop a generalized forward push (GFPush) algorithm in GRAND+ to pre-compute a general propagation matrix and employ it to perform graph data augmentation in a mini-batch manner. We show that both the low time and space complexities of GFPush enable GRAND+ to efficiently scale to large graphs. Furthermore, we introduce a confidence-aware consistency loss into the model optimization of GRAND+, facilitating GRAND+'s generalization superiority. We conduct extensive experiments on seven public datasets of different sizes. The results demonstrate that GRAND+ 1) is able to scale to large graphs and costs less running time than existing scalable GNNs, and 2) can offer consistent accuracy improvements over both full-batch and scalable GNNs across all datasets.
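The abstract mentions a confidence-aware consistency loss that regularizes predictions across multiple random augmentations of the graph. The snippet below is a minimal sketch of that general idea, not the authors' implementation: predictions from several augmented views are pulled toward their sharpened average, and only nodes whose averaged prediction is sufficiently confident contribute. The threshold `gamma`, temperature `T`, and the squared-L2 distance are illustrative assumptions rather than values taken from the paper.

```python
# Hypothetical sketch of a confidence-aware consistency loss (not GRAND+'s code).
import torch

def confidence_aware_consistency_loss(probs_list, gamma=0.8, T=0.5):
    """probs_list: list of S tensors of shape [num_nodes, num_classes],
    each row a class-probability distribution from one augmented view."""
    avg = torch.stack(probs_list, dim=0).mean(dim=0)            # [N, C]
    # Sharpen the averaged distribution (temperature T < 1 sharpens it).
    sharpened = avg ** (1.0 / T)
    sharpened = (sharpened / sharpened.sum(dim=1, keepdim=True)).detach()
    # Keep only nodes whose averaged prediction is confident enough.
    mask = (avg.max(dim=1).values >= gamma).float()             # [N]
    loss = 0.0
    for probs in probs_list:
        per_node = ((probs - sharpened) ** 2).sum(dim=1)        # squared L2 distance
        loss = loss + (per_node * mask).sum() / mask.sum().clamp(min=1.0)
    return loss / len(probs_list)
```

In practice such a term would be added to the supervised loss on labeled nodes, so unlabeled nodes with confident, augmentation-consistent predictions also shape the model.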
