Paper Title

Rethinking pooling in graph neural networks

Paper Authors

Diego Mesquita, Amauri H. Souza, Samuel Kaski

Paper Abstract

Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks.
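The abstract formulates graph pooling as a cluster assignment problem and describes a randomized variant that ignores graph locality. The sketch below is a minimal illustration of that formulation, not the authors' implementation: the function names (`random_assignment`, `pool`) and the plain-NumPy setup are illustrative assumptions. Given node features X (n x d), adjacency A (n x n), and an assignment matrix S (n x k), cluster-assignment pooling computes X' = S^T X and A' = S^T A S; the randomized variant simply samples S uniformly instead of deriving it from local structure.

```python
# Minimal sketch of pooling as cluster assignment (illustrative only,
# not the paper's code). A locality-agnostic random assignment replaces
# the usual learned/clustered one.
import numpy as np

def random_assignment(num_nodes: int, num_clusters: int, rng) -> np.ndarray:
    """Assign each node to a uniformly random cluster, ignoring locality."""
    S = np.zeros((num_nodes, num_clusters))
    S[np.arange(num_nodes), rng.integers(0, num_clusters, size=num_nodes)] = 1.0
    return S

def pool(X: np.ndarray, A: np.ndarray, S: np.ndarray):
    """Cluster-assignment pooling: X' = S^T X, A' = S^T A S."""
    return S.T @ X, S.T @ A @ S

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2
X = rng.normal(size=(n, d))                   # node features
A = (rng.random((n, n)) < 0.4).astype(float)  # random graph
A = np.triu(A, 1); A = A + A.T                # symmetric, no self-loops
Xp, Ap = pool(X, A, random_assignment(n, k, rng))
print(Xp.shape, Ap.shape)                     # (2, 4) (2, 2)
```

Under the same formulation, the complement-graph variant mentioned in the abstract would derive S by running a standard clustering algorithm on the complement adjacency, which connects exactly the node pairs that are non-adjacent in the original graph, so pooled clusters deliberately group distant nodes.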
