Paper Title

Towards Deeper Graph Neural Networks with Differentiable Group Normalization

Authors

Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, Xia Hu

Abstract

Graph neural networks (GNNs), which learn the representation of a node by aggregating its neighbors, have become an effective computational tool in downstream applications. Over-smoothing is one of the key issues limiting the performance of GNNs as the number of layers increases: stacked aggregators make node representations converge to indistinguishable vectors. Several attempts have been made to tackle the issue by pulling linked node pairs close and pushing unlinked pairs apart. However, they often ignore the intrinsic community structure and can result in sub-optimal performance. The representations of nodes within the same community/class need to be similar to facilitate classification, while different classes are expected to be separated in the embedding space. To bridge the gap, we introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN). It normalizes nodes within the same group independently to increase their smoothness, and separates node distributions among different groups to significantly alleviate the over-smoothing issue. Experiments on real-world datasets demonstrate that DGN makes GNN models more robust to over-smoothing and achieves better performance with deeper GNNs.
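The core idea, soft-assigning nodes to groups and normalizing each group independently before adding the result back to the layer output, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the parameter names (`W` for the learnable assignment weights, `gamma`/`beta` for per-group scale and shift, `lam` for the skip weight) are hypothetical, and shapes are simplified to a single dense embedding matrix.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dgn_forward(H, W, gamma, beta, lam=0.01, eps=1e-5):
    """Sketch of differentiable group normalization.

    H:     (n, d) node embeddings from a GNN layer
    W:     (d, G) learnable weights producing soft group assignments
    gamma: (G, d) per-group scale (hypothetical parameterization)
    beta:  (G, d) per-group shift
    lam:   weight of the normalized term added to the skip connection
    """
    S = softmax(H @ W, axis=1)                # (n, G) soft group memberships
    out = H.copy()                            # skip connection keeps raw embeddings
    G = W.shape[1]
    for g in range(G):
        Hg = S[:, g:g + 1] * H                # embeddings weighted by membership in group g
        mu = Hg.mean(axis=0, keepdims=True)   # group-wise statistics over nodes
        var = Hg.var(axis=0, keepdims=True)
        Hg_norm = (Hg - mu) / np.sqrt(var + eps)
        out = out + lam * (gamma[g] * Hg_norm + beta[g])
    return out
```

Because the assignment matrix `S` comes from a softmax over learnable weights, the grouping itself is differentiable and trained end to end; normalizing each group around its own statistics is what keeps within-group embeddings smooth while letting different groups occupy separate regions of the embedding space.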
