Paper Title

Revisiting Over-smoothing in Deep GCNs

Authors

Chaoqi Yang, Ruijie Wang, Shuochao Yao, Shengzhong Liu, Tarek Abdelzaher

Abstract

Oversmoothing has been assumed to be the major cause of the performance drop in deep graph convolutional networks (GCNs). In this paper, we propose a new view that deep GCNs can actually learn to anti-oversmooth during training. This work interprets a standard GCN architecture as the layerwise integration of a Multi-layer Perceptron (MLP) and graph regularization. We analyze and conclude that before training, the final representation of a deep GCN does over-smooth; however, the network learns anti-oversmoothing during training. Based on this conclusion, the paper further designs a cheap but effective trick to improve GCN training. We verify our conclusions and evaluate the trick on three citation networks, and further provide insights on neighborhood aggregation in GCNs.
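To make the abstract's reading of a GCN layer concrete, below is a minimal NumPy sketch, not taken from the paper. It assumes the standard propagation rule H(l+1) = σ(Â H(l) W(l)) of Kipf & Welling, with Â the symmetrically normalized adjacency with self-loops; the function names and the toy cycle graph are illustrative. Each layer splits into an MLP-style feature transform followed by a graph-regularization-style neighborhood averaging, and the toy check shows that with untrained (here, identity) weights, stacking the averaging step alone pulls all node representations toward a common value, i.e., over-smoothing before training.

```python
import numpy as np

def normalized_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}: add self-loops, then
    symmetrically normalize by node degrees."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H, W):
    """One GCN layer, read as two sub-steps:
    (1) MLP step: per-node feature transform H @ W;
    (2) graph-regularization step: neighborhood averaging A_hat @ (...)."""
    return np.maximum(A_hat @ (H @ W), 0.0)  # ReLU nonlinearity

# Toy check of "over-smoothing before training": with W fixed to the
# identity and the nonlinearity dropped, repeated neighborhood averaging
# on a 10-node cycle shrinks the spread of node representations.
n = 10
A = np.zeros((n, n))
idx = np.arange(n)
A[idx, (idx + 1) % n] = 1.0
A = A + A.T                        # undirected cycle graph
A_hat = normalized_adjacency(A)

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 4))    # random initial node features
for k in (1, 4, 16, 64):
    Hk = np.linalg.matrix_power(A_hat, k) @ H
    dispersion = np.linalg.norm(Hk - Hk.mean(axis=0))
    print(f"depth {k:2d}: node dispersion = {dispersion:.4f}")
```

The printed dispersion decays toward zero with depth, which matches the "over-smooth before training" half of the abstract; the paper's thesis is that training the weight matrices W(l) then counteracts this collapse (anti-oversmoothing), which a propagation-only analysis like the toy check above cannot capture.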
