Paper Title

Learning to Drop: Robust Graph Neural Network via Topological Denoising

Paper Authors

Dongsheng Luo, Wei Cheng, Wenchao Yu, Bo Zong, Jingchao Ni, Haifeng Chen, Xiang Zhang

Paper Abstract

Graph Neural Networks (GNNs) have been shown to be powerful tools for graph analytics. The key idea is to recursively propagate and aggregate information along the edges of the given graph. Despite their success, however, existing GNNs are usually sensitive to the quality of the input graph. Real-world graphs are often noisy and contain task-irrelevant edges, which may lead to suboptimal generalization performance in the learned GNN models. In this paper, we propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of GNNs by learning to drop task-irrelevant edges. PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks. To take the topology of the entire graph into consideration, nuclear norm regularization is applied to impose a low-rank constraint on the resulting sparsified graph for better generalization. PTDNet can be used as a key component in GNN models to improve their performance on various tasks, such as node classification and link prediction. Experimental studies on both synthetic and benchmark datasets show that PTDNet can improve the performance of GNNs significantly, and the performance gain becomes larger for noisier datasets.
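To make the two penalties named in the abstract concrete (an edge-count penalty on the sparsified graph, plus nuclear norm regularization for a low-rank bias), here is a minimal PyTorch sketch. It is an illustrative assumption, not the authors' implementation: the `EdgeDenoiser` module, the soft sigmoid mask (standing in for the paper's actual differentiable edge-sampling scheme), and the penalty weights are all hypothetical.

```python
import torch
import torch.nn as nn


class EdgeDenoiser(nn.Module):
    """Hypothetical PTDNet-style edge dropper (a sketch, not the paper's code).

    A small MLP scores each edge (i, j) from its endpoint features and emits a
    keep-probability. Two regularizers mirror the abstract: the expected edge
    count of the sparsified graph, and the nuclear norm of the masked adjacency
    as a low-rank penalty.
    """

    def __init__(self, feat_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
        # x: [num_nodes, feat_dim]; edge_index: [2, num_edges]
        src, dst = edge_index
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        keep_prob = torch.sigmoid(logits)  # soft, differentiable edge mask

        # Dense masked adjacency; fine for a sketch, large graphs need sparse ops.
        n = x.size(0)
        adj = torch.zeros(n, n, device=x.device)
        adj[src, dst] = keep_prob

        edge_count = keep_prob.sum()                        # sparsity penalty
        nuclear = torch.linalg.matrix_norm(adj, ord="nuc")  # low-rank penalty
        return adj, edge_count, nuclear


# Usage: feed the sparsified adjacency to a downstream GNN and add both
# penalties (weights here are illustrative) to the task loss.
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))
adj, edge_count, nuclear = EdgeDenoiser(feat_dim=16)(x, edge_index)
reg_loss = 1e-3 * edge_count + 1e-4 * nuclear
reg_loss.backward()
```

In training, `reg_loss` would be added to the GNN's task loss (e.g., node-classification cross-entropy) so that the edge scorer is learned jointly with the downstream model.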
