Paper Title

Graph Information Bottleneck

Paper Authors

Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec

Paper Abstract

Representation learning of graph-structured data is challenging because both graph structure and node features carry important information. Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features. However, GNNs are prone to adversarial attacks. Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data. Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target, and simultaneously constraining the mutual information between the representation and the input data. Different from the general IB, GIB regularizes the structural as well as the feature information. We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating the resilience to adversarial attacks. We show that our proposed models are more robust than state-of-the-art graph defense models. GIB-based models empirically achieve up to 31% improvement with adversarial perturbation of the graph structure as well as node features.
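To make the principle concrete, the abstract's description corresponds to an IB-style objective. A minimal sketch in standard IB notation (our gloss of the abstract, not an equation copied from the paper): writing $D = (A, X)$ for the input graph data (structure $A$ and node features $X$), $Z$ for the learned representation, $Y$ for the prediction target, and $I(\cdot\,;\cdot)$ for mutual information, GIB seeks

$$\min_{\mathbb{P}(Z \mid D)} \; -I(Y; Z) + \beta\, I(D; Z),$$

where $\beta > 0$ trades off expressiveness (maximizing $I(Y; Z)$, so the representation stays predictive of the target) against robustness (limiting $I(D; Z)$, so the representation retains little of the potentially perturbed structure and features). The paper's structural regularization and the GIB-Cat/GIB-Bern models are instantiations of this trade-off; their sampling details are not reproduced here.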
