Paper Title
Towards Consistency and Complementarity: A Multiview Graph Information Bottleneck Approach
Paper Authors
Paper Abstract
Empirical studies of Graph Neural Networks (GNNs) broadly take the original node features and adjacency relationships as a single-view input, ignoring the rich information carried by multiple graph views. To circumvent this issue, multiview graph analysis frameworks have been developed to fuse graph information across views. How to model and integrate shared (i.e., consistency) and view-specific (i.e., complementarity) information is a key issue in multiview graph analysis. In this paper, we propose a novel Multiview Variational Graph Information Bottleneck (MVGIB) principle that maximizes the agreement among common representations and the disagreement among view-specific representations. Under this principle, we formulate common and view-specific information bottleneck objectives across multiple views using mutual-information constraints. However, these objectives are hard to optimize directly because mutual information is computationally intractable. To tackle this challenge, we derive variational lower and upper bounds on the mutual information terms, and then optimize these variational bounds to find approximate solutions to the information objectives. Extensive experiments on graph benchmark datasets demonstrate the superior effectiveness of the proposed method.
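The abstract's core computational idea is to replace intractable mutual-information terms with tractable variational bounds. The paper's own bounds are not reproduced here, but the following minimal NumPy sketch illustrates one standard variational lower bound on mutual information, the InfoNCE estimator (van den Oord et al., 2018), which treats matched rows of two representation matrices as positive pairs and all other pairings as negatives; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def infonce_lower_bound(x, z):
    """InfoNCE variational lower bound on I(X; Z), in nats.

    x, z: (n, d) arrays of paired representations; row i of x and
    row i of z form a positive pair, other pairings act as negatives.
    The estimate is capped at log(n) by construction.
    """
    scores = x @ z.T                              # (n, n) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # Mean log-probability of the positive (diagonal) pairs, shifted by log(n)
    return float(np.mean(np.diag(log_softmax)) + np.log(len(x)))

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
x_dep = z + 0.1 * rng.normal(size=(128, 16))    # strongly dependent views
x_indep = rng.normal(size=(128, 16))            # independent views
print(infonce_lower_bound(x_dep, z))            # high: approaches the log(n) cap
print(infonce_lower_bound(x_indep, z))          # low: near zero
```

In an MVGIB-style objective, such a lower bound would be maximized for terms the principle wants to increase (e.g., agreement between common representations), while an upper bound would constrain the compression terms.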