Paper Title
Compact Graph Structure Learning via Mutual Information Compression
Paper Authors
Paper Abstract
Graph Structure Learning (GSL) has recently attracted considerable attention for its capacity to optimize graph structure while simultaneously learning suitable parameters of Graph Neural Networks (GNNs). Current GSL methods mainly learn an optimal graph structure (final view) from single or multiple information sources (basic views); however, theoretical guidance on what constitutes an optimal graph structure remains unexplored. In essence, an optimal graph structure should contain only the information relevant to the task while compressing redundant noise as much as possible, which is defined as the "minimal sufficient structure", so as to maintain accuracy and robustness. How can such a structure be obtained in a principled way? In this paper, we theoretically prove that if we optimize the basic views and the final view based on mutual information, while simultaneously maintaining their performance on labels, the final view will be a minimal sufficient structure. With this guidance, we propose a Compact GSL architecture via Mutual Information (MI) compression, named CoGSL. Specifically, two basic views are extracted from the original graph as the two inputs of the model, and each is carefully re-estimated by a view estimator. Then, we propose an adaptive technique to fuse the estimated views into the final view. Furthermore, we maintain the performance of the estimated views and the final view while reducing the mutual information between every two views. To comprehensively evaluate the performance of CoGSL, we conduct extensive experiments on several datasets under both clean and attacked conditions, which demonstrate the effectiveness and robustness of CoGSL.
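The adaptive fusion step described in the abstract can be sketched as follows. This is a minimal illustration only: the per-node softmax weighting over prediction confidences, and the function and variable names (`fuse_views`, `conf1`, `conf2`), are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def fuse_views(a1, a2, conf1, conf2):
    """Adaptively fuse two estimated adjacency views into a final view.

    a1, a2: (n, n) row-normalized adjacency matrices (the two estimated views).
    conf1, conf2: (n,) per-node confidence scores for each view (assumed proxy
    for how much each view should be trusted at each node).
    """
    # Softmax over the two confidences gives per-node mixing weights.
    w1 = np.exp(conf1) / (np.exp(conf1) + np.exp(conf2))
    w2 = 1.0 - w1
    # Each node's row in the final view is a convex combination of its rows
    # in the two basic views.
    return w1[:, None] * a1 + w2[:, None] * a2

# Toy example: two 3-node views with opposing confidences.
a1 = np.eye(3)                 # view 1: self-loops only
a2 = np.ones((3, 3)) / 3.0     # view 2: uniform connectivity
conf1 = np.array([2.0, 0.0, 1.0])
conf2 = np.array([0.0, 2.0, 1.0])
final = fuse_views(a1, a2, conf1, conf2)
print(final.shape)  # (3, 3)
```

Because each row of the final view is a convex combination of row-normalized rows, the result stays row-normalized, which is one reason a per-node convex fusion is a natural design choice here.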