Title
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
Authors
Abstract
Graph neural networks (GNNs) have achieved superior performance in various applications, but training a dedicated GNN can be costly for large-scale graphs. Some recent works have started to study the pre-training of GNNs. However, none of them provides theoretical insight into the design of their frameworks, or clear requirements and guarantees for their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. First, we propose a novel view of the essential graph information and advocate capturing it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Second, when node features are structure-relevant, we analyze the transferability of EGI with respect to the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct transfer, while those on large-scale knowledge graphs show promising results in the more practical setting of transfer with fine-tuning.