Paper Title
Policy-GNN: Aggregation Optimization for Graph Neural Networks
Paper Authors
Paper Abstract
Graph data are pervasive in many real-world applications. Recently, increasing attention has been paid to graph neural networks (GNNs), which aim to model local graph structures and capture hierarchical patterns by aggregating information from neighbors with stackable network modules. Motivated by the observation that different nodes often require different iterations of aggregation to fully capture the structural information, in this paper we propose to explicitly sample diverse iterations of aggregation for different nodes to boost the performance of GNNs. Developing an effective aggregation strategy for each node is challenging given complex graphs and sparse features. Moreover, it is not straightforward to derive an efficient algorithm, since the sampled nodes must be fed into different numbers of network layers. To address these challenges, we propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs as a combined learning process. Specifically, Policy-GNN uses a meta-policy to adaptively determine the number of aggregations for each node. The meta-policy is trained with deep reinforcement learning (RL) by exploiting feedback from the model. We further introduce parameter sharing and a buffer mechanism to improve training efficiency. Experimental results on three real-world benchmark datasets suggest that Policy-GNN significantly outperforms state-of-the-art alternatives, showing the promise of aggregation optimization for GNNs.
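Below is a minimal, illustrative sketch of the core idea described in the abstract: a meta-policy selects, per node, how many aggregation steps the GNN applies, and is trained from the task's feedback. This is not the authors' implementation; the class names (MetaPolicy, AdaptiveGNN), the tiny synthetic graph, and the REINFORCE-style update (standing in for the paper's deep-RL training) are assumptions made for illustration, and the buffer mechanism is omitted.

```python
# Illustrative sketch only -- NOT the authors' implementation. It mirrors the
# high-level idea from the abstract: a meta-policy picks, per node, how many
# aggregation (message-passing) steps to apply, and is trained from task feedback.
# All names, the synthetic graph, and the REINFORCE-style update (a stand-in for
# the paper's deep-RL training) are assumptions for illustration; the buffer
# mechanism is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny synthetic graph: 20 nodes, 16-dim features, 3 classes, at most 3 hops.
num_nodes, feat_dim, num_classes, max_hops = 20, 16, 3, 3
x = torch.randn(num_nodes, feat_dim)
y = torch.randint(0, num_classes, (num_nodes,))
adj = torch.rand(num_nodes, num_nodes) < 0.2
adj = (adj | adj.t() | torch.eye(num_nodes, dtype=torch.bool)).float()  # symmetric + self-loops
norm_adj = torch.diag(1.0 / adj.sum(dim=1)) @ adj                       # row-normalized propagation


class MetaPolicy(nn.Module):
    """Maps node attributes to a distribution over 1..max_hops aggregation steps."""
    def __init__(self, in_dim, max_hops):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, max_hops))

    def forward(self, feats):
        return F.softmax(self.net(feats), dim=-1)


class AdaptiveGNN(nn.Module):
    """A stack of aggregation layers; each node is read out after its own number of hops."""
    def __init__(self, in_dim, hid_dim, out_dim, max_hops):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid_dim, hid_dim) for i in range(max_hops)])
        self.classify = nn.Linear(hid_dim, out_dim)

    def forward(self, x, norm_adj, hops_per_node):
        h = x
        readout = torch.zeros(x.size(0), self.classify.in_features)
        for i, layer in enumerate(self.layers):
            h = F.relu(layer(norm_adj @ h))                 # one neighborhood aggregation
            take = (hops_per_node == i + 1).unsqueeze(1)    # nodes that stop after i+1 hops
            readout = torch.where(take, h, readout)
        return self.classify(readout)


policy = MetaPolicy(feat_dim, max_hops)
gnn = AdaptiveGNN(feat_dim, 32, num_classes, max_hops)
opt = torch.optim.Adam(list(policy.parameters()) + list(gnn.parameters()), lr=1e-2)

for step in range(50):
    dist = torch.distributions.Categorical(policy(x))       # per-node hop distribution
    actions = dist.sample()                                  # 0..max_hops-1 -> 1..max_hops hops
    logits = gnn(x, norm_adj, actions + 1)
    task_loss = F.cross_entropy(logits, y)
    # Per-node correctness as a simple reward signal for the meta-policy.
    reward = (logits.argmax(dim=1) == y).float() - 0.5
    policy_loss = -(dist.log_prob(actions) * reward.detach()).mean()
    opt.zero_grad()
    (task_loss + policy_loss).backward()
    opt.step()
```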