Paper Title

Overcoming Catastrophic Forgetting in Graph Neural Networks

Paper Authors

Huihui Liu, Yiding Yang, Xinchao Wang

Paper Abstract

Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks. Prior methods have focused on overcoming this problem in convolutional neural networks (CNNs), where input samples such as images lie in a grid domain, but have largely overlooked graph neural networks (GNNs) that handle non-grid data. In this paper, we propose a novel scheme dedicated to overcoming the catastrophic forgetting problem and hence strengthening continual learning in GNNs. At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP), applicable to arbitrary forms of GNNs in a plug-and-play fashion. Unlike the mainstream of CNN-based continual learning methods, which rely solely on slowing down the updates of parameters important to the downstream task, TWP explicitly explores the local structures of the input graph and attempts to stabilize the parameters that play pivotal roles in the topological aggregation. We evaluate TWP on different GNN backbones over several datasets and demonstrate that it yields performance superior to the state of the art. Code is publicly available at https://github.com/hhliu79/TWP.
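The abstract only sketches the mechanism: record how important each parameter is both to the previous task's loss and to the graph's topological aggregation, then penalize changes to important parameters when learning the next task. As a rough, hypothetical illustration of that general idea (an importance-weighted quadratic penalty in the spirit of EWC-style regularizers, augmented with a topology-derived importance term), here is a minimal PyTorch sketch. All names (estimate_importance, twp_penalty, topo_score, lambda_topo, strength) are assumptions for illustration and are not taken from the authors' released code.

```python
# Hypothetical sketch of a TWP-style regularizer; not the authors' implementation.
import torch

def estimate_importance(model, task_loss, topo_score, lambda_topo=0.01):
    """Squared-gradient importance of each parameter w.r.t. (i) the task loss and
    (ii) a scalar summary of the topological aggregation (e.g. summed attention
    coefficients of a GAT layer), computed after training on the current task."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_task = torch.autograd.grad(task_loss, params, retain_graph=True, allow_unused=True)
    g_topo = torch.autograd.grad(topo_score, params, retain_graph=True, allow_unused=True)
    importance = []
    for p, gt, gp in zip(params, g_task, g_topo):
        imp = torch.zeros_like(p)
        if gt is not None:
            imp = imp + gt.pow(2)                  # sensitivity of the task loss
        if gp is not None:
            imp = imp + lambda_topo * gp.pow(2)    # sensitivity of the topology term
        importance.append(imp.detach())
    return importance

def twp_penalty(model, old_params, importance, strength=10.0):
    """Quadratic penalty discouraging important parameters from drifting away
    from the values they held after the previous task."""
    params = [p for p in model.parameters() if p.requires_grad]
    penalty = torch.zeros((), device=params[0].device)
    for p, p_old, imp in zip(params, old_params, importance):
        penalty = penalty + (imp * (p - p_old).pow(2)).sum()
    return strength * penalty
```

In this sketch, training on a new task would minimize new_task_loss + twp_penalty(model, old_params, importance), where topo_score stands in for the kind of topology-aware signal (e.g. attention scores over neighbors) the abstract alludes to.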
