Paper Title
When Does Self-Supervision Help Graph Convolutional Networks?
Paper Authors
Paper Abstract
Self-supervision as an emerging technique has been employed to train convolutional neural networks (CNNs) for more transferable, generalizable, and robust representation learning of images. However, its introduction to graph convolutional networks (GCNs), which operate on graph data, has rarely been explored. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into GCNs. We first elaborate on three mechanisms for incorporating self-supervision into GCNs, analyze the limitations of pretraining & finetuning and of self-training, and then focus on multi-task learning. Moreover, we propose three novel self-supervised learning tasks for GCNs and investigate them with theoretical rationales and numerical comparisons. Lastly, we further integrate multi-task self-supervision into graph adversarial training. Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining more generalizability and robustness. Our code is available at https://github.com/Shen-Lab/SS-GCNs.
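
To make the multi-task mechanism highlighted in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation (see the repository above for that). It trains a shared GCN trunk with two heads on a joint objective L = L_target + lam * L_ss, using a hypothetical feature-masking reconstruction as the self-supervised pretext task. The lam value, the masking rate, the toy graph, and all names (GCNLayer, MultiTaskGCN, etc.) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W, with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class MultiTaskGCN(nn.Module):
    """Shared GCN trunk with two heads: the target task (node classification)
    and a hypothetical self-supervised head that reconstructs masked features."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.trunk = GCNLayer(in_dim, hid_dim)
        self.cls_head = GCNLayer(hid_dim, n_classes)  # target-task head
        self.ss_head = GCNLayer(hid_dim, in_dim)      # pretext-task head

    def forward(self, a_hat, x):
        h = F.relu(self.trunk(a_hat, x))
        return self.cls_head(a_hat, h), self.ss_head(a_hat, h)

def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

# Toy graph: 5 nodes, 8 input features, 3 classes (all values illustrative).
n, f, c = 5, 8, 3
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()  # make the adjacency symmetric
x = torch.randn(n, f)
y = torch.randint(0, c, (n,))
train_mask = torch.tensor([1, 1, 0, 0, 1], dtype=torch.bool)

a_hat = normalize_adj(adj)
model = MultiTaskGCN(f, 16, c)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 0.5  # assumed weight on the self-supervised loss

for _ in range(100):
    # Pretext task: zero out some node features and ask ss_head to reconstruct them.
    mask = torch.rand(n) < 0.3
    mask[0] = True  # ensure at least one masked node
    x_in = x.clone()
    x_in[mask] = 0.0
    logits, recon = model(a_hat, x_in)
    # Joint objective: target-task loss plus weighted self-supervised loss.
    loss = (F.cross_entropy(logits[train_mask], y[train_mask])
            + lam * F.mse_loss(recon[mask], x[mask]))
    opt.zero_grad()
    loss.backward()
    opt.step()

Per the abstract, this joint-training setup is the multi-task alternative the paper contrasts with pretraining & finetuning (where the pretext objective is optimized first and then discarded) and with self-training; here both losses shape the shared trunk throughout training.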