Paper Title
Label-invariant Augmentation for Semi-Supervised Graph Classification
Paper Authors
Paper Abstract
Recently, contrastive-learning-based augmentation has surged to a new climax in the computer vision domain, where operations such as rotation, cropping, and flipping, combined with dedicated algorithms, dramatically improve model generalization and robustness. Following this trend, some pioneering attempts apply similar ideas to graph data. Nevertheless, unlike images, it is much more difficult to design reasonable augmentations for graphs without changing their nature. Although exciting, current graph contrastive learning does not achieve performance as promising as visual contrastive learning. We conjecture that its performance may be limited by violations of the label-invariant augmentation assumption. In light of this, we propose a label-invariant augmentation for graph-structured data to address this challenge. Unlike node/edge modification and subgraph extraction, we perform the augmentation in the representation space and generate augmented samples in the most difficult direction while keeping their labels the same as those of the original samples. In the semi-supervised scenario, we demonstrate that our proposed method outperforms classical graph neural network based methods and recent graph contrastive learning on eight benchmark graph-structured datasets, followed by several in-depth experiments that further explore label-invariant augmentation from several aspects.
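To make the mechanism described above concrete, below is a minimal PyTorch-style sketch of the idea, not the authors' implementation: it assumes a generic graph encoder and classifier, and the names label_invariant_augment, training_step, epsilon, and lam are illustrative. The "most difficult direction" is approximated by a gradient-ascent step on the classification loss in representation space, and label invariance is enforced here by one plausible choice, a KL-divergence consistency term between predictions on the original and augmented representations.

# Hypothetical sketch of label-invariant augmentation in representation space.
# All names (encoder, classifier, epsilon, lam) are illustrative assumptions,
# not the authors' exact formulation.
import torch
import torch.nn.functional as F


def label_invariant_augment(encoder, classifier, x, y, epsilon=0.1):
    """Perturb graph-level representations toward the hardest direction.

    encoder    : maps raw graph inputs to representations z (assumed given)
    classifier : maps z to class logits
    x, y       : a batch of labeled graphs and their labels
    epsilon    : perturbation budget in representation space (assumed hyperparameter)
    """
    z = encoder(x)                               # graph-level representations
    z_adv = z.detach().clone().requires_grad_(True)

    # "Most difficult direction": the gradient that increases the classification loss.
    loss = F.cross_entropy(classifier(z_adv), y)
    grad, = torch.autograd.grad(loss, z_adv)
    delta = epsilon * F.normalize(grad, dim=-1)

    return z, z + delta                          # original and augmented representations


def training_step(encoder, classifier, x, y, lam=1.0):
    z, z_aug = label_invariant_augment(encoder, classifier, x, y)
    logits, logits_aug = classifier(z), classifier(z_aug)

    # Supervised loss on the original samples plus a label-invariance term:
    # the augmented sample should keep the same predicted label distribution,
    # with the original prediction treated as a fixed target.
    sup = F.cross_entropy(logits, y)
    consistency = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                           F.softmax(logits, dim=-1).detach(),
                           reduction="batchmean")
    return sup + lam * consistency

In a semi-supervised setting, the supervised and consistency terms would apply to the labeled graphs, while unlabeled graphs could contribute through an unsupervised (e.g., contrastive) objective; how these parts are weighted is left open in this sketch.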