Paper Title
Two-stage Training of Graph Neural Networks for Graph Classification
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have received massive attention in the field of machine learning on graphs. Inspired by the success of neural networks, a line of research has been conducted to train GNNs for various tasks, such as node classification, graph classification, and link prediction. In this work, our task of interest is graph classification. Several GNN models have been proposed and have demonstrated high accuracy on this task. However, it remains unclear whether the usual training methods fully realize the capacity of these GNN models. In this work, we propose a two-stage training framework based on the triplet loss. In the first stage, a GNN is trained to map each graph to a vector in Euclidean space so that graphs of the same class are mapped close together while graphs of different classes are mapped far apart. Once the graphs are well separated by label, a classifier is trained to distinguish between the classes. The method is generic in the sense that it is compatible with any GNN model. By adapting five GNN models to our method, we demonstrate consistent improvements in accuracy and in the utilization of each GNN's allocated capacity over each model's original training method, by up to 5.4 percentage points across 12 datasets.
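The first-stage objective described in the abstract can be sketched with a standard triplet loss. The following minimal Python sketch is illustrative only: `embed` stands in for an arbitrary GNN encoder, and the names `anchor`, `positive`, `negative`, and `margin` are generic triplet-loss conventions, not identifiers from the paper.

```python
# Minimal sketch of the triplet loss used in stage one.
# The embeddings here would come from a GNN mapping each graph to a
# Euclidean-space vector; we operate on plain lists of floats for clarity.

def euclidean_dist(u, v):
    """Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward a same-class graph (positive) and push it
    away from a different-class graph (negative) by at least `margin`."""
    d_pos = euclidean_dist(anchor, positive)
    d_neg = euclidean_dist(anchor, negative)
    return max(d_pos - d_neg + margin, 0.0)
```

The loss is zero once same-class pairs are closer than different-class pairs by the margin, which is the "well separated by label" condition after which the second-stage classifier is trained on the frozen embeddings.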