Paper Title

What Do Graph Convolutional Neural Networks Learn?

Paper Authors

Bhasin, Sannat Singh, Holani, Vaibhav, Sanjanwala, Divij

Paper Abstract

Graph neural networks (GNNs) have gained traction over the past few years for their superior performance in numerous machine learning tasks. Graph Convolutional Neural Networks (GCNs) are a common variant of GNNs known for high performance in semi-supervised node classification (SSNC), and they work well under the assumption of homophily. Recent literature has highlighted that GCNs can achieve strong performance on heterophilous graphs under certain "special conditions". These arguments motivate us to understand why, and how, GCNs learn to perform SSNC. We find a positive correlation between the similarity of latent node embeddings of nodes within a class and the performance of a GCN. Our investigation of the underlying graph structures of a dataset finds that a GCN's SSNC performance is significantly influenced by the consistency and uniqueness of the neighborhood structure of nodes within a class.
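The abstract's central measurement — similarity of latent node embeddings within a class — can be made concrete with a small sketch. The following is an illustrative implementation (not the authors' code) that computes the mean pairwise cosine similarity among embeddings sharing a class label, using NumPy; the function name and the toy data are assumptions for demonstration.

```python
import numpy as np

def intra_class_similarity(embeddings, labels):
    """Mean pairwise cosine similarity among nodes that share a class label.

    embeddings: (n_nodes, dim) array of latent node embeddings.
    labels:     (n_nodes,) array of class labels.
    Returns a dict mapping each class to its mean off-diagonal cosine similarity.
    """
    # Normalize rows so that dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = {}
    for c in np.unique(labels):
        members = normed[labels == c]
        n = len(members)
        if n < 2:
            scores[c] = 1.0  # A singleton class has no pairs to compare.
            continue
        sims = members @ members.T  # All pairwise cosine similarities.
        # Average over off-diagonal entries, excluding each node's self-similarity.
        scores[c] = (sims.sum() - n) / (n * (n - 1))
    return scores

# Toy example: two tight clusters in a 2-D embedding space.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array([0, 0, 1, 1])
scores = intra_class_similarity(emb, lab)
```

Under the paper's finding, higher per-class scores from a measure like this should correlate with stronger SSNC accuracy for the GCN on that dataset.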
