Paper title
Relating graph auto-encoders to linear models
Paper authors
Paper abstract
Graph auto-encoders are widely used to construct graph representations in Euclidean vector spaces. However, it has already been pointed out empirically that linear models can outperform graph auto-encoders on many tasks. In our work, we prove that the solution space induced by graph auto-encoders is a subset of the solution space of a linear map. This demonstrates that linear embedding models have at least the representational power of graph auto-encoders based on graph convolutional networks. So why are we still using nonlinear graph auto-encoders? One reason could be that actively restricting the linear solution space might introduce an inductive bias that helps improve learning and generalization. While many researchers believe that the nonlinearity of the encoder is the critical ingredient to this end, we instead identify the node features of the graph as a more powerful inductive bias. We give theoretical insights by introducing a corresponding bias in a linear model and analyzing the change in the solution space. Our experiments are aligned with other empirical work on this question and show that a linear encoder can outperform a nonlinear encoder when feature information is used.
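To make the contrast in the abstract concrete, below is a minimal sketch of the two encoder families being compared: a standard two-layer GCN encoder with a nonlinearity, and a linear encoder that uses the same normalized adjacency and node features but no nonlinearity, both followed by the usual inner-product decoder. This is not the paper's own code; the function names, the two-layer depth, and the tiny example graph are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_encode(A_hat, X, W0, W1):
    # Nonlinear GAE encoder (two GCN layers): Z = A_hat * ReLU(A_hat * X * W0) * W1
    H = np.maximum(A_hat @ X @ W0, 0.0)
    return A_hat @ H @ W1

def linear_encode(A_hat, X, W):
    # Linear encoder with the same feature propagation, no nonlinearity: Z = A_hat * X * W
    return A_hat @ X @ W

def inner_product_decode(Z):
    # Inner-product decoder: reconstruct edge probabilities as sigmoid(Z Z^T)
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Tiny example: a 4-node path graph with random node features (hypothetical data)
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
A_hat = normalize_adjacency(A)

Z_nonlinear = gcn_encode(A_hat, X, rng.normal(size=(3, 8)), rng.normal(size=(8, 2)))
Z_linear = linear_encode(A_hat, X, rng.normal(size=(3, 2)))
print(inner_product_decode(Z_linear).round(2))
```

In this framing, the abstract's claim is that the embeddings reachable by `gcn_encode` form a subset of those reachable by a linear map, and that the node features `X`, rather than the ReLU, are the more powerful inductive bias.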