Paper Title

Binary Graph Neural Networks

Authors

Mehdi Bahri, Gaétan Bahl, Stefanos Zafeiriou

Abstract

Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data. As they generalize the operations of classical CNNs on grids to arbitrary topologies, GNNs also bring many of the implementation challenges of their Euclidean counterparts. Model size, memory footprint, and energy consumption are common concerns for many real-world applications. Network binarization allocates a single bit to parameters and activations, thus dramatically reducing the memory requirements (up to 32x compared to single-precision floating-point numbers) and maximizing the benefits of fast SIMD instructions on modern hardware for measurable speedups. However, in spite of the large body of work on binarization for classical CNNs, this area remains largely unexplored in geometric deep learning. In this paper, we present and evaluate different strategies for the binarization of graph neural networks. We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks. In particular, we present the first dynamic graph neural network in Hamming space, able to leverage efficient k-NN search on binary vectors to speed up the construction of the dynamic graph. We further verify that the binary models offer significant savings on embedded devices. Our code is publicly available on GitHub.
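The abstract's key efficiency claim is that k-NN search over binary feature vectors is cheap: in Hamming space, the distance between two codes is just XOR followed by a popcount, so dynamic-graph construction avoids floating-point distance computations. The sketch below is not from the paper's codebase; it is a minimal NumPy illustration of that idea, with the packing scheme (`pack_binary`) and the lookup-table popcount being assumptions chosen for clarity rather than the authors' implementation.

```python
import numpy as np

def pack_binary(x):
    """Pack a matrix of {-1, +1} features into uint8 bit-words,
    one bit per feature (sign > 0 maps to bit 1)."""
    bits = (x > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)

# Popcount lookup table: number of set bits for each possible byte value.
_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

def hamming_knn(queries, database, k):
    """k-NN in Hamming space.

    queries:  (q, B) uint8 packed codes
    database: (n, B) uint8 packed codes
    Returns a (q, k) array of nearest-neighbor indices. The Hamming
    distance is computed as popcount(XOR), with no floating-point math.
    """
    xor = queries[:, None, :] ^ database[None, :, :]   # (q, n, B)
    dists = _POPCOUNT[xor].sum(axis=-1)                # (q, n)
    return np.argsort(dists, axis=-1)[:, :k]
```

For example, a query identical to a database row has distance 0 and is returned first; rows differing in more bits rank later. Real systems would use hardware `popcnt` instructions (or libraries such as FAISS with binary indexes) rather than a byte lookup table, but the arithmetic is the same.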
