Paper Title

Gluing Neural Networks Symbolically Through Hyperdimensional Computing

Paper Authors

Peter Sutor, Dehao Yuan, Douglas Summers-Stay, Cornelia Fermuller, Yiannis Aloimonos

Paper Abstract

Hyperdimensional Computing affords simple yet powerful operations for creating long Hyperdimensional Vectors (hypervectors) that can efficiently encode information, be used for learning, and are dynamic enough to be modified on the fly. In this paper, we explore the notion of using binary hypervectors to directly encode the final, classifying output signals of neural networks in order to fuse differing networks together at the symbolic level. This allows multiple neural networks to work together to solve a problem with little additional overhead. Output signals just before classification are encoded as hypervectors and bundled together through consensus summation to train a classification hypervector. This process can be performed iteratively, and even on a single neural network, by instead forming a consensus of multiple classification hypervectors. We find that this approach outperforms the state of the art, or is on par with it, while using very little overhead, since hypervector operations are extremely fast and efficient compared to the neural networks themselves. The consensus process can learn online and can even add or remove models in real time. Hypervectors act as memories that can be stored, and even further bundled together over time, affording lifelong learning capabilities. Additionally, this consensus structure inherits the benefits of Hyperdimensional Computing without sacrificing the performance of modern machine learning. The technique can be extrapolated to virtually any neural model and requires little modification to employ: one simply records the networks' output signals when they are presented with a test example.
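
To make the pipeline described above concrete, below is a minimal NumPy sketch. The abstract only specifies the overall flow: encode pre-classification output signals as binary hypervectors, bundle them per class by consensus summation, and classify by nearest class hypervector. Everything else here is an illustrative assumption, including the dimensionality D, the signal width SIGNAL_DIM, the random-projection encoder, and the use of bipolar {+1, -1} vectors (equivalent to binary vectors under Hamming distance) rather than 0/1 bits.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000         # hypervector dimensionality (assumed; the paper only says "long")
SIGNAL_DIM = 512   # width of a network's pre-classification output layer (assumed)

# Fixed random bipolar projection for one network. Encoding a real-valued
# signal by random projection plus a sign threshold is an assumption for
# illustration; the paper states only that signals become binary hypervectors.
projection = rng.choice([-1, 1], size=(SIGNAL_DIM, D))

def encode(signal: np.ndarray) -> np.ndarray:
    """Encode one pre-classification output signal as a bipolar hypervector."""
    return np.where(signal @ projection >= 0, 1, -1)

def bundle(hypervectors: np.ndarray) -> np.ndarray:
    """Consensus summation: elementwise majority vote over a stack of
    bipolar hypervectors (ties broken toward +1)."""
    return np.where(hypervectors.sum(axis=0) >= 0, 1, -1)

def train_class_hypervectors(signals, labels, num_classes):
    """One classification hypervector per class, bundled from the encoded
    training signals belonging to that class."""
    encoded = np.stack([encode(s) for s in signals])
    labels = np.asarray(labels)
    return np.stack([bundle(encoded[labels == c]) for c in range(num_classes)])

def classify(signal, class_hvs) -> int:
    """Nearest class hypervector by dot-product similarity, which for
    bipolar vectors is equivalent to minimum Hamming distance."""
    return int(np.argmax(class_hvs @ encode(signal)))
```

Because bundling is just another consensus summation, class hypervectors trained from different networks can themselves be bundled per class to fuse the networks, and a model can be added or dropped by re-bundling, without retraining any network.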
