Paper Title

A memristive deep belief neural network based on silicon synapses

Authors

Wei Wang, Loai Danial, Yang Li, Eric Herbelin, Evgeny Pikhay, Yakov Roizin, Barak Hoffer, Zhongrui Wang, Shahar Kvatinsky

Abstract

Memristor-based neuromorphic computing could overcome the limitations of traditional von Neumann computing architectures -- in which data are shuffled between separate memory and processing units -- and improve the performance of deep neural networks. However, this will require accurate synaptic-like device performance, and memristors typically suffer from poor yield and a limited number of reliable conductance states. Here we report floating gate memristive synaptic devices that are fabricated in a commercial complementary metal-oxide-semiconductor (CMOS) process. These silicon synapses offer analogue tunability, high endurance, long retention times, predictable cycling degradation, moderate device-to-device variations, and high yield. They also provide two orders of magnitude higher energy efficiency for multiply-accumulate operations than graphics processing units. We use two 12-by-8 arrays of the memristive devices for in-situ training of a 19-by-8 memristive restricted Boltzmann machine for pattern recognition via a gradient descent algorithm based on contrastive divergence. We then create a memristive deep belief neural network consisting of three memristive restricted Boltzmann machines. We test this on the modified National Institute of Standards and Technology (MNIST) dataset, demonstrating recognition accuracy up to 97.05%.
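The abstract states that the restricted Boltzmann machines were trained in situ with a gradient descent algorithm based on contrastive divergence. As an illustrative sketch only (not the paper's memristive implementation), the following NumPy code shows one-step contrastive divergence (CD-1) for a software RBM with the 19-by-8 dimensions mentioned above; the learning rate, random data batch, and initialization are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions match the 19-by-8 RBM in the abstract; all other values are illustrative.
n_visible, n_hidden = 19, 8
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # weight matrix
b_v = np.zeros(n_visible)                      # visible-unit biases
b_h = np.zeros(n_hidden)                       # hidden-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """Apply one CD-1 gradient step from a batch of binary visible vectors v0."""
    global W, b_v, b_h
    # Positive phase: hidden activations driven by the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # stochastic hidden sample
    # Negative phase: one Gibbs step back to the visible layer, then to hidden.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Contrastive-divergence estimate of the log-likelihood gradient.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# Hypothetical binary training batch (16 patterns of 19 bits).
batch = (rng.random((16, n_visible)) < 0.5).astype(float)
for _ in range(100):
    cd1_update(batch)
```

In the hardware version described by the paper, the weight updates computed this way are applied as conductance changes in the floating-gate memristive arrays rather than to a software matrix.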
