Paper Title

In-Sensor & Neuromorphic Computing are all you need for Energy Efficient Computer Vision

Paper Authors

Gourav Datta, Zeyu Liu, Md Abdullah-Al Kaiser, Souvik Kundu, Joe Mathai, Zihan Yin, Ajey P. Jacob, Akhilesh R. Jaiswal, Peter A. Beerel

Paper Abstract

Due to the high activation sparsity and use of accumulates (AC) instead of expensive multiply-and-accumulates (MAC), neuromorphic spiking neural networks (SNNs) have emerged as a promising low-power alternative to traditional DNNs for several computer vision (CV) applications. However, most existing SNNs require multiple time steps for acceptable inference accuracy, hindering real-time deployment and increasing spiking activity and, consequently, energy consumption. Recent works proposed direct encoding, which feeds the analog pixel values directly into the first layer of the SNN to significantly reduce the number of time steps. Although the overhead of the first-layer MACs with direct encoding is negligible for deep SNNs, and CV processing with SNNs is efficient, the data transfer between the image sensor and the downstream processing consumes significant bandwidth and may dominate the total energy. To mitigate this concern, we propose an in-sensor computing hardware-software co-design framework for SNNs targeting image recognition tasks. Our approach reduces the bandwidth between sensing and processing by 12-96x and the resulting total energy by 2.32x compared to traditional CV processing, with a 3.8% reduction in accuracy on ImageNet.
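
The direct-encoding idea in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation; the layer sizes, firing threshold, and time_steps=4 are assumptions chosen for readability. It shows the key property of direct encoding: the analog image drives the first convolution (MACs) at every time step, while all downstream layers receive only binary spikes, so their operations reduce to accumulates (ACs).

```python
# Minimal sketch of a directly encoded SNN (illustrative only, not the paper's code).
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Integrate-and-fire neuron with a hard reset; state lives per forward pass."""
    def __init__(self, threshold=1.0):
        super().__init__()
        self.threshold = threshold
        self.v = None  # membrane potential

    def reset(self):
        self.v = None

    def forward(self, current):
        if self.v is None:
            self.v = torch.zeros_like(current)
        self.v = self.v + current                    # accumulate input current
        spikes = (self.v >= self.threshold).float()  # binary spike output
        self.v = self.v * (1.0 - spikes)             # hard reset where spikes fired
        return spikes

class DirectEncodedSNN(nn.Module):
    """Toy SNN: the analog image is fed to the first layer at every time step."""
    def __init__(self, num_classes=10, time_steps=4):
        super().__init__()
        self.T = time_steps
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)   # MACs: real-valued pixel input
        self.lif1 = LIFNeuron()
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)  # effectively ACs: binary spike input
        self.lif2 = LIFNeuron()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, image):
        self.lif1.reset()
        self.lif2.reset()
        logits = 0.0
        for _ in range(self.T):                  # same analog frame fed at every step
            s1 = self.lif1(self.conv1(image))    # direct encoding: no Poisson/rate coding
            s2 = self.lif2(self.conv2(s1))
            logits = logits + self.fc(self.pool(s2).flatten(1))
        return logits / self.T                   # average output over the T time steps

logits = DirectEncodedSNN()(torch.rand(1, 3, 32, 32))  # e.g. one 32x32 RGB frame
print(logits.shape)  # torch.Size([1, 10])
```

This forward-only sketch omits training (SNNs of this kind are commonly trained with surrogate gradients or ANN-to-SNN conversion); it is only meant to show why, with direct encoding, only the first layer needs MACs on analog pixels while later layers operate on binary spikes.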
