Title

Efficient Computation Reduction in Bayesian Neural Networks Through Feature Decomposition and Memorization

Authors

Jia, Xiaotao, Yang, Jianlei, Liu, Runze, Wang, Xueyan, Cotofana, Sorin Dan, Zhao, Weisheng

Abstract

The Bayesian method is capable of capturing real-world uncertainty/incompleteness and properly addressing the over-fitting issue faced by deep neural networks. In recent years, Bayesian Neural Networks (BNNs) have drawn tremendous attention from AI researchers and proved to be successful in many applications. However, their high computational complexity makes BNNs difficult to deploy in computing systems with a limited power budget. In this paper, an efficient BNN inference flow is proposed to reduce the computation cost and is then evaluated by means of both software and hardware implementations. A feature decomposition and memorization (\texttt{DM}) strategy is utilized to reformulate the BNN inference flow in a reduced manner. About half of the computations can be eliminated compared with the traditional approach, as demonstrated by theoretical analysis and software validation. Subsequently, in order to address hardware resource limitations, a memory-friendly computing framework is further deployed to reduce the memory overhead introduced by the \texttt{DM} strategy. Finally, we implement our approach in Verilog and synthesize it with 45 $nm$ FreePDK technology. Hardware simulation results on multi-layer BNNs demonstrate that, compared with the traditional BNN inference method, it provides an energy consumption reduction of 73\% and a 4$\times$ speedup at the expense of 14\% area overhead.
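
The abstract describes the \texttt{DM} saving only at a high level. The sketch below is a purely illustrative reading, not the paper's actual algorithm: assuming a Gaussian reparameterized weight posterior $w = \mu + \sigma \odot \epsilon$, the deterministic term $x\mu$ can be computed once and memorized, while only the stochastic term $x(\sigma \odot \epsilon)$ is recomputed for each Monte Carlo sample, which removes roughly half of the matrix-multiply work across the sampling loop. All names and dimensions are hypothetical.

```python
# Purely illustrative sketch of a DM-style Bayesian linear layer (hypothetical
# names; not the paper's implementation). Weights follow w = mu + sigma * eps.
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and posterior parameters.
in_dim, out_dim, n_samples = 8, 4, 10
mu = rng.normal(size=(in_dim, out_dim))             # posterior weight means
sigma = np.abs(rng.normal(size=(in_dim, out_dim)))  # posterior weight std devs
x = rng.normal(size=(1, in_dim))                    # one input feature vector
eps = [rng.standard_normal((in_dim, out_dim)) for _ in range(n_samples)]

# Traditional flow: sample the full weight matrix and redo the whole
# matrix multiplication for every Monte Carlo sample.
traditional = [x @ (mu + sigma * e) for e in eps]

# DM-style flow: the deterministic part x @ mu is computed once and
# memorized; only the stochastic part is recomputed per sample.
memorized_mean = x @ mu
dm = [memorized_mean + x @ (sigma * e) for e in eps]

# Both flows yield the same outputs (up to floating point), but the DM flow
# performs roughly half the matrix-multiply work across the sample loop.
print(np.allclose(traditional, dm))  # True
```

Under these toy assumptions, both flows produce identical Monte Carlo outputs while the DM-style flow reuses the memorized mean term across all samples, which is consistent with the roughly 50\% computation reduction claimed in the abstract.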
