Paper Title
Impact of L1 Batch Normalization on Analog Noise Resistant Property of Deep Learning Models

Authors

Omobayode Fagbohungbe, Lijun Qian

Abstract

Analog hardware has recently become a popular choice for machine learning on resource-constrained devices due to its fast execution and energy efficiency. However, the inherent presence of noise in analog hardware and the negative impact of that noise on deployed deep neural network (DNN) models limit their usage. The degradation in performance due to noise calls for novel designs of DNN models with excellent noise-resistant properties, leveraging the properties of the fundamental building blocks of DNN models. In this work, the use of the L1 or TopK BatchNorm type, a fundamental DNN model building block, in designing DNN models with excellent noise-resistant properties is proposed. Specifically, a systematic study has been carried out by training DNN models with the L1/TopK BatchNorm type, and their performance is compared with DNN models using the L2 BatchNorm type. The resulting models' noise-resistant property is tested by injecting additive noise into the model weights and evaluating the models' inference accuracy under that noise. The results show that the L1 and TopK BatchNorm types have excellent noise-resistant properties, and there is no sacrifice in performance due to changing the BatchNorm type from L2 to L1/TopK.
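The two ingredients of the study described in the abstract, an L1-style batch normalization (normalizing by mean absolute deviation instead of standard deviation) and additive weight-noise injection at a target signal-to-noise ratio, can be sketched in NumPy. This is a minimal illustration, not the paper's code: the function names are invented here, and the sqrt(pi/2) rescaling of the L1 statistic is a common convention assumed for matching the Gaussian standard deviation.

```python
import numpy as np


def batchnorm(x, norm_type="l2", eps=1e-5):
    """Normalize activations x of shape (batch, features).

    norm_type "l2": divide by the per-feature standard deviation
    (standard BatchNorm). norm_type "l1": divide by the per-feature
    mean absolute deviation, rescaled by sqrt(pi/2) so it estimates
    the std of a Gaussian (an assumed, conventional choice).
    """
    mu = x.mean(axis=0)
    centered = x - mu
    if norm_type == "l2":
        scale = np.sqrt((centered ** 2).mean(axis=0) + eps)
    elif norm_type == "l1":
        scale = np.sqrt(np.pi / 2) * np.abs(centered).mean(axis=0) + eps
    else:
        raise ValueError(f"unknown norm_type: {norm_type}")
    return centered / scale


def inject_weight_noise(w, snr_db, rng):
    """Return w perturbed by zero-mean Gaussian noise at the given
    signal-to-noise ratio in dB, mimicking analog-hardware weight
    perturbation as in the noise-injection test described above."""
    signal_power = (w ** 2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return w + rng.normal(0.0, np.sqrt(noise_power), size=w.shape)
```

In the study's protocol, a trained model's weights would be passed through something like `inject_weight_noise` at several noise levels, and inference accuracy would be re-measured for each BatchNorm variant.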