Paper Title
Exploiting Kernel Compression on BNNs
Paper Authors
Abstract
Binary Neural Networks (BNNs) have shown tremendous success on realistic image classification tasks. Notably, their accuracy is similar to the state-of-the-art accuracy obtained by full-precision models tailored to edge devices. In this regard, BNNs are very amenable to edge devices since they use a single bit to store each input and weight, and thus their storage requirements are low. Also, BNN computations are mainly done using XNOR and popcount operations, which can be implemented very efficiently with simple hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is far from trivial, since their benefits are hindered by frequent memory accesses to load weights and inputs. In BNNs, a weight or an input is stored using one bit, and to increase storage and computation efficiency, several of them are packed together as a sequence of bits. In this work, we observe that the number of unique sequences representing a set of weights is typically low. Also, we have seen that during the evaluation of a BNN layer, a small group of unique sequences is employed much more frequently than the others. Accordingly, we propose to exploit this observation by using Huffman encoding to encode the bit sequences, and an indirection table to decode them during BNN evaluation. In addition, we propose a clustering scheme that identifies the most common bit sequences and replaces the less common ones with similar common sequences. Hence, we decrease the storage requirements and memory accesses, since common sequences are encoded with fewer bits. We extend a mobile CPU with a small hardware structure that can efficiently cache and decode the compressed sequences of bits. We evaluate our scheme using the ReActNet model on the ImageNet dataset. Our experimental results show that our technique reduces memory requirements by 1.32x and improves performance by 1.35x.
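The abstract notes that BNN computations reduce to XNOR and popcount operations. The following is a minimal sketch of why that is: with weights and inputs in {-1, +1} packed as bits (1 for +1, 0 for -1), a dot product counts bit agreements minus disagreements. The function name `bnn_dot` and the small vector width are illustrative only; real kernels operate on 32- or 64-bit packed words.

```python
def bnn_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed into ints.

    Bit convention: 1 encodes +1, 0 encodes -1.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask      # 1 wherever the two bits agree
    matches = bin(xnor).count("1")        # popcount of the agreement mask
    return 2 * matches - n                # agreements minus disagreements

# Example: a = [+1, -1, +1, +1] -> 0b1011, b = [+1, +1, -1, +1] -> 0b1101
# Elementwise products: +1, -1, -1, +1, so the dot product is 0.
print(bnn_dot(0b1011, 0b1101, 4))  # -> 0
```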
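The core compression idea — Huffman-encode the packed bit sequences so that frequent sequences get short codes, then decode them through an indirection table during evaluation — can be sketched in software as follows. The 4-bit sequence width, the skewed frequencies, and the helper name `build_huffman_codes` are assumptions for illustration; the paper implements the decode path in a small hardware structure, not in Python.

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_codes(sequences):
    """Map each unique bit-sequence to a Huffman code (shorter for common ones)."""
    freq = Counter(sequences)
    if len(freq) == 1:                     # degenerate case: one unique sequence
        return {next(iter(freq)): "0"}
    tiebreak = count()                     # keeps heap tuples comparable
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical 4-bit weight sequences from one layer; one sequence dominates,
# mirroring the observation that a few unique sequences appear most often.
weights = [0b1010] * 6 + [0b1111] * 2 + [0b0001] + [0b0110]
codes = build_huffman_codes(weights)
decode_table = {code: seq for seq, code in codes.items()}  # indirection table

encoded = "".join(codes[w] for w in weights)

# Decoding during evaluation: walk the bitstream and look up each prefix-free
# code in the indirection table to recover the original packed sequence.
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in decode_table:
        decoded.append(decode_table[buf])
        buf = ""

assert decoded == weights                  # lossless round trip
print(len(encoded), "bits vs", 4 * len(weights), "uncompressed")
```

Because the dominant sequence receives a 1-bit code, the 40 uncompressed bits shrink to 16 here; the achievable ratio depends entirely on how skewed the sequence frequencies are.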
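The clustering step can be approximated in the same spirit: keep only the `keep` most frequent sequences and map each rare sequence to its nearest frequent one by Hamming distance. This is a lossy, hedged sketch of the concept only; the paper's actual clustering scheme and similarity criterion may differ, and `cluster_sequences` is a hypothetical helper name.

```python
from collections import Counter

def cluster_sequences(sequences, keep):
    """Replace rare bit-sequences with the closest common one (Hamming distance)."""
    common = [s for s, _ in Counter(sequences).most_common(keep)]

    def nearest(seq):
        # Hamming distance between packed sequences = popcount of their XOR.
        return min(common, key=lambda c: bin(seq ^ c).count("1"))

    return [s if s in common else nearest(s) for s in sequences]

# One rare sequence (0b1011) is absorbed into the dominant cluster (0b1010).
print(cluster_sequences([0b1010] * 5 + [0b1011], keep=1))
```

After this substitution, every stored sequence is one of the `keep` common ones, so the subsequent Huffman coding needs fewer, shorter codes, at the cost of a small approximation in the weights.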