Paper Title
Neural Network Activation Quantization with Bitwise Information Bottlenecks
Paper Authors
Paper Abstract
Recent research on the information bottleneck sheds new light on continuing attempts to open the black box of neural signal encoding. Inspired by the problem of lossy signal compression in wireless communication, this paper presents a Bitwise Information Bottleneck approach for quantizing and encoding neural network activations. Based on rate-distortion theory, the Bitwise Information Bottleneck attempts to determine the most significant bits in the activation representation by assigning and approximating a sparse coefficient for each bit. Under the constraint of a limited average code rate, the information bottleneck minimizes rate-distortion for optimal activation quantization in a flexible, layer-by-layer manner. Experiments on ImageNet and other datasets show that, by minimizing the quantization rate-distortion of each layer, neural networks with information bottlenecks achieve state-of-the-art accuracy with low-precision activations. Meanwhile, by reducing the code rate, the proposed method improves memory and computational efficiency more than six-fold compared with a deep neural network using standard single-precision representation. Code will be available on GitHub upon acceptance of the paper: \url{https://github.com/BitBottleneck/PublicCode}.
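To make the bit-plane idea concrete, below is a minimal NumPy sketch, not the authors' implementation, of decomposing uniformly quantized activations into binary bit planes and reconstructing them from only the most significant ones. The paper assigns and approximates a learned sparse coefficient for each bit via rate-distortion minimization; this sketch substitutes fixed power-of-two coefficients as a stand-in, and the function names (`bitwise_decompose`, `approximate_with_significant_bits`), the uniform quantizer, and the choice of parameters are all assumptions for illustration.

```python
import numpy as np

def bitwise_decompose(x, n_bits=8):
    """Decompose non-negative activations into binary bit planes.

    x is uniformly quantized to n_bits fixed-point levels; the returned
    array has shape (n_bits, *x.shape), where plane k holds the k-th
    binary digit, so x_q = sum_k (2**k * step) * bits[k].
    """
    step = x.max() / (2 ** n_bits - 1)          # uniform quantization step (assumed scheme)
    q = np.round(x / step).astype(np.int64)     # integer codes in [0, 2**n_bits - 1]
    bits = np.stack([(q >> k) & 1 for k in range(n_bits)])
    return bits, step

def approximate_with_significant_bits(x, n_bits=8, keep=4):
    """Reconstruct x from only the `keep` most significant bit planes.

    Here each bit plane k carries the fixed coefficient 2**k * step
    (the paper instead learns a sparse coefficient per bit). Dropping
    low-order planes lowers the code rate at the cost of distortion,
    illustrating the rate-distortion trade-off the abstract describes.
    """
    bits, step = bitwise_decompose(x, n_bits)
    coeffs = np.array([2.0 ** k * step for k in range(n_bits)])
    order = np.argsort(coeffs)[::-1][:keep]     # indices of the most significant planes
    return sum(coeffs[k] * bits[k] for k in order)

# Example: distortion (MSE) grows as fewer bit planes are kept.
x = np.random.rand(1000).astype(np.float32)
for keep in (8, 4, 2):
    x_hat = approximate_with_significant_bits(x, n_bits=8, keep=keep)
    print(f"bits kept: {keep}, MSE: {np.mean((x - x_hat) ** 2):.6f}")
```

Under these assumptions, halving the number of retained bit planes roughly halves the per-activation code rate while increasing reconstruction error, which is the trade-off the layer-wise bottleneck is said to optimize.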