Paper Title

Towards Fully 8-bit Integer Inference for the Transformer Model

Paper Authors

Ye Lin, Yanyang Li, Tengbo Liu, Tong Xiao, Tongran Liu, Jingbo Zhu

Paper Abstract

8-bit integer inference, a promising direction for reducing both the latency and storage of deep neural networks, has made great progress recently. However, previous systems still rely on 32-bit floating point for certain functions in complex models (e.g., Softmax in the Transformer) and make heavy use of quantization and de-quantization. In this work, we show that after a principled modification of the Transformer architecture, dubbed the Integer Transformer, an (almost) fully 8-bit integer inference algorithm, Scale Propagation, can be derived. De-quantization is applied only when necessary, which makes the network more efficient. Our experiments on the WMT16 En<->Ro, WMT14 En<->De, and En->Fr translation tasks, as well as the WikiText-103 language modelling task, show that the fully 8-bit Transformer system achieves performance comparable to the floating-point baseline while requiring a nearly 4x smaller memory footprint.
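
The abstract rests on two ideas: mapping float tensors to 8-bit integers with an accompanying scale, and propagating those scales through integer arithmetic so that de-quantization can be deferred until a float value is actually needed. The NumPy sketch below illustrates this general pattern only; it is not the paper's implementation, and the symmetric per-tensor scheme and the names quantize, dequantize, and int8_matmul are illustrative assumptions.

```python
# Minimal sketch of symmetric 8-bit quantization and of how scales can
# propagate through an integer matrix multiply. Illustration only, not the
# paper's Scale Propagation algorithm; all names and the per-tensor
# symmetric scheme are assumptions made for this example.
import numpy as np

def quantize(x: np.ndarray):
    """Map a float tensor to int8 with a single per-tensor scale."""
    scale = np.abs(x).max() / 127.0 + 1e-12   # small epsilon avoids division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 values and a scale."""
    return q.astype(np.float32) * scale

def int8_matmul(qa, sa, qb, sb):
    """Integer matmul: accumulate in int32 and carry the product of scales.

    Returning (int32 tensor, combined scale) instead of floats is what lets
    a network defer de-quantization until a float result is truly needed.
    """
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # int32 avoids overflow
    return acc, sa * sb

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((4, 8)), rng.standard_normal((8, 4))
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc, s = int8_matmul(qa, sa, qb, sb)
    approx = acc.astype(np.float32) * s        # de-quantize only at the end
    print("max abs error:", np.abs(approx - a @ b).max())
```

Keeping intermediate results as (integer tensor, scale) pairs is the flavor of scale tracking the abstract alludes to: the bulk of the arithmetic stays in integers, and only the final read-out pays for a floating-point multiply.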
