Paper Title

At-Scale Sparse Deep Neural Network Inference with Efficient GPU Implementation

Authors

Hidayetoglu, Mert, Pearson, Carl, Mailthody, Vikram Sharma, Ebrahimi, Eiman, Xiong, Jinjun, Nagi, Rakesh, Hwu, Wen-Mei

Abstract

This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020. Demands for network quality have increased rapidly, pushing the size and thus the memory requirements of many neural networks beyond the capacity of available accelerators. Sparse deep neural networks (SpDNN) have shown promise for reining in the memory footprint of large neural networks. However, there is room for improvement in implementing SpDNN operations on GPUs. This work presents optimized sparse matrix multiplication kernels fused with the ReLU function. The optimized kernels reuse input feature maps from shared memory and sparse weights from registers. For multi-GPU parallelism, our SpDNN implementation duplicates weights and statically partitions the feature maps across GPUs. Results for the challenge benchmarks show that the proposed kernel design and multi-GPU parallelization achieve up to 180 tera-edges per second inference throughput. These results are up to 4.3x faster for a single GPU and an order of magnitude faster at full scale than those of the champion of the 2019 Sparse Deep Neural Network Graph Challenge for the same generation of NVIDIA V100 GPUs. Using the same implementation, we also show that single-GPU throughput on NVIDIA A100 is 2.37$\times$ faster than on V100.
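To illustrate the core idea of a sparse matrix multiplication kernel fused with ReLU, the following is a minimal CUDA sketch, not the authors' implementation. It assumes the sparse weight matrix is stored per output neuron in a CSC-like layout (`w_ptr`, `w_idx`, `w_val`, all hypothetical names) and omits the shared-memory staging of feature maps and register-held weights that the paper's optimized kernels use.

```cuda
#include <cuda_runtime.h>

// Fused sparse layer: out[i][j] = ReLU( sum_k in[i][k] * W[k][j] + bias )
// in, out : dense feature maps of shape (batch x neurons), row-major
// w_ptr   : size neurons+1, column offsets into w_idx / w_val
// w_idx   : input-neuron index of each nonzero weight
// w_val   : value of each nonzero weight
__global__ void spmm_relu(const float* in, const int* w_ptr, const int* w_idx,
                          const float* w_val, float bias,
                          float* out, int batch, int neurons) {
  int i = blockIdx.y;                              // one sample (feature-map row) per block row
  int j = blockIdx.x * blockDim.x + threadIdx.x;   // one output neuron per thread
  if (i >= batch || j >= neurons) return;

  float acc = bias;
  for (int p = w_ptr[j]; p < w_ptr[j + 1]; ++p)
    acc += in[i * neurons + w_idx[p]] * w_val[p];  // gather over nonzero weights only

  out[i * neurons + j] = fmaxf(acc, 0.0f);         // ReLU fused into the same kernel
}
```

Fusing the activation into the multiplication kernel avoids writing the pre-activation result to global memory and reading it back, which is one source of the throughput gains the abstract describes.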
