Paper Title
C3-SL: Circular Convolution-Based Batch-Wise Compression for Communication-Efficient Split Learning
Paper Authors
Paper Abstract
Most existing studies improve the efficiency of split learning (SL) by compressing the transmitted features. However, most works focus on dimension-wise compression, which transforms high-dimensional features into a low-dimensional space. In this paper, we propose circular convolution-based batch-wise compression for SL (C3-SL) to compress multiple features into a single feature. To avoid information loss while merging multiple features, we exploit the quasi-orthogonality of features in high-dimensional space with circular convolution and superposition. To the best of our knowledge, we are the first to explore the potential of batch-wise compression under the SL scenario. Based on the simulation results on CIFAR-10 and CIFAR-100, our method achieves a 16x compression ratio with a negligible accuracy drop compared with vanilla SL. Moreover, C3-SL reduces memory and computation overhead by 1152x and 2.25x, respectively, compared to the state-of-the-art dimension-wise compression method.
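The abstract's core idea, binding each feature in a batch to a quasi-orthogonal key via circular convolution and superposing the results into one vector, follows the classic holographic-reduced-representation recipe. Below is a minimal NumPy sketch of that general mechanism, not the paper's actual implementation: the feature dimension `d`, batch size, and random Gaussian keys are illustrative assumptions, and recovery uses circular correlation as the approximate inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_conv(a, b):
    # Circular convolution via FFT: elementwise product in the frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circular_corr(a, b):
    # Circular correlation: approximate inverse of circular convolution (unbinding).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

d = 4096      # flattened feature dimension (assumed for illustration)
batch = 16    # features folded into one vector, i.e., a 16x batch-wise compression

# Random keys with ~unit norm; in high dimension they are quasi-orthogonal.
keys = rng.normal(0.0, 1.0 / np.sqrt(d), size=(batch, d))
features = rng.normal(size=(batch, d))

# Compress: bind each feature to its key, then superpose (sum) into one vector.
compressed = np.sum([circular_conv(keys[i], features[i]) for i in range(batch)], axis=0)

# Decompress: correlate with each key to approximately recover its bound feature.
recovered = np.stack([circular_corr(keys[i], compressed) for i in range(batch)])

# Cosine similarity between originals and recoveries: noisy but far above chance.
cos = [features[i] @ recovered[i] /
       (np.linalg.norm(features[i]) * np.linalg.norm(recovered[i]))
       for i in range(batch)]
print(np.round(cos, 2))
```

In this sketch the single `compressed` vector stands in for the one transmitted feature, which is why the scheme trades a small amount of crosstalk noise (visible in the cosine similarities) for the batch-wise reduction in communication.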