Paper Title
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
Paper Authors
Paper Abstract
Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there have been many results on convergence rates under communication compression, a theoretical lower bound has been missing. Analyses of algorithms with communication compression attribute convergence to one of two abstract properties: the unbiased property or the contractive property. Either property can be combined with unidirectional compression (only messages from workers to the server are compressed) or bidirectional compression. In this paper, we consider distributed stochastic algorithms for minimizing smooth, non-convex objective functions under communication compression. We establish a convergence lower bound that covers algorithms using unbiased or contractive compressors, in either the unidirectional or the bidirectional setting. To close the gap between this lower bound and the existing upper bounds, we further propose NEOLITHIC, an algorithm that nearly attains the lower bound (up to logarithmic factors) under mild conditions. Our results also show that contractive bidirectional compression can yield iterative methods that converge as fast as those using unbiased unidirectional compression. Experimental results validate our findings.
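For readers unfamiliar with the two compressor properties named in the abstract, the standard definitions from the compression literature are as follows (a reference sketch; the paper's exact parameterization may differ). A randomized operator \(C:\mathbb{R}^d \to \mathbb{R}^d\) is unbiased or contractive if, for all \(x \in \mathbb{R}^d\),

\[
\mathbb{E}[C(x)] = x, \qquad \mathbb{E}\,\|C(x) - x\|^2 \le \omega \|x\|^2 \quad \text{(unbiased, } \omega \ge 0\text{)},
\]
\[
\mathbb{E}\,\|C(x) - x\|^2 \le (1 - \delta)\|x\|^2 \quad \text{(contractive, } \delta \in (0, 1]\text{)}.
\]

Two canonical instances are rand-k sparsification (unbiased with \(\omega = d/k - 1\)) and top-k selection (contractive with \(\delta = k/d\)). The following minimal Python sketch (assuming NumPy; these helpers are illustrative and not from the paper) shows both:

import numpy as np

def rand_k(x: np.ndarray, k: int) -> np.ndarray:
    # Unbiased rand-k compressor: keep k uniformly random coordinates and
    # rescale by d/k so that E[C(x)] = x; E||C(x)-x||^2 <= (d/k - 1)||x||^2.
    d = x.size
    out = np.zeros_like(x)
    idx = np.random.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    # Contractive (biased) top-k compressor: keep the k largest-magnitude
    # coordinates; ||C(x)-x||^2 <= (1 - k/d)||x||^2 holds deterministically.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

In a compressed training loop, each worker would apply such an operator to its local gradient (or a related message) before sending it to the server; under bidirectional compression, the server's broadcast passes through a compressor as well.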