Paper Title
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Paper Authors
Paper Abstract
In this work, we consider the optimization formulation of personalized federated learning recently introduced by Hanzely and Richtárik (2020) which was shown to give an alternative explanation to the workings of local {\tt SGD} methods. Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity. Our second contribution is the design of several optimal methods matching these lower bounds in almost all regimes. These are the first provably optimal methods for personalized federated learning. Our optimal methods include an accelerated variant of {\tt FedProx}, and an accelerated variance-reduced version of {\tt FedAvg}/Local {\tt SGD}. We demonstrate the practical superiority of our methods through extensive numerical experiments.