Paper Title
FedSplit: An algorithmic framework for fast federated optimization
Paper Authors
Reese Pathak, Martin J. Wainwright
Paper Abstract
Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some simple experiments that demonstrate the benefits of our methods in practice.
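The abstract describes FedSplit only at a high level. As a concrete illustration, below is a minimal sketch of a Peaceman-Rachford-style consensus splitting iteration of the kind FedSplit builds on, specialized to local least-squares losses f_j(x) = (1/2)||A_j x - b_j||^2 so that the proximal step has a closed form. The step size s, the helper names (prox_quadratic, fedsplit_style), and the synthetic data are illustrative assumptions, not the paper's verbatim algorithm.

import numpy as np

def prox_quadratic(A, b, v, s):
    # Closed-form prox_{s f}(v) for f(x) = 0.5 * ||A x - b||^2, i.e. the
    # minimizer of f(x) + (1 / (2 s)) * ||x - v||^2.
    d = A.shape[1]
    return np.linalg.solve(s * (A.T @ A) + np.eye(d), s * (A.T @ b) + v)

def fedsplit_style(blocks, s=0.5, rounds=200):
    # blocks: per-agent problems (A_j, b_j); the global objective is the sum
    # of the local losses (the additive structure mentioned in the abstract).
    m, d = len(blocks), blocks[0][0].shape[1]
    z = np.zeros((m, d))                  # one splitting variable per agent
    for _ in range(rounds):
        xbar = z.mean(axis=0)             # server: aggregate and broadcast
        for j, (A, b) in enumerate(blocks):
            half = prox_quadratic(A, b, 2.0 * xbar - z[j], s)  # local prox step
            z[j] += 2.0 * (half - xbar)                        # local centering
    return z.mean(axis=0)

# Tiny usage example on synthetic data (hypothetical setup). At a fixed point
# of this iteration, the average of the z_j minimizes the summed loss, which
# matches the "correct fixed points" property claimed in the abstract.
rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
As = [rng.normal(size=(20, 5)) for _ in range(4)]
blocks = [(A, A @ x_true + 0.01 * rng.normal(size=20)) for A in As]
print(np.linalg.norm(fedsplit_style(blocks) - x_true))  # small residual

In this sketch the prox step is exact because each local loss is quadratic; in general the abstract's robustness claim matters precisely because agents would compute the prox only approximately, e.g. with a few local gradient steps.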