Paper Title
SphereFed: Hyperspherical Federated Learning
Paper Authors
Paper Abstract
Federated Learning aims at training a global model from multiple decentralized devices (i.e., clients) without exchanging their private local data. A key challenge is handling non-i.i.d. (not independent and identically distributed) data across clients, which may induce disparities among their local features. We introduce the Hyperspherical Federated Learning (SphereFed) framework to address the non-i.i.d. issue by constraining learned representations of data points to lie on a unit hypersphere shared by all clients. Specifically, all clients learn their local representations by minimizing the loss with respect to a fixed classifier whose weights span the unit hypersphere. After federated training improves the global model, this classifier is further calibrated with a closed-form solution obtained by minimizing a mean squared loss. We show that the calibration solution can be computed efficiently and in a distributed manner, without direct access to local data. Extensive experiments indicate that our SphereFed approach improves the accuracy of multiple existing federated learning algorithms by a considerable margin (up to 6% on challenging datasets) with enhanced computation and communication efficiency across datasets and model architectures.
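The two mechanisms the abstract describes can be sketched concretely. Below is a minimal NumPy illustration, assuming the setup implied by the abstract: (1) a frozen classifier whose unit-norm rows lie on the shared hypersphere, scored against L2-normalized features, and (2) a closed-form least-squares calibration assembled from per-client sufficient statistics so the server never sees raw local data. All names, shapes, and the ridge term `lam` are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 32

# Fixed classifier: orthonormal rows, i.e., unit vectors on the hypersphere.
# (Illustrative choice; the paper's exact construction may differ.)
q, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
W_fixed = q.T  # shape (num_classes, feat_dim), each row has unit norm

def hypersphere_logits(features, W=W_fixed):
    """Project features onto the unit hypersphere, then score against the
    frozen classifier (cosine similarity to each class direction)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return f @ W.T

# Closed-form calibration: each client k shares only F_k^T F_k and F_k^T Y_k,
# so the server can solve a (ridge-regularized) least-squares problem
# without direct access to local features or labels.
def client_statistics(features, labels, num_classes=num_classes):
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    y = np.eye(num_classes)[labels]  # one-hot targets
    return f.T @ f, f.T @ y

def calibrate(stats, lam=1e-3):
    A = sum(s[0] for s in stats) + lam * np.eye(feat_dim)
    B = sum(s[1] for s in stats)
    return np.linalg.solve(A, B).T  # calibrated classifier, (num_classes, feat_dim)

# Toy run with two "clients" holding different halves of the data.
feats = rng.standard_normal((200, feat_dim))
labels = rng.integers(0, num_classes, size=200)
stats = [client_statistics(feats[:100], labels[:100]),
         client_statistics(feats[100:], labels[100:])]
W_cal = calibrate(stats)
print(W_cal.shape)  # (10, 32)
```

Because the rows of `W_fixed` and the normalized features are both unit vectors, each logit is a cosine similarity bounded in [-1, 1], which is what keeps client representations comparable on the shared hypersphere.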