Paper Title
Continual Local Training for Better Initialization of Federated Models
Paper Authors
Paper Abstract
Federated learning (FL) refers to the learning paradigm that trains machine learning models directly in decentralized systems consisting of smart edge devices, without transmitting the raw data, thereby avoiding heavy communication costs and privacy concerns. Given the heterogeneous data distributions typical of such settings, the popular FL algorithm \emph{Federated Averaging} (FedAvg) suffers from weight divergence and thus cannot achieve competitive performance for the global model (denoted as the \emph{initial performance} in FL) compared to centralized methods. In this paper, we propose the local continual training strategy to address this problem. Importance weights are evaluated on a small proxy dataset on the central server and then used to constrain the local training. With this additional term, we alleviate the weight divergence and continually integrate the knowledge on different local clients into the global model, which ensures a better generalization ability. Experiments on various FL settings demonstrate that our method significantly improves the initial performance of federated models with little extra communication cost.
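The abstract does not spell out the constrained local objective; the following is a minimal sketch of one plausible formulation, assuming an importance-weighted quadratic penalty in the style of continual-learning regularizers. The symbols $\Omega_i$ (importance weights estimated on the server's proxy dataset), $\theta^{g}$ (the latest global model parameters), and $\lambda$ (a balancing coefficient) are illustrative notation, not taken from the paper.

% Hypothetical local objective on client k: the usual task loss plus an
% importance-weighted penalty that pulls each parameter theta_i toward
% the corresponding global-model value theta^g_i.
\begin{equation}
  \mathcal{L}_k(\theta) \;=\;
  \mathcal{L}^{\text{task}}_k(\theta)
  \;+\; \frac{\lambda}{2} \sum_i \Omega_i \left(\theta_i - \theta^{g}_i\right)^2
\end{equation}

Under this reading, parameters that the proxy data marks as important are changed less during local training, which is how the weight divergence described above would be mitigated while knowledge from different clients is retained in the global model.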