Paper Title
Acceleration of Federated Learning with Alleviated Forgetting in Local Training
Paper Authors
Paper Abstract
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy by independently training local models on each client and then aggregating parameters on a central server, thereby producing an effective global model. Although a variety of FL algorithms have been proposed, their training efficiency remains low when the data are not independently and identically distributed (non-i.i.d.) across different clients. We observe that the slow convergence rates of the existing methods are (at least partially) caused by the catastrophic forgetting issue during the local training stage on each individual client, which leads to a large increase in the loss function on the previous training data of the other clients. Here, we propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage by regularizing locally trained parameters with the loss on generated pseudo data, which encode the knowledge of the previous training data learned by the global model. Our comprehensive experiments demonstrate that FedReg not only significantly improves the convergence rate of FL, especially when the neural network architecture is deep and the clients' data are extremely non-i.i.d., but also protects privacy better in classification problems and is more robust against gradient inversion attacks. The code is available at: https://github.com/Zoesgithub/FedReg.
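The abstract only outlines the idea at a high level. As a rough illustration of the mechanism it describes, the sketch below shows what a FedReg-style local update could look like in PyTorch: the client optimizes its usual supervised loss while an extra term, computed on generated pseudo data, keeps the locally trained parameters close to the knowledge already captured by the global model. The pseudo-data generator, the KL-based regularizer, and the weight `lam` are illustrative assumptions for this sketch, not the exact construction from the paper or the linked repository.

```python
# Minimal sketch of a FedReg-style local update (illustrative assumptions:
# the pseudo-data generation step, the KL regularizer, and `lam` are
# placeholders, not the paper's exact procedure).
import torch
import torch.nn.functional as F


def generate_pseudo_data(global_model, x, steps=5, step_size=0.1):
    """Hypothetical pseudo-data generator: perturb local inputs so that the
    global model's predictions on them stay close to its original outputs,
    approximating knowledge of previously seen data."""
    global_model.eval()
    with torch.no_grad():
        soft_target = global_model(x).softmax(dim=-1)
    x_pseudo = x.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.kl_div(global_model(x_pseudo).log_softmax(dim=-1),
                        soft_target, reduction="batchmean")
        grad, = torch.autograd.grad(loss, x_pseudo)
        x_pseudo = (x_pseudo - step_size * grad).detach().requires_grad_(True)
    return x_pseudo.detach(), soft_target


def local_train(local_model, global_model, loader, epochs=1, lr=0.01, lam=0.5):
    """One client's local training: ordinary supervised loss on local data,
    plus a regularization term on pseudo data so the locally updated
    parameters do not forget what the global model already learned."""
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    local_model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_pseudo, soft_target = generate_pseudo_data(global_model, x)
            opt.zero_grad()
            # Standard local objective on the client's own data.
            loss_local = F.cross_entropy(local_model(x), y)
            # Regularizer: loss on pseudo data encoding prior global knowledge.
            loss_reg = F.kl_div(local_model(x_pseudo).log_softmax(dim=-1),
                                soft_target, reduction="batchmean")
            (loss_local + lam * loss_reg).backward()
            opt.step()
    return local_model
```

In a full FL round, each client would run `local_train` starting from the current global weights and the server would then aggregate the returned local models (e.g., by parameter averaging); only the local regularization step above is specific to the forgetting-alleviation idea.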