Paper Title
Federated Unlearning: How to Efficiently Erase a Client in FL?
Paper Authors
Paper Abstract
With privacy legislation empowering users with the right to be forgotten, it has become essential to make a model amenable to forgetting some of its training data. However, existing unlearning methods from centralized machine learning cannot be directly applied to distributed settings such as federated learning, owing to differences in the learning protocol and the presence of multiple actors. In this paper, we tackle the problem of federated unlearning for the case of erasing a client, i.e., removing the influence of the client's entire local data from the trained global model. To erase a client, we propose to first perform local unlearning at the client to be erased, and then use the locally unlearned model as the initialization for very few rounds of federated learning between the server and the remaining clients to obtain the unlearned global model. We empirically evaluate our unlearning method with multiple performance measures on three datasets, and demonstrate that it achieves performance comparable to the gold-standard baseline of federated retraining from scratch while being significantly more efficient. Unlike prior work, our unlearning method requires neither global access to the data used for training nor a history of parameter updates stored by the server or any of the clients.
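The two-phase procedure described in the abstract can be sketched compactly. The code below is a minimal illustration, not the authors' exact algorithm: the abstract does not specify how local unlearning is performed, so gradient ascent on the erased client's loss is used here purely as an illustrative placeholder, and FedAvg is assumed as the aggregation rule for the post-unlearning rounds. All names (local_step, fed_avg, federated_unlearn) and the toy model and data are hypothetical.

# Sketch of the two-phase idea from the abstract (assumptions noted above):
# (1) local unlearning at the client to be erased, (2) a few rounds of
# federated averaging among the remaining clients, initialized from the
# locally unlearned model.
import copy
import torch
import torch.nn as nn

def local_step(model, data, lr=0.01, ascent=False):
    """One pass of SGD over a client's data; with ascent=True the loss is
    maximized instead of minimized (placeholder for local unlearning)."""
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in data:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        (-loss if ascent else loss).backward()
        opt.step()
    return model

def fed_avg(models):
    """Average the parameters of a list of client models (FedAvg aggregation)."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, p in avg.named_parameters():
            p.copy_(torch.mean(torch.stack(
                [dict(m.named_parameters())[name] for m in models]), dim=0))
    return avg

def federated_unlearn(global_model, erased_client_data, remaining_clients_data,
                      post_rounds=2):
    # Phase 1: local unlearning at the client to be erased
    # (gradient ascent used only as a stand-in here).
    unlearned = local_step(copy.deepcopy(global_model), erased_client_data,
                           ascent=True)
    # Phase 2: very few rounds of federated learning among the remaining
    # clients, initialized from the locally unlearned model.
    for _ in range(post_rounds):
        client_models = [local_step(copy.deepcopy(unlearned), data)
                         for data in remaining_clients_data]
        unlearned = fed_avg(client_models)
    return unlearned

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(4, 3)                       # toy global model
    make_batch = lambda: [(torch.randn(8, 4), torch.randint(0, 3, (8,)))]
    erased = make_batch()                         # data of the client to erase
    remaining = [make_batch() for _ in range(3)]  # three remaining clients
    unlearned_global = federated_unlearn(model, erased, remaining)
    print(unlearned_global.weight.shape)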