Paper Title

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification


Authors

Wen, Yuxin, Geiping, Jonas, Fowl, Liam, Goldblum, Micah, Goldstein, Tom

Abstract

Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency. Previous works have exposed privacy vulnerabilities in the FL pipeline by recovering user data from gradient updates. However, existing attacks fail to address realistic settings because they either 1) require toy settings with very small batch sizes, or 2) require unrealistic and conspicuous architecture modifications. We introduce a new strategy that dramatically elevates existing attacks to operate on batches of arbitrarily large size, and without architectural modifications. Our model-agnostic strategy only requires modifications to the model parameters sent to the user, which is a realistic threat model in many scenarios. We demonstrate the strategy in challenging large-scale settings, obtaining high-fidelity data extraction in both cross-device and cross-silo federated learning.
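The core idea — a malicious server sends modified model parameters so that, in a large batch, effectively only one sample contributes to the gradient update — can be illustrated with a toy numpy sketch. This is a simplified "class fishing" illustration under assumptions of my own (a single softmax classification layer acting directly on the input, and an arbitrary bias offset of 20), not the paper's exact procedure: inflating the biases of all non-target classes drives the target-class softmax probability toward zero for every sample except the one labeled with the target class, so the averaged gradient of the target row is dominated by that single sample, and its input can be read off as `grad_W[target] / grad_b[target]`.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 64, 16, 10                 # batch size, input dim, number of classes
X = rng.normal(size=(n, d))          # user batch the attacker wants to fish from
y = rng.integers(0, k, size=n)
target = 3
y[0] = target                        # make sample 0 the only target-class sample
y[1:][y[1:] == target] = (target + 1) % k

W = rng.normal(size=(k, d)) * 0.01   # classification layer weights
b = np.zeros(k)
# malicious server-side modification (hypothetical magnitude): inflate every
# non-target bias, so softmax puts ~0 mass on `target` for non-target samples
b[np.arange(k) != target] = 20.0

# honest client computes an averaged cross-entropy gradient on its batch
logits = X @ W.T + b
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
grad_z = (p - np.eye(k)[y]) / n      # d(mean CE loss)/d(logits)
grad_W = grad_z.T @ X                # what the server receives
grad_b = grad_z.sum(axis=0)

# server-side recovery: the target row is dominated by sample 0's contribution
x_rec = grad_W[target] / grad_b[target]
print(np.abs(x_rec - X[0]).max())    # reconstruction error should be tiny
```

Despite the batch containing 64 samples, the recovered vector matches sample 0 almost exactly, which is the sense in which the attack sidesteps the small-batch limitation of earlier gradient-inversion attacks.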
