Paper Title

FedSel: Federated SGD under Local Differential Privacy with Top-k Dimension Selection

Authors

Ruixuan Liu, Yang Cao, Masatoshi Yoshikawa, Hong Chen

Abstract

As massive amounts of data are produced by small devices, federated learning on mobile devices has become an emerging trend. In the federated setting, Stochastic Gradient Descent (SGD) is widely used to train various machine learning models. To prevent privacy leakage from gradients computed on users' sensitive data, local differential privacy (LDP) has recently been adopted as a privacy guarantee in federated SGD. However, existing solutions suffer from a dimension dependency problem: the injected noise is essentially proportional to the dimension $d$. In this work, we propose a two-stage framework, FedSel, for federated SGD under LDP that relieves this problem. Our key idea is that not all dimensions are equally important, so we privately select the Top-k dimensions according to their contributions in each iteration of federated SGD. Specifically, we propose three private dimension selection mechanisms and adapt a gradient accumulation technique to stabilize the learning process under noisy updates. We also theoretically analyze the privacy, accuracy, and time complexity of FedSel, which outperforms state-of-the-art solutions. Experiments on real-world and synthetic datasets verify the effectiveness and efficiency of our framework.
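The abstract describes the two-stage design but gives no pseudocode. Below is a minimal sketch of one client-side report in a FedSel-style scheme, not the authors' exact algorithm: stage 1 privately selects a dimension biased toward the Top-k by magnitude (here via an exponential-mechanism-style score, standing in for the paper's three selection mechanisms), and stage 2 perturbs the selected value under LDP (here a Duchi-et-al.-style one-bit mechanism). The function name `fedsel_report`, the even privacy-budget split, and the clipping bound are illustrative assumptions.

```python
import numpy as np

def fedsel_report(grad, accum, k, eps, clip, rng):
    """One FedSel-style client report (hedged sketch, not the paper's exact method).

    grad  : current local gradient, np.ndarray of shape (d,)
    accum : residual accumulator carried across iterations (gradient accumulation)
    Returns (selected index, noisy value, updated accumulator).
    """
    d = grad.shape[0]
    g = accum + grad                      # accumulate unreported gradient mass
    eps1 = eps2 = eps / 2.0               # assumed even split of the budget

    # Stage 1: exponential-mechanism-style selection.
    # Score 1 for the k largest-magnitude dimensions, 0 otherwise (sensitivity 1).
    topk = np.argsort(np.abs(g))[-k:]
    scores = np.zeros(d)
    scores[topk] = 1.0
    probs = np.exp(eps1 * scores / 2.0)
    probs /= probs.sum()
    j = rng.choice(d, p=probs)

    # Stage 2: clip the selected value to [-clip, clip], rescale to [-1, 1],
    # and apply a Duchi et al.-style one-bit mechanism (unbiased in expectation).
    v = np.clip(g[j], -clip, clip) / clip
    p = v * (np.exp(eps2) - 1) / (2 * np.exp(eps2) + 2) + 0.5
    b = clip * (np.exp(eps2) + 1) / (np.exp(eps2) - 1)
    noisy = b if rng.random() < p else -b

    accum = g.copy()
    accum[j] = 0.0                        # reported dimension leaves the residual
    return j, noisy, accum

# Example: a 100-dimensional gradient, reporting one dimension per round.
rng = np.random.default_rng(0)
grad = rng.normal(size=100)
accum = np.zeros(100)
j, val, accum = fedsel_report(grad, accum, k=10, eps=2.0, clip=1.0, rng=rng)
```

Note the design point this sketch illustrates: since each client reports only a single (index, value) pair per iteration, the injected noise no longer scales with $d$, which is the dimension-dependency relief the abstract claims.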
