Paper title
Social-Aware Clustered Federated Learning with Customized Privacy Preservation
Paper authors
Abstract
A key feature of federated learning (FL) is to preserve the data privacy of end users. However, potential privacy leakage still exists when exchanging gradients under FL. As a result, recent research often explores differential privacy (DP) approaches, which add noise to computed results to address privacy concerns with low overhead but degrade model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading them to the cloud for global aggregation. By mixing model updates in a social group, adversaries can only eavesdrop on the social-layer combined results, not the privacy of individuals. We unfold the design of SCFL in three steps. i) Stable social cluster formation. Considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping. For clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates according to their social trust degrees. iii) Distributed convergence. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
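The aggregation flow described in the abstract (intra-cluster mixing of raw updates, trust-dependent sanitization, then cloud-side global aggregation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cluster sizes, trust scores, and the linear `max_sigma * (1 - trust)` noise scale are hypothetical choices standing in for SCFL's actual trust-privacy mapping.

```python
# Illustrative sketch of SCFL-style aggregation; all parameters are
# hypothetical and chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def sanitize(update, trust, max_sigma=0.5):
    """Add Gaussian noise scaled inversely with the cluster's mutual
    trust: full trust (1.0) leaves the update untouched, while lower
    trust injects more noise (a stand-in for the trust-privacy mapping)."""
    sigma = max_sigma * (1.0 - trust)
    if sigma == 0.0:
        return update
    return update + rng.normal(0.0, sigma, size=update.shape)

def cluster_aggregate(updates, trust):
    """Mix raw updates inside a social cluster, then sanitize the
    combined result before it leaves the cluster for the cloud."""
    mixed = np.mean(updates, axis=0)   # intra-cluster mixing
    return sanitize(mixed, trust)      # trust-dependent sanitization

def global_aggregate(cluster_results, cluster_sizes):
    """Cloud-side aggregation: size-weighted average over the
    cluster-level results (adversaries only ever see these)."""
    weights = np.asarray(cluster_sizes, dtype=float)
    weights /= weights.sum()
    return np.average(cluster_results, axis=0, weights=weights)

# Two hypothetical clusters over a 4-dimensional model update:
# a high-trust cluster of 3 users and a low-trust cluster of 2 users.
d = 4
cluster_a = [rng.normal(size=d) for _ in range(3)]
cluster_b = [rng.normal(size=d) for _ in range(2)]

result_a = cluster_aggregate(cluster_a, trust=0.9)
result_b = cluster_aggregate(cluster_b, trust=0.4)
global_update = global_aggregate([result_a, result_b], [3, 2])
print(global_update.shape)  # (4,)
```

Note that the cloud only receives `result_a` and `result_b`, never the individual members' gradients, which is the privacy argument the abstract makes for intra-cluster mixing.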