Paper Title

An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee

Authors

Huang, Tiansheng, Lin, Weiwei, Wu, Wentai, He, Ligang, Li, Keqin, Zomaya, Albert Y.

Abstract

The issue of potential privacy leakage during centralized AI model training has drawn intensive public concern. A Parallel and Distributed Computing (PDC) scheme termed Federated Learning (FL) has emerged as a new paradigm to cope with the privacy issue by allowing clients to perform model training locally, without the necessity of uploading their personal sensitive data. In FL, the number of clients could be sufficiently large, but the bandwidth available for model distribution and re-upload is quite limited, making it sensible to involve only a subset of the volunteers in the training process. The client selection policy is critical to an FL process in terms of training efficiency, the quality of the final model, and fairness. In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method for estimating the model exchange time between each client and the server, based on which we design a fairness-guaranteed algorithm, termed RBCS-F, to solve the problem. The regret of RBCS-F is strictly bounded by a finite constant, justifying its theoretical feasibility. Beyond the theoretical results, further empirical evidence is derived from our real training experiments on public datasets.
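The core idea in the abstract can be sketched in miniature: a contextual bandit estimates each client's model exchange time, while a Lyapunov-style virtual queue accrues "fairness credit" for clients that are skipped, so everyone is eventually selected. This is only an illustrative sketch, not the authors' RBCS-F algorithm; the LinUCB-style estimator, the queue update, the scoring rule, and all constants (`ALPHA`, `BETA`) are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, d = 10, 3, 4        # clients, selections per round, context dimension
ALPHA, BETA = 1.0, 0.3    # UCB exploration weight, target selection rate

# Per-client linear-bandit state for estimating model exchange time
A = [np.eye(d) for _ in range(N)]       # Gram matrices
b = [np.zeros(d) for _ in range(N)]     # response vectors
z = np.zeros(N)                          # fairness virtual queues

# Hidden per-client latency model used to simulate observed exchange times
true_theta = rng.uniform(0.5, 2.0, size=(N, d))

for t in range(200):
    ctx = rng.uniform(0, 1, size=(N, d))  # per-round client contexts
    scores = np.empty(N)
    for i in range(N):
        A_inv = np.linalg.inv(A[i])
        theta_hat = A_inv @ b[i]
        mean = ctx[i] @ theta_hat
        bonus = ALPHA * np.sqrt(ctx[i] @ A_inv @ ctx[i])
        # Optimistic (low) latency estimate, discounted by fairness credit
        scores[i] = (mean - bonus) - z[i]
    chosen = np.argsort(scores)[:K]       # pick the K lowest-score clients

    for i in range(N):
        if i in chosen:
            # Observe a noisy exchange time and update the bandit state
            latency = ctx[i] @ true_theta[i] + rng.normal(0, 0.05)
            A[i] += np.outer(ctx[i], ctx[i])
            b[i] += latency * ctx[i]
        # Lyapunov-style queue: grows when skipped, drains when selected
        z[i] = max(z[i] + BETA - (1.0 if i in chosen else 0.0), 0.0)
```

With the target rate `BETA` matching the selection ratio `K/N`, the queues stay bounded while slow clients are still picked occasionally, which is the efficiency-versus-fairness trade-off the abstract describes.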
