Paper Title
Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization
Paper Authors
Paper Abstract
Federated learning has recently emerged as a paradigm promising the benefits of harnessing rich data from diverse sources to train high-quality models, with the salient feature that training datasets never leave local devices. Only model updates are locally computed and shared for aggregation to produce a global model. While federated learning greatly alleviates privacy concerns compared to learning with centralized data, sharing model updates still poses privacy risks. In this paper, we present a system design that offers efficient protection of individual model updates throughout the learning procedure, allowing clients to provide only obscured model updates while a cloud server can still perform the aggregation. Our federated learning system first departs from prior work by supporting lightweight encryption and aggregation, as well as resilience against drop-out clients with no impact on their participation in future rounds. Meanwhile, prior work largely overlooks bandwidth efficiency optimization in the ciphertext domain and security against an actively adversarial cloud server, both of which we fully explore in this paper with effective and efficient mechanisms. Extensive experiments over several benchmark datasets (MNIST, CIFAR-10, and CelebA) show that our system achieves accuracy comparable to the plaintext baseline, with practical performance.
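To make the idea of aggregating obscured updates concrete, below is a minimal Python sketch of secure aggregation via pairwise additive masking, a common technique in this line of work. It assumes a toy setting with fixed client ids, a shared seed, and no drop-outs; all function names and parameters are hypothetical, and this is not the paper's actual construction.

# Minimal sketch of secure aggregation via pairwise additive masking.
# NOT the paper's construction; all names and parameters are illustrative.
import random

MOD = 2**32      # work in a finite ring so pairwise masks cancel exactly
VEC_LEN = 4      # toy model-update length

def pairwise_mask(id_a, id_b, seed):
    """Deterministic mask shared by a pair of clients, derived from a common seed."""
    lo, hi = min(id_a, id_b), max(id_a, id_b)
    rng = random.Random(hash((seed, lo, hi)))
    return [rng.randrange(MOD) for _ in range(VEC_LEN)]

def obscure_update(client_id, update, all_ids, seed):
    """Client side: add masks toward higher-id peers, subtract toward lower-id peers."""
    masked = [u % MOD for u in update]
    for peer in all_ids:
        if peer == client_id:
            continue
        mask = pairwise_mask(client_id, peer, seed)
        sign = 1 if client_id < peer else -1
        masked = [(m + sign * x) % MOD for m, x in zip(masked, mask)]
    return masked

def aggregate(masked_updates):
    """Server side: sum the obscured updates; the pairwise masks cancel out."""
    total = [0] * VEC_LEN
    for mu in masked_updates:
        total = [(t + m) % MOD for t, m in zip(total, mu)]
    return total

if __name__ == "__main__":
    ids = [1, 2, 3]
    seed = 42
    updates = {1: [5, 1, 0, 2], 2: [3, 3, 3, 3], 3: [1, 0, 4, 1]}
    masked = [obscure_update(i, updates[i], ids, seed) for i in ids]
    print(aggregate(masked))  # prints [9, 4, 7, 6], the plaintext sum, without the server seeing any individual update

Handling drop-out clients (whose masks would otherwise remain in the sum), reducing ciphertext bandwidth, and defending against an actively adversarial server all require additional machinery beyond this sketch, which is what the paper's design addresses.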