Paper Title
No Free Lunch Theorem for Security and Utility in Federated Learning
Paper Authors
Paper Abstract
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there are two conflicting goals in the choice of appropriate algorithms. On the one hand, private and sensitive training data must be kept as secure as possible in the presence of \textit{semi-honest} partners; on the other hand, a certain amount of information has to be exchanged among the parties for the sake of learning utility. Such a challenge calls for privacy-preserving federated learning solutions that maximize the utility of the learned model while maintaining a provable privacy guarantee for the participating parties' private data. This article illustrates a general framework that a) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and b) delineates quantitative bounds on the privacy-utility trade-off when different protection mechanisms, including Randomization, Sparsity, and Homomorphic Encryption, are used. It is shown that, in general, \textit{there is no free lunch for the privacy-utility trade-off}, and one has to trade the preservation of privacy for a certain degree of degraded utility. The quantitative analysis illustrated in this article may serve as guidance for the design of practical federated learning algorithms.
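To make the trade-off concrete, the following is a minimal Python sketch, not the paper's formal framework: it assumes a Gaussian Randomization mechanism applied to a party's shared model update, and uses the mean squared distortion of the shared update as a crude proxy for both utility loss (the distortion the aggregator sees) and privacy protection (an attacker's reconstruction error when its best guess of the true update is the protected update itself). The names `true_update`, `sigma`, and `distortion` are hypothetical, chosen for illustration.

```python
import numpy as np

# Illustrative sketch of the privacy-utility trade-off under a
# Randomization protection mechanism (additive Gaussian noise).
# In this toy setup, the attacker's reconstruction error and the
# aggregator's distortion are the same quantity, so strengthening
# protection (larger sigma) necessarily degrades utility: no free lunch.

rng = np.random.default_rng(seed=0)
true_update = rng.normal(size=1000)  # the party's unprotected model update

for sigma in [0.0, 0.1, 1.0, 10.0]:
    # Protected update actually shared with other parties
    protected = true_update + rng.normal(scale=sigma, size=true_update.shape)
    # Mean squared distortion of the shared update
    distortion = float(np.mean((protected - true_update) ** 2))
    print(f"sigma={sigma:5.1f}  utility-loss proxy={distortion:9.4f}  "
          f"privacy-protection proxy={distortion:9.4f}")
```

Running the sketch shows both proxies growing together with the noise scale, which mirrors the abstract's claim that privacy preservation must be traded for a certain degree of degraded utility.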