Paper Title

More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence

Paper Authors

Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip S. Yu

Paper Abstract

Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, alongside all its advancements, problems have also emerged, such as privacy violations, security issues and model fairness. Differential privacy, as a promising mathematical model, has several attractive properties that can help solve these problems, making it quite a valuable tool. For this reason, differential privacy has been broadly applied in AI but to date, no study has documented which differential privacy mechanisms can or have been leveraged to overcome its issues or the properties that make this possible. In this paper, we show that differential privacy can do more than just privacy preservation. It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI. With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving AI performance with differential privacy techniques.
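
For readers unfamiliar with the formal model, the following is a standard textbook statement of differential privacy and its sequential composition property; it is background added here for context, not text quoted from the paper. A randomized mechanism $M$ satisfies $\varepsilon$-differential privacy if, for every pair of neighboring datasets $D$ and $D'$ (differing in a single record) and every set of outputs $S$,

\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
\]

Sequential composition, one of the "attractive properties" the abstract alludes to, states that applying an $\varepsilon_1$-differentially private mechanism and an $\varepsilon_2$-differentially private mechanism to the same data yields an $(\varepsilon_1 + \varepsilon_2)$-differentially private pipeline. This is what allows a privacy budget to be tracked across the machine learning, distributed learning, deep learning, and multi-agent settings the survey covers.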
