Paper Title

Auditing Differentially Private Machine Learning: How Private is Private SGD?

Paper Authors

Matthew Jagielski, Jonathan Ullman, Alina Oprea

Paper Abstract

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms that we believe has the potential to complement and influence analytical work on differential privacy.
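
To make the auditing idea concrete, below is a minimal sketch of how an empirical lower bound on ε can be derived from a poisoning-based membership-distinguishing attack: run the attack repeatedly on two neighboring datasets (with and without the poisoned points), count its false positives and false negatives, and convert high-confidence upper bounds on those error rates into a lower bound on ε via the standard (ε, δ)-DP constraint FPR + e^ε · FNR ≥ 1 − δ. The function names, the Clopper-Pearson confidence bounds, and the example numbers are illustrative assumptions, not the authors' exact code.

```python
import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(errors, trials, alpha=0.05):
    """One-sided Clopper-Pearson upper confidence bound on a binomial proportion."""
    if errors == trials:
        return 1.0
    return beta.ppf(1 - alpha, errors + 1, trials - errors)


def epsilon_lower_bound(fp, fn, trials, delta=1e-5, alpha=0.05):
    """Empirical lower bound on epsilon from attack error counts.

    fp / fn: false positives / false negatives out of `trials` runs of the
    distinguishing attack on each of the two neighboring datasets. Any
    (eps, delta)-DP mechanism forces FPR + e^eps * FNR >= 1 - delta (and the
    symmetric inequality), so upper confidence bounds on the error rates
    yield a high-confidence lower bound on eps.
    """
    fpr_ub = clopper_pearson_upper(fp, trials, alpha)
    fnr_ub = clopper_pearson_upper(fn, trials, alpha)
    bounds = []
    for a, b in [(fpr_ub, fnr_ub), (fnr_ub, fpr_ub)]:
        if 1 - delta - b > 0 and a > 0:
            bounds.append(np.log((1 - delta - b) / a))
    return max(bounds) if bounds else 0.0


# Hypothetical example: 500 attack trials per world, 5 false positives and
# 20 false negatives, gives an empirical lower bound on epsilon.
print(epsilon_lower_bound(fp=5, fn=20, trials=500, delta=1e-5))
```

If the lower bound produced this way approaches the ε claimed by the analytical privacy accounting, the analysis is close to tight for that implementation; a large gap suggests either a weak attack or slack in the analysis.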
