Paper Title


Verifiable Differential Privacy

Paper Authors

Biswas, Ari, Cormode, Graham

Paper Abstract


Differential Privacy (DP) is often presented as a strong privacy-enhancing technology with broad applicability and advocated as a de facto standard for releasing aggregate statistics on sensitive data. However, in many embodiments, DP introduces a new attack surface: a malicious entity entrusted with releasing statistics could manipulate the results and use the randomness of DP as a convenient smokescreen to mask its nefariousness. Since revealing the random noise would obviate the purpose of introducing it, the miscreant may have a perfect alibi. To close this loophole, we introduce the idea of Verifiable Differential Privacy, which requires the publishing entity to output a zero-knowledge proof that convinces an efficient verifier that the output is both DP and reliable. Such a definition might seem unachievable, as a verifier must validate that DP randomness was generated faithfully without learning anything about the randomness itself. We resolve this paradox by carefully mixing private and public randomness to compute verifiable DP counting queries with theoretical guarantees and show that it is also practical for real-world deployment. We also demonstrate that computational assumptions are necessary by showing a separation between information-theoretic DP and computational DP under our definition of verifiability.
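To give intuition for the "mixing private and public randomness" idea, the following is a minimal illustrative sketch (the function names and structure are ours, not the paper's protocol): each noise bit is the XOR of a prover-chosen bit and a verifier-chosen public bit, so as long as the verifier's bits are uniform, even a malicious prover cannot bias the resulting Binomial noise added to a counting query. The real scheme additionally commits to the prover's bits and proves correctness in zero knowledge, which this sketch omits.

```python
import random


def unbiasable_bit(prover_bit: int, verifier_bit: int) -> int:
    """XOR of two bits: if either input is uniformly random,
    the output is uniformly random, regardless of the other input."""
    return prover_bit ^ verifier_bit


def verifiable_dp_count(true_count: int, prover_bits, verifier_bits) -> int:
    """Release true_count + Binomial(n, 1/2) - n/2 noise, where each
    Bernoulli(1/2) noise bit is jointly generated by both parties.
    (Illustrative only: no commitments or ZK proofs are modeled here.)"""
    assert len(prover_bits) == len(verifier_bits)
    n = len(prover_bits)
    noise = sum(p ^ v for p, v in zip(prover_bits, verifier_bits)) - n // 2
    return true_count + noise
```

Even if a malicious prover fixes all of its bits to 0 (hoping to release the exact count with no noise), XORing with the verifier's uniform public bits still forces the noise to follow the intended centered Binomial distribution.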
