Paper Title
Politics of Adversarial Machine Learning
Paper Authors
Paper Abstract
In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of machine learning systems and those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure that ML systems serve democratic, not authoritarian, ends.
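As a minimal illustration of the "perturbation" attacks the abstract names, below is a sketch of the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015) in PyTorch. The toy linear model, tensor shapes, epsilon value, and the function name fgsm_perturb are illustrative assumptions for this sketch, not details from the paper itself.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    # FGSM: shift input x by epsilon in the direction that
    # increases the model's loss on the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # For small epsilon the change is imperceptible on inputs like
    # images, yet it can flip the model's prediction.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage on a stand-in linear "classifier" with random data.
model = nn.Linear(4, 2)
x = torch.randn(1, 4)
y = torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())

In the paper's framing, the same mechanism cuts both ways: a protester might use such a perturbation to evade a surveillance classifier, while a deployer's defenses against it can also foreclose legitimate scrutiny of the system.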