Paper Title

DDoD: Dual Denial of Decision Attacks on Human-AI Teams

Authors

Benjamin Tag, Niels van Berkel, Sunny Verma, Benjamin Zi Hao Zhao, Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko

Abstract

Artificial Intelligence (AI) systems have been increasingly used to make decision-making processes faster, more accurate, and more efficient. However, such systems are also at constant risk of being attacked. While the majority of attacks targeting AI-based applications aim to manipulate classifiers or training data and alter the output of an AI model, recently proposed Sponge Attacks against AI models aim to impede the classifier's execution by consuming substantial resources. In this work, we propose \textit{Dual Denial of Decision (DDoD) attacks against collaborative Human-AI teams}. We discuss how such attacks aim to deplete \textit{both computational and human} resources, and significantly impair decision-making capabilities. We describe DDoD on human and computational resources and present potential risk scenarios in a series of exemplary domains.
