Paper Title

Object Hider: Adversarial Patch Attack Against Object Detectors

Paper Authors

Yusheng Zhao, Huanqian Yan, Xingxing Wei

Paper Abstract

Deep neural networks have been widely used in many computer vision tasks. However, it has been shown that they are susceptible to small, imperceptible perturbations added to the input. Inputs with carefully designed perturbations that can fool deep learning models are called adversarial examples, and they have raised serious concerns about the security of deep neural networks. Object detection algorithms, which locate and classify objects in images or videos, are at the core of many computer vision tasks and have great research value and wide applications. In this paper, we focus on adversarial attacks against some state-of-the-art object detection models. As a practical alternative to full-image perturbations, we use adversarial patches for the attack. We propose two adversarial patch generation algorithms: a heatmap-based algorithm and a consensus-based algorithm. Experimental results show that the proposed methods are highly effective, transferable, and generic. Additionally, we applied the proposed methods in the "Adversarial Challenge on Object Detection" competition, organized by Alibaba on the Tianchi platform, and ranked in the top 7 among 1701 teams. Code is available at: https://github.com/FenHua/DetDak
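As a rough illustration of the attack setting described in the abstract, the sketch below optimizes a single adversarial patch so that an object detector's confidence in its detections drops. It is a generic, minimal example, not the paper's heatmap-based or consensus-based algorithm; the `detector.detection_scores` interface, the fixed patch location, and all hyperparameters are hypothetical placeholders.

```python
import torch

# Generic adversarial patch sketch (NOT the paper's heatmap-based or
# consensus-based method). `detector` is assumed to expose a differentiable
# `detection_scores(images)` call returning per-detection confidence scores;
# this interface, the patch location, and the hyperparameters are hypothetical.
def optimize_patch(image, detector, patch_size=50, top=100, left=100,
                   steps=200, lr=0.03):
    """Optimize a square patch pasted at (top, left) to suppress detections."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        patched = image.clone()
        # Paste the patch onto the image (image: 3xHxW float tensor in [0, 1]).
        patched[:, top:top + patch_size, left:left + patch_size] = patch
        scores = detector.detection_scores(patched.unsqueeze(0))
        loss = scores.sum()              # total detection confidence
        optimizer.zero_grad()
        loss.backward()                  # gradient w.r.t. the patch pixels
        optimizer.step()                 # update the patch to lower confidence
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)       # keep the patch a valid image region

    return patch.detach()
```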
