Paper Title


AdvSmo: Black-box Adversarial Attack by Smoothing Linear Structure of Texture

Authors

Hui Xia, Rui Zhang, Shuliang Jiang, Zi Kang

Abstract


Black-box attacks usually face two problems: poor transferability and the inability to evade adversarial defenses. To overcome these shortcomings, we propose an original approach, called AdvSmo, that generates adversarial examples by smoothing the linear structure of the texture in a benign image. We construct adversarial examples without relying on any internal information of the target model, and we design an imperceptibility-high-attack-success-rate constraint that guides a Gabor filter to select appropriate angles and scales for smoothing the linear texture in the input images. Benefiting from this design, AdvSmo generates adversarial examples with strong transferability and solid evasiveness. Finally, compared with four advanced black-box adversarial attack methods on eight target models, AdvSmo improves the average attack success rate over the best of these methods by 9% on CIFAR-10 and by 16% on Tiny-ImageNet.
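The core operation the abstract describes is convolving an image with a Gabor filter at a chosen angle and scale to smooth linear texture. The paper's constraint-guided selection of angles and scales is not specified here, so the sketch below only illustrates the Gabor-smoothing step itself, in plain NumPy; the kernel parameters (`theta`, `lam`, `sigma`, kernel size) are illustrative assumptions, not the paper's values.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier.
    theta is the orientation, lam the wavelength, sigma the envelope width."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to the filter orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    g *= np.cos(2 * np.pi * x_t / lam + psi)
    return g / np.abs(g).sum()  # normalize so the response stays bounded

def smooth_linear_texture(img, theta=np.pi / 4, lam=8.0, sigma=4.0, size=9):
    """Convolve a 2-D grayscale image with the Gabor kernel (edge padding).
    theta/lam/sigma stand in for the angle/scale AdvSmo would select."""
    k = gabor_kernel(size, theta, lam, sigma)
    half = size // 2
    padded = np.pad(img, half, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

# toy input: a 32x32 image with vertical stripes, i.e. strong linear texture
img = np.tile((np.arange(32) % 2).astype(float), (32, 1))
adv = smooth_linear_texture(img)
```

In the actual attack, the angle and scale would be chosen by the imperceptibility-high-attack-success-rate constraint rather than fixed; a practical implementation would likely use a library routine such as OpenCV's `cv2.getGaborKernel` instead of the hand-rolled kernel above.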
