Paper Title
Robust Attacks on Deep Learning Face Recognition in the Physical World
Paper Authors
Paper Abstract
Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems. Recent studies, however, show that DNNs are vulnerable to adversarial examples, which can mislead DNN-based FR systems in the physical world. Existing attacks on these systems either generate perturbations that work only in the digital world, or rely on customized equipment to generate perturbations and are not robust across varying physical environments. In this paper, we propose FaceAdv, a physical-world attack that crafts adversarial stickers to deceive FR systems. It mainly consists of a sticker generator and a transformer: the former crafts several stickers with different shapes, while the latter digitally attaches the stickers to human faces and provides feedback to the generator to improve the effectiveness of the stickers. We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems (i.e., ArcFace, CosFace and FaceNet). The results show that, compared with a state-of-the-art attack, FaceAdv can significantly improve the success rate of both dodging and impersonation attacks. We also conduct comprehensive evaluations to demonstrate the robustness of FaceAdv.
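The abstract describes a loop in which a generator proposes stickers, a transformer digitally attaches them to a face, and the FR model's response feeds back to improve the stickers. The following is a minimal, hypothetical sketch of that loop's structure only — it is not the authors' method: the GAN-based generator and gradient feedback are replaced by a crude random search, the transformer by simple pixel pasting, and the FR embedding network by a stand-in random projection. A dodging attack is modeled as driving down the cosine similarity between the stickered face's embedding and the enrolled embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image):
    """Stand-in face-embedding model: a fixed random projection
    followed by L2 normalization (NOT a real FR network)."""
    proj = np.random.default_rng(42).standard_normal((128, image.size))
    v = proj @ image.ravel()
    return v / np.linalg.norm(v)

def attach_sticker(face, sticker, top, left):
    """Simplified 'transformer' step: paste the sticker patch onto the face.
    (The paper's transformer also handles shape, pose and lighting.)"""
    out = face.copy()
    h, w = sticker.shape
    out[top:top + h, left:left + w] = sticker
    return out

def cosine(a, b):
    return float(a @ b)  # both vectors are already unit-normalized

face = rng.random((112, 112))   # toy grayscale "face" image
target = embed(face)            # enrolled embedding of the same identity

# Crude stand-in for "generator proposes stickers, feedback improves them":
# greedy random search over sticker pixels, keeping candidates that lower
# the similarity to the enrolled embedding (dodging objective).
sticker = rng.random((16, 16))
best = cosine(embed(attach_sticker(face, sticker, 40, 48)), target)
for _ in range(200):
    cand = np.clip(sticker + 0.1 * rng.standard_normal(sticker.shape), 0, 1)
    score = cosine(embed(attach_sticker(face, cand, 40, 48)), target)
    if score < best:            # lower similarity → better dodging
        sticker, best = cand, score

print(best)
```

The placement `(40, 48)` and sticker size are arbitrary illustrative choices; in the actual attack these are shaped stickers positioned on face regions, optimized end-to-end rather than by random search.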