Paper Title
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models
Paper Authors
Paper Abstract
Nowadays, autonomous driving has attracted much attention from both industry and academia. Convolutional neural networks (CNNs) are a key component in autonomous driving and are also increasingly adopted in pervasive computing on smartphones, wearable devices, and IoT networks. Prior work shows that CNN-based classification models are vulnerable to adversarial attacks. However, it is uncertain to what extent regression models such as driving models are vulnerable to adversarial attacks, how effective existing defense techniques are against them, and what the defense implications are for system and middleware builders. This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models. Experiments show that, similar to classification models, these models remain highly vulnerable to adversarial attacks. This poses a serious security threat to autonomous driving and should therefore be taken into account in practice. While the defense methods can effectively mitigate individual attacks, none of them provides adequate protection against all five. We derive several implications for system and middleware builders: (1) when adding a defense component against adversarial attacks, it is important to deploy multiple defense methods in tandem to achieve good coverage of various attacks; (2) a black-box attack is much less effective than a white-box attack, implying that it is important to keep model details (e.g., model architecture, hyperparameters) confidential via model obfuscation; and (3) driving models with a complex architecture are preferred if computing resources permit, as they are more resilient to adversarial attacks than simple models.
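To illustrate what an adversarial attack on a regression model looks like (as opposed to the well-studied classification case), below is a minimal FGSM-style sketch in PyTorch against a hypothetical steering-angle regressor. The function name, the `angle_shift` target offset, and the [0, 1] pixel range are illustrative assumptions, not the paper's actual implementation; the paper evaluates five distinct attacks, of which one-step gradient attacks are only one family.

```python
# Hedged sketch: `model` is any image -> steering-angle regressor; all
# names and constants here are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_regression_attack(model: nn.Module,
                           image: torch.Tensor,
                           epsilon: float = 0.01,
                           angle_shift: float = 0.3) -> torch.Tensor:
    """One-step signed-gradient perturbation that drives the predicted
    steering angle toward (clean prediction + angle_shift)."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # The clean prediction defines the adversarial target: a fixed
    # offset is added so the perturbed image steers the car off course.
    clean_pred = model(image).detach()
    target = clean_pred + angle_shift

    # Unlike the classification setting (cross-entropy on a label),
    # the regression attack minimizes the distance to a wrong output.
    loss = F.mse_loss(model(image), target)
    loss.backward()

    # Signed-gradient descent step toward the target, clamped back to
    # the assumed valid pixel range [0, 1].
    adv_image = image - epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

The key difference from attacking a classifier is the loss: rather than increasing the cross-entropy of the true label, the attack pulls the continuous output toward an adversarially shifted target, so even a small, visually subtle perturbation can bias the predicted steering angle.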