Paper Title
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Paper Authors
Paper Abstract
Convolutional neural networks (CNNs) are increasingly applied in mobile robotics, such as intelligent vehicles. The security of CNNs in robotics applications is an important issue, for which potential adversarial attacks on CNNs merit research. Pooling is a typical dimension-reduction and information-discarding step in CNNs. Such information discarding may result in the mis-deletion and mis-preservation of data features that largely influence the output of the network, which may aggravate the vulnerability of CNNs to adversarial attacks. In this paper, we conduct adversarial attacks on CNNs from the perspective of network structure by investigating and exploiting the vulnerability of pooling. First, a novel adversarial attack methodology named Strict Layer-Output Manipulation (SLOM) is proposed. Then an attack method based on Strict Pooling Manipulation (SPM), which is an instantiation of the SLOM spirit, is designed to effectively realize both type I and type II adversarial attacks on a target CNN. The performance of SPM-based attacks at different depths is also investigated and compared. Moreover, the performances of attack methods designed by instantiating the SLOM spirit with different operation layers of CNNs are compared. Experimental results reflect that pooling tends to be more vulnerable to adversarial attacks than other operations in CNNs.
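The intuition behind pooling's vulnerability can be illustrated with a minimal sketch (this is not the paper's SPM method, just an assumed toy example): because max pooling discards every non-maximal value in each window, an input can be changed substantially without altering the pooled output at all, which is the kind of many-to-one mapping that type II attacks exploit.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D array."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((4, 4))

# Perturb every non-maximal entry of each 2x2 window: any change that
# stays strictly below the window maximum is discarded by pooling.
x_adv = x.copy()
for i in range(0, 4, 2):
    for j in range(0, 4, 2):
        window = x_adv[i:i + 2, j:j + 2]   # view into x_adv
        m = window.max()
        mask = window < m
        window[mask] = rng.random(mask.sum()) * m  # new values in [0, m)

# Large input change, identical pooled output.
assert np.allclose(max_pool_2x2(x), max_pool_2x2(x_adv))
print("max input change with identical pooled output:",
      np.abs(x - x_adv).max())
```

The same reasoning suggests the converse direction (a small nudge that flips which element is maximal, changing the pooled output) as the lever for type I attacks, which is the vulnerability the paper's SPM instantiation targets.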