Title
Boosting Adversarial Transferability of MLP-Mixer
Authors
Abstract
The security of models based on new architectures such as MLP-Mixer and ViTs urgently needs to be studied. However, most current research targets adversarial attacks against ViTs, and adversarial work on MLP-Mixer remains relatively scarce. We propose an adversarial attack method against MLP-Mixer called the Maxwell's Demon Attack (MA). MA breaks the channel-mixing and token-mixing mechanisms of MLP-Mixer by controlling part of the input to each of its Mixer layers, disturbing MLP-Mixer's ability to capture the main information of images. Our method masks part of the input to each Mixer layer, which avoids overfitting the adversarial examples to the source model and improves cross-architecture transferability. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed MA. Our method can easily be combined with existing methods and improves transferability by up to 38.0% on the MLP-based ResMLP. Adversarial examples produced by our method on MLP-Mixer exceed the transferability of adversarial examples produced using DenseNet against CNNs. To the best of our knowledge, this is the first work to study the adversarial transferability of MLP-Mixer.
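The core idea in the abstract, masking part of the input to each Mixer layer so the token-mixing and channel-mixing MLPs never see the whole input, can be illustrated with a toy sketch. This is not the paper's implementation: the `keep_prob` hyperparameter, the random token mask, and the fixed random weight matrices standing in for learned MLPs are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_token_mask(x, keep_prob=0.7, rng=rng):
    """Zero out a random subset of token rows so a Mixer layer only
    sees part of its input (hypothetical sketch of MA-style masking;
    keep_prob is an assumed hyperparameter, not from the paper)."""
    tokens, _ = x.shape
    mask = (rng.random(tokens) < keep_prob).astype(x.dtype)
    return x * mask[:, None]

def mixer_layer(x, w_tok, w_ch):
    """Toy Mixer layer: residual token-mixing across the token axis,
    then residual channel-mixing across the channel axis."""
    x = x + w_tok @ x   # token-mixing: mixes information between tokens
    x = x + x @ w_ch    # channel-mixing: mixes information between channels
    return x

tokens, channels = 8, 4
x = rng.standard_normal((tokens, channels))
w_tok = 0.1 * rng.standard_normal((tokens, tokens))
w_ch = 0.1 * rng.standard_normal((channels, channels))

# During attack-gradient computation, feed each Mixer layer a masked
# view of its input instead of the whole input, so the crafted
# perturbation does not overfit the source model's full mixing paths.
out = mixer_layer(random_token_mask(x), w_tok, w_ch)
print(out.shape)  # (8, 4)
```

In a real attack this masking would be applied inside the source model's forward pass while computing gradients for the adversarial perturbation, analogous in spirit to input-diversity methods for CNNs.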