Paper Title


Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future

Authors

Byunggill Joe, Insik Shin, Jihun Hamm

Abstract


Recurrent models are frequently used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for. Existing research is limited in generality, addressing only application-specific vulnerabilities or making implausible assumptions such as knowledge of future inputs. In this paper, we present a general attack framework for online tasks that incorporates the unique constraints of the online setting, which distinguish it from offline tasks. Our framework is versatile in that it covers time-varying adversarial objectives and various optimization constraints, allowing for a comprehensive study of robustness. Using the framework, we also present a novel white-box attack called Predictive Attack that `hallucinates' the future. The attack achieves 98 percent of the performance of the ideal but infeasible clairvoyant attack on average. We validate the effectiveness of the proposed framework and attacks through various experiments.
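The abstract does not specify the attack's details, but the core idea of an online attack that "hallucinates" the future can be illustrated with a minimal NumPy sketch. The toy recurrent model, the naive repeat-the-current-input predictor, the finite-difference gradient, and the L-infinity budget below are all assumptions made for illustration, not the paper's actual method: at each time step, the attacker imagines a short future, then takes an FGSM-style step on the current input only (past inputs are already consumed, future inputs are unknown).

```python
import numpy as np

# Toy recurrent model: h' = tanh(W h + U x), per-step score = v . h
rng = np.random.default_rng(0)
d_h, d_x = 4, 3
W = rng.normal(scale=0.5, size=(d_h, d_h))
U = rng.normal(scale=0.5, size=(d_h, d_x))
v = rng.normal(size=d_h)

def step(h, x):
    return np.tanh(W @ h + U @ x)

def rollout_score(h, xs):
    # Cumulative score of the model over a sequence of inputs xs.
    s = 0.0
    for x in xs:
        h = step(h, x)
        s += v @ h
    return s

def predictive_attack_step(h, x_t, eps=0.1, horizon=3):
    """One online attack step: hallucinate the future (here by naively
    assuming the current input repeats), then take an FGSM-style step
    on the current input only, within an L-infinity budget eps."""
    def objective(delta):
        xs = [x_t + delta] + [x_t] * horizon  # hallucinated future
        return rollout_score(h, xs)
    # Finite-difference gradient of the objective w.r.t. the perturbation.
    g, fd = np.zeros_like(x_t), 1e-5
    for i in range(len(x_t)):
        e = np.zeros_like(x_t)
        e[i] = fd
        g[i] = (objective(e) - objective(-e)) / (2 * fd)
    return x_t + eps * np.sign(g)

# Run the attack online over a stream of inputs, tracking the
# adversarial and clean trajectories side by side.
h, clean_h = np.zeros(d_h), np.zeros(d_h)
adv_total, clean_total = 0.0, 0.0
for t in range(10):
    x = rng.normal(size=d_x)
    x_adv = predictive_attack_step(h, x)
    h = step(h, x_adv)
    adv_total += v @ h
    clean_h = step(clean_h, x)
    clean_total += v @ clean_h
```

In contrast, the "clairvoyant" attack the abstract compares against would replace the hallucinated future with the true future inputs, which is ideal but infeasible in a genuinely online setting.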
