Title
Robust Trajectory Prediction against Adversarial Attacks
Authors
Abstract
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, which can lead to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing an effective adversarial training method and (2) adding domain-specific data augmentation to mitigate performance degradation on clean data. We demonstrate that, compared to a model trained on clean data, our method improves performance on adversarial data by 46% at the cost of only a 3% performance degradation on clean data. Additionally, compared to existing robust methods, our method improves performance by 21% on adversarial examples and 9% on clean data. Our robust model is evaluated with a planner to study its downstream impact. We demonstrate that our model can significantly reduce the rate of severe accidents (e.g., collisions and off-road driving).
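The abstract does not specify the paper's exact training procedure, but the general idea of adversarial training for trajectory prediction can be sketched as follows: perturb the observed trajectory history within a small L-infinity ball to maximize prediction error (a PGD-style inner loop), then update the model on the perturbed input. Everything below is an illustrative toy, assuming a linear predictor with analytic gradients; the model, hyperparameters, and loss are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear predictor: flattened 8-step (x, y) history -> 12-step future.
# (16 input dims, 24 output dims). Purely illustrative, not the paper's model.
W = rng.normal(scale=0.1, size=(16, 24))

def predict(hist_flat, W):
    return hist_flat @ W

def mse(pred, future):
    return float(np.mean((pred - future) ** 2))

def pgd_perturb(hist, future, W, eps=0.1, alpha=0.02, steps=5):
    """Gradient-ascent perturbation of the history, projected to an L-inf ball.

    For this linear model the input gradient of the MSE is analytic:
    d/dx ||xW - y||^2 / n = 2 (xW - y) W^T / n.
    """
    delta = np.zeros_like(hist)
    for _ in range(steps):
        grad = 2.0 * ((hist + delta) @ W - future) @ W.T / future.size
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

def adv_train_step(W, hist, future, lr=0.05):
    """One adversarial training step: attack the input, then descend on W."""
    x = hist + pgd_perturb(hist, future, W)
    grad_W = 2.0 * x.T @ (x @ W - future) / future.size
    return W - lr * grad_W

# Synthetic batch of 32 agents (random data, stand-in for a real dataset).
hist = rng.normal(size=(32, 16))
future = rng.normal(size=(32, 24))

loss0 = mse(predict(hist + pgd_perturb(hist, future, W), W), future)
for _ in range(200):
    W = adv_train_step(W, hist, future)
loss1 = mse(predict(hist + pgd_perturb(hist, future, W), W), future)
print(loss1 < loss0)  # adversarial loss shrinks as the model is hardened
```

The second ingredient from the abstract, domain-specific data augmentation, would correspond to additionally training on transformed clean trajectories (e.g., perturbed but dynamically feasible histories) to limit the clean-data degradation that pure adversarial training tends to cause.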