Paper Title
Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
Paper Authors
Paper Abstract
There are two critical sensors for 3D perception in autonomous driving: the camera and the LiDAR. The camera provides rich semantic information such as color and texture, while the LiDAR captures the 3D shape and location of surrounding objects. Fusing these two modalities has been found to significantly boost the performance of 3D perception models, since each modality carries information complementary to the other. However, we observe that current datasets are captured from expensive vehicles explicitly designed for data collection, and for various reasons they cannot truly reflect realistic data distributions. To this end, we collect a series of real-world cases with noisy data distributions and systematically formulate a robustness benchmark toolkit that simulates these cases on any clean autonomous driving dataset. We showcase the effectiveness of our toolkit by establishing robustness benchmarks on two widely adopted autonomous driving datasets, nuScenes and Waymo, and then, to the best of our knowledge, holistically benchmark state-of-the-art fusion methods for the first time. We observe that: i) most fusion methods, when developed solely on these data, tend to fail inevitably when the LiDAR input is disrupted; ii) the improvement brought by the camera input is significantly smaller than that brought by the LiDAR input. We further propose an efficient robust training strategy to improve the robustness of current fusion methods. The benchmark and code are available at https://github.com/kcyu2014/lidar-camera-robust-benchmark
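The abstract does not specify how the toolkit injects corruptions; the released code at the URL above is authoritative. As a rough illustration only, the following minimal sketch shows how LiDAR-side disruptions of the kind described (sparse returns, ranging noise, a failed field-of-view sector) can be simulated on a clean point cloud. All function names and parameter values here are illustrative assumptions, not the toolkit's API:

import numpy as np

def drop_points(points: np.ndarray, drop_ratio: float, rng=None) -> np.ndarray:
    # Randomly discard a fraction of LiDAR points (simulates sparse/missing returns).
    rng = rng or np.random.default_rng()
    keep = rng.random(len(points)) >= drop_ratio
    return points[keep]

def jitter_points(points: np.ndarray, sigma: float = 0.02, rng=None) -> np.ndarray:
    # Add Gaussian noise to the xyz coordinates (simulates ranging noise).
    rng = rng or np.random.default_rng()
    noisy = points.copy()
    noisy[:, :3] += rng.normal(scale=sigma, size=(len(points), 3))
    return noisy

def drop_frontal_fov(points: np.ndarray, half_angle_deg: float = 45.0) -> np.ndarray:
    # Remove points inside a frontal wedge (simulates a blocked or failed sector).
    yaw = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return points[np.abs(yaw) > half_angle_deg]

# Example: corrupt an (N, 4) point cloud of x, y, z, intensity.
points = np.random.rand(1000, 4).astype(np.float32)
corrupted = jitter_points(drop_points(points, drop_ratio=0.5))

Applying such transforms to an otherwise clean dataset yields corrupted copies on which a trained detector can be re-evaluated, which is the benchmarking setup the abstract describes.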
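The abstract likewise leaves the robust training strategy unspecified. One common technique toward the stated goal, preventing the fusion model from over-relying on a single modality, is modality dropout during training, sketched below. The function robust_fusion_forward and the parameter p_drop are hypothetical names for illustration; the authors' actual strategy may differ:

import random
import torch

def robust_fusion_forward(lidar_feat, camera_feat, fuse, training=True, p_drop=0.25):
    # During training, occasionally zero out one modality's features so the
    # fusion head learns to make predictions when either input is disrupted.
    if training:
        r = random.random()
        if r < p_drop:
            lidar_feat = torch.zeros_like(lidar_feat)
        elif r < 2 * p_drop:
            camera_feat = torch.zeros_like(camera_feat)
    return fuse(lidar_feat, camera_feat)

# Example with a trivial concatenation-based fusion function:
fuse = lambda l, c: torch.cat([l, c], dim=1)
out = robust_fusion_forward(torch.rand(2, 64), torch.rand(2, 64), fuse)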