Paper Title
Depth Estimation from Monocular Images and Sparse Radar Data
Paper Authors
Paper Abstract
In this paper, we explore the possibility of achieving more accurate depth estimation by fusing monocular images and Radar points using a deep neural network. We give a comprehensive study of the fusion between RGB images and Radar measurements from different aspects and propose a working solution based on our observations. We find that the noise present in Radar measurements is one of the main reasons preventing the existing fusion methods developed for LiDAR data and images from being applied to the new problem of fusing Radar data and images. The experiments are conducted on the nuScenes dataset, one of the first datasets to feature camera, Radar, and LiDAR recordings in diverse scenes and weather conditions. Extensive experiments demonstrate that our method outperforms existing fusion methods. We also provide detailed ablation studies to show the effectiveness of each component in our method.
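To make the fusion setting concrete, below is a minimal sketch, assuming a simple early-fusion baseline in which Radar points are projected onto the camera image plane to form a sparse depth channel that is concatenated with the RGB image before a small encoder-decoder predicts dense depth. The abstract does not describe the authors' actual architecture, so the function and class names here (radar_to_sparse_depth, EarlyFusionDepthNet) and all layer choices are hypothetical illustrations, not the paper's method.

```python
# Minimal early-fusion sketch (NOT the paper's architecture, which the abstract
# does not specify): Radar points are projected onto the image plane to form a
# sparse depth map, concatenated with RGB, and passed through a toy network.
import torch
import torch.nn as nn


def radar_to_sparse_depth(points_cam, intrinsics, height, width):
    """Rasterize Radar points (N, 3) in camera coordinates into a sparse
    depth map of shape (1, H, W). Inputs are assumed; nuScenes provides the
    calibration needed to obtain camera-frame points and intrinsics."""
    depth = points_cam[:, 2]
    valid = depth > 0                     # keep points in front of the camera
    pts, depth = points_cam[valid], depth[valid]
    uv = (intrinsics @ pts.T).T           # homogeneous pixel coordinates
    u = (uv[:, 0] / uv[:, 2]).long()
    v = (uv[:, 1] / uv[:, 2]).long()
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    sparse = torch.zeros(1, height, width)
    sparse[0, v[inside], u[inside]] = depth[inside]
    return sparse


class EarlyFusionDepthNet(nn.Module):
    """Toy RGB + sparse-Radar-depth fusion network (4 input channels)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, rgb, sparse_radar_depth):
        x = torch.cat([rgb, sparse_radar_depth], dim=1)  # early fusion
        return self.decoder(self.encoder(x))             # dense depth map


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 160, 320)                        # dummy image
    pts = torch.rand(50, 3) * torch.tensor([10.0, 2.0, 50.0])  # dummy Radar points
    K = torch.tensor([[200.0, 0.0, 160.0],
                      [0.0, 200.0, 80.0],
                      [0.0, 0.0, 1.0]])                      # dummy intrinsics
    sparse = radar_to_sparse_depth(pts, K, 160, 320).unsqueeze(0)
    pred = EarlyFusionDepthNet()(rgb, sparse)
    print(pred.shape)  # torch.Size([1, 1, 160, 320])
```

This naive early-fusion treatment mirrors how LiDAR points are often fed to depth-completion networks; as the abstract notes, the noise in Radar measurements is precisely what keeps such LiDAR-style schemes from transferring directly, which motivates the fusion design studied in the paper.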