Thesis Title
Deep learning for radar data exploitation of autonomous vehicle
Thesis Author
Abstract
Autonomous driving requires a detailed understanding of complex driving scenes. The redundancy and complementarity of the vehicle's sensors provide an accurate and robust comprehension of the environment, thereby increasing the level of performance and safety. This thesis focuses on the automotive RADAR, which is a low-cost active sensor measuring properties of surrounding objects, including their relative speed, and has the key advantage of not being impacted by adverse weather conditions. With the rapid progress of deep learning and the availability of public driving datasets, the perception ability of vision-based driving systems has considerably improved. The RADAR sensor is seldom used for scene understanding due to its poor angular resolution, the size, noise, and complexity of RADAR raw data, as well as the lack of available datasets. This thesis proposes an extensive study of RADAR scene understanding, from the construction of an annotated dataset to the conception of adapted deep learning architectures. First, this thesis details approaches to tackle the current lack of data. A simple simulation as well as generative methods for creating annotated data will be presented. It will also describe the CARRADA dataset, composed of synchronised camera and RADAR data with a semi-automatic annotation method. This thesis then presents a set of proposed deep learning architectures with their associated loss functions for RADAR semantic segmentation. It also introduces a method opening up research into the fusion of LiDAR and RADAR sensors for scene understanding. Finally, this thesis describes a collaborative contribution, the RADIal dataset with synchronised High-Definition (HD) RADAR, LiDAR and camera. A deep learning architecture is also proposed to estimate the RADAR signal processing pipeline while performing multitask learning for object detection and free driving space segmentation.