Paper Title
Self-Supervised Scene Flow Estimation with 4-D Automotive Radar
Paper Authors
Paper Abstract
Scene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is key to long-term mobile autonomy. While scene flow estimation from LiDAR has progressed recently, it remains largely unknown how to estimate scene flow from 4-D radar, an increasingly popular automotive sensor valued for its robustness against adverse weather and lighting conditions. Compared with LiDAR point clouds, radar data are drastically sparser, noisier, and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work addresses the above challenges and estimates scene flow from 4-D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are designed specifically to cope with intractable radar data. Real-world experimental results validate that our method robustly estimates radar scene flow in the wild and effectively supports the downstream task of motion segmentation.