Paper Title

Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data

Authors

Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

Abstract

3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled the issue by learning inter-domain mapping, but the transferability is constrained by the training configuration, and the training is susceptible to peculiar lossy noises called ray-drop. To address these issues, this paper proposes a generative model of LiDAR range images applicable to data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial network along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation. We demonstrate that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.
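The abstract hinges on two technical ideas: range imaging (projecting each LiDAR return onto a 2D grid indexed by azimuth and elevation, which is what makes image-based generative models applicable) and a differentiable ray-drop effect (modeling the binary dropout of laser returns in a way that still admits gradients). The following is a rough, hypothetical sketch of both, not the authors' implementation: the function names and sensor parameters (fov_up, fov_down, the 64x2048 resolution) are illustrative assumptions, and the paper's exact ray-drop formulation may differ from the straight-through Gumbel-sigmoid relaxation used here.

```python
import numpy as np
import torch


def pointcloud_to_range_image(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud onto an equirectangular range image.

    fov_up / fov_down are in degrees; the defaults loosely follow a
    64-beam spinning LiDAR and are illustrative, not from the paper.
    """
    depth = np.linalg.norm(points, axis=1)                     # range per return
    yaw = np.arctan2(points[:, 1], points[:, 0])               # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))  # elevation

    fov_up_rad = np.radians(fov_up)
    fov_rad = np.radians(fov_up - fov_down)

    # Map angles to pixel coordinates and rasterize the ranges.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * w), 0, w - 1).astype(int)
    v = np.clip(np.floor((fov_up_rad - pitch) / fov_rad * h), 0, h - 1).astype(int)

    image = np.zeros((h, w), dtype=np.float32)  # zeros = no return (ray-drop)
    image[v, u] = depth
    return image


def differentiable_ray_drop(range_image, drop_logits, tau=1.0, eps=1e-10):
    """Apply a binary keep/drop mask per pixel that still passes gradients.

    Uses a straight-through Gumbel-sigmoid (Binary Concrete) relaxation,
    a common choice for differentiable binary masks.
    """
    u = torch.rand_like(drop_logits)
    noise = torch.log(u + eps) - torch.log(1.0 - u + eps)  # Logistic(0, 1) sample
    soft = torch.sigmoid((drop_logits + noise) / tau)      # relaxed Bernoulli
    hard = (soft > 0.5).float()
    # Straight-through: hard 0/1 values forward, soft gradients backward.
    mask = hard + soft - soft.detach()
    return range_image * mask
```

The zero entries left by the projection are exactly the ray-drop pattern the generative model must reproduce to look realistic, and the straight-through trick keeps the mask binary in the forward pass while letting gradients flow through the relaxed sigmoid during training.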
