Title
SST-Calib: Simultaneous Spatial-Temporal Parameter Calibration between LIDAR and Camera
Authors
Abstract
With information from multiple input modalities, sensor fusion-based algorithms usually outperform their single-modality counterparts in robotics. Camera and LIDAR, with complementary semantic and depth information, are the typical choices for detection tasks in complicated driving environments. For most camera-LIDAR fusion algorithms, however, the calibration of the sensor suite greatly impacts performance. More specifically, the detection algorithm usually requires an accurate geometric relationship among multiple sensors as input, and it is often assumed that the contents from these sensors are captured at the same time. Preparing such sensor suites involves carefully designed calibration rigs and accurate synchronization mechanisms, and the preparation process is usually done offline. In this work, a segmentation-based framework is proposed to jointly estimate the geometric and temporal parameters in the calibration of a camera-LIDAR suite. A semantic segmentation mask is first applied to both sensor modalities, and the calibration parameters are optimized through a pixel-wise bidirectional loss. We specifically incorporate the velocity information from optical flow for the temporal parameters. Since supervision is performed only at the segmentation level, no calibration label is needed within the framework. The proposed algorithm is tested on the KITTI dataset, and the results show accurate real-time calibration of both geometric and temporal parameters.
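The project-and-compare idea behind a segmentation-driven calibration loss can be sketched as follows. This is an illustrative approximation only, not the paper's implementation: the function names, the pinhole projection model, and the simple 0/1 label-agreement loss are our own assumptions, whereas SST-Calib uses a bidirectional pixel-wise loss and jointly optimizes the spatial-temporal parameters.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project 3D LIDAR points (N, 3) into pixel coordinates.

    T_cam_lidar: (4, 4) extrinsic transform from LIDAR to camera frame.
    K: (3, 3) camera intrinsic matrix (pinhole model, assumed).
    Returns (M, 2) pixel coords and a boolean mask of points in front
    of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    uv = (K @ pts_cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3], in_front

def segmentation_alignment_loss(uv, point_labels, seg_mask):
    """Fraction of projected points whose LIDAR semantic class
    disagrees with the camera segmentation mask at that pixel.
    A correct calibration should drive this mismatch down.
    """
    h, w = seg_mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not valid.any():
        return 1.0  # nothing projects into the image: worst case
    agree = seg_mask[v[valid], u[valid]] == point_labels[valid]
    return 1.0 - agree.mean()

# Toy scene: one labeled point straight ahead, identity extrinsics.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
seg_mask = np.zeros((48, 64), dtype=int)
seg_mask[24, 32] = 1  # camera says class 1 at the principal point

uv, front = project_points(np.array([[0.0, 0.0, 10.0]]), np.eye(4), K)
loss = segmentation_alignment_loss(uv, np.array([1])[front], seg_mask)
```

In an optimization loop, one would perturb `T_cam_lidar` (and, for the temporal offset, shift the point cloud by velocity times the time lag) and minimize this loss; the paper's bidirectional variant also projects image labels back onto the point cloud.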