Paper Title
A Combined Approach Toward Consistent Reconstructions of Indoor Spaces Based on 6D RGB-D Odometry and KinectFusion
Paper Authors
Paper Abstract
We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction and feature matching on both the RGB and depth image planes. Furthermore, we feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) algorithm to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on the publicly available RGB-D SLAM benchmark dataset by Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms the state-of-the-art RGB-D SLAM system in accuracy. Moreover, our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
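The core of the frame-to-frame odometry step described above is recovering a 6D rigid transform from keypoints matched between consecutive frames, where depth lets each 2D keypoint be back-projected to 3D. The sketch below is illustrative only, not the paper's exact pipeline: it assumes matched keypoints with valid depth and pinhole intrinsics (`fx`, `fy`, `cx`, `cy` are placeholder values), and estimates the relative pose with the standard Kabsch/Horn least-squares method.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (meters) to a 3D camera-frame point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with q_i ~= R @ p_i + t.

    P, Q: (N, 3) arrays of matched 3D points from two consecutive frames.
    Uses the Kabsch/Horn SVD solution on the centered cross-covariance.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In a real pipeline, the matched 3D correspondences would come from descriptor matching (e.g. on RGB keypoints) filtered by RANSAC before this closed-form solve, and KinectFusion's ICP would then refine the resulting pose.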