Paper Title
Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects
Paper Authors
Paper Abstract
Tracking objects in 3D space and predicting their 6DoF pose is an essential task in computer vision. State-of-the-art approaches often rely on object texture to tackle this problem. However, while they achieve impressive results, many objects do not contain sufficient texture, violating the main underlying assumption. In the following, we thus propose ICG, a novel probabilistic tracker that fuses region and depth information and only requires the object geometry. Our method deploys correspondence lines and points to iteratively refine the pose. We also implement robust occlusion handling to improve performance in real-world settings. Experiments on the YCB-Video, OPT, and Choi datasets demonstrate that, even for textured objects, our approach outperforms the current state of the art with respect to accuracy and robustness. At the same time, ICG shows fast convergence and outstanding efficiency, requiring only 1.3 ms per frame on a single CPU core. Finally, we analyze the influence of individual components and discuss our performance compared to deep learning-based methods. The source code of our tracker is publicly available.
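The abstract only names the core machinery, correspondence lines and points driving iterative pose refinement, without detail. As a loose, self-contained illustration of that general idea (and not the paper's actual algorithm), the Python sketch below runs a Gauss-Newton-style 6DoF refinement over weighted point correspondences; the function names, the first-order SE(3) update, and the synthetic demo are all hypothetical, and ICG's real region term operates on correspondence lines with probabilistic weighting rather than plain point residuals.

```python
# Illustrative sketch only: Gauss-Newton-style iterative 6DoF pose refinement
# from weighted correspondences. A single synthetic residual set stands in for
# the fused region (correspondence-line) and depth (correspondence-point) terms.
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_se3(xi):
    """First-order approximation of the SE(3) exponential map for a small
    twist xi = (omega, t); sufficient for an illustrative update step."""
    T = np.eye(4)
    T[:3, :3] = np.eye(3) + skew(xi[:3])
    T[:3, 3] = xi[3:]
    return T

def refine_pose(T, model_pts, target_pts, weights, iters=10):
    """Iteratively minimize weighted point residuals over the pose T.
    In a fused tracker, each modality would contribute its own weighted
    residual block; here one block stands in for both."""
    for _ in range(iters):
        # Transform model points into the current pose estimate.
        p = (T[:3, :3] @ model_pts.T).T + T[:3, 3]
        r = target_pts - p                           # residuals, shape (N, 3)
        # Jacobian of each transformed point w.r.t. the twist (omega, t):
        # d p_i / d xi = [-skew(p_i), I].
        J = np.zeros((len(p) * 3, 6))
        for i, pi in enumerate(p):
            J[3 * i:3 * i + 3, :3] = -skew(pi)
            J[3 * i:3 * i + 3, 3:] = np.eye(3)
        W = np.repeat(weights, 3)                    # one weight per residual row
        H = J.T @ (W[:, None] * J)                   # weighted normal equations
        b = J.T @ (W * r.reshape(-1))
        xi = np.linalg.solve(H, b)                   # 6DoF twist update
        T = exp_se3(xi) @ T                          # left-multiplicative update
    return T

if __name__ == "__main__":
    # Hypothetical demo: recover a small ground-truth pose from noise-free data.
    rng = np.random.default_rng(0)
    model = rng.normal(size=(40, 3))
    T_true = exp_se3(np.array([0.05, -0.02, 0.03, 0.1, -0.1, 0.05]))
    target = (T_true[:3, :3] @ model.T).T + T_true[:3, 3]
    T_est = refine_pose(np.eye(4), model, target, np.ones(len(model)))
    print(np.allclose(T_est, T_true, atol=1e-4))     # True
```

Because the residual is linear in the twist under this first-order parameterization, the sketch converges in a single step on clean data; the probabilistic weights are where a real tracker would encode per-correspondence uncertainty and occlusion handling.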