Paper Title

Learning Online Multi-Sensor Depth Fusion

Paper Authors

Sandström, Erik, Oswald, Martin R., Kumar, Suryansh, Weder, Silvan, Yu, Fisher, Sminchisescu, Cristian, Van Gool, Luc

Paper Abstract

Many hand-held or mixed reality devices are used with a single sensor for 3D reconstruction, although they often comprise multiple sensors. Multi-sensor depth fusion is able to substantially improve the robustness and accuracy of 3D reconstruction methods, but existing techniques are not robust enough to handle sensors which operate with diverse value ranges as well as noise and outlier statistics. To this end, we introduce SenFuNet, a depth fusion approach that learns sensor-specific noise and outlier statistics and combines the data streams of depth frames from different sensors in an online fashion. Our method fuses multi-sensor depth streams regardless of time synchronization and calibration and generalizes well with little training data. We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets, as well as the Replica dataset. Experiments demonstrate that our fusion strategy outperforms traditional and recent online depth fusion approaches. In addition, the combination of multiple sensors yields more robust outlier handling and more precise surface reconstruction than the use of a single sensor. The source code and data are available at https://github.com/tfy14esa/SenFuNet.
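The abstract only summarizes the approach. As a rough illustration of what online, confidence-weighted multi-sensor depth fusion into a shared TSDF volume can look like, the sketch below uses classical weighted running-average integration with a per-sensor confidence map standing in for the learned noise and outlier statistics. The function name, argument layout, and camera-model assumptions are ours for illustration; this is not the paper's SenFuNet architecture, whose actual implementation is in the linked repository.

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, depth, conf, fx, fy, cx, cy, pose,
                         voxel_size=0.01, origin=(0.0, 0.0, 0.0), trunc=0.05):
    """Integrate one depth frame into a TSDF volume.

    `conf` is a per-pixel confidence map for this sensor (in SenFuNet this
    role is played by learned, sensor-specific noise/outlier statistics).
    Classical weighted running-average fusion; illustrative baseline only.
    """
    D, H, W = tsdf.shape
    # Voxel centers in world coordinates (C-order matches the flattened volume).
    zs, ys, xs = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) * voxel_size + np.asarray(origin)
    # World -> camera, with `pose` the 4x4 camera-to-world transform.
    R, t = pose[:3, :3], pose[:3, 3]
    cam = (pts - t) @ R
    x, y, z = cam[:, 0], cam[:, 1], cam[:, 2]
    valid = z > 1e-6
    # Pinhole projection into the depth image.
    u = np.round(fx * x / np.maximum(z, 1e-6) + cx).astype(int)
    v = np.round(fy * y / np.maximum(z, 1e-6) + cy).astype(int)
    h, w = depth.shape
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Truncated signed distance along the ray and per-voxel confidence.
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    c = np.zeros_like(z)
    c[valid] = conf[v[valid], u[valid]]
    upd = valid & (d - z > -trunc)
    # Confidence-weighted running average per voxel.
    flat_tsdf, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    w_old = flat_w[upd]
    flat_tsdf[upd] = (flat_tsdf[upd] * w_old + sdf[upd] * c[upd]) \
                     / np.maximum(w_old + c[upd], 1e-6)
    flat_w[upd] = w_old + c[upd]
    return tsdf, weights
```

Online multi-sensor fusion then amounts to calling this routine on the incoming frames of each sensor (e.g. ToF and stereo), each with its own confidence map, against the same `tsdf`/`weights` volume; no time synchronization between the streams is required because every frame is integrated independently as it arrives.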
