Paper Title
NeuralFusion: Online Depth Fusion in Latent Space
Paper Authors
Paper Abstract
We present a novel online depth map fusion approach that learns depth map aggregation in a latent feature space. While previous fusion methods use an explicit scene representation like signed distance functions (SDFs), we propose a learned feature representation for the fusion. The key idea is a separation between the scene representation used for the fusion and the output scene representation, via an additional translator network. Our neural network architecture consists of two main parts: a depth and feature fusion sub-network, which is followed by a translator sub-network to produce the final surface representation (e.g. TSDF) for visualization or other tasks. Our approach is an online process, handles high noise levels, and is particularly able to deal with gross outliers common for photometric stereo-based depth maps. Experiments on real and synthetic data demonstrate improved results compared to the state of the art, especially in challenging scenarios with large amounts of noise and outliers.
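The two-stage design described in the abstract (online fusion in a latent feature space, followed by a translator that produces an explicit representation such as a TSDF) can be sketched as follows. This is a minimal illustrative sketch, not the paper's trained networks: the feature dimension, the running-average update, and the linear translator are hypothetical placeholders standing in for the learned fusion and translator sub-networks.

```python
import numpy as np

FEATURE_DIM = 8  # hypothetical latent feature size per voxel


class LatentFusion:
    """Toy online fusion: keeps a latent feature vector and a weight per voxel."""

    def __init__(self, num_voxels):
        self.features = np.zeros((num_voxels, FEATURE_DIM))
        self.weights = np.zeros(num_voxels)

    def integrate(self, voxel_ids, new_features):
        # Running weighted average in latent space; in the paper this update
        # is performed by a learned fusion sub-network instead.
        w = self.weights[voxel_ids][:, None]
        self.features[voxel_ids] = (
            w * self.features[voxel_ids] + new_features
        ) / (w + 1.0)
        self.weights[voxel_ids] += 1.0


def translate_to_tsdf(features, proj=None):
    """Toy translator: a fixed linear map from latent features to a truncated
    scalar value, standing in for the learned translator sub-network."""
    if proj is None:
        proj = np.full(FEATURE_DIM, 1.0 / FEATURE_DIM)  # placeholder weights
    return np.clip(features @ proj, -1.0, 1.0)  # truncation, as in a TSDF


# Usage: fuse two noisy latent "measurements" for four voxels, then translate.
fusion = LatentFusion(num_voxels=4)
rng = np.random.default_rng(0)
for _ in range(2):
    fusion.integrate(np.arange(4), rng.normal(0.1, 0.05, (4, FEATURE_DIM)))
tsdf = translate_to_tsdf(fusion.features)
print(tsdf.shape)  # one truncated value per voxel
```

The separation mirrors the abstract's key idea: the fused latent state (`fusion.features`) is never exposed directly, and only `translate_to_tsdf` converts it to the output representation, so the fusion state can encode more than a single signed distance per voxel.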