Paper Title
Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model
Paper Authors
Paper Abstract
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications. However, current approaches suffer from drawbacks such as struggling with large scene deformations, inaccurate shape completion, or requiring 2D point tracks. In contrast, our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering. This technique includes three components that are new in the context of non-rigid 3D reconstruction, i.e., 1) a coordinate-based and implicit neural representation for non-rigid scenes, which, in conjunction with differentiable volume rendering, enables an unbiased reconstruction of dynamic scenes, 2) a proof that extends the unbiased formulation of volume rendering to dynamic scenes, and 3) a novel dynamic scene flow loss, which enables the reconstruction of larger deformations by leveraging the coarse estimates of other methods. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
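As a rough illustration of components 1) and 3) from the abstract, the following is a minimal PyTorch-style sketch, assuming a coordinate-based deformation network that warps observed points into a canonical space and a scene-flow term supervised by a coarse flow estimate from another method. The module names, network sizes, and the exact form of the loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a coordinate-based deformation
# field and a dynamic scene-flow loss. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Coordinate-based MLP: maps an observed point x at time t to the
    canonical space in which a static implicit shape (e.g., an SDF) lives."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted offset into canonical space
        )

    def forward(self, x, t):
        # x: (N, 3) sample points, t: (N, 1) normalized frame time
        return x + self.mlp(torch.cat([x, t], dim=-1))

def scene_flow_loss(deform, x, t0, t1, flow_prior):
    """Hypothetical scene-flow loss: points related by a coarse flow
    estimate (x at t0 -> x + flow_prior at t1) should map to the same
    canonical location under the deformation field."""
    c0 = deform(x, t0)
    c1 = deform(x + flow_prior, t1)
    return ((c1 - c0) ** 2).sum(dim=-1).mean()

# Example usage with random stand-in data:
deform = DeformationField()
x = torch.rand(1024, 3)
t0 = torch.zeros(1024, 1)
t1 = torch.full((1024, 1), 0.1)
flow = torch.zeros(1024, 3)  # stand-in for a coarse flow estimate
loss = scene_flow_loss(deform, x, t0, t1, flow)
```

In this reading, the coarse flow estimate only regularizes the deformation field; the surface itself would still be recovered through the (unbiased) differentiable volume rendering described in the abstract.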