Paper Title


SimpleRecon: 3D Reconstruction Without 3D Convolutions

Authors

Mohamed Sayed, John Gibson, Jamie Watson, Victor Prisacariu, Michael Firman, Clément Godard

Abstract


Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods have emerged that perform reconstruction directly in final 3D volumetric feature space. While these methods have shown impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route, and show how focusing on high quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully-designed 2D CNN which utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume which allows informed depth plane scoring. Our method achieves a significant lead over the current state-of-the-art for depth estimation and close or better for 3D reconstruction on ScanNet and 7-Scenes, yet still allows for online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
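The plane-sweep feature volume mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a pinhole camera, a single source view, fronto-parallel depth planes, and nearest-neighbour sampling, and uses a plain dot product as the matching score. The function name and argument shapes are hypothetical.

```python
import numpy as np

def plane_sweep_cost_volume(ref_feats, src_feats, K, R, t, depths):
    """Hypothetical sketch of a dot-product plane-sweep cost volume.

    ref_feats, src_feats: (C, H, W) feature maps from a 2D CNN.
    K: (3, 3) shared intrinsics; R (3, 3), t (3,): relative pose source <- reference.
    depths: iterable of depth-plane hypotheses. Returns (D, H, W) matching scores.
    """
    C, H, W = ref_feats.shape
    Kinv = np.linalg.inv(K)
    # Reference pixel grid in homogeneous coordinates, shape (3, H*W).
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    n = np.array([0.0, 0.0, 1.0])  # fronto-parallel plane normal
    cost = np.zeros((len(depths), H, W))
    for i, d in enumerate(depths):
        # Per-plane homography mapping reference pixels into the source image.
        Hmat = K @ (R - np.outer(t, n) / d) @ Kinv
        warped = Hmat @ pix
        u = np.round(warped[0] / warped[2]).astype(int)
        v = np.round(warped[1] / warped[2]).astype(int)
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        # Nearest-neighbour sample of source features at the warped locations;
        # out-of-bounds pixels keep a zero feature (and hence a zero score).
        sampled = np.zeros((C, H * W))
        sampled[:, valid] = src_feats[:, v[valid], u[valid]]
        # Dot-product matching score per pixel for this depth plane.
        cost[i] = (ref_feats.reshape(C, -1) * sampled).sum(0).reshape(H, W)
    return cost
```

In the paper's formulation, keyframe and geometric metadata are additionally concatenated into this volume before scoring; here only the raw matching term is shown.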
