Paper Title
Neural 3D Reconstruction in the Wild

Authors

Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely

Abstract
We are witnessing an explosion of neural implicit representations in computer vision and graphics. Their applicability has recently expanded beyond tasks such as shape generation and image-based rendering to the fundamental problem of image-based 3D reconstruction. However, existing methods typically assume constrained 3D environments with constant illumination captured by a small set of roughly uniformly distributed cameras. We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections in the presence of varying illumination. To achieve this, we propose a hybrid voxel- and surface-guided sampling technique that allows for more efficient ray sampling around surfaces and leads to significant improvements in reconstruction quality. Further, we present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes. We perform extensive experiments, demonstrating that our approach surpasses both classical and neural reconstruction methods on a wide variety of metrics.
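The hybrid voxel- and surface-guided sampling mentioned in the abstract can be illustrated with a minimal sketch. The idea, as described, is to sample rays more efficiently by concentrating samples near surfaces. The sketch below is an assumption-laden illustration, not the paper's actual implementation: it combines stratified samples over a voxel-bounded ray segment with Gaussian-distributed samples around an estimated surface depth. All function and parameter names (`hybrid_ray_samples`, `surface_t`, `sigma`) are hypothetical.

```python
import numpy as np

def hybrid_ray_samples(t_near, t_far, surface_t=None,
                       n_voxel=32, n_surface=16, sigma=0.05, rng=None):
    """Hedged sketch of hybrid sampling along one ray.

    t_near, t_far : bounds of the ray segment inside occupied voxels
                    (assumed to come from a coarse voxel grid).
    surface_t     : optional current estimate of the surface depth,
                    e.g. from a previous rendering pass.
    Returns sorted sample depths along the ray.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Voxel-guided part: stratified samples across the occupied segment,
    # one random sample per equal-length bin.
    edges = np.linspace(t_near, t_far, n_voxel + 1)
    voxel_ts = edges[:-1] + rng.random(n_voxel) * (edges[1:] - edges[:-1])

    if surface_t is None:
        return np.sort(voxel_ts)

    # Surface-guided part: extra samples concentrated around the
    # estimated surface depth, clipped to stay inside the segment.
    surf_ts = np.clip(rng.normal(surface_t, sigma, n_surface), t_near, t_far)
    return np.sort(np.concatenate([voxel_ts, surf_ts]))
```

For example, `hybrid_ray_samples(0.5, 2.0, surface_t=1.2)` returns 48 sorted depths in `[0.5, 2.0]`, with 16 of them clustered near `t = 1.2`. The intuition matches the abstract's claim: once a surface estimate exists, most rendering contribution comes from a thin band around it, so densifying samples there improves reconstruction quality at fixed sample budget.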