Paper Title
SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis
Paper Authors
Paper Abstract
Multi-view stereopsis (MVS) attempts to recover a 3D model from 2D images. As the observations become sparser, the significant loss of 3D information makes the MVS problem more challenging. Instead of focusing only on densely sampled conditions, we investigate sparse MVS with large baseline angles, since sparser sensing is more practical and more cost-efficient. By investigating various observation sparsities, we show that the classical depth-fusion pipeline becomes powerless in the case of a larger baseline angle, which worsens the photo-consistency check. As another line of solution, we present SurfaceNet+, a volumetric method that handles the 'incompleteness' and 'inaccuracy' problems induced by a very sparse MVS setup. Specifically, the former problem is handled by a novel volume-wise view selection approach, which is superior in selecting valid views while discarding invalid, occluded views by considering the geometric prior. Furthermore, the latter problem is handled via a multi-scale strategy that refines the recovered geometry around regions with repeating patterns. The experiments demonstrate a tremendous performance gap between SurfaceNet+ and state-of-the-art methods in terms of precision and recall. Under the extreme sparse-MVS settings on two datasets, where existing methods can only return very few points, SurfaceNet+ still works as well as in the dense MVS setting. The benchmark and the implementation are publicly available at https://github.com/mjiUST/SurfaceNet-plus.
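The two geometric notions the abstract relies on, the baseline angle between views and occlusion-aware, volume-wise view selection, can be illustrated with a minimal sketch. This is not the authors' implementation; the helper names (`baseline_angle`, `select_views`), the point-based coarse geometry, and the occlusion tolerance are illustrative assumptions only:

```python
import numpy as np

def baseline_angle(p, c1, c2):
    """Angle (degrees) subtended at 3D point p by the rays to camera centers c1, c2."""
    v1, v2 = c1 - p, c2 - p
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def select_views(voxel_center, cam_centers, coarse_points, occlusion_tol=0.05):
    """Hypothetical volume-wise view selection: keep only the views whose line of
    sight from the voxel to the camera is not blocked by a coarse geometry prior
    (here, a point cloud; a view is 'occluded' if some coarse point lies close to
    the ray strictly between the voxel and the camera)."""
    valid = []
    for i, c in enumerate(cam_centers):
        ray = c - voxel_center
        dist = np.linalg.norm(ray)
        ray = ray / dist
        occluded = False
        for q in coarse_points:
            t = np.dot(q - voxel_center, ray)       # position of q along the ray
            if 1e-6 < t < dist:
                perp = np.linalg.norm(voxel_center + t * ray - q)  # distance off the ray
                if perp < occlusion_tol:
                    occluded = True
                    break
        if not occluded:
            valid.append(i)
    return valid
```

For example, with a voxel at the origin, cameras at (0, 0, 10) and (10, 0, 0), and a coarse surface point at (0, 0, 5), the first camera is blocked and only the second is kept, while the two views subtend a 90° baseline angle at the voxel.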