Title
VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction
Authors
Abstract
The success of Neural Radiance Fields (NeRF) in novel view synthesis has inspired researchers to propose neural implicit scene reconstruction. However, most existing neural implicit reconstruction methods optimize per-scene parameters and therefore lack generalizability to new scenes. We introduce VolRecon, a novel generalizable implicit reconstruction method with Signed Ray Distance Function (SRDF). To reconstruct the scene with fine details and little noise, VolRecon combines projection features aggregated from multi-view features and volume features interpolated from a coarse global feature volume. Using a ray transformer, we compute SRDF values of sampled points on a ray and then render color and depth. On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction. Furthermore, our approach exhibits good generalization performance on the large-scale ETH3D benchmark.
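The abstract's rendering step (per-sample SRDF values on a ray composited into color and depth) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the logistic mapping from SRDF to opacity, the sharpness parameter `s`, and the toy inputs are all assumptions, loosely following NeuS-style SDF renderers.

```python
import numpy as np

def render_ray(srdf, colors, t, s=50.0):
    """Volume-render color and depth from per-sample SRDF values.

    Sketch only: maps signed ray distances to opacity with a logistic
    CDF (a NeuS-style convention, assumed here), then composites the
    samples front-to-back. `s` is a hypothetical sharpness parameter.
    """
    # Logistic CDF of the SRDF: near 1 in front of the surface
    # (positive SRDF), near 0 behind it (negative SRDF).
    phi = 1.0 / (1.0 + np.exp(-s * srdf))
    # Discrete opacity between consecutive samples, clamped to [0, 1].
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    # Front-to-back transmittance and compositing weights.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = trans * alpha
    color = (w[:, None] * colors[:-1]).sum(axis=0)
    depth = (w * t[:-1]).sum()
    return color, depth

# Toy ray: a surface (SRDF sign change) near t = 0.5, constant radiance.
t = np.linspace(0.0, 1.0, 64)
srdf = 0.5 - t                    # positive before the surface, negative after
colors = np.full((64, 3), 0.8)   # constant gray color at every sample
color, depth = render_ray(srdf, colors, t)
```

With a sharp logistic, the compositing weights concentrate at the SRDF zero crossing, so the rendered depth lands near t = 0.5 and the rendered color near the constant sample color, which is the behavior the abstract relies on when supervising depth and color.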