Paper Title
View Synthesis with Sculpted Neural Points
Paper Authors
Paper Abstract
We address the task of view synthesis, generating novel views of a scene given a set of images as input. In many recent works such as NeRF (Mildenhall et al., 2020), the scene geometry is parameterized using neural implicit representations (i.e., MLPs). Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency. In this work, we propose a new approach that performs view synthesis using point clouds. It is the first point-based method that achieves better visual quality than NeRF while being 100x faster in rendering speed. Our approach builds on existing works on differentiable point-based rendering but introduces a novel technique we call "Sculpted Neural Points (SNP)", which significantly improves the robustness to errors and holes in the reconstructed point cloud. We further propose to use view-dependent point features based on spherical harmonics to capture non-Lambertian surfaces, and new designs in the point-based rendering pipeline that further boost the performance. Finally, we show that our system supports fine-grained scene editing. Code is available at https://github.com/princeton-vl/SNP.
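The abstract mentions view-dependent point features based on spherical harmonics for modeling non-Lambertian appearance. Below is a minimal, hypothetical sketch (not the authors' code) of the general idea: each point stores spherical-harmonic coefficients, and its color is obtained by evaluating the SH basis in the viewing direction. The function names, the degree-1 truncation, and the RGB coefficient layout are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: view-dependent per-point colors from spherical-harmonic
# (SH) coefficients. Not the SNP implementation; degree and layout are assumed.

def sh_basis_degree1(view_dirs):
    """Real SH basis up to degree 1 for unit view directions (N, 3) -> (N, 4)."""
    x, y, z = view_dirs[:, 0], view_dirs[:, 1], view_dirs[:, 2]
    c0 = 0.28209479177387814   # Y_0^0 constant
    c1 = 0.4886025119029199    # |Y_1^m| constant
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=-1)

def view_dependent_colors(sh_coeffs, view_dirs):
    """sh_coeffs: (N, 4, 3) per-point RGB SH coefficients; view_dirs: (N, 3) unit vectors."""
    basis = sh_basis_degree1(view_dirs)               # (N, 4)
    return np.einsum('nk,nkc->nc', basis, sh_coeffs)  # (N, 3) RGB per point

# Toy usage: 5 points with random coefficients, all viewed along +z.
pts_sh = np.random.randn(5, 4, 3) * 0.1
dirs = np.tile(np.array([[0.0, 0.0, 1.0]]), (5, 1))
print(view_dependent_colors(pts_sh, dirs).shape)  # (5, 3)
```

In this reading, the SH coefficients act as learnable per-point features optimized during reconstruction, so the rendered color can vary smoothly with viewing direction without an expensive per-ray network query.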