Paper Title


Object-Centric Neural Scene Rendering

Paper Authors

Michelle Guo, Alireza Fathi, Jiajun Wu, Thomas Funkhouser

Paper Abstract


We present a method for composing photorealistic scenes from captured images of objects. Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene. While NeRFs synthesize realistic pictures, they only model static scenes and are closely tied to specific imaging conditions. This property makes NeRFs hard to generalize to new scenarios, including new lighting or new arrangements of objects. Instead of learning a scene radiance field as a NeRF does, we propose to learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network. This enables rendering scenes even when objects or lights move, without retraining. Combined with a volumetric path tracing procedure, our framework is capable of rendering both intra- and inter-object light transport effects including occlusions, specularities, shadows, and indirect illumination. We evaluate our approach on scene composition and show that it generalizes to novel illumination conditions, producing photorealistic, physically accurate renderings of multi-object scenes.
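To make the abstract's architectural claim concrete, below is a minimal sketch, assuming a NeRF-style MLP in PyTorch: an OSF-like network that maps a 3D point plus a viewing direction and a lighting direction to a volume density and an RGB scattering value, followed by standard emission-absorption compositing along a ray. The layer sizes, positional encoding, and compositing step are illustrative assumptions, not the authors' released implementation or their full path tracer.

```python
# Minimal sketch (NOT the authors' code) of an OSF-style per-object network and a
# volume-rendering compositing step. Input/output layout follows the abstract:
# (point x, view direction w_o, light direction w_i) -> (density sigma, scattering rho).
# Layer widths, frequency counts, and the quadrature are placeholder assumptions.
import math
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """NeRF-style sinusoidal encoding (frequency count is an assumption)."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32, device=x.device) * math.pi
    angles = x[..., None] * freqs                       # (..., dim, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., dim * 2 * num_freqs)


class ObjectScatteringFunction(nn.Module):
    """Per-object MLP: (x, w_o, w_i) -> (volume density, scattered radiance)."""

    def __init__(self, num_freqs: int = 6, hidden: int = 256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs * 3                  # encoded x, w_o, w_i
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                       # sigma + RGB
        )

    def forward(self, x, w_o, w_i):
        h = torch.cat(
            [positional_encoding(t, self.num_freqs) for t in (x, w_o, w_i)], dim=-1
        )
        out = self.mlp(h)
        sigma = torch.relu(out[..., :1])                # nonnegative density
        rho = torch.sigmoid(out[..., 1:])               # light scattered toward w_o
        return sigma, rho


def composite_along_ray(sigma, rho, deltas):
    """Emission-absorption quadrature over N ray samples (sigma: (N,1), rho: (N,3))."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)            # (N,)
    ones = torch.ones(1, device=alpha.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = trans * alpha                                         # (N,)
    return (weights[:, None] * rho).sum(dim=0)                      # RGB color
```

Under this sketch, moving a light only changes the `w_i` argument fed to each object's network, which is what lets the composed scene be re-rendered under new illumination or object arrangements without retraining.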
