Paper Title
Block-NeRF: Scalable Large Scene Neural View Synthesis
Paper Authors
Paper Abstract
We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment. We adopt several architectural changes to make NeRF robust to data captured over months under different environmental conditions. We add appearance embeddings, learned pose refinement, and controllable exposure to each individual NeRF, and introduce a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.
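To make the compositing idea concrete, below is a minimal sketch of how per-block renders might be selected and blended at inference time. It assumes a simple proximity filter and inverse-distance weighting between the camera and each block origin; the helper names (select_blocks, merge_renders), the block layout, and the fixed radius are illustrative assumptions, and the paper's additional steps (learned visibility filtering, appearance matching between adjacent Block-NeRFs) are omitted.

    import numpy as np

    # Hypothetical layout: three Block-NeRFs spaced along a street.
    block_centers = np.array([[0.0, 0.0, 0.0],
                              [50.0, 0.0, 0.0],
                              [100.0, 0.0, 0.0]])

    def select_blocks(cam_xyz, centers, radius):
        # Keep only Block-NeRFs whose training region plausibly covers
        # the camera (the paper additionally filters by a learned
        # visibility estimate, omitted in this sketch).
        d = np.linalg.norm(centers - cam_xyz, axis=1)
        return np.nonzero(d < radius)[0]

    def merge_renders(renders, cam_xyz, centers, eps=1e-6):
        # Inverse-distance weighting between the camera and each block
        # origin; the paper also aligns appearance codes before blending.
        d = np.linalg.norm(centers - cam_xyz, axis=1)
        w = 1.0 / np.maximum(d, eps)
        w = w / w.sum()
        # Weighted sum over the leading (block) axis of the image stack.
        return np.tensordot(w, renders, axes=1)

    cam = np.array([40.0, 5.0, 0.0])
    active = select_blocks(cam, block_centers, radius=60.0)
    fake_renders = np.random.rand(len(active), 4, 4, 3)  # tiny RGB stand-ins
    composite = merge_renders(fake_renders, cam, block_centers[active])
    print(active, composite.shape)  # e.g. [0 1] (4, 4, 3)

Because each block is queried and rendered independently, adding or retraining one Block-NeRF leaves the others untouched, which is what enables the per-block environment updates described in the abstract.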