Title
End-to-end View Synthesis via NeRF Attention
Authors
Abstract
In this paper, we present a simple seq2seq formulation for view synthesis, where we take a set of ray points as input and output the colors corresponding to the rays. Directly applying a standard transformer to this seq2seq formulation has two limitations. First, standard attention cannot successfully fit the volumetric rendering procedure, so high-frequency components are missing in the synthesized views. Second, applying global attention to all rays and pixels is extremely inefficient. Inspired by the neural radiance field (NeRF), we propose NeRF attention (NeRFA) to address these problems. On the one hand, NeRFA treats the volumetric rendering equation as a soft feature modulation procedure; this modulation enhances the transformer with a NeRF-like inductive bias. On the other hand, NeRFA performs multi-stage attention to reduce the computational overhead. Furthermore, the NeRFA model adopts ray and pixel transformers to learn the interactions between rays and pixels. NeRFA demonstrates superior performance over NeRF and NerFormer on four datasets: DeepVoxels, Blender, LLFF, and CO3D. Moreover, NeRFA establishes a new state of the art under two settings: single-scene view synthesis and category-centric novel view synthesis.
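To make the "volumetric rendering as soft feature modulation" idea concrete, here is a minimal NumPy sketch. It computes the standard NeRF compositing weights w_i = T_i (1 - exp(-σ_i δ_i)) along a ray and then uses them to weight per-sample *features* rather than raw colors. This is an illustrative sketch under assumed shapes and function names (`rendering_weights`, `soft_modulate` are hypothetical, not from the paper), not the authors' implementation.

```python
import numpy as np

def rendering_weights(sigma, delta):
    """NeRF volume-rendering weights for one ray:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    T_i = exp(-sum_{j<i} sigma_j * delta_j).
    sigma: (n,) non-negative densities; delta: (n,) interval lengths."""
    alpha = 1.0 - np.exp(-sigma * delta)          # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10)       # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])   # shift so T_1 = 1
    return trans * alpha

def soft_modulate(features, sigma, delta):
    """Soft feature modulation: weight per-sample features by the
    rendering weights and sum along the ray."""
    w = rendering_weights(sigma, delta)           # (n,)
    return (w[:, None] * features).sum(axis=0)    # (feat_dim,)

# Toy ray: 8 samples with 4-dim features.
rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 2.0, 8)
delta = np.full(8, 0.1)
feats = rng.normal(size=(8, 4))
ray_feat = soft_modulate(feats, sigma, delta)     # one modulated ray feature
```

In a transformer, `ray_feat` would serve as a modulated token rather than a final pixel color, injecting the NeRF-like compositing bias into the attention pipeline.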