Paper Title

Compressing Explicit Voxel Grid Representations: fast NeRFs become also small

Paper Authors

Chenxi Lola Deng, Enzo Tartaglione

Paper Abstract

NeRFs have revolutionized the world of per-scene radiance field reconstruction because of their intrinsic compactness. One of the main limitations of NeRFs is their slow rendering speed, both at training and inference time. Recent research focuses on the optimization of an explicit voxel grid (EVG) that represents the scene, which can be paired with neural networks to learn radiance fields. This approach significantly enhances speed at both training and inference time, but at the cost of a large memory occupation. In this work, we propose Re:NeRF, an approach that specifically targets EVG-NeRF compressibility, aiming to reduce the memory storage of NeRF models while maintaining comparable performance. We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks, showing Re:NeRF's broad usability and effectiveness.
