Paper Title

NeuralVDB: High-resolution Sparse Volume Representation using Hierarchical Neural Networks

Authors

Doyub Kim, Minjae Lee, Ken Museth

Abstract

We introduce NeuralVDB, which improves on an existing industry standard for efficient storage of sparse volumetric data, denoted VDB [Museth 2013], by leveraging recent advancements in machine learning. Our novel hybrid data structure can reduce the memory footprints of VDB volumes by orders of magnitude, while maintaining its flexibility and only incurring small (user-controlled) compression errors. Specifically, NeuralVDB replaces the lower nodes of a shallow and wide VDB tree structure with multiple hierarchical neural networks that separately encode topology and value information by means of neural classifiers and regressors respectively. This approach is proven to maximize the compression ratio while maintaining the spatial adaptivity offered by the higher-level VDB data structure. For sparse signed distance fields and density volumes, we have observed compression ratios on the order of 10x to more than 100x from already compressed VDB inputs, with little to no visual artifacts. Furthermore, NeuralVDB is shown to offer more effective compression performance compared to other neural representations such as Neural Geometric Level of Detail [Takikawa et al. 2021], Variable Bitrate Neural Fields [Takikawa et al. 2022a], and Instant Neural Graphics Primitives [Müller et al. 2022]. Finally, we demonstrate how warm-starting from previous frames can accelerate training, i.e., compression, of animated volumes as well as improve temporal coherency of model inference, i.e., decompression.
