Paper Title

MetaSDF: Meta-learning Signed Distance Functions

Paper Authors

Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Abstract

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional latent code that is either regressed by an encoder or jointly optimized in the auto-decoder framework. Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. We demonstrate that this approach performs on par with auto-decoder based approaches while being an order of magnitude faster at test-time inference. We further demonstrate that the proposed gradient-based method outperforms encoder-decoder based methods that leverage pooling-based set encoders.
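
The abstract describes casting shape-space learning as gradient-based meta-learning: a shared network initialization is adapted to each shape with a few inner-loop gradient steps on context observations, and the initialization itself is trained so that this fast adaptation generalizes to held-out query points. Below is a minimal PyTorch sketch of that idea. The network architecture, the names SDFNet and inner_adapt, the L1 loss, and all step counts and learning rates are illustrative assumptions for this sketch, not the paper's exact configuration.

import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Small MLP mapping a 3D point to a signed distance value."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, params=None):
        if params is None:
            return self.net(x)
        # Functional forward pass with explicitly supplied parameters, so that
        # inner-loop updates stay differentiable for the outer (meta) loop.
        h = x
        for i, layer in enumerate(self.net):
            if isinstance(layer, nn.Linear):
                w = params[f"net.{i}.weight"]
                b = params[f"net.{i}.bias"]
                h = torch.nn.functional.linear(h, w, b)
            else:
                h = layer(h)
        return h

def inner_adapt(model, ctx_pts, ctx_sdf, steps=5, inner_lr=1e-2):
    """Specialize the shared initialization to one shape via a few gradient steps."""
    params = dict(model.named_parameters())
    for _ in range(steps):
        pred = model(ctx_pts, params)
        loss = torch.nn.functional.l1_loss(pred, ctx_sdf)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    return params

# Outer loop: train the initialization so that a few inner steps fit any shape well.
model = SDFNet()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(3):  # tiny demo loop with random stand-in "shapes"
    ctx_pts, ctx_sdf = torch.randn(128, 3), torch.randn(128, 1)
    qry_pts, qry_sdf = torch.randn(512, 3), torch.randn(512, 1)
    adapted = inner_adapt(model, ctx_pts, ctx_sdf)
    meta_loss = torch.nn.functional.l1_loss(model(qry_pts, adapted), qry_sdf)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

At test time, only the inner loop runs: a handful of gradient steps on partial or noisy observations yields a shape-specific SDF, which is what makes this approach fast compared with optimizing a latent code from scratch in an auto-decoder.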
