Paper Title

Metappearance: Meta-Learning for Visual Appearance Reproduction

Authors

Michael Fischer, Tobias Ritschel

Abstract

There currently exist two main approaches to reproducing visual appearance using Machine Learning (ML): The first is training models that generalize over different instances of a problem, e.g., different images of a dataset. As one-shot approaches, these offer fast inference, but often fall short in quality. The second approach does not train models that generalize across tasks, but rather over-fits a single instance of a problem, e.g., a flash image of a material. These methods offer high quality, but take a long time to train. We suggest combining both techniques end-to-end using meta-learning: We over-fit onto a single problem instance in an inner loop, while also learning how to do so efficiently in an outer loop across many exemplars. To this end, we derive the required formalism that allows applying meta-learning to a wide range of visual appearance reproduction problems: textures, BRDFs, svBRDFs, illumination, or the entire light transport of a scene. We analyze the effects of meta-learning parameters on several different aspects of visual appearance in our framework, and provide specific guidance for different tasks. Metappearance enables visual quality similar to over-fit approaches in only a fraction of their runtime, while keeping the adaptivity of general models.
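The inner/outer loop the abstract describes corresponds to a MAML-style meta-optimization. Below is a minimal, self-contained sketch of that structure in PyTorch; the toy coordinate-to-RGB network, the random "exemplar" sampler, the MSE loss, and all learning rates and step counts are illustrative assumptions, not the paper's actual models, renderers, or hyper-parameters.

```python
import torch
from torch.func import functional_call, grad

# Toy stand-in for an appearance model (e.g., 2D coordinates -> RGB).
# The real Metappearance networks and losses are task-specific; this is
# only a placeholder to show the meta-learning loop structure.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
)
meta_params = {name: p.detach().clone().requires_grad_(True)
               for name, p in net.named_parameters()}

inner_lr, outer_lr, inner_steps = 1e-2, 1e-3, 3  # assumed values
meta_opt = torch.optim.Adam(meta_params.values(), lr=outer_lr)

def task_loss(params, coords, target):
    # Evaluate the network with an explicit parameter set (stateless call).
    pred = functional_call(net, params, (coords,))
    return torch.nn.functional.mse_loss(pred, target)

def sample_exemplar():
    # Placeholder "problem instance": random coordinate/color pairs.
    return torch.rand(256, 2), torch.rand(256, 3)

for step in range(200):
    coords, target = sample_exemplar()

    # Inner loop: over-fit onto this single exemplar, starting from the
    # shared meta-initialization. torch.func.grad keeps these updates
    # differentiable, so the outer loss can backpropagate through them.
    params = dict(meta_params)
    for _ in range(inner_steps):
        grads = grad(task_loss)(params, coords, target)
        params = {k: p - inner_lr * grads[k] for k, p in params.items()}

    # Outer loop: update the meta-initialization so that a few inner
    # steps suffice on future exemplars (second-order MAML objective).
    meta_opt.zero_grad()
    task_loss(params, coords, target).backward()
    meta_opt.step()
```

At test time, only the cheap inner loop runs on a new exemplar, which is how a meta-learned initialization can approach over-fit quality in a fraction of the from-scratch training time.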
