Paper Title

PBNS: Physically Based Neural Simulator for Unsupervised Garment Pose Space Deformation

Paper Authors

Hugo Bertiche, Meysam Madadi, Sergio Escalera

Abstract

We present a methodology to automatically obtain a Pose Space Deformation (PSD) basis for rigged garments through deep learning. Classical approaches rely on Physically Based Simulation (PBS) to animate clothes. These are general solutions that, given a sufficiently fine-grained discretization of space and time, can achieve highly realistic results. However, they are computationally expensive, and any scene modification prompts the need for re-simulation. Linear Blend Skinning (LBS) with PSD offers a lightweight alternative to PBS, though it needs huge volumes of data to learn a proper PSD. We propose using deep learning, formulated as an implicit PBS, to learn realistic cloth Pose Space Deformations without supervision in a constrained scenario: dressed humans. Furthermore, we show it is possible to train these models in an amount of time comparable to a PBS run of a few sequences. To the best of our knowledge, we are the first to propose a neural simulator for cloth. While deep-learning-based approaches in the domain are becoming a trend, these are data-hungry models. Moreover, authors often propose complex formulations to better learn wrinkles from PBS data. Supervised learning leads to physically inconsistent predictions that require collision solving before they can be used. Also, the dependency on PBS data limits the scalability of these solutions, while their formulation hinders their applicability and compatibility. By proposing an unsupervised methodology to learn PSD for LBS models (the 3D animation standard), we overcome both of these drawbacks. The results obtained show cloth consistency in the animated garments and meaningful pose-dependent folds and wrinkles. Our solution is extremely efficient, handles multiple layers of cloth, allows unsupervised outfit resizing, and can be easily applied to any custom 3D avatar.
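To make the animation model in the abstract concrete: in LBS with PSD, the garment is first displaced by pose-dependent corrective offsets and then skinned with the body's joint transforms. The sketch below is a minimal, hypothetical illustration of that pipeline (plain NumPy; names and shapes are assumptions, not the authors' code). In PBNS the `psd_correction` term would be predicted by a neural network from the body pose and trained with physics-inspired, unsupervised losses.

```python
import numpy as np

def lbs_with_psd(template_verts, psd_correction, joint_transforms, skin_weights):
    """Sketch of Linear Blend Skinning with a Pose Space Deformation correction.

    template_verts:   (V, 3)    garment vertices in the rest pose
    psd_correction:   (V, 3)    pose-dependent offsets (in PBNS, a network output)
    joint_transforms: (J, 4, 4) per-joint rigid transforms for the current pose
    skin_weights:     (V, J)    skinning weights binding garment verts to joints
    """
    # 1) Apply the pose-dependent PSD offsets in the rest pose.
    corrected = template_verts + psd_correction                        # (V, 3)

    # 2) Move to homogeneous coordinates.
    ones = np.ones((corrected.shape[0], 1))
    corrected_h = np.concatenate([corrected, ones], axis=1)            # (V, 4)

    # 3) Blend joint transforms per vertex and apply them (standard LBS).
    blended = np.einsum('vj,jab->vab', skin_weights, joint_transforms)  # (V, 4, 4)
    posed_h = np.einsum('vab,vb->va', blended, corrected_h)             # (V, 4)
    return posed_h[:, :3]
```

Because the learned component is only the corrective offset, posing a garment at runtime amounts to one network forward pass plus this skinning step, which is what makes the approach lightweight compared to re-running a full physically based simulation.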
