Paper Title

Human Pose Transfer by Adaptive Hierarchical Deformation

Authors

Jinsong Zhang, Xingzi Liu, Kun Li

Abstract

Human pose transfer, as a misaligned image generation task, is very challenging. Existing methods cannot effectively utilize the input information and thus often fail to preserve the style and shape of hair and clothes. In this paper, we propose an adaptive human pose transfer network with two hierarchical deformation levels. The first level generates human semantic parsing aligned with the target pose, and the second level generates the final textured person image in the target pose under semantic guidance. To avoid the drawback of vanilla convolution, which treats all pixels as valid information, we use gated convolution at both levels to dynamically select the important features and adaptively deform the image layer by layer. Our model has very few parameters and converges quickly. Experimental results demonstrate that our model achieves better performance, with more consistent hair, faces, and clothes, while using fewer parameters than state-of-the-art methods. Furthermore, our method can be applied to clothing texture transfer.
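The gating idea the abstract contrasts with vanilla convolution can be illustrated with a minimal sketch: a second, parallel convolution is squashed through a sigmoid and used as a soft per-position mask over the feature branch. This is a hypothetical 1-D toy in pure Python, not the authors' implementation (the paper's network uses learned 2-D gated convolutions inside its generator); the function names and kernels here are illustrative assumptions.

```python
import math

def conv1d(x, w):
    """Valid-mode 1-D convolution (cross-correlation) of signal x with kernel w."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_conv1d(x, w_feat, w_gate):
    # Feature branch: an ordinary convolution, as in a vanilla layer.
    feat = conv1d(x, w_feat)
    # Gating branch: a parallel convolution squashed to (0, 1); during
    # training this learns a soft mask over positions in the feature map.
    gate = [sigmoid(v) for v in conv1d(x, w_gate)]
    # Element-wise product: positions with gate near 0 are suppressed,
    # unlike vanilla convolution, which treats every pixel as valid.
    return [f * g for f, g in zip(feat, gate)]
```

With a zero gating kernel the sigmoid outputs 0.5 everywhere, so every feature is passed at half strength; a trained gate would instead pass some positions and suppress others.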
