Paper Title

Geometry Driven Progressive Warping for One-Shot Face Animation

Paper Authors

Yatao Zhong, Faezeh Amjadi, Ilya Zharkov

Abstract

Face animation aims at creating photo-realistic portrait videos with animated poses and expressions. A common practice is to generate displacement fields that are used to warp pixels and features from source to target. However, prior attempts often produce sub-optimal displacements. In this work, we present a geometry driven model and propose two geometric patterns as guidance: 3D face rendered displacement maps and posed neural codes. The model can optionally use one of the patterns as guidance for displacement estimation. To model displacements at locations not covered by the face model (e.g., hair), we resort to source image features for contextual information and propose a progressive warping module that alternates between feature warping and displacement estimation at increasing resolutions. We show that the proposed model can synthesize portrait videos with high fidelity and achieve the new state-of-the-art results on the VoxCeleb1 and VoxCeleb2 datasets for both cross identity and same identity reconstruction.
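The progressive warping module described above alternates between warping source features and refining the displacement field as resolution increases. The following is a minimal NumPy sketch of that coarse-to-fine loop, not the paper's implementation: the nearest-neighbor `warp`, the `upsample2x` helper, and the `estimate_residual` callback (a stand-in for the learned displacement estimator) are all illustrative assumptions.

```python
import numpy as np

def warp(feat, disp):
    """Warp a 2D feature map by a per-pixel displacement field (nearest-neighbor)."""
    H, W = feat.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + disp[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + disp[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]

def upsample2x(disp):
    """Double the spatial resolution of a displacement field; values scale with it."""
    return np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1) * 2.0

def progressive_warp(source_feats, init_disp, estimate_residual):
    """Alternate feature warping and displacement refinement, coarse to fine.

    source_feats: list of source feature maps, low to high resolution.
    init_disp: coarse initial displacement (e.g., from the geometric guidance).
    estimate_residual: hypothetical stand-in for the learned estimator that
        predicts a residual displacement from warped features.
    """
    disp = init_disp
    for feat in source_feats:
        warped = warp(feat, disp)                # warp at the current resolution
        disp = disp + estimate_residual(warped)  # refine the displacement field
        if feat is not source_feats[-1]:
            disp = upsample2x(disp)              # move to the next resolution
    return warp(source_feats[-1], disp), disp
```

With zero residuals and a zero initial field, the loop reduces to an identity warp at the finest resolution, which makes the control flow easy to check.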
