Paper Title
Talking Head Generation Driven by Speech-Related Facial Action Units and Audio-Based on Multimodal Representation Fusion
Paper Authors
Paper Abstract
Talking head generation aims to synthesize a lip-synchronized talking head video from an arbitrary face image and a corresponding audio clip. Existing methods ignore not only the interaction and relationship between cross-modal information but also the local driving information of the mouth muscles. In this study, we propose a novel generative framework that contains a dilated non-causal temporal convolutional self-attention network as a multimodal fusion module to promote relationship learning across cross-modal features. In addition, our proposed method uses both audio and speech-related facial action units (AUs) as driving information. Speech-related AU information can guide mouth movements more accurately. Because speech is highly correlated with speech-related AUs, we propose an audio-to-AU module to predict speech-related AU information. We also utilize a pre-trained AU classifier to ensure that the generated images contain correct AU information. We verify the effectiveness of the proposed model on the GRID and TCD-TIMIT datasets. An ablation study is also conducted to verify the contribution of each component. The results of quantitative and qualitative experiments demonstrate that our method outperforms existing methods in terms of both image quality and lip-sync accuracy.
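To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of a dilated non-causal temporal convolution stack followed by self-attention over concatenated per-frame audio and AU features. It illustrates only the general technique named in the abstract; the module name, feature dimensions, layer counts, and other hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the multimodal fusion idea: dilated non-causal temporal
# convolutions over concatenated audio and AU features, then self-attention.
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class DilatedNonCausalConvSelfAttentionFusion(nn.Module):
    def __init__(self, audio_dim=256, au_dim=64, hidden_dim=256,
                 num_layers=3, num_heads=4):
        super().__init__()
        in_dim = audio_dim + au_dim
        convs = []
        for i in range(num_layers):
            dilation = 2 ** i
            convs.append(nn.Sequential(
                # Symmetric ("same") padding keeps the temporal length and lets
                # each kernel see both past and future frames, i.e. non-causal.
                nn.Conv1d(in_dim if i == 0 else hidden_dim, hidden_dim,
                          kernel_size=3, dilation=dilation, padding=dilation),
                nn.BatchNorm1d(hidden_dim),
                nn.ReLU(inplace=True),
            ))
        self.convs = nn.ModuleList(convs)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, audio_feat, au_feat):
        # audio_feat: (B, T, audio_dim); au_feat: (B, T, au_dim)
        x = torch.cat([audio_feat, au_feat], dim=-1).transpose(1, 2)  # (B, C, T)
        for conv in self.convs:
            x = conv(x)
        x = x.transpose(1, 2)          # (B, T, hidden_dim)
        fused, _ = self.attn(x, x, x)  # self-attention over the time axis
        return fused


if __name__ == "__main__":
    fusion = DilatedNonCausalConvSelfAttentionFusion()
    audio = torch.randn(2, 25, 256)  # 25 frames of audio features
    aus = torch.randn(2, 25, 64)     # 25 frames of speech-related AU features
    print(fusion(audio, aus).shape)  # torch.Size([2, 25, 256])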