Paper Title

Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition

Authors

Jiaxiang Tang, Kaisiyuan Wang, Hang Zhou, Xiaokang Chen, Dongliang He, Tianshu Hu, Jingtuo Liu, Gang Zeng, Jingdong Wang

Abstract

While dynamic Neural Radiance Fields (NeRF) have shown success in high-fidelity 3D modeling of talking portraits, their slow training and inference speed severely obstructs their practical usage. In this paper, we propose an efficient NeRF-based framework that enables real-time synthesis of talking portraits and faster convergence by leveraging the recent success of grid-based NeRF. Our key insight is to decompose the inherently high-dimensional talking portrait representation into three low-dimensional feature grids. Specifically, a Decomposed Audio-spatial Encoding Module models the dynamic head with a 3D spatial grid and a 2D audio grid. The torso is handled with another 2D grid in a lightweight Pseudo-3D Deformable Module. Both modules prioritize efficiency under the premise of good rendering quality. Extensive experiments demonstrate that our method can generate realistic, audio-lip-synchronized talking portrait videos while also being highly efficient compared to previous methods.
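
To make the decomposition concrete, below is a minimal PyTorch sketch of the core idea described in the abstract, not the authors' implementation: instead of querying one high-dimensional grid over joint (spatial, audio) coordinates, a point is encoded by interpolating a trainable 3D spatial feature grid and a separate trainable 2D audio-coordinate grid, and the two low-dimensional features are fused by a small MLP into density and color. All names, grid resolutions, feature dimensions, and the choice of a 2D audio coordinate are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code) of a decomposed
# audio-spatial encoding: one 3D spatial feature grid plus one 2D audio
# feature grid, fused by a small MLP, instead of a single grid over the
# full high-dimensional (x, y, z, audio) space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecomposedAudioSpatialEncoder(nn.Module):
    def __init__(self, spatial_res=64, audio_res=32, feat_dim=16):
        super().__init__()
        # Trainable 3D spatial feature grid: (1, C, D, H, W).
        self.spatial_grid = nn.Parameter(
            torch.randn(1, feat_dim, spatial_res, spatial_res, spatial_res) * 0.1)
        # Trainable 2D grid over a low-dimensional audio coordinate: (1, C, H, W).
        self.audio_grid = nn.Parameter(
            torch.randn(1, feat_dim, audio_res, audio_res) * 0.1)
        # Small MLP fusing the two low-dimensional features into
        # density + RGB (4 outputs per sample point).
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, xyz, audio_coord):
        # xyz: (N, 3) in [-1, 1]; audio_coord: (N, 2) in [-1, 1].
        n = xyz.shape[0]
        # Trilinear lookup in the 3D spatial grid -> (1, C, N, 1, 1).
        g = xyz.view(1, n, 1, 1, 3)
        spatial_feat = F.grid_sample(self.spatial_grid, g, align_corners=True)
        spatial_feat = spatial_feat.view(-1, n).t()          # (N, C)
        # Bilinear lookup in the 2D audio grid -> (1, C, N, 1).
        a = audio_coord.view(1, n, 1, 2)
        audio_feat = F.grid_sample(self.audio_grid, a, align_corners=True)
        audio_feat = audio_feat.view(-1, n).t()              # (N, C)
        return self.mlp(torch.cat([spatial_feat, audio_feat], dim=-1))


# Usage: encode 4096 sampled points conditioned on a 2D audio coordinate.
enc = DecomposedAudioSpatialEncoder()
out = enc(torch.rand(4096, 3) * 2 - 1, torch.rand(4096, 2) * 2 - 1)
print(out.shape)  # torch.Size([4096, 4]): density + RGB per point
```

Because each grid is low-dimensional, per-point encoding reduces to cheap interpolation lookups rather than deep MLP evaluations, which is what makes the real-time rendering and faster convergence claimed in the abstract plausible.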
