Paper Title
Controllable Radiance Fields for Dynamic Face Synthesis
Paper Authors
Abstract
Recent work on 3D-aware image synthesis has achieved compelling results using advances in neural rendering. However, 3D-aware synthesis of face dynamics has not received much attention. Here, we study how to explicitly control generative model synthesis of face dynamics exhibiting non-rigid motion (e.g., facial expression changes), while simultaneously ensuring 3D-awareness. To this end, we propose a Controllable Radiance Field (CoRF): 1) motion control is achieved by embedding motion features within the layered latent motion space of a style-based generator; 2) to ensure consistency of the background, motion features, and subject-specific attributes such as lighting, texture, shape, albedo, and identity, a face parsing network, a head regressor, and an identity encoder are incorporated. On head image/video data, we show that CoRFs are 3D-aware while enabling editing of identity, viewing direction, and motion.