Paper Title

TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video

Paper Authors

Tiancheng Zhi, Christoph Lassner, Tony Tung, Carsten Stoll, Srinivasa G. Narasimhan, Minh Vo

Paper Abstract

We present TexMesh, a novel approach to reconstruct detailed human meshes with high-resolution full-body texture from RGB-D video. TexMesh enables high-quality free-viewpoint rendering of humans. Given the RGB frames, the captured environment map, and the coarse per-frame human mesh from RGB-D tracking, our method reconstructs spatiotemporally consistent and detailed per-frame meshes along with a high-resolution albedo texture. By using the incident illumination we are able to accurately estimate local surface geometry and albedo, which allows us to further use photometric constraints to adapt a synthetically trained model to real-world sequences in a self-supervised manner for detailed surface geometry and high-resolution texture estimation. In practice, we train our models on a short example sequence for self-adaptation, and the model subsequently runs at an interactive framerate. We validate TexMesh on synthetic and real-world data, and show it outperforms the state of the art quantitatively and qualitatively.
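
The abstract describes a photometric self-supervision step: a synthetically trained model is adapted to a real sequence by re-shading the predicted albedo and geometry under the captured illumination and comparing the result against the observed RGB frames. The sketch below is a minimal illustration of such a photometric loss, not the authors' implementation; it assumes per-pixel albedo and normals and a second-order spherical-harmonics approximation of the environment lighting, and all tensor names and shapes are illustrative assumptions.

# Minimal photometric self-supervision sketch (PyTorch), assuming per-pixel
# albedo/normals and 2nd-order spherical-harmonics (SH) environment lighting.
# SH normalization constants are omitted for brevity.
import torch

def sh_irradiance(normals: torch.Tensor, sh_coeffs: torch.Tensor) -> torch.Tensor:
    # normals: (N, 3) unit normals; sh_coeffs: (3, 9) per-channel SH lighting.
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    ones = torch.ones_like(x)
    # 2nd-order real SH basis evaluated at the normal directions -> (N, 9)
    basis = torch.stack([
        ones, y, z, x,
        x * y, y * z, 3.0 * z * z - 1.0, x * z, x * x - y * y,
    ], dim=1)
    return basis @ sh_coeffs.t()  # (N, 3) irradiance per color channel

def photometric_loss(albedo, normals, sh_coeffs, observed, mask):
    # albedo/observed: (N, 3), normals: (N, 3), mask: (N,) foreground pixels.
    shaded = albedo * sh_irradiance(normals, sh_coeffs)  # Lambertian shading
    return (shaded - observed)[mask].abs().mean()        # L1 on foreground

if __name__ == "__main__":
    N = 4096  # hypothetical number of foreground pixels
    albedo = torch.rand(N, 3, requires_grad=True)
    normals = torch.nn.functional.normalize(torch.randn(N, 3), dim=1)
    sh_coeffs = torch.randn(3, 9) * 0.1  # would come from the captured env map
    observed = torch.rand(N, 3)          # would come from the RGB frame
    mask = torch.ones(N, dtype=torch.bool)
    loss = photometric_loss(albedo, normals, sh_coeffs, observed, mask)
    loss.backward()  # gradients flow back to albedo (and, in a full system,
                     # through a differentiable renderer to geometry/network weights)
    print(float(loss))

In the full method, the shading term would be driven by the captured environment map and a differentiable rendering of the per-frame mesh rather than random placeholders, so the same loss can fine-tune the network on a short real example sequence without ground-truth geometry or texture.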
