Title

SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images

Authors

Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey

Abstract

Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets. Recent efforts have turned to learning 3D reconstruction without 3D supervision from RGB images with annotated 2D silhouettes, dramatically reducing the cost and effort of annotation. These techniques, however, remain impractical as they still require multi-view annotations of the same object instance during training. As a result, most experimental efforts to date have been limited to synthetic datasets. In this paper, we address this issue and propose SDF-SRN, an approach that requires only a single view of objects at training time, offering greater utility for real-world scenarios. SDF-SRN learns implicit 3D shape representations to handle arbitrary shape topologies that may exist in the datasets. To this end, we derive a novel differentiable rendering formulation for learning signed distance functions (SDF) from 2D silhouettes. Our method outperforms the state of the art under challenging single-view supervision settings on both synthetic and real-world datasets.
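The two ingredients highlighted in the abstract, an implicit signed distance representation and a silhouette-supervised differentiable rendering loss, can be illustrated with a minimal PyTorch sketch. Everything below (the ImplicitSDF network, the toy_silhouette_loss function, the soft-occupancy proxy, and all hyperparameters) is an illustrative assumption, not the authors' actual architecture or the paper's SDF rendering formulation, which is derived differently.

```python
# Minimal illustrative sketch (not the authors' code): an MLP that maps 3D
# points to signed distances, plus a toy silhouette loss that encourages
# predicted occupancy along camera rays to match a binary 2D silhouette.
import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    """MLP mapping 3D coordinates (x, y, z) to a scalar signed distance."""
    def __init__(self, hidden_dim: int = 256, num_layers: int = 6):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.Softplus(beta=100)]
            in_dim = hidden_dim
        layers.append(nn.Linear(hidden_dim, 1))  # signed distance output
        self.net = nn.Sequential(*layers)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (..., 3) -> signed distance: (..., 1)
        return self.net(points)

def toy_silhouette_loss(sdf_net, ray_points, silhouette, alpha=50.0):
    """Toy occupancy-style silhouette loss (illustrative stand-in only).

    ray_points: (num_rays, num_samples, 3) points sampled along camera rays.
    silhouette: (num_rays,) binary mask value per ray (1 = inside the object).
    A ray counts as "hitting" the object if some sampled point has negative
    SDF; sigmoid(-alpha * sdf) with a soft max serves as a differentiable proxy.
    """
    sdf = sdf_net(ray_points).squeeze(-1)            # (num_rays, num_samples)
    occupancy = torch.sigmoid(-alpha * sdf)          # ~1 inside, ~0 outside
    ray_occupancy = occupancy.max(dim=-1).values     # soft per-ray "hit"
    return nn.functional.binary_cross_entropy(ray_occupancy, silhouette)

# Example usage with random tensors (shapes only; no real images involved):
if __name__ == "__main__":
    net = ImplicitSDF()
    rays = torch.randn(1024, 64, 3)                  # 1024 rays, 64 samples each
    mask = torch.randint(0, 2, (1024,)).float()      # fake binary silhouette
    loss = toy_silhouette_loss(net, rays, mask)
    loss.backward()
    print(f"toy silhouette loss: {loss.item():.4f}")
```

The implicit MLP is what lets the method represent arbitrary shape topologies, since the surface is recovered as the zero level set of the learned function rather than as a fixed-topology mesh.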
