Paper Title
Learning Canonical Embeddings for Unsupervised Shape Correspondence with Locally Linear Transformations
Paper Authors
Paper Abstract
We present a new approach to unsupervised shape correspondence learning between pairs of point clouds. We make the first attempt to adapt the classical locally linear embedding algorithm (LLE) -- originally designed for nonlinear dimensionality reduction -- for shape correspondence. The key idea is to find dense correspondences between shapes by first obtaining high-dimensional neighborhood-preserving embeddings of low-dimensional point clouds and subsequently aligning the source and target embeddings using locally linear transformations. We demonstrate that learning the embedding using a new LLE-inspired point cloud reconstruction objective results in accurate shape correspondences. More specifically, the approach comprises an end-to-end learnable framework of extracting high-dimensional neighborhood-preserving embeddings, estimating locally linear transformations in the embedding space, and reconstructing shapes via divergence measure-based alignment of probabilistic density functions built over reconstructed and target shapes. Our approach enforces embeddings of shapes in correspondence to lie in the same universal/canonical embedding space, which eventually helps regularize the learning process and leads to a simple nearest neighbors approach between shape embeddings for finding reliable correspondences. Comprehensive experiments show that the new method makes noticeable improvements over state-of-the-art approaches on standard shape correspondence benchmark datasets covering both human and nonhuman shapes.
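To make the pipeline described in the abstract more concrete, the following is a minimal illustrative sketch, not the authors' implementation: it shows the classical LLE step of reconstructing each point from its nearest neighbours with weights constrained to sum to one, and the simple nearest-neighbour matching between per-point embeddings used to read off correspondences. The choice of k, the regularization constant, and the random stand-in point clouds and embeddings are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's code): classical LLE reconstruction
# weights on a point cloud, plus nearest-neighbour matching between two
# hypothetical per-point embeddings. Shapes and hyperparameters are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def lle_weights(points, k=10, reg=1e-3):
    """For each point, solve for weights that reconstruct it from its k nearest
    neighbours, with the classical LLE sum-to-one constraint."""
    n = points.shape[0]
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    idx = idx[:, 1:]                          # drop the point itself
    W = np.zeros((n, k))
    for i in range(n):
        Z = points[idx[i]] - points[i]        # centre neighbours on the point
        C = Z @ Z.T                           # local covariance (k x k)
        C += reg * np.trace(C) * np.eye(k)    # regularise for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i] = w / w.sum()                    # enforce sum-to-one constraint
    return W, idx

def nearest_neighbor_correspondence(src_emb, tgt_emb):
    """Match each source point to the target point with the closest embedding,
    mirroring the nearest-neighbours step described in the abstract."""
    tree = cKDTree(tgt_emb)
    _, match = tree.query(src_emb, k=1)
    return match

# Toy usage with random stand-ins for real point clouds and learned embeddings.
src = np.random.rand(500, 3)
W, neighbors = lle_weights(src, k=10)
reconstructed = np.einsum('ik,ikd->id', W, src[neighbors])  # LLE-style reconstruction
src_emb = np.random.rand(500, 128)            # placeholder for learned embeddings
tgt_emb = np.random.rand(500, 128)
corr = nearest_neighbor_correspondence(src_emb, tgt_emb)
```

In the paper's setting the embeddings are learned (high-dimensional and neighbourhood-preserving) rather than random, and the reconstruction objective operates through locally linear transformations in that embedding space; the sketch only illustrates the underlying LLE weight computation and the final nearest-neighbour lookup.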