Paper Title
Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation
Paper Authors
Paper Abstract
Segmenting unseen objects in cluttered scenes is an important skill that robots need to acquire in order to perform tasks in new environments. In this work, we propose a new method for unseen object instance segmentation by learning RGB-D feature embeddings from synthetic data. A metric learning loss function is utilized to learn to produce pixel-wise feature embeddings such that pixels from the same object are close to each other and pixels from different objects are separated in the embedding space. With the learned feature embeddings, a mean shift clustering algorithm can be applied to discover and segment unseen objects. We further improve the segmentation accuracy with a new two-stage clustering algorithm. Our method demonstrates that non-photorealistic synthetic RGB and depth images can be used to learn feature embeddings that transfer well to real-world images for unseen object instance segmentation.
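The abstract describes a metric learning loss that pulls the embeddings of pixels belonging to the same object together and pushes embeddings of different objects apart, followed by mean shift clustering of the embeddings. Below is a minimal PyTorch sketch of such a pull-push loss; the per-object mean formulation, the margins delta_pull and delta_push, and the function name pixel_embedding_loss are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a pull-push metric learning loss over per-pixel embeddings.
# Assumptions (not from the paper): per-object mean embeddings as anchors,
# hinge margins delta_pull / delta_push, equal weighting of the two terms.
import torch

def pixel_embedding_loss(embeddings, instance_labels,
                         delta_pull=0.5, delta_push=1.5):
    """embeddings: (D, H, W) per-pixel features.
    instance_labels: (H, W) integers, 0 = background, 1..K = object instances."""
    D, H, W = embeddings.shape
    feats = embeddings.reshape(D, -1).t()          # (H*W, D)
    labels = instance_labels.reshape(-1)           # (H*W,)

    ids = [i for i in labels.unique().tolist() if i != 0]
    if not ids:
        return embeddings.sum() * 0.0              # no objects: zero loss, keep graph

    means, pull = [], 0.0
    for i in ids:
        obj = feats[labels == i]                   # embeddings of object i's pixels
        mu = obj.mean(dim=0)                       # object mean embedding
        means.append(mu)
        # Pull term: penalize pixels farther than delta_pull from their object mean.
        dist = (obj - mu).norm(dim=1)
        pull = pull + torch.clamp(dist - delta_pull, min=0).pow(2).mean()
    pull = pull / len(ids)

    push = embeddings.sum() * 0.0
    if len(ids) > 1:
        mus = torch.stack(means)                   # (K, D)
        pair = torch.cdist(mus, mus)               # pairwise distances between means
        off_diag = pair[~torch.eye(len(ids), dtype=torch.bool)]
        # Push term: penalize object means closer than delta_push to each other.
        push = torch.clamp(delta_push - off_diag, min=0).pow(2).mean()

    return pull + push
```

At inference, the learned per-pixel embeddings could, for example, be grouped with an off-the-shelf mean shift implementation such as sklearn.cluster.MeanShift applied to the (H*W, D) feature matrix; the two-stage clustering algorithm mentioned in the abstract refines this step but is not reproduced in this sketch.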