Paper Title

OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds

Paper Authors

Ziyang Song, Bo Yang

Paper Abstract

In this paper, we study the problem of 3D object segmentation from raw point clouds. Unlike all existing methods which usually require a large amount of human annotations for full supervision, we propose the first unsupervised method, called OGC, to simultaneously identify multiple 3D objects in a single forward pass, without needing any type of human annotations. The key to our approach is to fully leverage the dynamic motion patterns over sequential point clouds as supervision signals to automatically discover rigid objects. Our method consists of three major components, 1) the object segmentation network to directly estimate multi-object masks from a single point cloud frame, 2) the auxiliary self-supervised scene flow estimator, and 3) our core object geometry consistency component. By carefully designing a series of loss functions, we effectively take into account the multi-object rigid consistency and the object shape invariance in both temporal and spatial scales. This allows our method to truly discover the object geometry even in the absence of annotations. We extensively evaluate our method on five datasets, demonstrating the superior performance for object part instance segmentation and general object segmentation in both indoor and the challenging outdoor scenarios.
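The abstract describes using multi-object rigid consistency over estimated scene flow as the supervision signal. Below is a minimal, illustrative Python/NumPy sketch of that idea, not the authors' implementation: each predicted soft object mask is explained by a single rigid transform fitted to the estimated flow with a weighted Kabsch solve, and the residual between the flow-warped points and their rigid fit serves as a consistency loss. The names (weighted_kabsch, rigid_consistency_loss, soft_masks, flow) are assumptions introduced for illustration.

```python
# Illustrative sketch of a rigid-consistency loss over predicted object masks.
# Not the OGC authors' code; names and details are assumptions.
import numpy as np

def weighted_kabsch(src, dst, w):
    """Best-fit R, t minimising sum_i w_i ||R @ src_i + t - dst_i||^2 (weighted Kabsch)."""
    w = w / (w.sum() + 1e-8)
    src_c = (w[:, None] * src).sum(0)           # weighted centroids
    dst_c = (w[:, None] * dst).sum(0)
    H = (w[:, None] * (src - src_c)).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def rigid_consistency_loss(points, flow, soft_masks):
    """points: (N,3); flow: (N,3) estimated scene flow; soft_masks: (N,K) per-object probabilities."""
    warped = points + flow                      # where the estimated flow sends each point
    loss = 0.0
    for k in range(soft_masks.shape[1]):
        w = soft_masks[:, k]
        if w.sum() < 1e-3:                      # skip empty object slots
            continue
        R, t = weighted_kabsch(points, warped, w)
        rigid = points @ R.T + t                # rigid-motion explanation of this object
        loss += (w * np.linalg.norm(rigid - warped, axis=1) ** 2).sum() / (w.sum() + 1e-8)
    return loss

# Toy usage: two clusters translating differently between frames.
pts = np.random.randn(200, 3)
flow = np.where(pts[:, :1] > 0, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
masks = np.stack([(pts[:, 0] > 0).astype(float), (pts[:, 0] <= 0).astype(float)], axis=1)
print(rigid_consistency_loss(pts, flow, masks))  # near zero for a correct segmentation
```

In the toy example, each true object undergoes a pure translation, so a segmentation matching the two clusters yields a near-zero loss, whereas merging them into one mask cannot be explained by a single rigid transform and incurs a large residual. This is the intuition behind using rigid dynamics as a free supervision signal.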
