Paper Title


MOPT: Multi-Object Panoptic Tracking

Authors

Hurtado, Juana Valeria, Mohan, Rohit, Burgard, Wolfram, Valada, Abhinav

Abstract


Comprehensive understanding of dynamic scenes is a critical prerequisite for intelligent robots to autonomously operate in their environment. Research in this domain, which encompasses diverse perception problems, has primarily been focused on addressing specific tasks individually rather than modeling the ability to understand dynamic scenes holistically. In this paper, we introduce a novel perception task denoted as multi-object panoptic tracking (MOPT), which unifies the conventionally disjoint tasks of semantic segmentation, instance segmentation, and multi-object tracking. MOPT allows for exploiting pixel-level semantic information of 'thing' and 'stuff' classes, temporal coherence, and pixel-level associations over time, for the mutual benefit of each of the individual sub-problems. To facilitate quantitative evaluations of MOPT in a unified manner, we propose the soft panoptic tracking quality (sPTQ) metric. As a first step towards addressing this task, we propose the novel PanopticTrackNet architecture that builds upon the state-of-the-art top-down panoptic segmentation network EfficientPS by adding a new tracking head to simultaneously learn all sub-tasks in an end-to-end manner. Additionally, we present several strong baselines that combine predictions from state-of-the-art panoptic segmentation and multi-object tracking models for comparison. We present extensive quantitative and qualitative evaluations of both vision-based and LiDAR-based MOPT that demonstrate encouraging results.
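The abstract names the soft panoptic tracking quality (sPTQ) metric but does not define it. As an illustrative sketch only, assuming sPTQ follows the well-known panoptic quality (PQ) formulation and softens the tracking penalty by subtracting the IoU of matches whose track ID switched (rather than a hard count of switches), a per-class computation might look like this; the function name and inputs are hypothetical, not taken from the paper:

```python
def soft_panoptic_tracking_quality(match_ious, fp, fn, id_switch_ious):
    """Illustrative PQ-style metric with a soft ID-switch penalty.

    match_ious:     IoU values of true-positive segment matches (IoU > 0.5)
    fp, fn:         counts of false-positive / false-negative segments
    id_switch_ious: IoU values of the matches whose predicted track ID
                    changed between frames; subtracting these IoUs (instead
                    of a hard switch count) is the assumed 'soft' penalty
    """
    denom = len(match_ious) + 0.5 * fp + 0.5 * fn
    if denom == 0:
        return 0.0
    return (sum(match_ious) - sum(id_switch_ious)) / denom
```

For example, two matched segments with IoUs 0.8 and 0.9, one false positive, one false negative, and one ID switch on the 0.8 match would give (1.7 - 0.8) / 3 = 0.3 under this assumed formulation.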
