Paper Title

Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

Authors

Jose L. Gómez, Gabriel Villalonga, Antonio M. López

Abstract

Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training suffers from the curse of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., neither loss-function modifications nor explicit feature alignment are required. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ~13 to ~26 mIoU points over baselines, thus establishing new state-of-the-art results.
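To make the described collaboration loop more concrete, below is a minimal Python sketch of how co-training driven purely at the level of pseudo-labeled target images could look, with the two models treated as black boxes. This is an illustrative assumption of the general scheme, not the authors' implementation: the names `train_fn`, `predict_fn`, `threshold`, and `rounds` are hypothetical placeholders, and the paper's actual self-training stage and collaboration policy are more elaborate.

```python
from typing import Callable, List, Tuple

# Illustrative sketch only: models are black boxes, and collaboration happens
# solely by exchanging confident pseudo-labeled real-world images.
Image = object          # stand-in for a real-world image
Label = object          # stand-in for a per-pixel segmentation map
Sample = Tuple[Image, Label]


def co_training(
    model_a, model_b,
    synthetic_set: List[Sample],      # synthetic images with automatic labels
    real_images: List[Image],         # unlabeled real-world images
    train_fn: Callable,               # (model, samples) -> retrained model
    predict_fn: Callable,             # (model, image) -> (label, confidence in [0, 1])
    rounds: int = 3,                  # hypothetical number of collaboration rounds
    threshold: float = 0.9,           # hypothetical confidence cutoff
):
    """Mutually improve two domain-adapted models via pseudo-label exchange."""
    for _ in range(rounds):
        # Each model pseudo-labels the unlabeled real-world (target) images.
        preds_a = [(img, *predict_fn(model_a, img)) for img in real_images]
        preds_b = [(img, *predict_fn(model_b, img)) for img in real_images]

        # Keep only the confident pseudo-labels from each model.
        confident_a = [(img, lab) for img, lab, c in preds_a if c >= threshold]
        confident_b = [(img, lab) for img, lab, c in preds_b if c >= threshold]

        # Cross-exchange: each model is retrained on synthetic data plus the
        # *other* model's confident pseudo-labels, keeping errors decorrelated.
        # No loss functions are modified and no features are aligned here.
        model_a = train_fn(model_a, synthetic_set + confident_b)
        model_b = train_fn(model_b, synthetic_set + confident_a)

    return model_a, model_b
```

In this sketch, the caller supplies `train_fn` and `predict_fn` for whatever segmentation framework is in use, which is what makes the loop framework-agnostic in the same spirit as the black-box collaboration described in the abstract.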
