Paper title
ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation
Paper authors
Paper abstract
While fully-supervised deep learning yields good models for urban scene semantic segmentation, these models struggle to generalize to new environments with, for instance, different lighting or weather conditions. In addition, producing the extensive pixel-level annotations that the task requires comes at a great cost. Unsupervised domain adaptation (UDA) is one approach that tries to address these issues in order to make such systems more scalable. In particular, self-supervised learning (SSL) has recently become an effective strategy for UDA in semantic segmentation. At the core of such methods lies `pseudo-labeling', that is, the practice of assigning high-confidence class predictions as pseudo-labels, subsequently used as true labels, for target data. To collect pseudo-labels, previous works often rely on the highest softmax score, which we argue here is an unfavorable confidence measure. In this work, we propose Entropy-guided Self-supervised Learning (ESL), leveraging entropy as the confidence indicator for producing more accurate pseudo-labels. On different UDA benchmarks, ESL consistently outperforms strong SSL baselines and achieves state-of-the-art results.
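The contrast at the heart of the abstract, selecting pseudo-labels by the highest softmax score versus by prediction entropy, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the thresholds, function names, and the `-1` ignore index are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_labels_maxprob(probs, threshold=0.9):
    # Baseline selection: keep predictions whose top softmax
    # score exceeds a threshold; mark the rest as ignored (-1).
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1
    return labels

def pseudo_labels_entropy(probs, threshold=0.5):
    # Entropy-guided selection: keep predictions whose normalized
    # entropy (in [0, 1]) is low, i.e. the model is confident
    # across the whole class distribution, not just its top score.
    num_classes = probs.shape[-1]
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    ent /= np.log(num_classes)
    labels = probs.argmax(axis=-1)
    labels[ent > threshold] = -1
    return labels
```

Both selectors work on any array whose last axis indexes classes, e.g. per-pixel logits of shape `(H, W, C)` in segmentation. The difference is what counts as "confident": the max-softmax rule looks only at the winning class, while the entropy rule penalizes predictions whose probability mass is spread over several classes.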