Paper Title
FogAdapt: Self-Supervised Domain Adaptation for Semantic Segmentation of Foggy Images
Paper Authors
Abstract
This paper presents FogAdapt, a novel approach for domain adaptation of semantic segmentation for dense foggy scenes. Although significant research has been directed to reduce the domain shift in semantic segmentation, adaptation to scenes with adverse weather conditions remains an open question. Large variations in the visibility of the scene due to weather conditions, such as fog, smog, and haze, exacerbate the domain shift, thus making unsupervised adaptation in such scenarios challenging. We propose a self-entropy and multi-scale information augmented self-supervised domain adaptation method (FogAdapt) to minimize the domain shift in foggy scene segmentation. Supported by the empirical evidence that an increase in fog density results in high self-entropy of segmentation probabilities, we introduce a self-entropy-based loss function to guide the adaptation method. Furthermore, inferences obtained at different image scales are combined and weighted by their uncertainty to generate scale-invariant pseudo-labels for the target domain. These scale-invariant pseudo-labels are robust to visibility and scale variations. We evaluate the proposed model on two adaptation scenarios: real clear-weather scenes to real foggy scenes, and synthetic non-foggy images to real foggy scenes. Our experiments demonstrate that FogAdapt significantly outperforms the current state-of-the-art in semantic segmentation of foggy images. Specifically, under standard settings, FogAdapt gains 3.8% on Foggy Zurich, 6.0% on Foggy Driving-dense, and 3.6% on Foggy Driving in mIoU over state-of-the-art (SOTA) methods when adapting from Cityscapes to Foggy Zurich.
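The two mechanisms the abstract describes — a self-entropy loss over segmentation probabilities and uncertainty-weighted fusion of multi-scale predictions into pseudo-labels — can be sketched as follows. This is a minimal numpy illustration of the general idea, not the paper's implementation; the function names, the inverse-entropy weighting scheme, and the entropy threshold are illustrative assumptions.

```python
import numpy as np

def self_entropy(probs, axis=0, eps=1e-8):
    # Per-pixel Shannon entropy of class probabilities.
    # probs: (C, H, W) softmax output; higher entropy = more uncertain pixel.
    return -(probs * np.log(probs + eps)).sum(axis=axis)

def entropy_loss(probs):
    # Self-entropy minimization objective: mean per-pixel entropy.
    # Minimizing this pushes target-domain predictions toward confident outputs.
    return self_entropy(probs).mean()

def fuse_scales(prob_maps, entropy_thresh=1.0, ignore_label=255):
    # prob_maps: list of (C, H, W) probability maps from different input
    # scales, already resized to a common resolution.
    # Weight each scale by inverse entropy (illustrative choice):
    # lower uncertainty -> larger contribution to the fused prediction.
    weights = [1.0 / (self_entropy(p) + 1e-8) for p in prob_maps]
    total = sum(weights)
    fused = sum(w[None] * p for w, p in zip(weights, prob_maps)) / total[None]
    pseudo = fused.argmax(axis=0)
    # Discard pixels whose fused prediction is still too uncertain,
    # so only reliable pixels supervise self-training.
    pseudo[self_entropy(fused) > entropy_thresh] = ignore_label
    return pseudo
```

In a self-training loop, `fuse_scales` would produce target-domain pseudo-labels (with uncertain pixels ignored), while `entropy_loss` would be added to the segmentation objective on unlabeled foggy images.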