Paper Title

Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation

Authors

Haoran Wang, Tong Shen, Wei Zhang, Lingyu Duan, Tao Mei

Abstract

Despite great progress in supervised semantic segmentation, a large performance drop is usually observed when deploying the model in the wild. Domain adaptation methods tackle the issue by aligning the source domain and the target domain. However, most existing methods attempt to perform the alignment from a holistic view, ignoring the underlying class-level data structure in the target domain. To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains. We adopt a fine-grained domain discriminator that not only plays as a domain distinguisher, but also differentiates domains at class level. The traditional binary domain labels are also generalized to domain encodings as the supervision signal to guide the fine-grained feature alignment. An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment compared to other state-of-the-art methods. Our method is easy to implement and its effectiveness is evaluated on three classical domain adaptation tasks, i.e., GTA5 to Cityscapes, SYNTHIA to Cityscapes and Cityscapes to Cross-City. Large performance gains show that our method outperforms other global feature alignment based and class-wise alignment based counterparts. The code is publicly available at https://github.com/JDAI-CV/FADA.
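To make the core idea more concrete, below is a minimal PyTorch-style sketch of what a fine-grained adversarial loss of this kind could look like. The discriminator architecture, the use of detached softmax predictions as soft "domain encodings", and the soft cross-entropy form are illustrative assumptions based on the abstract only; they are not necessarily the exact implementation in the linked FADA repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


# Hypothetical fine-grained discriminator: instead of a single
# real/fake score, it outputs 2*K channels per pixel, i.e. one
# (source, target) pair of logits for each of the K semantic classes.
class FineGrainedDiscriminator(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 2 * num_classes, kernel_size=1),
        )

    def forward(self, features):
        return self.net(features)  # (B, 2K, H, W)


def domain_encoding(seg_logits, is_source):
    """Generalize the binary domain label to a soft 'domain encoding':
    the class probabilities predicted by the segmentation head are placed
    in the source half or the target half of a 2K-channel tensor."""
    probs = F.softmax(seg_logits, dim=1).detach()  # (B, K, H, W)
    zeros = torch.zeros_like(probs)
    if is_source:
        return torch.cat([probs, zeros], dim=1)    # source block active
    return torch.cat([zeros, probs], dim=1)        # target block active


def fine_grained_adv_loss(disc_logits, encoding):
    """Soft cross-entropy between the discriminator's 2K-way prediction
    and the class-aware domain encoding."""
    log_probs = F.log_softmax(disc_logits, dim=1)
    return -(encoding * log_probs).sum(dim=1).mean()
```

In such a setup, the discriminator would be trained with this loss on features from both domains, while the segmentation network is updated to fool it on target samples, which encourages class-level rather than purely global feature alignment.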
