Paper Title

UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised Contrastive Learning

Paper Authors

Wang, Yongzhen; Yan, Xuefeng; Wang, Fu Lee; Xie, Haoran; Yang, Wenhan; Wei, Mingqiang; Qin, Jing

Abstract

While the wisdom of training an image dehazing model on synthetic hazy data can alleviate the difficulty of collecting real-world hazy/clean image pairs, it brings the well-known domain shift problem. From a different yet new perspective, this paper explores contrastive learning with an adversarial training effort to leverage unpaired real-world hazy and clean images, so that the gap between synthetic and real-world haze is avoided. We propose an effective unsupervised contrastive learning paradigm for image dehazing, dubbed UCL-Dehaze. Unpaired real-world clean and hazy images are easily captured, and serve as the important positive and negative samples, respectively, when training our UCL-Dehaze network. To train the network more effectively, we formulate a new self-contrastive perceptual loss function, which encourages the restored images to approach the positive samples and keep away from the negative samples in the embedding space. Besides the overall network architecture of UCL-Dehaze, adversarial training is utilized to align the distributions between the positive samples and the dehazed images. Compared with recent image dehazing works, UCL-Dehaze does not require paired data during training and utilizes unpaired positive/negative data to better enhance the dehazing performance. We conduct comprehensive experiments to evaluate our UCL-Dehaze and demonstrate its superiority over state-of-the-art methods, even when only 1,800 unpaired real-world images are used to train our network. The source code is available at https://github.com/yz-wang/UCL-Dehaze.
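To make the self-contrastive idea in the abstract concrete, the sketch below shows one plausible way such a loss could be implemented in PyTorch: the restored image is pulled toward an unpaired clean (positive) sample and pushed away from a hazy (negative) sample in a frozen VGG-16 feature space, using an L1 distance ratio. The backbone choice, layer index, and ratio form are illustrative assumptions; the exact loss formulation used by UCL-Dehaze is defined in the paper and its repository.

```python
import torch
import torch.nn as nn
from torchvision import models


class SelfContrastivePerceptualLossSketch(nn.Module):
    """Illustrative contrastive perceptual loss (not the paper's exact loss).

    Pulls the restored image toward the clean (positive) sample and pushes it
    away from the hazy (negative) sample in a frozen VGG-16 feature space.
    """

    def __init__(self, layer_idx: int = 16, eps: float = 1e-7):
        super().__init__()
        # Frozen VGG-16 features up to `layer_idx` serve as the embedding space.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.l1 = nn.L1Loss()
        self.eps = eps

    def forward(self, restored, positive, negative):
        f_r = self.vgg(restored)   # embedding of the dehazed output
        f_p = self.vgg(positive)   # embedding of an unpaired clean image
        f_n = self.vgg(negative)   # embedding of the hazy input
        d_pos = self.l1(f_r, f_p)  # distance to the positive (to be minimized)
        d_neg = self.l1(f_r, f_n)  # distance to the negative (to be maximized)
        # Ratio form: small when the output is close to the positive sample
        # and far from the negative sample in the embedding space.
        return d_pos / (d_neg + self.eps)


# Hypothetical usage inside a training step (names are placeholders):
# loss_cr = SelfContrastivePerceptualLossSketch()(dehazed, clean_unpaired, hazy_input)
```

A single VGG layer is used here for brevity; distances aggregated over multiple layers with per-layer weights would be an equally plausible variant.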
