Paper Title
Supervised and Unsupervised Learning of Parameterized Color Enhancement
Paper Authors
Paper Abstract
We treat the problem of color enhancement as an image translation task, which we tackle using both supervised and unsupervised learning. Unlike traditional image-to-image generators, our translation is performed using a global parameterized color transformation instead of learning to directly map image information. In the supervised case, every training image is paired with a desired target image, and a convolutional neural network (CNN) learns the parameters of the transformation from the expert-retouched images. In the unpaired case, we employ two-way generative adversarial networks (GANs) to learn these parameters and apply a circularity constraint. We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark. Moreover, we show the generalization capability of our method by applying it to photos from the early 20th century and to dark video frames.
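To make the idea of a "global parameterized color transformation predicted by a CNN" concrete, below is a minimal PyTorch sketch. The abstract does not specify the exact parameterization, so the quadratic color basis, the 3x10 parameter matrix, and the small network used here are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch (PyTorch): a CNN predicts a global color-transform matrix,
# which is then applied identically to every pixel of the input image.
# The quadratic basis and network shape are assumptions for illustration.
import torch
import torch.nn as nn


def quadratic_color_basis(img):
    """Map each RGB pixel to a 10-dim basis: [1, r, g, b, r^2, g^2, b^2, rg, rb, gb]."""
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    ones = torch.ones_like(r)
    return torch.cat([ones, r, g, b, r * r, g * g, b * b, r * g, r * b, g * b], dim=1)


class ParamPredictor(nn.Module):
    """Small CNN that predicts a 3x10 global transform matrix from the input image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3 * 10)  # 30 global parameters

    def forward(self, img):
        f = self.features(img).flatten(1)
        return self.head(f).view(-1, 3, 10)


def apply_global_transform(img, params):
    """Apply the predicted 3x10 matrix to the quadratic color basis of every pixel."""
    basis = quadratic_color_basis(img)                     # (N, 10, H, W)
    out = torch.einsum('nkp,nphw->nkhw', params, basis)    # (N, 3, H, W)
    return out.clamp(0.0, 1.0)


net = ParamPredictor()
x = torch.rand(1, 3, 256, 256)  # input photo in [0, 1]
enhanced = apply_global_transform(x, net(x))

# Supervised (paired) setting: minimize a reconstruction loss (e.g. L1/L2)
# between `enhanced` and the expert-retouched target image.
# Unpaired setting: use two such generators in a CycleGAN-style setup with a
# cycle (circularity) constraint, as described in the abstract.
```

Because the transform is global and low-dimensional, the CNN can predict its parameters from a downsampled input, and the same matrix can then be applied to the full-resolution image; this is the main contrast with generators that synthesize the output pixels directly.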