Paper Title

Benefiting from Bicubically Down-Sampled Images for Learning Real-World Image Super-Resolution

Paper Authors

Mohammad Saeed Rad, Thomas Yu, Claudiu Musat, Hazim Kemal Ekenel, Behzad Bozorgtabar, Jean-Philippe Thiran

Paper Abstract

Super-resolution (SR) has traditionally been based on pairs of high-resolution images (HR) and their low-resolution (LR) counterparts obtained artificially with bicubic downsampling. However, in real-world SR, there is a large variety of realistic image degradations and analytically modeling these realistic degradations can prove quite difficult. In this work, we propose to handle real-world SR by splitting this ill-posed problem into two comparatively more well-posed steps. First, we train a network to transform real LR images to the space of bicubically downsampled images in a supervised manner, by using both real LR/HR pairs and synthetic pairs. Second, we take a generic SR network trained on bicubically downsampled images to super-resolve the transformed LR image. The first step of the pipeline addresses the problem by registering the large variety of degraded images to a common, well understood space of images. The second step then leverages the already impressive performance of SR on bicubically downsampled images, sidestepping the issues of end-to-end training on datasets with many different image degradations. We demonstrate the effectiveness of our proposed method by comparing it to recent methods in real-world SR and show that our proposed approach outperforms the state-of-the-art works in terms of both qualitative and quantitative results, as well as results of an extensive user study conducted on several real image datasets.
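
To make the two-step pipeline described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the module names (DomainTranslationNet, GenericSRNet) and the layer choices are placeholder assumptions, and in practice one would plug in the paper's trained domain-translation network together with an off-the-shelf SR backbone pre-trained on bicubically down-sampled data.

```python
# Minimal sketch of the two-step real-world SR pipeline (assumed names/architectures).
import torch
import torch.nn as nn


class DomainTranslationNet(nn.Module):
    """Step 1 (sketch): map a real LR image into the space of bicubically
    down-sampled images. Stand-in for the network trained on real LR/HR
    and synthetic pairs; the actual architecture is not reproduced here."""

    def __init__(self, channels: int = 3, features: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, real_lr: torch.Tensor) -> torch.Tensor:
        # Residual correction: the output stays close to the input LR image.
        return real_lr + self.body(real_lr)


class GenericSRNet(nn.Module):
    """Step 2 (sketch): a generic x4 SR network assumed to be pre-trained on
    bicubically down-sampled images (e.g. an off-the-shelf model)."""

    def __init__(self, channels: int = 3, features: int = 64, scale: int = 4):
        super().__init__()
        self.head = nn.Conv2d(channels, features, 3, padding=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(features, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, bicubic_like_lr: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.head(bicubic_like_lr))


@torch.no_grad()
def super_resolve(real_lr: torch.Tensor,
                  translator: DomainTranslationNet,
                  sr_net: GenericSRNet) -> torch.Tensor:
    """Inference pipeline: real LR -> bicubic-like LR -> HR estimate."""
    bicubic_like = translator(real_lr)   # step 1: domain translation
    return sr_net(bicubic_like)          # step 2: standard SR


if __name__ == "__main__":
    lr = torch.rand(1, 3, 64, 64)        # dummy "real-world" LR image
    hr = super_resolve(lr, DomainTranslationNet(), GenericSRNet())
    print(hr.shape)                      # torch.Size([1, 3, 256, 256])
```

The key design point mirrored here is that the two stages are decoupled: only the translation network needs to learn the real-world degradations, while the SR network stays a fixed, generic model trained on bicubic data.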
