Paper Title

MMSR: Multiple-Model Learned Image Super-Resolution Benefiting From Class-Specific Image Priors

Paper Authors

Cansu Korkmaz, A. Murat Tekalp, Zafer Dogan

Paper Abstract

Assuming a known degradation model, the performance of a learned image super-resolution (SR) model depends on how well the variety of image characteristics within the training set matches those in the test set. As a result, the performance of an SR model varies noticeably from image to image over a test set depending on whether characteristics of specific images are similar to those in the training set or not. Hence, in general, a single SR model cannot generalize well enough for all types of image content. In this work, we show that training multiple SR models for different classes of images (e.g., for text, texture, etc.) to exploit class-specific image priors and employing a post-processing network that learns how to best fuse the outputs produced by these multiple SR models surpasses the performance of state-of-the-art generic SR models. Experimental results clearly demonstrate that the proposed multiple-model SR (MMSR) approach significantly outperforms a single pre-trained state-of-the-art SR model both quantitatively and visually. It even exceeds the performance of the best single class-specific SR model trained on similar text or texture images.
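The overall MMSR idea described above — several class-specific SR models whose outputs are fused pixel-wise by a post-processing network — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two "class-specific models" are hypothetical stand-ins (nearest-neighbor upscaling with and without smoothing), and the learned fusion network is replaced by a fixed softmax-weighted average, since the abstract does not specify the actual architectures.

```python
import numpy as np

def sr_text(lr, scale=2):
    # Stand-in for an SR model trained on text images:
    # nearest-neighbor upscaling preserves sharp edges.
    return np.kron(lr, np.ones((scale, scale)))

def sr_texture(lr, scale=2):
    # Stand-in for an SR model trained on texture images:
    # upscaling followed by a crude 3x3 box blur as a "smooth prior".
    hr = np.kron(lr, np.ones((scale, scale)))
    padded = np.pad(hr, 1, mode="edge")
    return sum(padded[i:i + hr.shape[0], j:j + hr.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fuse(outputs, logits):
    # Stand-in for the fusion post-processing network: a pixel-wise
    # convex combination of the model outputs. A real fusion network
    # would predict the per-pixel logits from the outputs themselves.
    w = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax over models
    return sum(wi * out for wi, out in zip(w, outputs))

rng = np.random.default_rng(0)
lr = rng.random((8, 8))                      # toy low-resolution image
outs = [sr_text(lr), sr_texture(lr)]         # run every class-specific model
logits = np.zeros((len(outs),) + outs[0].shape)  # equal weights -> plain average
hr = fuse(outs, logits)
assert hr.shape == (16, 16)
```

With equal logits the fusion reduces to averaging the two outputs; the point of the learned fusion network in MMSR is precisely to make these weights vary per pixel, favoring the text model near glyph edges and the texture model in smooth regions.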
