Paper Title
Meta-Learned Kernel For Blind Super-Resolution Kernel Estimation
Paper Authors
Paper Abstract
Recent image degradation estimation methods have enabled single-image super-resolution (SR) approaches to better upsample real-world images. Among these methods, explicit kernel estimation approaches have demonstrated unprecedented performance at handling unknown degradations. Nonetheless, a number of limitations constrain their efficacy when used by downstream SR models. Specifically, this family of methods yields i) excessive inference time due to long per-image adaptation times and ii) inferior image fidelity due to kernel mismatch. In this work, we introduce a learning-to-learn approach that meta-learns from the information contained in a distribution of images, thereby enabling significantly faster adaptation to new images with substantially improved performance in both kernel estimation and image fidelity. Specifically, we meta-train a kernel-generating GAN, named MetaKernelGAN, on a range of tasks, such that when a new image is presented, the generator starts from an informed kernel estimate and the discriminator starts with a strong capability to distinguish between patch distributions. Compared with state-of-the-art methods, our experiments show that MetaKernelGAN better estimates the magnitude and covariance of the kernel, leading to state-of-the-art blind SR results within a similar computational regime when combined with a non-blind SR model. Through supervised learning of an unsupervised learner, our method maintains the generalizability of the unsupervised learner, improves the optimization stability of kernel estimation, and hence of image adaptation, and leads to faster inference with a speedup of 14.24x to 102.1x over existing methods.
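The core idea of the abstract, meta-learning an initialization so that per-image adaptation needs far fewer optimization steps, can be illustrated with a deliberately simplified toy sketch. The sketch below is hypothetical and not the paper's method: it replaces the GAN with direct least-squares fitting of a 1-D blur kernel, and uses a Reptile-style outer update in place of the paper's actual meta-objective. The function names (`sample_task`, `adapt`) and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy illustration (NOT the MetaKernelGAN algorithm): meta-learn an
# initialization for a kernel estimator so that adaptation to a new
# "image" (task) converges in fewer gradient steps than a random init.

rng = np.random.default_rng(0)

def sample_task():
    """A task is a ground-truth 1-D Gaussian blur kernel with random width."""
    sigma = rng.uniform(0.5, 2.0)
    x = np.arange(-3, 4)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def loss_and_grad(theta, target):
    """Squared error between the current kernel estimate and the target."""
    diff = theta - target
    return (diff**2).sum(), 2 * diff

def adapt(theta0, target, steps, lr=0.3):
    """Inner loop: per-task gradient-descent adaptation (stands in for
    the per-image GAN optimization)."""
    theta = theta0.copy()
    for _ in range(steps):
        _, g = loss_and_grad(theta, target)
        theta -= lr * g
    return theta

# Meta-training: a Reptile-style outer update pulls the shared
# initialization toward each task's adapted solution.
meta_theta = rng.normal(size=7)
for _ in range(200):
    target = sample_task()
    adapted = adapt(meta_theta, target, steps=5)
    meta_theta += 0.1 * (adapted - meta_theta)

# At test time, the meta-learned init reaches a lower loss than a random
# init under the same small adaptation budget ("informed kernel estimate").
new_task = sample_task()
loss_meta, _ = loss_and_grad(adapt(meta_theta, new_task, steps=3), new_task)
loss_rand, _ = loss_and_grad(adapt(rng.normal(size=7), new_task, steps=3), new_task)
```

Under this simplification, `meta_theta` converges toward an "average" kernel over the task distribution, which mirrors the abstract's claim that the generator starts from an informed kernel estimate rather than from scratch; the same budget of inner steps then yields a much better fit.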