Paper Title
DynaVSR: Dynamic Adaptive Blind Video Super-Resolution
Paper Authors
Paper Abstract
Most conventional supervised super-resolution (SR) algorithms assume that low-resolution (LR) data is obtained by downscaling high-resolution (HR) data with a fixed known kernel, but such an assumption often does not hold in real scenarios. Some recent blind SR algorithms have been proposed to estimate different downscaling kernels for each input LR image. However, they suffer from heavy computational overhead, making them infeasible for direct application to videos. In this work, we present DynaVSR, a novel meta-learning-based framework for real-world video SR that enables efficient downscaling model estimation and adaptation to the current input. Specifically, we train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation. Experimental results show that DynaVSR consistently improves the performance of state-of-the-art video SR models by a large margin, with an order of magnitude faster inference time compared to existing blind SR approaches.
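To make the input-aware adaptation idea concrete, below is a minimal NumPy sketch, not the authors' implementation: the one-parameter "SR network", the fixed `[1, 2, 1]` blur kernel standing in for the estimated downscaling model, and all function names are illustrative assumptions. The inner loop downscales the LR frames further with the estimated kernel to form (pseudo-LR, LR) training pairs, fine-tunes the SR model on them, and only then super-resolves the actual input.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(frame, kernel, scale=2):
    """Assumed degradation: separable blur with a 1-D kernel, then subsampling."""
    k = np.asarray(kernel, dtype=float)
    k = k / k.sum()
    pad = len(k) // 2
    fp = np.pad(frame, pad, mode="edge")
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, fp)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, blurred)
    return blurred[::scale, ::scale]

def sr(lr, w, scale=2):
    """Stand-in 'SR network': nearest-neighbour upsampling with one learnable gain w."""
    return w * np.kron(lr, np.ones((scale, scale)))

def adapt(lr_frames, est_kernel, w=0.5, step_size=0.5, steps=20):
    """Inner-loop adaptation: build (pseudo-LR -> LR) pairs with the estimated
    kernel and fine-tune w on them before super-resolving the real input."""
    for _ in range(steps):
        grad = 0.0
        for f in lr_frames:
            up = np.kron(downscale(f, est_kernel), np.ones((2, 2)))  # pseudo-LR, upsampled
            grad += 2.0 * np.mean(up * (w * up - f))  # d/dw of mean((w*up - f)^2)
        w -= step_size * grad / len(lr_frames)
    return w

lr_frames = [rng.random((16, 16)) for _ in range(3)]  # toy LR input clip
est_kernel = [1.0, 2.0, 1.0]  # stand-in for the kernel a downscaling module would estimate
w_adapted = adapt(lr_frames, est_kernel)
sr_frame = sr(lr_frames[0], w_adapted)  # 32x32 output from the adapted model
```

In the actual framework the adapted quantity is the full set of SR network weights and the degradation comes from a learned multi-frame downscaling module; the sketch keeps only the structure of the loop: estimate the degradation, self-supervise on pairs it generates, then apply the adapted model.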