Paper Title

Boosting the Performance of Plug-and-Play Priors via Denoiser Scaling

Paper Authors

Xiaojian Xu, Jiaming Liu, Yu Sun, Brendt Wohlberg, Ulugbek S. Kamilov

Paper Abstract

Plug-and-play priors (PnP) is an image reconstruction framework that uses an image denoiser as an imaging prior. Unlike traditional regularized inversion, PnP does not require the prior to be expressible in the form of a regularization function. This flexibility enables PnP algorithms to exploit the most effective image denoisers, leading to their state-of-the-art performance in various imaging tasks. In this paper, we propose a new denoiser scaling technique to explicitly control the amount of PnP regularization. Traditionally, the performance of PnP algorithms is controlled via intrinsic parameters of the denoiser related to the noise variance. However, many powerful denoisers, such as the ones based on convolutional neural networks (CNNs), do not have tunable parameters that would allow controlling their influence within PnP. To address this issue, we introduce a scaling parameter that adjusts the magnitude of the denoiser input and output. We theoretically justify the denoiser scaling from the perspectives of proximal optimization, statistical estimation, and consensus equilibrium. Finally, we provide numerical experiments demonstrating the ability of denoiser scaling to systematically improve the performance of PnP for denoising CNN priors that do not have explicitly tunable parameters.
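The scaling idea in the abstract can be made concrete with a short sketch. Below is a minimal, illustrative Python implementation, not the paper's code: the wrapper D_mu(x) = (1/mu) * D(mu * x) is one natural way to "adjust the magnitude of the denoiser input and output" as described, with mu = 1 recovering the original denoiser. The `denoiser` callable and the `prox_data` operator for the data-fidelity proximal step are assumptions introduced here for illustration.

```python
import numpy as np

def scaled_denoiser(denoiser, mu):
    # Wrap a denoiser D into D_mu(x) = (1/mu) * D(mu * x): the input is
    # scaled up by mu and the output scaled back down, so mu acts as an
    # explicit regularization knob for denoisers (e.g. CNN denoisers)
    # that expose no tunable noise-level parameter.
    return lambda x: denoiser(mu * x) / mu

def pnp_admm(y, prox_data, denoiser, mu=1.0, num_iters=50):
    # Sketch of a PnP-ADMM-style loop with the prior step replaced by
    # the scaled denoiser. `prox_data` is a hypothetical callable for
    # the data-fidelity proximal step (an assumption, not the paper's
    # API).
    D_mu = scaled_denoiser(denoiser, mu)
    x = np.zeros_like(y)
    z = np.zeros_like(y)
    u = np.zeros_like(y)
    for _ in range(num_iters):
        x = prox_data(z - u, y)   # data-fidelity proximal step
        z = D_mu(x + u)           # prior step: scaled denoiser
        u = u + x - z             # dual variable update
    return x
```

In this sketch, sweeping mu trades off data fidelity against denoiser-induced regularization, which is the role the abstract assigns to the scaling parameter.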
