Paper Title
Solving Inverse Problems With Deep Neural Networks -- Robustness Included?
Paper Authors
Paper Abstract
In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems. Before such approaches can find application in safety-critical fields, a verification of their reliability appears mandatory. Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks. In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts. The present article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems. This covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging (using the NYU-fastMRI dataset). Our main focus is on computing adversarial perturbations of the measurements that maximize the reconstruction error. A distinctive feature of our approach is the quantitative and qualitative comparison with total-variation minimization, which serves as a provably robust reference method. In contrast to previous findings, our results reveal that standard end-to-end network architectures are not only resilient against statistical noise, but also against adversarial perturbations. All considered networks are trained by common deep learning techniques, without sophisticated defense strategies.
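To make the setting concrete, the following is a minimal sketch of the standard formulation behind the abstract's two key ingredients; the notation (measurement operator A, ground-truth image x_0, perturbation budget ε, noise level η, and reconstruction map Rec) is generic and not taken verbatim from the paper.

% Adversarial perturbation of the measurements y = A x_0: find the
% perturbation delta within an l2-ball of radius epsilon that maximizes
% the reconstruction error of a fixed reconstruction method Rec.
\[
  \delta^{\ast} \in \operatorname*{arg\,max}_{\|\delta\|_{2} \le \varepsilon}
  \bigl\| \mathrm{Rec}(y + \delta) - x_{0} \bigr\|_{2}
\]
% Total-variation minimization, used as the provably robust reference method:
\[
  \mathrm{Rec}_{\mathrm{TV}}(y) \in \operatorname*{arg\,min}_{x}
  \|x\|_{\mathrm{TV}} \quad \text{s.t.} \quad \|A x - y\|_{2} \le \eta
\]

In this reading, the comparison reported in the abstract amounts to evaluating the worst-case error of trained end-to-end networks and of TV minimization under perturbations δ of the same budget ε.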