Paper Title


NESTANets: Stable, accurate and efficient neural networks for analysis-sparse inverse problems

Paper Authors

Maksym Neyra-Nesterenko, Ben Adcock

Paper Abstract


Solving inverse problems is a fundamental component of science, engineering and mathematics. With the advent of deep learning, deep neural networks have significant potential to outperform existing state-of-the-art, model-based methods for solving inverse problems. However, it is known that current data-driven approaches face several key issues, notably hallucinations, instabilities and unpredictable generalization, with potential impact in critical tasks such as medical imaging. This raises the key question of whether or not one can construct deep neural networks for inverse problems with explicit stability and accuracy guarantees. In this work, we present a novel construction of accurate, stable and efficient neural networks for inverse problems with general analysis-sparse models, termed NESTANets. To construct the network, we first unroll NESTA, an accelerated first-order method for convex optimization. The slow convergence of this method leads to deep networks with low efficiency. Therefore, to obtain shallow, and consequently more efficient, networks we combine NESTA with a novel restart scheme. We then use compressed sensing techniques to demonstrate accuracy and stability. We showcase this approach in the case of Fourier imaging, and verify its stability and performance via a series of numerical experiments. The key impact of this work is demonstrating the construction of efficient neural networks based on unrolling with guaranteed stability and accuracy.
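The abstract describes unrolling NESTA, an accelerated first-order method for smoothed analysis-sparse minimization, into a fixed-depth network and combining it with a restart scheme so that fewer iterations (and hence a shallower network) suffice. The sketch below illustrates that idea only; it is not the paper's exact NESTANet construction. It assumes the measurement matrix A has orthonormal rows (as in subsampled Fourier imaging), so projection onto the data-fidelity constraint has a closed form, and it uses a simplified FISTA-style accelerated projected-gradient step with momentum resets and smoothing decreases at each restart. All names, sizes, and parameters (n_layers, n_restarts, mu0) are hypothetical and for illustration.

```python
import numpy as np

def nesta_unrolled(A, W, y, eps, mu0, n_layers, n_restarts):
    """Minimal sketch (not the paper's exact scheme): unrolled, restarted
    accelerated projected gradient for  min ||W x||_1  s.t. ||A x - y||_2 <= eps,
    with the l1 norm replaced by its mu-smoothed (Huber) surrogate.
    Assumes A has orthonormal rows (A @ A.T = I), e.g. subsampled Fourier rows,
    so the projection onto the constraint set is explicit."""
    LW2 = np.linalg.norm(W, 2) ** 2          # gradient Lipschitz constant is LW2 / mu

    def grad_smoothed_l1(x, mu):
        # Gradient of sum_i huber_mu((W x)_i): clip(Wx/mu, -1, 1) pulled back by W^T.
        return W.T @ np.clip(W @ x / mu, -1.0, 1.0)

    def project(x):
        # Projection onto {x : ||A x - y|| <= eps}; exact when A @ A.T = I.
        r = A @ x - y
        nr = np.linalg.norm(r)
        return x if nr <= eps else x - A.T @ (r * (1.0 - eps / nr))

    x, mu = A.T @ y, mu0                      # standard warm start
    for _ in range(n_restarts):               # each restart = one block of layers
        v, t, step = x.copy(), 1.0, mu / LW2  # reset momentum; step = 1 / Lipschitz
        for _ in range(n_layers):             # fixed depth: one network layer per iteration
            x_new = project(v - step * grad_smoothed_l1(v, mu))
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            v = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        mu *= 0.5                             # tighten the smoothing after each restart
    return x

# Tiny synthetic demo with hypothetical sizes: gradient-sparse signal, orthonormal-row A.
rng = np.random.default_rng(0)
n, m = 64, 32
x_true = np.repeat([0.0, 1.0, 0.0, -1.0], n // 4)   # piecewise-constant signal
A = np.linalg.qr(rng.standard_normal((n, m)))[0].T  # m x n with A @ A.T = I
W = np.diff(np.eye(n), axis=0)                      # finite-difference analysis operator
noise = 0.01 * rng.standard_normal(m)
y = A @ x_true + noise
x_rec = nesta_unrolled(A, W, y, eps=np.linalg.norm(noise), mu0=0.1,
                       n_layers=20, n_restarts=5)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In this reading, each inner iteration corresponds to one network layer and each restart to a block of layers, so the restart scheme is what keeps the unrolled network shallow for a given target accuracy.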
