Title
Deep Autofocus for Synthetic Aperture Sonar
Authors
Abstract
Synthetic aperture sonar (SAS) requires precise positional and environmental information to produce well-focused output during the image reconstruction step. However, errors in these measurements are commonly present, resulting in defocused imagery. To overcome these issues, an \emph{autofocus} algorithm is employed as a post-processing step after image reconstruction for the purpose of improving image quality using the image content itself. These algorithms are usually iterative and metric-based in that they seek to optimize an image sharpness metric. In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem. We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus. Our formulation has the advantages of being non-iterative (and thus fast) and not requiring ground truth focused-defocused image pairs, as is often required by other deblurring deep learning methods. We compare our technique against a set of common sharpness metrics optimized using gradient descent over a real-world dataset. Our results demonstrate that Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost. We conclude that our proposed Deep Autofocus can provide a more favorable cost-quality trade-off than state-of-the-art alternatives, with significant potential for future research.
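To make the baseline concrete: the iterative, metric-based autofocus approach the abstract compares against can be sketched on a toy scene. This is a minimal illustration, not the paper's Deep Autofocus network or its exact benchmark code; the single quadratic phase coefficient, the `apply_phase`/`entropy` helpers, and all parameter values are illustrative assumptions. It corrupts a synthetic point-scatterer image with a quadratic phase error in the along-track spectrum, then estimates a correction by gradient descent on an image-entropy sharpness metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a handful of point scatterers in a complex-valued image.
img = np.zeros((64, 64), dtype=complex)
img[rng.integers(0, 64, size=6), rng.integers(0, 64, size=6)] = 1.0

# Normalized along-track spatial frequency in [-1, 1), FFT ordering.
q = np.fft.fftfreq(64) * 2.0

def apply_phase(x, a):
    """Apply a quadratic phase a*q^2 (a = radians at the band edge)
    to the along-track spectrum of x (illustrative defocus model)."""
    spec = np.fft.fft(x, axis=1)
    return np.fft.ifft(spec * np.exp(1j * a * q**2)[None, :], axis=1)

def entropy(x):
    """Image-entropy sharpness metric: lower means sharper."""
    p = np.abs(x) ** 2
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

# Defocus the scene with a known quadratic phase error.
a_true = 2.0
blurred = apply_phase(img, a_true)

# Metric-based autofocus baseline: gradient descent on the entropy of
# the corrected image (central finite differences for the gradient,
# with a backtracking step so the metric decreases monotonically).
a_hat, lr, eps = 0.0, 0.2, 1e-4
E = entropy(apply_phase(blurred, -a_hat))
for _ in range(100):
    g = (entropy(apply_phase(blurred, -(a_hat + eps)))
         - entropy(apply_phase(blurred, -(a_hat - eps)))) / (2 * eps)
    step = lr
    while step > 1e-6:  # shrink the step until the metric improves
        cand = a_hat - step * g
        E_cand = entropy(apply_phase(blurred, -cand))
        if E_cand < E:
            a_hat, E = cand, E_cand
            break
        step /= 2.0

print(f"estimated phase coefficient: {a_hat:.2f} (true: {a_true})")
```

Each iteration here costs two FFT-domain corrections plus metric evaluations, and a realistic phase error has many more degrees of freedom than one coefficient; this per-image iterative cost is what the letter's single-pass, non-iterative network avoids.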