Paper Title
Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method
Paper Authors
Paper Abstract
As the quality of optical sensors improves, there is a need for processing large-scale images. In particular, the ability of devices to capture ultra-high definition (UHD) images and video places new demands on the image processing pipeline. In this paper, we consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution. We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms. As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method. The core components of LLFormer are the axis-based multi-head self-attention and the cross-layer attention fusion block, which significantly reduce computational complexity, scaling linearly rather than quadratically with image resolution. Extensive experiments on the new dataset and existing public datasets show that LLFormer outperforms state-of-the-art methods. We also show that employing existing LLIE methods trained on our benchmark as a pre-processing step significantly improves the performance of downstream tasks, e.g., face detection in low-light conditions. The source code and pre-trained models are available at https://github.com/TaoWangzj/LLFormer.
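The abstract's complexity claim rests on restricting self-attention to one spatial axis at a time: attending along the height axis and then the width axis keeps each attention matrix at size H×H or W×W instead of (HW)×(HW), which matters for UHD inputs. The following is a minimal NumPy sketch of that general idea, not the authors' implementation; the function names, identity Q/K/V projections, and single-head form are simplifications for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(x, axis):
    # x: (H, W, C) feature map; attend along a single spatial axis.
    # Moving that axis into the "sequence" position keeps the attention
    # matrix at H*H or W*W instead of (H*W)^2 for full 2-D attention.
    x = np.moveaxis(x, axis, -2)                 # (..., L, C), L = H or W
    q, k, v = x, x, x                            # identity projections (sketch only)
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(x.shape[-1])
    out = softmax(scores, axis=-1) @ v           # (..., L, C)
    return np.moveaxis(out, -2, axis)            # restore (H, W, C)

def axis_based_self_attention(x):
    # Height-axis attention followed by width-axis attention, so every
    # pixel can still influence every other pixel after the two passes.
    return axis_attention(axis_attention(x, axis=0), axis=1)

feat = np.random.rand(8, 8, 4)                   # toy (H, W, C) feature map
out = axis_based_self_attention(feat)
print(out.shape)                                 # (8, 8, 4)
```

In a real transformer block, `q`, `k`, and `v` would come from learned linear projections and multiple heads; the sketch only demonstrates why per-axis attention cost grows linearly with the number of pixels along the other axis.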