Paper Title

Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction

Paper Authors

Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, Luc Van Gool

Paper Abstract

Many algorithms have been developed to solve the inverse problem of coded aperture snapshot spectral imaging (CASSI), i.e., recovering the 3D hyperspectral images (HSIs) from a 2D compressive measurement. In recent years, learning-based methods have demonstrated promising performance and dominated the mainstream research direction. However, existing CNN-based methods show limitations in capturing long-range dependencies and non-local self-similarity. Previous Transformer-based methods densely sample tokens, some of which are uninformative, and calculate the multi-head self-attention (MSA) between some tokens that are unrelated in content. This does not fit the spatially sparse nature of HSI signals and limits the model scalability. In this paper, we propose a novel Transformer-based method, coarse-to-fine sparse Transformer (CST), firstly embedding HSI sparsity into deep learning for HSI reconstruction. In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selecting. Then the selected patches are fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing. Comprehensive experiments show that our CST significantly outperforms state-of-the-art methods while requiring cheaper computational costs. The code and models will be released at https://github.com/caiyuanhao1998/MST
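The two-stage idea in the abstract — first screen out uninformative patches coarsely, then compute self-attention only among content-related tokens — can be sketched in miniature. The sketch below is illustrative only: the function names, the mean-activation scoring rule, and the sign-pattern hash are hypothetical stand-ins, since CST's SASM uses a *learned* sparsity mask for patch selection and SAH-MSA clusters pixels with a *learned* spectra-aggregation hash, neither of which is reproduced here.

```python
import math
from typing import List

def patch_scores(feat: List[List[float]], patch: int) -> List[float]:
    """Coarse stage: score each non-overlapping group of `patch` rows by
    mean absolute activation (a crude stand-in for SASM's learned
    informativeness score)."""
    return [
        sum(abs(v) for row in feat[i:i + patch] for v in row)
        / (patch * len(feat[0]))
        for i in range(0, len(feat), patch)
    ]

def select_patches(scores: List[float], keep: int) -> List[int]:
    """Keep the indices of the `keep` highest-scoring patches."""
    return sorted(sorted(range(len(scores)), key=lambda i: -scores[i])[:keep])

def bucket(vec: List[float], n_buckets: int) -> int:
    """Fine stage: hash a token to a bucket by the sign pattern of its
    first two features (a stand-in for learned content-based hashing)."""
    return ((vec[0] > 0) * 2 + (vec[1] > 0)) % n_buckets

def sparse_attention(tokens: List[List[float]],
                     n_buckets: int = 4) -> List[List[float]]:
    """Self-attention computed only among tokens that share a hash bucket,
    so unrelated tokens never attend to each other."""
    out = []
    for q in tokens:
        peers = [k for k in tokens
                 if bucket(k, n_buckets) == bucket(q, n_buckets)]
        logits = [sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q))
                  for k in peers]
        m = max(logits)
        w = [math.exp(x - m) for x in logits]
        s = sum(w)
        out.append([sum(wi * k[d] for wi, k in zip(w, peers)) / s
                    for d in range(len(q))])
    return out
```

In this toy pipeline, a mostly-zero feature map would have its flat patches discarded before any attention is computed, which is the source of the claimed savings: attention cost grows with the number of *selected* tokens rather than all tokens.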
