Paper Title

Deep Hyperspectral Unmixing using Transformer Network

Authors

Ghosh, Preetam, Roy, Swalpa Kumar, Koirala, Bikram, Rasti, Behnood, Scheunders, Paul

Abstract

Currently, this paper is under review at IEEE. Transformers have intrigued the vision research community with their state-of-the-art performance in natural language processing. With their superior performance, transformers have found their way into the field of hyperspectral image classification and achieved promising results. In this article, we harness the power of transformers to conquer the task of hyperspectral unmixing and propose a novel deep unmixing model with transformers. We aim to utilize the ability of transformers to better capture global feature dependencies in order to enhance the quality of the endmember spectra and the abundance maps. The proposed model is a combination of a convolutional autoencoder and a transformer. The hyperspectral data are encoded by the convolutional encoder. The transformer captures long-range dependencies between the representations derived from the encoder. The data are reconstructed using a convolutional decoder. We applied the proposed unmixing model to three widely used unmixing datasets, i.e., Samson, Apex, and Washington DC Mall, and compared it with the state-of-the-art in terms of root mean squared error and spectral angle distance. The source code for the proposed model will be made publicly available at \url{https://github.com/preetam22n/DeepTrans-HSU}.
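The encoder–transformer–decoder pipeline described in the abstract can be illustrated with a minimal forward-pass sketch. This is not the authors' implementation (see the linked repository for that); it is a simplified NumPy illustration in which the convolutional encoder is reduced to a per-pixel linear map, a single self-attention head models long-range dependencies across pixels, a softmax head produces abundances that are non-negative and sum to one, and the decoder reconstructs each pixel as a linear mixture of endmember spectra. All sizes (`N`, `P`, `K`, `d`) and weight matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N pixels, P spectral bands, K endmembers, d latent dims.
N, P, K, d = 16, 32, 3, 8

X = rng.random((N, P))                      # hyperspectral pixels (N, P)

# --- Encoder: the paper's convolutional encoder, reduced here to a
# --- per-pixel linear projection into a d-dimensional latent space.
W_enc = rng.normal(size=(P, d)) / np.sqrt(P)
Z = X @ W_enc                               # latent codes (N, d)

# --- Transformer block: single-head self-attention over the N pixels,
# --- capturing long-range dependencies between encoder representations.
Wq = rng.normal(size=(d, d)) / np.sqrt(d)
Wk = rng.normal(size=(d, d)) / np.sqrt(d)
Wv = rng.normal(size=(d, d)) / np.sqrt(d)
Q, Kmat, V = Z @ Wq, Z @ Wk, Z @ Wv
scores = Q @ Kmat.T / np.sqrt(d)            # (N, N) pairwise attention logits
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)           # row-wise softmax
Z = Z + A @ V                               # attention output + residual

# --- Abundance head: softmax enforces non-negativity and sum-to-one.
W_ab = rng.normal(size=(d, K)) / np.sqrt(d)
logits = Z @ W_ab
abund = np.exp(logits - logits.max(axis=1, keepdims=True))
abund /= abund.sum(axis=1, keepdims=True)   # abundance maps (N, K)

# --- Decoder: with a linear mixing model, the decoder weights play the
# --- role of the endmember spectra.
E = rng.random((K, P))                      # endmember spectra (K, P)
X_hat = abund @ E                           # reconstruction (N, P)

print(X_hat.shape)                              # (16, 32)
print(bool(np.allclose(abund.sum(axis=1), 1)))  # True
```

In the actual model the encoder and decoder are convolutional and the weights are learned by minimizing a reconstruction loss; this sketch only shows how the three stages compose and why the abundance constraints hold by construction.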
