Paper Title
Unsupervised Image Fusion Method based on Feature Mutual Mapping
Paper Authors
Paper Abstract
Deep learning-based image fusion approaches have attracted wide attention in recent years, achieving promising performance in terms of visual perception. However, the fusion module in current deep learning-based methods suffers from two limitations, i.e., a manually designed fusion function and input-independent network learning. In this paper, we propose an unsupervised adaptive image fusion method to address these issues. We propose a feature mutual mapping fusion module and a dual-branch multi-scale autoencoder. More specifically, we construct a global map to measure the connections between pixels of the input source images, and the resulting mapping relationship guides the image fusion. Besides, we design a dual-branch multi-scale network with sampling transformations to extract discriminative image features. We further enrich feature representations at different scales through feature aggregation in the decoding process. Finally, we propose a modified loss function that trains the network with efficient convergence. Although trained only on infrared and visible image datasets, our method also generalizes well to multi-focus and medical image fusion, achieving superior performance in both visual perception and objective evaluation. Experiments show that our method surpasses other state-of-the-art methods on a variety of image fusion tasks, demonstrating its effectiveness and versatility.
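To make the fusion idea concrete, the following is a minimal PyTorch sketch of a "feature mutual mapping" style fusion step: a global affinity map between the pixels of two source feature maps is used to project each source onto the other's pixel grid before fusing. The function name, tensor shapes, softmax weighting, and averaging rule are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a mutual-mapping fusion step (assumed design, not the
# authors' code): a global pixel-to-pixel affinity map between two feature
# maps is used to map each source onto the other, then the results are fused.
import torch
import torch.nn.functional as F


def mutual_mapping_fuse(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Fuse two feature maps of shape (B, C, H, W) via a global affinity map."""
    b, c, h, w = feat_a.shape
    fa = feat_a.flatten(2)                                  # (B, C, HW)
    fb = feat_b.flatten(2)                                  # (B, C, HW)
    # Global map: similarity between every pixel of A and every pixel of B.
    affinity = torch.bmm(fa.transpose(1, 2), fb) / (c ** 0.5)   # (B, HW_a, HW_b)
    # Map B's features onto A's pixel grid, and A's features onto B's grid.
    b_to_a = torch.bmm(fb, F.softmax(affinity, dim=-1).transpose(1, 2))  # (B, C, HW_a)
    a_to_b = torch.bmm(fa, F.softmax(affinity, dim=1))                   # (B, C, HW_b)
    # Simple average of the original and mutually mapped features (illustrative choice).
    fused = 0.25 * (fa + fb + b_to_a + a_to_b)
    return fused.view(b, c, h, w)


if __name__ == "__main__":
    ir_feat = torch.randn(1, 64, 32, 32)    # e.g. infrared branch features
    vis_feat = torch.randn(1, 64, 32, 32)   # e.g. visible branch features
    print(mutual_mapping_fuse(ir_feat, vis_feat).shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the affinity map plays the role of the "global map" mentioned in the abstract: it is computed from the inputs themselves, so the fusion weights adapt to each source pair rather than being fixed by a hand-designed rule.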