Abstract

Infrared and visible image fusion aims to obtain a synthetic image that can simultaneously exhibit salient objects and provide abundant texture details. However, existing deep learning-based methods generally depend on convolutional operations, which have good local feature extraction ability but whose restricted receptive field limits their capability in modeling long-range dependencies. To overcome this dilemma, we propose an infrared and visible image fusion method based on Transformer and cross correlation, named TCCFusion. Specifically, we design a local feature extraction branch (LFEB) to preserve local complementary information, in which a densely connected network is introduced to reuse information that may otherwise be lost during convolution. To avoid the limitation of the receptive field and to fully extract globally significant information, a global feature extraction branch (GFEB) consisting of three Transformer blocks is devised for long-range relationship construction. In addition, LFEB and GFEB are arranged in parallel to retain local and global useful information more effectively. Furthermore, we design a cross correlation loss to train the proposed fusion model in an unsupervised manner, so that the fusion result preserves adequate thermal radiation information from the infrared image and ample texture details from the visible image. Extensive experiments on two mainstream datasets illustrate that TCCFusion outperforms state-of-the-art algorithms in both visual quality and quantitative assessments. Ablation studies on the network architecture and objective function demonstrate the effectiveness of the proposed method.
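The abstract names the cross correlation loss but does not give its formula. Below is a minimal sketch, assuming a Pearson-style correlation maximized between the fused output and each source image; the function names, equal weighting of the two terms, and normalization details are illustrative assumptions, not the paper's exact definition.

```python
import torch

def cross_correlation(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pearson-style cross correlation between two image batches.

    x, y: tensors of shape (B, C, H, W); returns a scalar in [-1, 1].
    """
    # Zero-center each image over its channel and spatial dimensions.
    x = x - x.mean(dim=(1, 2, 3), keepdim=True)
    y = y - y.mean(dim=(1, 2, 3), keepdim=True)
    num = (x * y).sum(dim=(1, 2, 3))
    den = torch.sqrt((x ** 2).sum(dim=(1, 2, 3)) * (y ** 2).sum(dim=(1, 2, 3)) + eps)
    return (num / den).mean()

def cc_loss(fused: torch.Tensor, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
    # Encourage the fused image to correlate with both sources: thermal
    # radiation targets from the infrared input and texture details from
    # the visible input. Equal weighting of the terms is an assumption.
    return (1.0 - cross_correlation(fused, ir)) + (1.0 - cross_correlation(fused, vis))
```

Because both terms depend only on the two source images, such a loss supports unsupervised training, consistent with the abstract's claim that no fusion ground truth is required.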

  • Affiliation
    Wuhan University