• Infrared Technology
  • Vol. 45, Issue 7, 721 (2023)
Bicao LI1,2, Jiaxi LU1, Zhoufeng LIU1, Chunlei LI1, and Jie ZHANG1
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
LI Bicao, LU Jiaxi, LIU Zhoufeng, LI Chunlei, ZHANG Jie. Infrared and Visible Light Image Fusion Method Based on Swin Transformer and Hybrid Feature Aggregation[J]. Infrared Technology, 2023, 45(7): 721
    References

    [1] MA J, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization [J]. Information Fusion, 2016, 31: 100-109.

    [2] BAVIRISETTI D P, DHULI R. Two-scale image fusion of visible and infrared images using saliency detection [J]. Infrared Physics & Technology, 2016, 76: 52-64.

    [3] BAVIRISETTI D P, DHULI R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform [J]. IEEE Sensors Journal, 2015, 16(1): 203-209.

    [4] LIU Y, CHEN X, WARD R K, et al. Image fusion with convolutional sparse representation [J]. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886.

    [5] ZHOU Z, WANG B, LI S, et al. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters [J]. Information Fusion, 2016, 30: 15-26.

    [6] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C/OL]//Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, https://arxiv.org/abs/1712.07384.

    [7] ZHANG Y, LIU Y, SUN P, et al. IFCNN: A general image fusion framework based on convolutional neural network [J]. Information Fusion, 2020, 54: 99-118.

    [8] XU H, MA J, JIANG J, et al. U2Fusion: a unified unsupervised image fusion network [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 502-518.

    [9] LI H, WU X J, KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images [J]. Information Fusion, 2021, 73: 72-86.

    [10] MA J, YU W, LIANG P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion [J]. Information Fusion, 2019, 48: 11-26.

    [11] FU Y, WU X J, DURRANI T. Image fusion based on generative adversarial network consistent with perception [J]. Information Fusion, 2021, 72: 110-125.

    [12] SONG A, DUAN H, PEI H, et al. Triple-discriminator generative adversarial network for infrared and visible image fusion [J]. Neurocomputing, 2022, 483: 183-194.

    [13] XUE W, HUAN XIN C, SHENG YI S, et al. MSFSA-GAN: multi-scale fusion self-attention generative adversarial network for single image deraining [J]. IEEE Access, 2022, 10: 34442-34448.

    [14] ZHANG H, YUAN J, TIAN X, et al. GAN-FM: infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators [J]. IEEE Transactions on Computational Imaging, 2021, 7: 1134-1147.

    [15] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.

    [16] LI B, LIU Z, GAO S, et al. CSpA-DN: channel and spatial attention dense network for fusing PET and MRI images[C]//Proceedings of the 25th International Conference on Pattern Recognition, 2021, DOI: 10.1109/ICPR48806.2021.9412543.

    [17] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C/OL]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, https://arxiv.org/abs/1608.06993.

    [18] LI H, WU X. DenseFuse: a fusion approach to infrared and visible images [J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.

    [19] ZHOU Z, RAHMAN SIDDIQUEE M M, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation[J/OL]. Computer Vision and Pattern Recognition, 2018, https://arxiv.org/abs/1807.10165.

    [20] LI H, WU X J, DURRANI T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models [J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.

    [21] TOET A. TNO Image Fusion Dataset[EB/OL]. 2014, https://doi.org/10.6084/m9.figshare.1008029.v2.

    [22] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[J/OL]. European Conference on Computer Vision, 2014, https://arxiv.org/abs/1405.0312.

    [23] LI H, WU X, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]// Proceedings of the 24th International Conference on Pattern Recognition (ICPR), 2018: 2705-2710, DOI: 10.1109/ICPR.2018.8546006.

    [24] XU H, ZHANG H, MA J. Classification saliency-based rule for visible and infrared image fusion [J]. IEEE Transactions on Computational Imaging, 2021, 7: 824-836.

    [25] FU Y, WU X J. A dual-branch network for infrared and visible image fusion [J/OL]. International Conference on Pattern Recognition (ICPR), 2021, https://arxiv.org/abs/2101.09643.

    [26] XYDEAS C S, PETROVIC V. Objective image fusion performance measure [J]. Electronics Letters, 2000, 36(4): 308-309.

    [27] HAN Y, CAI Y, CAO Y, et al. A new image fusion performance metric based on visual information fidelity [J]. Information Fusion, 2013, 14(2): 127-135.

    [28] CUI G, FENG H, XU Z, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition [J]. Optics Communications, 2015, 341: 199-209.

    [29] VAN AARDT J. Assessment of image fusion procedures using entropy, image quality, and multispectral classification [J]. Journal of Applied Remote Sensing, 2008, 2(1): 1-28.

    [30] HAGHIGHAT M, RAZIAN M A. Fast-FMI: non-reference image fusion metric[C]//Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), 2014: 1-3, DOI: 10.1109/ICAICT.2014.7036000.

    [31] ZHAO J, LAGANIERE R, LIU Z. Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement [J]. International Journal of Innovative Computing, Information and Control, 2006, 3(6): 1433-1447.

    [32] TANG L, YUAN J, ZHANG H, et al. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83-84: 79-92.
