• Laser & Optoelectronics Progress
  • Vol. 61, Issue 20, 2011011 (2024)
Yihang Cheng1,2,*, Zhengyu Qiao1,3, Yong Huang1,2,3, and Qun Hao1,2,3
Author Affiliations
  • 1School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  • 2Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing 314003, Zhejiang, China
  • 3National Key Laboratory on Near-Surface Detection, Beijing 100087, China
    DOI: 10.3788/LOP241637
    Yihang Cheng, Zhengyu Qiao, Yong Huang, Qun Hao. Luminance-Adaptive Infrared and Visible Image Fusion Based on Retinex Theory (Invited)[J]. Laser & Optoelectronics Progress, 2024, 61(20): 2011011

    Abstract

    To address the limited adaptability and poor visual quality of existing infrared and visible image fusion methods under varying luminance conditions, this paper proposes a fusion method based on Retinex theory. First, an encoder maps the visible image into a higher-dimensional feature space, where it is decomposed into reflectance and illumination feature maps in accordance with Retinex theory. Second, the reflectance features are combined with the infrared features obtained from the encoder, which are enhanced using a structure tensor representation. In addition, convolution kernels of varying sizes extract multiscale features, enriching the image's hierarchical information. Finally, the decoder reduces the feature map's dimensionality, and a learnable gamma-transform layer is introduced to improve the contrast of the fused image. The model's performance is validated with multiple evaluation metrics on the public LLVIP dataset. The experimental results demonstrate that the proposed method adaptively fuses visible and infrared images under different luminance environments, achieving superior fusion results in both visual perception and quantitative assessment.
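    The pipeline outlined in the abstract can be illustrated with a much-simplified classical Retinex sketch — not the authors' learned network. Here illumination is estimated by local smoothing, the Retinex decomposition I = R · L yields reflectance, infrared detail is merged into the reflectance by a simple maximum, and a fixed gamma curve stands in for the paper's learnable gamma-transform layer. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def box_blur_1d(a, k):
    """Length-k moving average of a 1-D array via the cumulative-sum trick."""
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    c = np.cumsum(np.insert(ap, 0, 0.0))
    return (c[k:] - c[:-k]) / k

def box_blur(img, k=15):
    """Separable 2-D box filter; a crude stand-in for an illumination estimator."""
    out = np.apply_along_axis(box_blur_1d, 0, img, k)
    return np.apply_along_axis(box_blur_1d, 1, out, k)

def retinex_gamma_fuse(vis, ir, gamma=0.6, eps=1e-6):
    """Toy Retinex-based fusion of a visible image `vis` and infrared image `ir`
    (both float arrays in [0, 1]):
      1. estimate smooth illumination L from the visible image,
      2. recover reflectance R = vis / L (Retinex: I = R * L),
      3. fuse IR detail into the reflectance (elementwise max here),
      4. brighten illumination with a gamma curve (gamma < 1 lifts dark regions),
      5. recompose the fused image.
    """
    L = np.clip(box_blur(vis), eps, 1.0)   # illumination estimate
    R = np.clip(vis / L, 0.0, None)        # reflectance
    R_fused = np.maximum(R, ir)            # simple max-fusion stand-in
    L_adj = L ** gamma                     # fixed gamma in place of a learned layer
    return np.clip(R_fused * L_adj, 0.0, 1.0)
```

    In the paper the decomposition, fusion, and gamma correction are all performed on learned feature maps inside an encoder–decoder; the sketch above only mirrors the data flow (decompose → fuse reflectance → adjust illumination → recompose) in image space.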