• Photonics Research
  • Vol. 13, Issue 4, 827 (2025)
Yitong Pan1,2,3,4, Zhenqi Niu1,2,3,4,5, Songlin Wan1,2,3,4, Xiaolin Li1,2,3,4, Zhen Cao1,2,3,4, Yuying Lu1,2,3,4, Jianda Shao1,2,3,4, and Chaoyang Wei1,2,3,4,*
Author Affiliations
  • 1Precision Optical Manufacturing and Testing Center, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  • 2Key Laboratory for High Power Laser Material of Chinese Academy of Sciences, Shanghai Institute of Optics and Fine Mechanics, Shanghai 201800, China
  • 3Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
  • 4China-Russia Belt and Road Joint Laboratory on Laser Science, Shanghai 201800, China
  • 5e-mail: niuzhenqi@siom.ac.cn
DOI: 10.1364/PRJ.541560
Yitong Pan, Zhenqi Niu, Songlin Wan, Xiaolin Li, Zhen Cao, Yuying Lu, Jianda Shao, Chaoyang Wei, "Spatial–spectral sparse deep learning combined with a freeform lens enables extreme depth-of-field hyperspectral imaging," Photonics Res. 13, 827 (2025)
Fig. 1. Proposed E-DoF HI system. The AED hyperspectral images can be recovered from the blurred image captured by the camera through subsequent processing by a deep learning neural network with SSA.
Fig. 2. Overview of the AED HI system. (a) Schematic of the proposed system. The system allows for high-fidelity imaging across a broad distance range from Z_min to Z_max. (b) Learning pipeline for the freeform-DOE. In the forward pass, the convolved images of the object and the PSFs modulated by the freeform-DOE are captured by the camera and reconstructed by the network. In the backward pass, the loss function of the recovery network guides the optimization of the freeform-DOE until a satisfactory image is obtained. (c) Basic framework of SSA. The image is segmented into nine distinct subregions within the 2D plane for processing, each represented by blocks of a unique color. The SSA method is then used to limit the difference between neighboring pixels in 2D space and along the spectral dimension to obtain an optimal output hyperspectral image. (d) Results of hyperspectral reconstruction (illustrated in RGB false color) and its images at different spectral channels.
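To make the forward pass and the SSA constraint in Fig. 2(b) and 2(c) concrete, the following is a minimal Python sketch. It is not the authors' implementation: the panchromatic-sensor assumption, the array shapes, and the L1 form of the neighboring-pixel penalty are our own assumptions used only for illustration.

```python
# Sketch of the wavelength-dependent blur model (forward pass) and a
# spatial-spectral sparsity penalty of the kind described in Fig. 2.
import numpy as np
from scipy.signal import fftconvolve

def forward_model(hsi_cube, psfs):
    """Simulate the blurred sensor measurement.

    hsi_cube : (H, W, C) hyperspectral object, one slice per spectral channel.
    psfs     : (h, w, C) wavelength-dependent PSFs of the freeform-DOE system.
    Returns a single (H, W) image, assuming the sensor integrates the
    per-channel blurred images over all spectral channels.
    """
    blurred = [
        fftconvolve(hsi_cube[..., c], psfs[..., c], mode="same")
        for c in range(hsi_cube.shape[-1])
    ]
    return np.sum(blurred, axis=0)

def spatial_spectral_sparsity(hsi_cube):
    """L1 penalty on differences between neighboring pixels in the two
    spatial dimensions and between adjacent spectral channels."""
    dx = np.abs(np.diff(hsi_cube, axis=1)).sum()  # horizontal neighbors
    dy = np.abs(np.diff(hsi_cube, axis=0)).sum()  # vertical neighbors
    ds = np.abs(np.diff(hsi_cube, axis=2)).sum()  # adjacent spectral channels
    return dx + dy + ds
```

In an end-to-end pipeline of the kind sketched in Fig. 2(b), a penalty like `spatial_spectral_sparsity` would be added to the reconstruction loss, and its gradient (together with the data-fidelity term) would drive the joint optimization of the freeform-DOE profile and the recovery network.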
    Fig. 3. PSF characteristics of proposed system. (a) PSFs of the system at different object distances. (b) Zoomed-in PSFs at different spectral channels.
Fig. 4. Comparison of simulation results of the three different HI systems considered in the study for the same object at different object distances d. The images are illustrated in RGB false color (Dataset 1, Ref. [67]).
Fig. 5. Experimental results for extended DoF. (a) Diagram of the experimental scenario. (b) Objects at 1.2–5.2 m from the camera are all in focus. (c) Experimental results for extended DoF at a close distance: objects at 0.5–0.7 m from the camera are also in focus (Dataset 1, Ref. [67]).
Fig. 6. (a) Visual comparison between the proposed system and the ground truth (GT). The recovered hyperspectral image and the GT are both illustrated in RGB false color. (b) Intensity and accuracy of chosen dots at different wavelengths.
Fig. 7. Experimental results of reconstruction of moving objects. The blocks are pushed down from a height by a pen, and their falling process is captured by the camera used in our system. The results at selected moments are shown in RGB false color; the full results can be found in Visualization 1.
Wavelength (nm)  420    430    440    450    460
SpA (%)          86.73  86.33  88.05  88.59  90.34
Wavelength (nm)  470    480    490    500    510
SpA (%)          91.25  93.11  95.24  96.81  94.93
Wavelength (nm)  520    530    540    550    560
SpA (%)          92.29  93.50  92.99  92.13  90.55
Wavelength (nm)  570    580    590    600    610
SpA (%)          90.53  90.85  92.33  95.06  95.79
Wavelength (nm)  620    630    640    650    660
SpA (%)          96.17  94.09  93.24  92.52  91.69
    Table 1. Average of SpA of Chosen Dots at Different Wavelengths
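For reference, the overall average implied by the per-wavelength SpA values in Table 1 can be checked with a short Python snippet. The values are copied verbatim from the table; the unweighted mean is our own summary statistic, not a figure reported by the authors.

```python
import numpy as np

# Per-wavelength SpA values (%) from Table 1, 420-660 nm in 10 nm steps.
spa = np.array([
    86.73, 86.33, 88.05, 88.59, 90.34,
    91.25, 93.11, 95.24, 96.81, 94.93,
    92.29, 93.50, 92.99, 92.13, 90.55,
    90.53, 90.85, 92.33, 95.06, 95.79,
    96.17, 94.09, 93.24, 92.52, 91.69,
])
print(f"overall mean SpA: {spa.mean():.2f}%")  # prints roughly 92.20%
```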
Application Scenario | Reference             | Dispersion      | Spectral Resolution (nm) | PSNR (dB) | System Complexity | DoF (m)
Depth detection      | Zhang et al. [63]     | Metalens        | 40                       | \         | Simple            | 0.16
RGB imaging          | Fontbonne et al. [65] | Phase mask      | \                        | \         | Complex           | 0.5 (0.4–0.9)
HI                   | Baek et al. [51]      | DOE             | 10                       | 29.31     | Ultra-simple      | 1.6 (0.4–2.0)
HI                   | Kou et al. [66]       | Spectral camera | 1                        | 30.56     | Ultra-complex     | 1.5
HI                   | Sahin et al. [64]     | DOE             | 10                       | 28.86     | Simple            | 1.6 (0.4–2.0)
HI                   | Ours                  | DOE             | 10                       | 34.85     | Ultra-simple      | 4.7 (0.5–5.2)
    Table 2. Quantitative Comparison of the State-of-the-Art Methods for E-DoF in Spectral Domain