Single-photon LiDAR is widely used in fields such as biology, geology, remote sensing, robotics, and navigation owing to its long-range capability and high imaging resolution. Combined with time-correlated single-photon counting (TCSPC), the system can achieve picosecond-level time resolution, enabling the reconstruction of high-resolution depth images. The system comprises a pulsed laser that emits periodic short pulses toward a target scene and a single-photon detector that counts the reflected photons. By scanning each pixel over an extended period, a photon count on the order of 10^3 can be accumulated for each pixel. This process effectively reduces background noise and detector dark counts and minimizes the range uncertainty caused by photon flight-time jitter, making it possible to achieve millimeter- to micrometer-level distance accuracy and high-resolution depth image reconstruction. However, traditional methods that rely on repeated measurements have long data acquisition times, limiting the applicability of single-photon LiDAR in dynamic-target remote sensing, autonomous driving, and non-line-of-sight imaging. When acquisition times are short and echo signals are very weak, only a few photons are detected, leading to a low signal-to-noise ratio (SNR). Reconstructing high-precision depth images from such minimal echo-photon data is a major challenge for existing single-photon counting LiDAR technology. To address this challenge, a novel depth image reconstruction algorithm is proposed that integrates a photon-counting LiDAR detection probability model with a backpropagation neural network. This approach enhances the accuracy of depth image reconstruction under varying SNR conditions and improves the performance of single-photon LiDAR in complex scenarios.
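The TCSPC acquisition described above can be illustrated with a toy model: signal photons cluster in time bins near the true round-trip time (broadened by timing jitter), while background and dark counts arrive uniformly across the histogram window, and the depth follows from the peak bin via time of flight. All parameters below (bin width, pulse count, jitter, background rate, target depth) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy TCSPC single-pixel acquisition model (all parameters illustrative).
rng = np.random.default_rng(0)

C = 3e8              # speed of light, m/s
BIN_W = 100e-12      # 100 ps time bins (picosecond-level resolution)
N_BINS = 1024
N_PULSES = 1000      # ~10^3 detections per pixel with a long dwell time

true_depth = 7.5     # metres (assumed target range)
true_bin = round(2 * true_depth / C / BIN_W)

# Signal photons: one detection per pulse, spread by timing jitter (~2 bins).
sig_bins = rng.normal(true_bin, 2.0, size=N_PULSES).round().astype(int)
# Background noise and dark counts: uniform over the histogram window.
bg_bins = rng.integers(0, N_BINS, size=N_PULSES // 5)

all_bins = np.clip(np.concatenate([sig_bins, bg_bins]), 0, N_BINS - 1)
hist = np.bincount(all_bins, minlength=N_BINS)

# Depth estimate from the histogram peak via time of flight.
est_bin = int(np.argmax(hist))
est_depth = est_bin * BIN_W * C / 2
print(f"estimated depth: {est_depth:.3f} m")
```

With many pulses, the signal peak dominates the uniform background, which is why long acquisitions yield millimeter-level accuracy; the low-SNR regime addressed by the paper corresponds to shrinking N_PULSES until the peak is barely distinguishable from noise.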
The proposed method improves depth image reconstruction from single-photon LiDAR data under low-SNR conditions and comprises three main steps. The first step is filtering based on photon-counting LiDAR detection theory: windowed processing, with adjustments for noise-cluster probabilities, enhances the signal by rejecting noise. In the second step, a backpropagation neural network fills in missing pixels, ensuring image continuity; unlike existing deep-learning approaches, this step requires no additional training datasets. In the final step, total-variation regularization refines the depth image, improving depth estimation precision and effectively managing the challenges posed by low SNRs. By combining these techniques, the proposed method significantly improves the depth estimation performance of single-photon LiDAR systems.
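The three steps above can be sketched on a toy scene. This is a minimal NumPy illustration, not the paper's implementation: the windowed filter and its noise threshold are simplified, a neighbour-median fill stands in for the backpropagation network of step 2, and step 3 uses a few smoothed-total-variation descent iterations with an assumed regularization weight.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 8x8 pixels, 64 time bins; depth encoded as the signal's peak bin.
H, W, B = 8, 8, 64
depth = np.full((H, W), 20)
depth[2:6, 2:6] = 40                       # a raised square target
hists = rng.poisson(0.2, size=(H, W, B))   # uniform background counts
for i in range(H):
    for j in range(W):
        if rng.random() > 0.2:             # ~20% of pixels receive no signal
            hists[i, j, depth[i, j]] += 8

# Step 1: windowed filtering -- keep the window with the most photons,
# rejecting pixels whose best window looks like a noise cluster (threshold assumed).
win_kernel = np.ones(3)
est = np.full((H, W), np.nan)
for i in range(H):
    for j in range(W):
        counts = np.convolve(hists[i, j], win_kernel, mode="same")
        k = int(np.argmax(counts))
        if counts[k] > 5:
            est[i, j] = k

# Step 2: fill missing pixels for continuity (stand-in for the BP network).
for _ in range(4):
    for i in range(H):
        for j in range(W):
            if np.isnan(est[i, j]):
                nb = est[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                nb = nb[~np.isnan(nb)]
                if nb.size:
                    est[i, j] = np.median(nb)
est[np.isnan(est)] = np.nanmedian(est)     # fallback for isolated gaps

# Step 3: total-variation-style refinement -- gradient steps on
# 0.5*||u - f||^2 + lam*TV(u), with a smoothed TV gradient (lam assumed).
u, f, lam = est.copy(), est.copy(), 0.1
for _ in range(20):
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + 1e-8)
    # divergence of the normalized gradient field (wraparound edges, for brevity)
    div = (gx / mag - np.roll(gx / mag, 1, axis=1)
           + gy / mag - np.roll(gy / mag, 1, axis=0))
    u = u - 0.2 * ((u - f) - lam * div)

print("mean abs depth error (bins):", np.mean(np.abs(u - depth)))
```

The TV step smooths flat regions while largely preserving the step edge of the square, which mirrors the role this stage plays in the full method.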
Simulations based on the Middlebury dataset were performed to emulate single-photon avalanche diode (SPAD) measurements under different scenes and lighting conditions. The performance of the proposed algorithm was compared with that of the Shin and Rapp algorithms by measuring the average depth-image reconstruction error across nine typical noise levels and four test scenes.
The results indicate that under typical noisy conditions, particularly in environments with very low SNRs, the proposed algorithm significantly outperforms the Shin and Rapp algorithms. A comparison of the average reconstruction errors of the three algorithms for different scenes showed that compared to the Shin and Rapp algorithms, the proposed algorithm achieved improvements of 38.67 times and 56%, respectively, in the Art scene; 62.07 and 1.05 times, respectively, in the Bowling scene; 52.67 and 1.78 times, respectively, in the Laundry scene; and 14.15 times and 42%, respectively, in the Reindeer scene.
The Shin algorithm significantly reduced the estimation error when the SNR exceeded 0.05. This improvement is attributed to the algorithm's binomial estimation approach, which efficiently extracts signals and suppresses noise as the SNR increases. The Rapp method performs well at high SNRs but shows a notable decline in performance when the SNR drops to 0.01. Because the Rapp method relies on photon data from neighboring pixels, it can effectively distinguish signals under high-SNR conditions but is prone to boundary errors in low-SNR scenarios. For the proposed method, although the error increases as the SNR drops below 0.05, it grows more slowly than for the Shin and Rapp algorithms, indicating greater robustness under extremely low-SNR conditions.
From the perspective of computational efficiency, the average running time of the proposed algorithm is slightly lower than that of the Rapp algorithm but higher than that of the Shin algorithm. Although the Shin algorithm has the shortest run time, it suffers from high reconstruction errors. Thus, the proposed algorithm achieves a better balance of performance and efficiency, making it more suitable for environments with low SNRs.
Traditional methods work well for reconstructing depth images at high SNRs. In low-SNR environments, however, background noise often buries the details of target objects, making them hard to distinguish from their surroundings. To tackle this problem, a new depth image reconstruction algorithm that combines photon-counting LiDAR detection models with deep-learning techniques is proposed. The algorithm is specifically designed for low-SNR conditions, smoothing depth images while preserving edge details. Comparative experiments showed that it greatly reduces reconstruction errors relative to existing methods, especially under very low-SNR conditions.
The proposed method is expected to broaden the application of photon imaging in challenging scenarios such as non-line-of-sight and ghost imaging. Its modules can also be refined independently within photon-counting LiDAR image reconstruction pipelines, offering potential for future upgrades and customization.