
Blur and Distortion in Turbulence-degraded Image
Shangwei LI, Yichong REN, Xinmiao LI, Haiping MEI... and Ruizhong RAO
Incoherent light imaging has been widely used in astronomical observation, industrial production, medical treatment and other fields, and the study of its theory and applications is of great value. However, incoherent light imaging in the atmospheric environment is inevitably affected by atmospheric turbulence, which blurs and distorts the image and degrades its quality. Blur arises from the high-order aberrations caused by turbulence, which prevent the light from concentrating effectively on one point and thus smooth the image, while distortion is caused by the nonlinear change of the wavefront phase, which displaces image pixels. The degradation of an image in turbulence can be expressed as the superposition of the point spread functions corresponding to multiple point sources, so the blur and distortion of the point spread function also reflect the degree of image degradation. The blur and distortion caused by turbulence are important factors limiting the application of incoherent imaging, and studying the blur and distortion of the point spread function is therefore important for alleviating the degradation of turbulent images. Many current algorithms for turbulence image degradation are based on the isoplanatic assumption, although the anisoplanatism of turbulence cannot be ignored, and research on the blur and distortion of the point spread function under anisoplanatic conditions is lacking. In order to simulate the anisoplanatic effect, a turbulence imaging model is constructed based on the principle of ray propagation and phase screen superposition.
The wavefront phase corresponding to a target point source can be calculated, and the corresponding point spread function is then obtained by Fourier transform. The model can simulate light emitted by different point sources propagating through different turbulent paths, yielding the blur and distortion under anisoplanatic conditions. Since horizontal imaging is a relatively common scenario, it is chosen for verification, and the simulation model is validated by the area of the modulation transfer function and the isoplanatic angle. The results show that the simulated area of the modulation transfer function is consistent with the theoretical value, with only a small difference; increasing the number of statistical samples can further reduce the error. The simulated isoplanatic angle is also consistent with the theoretical value, although slightly larger. This is because the theoretical value is derived under the assumption of an infinitely large pupil, whereas the pupil diameter is finite in the simulation, so the simulated value is slightly larger than the theoretical one. This means that the model can simulate blur and distortion under anisoplanatic conditions well. In addition, we use the relationship between the average correlation coefficient and the isoplanatic angle to define the approximately invariant region of the image and give the trend of this region. The approximately invariant region of the image space makes the degree of variation of the point spread function easy to understand and can provide a reference for setting hyperparameters in image restoration algorithms. The distribution of turbulence can also affect the blur and distortion of the point spread function.
From the process of light propagation in turbulence, we speculate that the blur effect is mainly caused by turbulence on the lens side, while the distortion effect is mainly caused by turbulence on the object side. To verify this hypothesis, we designed two extreme turbulence scenarios: one in which turbulence is concentrated on the camera side, and the other in which turbulence is concentrated on the object side. The simulation results show that, under the same conditions, when turbulence is concentrated on the lens side the area of the modulation transfer function is smaller and the approximately invariant region of the image space is larger, while when turbulence is concentrated on the object side the situation is the opposite. This means that, for the same turbulence strength, turbulence closer to the lens produces more blur, and turbulence closer to the object produces more distortion. Moreover, even weak turbulence concentrated on the lens side blurs the image far more than strong turbulence concentrated on the object side, while for distortion the opposite holds. This proves that the blur component of image degradation is mainly contributed by turbulence near the lens, and the distortion component mainly by turbulence near the object. Since light propagates from the object to the lens, this explains why applying tilt before blur is more reasonable than applying blur before tilt in turbulent image degradation models. In ground-to-air imaging, turbulence is concentrated on the lens side, so the blurring effect far exceeds the distortion effect and image restoration algorithms based on spatial invariance are very effective. In air-to-ground imaging, however, turbulence is concentrated on the object side, so blur has less impact on imaging while distortion has more.
In this case, the image restoration algorithm should consider how to reduce the impact of distortion. This also shows that in turbulent image restoration, a suitable restoration algorithm should be selected according to the degradation scenario of the image to achieve a better restoration effect.
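The simulation step described above, obtaining the point spread function as the Fourier transform of the wavefront phase over the pupil, can be sketched as follows. This is a minimal illustration rather than the paper's model: the grid size, aperture radius, and tilt coefficient are arbitrary choices.

```python
import numpy as np

def psf_from_phase(phase, pupil):
    """Point spread function as the squared magnitude of the Fourier
    transform of the complex pupil field (Fraunhofer approximation)."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()  # normalize to unit total energy

# Circular pupil on a 64 x 64 grid (sizes are arbitrary)
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 <= (n // 4)**2).astype(float)

# A pure tilt aberration shifts the PSF centroid (distortion), whereas
# higher-order phase terms would broaden it (blur).
psf_flat = psf_from_phase(np.zeros((n, n)), pupil)
psf_tilt = psf_from_phase(0.3 * x, pupil)
```

Summing shifted and blurred copies of such PSFs over many point sources, each with its own turbulent path, gives the anisoplanatic image-degradation picture described in the abstract.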
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0801001 (2024)
Bathymetric Inversion Method for Active-passive Remote Sensing Fused Radiative Transfer Information Convolutional Neural Networks
Congshuang XIE, Peng CHEN, and Delu PAN
The nearshore area is of paramount importance in the ecosystem. Accurate bathymetric maps, which depict underwater topography, play a key role in supporting activities such as coastal research, environmental management and marine spatial planning. The new-generation Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2) is equipped with the Advanced Topographic Laser Altimeter System (ATLAS), which delivers considerable benefits in providing accurate bathymetric data across extensive geographical regions. ATLAS data can be combined with passive optical remote sensing imagery to realize efficient bathymetry estimation using a machine learning approach. Therefore, this study proposes a Convolutional Neural Network (CNN) model incorporating physical radiative transfer information, which combines optical radiative transfer information with a CNN. The physical radiative transfer data highlight the spectral characteristics of shallow water areas, while the CNN structure accounts well for the information surrounding each water depth measurement pixel. An adaptive elliptical density segmentation algorithm is applied to generate training and test samples based on the spectral reflectance characteristics and radiative transfer properties of Sentinel-2, using the reference bathymetry points of ICESat-2 as a priori training data. A convolutional neural network model is then applied to establish a link with the reference bathymetric points of ICESat-2.
Finally, a complete bathymetric map is generated by feeding the spectral feature data of the entire Sentinel-2 image into the trained convolutional neural network model. The obtained results are analyzed to validate the methodology, and the effects of the accuracy of ICESat-2-extracted bathymetry points, the inversion model and atmospheric correction on the performance of satellite-based remote sensing bathymetry inversion are comprehensively explored. Continuously updated digital elevation model field data for the island of St. Croix are used to verify the accuracy and robustness of the water depth maps generated by the physical radiative transfer CNN model. The experimental results show that the adaptive elliptical density segmentation algorithm tracks water depth information better than the standard fixed-parameter density clustering algorithm: it eliminates noise points well and reduces the impact of noisy bathymetric points on the subsequent bathymetric inversion. The CNN model containing physical radiative transfer information exhibits higher accuracy, reducing the RMSE in St. Thomas by 10% compared to the model without such information. The accuracy of the water depth inversion results of the physical radiative transfer CNN model exceeds 95%, with an error of less than 1.6 m in all three study areas. In addition, the RMSE of the bathymetric results obtained using data that include the diffuse attenuation factor is 1.59 m, with an accuracy of 97%, better than the results trained without the a priori diffuse attenuation factor; including this optical diffuse attenuation factor in the inversion process is thus favorable for shallow water depth inversion.
The ICESat-2 reference bathymetry data are also used as field data to validate the estimation results; the RMSE of the error evaluation is 1.78 m and the accuracy reaches 95%, which shows that the method remains valid and stable across different data sources. These results demonstrate the potential of the convolutional neural network approach based on physical radiative transfer information for obtaining high-precision bathymetric information, which is expected to play an active role in the large-scale application of spaceborne LiDAR.
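The photon-denoising step, keeping only points whose elliptical neighborhood is dense enough, can be sketched as below. This shows only the fixed-parameter core of a density-based elliptical filter, not the paper's adaptive parameter selection; the semi-axes, threshold, and sample points are illustrative.

```python
import numpy as np

def ellipse_neighbors(points, i, a, b):
    """Count neighbors of photon i inside an elliptical window with
    semi-axes a (along-track) and b (depth), excluding the photon itself."""
    dx = points[:, 0] - points[i, 0]
    dz = points[:, 1] - points[i, 1]
    inside = (dx / a)**2 + (dz / b)**2 <= 1.0
    return int(inside.sum()) - 1

def density_filter(points, a, b, min_neighbors):
    """Keep photons whose elliptical neighborhood is dense enough."""
    keep = [ellipse_neighbors(points, i, a, b) >= min_neighbors
            for i in range(len(points))]
    return points[np.asarray(keep)]

# Synthetic photon cloud: four seafloor returns plus one noise photon
pts = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2], [1.5, 0.1], [10.0, 5.0]])
signal = density_filter(pts, a=2.0, b=1.0, min_neighbors=2)
```

An elongated window matches the geometry of ICESat-2 photon clouds, where the along-track spread of true bathymetric returns is much larger than their depth spread.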
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0801002 (2024)
Subspot Background Noise Removal Method Based on Bezier Surface Fitting
Han GUO, Wang ZHAO, Shuai WANG, Ping YANG... and Chensi ZHAO
Due to the influence of the sky background, the background light in the sub-spot image of the Shack-Hartmann wavefront sensor is enhanced, so that the centroid position of each sub-spot cannot be accurately extracted and the detection accuracy of the wavefront sensor is reduced. Owing to system assembly errors and lens diffraction limitations, the system produces significant vignetting, resulting in an uneven distribution of skylight background noise in the focal plane of the sub-aperture. Once the background light distribution is non-uniform, conventional denoising methods cannot accurately remove the noise. For an adaptive optics system under a non-uniform background, a noise removal method that can adapt to various background light distributions is therefore needed; with such a method, the background noise can be accurately removed and the accuracy of spot centroid extraction and wavefront recovery can be improved. This paper proposes fitting the background noise with Bezier surfaces and subtracting the fitting result as the true noise value to separate the noise. When working during the day or in an environment with strong background light interference, the wavefront detection error of the adaptive optics system increases and its correction ability decreases, limiting its efficiency. Under a strong skylight background, the interference to the detector is mainly additive interference caused by strong background light.
According to the law of radiative transfer, the skylight background intensity is related to the off-axis angle of the solar beam relative to the optical axis and to the scattering angle after passing through the optical elements. After sunlight enters the optical system, it passes through multiple optical path folds and is affected by system assembly errors and lens diffraction effects, resulting in an uneven distribution of background light on the CCD detector. A Bezier surface can fit this unevenly distributed background noise well. The method avoids the selection of fitting basis functions, and selecting control points at certain intervals effectively reduces the weight of the target point information in the surface fit. Taking the diffraction limit of the light spot as the interval, control points are selected in two mutually perpendicular directions in the image to construct a Bezier surface. The obtained surface is used as the true value of the skylight background intensity distribution and is subtracted from the spot array image detected by the CCD to separate the skylight background; centroid extraction and wavefront recovery are then carried out. This method effectively removes strong background light interference from the spot array image and adapts to unevenly distributed background intensity changes. It has the advantages of simple implementation, strong adaptability and good performance. By comparing the centroid positioning accuracy and the RMS of the wavefront recovery residual for multiple sub-spot images under different background light environments, it is confirmed that the proposed method can effectively eliminate background interference in the spot array image and improve the accuracy of wavefront restoration. Experiments further confirm that the centroid calculation accuracy of this method is greatly improved compared with other methods.
The RMS of the wavefront restoration residual obtained by the proposed method is improved by 33% compared with the traditional method. Simulations and experiments prove that the method can effectively separate the background light intensity from the target signal. It avoids the selection of fitting basis functions, is more sensitive to changes in the background light distribution to be separated, is more adaptable to background non-uniformity, and is more amenable to integration into practical systems. Experiments show that this method can effectively restore the wavefront under strong background light interference with a background-to-signal ratio ranging from 30 to 120, and the recovery is further improved compared with traditional methods.
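The Bezier-surface fit at the heart of the method can be illustrated with a direct Bernstein-polynomial evaluation. This is a generic sketch, not the paper's implementation; the control grid below, sampled from a linearly varying background, is hypothetical.

```python
import numpy as np
from math import comb

def bezier_surface(ctrl, u, v):
    """Evaluate a Bezier surface at (u, v) in [0, 1]^2 from an
    (m+1) x (n+1) grid of control values via Bernstein polynomials."""
    m, n = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    bu = np.array([comb(m, i) * u**i * (1 - u)**(m - i) for i in range(m + 1)])
    bv = np.array([comb(n, j) * v**j * (1 - v)**(n - j) for j in range(n + 1)])
    return float(bu @ ctrl @ bv)

# Control values sampled from a linearly varying sky background:
# Bernstein polynomials reproduce linear data exactly, so the surface
# recovers the background 2 + 3u + 4v and can be subtracted pointwise
# from the spot image to isolate the sub-spots.
ctrl = np.array([[2.0 + 3.0 * (i / 3) + 4.0 * (j / 3)
                  for j in range(4)] for i in range(4)])
```

Because the control points are taken at spot-diffraction-limit intervals, a bright sub-spot contributes to at most a few control values, which is what keeps the fitted surface close to the background rather than the signal.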
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0801003 (2024)
Extended Resampling of Subharmonic Atmospheric Turbulence Simulation Phase Screen Research
Wenyong LU, Yan SHI, Jianyong CHEN, Chunlian ZHAN, and Shangzhong JIN
The simulation of wavefront distortion caused by atmospheric turbulence has been a key technique in the study of remote target imaging, light propagation in the atmosphere, large astronomical telescopes and the design of adaptive optics systems. The rapid generation of high-precision atmospheric turbulence phase screens is particularly important for studying the performance of remote target imaging and adaptive optics systems. At present, the most commonly used simulation method is the spectral inversion method based on the fast Fourier transform. Owing to the constraints of the Fourier transform itself, the phase screen generated by spectral inversion lacks low-frequency information. Turbulence energy is mainly concentrated in the low-frequency region, and the commonly used subharmonic methods developed to compensate for this still fail to sample the low-frequency region adequately, resulting in energy leakage. In this paper, an extended resampling subharmonic method is proposed to rapidly generate atmospheric turbulence phase screens: the original region is expanded and divided into four sub-blocks, each sub-block is further divided into four smaller sub-blocks, the extended low-frequency region is densely sampled, and so on. With continued grid division, ever lower frequency components can be added and the initial region of low-frequency compensation can be expanded; a new sampling method for the low-frequency compensation region is designed accordingly.
In simulations based on Kolmogorov theory, the error between the phase structure function of the generated phase screen and the theoretical value is analyzed for different subharmonic orders. The results show that when the subharmonic order is 11, the relative error between the phase structure function of the screen generated by the extended resampling subharmonic method and the theoretical value is 0.368%. Because field experiments are affected by environment, climate and other factors and carry large uncertainty, an atmospheric turbulence simulator was built in the laboratory for experiments. Based on the results of the field experiment, the telescope used in the field was scaled down, and commonly used lens materials were employed in software to simulate and design the atmospheric turbulence simulator. The resulting MTF curves of each field of view are close to the diffraction limit, and the Huygens point spread function shows no obvious defects. According to the simulation results, the atmospheric turbulence simulator was built in the laboratory for analysis. The system includes a light source, a target object, a front lens group, a beam-splitting prism, a liquid crystal spatial light modulator, an aperture, a rear lens group and a board-level camera. The light emitted by the object is refracted by the front lens group to form parallel light in each field of view. After the beam-splitting prism, linearly polarized light with its polarization direction parallel to the experimental platform is formed in the transmission direction and is incident on the surface of the LCOS-SLM. The LCOS-SLM applies phase modulation to the incident light by loading the atmospheric turbulence phase map from the computer.
The beam reflected by the LCOS-SLM is focused by the rear lens group to form an image-side telecentric light path, and the final image is formed on the receiving surface of the board-level camera. The results show that after loading the atmospheric turbulence phase screen generated by this method, the absolute error between the turbulence-affected image MTF and the turbulence modulation transfer function curve is small, with an average error of 0.0336; that is, the method can generate high-precision atmospheric turbulence phase screens.
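The baseline spectral-inversion step that the subharmonic compensation builds on can be sketched as follows. This generates only the FFT part with a Kolmogorov spectrum; the extended resampling subharmonic compensation itself is not reproduced here, and the grid size, Fried parameter r0, and grid spacing are illustrative.

```python
import numpy as np

def fft_phase_screen(n, r0, delta, seed=0):
    """Kolmogorov phase screen by spectral inversion: filter complex
    Gaussian noise with the square root of the phase power spectral
    density and inverse-FFT to the spatial domain."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=delta)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                  # suppress the undefined DC term
    psd = 0.023 * r0**(-5.0 / 3.0) * f**(-11.0 / 3.0)
    df = 1.0 / (n * delta)
    cn = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) \
         * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(cn)) * n**2
```

Because the lowest nonzero frequency on this grid is 1/(n*delta), everything below it is missing, which is exactly the low-frequency deficit the paper's extended resampling subharmonic method is designed to fill.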
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0801004 (2024)
A Multi-scale Hierarchical Residual Network-based Method for Tiny Object Detection in Optical Remote Sensing Images
Xiangjin ZENG, Genghuan LIU, Jianming CHEN, Jiazhen DOU... and Yuwen QIN
Optical remote sensing image object detection aims to precisely locate and categorize targets such as aircraft, vehicles, and ships. Challenges arise from the vast distances involved in remote sensing, which lead to numerous tiny objects that are hard to characterize. Additionally, complex backgrounds and environmental factors such as lighting and weather reduce signal-to-noise ratios, increasing detection difficulty. Although Convolutional Neural Networks (CNNs), especially those of the YOLO family, are widely employed for their efficient feature extraction, they perform poorly on these tiny objects. The key to detecting tiny objects in optical remote sensing images is to obtain sufficiently rich multi-scale feature information and clear tiny-object features. To address these problems, this paper proposes MHRM-YOLO, a tiny object detection algorithm for optical remote sensing images built on YOLOv5 and based on a multi-scale hierarchical residual network, and designs a simple and efficient Multi-scale Hierarchical Residual Module (MHRM) for tiny-object feature extraction. This module extends the Cross Stage Partial (CSP) module with a deeper hierarchical design, using different convolutional combinations to extract features from different layers, which allows the network to obtain a richer flow of gradient information and output richer feature map combinations.
In addition, MHRM can be easily embedded into the backbone of existing mainstream YOLO detection algorithms, where it obtains richer receptive fields at a finer granularity and effectively captures the contextual information of tiny objects while retaining their spatial features. The network structure of the MHRM-YOLO algorithm is divided into three parts: the backbone, the neck, and the prediction head. The backbone consists of MHRM and basic convolution modules and performs fine-grained feature extraction to obtain more multi-scale information and a larger receptive field; the neck uses the conventional CSP plus Path Aggregation Network (PAN) feature pyramid structure for multi-scale feature fusion; and the prediction part uses an optimized localization loss function. Since tiny object detection is sensitive to positional offsets during regression, the penalty term of the localization loss is further improved to enhance the algorithm's handling of positional offsets. The shape penalty term of the baseline CIoU localization loss loses its effect for tiny objects; the optimized loss function therefore retains the Euclidean distance penalty on the box centers, adjusts it to a scalable exponential function, and replaces the shape penalty with a bounding-box distance penalty, which weakens the detector's oversensitivity to positional offsets and further improves detection performance. To validate the effectiveness of the proposed detector, MHRM-YOLO is evaluated systematically on the challenging optical remote sensing tiny object detection dataset AI-TODv2 and the tiny pedestrian dataset TinyPerson.
Systematic ablation experiments cover the effects of different module combinations, the loss function, the performance differences between backbone modules, and the portability of the algorithm; the results show that both the MHRM module and the localization loss function improve detection performance. Compared with the baseline YOLOv5 algorithm, the average detection accuracy of MHRM-YOLO on the two datasets is improved by 5.5% and 1.8% respectively, effectively reducing the false detection rate and the missed detection rate for tiny objects in optical remote sensing images. Because larger-scale feature layers are used for detection, the MHRM-YOLO detector incurs increased computation and a slight decrease in inference speed compared with the baseline, and it still misses some irregularly shaped targets. In addition, the experimental results show that although the detection accuracy of MHRM-YOLO has an advantage over mainstream detectors, the absolute accuracies remain low, much lower than in conventional object detection, so the algorithm still has room for further optimization.
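The role of a center-distance penalty in the localization loss can be illustrated with a standard DIoU-style term. This is the generic DIoU formulation, not the paper's modified loss (its exponential scaling and bounding-box distance penalty are not specified here); boxes are (x1, y1, x2, y2) and the example boxes are made up.

```python
def iou(b1, b2):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def diou_loss(b1, b2):
    """DIoU-style localization loss: 1 - IoU plus the squared distance
    between box centers, normalized by the squared diagonal of the
    smallest enclosing box (the Euclidean centroid penalty)."""
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    ex1, ey1 = min(b1[0], b2[0]), min(b1[1], b2[1])
    ex2, ey2 = max(b1[2], b2[2]), max(b1[3], b2[3])
    diag2 = (ex2 - ex1)**2 + (ey2 - ey1)**2
    dist2 = (cx1 - cx2)**2 + (cy1 - cy2)**2
    return 1.0 - iou(b1, b2) + dist2 / diag2
```

For tiny boxes, a one-pixel center offset can collapse the IoU term to zero while the distance term still provides a smooth gradient, which is why tiny-object detectors rework this penalty.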
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0810001 (2024)
Camouflage Object Detection Based on Feature Fusion and Edge Detection
Cheng DING, Xueqiong BAI, Yong LV, Yang LIU... and Xin LIU
Camouflaged Object Detection (COD) holds significant research and application value in various fields. Deep learning is pushing the performance of object detection algorithms to new heights. Designing a network that effectively integrates features of different layer sizes and eliminates background noise while preserving detailed information presents the main challenge in this field. We propose Feature Fusion and Edge Detection Net (F2-EDNet), a camouflaged object segmentation model based on feature fusion and edge detection. ConvNeXt is used as the backbone to extract multi-scale contextual features. The extensiveness and diversity of the features are then enhanced in two ways. The first uses a Feature Enhancement Module (FEM) to refine and downsize the multi-scale contextual features. The second introduces an auxiliary task that fuses cross-layer features through a Cross-layer Guided Edge prediction Branch (CGEB), which extracts edge features and predicts edge information to increase feature diversity. Additionally, a Multiscale Feature Aggregation Module (MFAM) improves feature fusion by capturing and fusing interlayer difference information between edge features and contextual features through multiscale attention and feature cascading. The model's predictions are deeply supervised to obtain the final detection results. To validate the performance of the proposed model, it is compared qualitatively and quantitatively with eight camouflaged object detection models from the past three years on three publicly available datasets.
This comparison assesses its detection accuracy. Additionally, a model efficiency analysis is conducted against five open-source models, and the effectiveness of each module is verified through ablation experiments to determine the optimal structure. The quantitative experiments indicate that on the CAMO dataset, the S-measure, F-measure, E-measure and mean absolute error metrics of F2-EDNet are all optimal. On the COD10K dataset, the structural similarity metric shows that the proposed algorithm is optimal, while the remaining metrics reach sub-optimal levels. On NC4K, all four metrics of the proposed algorithm are optimal. The visualized detection results show that, in the camouflaged object detection task, the predictions of the proposed model are more accurate and refined than those of other methods. Although the number of parameters in the proposed model is higher than in other models, the simple structure of the framework allows it to run faster than most of them, even outperforming models specifically designed to be lightweight. In terms of the number of operations, the arithmetic complexity of the proposed model is significantly lower than that of another model that also uses multi-task learning. The model thus maintains high detection accuracy while striking a reasonable balance between computing speed and number of operations. The ablation experiments demonstrate that each module plays its expected role and that the model's performance has been optimized. Overall, the proposed algorithm achieves optimal detection accuracy: compared to the sub-optimal models, it demonstrates average improvements of 1.41%, 1.74%, 0.14%, and 0.77% on the S-measure, F-measure, mean absolute error, and E-measure indices across the three datasets.
Additionally, the model's design achieves a reasonable balance between operation volume and operation rate. During performance testing, the model ran at 46 fps, striking a balance between detection accuracy and execution efficiency and demonstrating practical application value. In future work, the algorithm will be made more lightweight to further reduce computation and improve inference speed; in applications, the model can be transferred to directions such as medical image segmentation, defect detection and transparent object segmentation through transfer learning.
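Two of the evaluation metrics quoted above have simple standard formulations, sketched below. The beta^2 = 0.3 weighting toward precision is the conventional choice in salient/camouflaged object detection; the toy masks are hypothetical, and the S-measure and E-measure are not reproduced here.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a prediction map and a mask in [0, 1]."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(gt, float))))

def f_measure(pred, gt, thresh=0.5, beta2=0.3):
    """F-measure on a thresholded prediction, weighted toward precision."""
    binary = np.asarray(pred) >= thresh
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(binary, gt).sum()
    if tp == 0:
        return 0.0
    precision = tp / binary.sum()
    recall = tp / gt.sum()
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall))
```

Lower is better for MAE and higher is better for F-measure, which is why the improvement percentages in the abstract mix both directions.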
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0810002 (2024)
Dynamic Weight Cost Aggregation Algorithm for Stereo Matching Based on Adaptive Window
Fupei WU, Yuhao LIU, Rui WANG, and Shengping LI
Stereo matching is the key to binocular vision measurement: it extracts depth information from the left and right images captured by binocular cameras to achieve three-dimensional measurement of the target. Reconstructing the three-dimensional morphology of a sample surface with a binocular vision system can facilitate the quantification of product surface quality, characterize defects during manufacturing, and assist in analyzing the distribution patterns of product defects. However, owing to factors such as an unstable physical environment, the geometry of the measured surface, and the precision of the acquisition equipment, existing stereo matching algorithms find it difficult to balance accuracy and real-time performance, which affects the efficiency of industrial inspection. How to improve the stereo matching accuracy of binocular images and thereby the measurement accuracy of binocular vision remains the main problem in this field. For these reasons, a stereo matching model is established based on the binocular visual imaging system, and a dynamic-weight cost aggregation stereo matching algorithm based on adaptive windows is proposed in this paper. Firstly, traditional local matching algorithms usually use a single weight for cost aggregation across different aggregation windows while ignoring the differences between pixels in different regions, which easily leads to unstable stereo matching accuracy in binocular vision measurement.
Therefore, an adaptive cross window for cost aggregation is constructed with a gradient information representation model as a constraint, to meet the different window-size requirements of weak-texture regions and disparity-discontinuous regions. The proposed algorithm achieves a large adaptive window in weak-texture regions and limits arm-length extension in texture-rich regions. Secondly, by analyzing the pixel features of disparity-discontinuous regions and weak-texture regions, a cost aggregation model is established based on dual-threshold weights of pixel distance and color difference to calculate the dynamic weight influence factors of each window, which achieves the distribution of cost weights across different windows. In terms of cost aggregation performance, a comparative experiment with the AD-Census algorithm shows that the average mismatch rate of the proposed algorithm is 4.21%, and its overall matching accuracy has a significant advantage. Thirdly, in order to recover the information of invalid pixels, a local neighborhood of invalid pixels is constructed based on the cross intersection method, and the occluded points and mismatched points are then interpolated and filled separately to obtain a denser disparity image. Additionally, the disparity image is segmented into regions based on the linear iterative clustering method. By utilizing the mean and variance information of local regions, singular disparity values are removed, and reliable pixel disparity values are searched for to fill in, thereby improving the overall matching accuracy of the disparity map. Finally, the experimental results show that in testing on the Middlebury dataset, the proposed algorithm has an average mismatch rate of 4.11% for non-occluded areas and 5.65% for all areas, respectively, which is better than traditional matching algorithms.
Based on the algorithm proposed in this paper, a binocular system platform was constructed for experiments, and the measurement results of 3D printed samples were compared with those obtained by the triangular laser method. For the 4 groups of measured samples, the average relative error of global length measurement is less than 1.2%, and the average relative error of global height measurement is less than 2.7%. The experimental results verify the effectiveness and reliability of the algorithm proposed in this paper.
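The adaptive cross-window construction described above can be illustrated with a minimal numpy sketch. The thresholds `tau_color` and `max_arm`, and the use of a plain intensity-difference test in place of the paper's gradient-information constraint, are simplifying assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def arm_length(img, y, x, dy, dx, tau_color=10, max_arm=17):
    """Extend a support arm from (y, x) in direction (dy, dx) until the
    intensity difference exceeds tau_color or max_arm is reached."""
    h, w = img.shape
    length = 0
    while length < max_arm:
        ny, nx = y + (length + 1) * dy, x + (length + 1) * dx
        if not (0 <= ny < h and 0 <= nx < w):
            break
        if abs(int(img[ny, nx]) - int(img[y, x])) > tau_color:
            break
        length += 1
    return length

def cross_window(img, y, x, **kw):
    """Four arm lengths (up, down, left, right) of the adaptive cross window."""
    directions = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    return {d: arm_length(img, y, x, dy, dx, **kw) for d, (dy, dx) in directions.items()}
```

In a flat (weak-texture) region all four arms grow to `max_arm`, giving a large aggregation window; near an intensity edge (a proxy for a disparity discontinuity) the arm pointing at the edge is cut short, which is the behavior the abstract describes.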
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0810003 (2024)
A Dual Branch Edge Convolution Fusion Network for Infrared and Visible Images
Hongde ZHANG, Xin FENG, Jieming YANG, and Guohang QIU
Image fusion technology is the process of extracting and integrating complementary information from a set of images, and fusing them into a single image. This process aggregates more effective information, removes redundant information, and enhances the quality of information and scene perception capabilities in the image. Among them, infrared and visible image fusion is a common branch in the field of image fusion and is widely used in the field of image processing. Infrared images can capture hidden heat source targets and have strong anti-interference capabilities. Visible images have rich scene information through reflective imaging. The fusion of the two can combine the rich detail texture information of the visible image and the highlighted target information of the infrared image, obtaining a clearer and more accurate description of the scene content, which is beneficial for target recognition and tracking. However, most current fusion methods based on deep learning focus on feature extraction and the design of the loss function; they do not separate common information from modality information, and they use the same feature extractor for different modalities without considering the differences between them. Based on this, this paper proposes an infrared and visible image fusion method based on a dual-branch edge convolution fusion network.
First, based on the dual-branch autoencoder, an improved dual-branch edge convolution structure is proposed, which decomposes the extracted feature information into common information and modality information, and introduces an edge convolution block in each branch to better extract deep features; then a convolutional block attention module is introduced in the fusion layer to enhance the features of different modalities separately for a better fusion effect; finally, based on the characteristics of the encoding and decoding network in this paper, a loss function combining reconstruction loss and fusion loss is proposed, which better retains the information of the source images. In order to verify the effectiveness of the proposed method, 10 pairs of images were randomly selected from the TNO dataset and the test set of the MSRS dataset, respectively, to test on 6 indicators, such as MSE, SF, CC, PSNR, and MS-SSIM. Firstly, four sets of ablation experiments were designed to verify the effectiveness of the edge convolution block and the convolutional block attention module. The results show that the edge convolution block can more effectively extract the features of the image and retain more edge information, and the fusion of modality information by the convolutional block attention module is also significantly enhanced. In addition, the optimal parameters of the loss function are found by the grid search method. Besides, the proposed method is compared with mainstream infrared and visible image fusion methods, including SeAFusion, SwinFuse, etc. The results show that the proposed method retains the high-brightness targets of the infrared image and the clear background of the visible image, with higher contrast and a better visual effect. To be specific, the proposed method leads the other methods in the four indicators of MSE, CC, PSNR and MS-SSIM, with the best overall quality.
The above experimental results prove that, compared with other methods, the fusion result of the proposed method better retains the thermal radiation information of the infrared image and the texture information of the visible image, and surpasses existing infrared and visible image fusion methods in comprehensive performance. Although the experiments were only conducted on the task of infrared and visible image fusion, the method in this paper can also be extended to the fusion of more than two modalities. Future work will continue to test its performance in other multi-modal information fusion tasks, and optimize the network structure to obtain better fusion results.
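As an illustration of how a fusion objective of this kind can combine an intensity term (favoring the salient bright infrared targets) with a gradient term (favoring the stronger visible-image texture), here is a numpy sketch. The max-selection targets and the weight `lam` are assumptions for illustration, not the paper's exact loss function:

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fusion_loss(fused, ir, vis, lam=1.0):
    # Intensity term: pull the fused pixel toward the brighter source pixel,
    # which preserves highlighted infrared targets.
    l_int = np.mean((fused - np.maximum(ir, vis)) ** 2)
    # Gradient term: pull the fused edge strength toward the stronger source
    # edge, which preserves visible-image texture detail.
    l_grad = np.mean((grad_mag(fused) - np.maximum(grad_mag(ir), grad_mag(vis))) ** 2)
    return l_int + lam * l_grad
```

In a training setup this would be evaluated on network outputs; here it is written on plain arrays so the two terms are easy to inspect in isolation.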
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0810004 (2024)
Laboratory Geometric Calibration Method for Multi-band Fisheye Lens Camera
Caixia WANG, Hongyao CHEN, Xiaolong SI, Xin LI... and Shiwei BAO
The significant distortion introduced by fisheye lenses, while expanding the field of view, poses a new scientific problem: the projection process cannot be described using traditional pinhole photography models. To obtain the mapping relationship between pixel points and the angle of incident light, it is necessary to reconstruct the imaging model based on the unique nonlinear distortion characteristics of fisheye cameras. However, due to their unique curvature and optical characteristics, even the common universal models cannot completely eliminate radial residual distortion, with errors reaching nearly 10 pixels. Also, due to the chromatic aberration characteristics of fisheye lenses, there are differences in the refraction of light in different bands. This article studies the calibration principle and calibration process of multi-band fisheye cameras, proposing a fisheye lens calibration method based on a separated precision two-dimensional turntable. The rotary indexing table and the vertical turntable have good repeated positioning accuracy, 2 s and 10.8 s respectively, and the orthogonal error of the two rotation axes is less than 10 s. They drive the camera and collimator to rotate so that the light spot covers the entire field of view of the lens. To simplify the coordinate conversion process, it is necessary to adjust the collimator to align its optical axis perpendicular to the rotation axis of the rotary indexing table.
According to the theory that an ideal lens focuses parallel light from an infinite distance on the principal point, the camera's position and posture are fine-tuned until, as the camera rotates around the Z-axis from 0° to 360°, the position of the spot on the image remains unchanged. At this point, the centroid coordinates of the spot are the pixel coordinates of the principal point. On this basis, a fifth-degree polynomial is used to fit the residuals and, together with an equisolid-angle projection, describe the camera projection process. The five bands from visible light to near-infrared are calibrated separately to improve the geometric calibration accuracy. Based on experimental results, the influence of lateral chromatic aberration on radial distance in geometric distortion was analyzed and discussed. It was found that the maximum difference in radial distance between different bands at the same incident angle is 8 pixels. Combined with the edge resolution of the lens, which is approximately 0.11°, this difference results in an angle error of approximately 0.88°. Therefore, when the fisheye lens is applied to different bands, it is necessary to calibrate each band independently to improve the accuracy of geometric calibration. In addition, to analyze the reliability of the data, this article calculated the uncertainty of five main influencing factors during the calibration process.
The results are as follows: the measurement error of the two-dimensional precision turntable is 13 s, the accuracy of spot centroid extraction is 0.034 pixels, the principal point positioning error is 1.5 pixels, the focal length fitting error is 0.14 μm, and the distortion compensation coefficient fitting error is 1×10⁻⁹. To verify the accuracy of the calibration results, the sun was imaged on a flat, open field in Hefei, Anhui, China (longitude 117.1661°E, latitude 31.9039°N) on the afternoon of November 2, 2023, and the morning of November 3, 2023, covering the time from 9:00 to 16:00. The camera parameters were adjusted to keep the sun image within the dynamic range. With the help of a theodolite, and considering the 4° magnetic declination angle in the Hefei area, the camera aperture scale was aligned with the north direction, and the camera tripod was adjusted according to the spirit level to keep the camera horizontal. The zenith angle and azimuth angle of the sun at different times were calculated using astronomical algorithms, and the geometric calibration results were used to calculate the corresponding image positions; the difference between the two verifies the accuracy of the model. In the collected data, the observed range of solar zenith angle is 47.22° to 77.22°, with root mean square errors of zenith angle and azimuth angle being 0.226° and 0.487°, respectively. Based on the above analysis and experimental verification, the proposed camera calibration method can establish an accurate mapping relationship between the incident angle of light and pixel coordinates, simplifying the calibration process. In addition, this article discusses the influence of lateral chromatic aberration on radial distance in geometric distortion and proposes that different bands should be calibrated separately to achieve higher geometric calibration accuracy.
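The projection model used above, an equisolid-angle term r = 2f·sin(θ/2) plus a fifth-degree polynomial for the radial residual, can be sketched as follows. The synthetic residual used here stands in for measured spot positions; in the actual calibration the residuals come from the turntable measurements:

```python
import numpy as np

def equisolid_radius(theta, f):
    """Ideal equisolid-angle projection: radial distance r = 2 f sin(theta / 2),
    with theta the incident angle in radians and f the focal length."""
    return 2.0 * f * np.sin(theta / 2.0)

def fit_residual(theta, r_measured, f, deg=5):
    """Fit a fifth-degree polynomial to the radial residual r_measured - r_ideal."""
    residual = r_measured - equisolid_radius(theta, f)
    return np.polyfit(theta, residual, deg)

def predict_radius(theta, f, coeffs):
    """Calibrated projection: ideal equisolid term plus fitted residual."""
    return equisolid_radius(theta, f) + np.polyval(coeffs, theta)
```

Calibrating each band separately then amounts to fitting one `coeffs` vector (and focal length) per band, which is how the per-band lateral chromatic aberration gets absorbed.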
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0811001 (2024)
Research on Gamma Correction of Field-stacked Silicon-based OLED Micro-displays Based on Genetic Algorithm
Baoliang CHEN, Yuan JI, Xinjie HUANG, and Junkai LIU
In the context of the emerging and evolving concept of the Metaverse, the technology of Virtual Reality (VR) has incrementally unveiled its substantial importance. Within this particular domain, micro-displays, functioning as a pivotal interface bridging the virtual world with tangible reality, have assumed an immensely crucial role. Silicon-based OLED micro-displays in particular, distinguished by their attributes of high resolution, superior contrast, and exceptionally vivid colors, have emerged as cornerstone products in the arena of next-generation micro-display technologies. The operational methodologies for these silicon-based OLED micro-displays are primarily bifurcated into two types: digital driving and analog driving. Digital driving, acclaimed for its prompt response and elevated contrast levels, has been extensively embraced in the industry. This mode of operation predominantly utilizes Pulse Width Modulation (PWM), a method that generates an array of distinct grayscale levels by meticulously adjusting the proportional duration of pixel activation and deactivation. Within the diverse landscape of PWM methodologies, the technique of field-stacked driving is particularly noteworthy. This method orchestrates fields with varying weights in a meticulously structured sequence, effectively diminishing the instantaneous bandwidth while proficiently representing diverse levels of grayscale. Nevertheless, one cannot overlook the significance of the brightness pulses that are generated by the equivalent capacitive characteristics of OLED devices during their activation phase.
In the scenario of field-stacked driving, the brightness pulses emanating from fixed fractional subfields that undergo on-off transitions have a direct and profound impact on the displayed brightness, consequently leading to a nonlinear escalation in the grayscale curve. This issue predominantly manifests in two forms: the nonlinearity of the grayscale curve itself, and a paradoxical decrease in brightness as the grayscale increases, culminating in the emergence of ineffective grayscale points. Together, these challenges add a layer of complexity to the process of Gamma correction. A prevalent strategy in Gamma correction is the augmentation of bit depth, which offers a broader spectrum of grayscale levels, thereby allowing for a more precise approximation of the nonlinear characteristics inherent to the Gamma 2.2 curve. A linear progression in the grayscale curve simplifies the Gamma correction process by obviating the need for individual point adjustments. However, nonlinear progression in the grayscale curve reduces the quantity of usable grayscale levels, thereby impinging upon its linear portrayal. To execute Gamma correction effectively, it is imperative to eliminate the nonlinear progression within the grayscale curve. In response to this necessity, this paper introduces a brightness model founded on the principles of non-ideal field-stacked digital driving, incorporating elements such as the sequencing of fields, the weighting of fields, the Vcom voltage value, and the configuration of the Vcom voltage. This integration effectively reconstructs the grayscale curve that has been impacted by non-ideal brightness pulses. By judiciously adjusting these parameters, it becomes feasible to substantially diminish the frequency of brightness pulse occurrences and to compensate for the impact of non-ideal brightness pulses.
Consequently, this paper employs a genetic algorithm to optimize the grayscale curve, with the explicit objective of minimizing the root mean square error and the count of ineffective grayscale points between the actual grayscale curve and its ideal counterpart. The model can be calibrated by a nonlinear least squares fit to measurements of various Vcom values and times t, along with the corresponding brightness levels, on a full-color silicon-based OLED micro-display with a resolution of 2 560×2 560×3. By applying the non-ideal field-stacked driven OLED brightness model in conjunction with a genetic algorithm, this paper develops appropriate populations and fitness functions specifically designed to optimize both the root mean square error and the number of ineffective grayscale points in the grayscale curve. Through the iterative evolution of multiple generations of these populations, the optimal population, representing the most advantageous parameter space, is identified. When this optimal parameter space is implemented and measured on a full-color silicon-based OLED micro-display, a marked improvement in the grayscale curve is observed. The optimization significantly reduces the root mean square error from an unoptimized 21.65 cd/m² with 15 395 ineffective grayscale points to 1.62 cd/m² with a mere 2 977 such points. The post-optimization Gamma 2.2 curve aligns with the ideal Gamma 2.2 characteristics and exhibits notably enhanced differentiation in the low grayscale range, especially compared to traditional analog driving techniques.
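The optimization loop described above can be sketched as a simple genetic algorithm whose fitness combines the RMSE against the ideal Gamma 2.2 curve with a count of non-monotonic (ineffective) grayscale points. The cubic `cubic_model` and the equal weighting of the two fitness terms are illustrative stand-ins for the paper's field-stacked brightness model and its tuned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_ideal(levels, l_max=100.0, gamma=2.2):
    """Ideal Gamma 2.2 luminance curve with peak luminance l_max."""
    return l_max * (levels / levels[-1]) ** gamma

def fitness(params, levels, curve_model, ideal):
    curve = curve_model(params, levels)
    rmse = np.sqrt(np.mean((curve - ideal) ** 2))
    invalid = np.sum(np.diff(curve) <= 0)  # brightness not increasing: ineffective points
    return rmse + invalid                  # equal weighting is an assumption

def evolve(curve_model, levels, dim, pop=40, gens=60, sigma=0.1):
    """Select the fitter half each generation, mutate it to refill the population."""
    ideal = gamma_ideal(levels)
    population = rng.uniform(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p, levels, curve_model, ideal) for p in population])
        parents = population[np.argsort(scores)[: pop // 2]]        # selection
        children = parents + rng.normal(0.0, sigma, parents.shape)  # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(p, levels, curve_model, ideal) for p in population])
    return population[np.argmin(scores)]

def cubic_model(p, levels):
    """Hypothetical stand-in for the driving-parameter-to-luminance model."""
    x = levels / levels[-1]
    return 100.0 * (p[0] * x + p[1] * x**2 + p[2] * x**3)
```

In the paper the parameter vector would encode field order, field weights, and Vcom settings rather than polynomial coefficients, but the selection/mutation loop and the two-term fitness are the same shape.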
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0811002 (2024)
Analysis of Imaging Limit Capability for Natural Rendezvous of Low Earth Orbit Debris
Yaru LI, Liang ZHOU, Zhaohui LIU, Wenji SHE, and Kai CUI
In order to achieve precise attitude monitoring of large-sized space debris, the spatial resolution of cameras is continuously improving. However, this improvement leads to an increasing blurring effect caused by relative motion during the exposure time. It is therefore particularly important to study how to balance camera resolution against the image blur caused by motion-induced displacement. For low earth orbit natural rendezvous imaging scenarios, changes in the camera's observational angle before and after the rendezvous lead to variations in the position and orientation of the target in the camera line of sight. The image motion generated as a result is referred to as intrinsic image motion in the natural rendezvous imaging scenario. This paper establishes the equation for the image plane position through the mapping relationship between points on the space target and their corresponding image points. The equation involves transformations among seven different coordinate systems. In theory, taking the derivative of this equation with respect to time yields the instantaneous velocity equation for image motion. However, due to the immense computational complexity of the matrices and the numerous parameters involved (including the orbital parameters of the imaging platform and target, the spatial resolution of the camera, the exposure time, etc.), it is impractical to provide an exact expression for intrinsic image motion.
Therefore, we obtain the image plane positions at different time points based on specific orbital and imaging parameters, calculate the intrinsic image motion within the exposure time, and then employ a data fitting method to obtain a model of the intrinsic image motion function. By analyzing the relative radial velocity of the target with respect to the camera and the displacement of the target along the optical axis at the rendezvous moment, it can be seen that the rotational image motion of the target around the optical axis is a primary factor in intrinsic image motion. Therefore, this intrinsic image motion is directly correlated with the relative rotational angular velocity between the target and the camera, the exposure time, the angular separation between the target and the camera, and the spatial resolution of the camera. Taking these influencing factors into consideration, this paper calculates the intrinsic image motion for specific imaging orbits and various influencing factors using the image plane position equation. The obtained data is utilized as a training set for fitting the intrinsic image motion function. The fitting correlation coefficient for the training set is 0.99, with a root mean square error of 0.12. Subsequently, intrinsic image motion calculated with different orbital parameters is used as a test set to validate the accuracy of the fitted function. The correlation coefficients for different independent variables are all greater than 0.9, and the root mean square errors are all less than 0.2. This indicates that the fitting accuracy of the intrinsic image motion function is high, and the fitting results are reliable. The intrinsic image motion function model reveals that intrinsic image motion is linearly correlated with the relative rotational angular velocity, the exposure time, and the angular separation between the target and the camera. It is also exponentially correlated with the spatial resolution of the camera.
This paper analyzes the impact of this image motion on the modulation transfer function. When the image motion is greater than 0.5 pixels, the modulation transfer function decreases by approximately 10%, failing to meet the overall system design requirements. Therefore, this paper takes an image displacement of 0.5 pixels as the maximum allowable displacement and establishes a constraint on the camera's spatial resolution at the time of natural rendezvous. This constraint describes the relationship between camera resolution and the relative angular velocity, exposure time, and angular separation between the target and the camera under the condition that the maximum allowable image motion is satisfied. Taking a specific set of imaging orbits as an example, we demonstrate how to calculate the maximum resolution of the camera at the rendezvous moment using this constraint condition. We point out that in low earth orbit rendezvous imaging scenarios, even if the spatial resolution of the camera exceeds this limit, there is no improvement in image quality. This indicates that the constraint condition is significant for the design of imaging cameras and the selection of exposure parameters.
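The roughly 10% MTF drop quoted for 0.5 pixels of image motion is consistent with the standard MTF of uniform linear motion blur, a sinc of the blur extent times spatial frequency, evaluated at the Nyquist frequency (0.5 cycles/pixel); the short check below assumes this standard linear-motion model rather than the paper's full derivation:

```python
import numpy as np

def motion_mtf(d_pixels, f_cyc_per_px):
    """MTF of uniform linear image motion of extent d_pixels at spatial
    frequency f_cyc_per_px. np.sinc(x) = sin(pi x)/(pi x), so this is
    sin(pi d f)/(pi d f)."""
    return np.abs(np.sinc(d_pixels * f_cyc_per_px))

# Contrast loss at the Nyquist frequency for 0.5 px of image motion:
drop = 1.0 - motion_mtf(0.5, 0.5)  # close to a 10% MTF decrease
```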
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0811003 (2024)
Development of Gain-managed Amplification All-fiber Femtosecond Laser Technology for Multimode Nonlinear Optics Imaging
Rumeng LEI, Zhongchao LI, Xiaoshen LI, Junchang SU, and Wei LIU
Multimodal Nonlinear Optical Imaging (NLOI) has revolutionized biological research by enabling high-resolution, three-dimensional fluorescence imaging with minimal cell phototoxicity. Unlike traditional microscopes using ultraviolet or visible light, NLOI employs near-infrared wavelengths (700~1 100 nm), maximizing tissue penetration and minimizing photodamage. NLOI relies on femtosecond lasers to excite fluorescent markers like Green Fluorescent Protein (GFP). Despite their excellent performance, existing high-repetition-rate light sources, often tunable mode-locked Titanium:Sapphire lasers, face limitations in generating high-peak-power pulses at low exposure powers. This restricts their capabilities for deep tissue imaging. Fiber lasers offer a compelling solution. Our novel fiber laser setup, based on Gain-Managed Amplification (GMA), addresses these limitations, generating high-quality, ultrashort femtosecond pulses ideal for NLOI. This compact and cost-effective system boasts outstanding features: a 35 MHz repetition rate, 39.5 fs pulse width, and 267.4 mW average power. Significantly broadening the spectrum (~80 nm) and achieving near-Fourier-transform-limited pulses after compression, it surpasses conventional methods in both performance and affordability. Detailed simulations using the Generalized Nonlinear Schrödinger Equation (GNLSE) guided the optimal design of our setup, ensuring precise control over pulse propagation and optimizing pulse compression quality. We demonstrate the success of our approach by constructing an all-fiberized experimental setup encompassing seed source, pre-chirp management, GMA, and pulse compression modules.
This innovative fiber laser holds immense potential for advancing NLOI applications, particularly in deep tissue cell imaging. We investigated the effects of different pre-chirp Group-Delay Dispersion (GDD) values and seed energies on the output using Fiberdesk software. The results indicate that inputs with positive or negative pre-chirp GDD lead to broader spectral broadening compared to the unchirped case. Specifically, negative pre-chirp GDD results in pulses with smaller pedestals and more effective compression. Additionally, it was observed that within an input pulse energy window of 0.06 to 0.3 nJ (corresponding to an average power of 2 to 10.5 mW), substantial spectral broadening and efficient compression by grating pairs can be achieved. However, further increasing the pulse energy introduces complex higher-order nonlinear phase components, which hinder additional compression by the grating pair. These findings were instrumental in the construction of a gain-managed amplifier. In our experiment, we measured the spectrum and pulses after GMA at a seed current of 1.5 A; as the pump power increased from 731 mW to 950 mW, the spectral width after GMA increased from 20 nm to 80 nm, and the pulse duration notably decreased after post-compression by a 1 000 l/mm grating pair. At the highest pump power of 950 mW, the output power after GMA reached 349.4 mW, with a single-pulse energy of 9.98 nJ, marking a 20 dB increase from the pre-main-amplifier level. The pulse duration after grating pair compression is 39.5 fs, closely approaching the Fourier transform limit. The compressed pulse's output power was measured at 267.4 mW, with a single-pulse energy of 7.64 nJ. To facilitate widespread application in NLOI imaging and other non-laboratory environments, we engineered the entire system for encapsulation, successfully reducing its volume to a compact structure.
Additionally, the root mean square error of the measured output power of the laser over three hours was only 0.11%, indicating that the laser not only delivers high-performance output but also maintains long-term stability.
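GNLSE pulse-propagation simulations of the kind mentioned above (performed here in Fiberdesk) are typically built on the split-step Fourier method. Below is a minimal sketch for the basic NLSE with only dispersion and Kerr nonlinearity; gain, Raman response, self-steepening, and higher-order dispersion, which a full GNLSE/GMA model needs, are deliberately omitted, and the sign convention follows the usual slowly-varying-envelope form:

```python
import numpy as np

def split_step_nlse(a0, dt, beta2, gamma, dz, steps):
    """Symmetric split-step Fourier integration of
    dA/dz = -i (beta2/2) d^2 A/dT^2 + i gamma |A|^2 A.
    a0: complex envelope on a uniform time grid with spacing dt."""
    n = a0.size
    w = 2.0 * np.pi * np.fft.fftfreq(n, dt)                   # angular frequency grid
    half_disp = np.exp(1j * (beta2 / 2.0) * w**2 * dz / 2.0)  # half-step dispersion
    a = a0.astype(complex)
    for _ in range(steps):
        a = np.fft.ifft(half_disp * np.fft.fft(a))            # dispersion, dz/2
        a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)      # Kerr phase, full dz
        a = np.fft.ifft(half_disp * np.fft.fft(a))            # dispersion, dz/2
    return a
```

Both sub-steps are unitary, so pulse energy is conserved to machine precision, which is a convenient sanity check on any split-step implementation before gain terms are added.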
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0814001 (2024)
Multi-wavelength and Transverse-mode-switchable Yb-doped Fiber Laser
Jianao PENG, Wei CHEN, Chaoqi HOU, Dandan LIU... and Tingyun WANG
Multi-wavelength lasers and transverse-mode-switchable lasers are expected to find applications in Wavelength Division Multiplexing (WDM) and Mode Division Multiplexing (MDM) systems. High-order transverse modes play a crucial role in the generation of cylindrical vector beams and vortex beams, making them suitable for applications such as micro-particle manipulation, quantum information, and laser material processing. While specific transverse modes can be excited in solid-state lasers through phase and amplitude modulation, the entire laser system is relatively underdeveloped, with limited mode scalability. Therefore, the all-fiber structure of multi-wavelength and transverse-mode-switchable fiber lasers has sparked significant interest among scientists. Various technological approaches have been proposed to achieve multi-wavelength or transverse mode outputs from fiber lasers. However, there has been limited reporting on fiber lasers capable of simultaneously operating at multiple wavelengths while generating switchable transverse modes. Reported multi-wavelength and transverse-mode-switchable fiber lasers often employ gain media consisting of traditional doped fibers, where the competitive advantage of the fundamental mode within the resonant cavity is significantly greater than that of higher-order modes. Consequently, these fiber lasers struggle to maintain stable and efficient output of higher-order modes. In this study, a Ring-Core Yb-Doped Fiber (RCYDF) was designed and fabricated.
The unique structure of the ring-shaped doping region aligns well with the dual-peak spatial electromagnetic field distribution of the LP11 mode, prioritizing the gain acquisition of the LP11 mode within the gain fiber. This design facilitates stable oscillation and efficient output of the LP11-mode laser. Few-Mode Fiber Bragg Gratings (FMFBGs) serve not only as ideal wavelength-selective elements but also as components for achieving transverse mode switching. Utilizing a pair of FMFBGs as the laser resonant cavity and the fabricated RCYDF as the laser gain medium, a multi-wavelength and transverse-mode-switchable fiber laser was demonstrated. By simply adjusting the polarization controller placed on the RCYDF, stable laser oscillation at both single and dual wavelengths can be achieved. When operating in the single-wavelength state, two transverse modes, namely the LP01 and LP11 modes, can be switched. The 3 dB linewidths for both modes are less than 0.08 nm, with a Side-Mode Suppression Ratio (SMSR) of 52.2 dB for the LP01 mode and 46.5 dB for the LP11 mode. The output spectra were monitored every 10 minutes over a total duration of 60 minutes. Fluctuations in the central wavelength and optical intensity of the two laser modes were observed to be 0.01 nm and 1 dB, respectively. No significant changes were observed in the spectral shape. Furthermore, the laser thresholds for the LP01 and LP11 modes are 372.51 mW and 482.51 mW, respectively, with corresponding slope efficiencies of 34.34% and 49.09%. The higher slope efficiency of the LP11 mode is attributed to the enhanced LP11-mode resonance capability of the custom-designed RCYDF compared to traditional Yb-doped fibers, making it highly valuable for fiber lasers targeting higher-order mode outputs. The proposed dual-wavelength and transverse-mode-switchable fiber laser offers the advantages of simplicity, ease of control, and stable operation, presenting promising applications in MDM systems and laser material processing.
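Why an annular doping region favors the two-lobe LP11 mode over the fundamental mode can be illustrated by comparing the fraction of each mode's intensity that falls inside a ring-shaped gain region. The profiles below are simple Gaussian-like and two-lobe stand-ins in arbitrary normalized units, not exact LP mode fields, and the ring radii are illustrative:

```python
import numpy as np

def mode_overlap(ring_r_in, ring_r_out, grid=501, extent=3.0):
    """Fraction of LP01-like and LP11-like intensity inside an annular
    (ring) doping region; illustrative profiles, not exact LP fields."""
    x = np.linspace(-extent, extent, grid)
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    lp01 = np.exp(-r2)              # fundamental-mode stand-in: single central peak
    lp11 = (xx**2) * np.exp(-r2)    # LP11 stand-in: two lobes, null on axis
    ring = (r2 >= ring_r_in**2) & (r2 <= ring_r_out**2)
    frac = lambda intensity: intensity[ring].sum() / intensity.sum()
    return frac(lp01), frac(lp11)
```

Because the LP11 intensity peaks off-axis where the ring-shaped dopant sits, its overlap with the gain region exceeds that of the centrally peaked fundamental mode, which is the gain-competition argument the abstract makes for the RCYDF.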
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0814002 (2024)
Research on Fiber Raman Laser Source for Mobile Quantum Gravimeter
Junchao GAO, Junjie CHEN, Liuxian YE, Bing CHENG... and Qiang LIN
With the development of atomic manipulation technology, cold atom interferometry is widely used to measure astronomical and physical parameters such as the gravitational acceleration, the gravitational constant, the fine structure constant and gravitational waves. Among them, the quantum gravimeter based on cold atomic interference has developed rapidly owing to its small size, strong mobility, high sensitivity and high stability. The phase noise of the Raman light directly affects the sensitivity of the quantum gravimeter, so the study of low-phase-noise Raman laser sources for quantum gravimeters has become a research hotspot. Schemes for Raman laser generation include the electro-optic modulation method, the acousto-optic modulation method and the optical phase-locking method. The electro-optic modulation method loses considerable power during modulation, and the modulation sidebands easily introduce unstable systematic effects. In the acousto-optic modulation method, changes in the external environment destabilize the mirrors in the spatial optical path and introduce additional errors; moreover, the diffraction efficiency of high-frequency AOMs is very low, high-frequency signals are difficult to generate, and the devices are expensive. In the optical phase-locking method, a phase error signal is generated after beat-frequency detection between the master and slave lasers, the error signal is fed back to the master laser through a circuit control system to maintain phase synchronization, and a Raman laser is finally generated.
Compared with the other two methods, the all-fiber optical phase-locking method requires no complex spatial optical path, has high reliability and generates no excess sidebands. Therefore, this paper constructs a Raman fiber laser system with low phase noise, high stability and strong environmental adaptability that can be used for field gravity measurement. A complete phase noise analysis model for the Raman laser system is established, and its phase noise characteristics are theoretically analyzed and optimized. The phase noise power spectral density can reach -118 dBc/Hz in the range of 10 Hz~1 MHz, the corresponding phase noise is 22.7 , and the gravity measurement sensitivity obtained by applying it to the gravimeter is 10.93 . The effects of the beat-frequency optical power and different frequency reference sources on the phase noise of the Raman laser are studied; when the output optical powers of the master and slave lasers satisfy P1∶P2 = 1∶1, the frequency stability of the Raman laser source is the best. By testing the phase noise stability of the Raman laser source for three hours and the frequency stability for 25 minutes, the calculated standard deviation of the phase noise is 0.734 and the corresponding standard deviation of the gravity sensitivity is only 0.349 ; when the integration time is 1 s, the frequency stability of the phase locking is , which verifies that the laser source has low phase noise and high stability. Moreover, the theoretical results obtained with the phase noise model are highly consistent with the experimental results, which confirms the correctness of the theoretical model.
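The link between a phase-noise power spectral density and an integrated rms phase error can be sketched numerically. This is a minimal illustration that assumes a flat single-sideband PSD of -118 dBc/Hz across the full 10 Hz~1 MHz band; the real spectrum of the laser source is not flat, so this sketch is not expected to reproduce the paper's integrated value.

```python
import numpy as np

# Sketch: under the small-angle approximation, the rms phase error follows
# from integrating the single-sideband phase-noise PSD L(f):
#     phi_rms = sqrt(2 * integral of L(f) df).
# The flat -118 dBc/Hz floor below is an illustrative assumption, not the
# measured spectrum of the Raman laser source.

f = np.logspace(1, 6, 2000)              # offset frequencies, 10 Hz .. 1 MHz
L_dBc = np.full_like(f, -118.0)          # hypothetical flat SSB PSD, dBc/Hz
L_lin = 10.0 ** (L_dBc / 10.0)           # linear units, rad^2/Hz

# trapezoidal integration of the PSD over the offset band
integral = np.sum(0.5 * (L_lin[1:] + L_lin[:-1]) * np.diff(f))
phi_rms = np.sqrt(2.0 * integral)        # rad
print(f"rms phase error: {phi_rms * 1e3:.2f} mrad")
```

For a real source the low-offset region (flicker and drift) usually dominates the integral, which is why the measured integrated phase noise exceeds what a flat floor alone would give.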
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0814003 (2024)
Fast Quasi-continuous Wavelength Tuning Method of MG-Y Laser Based on DRSN
Yumeng DU, Wei ZHUANG, Xu ZHANG, Le WANG, and Mingli DONG
The Modulated Grating Y-branch (MG-Y) tunable semiconductor laser stands out due to its short tuning time, wide tuning range, and high output power, making it a core device in the field of optical fiber sensing. However, achieving rapid and accurate quasi-continuous wavelength tuning within the tuning range of MG-Y lasers poses challenges, particularly in terms of the efficiency and accuracy of generating control parameter tables. Traditional wavelength tuning methods rely on spectrometers for wavelength acquisition, which are time-consuming and costly, failing to meet the demands of high-precision optical fiber sensing applications. To address these issues, this paper proposes a rapid quasi-continuous wavelength tuning method for MG-Y lasers based on the Deep Residual Shrinkage Network (DRSN). This method collects optical power through Photodiodes (PDs) instead of using spectrometers for wavelength acquisition, combined with the DRSN model to rapidly generate high-precision control parameter tables, thereby realizing fast and accurate tuning of MG-Y lasers to meet the high-precision requirements of optical fiber sensing demodulation applications. The proposed method improves the process of control parameter table generation for MG-Y lasers. Instead of using wavelength meters, we employ photodiodes to collect output optical power data, drastically reducing the data acquisition time from minutes to mere seconds. This significant speedup paves the way for more efficient subsequent processing steps. At the heart of our approach lies the DRSN model, which is specifically designed to rapidly classify the current tuning regions of the MG-Y laser.
The model is trained on an extensive dataset comprising control currents, output optical power measurements, and precisely labeled tuning regions. The DRSN architecture incorporates residual modules, which alleviate the degradation problem commonly encountered in deep neural networks, ensuring that the model's performance remains stable as it grows deeper. Furthermore, the introduction of Residual Shrinkage Building Units (RSBUs) within the DRSN model effectively suppresses noise and enhances the model's generalization capabilities, resulting in more robust classifications. Once the tuning regions are classified, we employ the Lagrange interpolation method to generate high-precision control parameter tables. This approach ensures that the resulting tables enable precise and stable wavelength tuning across the entire tuning range of the MG-Y laser. To validate the effectiveness of the MG-Y laser control parameter table constructed using the DRSN model in practical engineering applications, accuracy experiments were conducted first. The stability and accuracy of the laser output wavelength under the control of this parameter table were verified using an ATLS7503 laser and an AQ6151 wavelength meter with a measurement accuracy of 0.3 pm. The deviation between actual and target wavelengths was within 1 pm, with a standard deviation of 0.28 pm. Subsequently, a Fabry-Perot (F-P) etalon wavelength demodulation experiment was performed to verify the effectiveness of the control parameter table in a laboratory environment. The wavelength standard deviation of the 51 transmission peaks of the F-P etalon was within 1.8 pm. Finally, strain demodulation experiments were conducted on two strain FBGs, with errors consistently below 1 με across 21 different strain conditions, and the standard deviation was consistently below 0.6 με.
The results demonstrate that the control parameter table generated using the proposed method can be effectively applied to fiber-optic sensing systems, showing excellent practical utility. The proposed method excels in improving control parameter table generation efficiency and accuracy, meeting the high-precision requirements of optical fiber sensing demodulation applications. In the future, this method is expected to be further promoted and applied in a broader range of optical fiber sensing fields.
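The Lagrange-interpolation step that fills in the control parameter table within one DRSN-classified tuning region can be sketched as follows. The calibration points (`cur`, `wav`) below are hypothetical, not measured MG-Y values; within a single tuning region the wavelength varies smoothly with the control current, so the table entry for a target wavelength can be found by interpolating the current as a function of wavelength.

```python
# Sketch: Lagrange interpolation over a few calibration points inside one
# tuning region, used here to invert wavelength -> control current.
# All numeric values are hypothetical, for illustration only.

def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# hypothetical calibration inside one region: current (mA) -> wavelength (nm)
cur = [1.0, 2.0, 3.0, 4.0]
wav = [1549.80, 1549.92, 1550.05, 1550.19]

# invert by interpolating current as a function of wavelength
target = 1550.00
current = lagrange(target, wav, cur)
print(f"control current for {target} nm: {current:.3f} mA")
```

Repeating this lookup over a dense grid of target wavelengths, region by region, yields a quasi-continuous control parameter table.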
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0814004 (2024)
Laser Wireless Transmission System with Electric Output of 461 W
Fanglei YU, Zhaoran ZOU, Xiangxiang MENG, Yue PENG, and Jingang ZHANG
Laser wireless energy transmission is a new power supply technology that uses a high-energy laser beam as the energy carrier and photovoltaic cells for photoelectric conversion, enabling non-contact energy transmission in space. In scenarios with high air humidity and salinity, the traditional plug-in power supply method poses great safety risks, and the equipment is vulnerable to electromagnetic interference. A laser wireless energy transmission system can supply high-power electric energy to electrical equipment safely in the same working environment. The system consists of a transmitter and a receiver. In most applications, the volume and weight of the receiving end are strictly limited by the carrier, and the photoelectric efficiency of the receiving end of a high-power system will be far less than the value measured in the laboratory, owing to the low duty ratio of the photosensitive surface, the influence of the spot illumination distribution on the circuit efficiency, and the temperature rise caused by continuous illumination. To eliminate the influence of these three factors, conventional methods include photovoltaic cell arrangement, reception with large-aperture focusing lenses, and heat-sink and fan cooling, but shortcomings remain, such as complicated arrangement, mismatched shapes and a heavy receiving end. Aiming at the above problems, a high-power laser wireless energy transmission system is developed.
The transmitting end of the system consists of two 808 nm semiconductor lasers, an optical fiber coupling lens, a square-core optical fiber and an object-space telecentric projection lens. The transmitter uses the square-core fiber to homogenize the Gaussian beam. A projection lens is designed with a "positive-positive-negative" structure; its three lenses are all spherical elements made of JGS1 material. The front end of the lens is 60 mm away from the optical fiber port, and the total length of the system is 100 mm. The half-height of 0.5 mm corresponds to 209.86 mm and the half-height of 0.707 mm corresponds to 297.48 mm. When the object-space NA=0.17, the minimum luminous radius of the system lens is 10.32 mm. The geometric spot is within the diffraction limit, and the maximum relative distortion is 0.025%. A laser spot of 1 mm×1 mm can be projected to 25 m, where the spot size is 420 mm×420 mm. The lens barrel of the projection lens is made of aluminum alloy, with radiating fins added on the outside to facilitate overall heat dissipation. To improve the duty ratio of the photosensitive surface of a single photovoltaic cell, an integrated lens-photovoltaic cell packaging method is proposed: the light irradiating the electrodes and gaps around the photosensitive surface is focused onto the photosensitive surface by a lens made of optical plastic, which is then integrated with the cell by injection molding. The high-power laser wireless energy transmission system developed in this paper generates a high-power square uniform spot by beam shaping with the square-core fiber at the transmitter. A projection optical system is designed to enlarge the output end face of the square-core fiber and irradiate it onto the shape-matched 420 mm×420 mm photovoltaic panel, which reduces the difficulty of arranging the photovoltaic array at the receiver.
By using the lens-photovoltaic integrated packaging technology, 1 024 GaAs photovoltaic cells were integrated and packaged with the focusing lenses at the receiving end, and the duty cycle of a single photovoltaic cell was increased to 96.5%, 13.15% higher than that of the traditional photovoltaic cell packaging method, effectively solving the problem of the low duty cycle of the photosensitive surface. The experimental results show that when the incident laser power is 1 700 W, the electronic load receives 461 W of electric power, and the overall photoelectric conversion efficiency of the receiving terminal is 27.1%. This system can provide a solution for the wireless power supply of high-power loads in specific environments.
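The quoted overall efficiency of the receiving end follows directly from the two powers reported above, as a quick check:

```python
# Sketch: overall receiving-end photoelectric conversion efficiency is the
# electrical power delivered to the load divided by the incident laser power,
# using the figures reported in the abstract.

P_laser = 1700.0   # W, incident laser power
P_elec = 461.0     # W, electric power received by the electronic load
eta = P_elec / P_laser
print(f"overall photoelectric conversion efficiency: {eta:.1%}")
```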
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0814005 (2024)
Dual-pass Off-beam Quartz-enhanced Photoacoustic Spectroscopic Gas Sensor
Xin SUI, Yanming MA, Xiaoteng LIU, Lei ZHANG... and Chuantao ZHENG
The detection of trace gases is of great significance in the fields of fire alarms, industrial production, biomedicine and so on. Long-term and stable monitoring is needed for explosive, toxic and harmful gases. Acetylene (C2H2), as a characteristic gas produced in transformer overheating and discharge, is an important parameter for monitoring transformer faults. Optical gas sensors based on the Lambert-Beer law are well suited to real-time and long-term gas detection because of advantages such as high sensitivity, small drift and non-invasiveness. In 2002, Quartz-Enhanced Photoacoustic Spectroscopy (QEPAS) based on a Quartz Tuning Fork (QTF) was proposed. The quartz tuning fork has a high quality factor, a stable resonant frequency and a narrow resonance bandwidth, so it offers higher sensitivity and stronger immunity to environmental noise than traditional PAS. In addition, the small size and low cost of the QTF make a QEPAS system compact, portable and easy to integrate. To further improve the detection sensitivity of QEPAS systems, methods such as using acoustic micro-resonators (AmRs), customizing special-size QTFs and multi-channel detection have been widely used. Sensor systems based on QEPAS show excellent performance in gas detection and provide an excellent scheme for real-time and long-term monitoring of trace gases. However, part of the laser power is still lost due to partial irradiation on the QTF surface, which is not conducive to improving the detection sensitivity.
In order to reduce the difficulty of system assembly and optical path alignment, and to avoid the photothermal effect caused by laser beam irradiation on the surface of the tuning fork, an off-beam resonant tube configuration is used to isolate the detection beam from the quartz tuning fork, with two resonant tubes with central grooves placed on both sides of the front of the tuning fork. To further improve the detection sensitivity, the optical path is equipped with a right-angle prism to realize dual-pass measurement. The optical fiber collimator, resonant tubes, QTF, right-angle prism and stainless steel gas chamber are integrated, and the overall module size is 5.4 cm×4.0 cm×1.5 cm. Taking C2H2 as the target gas, the performance of the sensor system is evaluated, and the reliability of the miniature QEPAS gas sensor system is verified. First, the theory of laser absorption spectroscopy, the principle of wavelength modulation and the principle of photoacoustic spectroscopy are analyzed, and the relationship between the photoacoustic signal and the gas concentration is obtained. To derive the optimal resonant tube size and laser beam excitation position, the acoustic finite element method was used to simulate and optimize them in COMSOL. According to the simulation results, a stainless steel capillary tube 0.6 mm in diameter and 8.8 mm in length was micro-machined and assembled 0.6 mm away from the top of the QTF. The grooves cut in the center of each resonant tube are equal in width to the prong gap of the QTF, so that the sound field in the tube leaks out. The two resonant tubes are placed on both sides of the front of the QTF, with the grooves aligned with the prong gap of the QTF for sound detection.
The optical path is equipped with a right-angle prism, and the incident light from the fiber collimator is reflected twice by the prism to achieve dual-pass measurement, which enhances the sensitivity of the system. Compared with the traditional on-beam configuration, the difficulty of system assembly is greatly reduced, the photothermal effect is completely avoided, and the utilization of the laser is further improved. The optical fiber collimator, resonant tubes, QTF, right-angle prism and stainless steel gas chamber are integrated, and the overall size of the acoustic detection module is 5.4 cm×4.0 cm×1.5 cm, which meets the miniaturization standard. The experimental results show a good linear relationship between the photoacoustic signal and the gas concentration, with a goodness of fit of 0.991. The minimum detection limit and normalized noise equivalent absorption coefficient of the system are 11.6×10-6 and 6.7×10-9 W·cm-1·Hz-1/2, respectively. Finally, an Allan variance analysis was carried out. When the averaging time is 0.5 s, the minimum detection limit of the system is 10.22×10-6, and when the averaging time is 50 s, the minimum detection limit is 1.20×10-6. These results verify the reliability and stability of the miniature QEPAS gas sensor system, and provide a new idea for the development of portable C2H2 sensor systems.
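For white noise, the Allan-deviation analysis implies that the minimum detection limit scales as the inverse square root of the averaging time. A quick sanity check against the two reported operating points (a sketch of the scaling law, not the paper's full Allan analysis):

```python
import math

# Sketch: white-noise scaling of the minimum detection limit (MDL),
#     MDL(tau) = MDL(tau0) * sqrt(tau0 / tau).
# Extrapolating the reported 10.22 ppm at tau0 = 0.5 s to tau = 50 s
# predicts ~1.02 ppm; the measured 1.20 ppm is slightly higher, which is
# typical when drift starts to contribute at long averaging times.

def mdl_white_noise(mdl0, tau0, tau):
    return mdl0 * math.sqrt(tau0 / tau)

pred = mdl_white_noise(10.22, 0.5, 50.0)
print(f"predicted MDL at 50 s: {pred:.2f} ppm (reported: 1.20 ppm)")
```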
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0830001 (2024)
Carbon Dioxide Measurement Based on Off-axis Integrated Cavity Output Spectroscopy Technology
Juncheng LU, Lu GAO, Qiong WU, Wen LIU... and Jie SHAO
Carbon dioxide (CO2) accounts for about 0.04% of the atmospheric composition and is one of the major greenhouse gases. With the development of industrial society, anthropogenic CO2 emissions are increasing every year, which undoubtedly aggravates global warming. Therefore, monitoring the atmospheric CO2 concentration is of great significance for managing CO2 emissions scientifically. In this paper, a simple atmospheric CO2 detection device based on Off-axis Integrated Cavity Output Spectroscopy (OA-ICOS) was constructed using a 1.573 μm distributed feedback diode laser. First, the CO2 detection system was built and optimized. The absorption line of CO2 at 6 358.65 cm-1 with a line intensity of 1.732×10-23 cm-1/(molecule·cm-2) was selected, and the direct absorption signal of 400×10-6 CO2 in the sealed cavity was measured; the experimental results show that an effective optical path of about 2.4 km is realized with a mirror reflectivity of 99.98% and a cavity length of 60 cm. A large amount of residual cavity-mode noise is present in a single acquired transmission signal, and the system noise can be reduced by averaging. The experimental results show that the optimal averaging number is 1 000; at this averaging number, the relative error of the absorption area averages 7.67×10-3 and the STD is 5.87×10-3. Second, the performance of the CO2 detection system was analyzed.
By measuring CO2 from 400 to 2 000×10-6 at intervals of 200×10-6, the linearity of the OA-ICOS system was found to be 0.996, and the maximum STD of the absorption area over the volume fractions was 3.23×10-3 (1 200×10-6) while the minimum was 2.34×10-3 (1 800×10-6). By measuring the CO2 direct absorption signal in the sealed chamber over a long period, the optimal integration time of the system was obtained from the Allan curve to be 98.3 s, with a minimum detectable limit of 0.63×10-6. The system response time characterizes how quickly the system responds to changes in the measured value and is an important performance parameter. To obtain it, the gas control valve of the mass flow meter was repeatedly switched to alternately introduce 2 000×10-6 CO2 and N2, with a ventilation time of 180 s. The experimental results showed that the average rise (CO2 introduced) response time was 50 s, while the fall (N2 introduced) response time was >180 s. The fall time was longer than the rise time because CO2 is denser than N2 and deposits at the bottom of the cavity, leaving residual CO2 (~300×10-6) inside the cavity after 180 s of N2 flushing. Finally, the OA-ICOS system was applied to indoor CO2 measurement. The results of 96 h of continuous indoor CO2 detection show that the system can reflect the activities of laboratory personnel well, and even details of indoor CO2 changes, such as the intermittent presence of personnel near the experimental platform carrying out other experiments, can be monitored, which verifies the reliability and stability of the measurement device and provides a practical scientific basis for the management of indoor CO2 emissions.
In conclusion, the CO2 detection device proposed in this paper is characterized by a simple structure, high sensitivity and robustness, is suitable for CO2 detection at atmospheric background levels, and lays a foundation for the further development of atmospheric CO2 detection instruments.
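The reported effective optical path can be cross-checked against the common ideal-cavity estimate L_eff ≈ L/(1−R). This is a textbook approximation, not the calibration procedure used in the paper, and it neglects extra intracavity losses, so it should only agree in order of magnitude:

```python
# Sketch: ideal effective path length of a high-finesse cavity,
#     L_eff ~ L / (1 - R).
# With the reported mirror reflectivity R = 99.98% and cavity length
# L = 0.6 m, the lossless estimate is 3 km; the measured ~2.4 km is of
# the same order, the shortfall being attributable to additional losses
# (scattering, absorption, off-axis alignment).

R = 0.9998       # mirror reflectivity
L = 0.6          # cavity length, m
L_eff = L / (1.0 - R)
print(f"ideal effective path: {L_eff / 1000:.1f} km (measured: ~2.4 km)")
```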
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0830002 (2024)
Fiber Optics and Optical Communications
High Coupling Efficiency Mode Field Adapter with Low NA LMA Fiber
Feng XIONG, Wei MU, Yang WANG, Yunliang MA... and Xiaobei ZHANG
Transverse Mode Instability (TMI) and Nonlinear Optical Effects (NLE) prevent high-power all-fiber lasers from further power scaling. Low Numerical Aperture (NA) Large Mode Area (LMA) active fibers can maintain large effective areas while suppressing Higher-Order Modes (HOMs) and increasing the thresholds of NLE and TMI. Additionally, it is easier to match passive components to a low-NA LMA fiber because its structure is simpler and consistent with the step-index fiber structure. Mode Field Adapters (MFAs) match the mode field between LMA fibers and Single-Mode Fibers (SMFs) and are crucial passive components in fiber laser systems. The insertion loss and beam quality of MFAs significantly affect the power scaling and beam quality of laser systems. In this paper, an MFA was fabricated based on a tapered low-NA LMA fiber with NA=0.05 and Thermally Expanded Core (TEC) fibers, maintaining high coupling efficiency while improving the output beam quality. The impact of mode field mismatch, core offset, and angular misalignment between LMA fibers with different NAs and the SMF on the coupling efficiency and beam quality of the MFA was studied theoretically and experimentally. First, theoretical models for the TEC and tapered LMA fibers were built based on the diffusion equation and the adiabatic criterion. The mode field distribution and propagation characteristics of the TEC and tapered LMA fibers were simulated with the beam propagation method to optimize the device structure and fabrication parameters.
Second, a simulation model was created to analyze the insertion loss and beam quality degradation caused by mode field mismatch, core offset, and angular misalignment between LMA fibers with different NAs and SMFs. The simulation results show that reducing the NA of LMA fibers helps suppress HOMs and reduces the insertion loss and beam quality degradation of MFAs caused by core offset and angular misalignment. In the experiments, the SMF was heated with an H2-O2 flame to expand its Mode Field Diameter (MFD) without causing transmission loss. The MFD of the TEC SMF under different heating times was measured using the far-field method. Two MFAs were prepared from LMA fibers (25/400 μm) with NAs of 0.06 and 0.05, respectively, spliced to the TEC SMF (5.3/125 μm, NA=0.14). The insertion loss and beam quality factor (M2) of the devices were measured. The forward insertion loss decreased from 4.50 dB to 0.29 dB, and the difference in bi-directional insertion loss decreased from 2.50 dB to 0.19 dB, when the MFD of the LMA fiber (NA=0.06) was matched with that of the SMF. The experimental results show that matching the MFDs of the SMF and LMA fibers effectively reduces the insertion loss and the difference in bi-directional insertion loss. The taper ratios of the LMA fibers are both 2, and the heating times for the SMF are 25 min and 20 min, respectively, when the MFD is matched between the LMA fibers with NAs of 0.05 and 0.06 and the SMF. Because the MFDs of the LMA fibers and SMFs are matched, only the unavoidable core offset and angular misalignment during the fusion process, which affect the coupling efficiency and beam quality, are considered; these misalignments are random variables. The impact of misalignment on beam quality and coupling efficiency in LMA fibers with different NAs was reflected indirectly by performing multiple measurements and calculating the mean and standard deviation.
The forward insertion loss decreases to 0.29 dB with a standard deviation of 0.085, and the bi-directional insertion loss difference is 0.19 dB with a standard deviation of 0.077, when the MFD is matched between the LMA (NA=0.06) fiber and the SMF. The forward insertion loss decreases to 0.23 dB with a standard deviation of 0.024, and the bi-directional insertion loss difference is 0.06 dB with a standard deviation of 0.011, when the MFD is matched between the LMA (NA=0.05) fiber and the SMF. Cladding modes cause the difference in bi-directional insertion loss, since HOMs are not stripped by the cladding light strippers and do not propagate in the SMF; this difference is therefore positively correlated with the beam quality of the MFA. When the MFDs of the SMF and the LMA fiber are not matched, the difference in bi-directional insertion loss for the MFAs based on LMA fibers with NAs of 0.05 and 0.06 is 1.76 dB and 2.50 dB, and the M2 is 1.88 and 2.15, respectively. When the MFDs are matched, this difference is 0.06 dB and 0.19 dB, and the M2 value is 1.15 with a standard deviation of 0.017 and 1.26 with a standard deviation of 0.092, respectively. The experimental results show that the LMA fiber with NA=0.05 carries fewer HOMs and suffers less beam quality degradation and insertion loss from core offset and angular misalignment during splicing, resulting in higher beam quality and lower insertion loss of the MFA. These conclusions are consistent with the theoretical analysis and simulation results. The MFA based on the LMA fiber with NA=0.05 has promising application prospects in single-mode-output high-power fiber lasers due to its low insertion loss and high beam quality.
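The role of mode-field matching in the splice losses above can be illustrated with the standard Gaussian overlap-integral formula for the coupling efficiency between two fundamental modes of radii w1 and w2. The mode radii used below are illustrative assumptions, not the measured MFDs from this work:

```python
import math

# Sketch: for two aligned fibers whose fundamental modes are approximated
# as Gaussians with 1/e^2 radii w1 and w2, the coupling efficiency is
#     eta = (2*w1*w2 / (w1**2 + w2**2))**2,
# and the splice loss in dB is -10*log10(eta). This motivates thermally
# expanding the SMF core until its mode field matches the tapered LMA fiber.

def coupling_loss_dB(w1, w2):
    eta = (2.0 * w1 * w2 / (w1**2 + w2**2)) ** 2
    return -10.0 * math.log10(eta)

# hypothetical mode radii in micrometers
print(f"mismatched (5 um vs 10 um):  {coupling_loss_dB(5.0, 10.0):.2f} dB")
print(f"near-matched (9.5 vs 10 um): {coupling_loss_dB(9.5, 10.0):.3f} dB")
```

Even a 2:1 radius mismatch costs nearly 2 dB, while a near-matched pair loses only hundredths of a dB, consistent with the large drop in forward insertion loss reported once the MFDs are matched.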
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0806001 (2024)
Fourier Series Based Grating Wavelength Signal Reconstruction and Accurate Vibration Displacement Measurement
Cui ZHANG, Rui LUO, Yinjie ZHANG, Sikai JIA, and Weibing GAN
During operation of a water turbine, the vibration of its mechanical structure may reflect its operating condition. If the peak value of the vibration displacement exceeds a certain range, the water turbine may malfunction, causing serious accidents resulting in casualties and property damage. Therefore, real-time monitoring of the vibration displacement of water turbines is of great importance to ensure their safe operation. Existing vibration displacement calculation methods do not consider the influence of vibration frequency, resulting in significant errors when measuring complex vibration displacements. This paper proposes a grating wavelength conversion method based on the Fourier series. First, the wavelength variation fed back by the fiber-optic grating sensor is decomposed by frequency into multiple wavelength components, each containing only a single frequency. The vibration acceleration corresponding to each wavelength component is then calculated from the sensitivity of the sensor, the vibration displacement of each component is obtained by double integration of its acceleration, and these displacements are summed to obtain the total vibration displacement, enabling displacement measurement for complex vibrations.
To improve the accuracy of vibration displacement measurement, we conducted calibration experiments on the sensitivity outside the stable operating frequency band of the fiber-optic grating sensor, because the stable operating band covers only part of the relationship between vibration frequency and sensitivity; the relationship between vibration frequency and wavelength change over this part of the spectrum was determined by segmented fitting. Two experiments, a simple harmonic vibration experiment and a complex vibration experiment, were designed to compare the ability of this method to calculate vibration displacement with that of the traditional method. The harmonic vibration experiment provides single-frequency vibration excitation from a vibration table. In the complex vibration experiment, the excitation generated by the vibration table contains two different vibration frequencies with a certain phase difference between them. The wavelength change of the fiber-optic grating sensor is recorded, and the vibration acceleration and displacement are calculated using both methods. In the harmonic vibration experiment, the maximum vibration acceleration error of the traditional method is 4.6%, and its maximum vibration displacement error is 4.6%; the maximum vibration acceleration error of the proposed method is 2.1%, and its maximum vibration displacement error is 3.74%. In the complex vibration experiment, the traditional method cannot accurately measure the vibration displacement, while the maximum error of the proposed method is 8.45%. The harmonic vibration results show that both the traditional method and the Fourier-series-based grating wavelength conversion method can accurately measure vibration displacement when only a single-frequency vibration is present.
The results of the complex vibration experiments show that when vibrations of multiple frequencies are present, the Fourier-series-based grating wavelength conversion method can accurately decompose and reconstruct the grating wavelength signal, offering higher accuracy and stronger stability in measuring multi-frequency vibration displacement. In addition, we used both methods to measure the peak-to-peak vibration displacement at the stator end and upper frame of a water turbine and compared the results with those of electrical sensors. The experiments show that the vibration at the stator end is dominated by a single component, and the peak-to-peak displacement measurements of the methods are similar. The vibration at the upper frame contains complex components: the traditional method measures a peak-to-peak displacement of less than 5 μm, while both the electrical sensors and the Fourier-series-based grating wavelength conversion method measure around 45 μm. This indicates that the Fourier-series-based grating wavelength conversion method has practical application value.
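The decompose-convert-integrate pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the frequency-dependent sensitivity calibration is assumed to be supplied by the caller as a function.

```python
import numpy as np

def displacement_from_wavelength(dlam, fs, sensitivity):
    """Sketch of the Fourier-series pipeline: split the wavelength-shift
    signal into single-frequency components, convert each to acceleration
    via the (frequency-dependent) sensor sensitivity, then double-integrate
    in the frequency domain and sum to recover the total displacement.

    dlam        : sampled wavelength shift (e.g. in pm)
    fs          : sampling rate in Hz
    sensitivity : callable f -> sensitivity in pm per (m/s^2), assumed to
                  come from the segmented-fit calibration described above
    """
    n = len(dlam)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(dlam)
    disp_spec = np.zeros_like(spec)
    for k in range(1, len(freqs)):                 # skip the DC bin
        accel_k = spec[k] / sensitivity(freqs[k])  # pm -> m/s^2
        omega = 2.0 * np.pi * freqs[k]
        disp_spec[k] = -accel_k / omega**2         # 1/(j*omega)^2 = -1/omega^2
    return np.fft.irfft(disp_spec, n)
```

For a pure tone x(t) = A sin(2πf₀t) the acceleration is −(2πf₀)²x(t), so this sketch reconstructs the displacement exactly when f₀ falls on an FFT bin; for multi-frequency excitation the per-bin contributions sum automatically in the inverse transform.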
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0806002 (2024)
Instrumentation, Measurement and Metrology
Station Planning Method for Multi-sensor System Collaborative Measurement Field
Xuezhu LIN, Dexuan WANG, Xihong FU, Fan YANG... and Lijuan LI
With the continuous development and technological advances in the modern industrial field, large component measurement techniques are becoming increasingly important in various fields. Particularly in areas such as large machinery and equipment and aerospace, accurately measuring and evaluating the dimensions and shapes of parts, components, and systems is critical to ensuring product quality, meeting design requirements, and ensuring safety. Station planning plays a key role in large component measurement tasks and directly affects the overall accuracy and efficiency of the entire measurement task. Currently, station planning for large component measurement often relies on experienced surveyors, which increases the time and labor cost of measurement and makes the results unstable. Moreover, traditional station planning methods for large component measurement are often time-consuming and inefficient, lack a theoretical basis and evaluation methods, and are prone to problems such as a large number of stations, frequent station transfers, and low measurement efficiency, which cannot meet the modern manufacturing industry's need for fast and efficient measurement. For station planning of large-component multi-sensor systems, the diversity of measurement accessibility models and the imbalance of multi-system measurement accuracy mean that combined measurement station placement relies heavily on the surveyor's experience and repeated trials to obtain suitable stations.
To solve this problem, this paper proposes a combined measurement station planning method for multi-sensor systems. Firstly, considering the tooling occlusion issue, based on the combined measurement accessibility model in the collaborative measurement field, we establish an initial-value solving model for tooling-affected station positions using the Remora optimization algorithm; this model calculates the initial values of the measurement stations in the combined measurement system. Secondly, addressing the precision constraint issue, we establish a collaborative measurement accuracy model and formulate an optimization objective function that minimizes the weighted residuals of the observation data and the vector angular measurement errors, optimizing the scaling factor to achieve the best accuracy of the station coordinates. Finally, a target simulator that must meet initial position and attitude assembly and adjustment accuracy requirements is taken as an example, and a combined measurement station planning experiment was conducted. The root mean square error of the measurement data after optimization is 0.032 mm. Compared with the measurement planning before optimization, the position measurement accuracy increased by 34%, and the angle measurement accuracy increased by 9.5%. This method improves the speed and precision of detection as well as the station planning efficiency for components, parts, and systems of large-scale structures, and offers a valuable reference for further research and applications in the field of measurement.
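The figures of merit quoted above, the root mean square error of the optimized measurement data and the weighted objective combining observation residuals with angular errors, can be written down directly. This is a generic sketch; the weights, point sets and units are illustrative, not the paper's data.

```python
import numpy as np

def rmse(measured, reference):
    """Root-mean-square error between measured and reference 3-D point
    sets of shape (n, 3), in the same units (mm in the paper)."""
    diffs = np.asarray(measured) - np.asarray(reference)
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))

def weighted_objective(obs_residuals, angle_errors, w_obs=1.0, w_ang=1.0):
    """Hypothetical form of the optimization objective described above:
    a weighted sum of squared observation residuals and vector angular
    measurement errors (the weights are assumed, not from the paper)."""
    r = np.asarray(obs_residuals, dtype=float)
    a = np.asarray(angle_errors, dtype=float)
    return float(w_obs * np.sum(r ** 2) + w_ang * np.sum(a ** 2))
```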
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0812001 (2024)
Compensation Method for Crosstalk and Chromatic Aberration Based on Color Orthogonal Fringe Patterns
Feng MA, Yubo NI, Zhaozong MENG, Nan GAO... and Zonghua ZHANG
Fringe projection technology has attracted widespread attention in academic research and engineering applications because of its advantages of non-contact operation, high precision, and high efficiency. On this basis, to enhance the measurement efficiency of the system, multi-channel fringe projection technology has emerged. It encodes sinusoidal fringe patterns in the red, green, and blue channels, significantly reducing the number of images captured by the camera. However, the use of multiple channels introduces both crosstalk and chromatic aberration into the system, which become critical factors affecting measurement accuracy. It is therefore crucial to compensate for both. Existing methods mostly correct crosstalk and chromatic aberration separately, which requires capturing multiple images and involves multiple procedures and complex operations. To solve this problem, this paper proposes a pixel-by-pixel correction method for crosstalk and chromatic aberration based on orthogonal color fringes. Firstly, the fringe intensity information of each channel is extracted by projecting orthogonal color fringe patterns. A mathematical model is then established relating the crosstalk coefficients to the average intensity and background intensity, and the fringe intensity is corrected accordingly, eliminating the crosstalk. Secondly, the unwrapped phase is computed for each channel in both the horizontal and vertical directions. On this basis, pixel matching relationships between the color channels are constructed, and chromatic aberration correction is accomplished through interpolation.
Using a color camera and projector, a fringe projection system was constructed to test the proposed method. Measurements were performed on two objects, a plane and a standard step, showing that the proposed method significantly enhances measurement accuracy and efficiency and outperforms traditional methods in both respects. When measuring the plane, the proposed method achieves a measurement precision of 0.040 mm, an improvement of 0.029 mm over the 0.069 mm of the traditional method. For the standard step, the measurement error is reduced from 0.647 mm to 0.031 mm; compared with the measurement precision of 0.045 mm achieved by the traditional method, the proposed method improves the measurement precision by 0.014 mm. The number of fringe patterns that must be captured is halved compared with traditional methods. Additionally, the measurement error distribution is narrower, indicating higher stability. Therefore, the proposed method can effectively and stably improve the measurement accuracy and efficiency of multi-channel fringe projection technology.
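Once crosstalk coefficients have been estimated, the per-pixel decoupling step can be sketched as a 3×3 matrix inverse applied along the colour axis. This is an illustrative linear-coupling model, not the paper's exact formulation.

```python
import numpy as np

def decouple_channels(captured, coupling):
    """Undo inter-channel crosstalk for an RGB fringe image.

    captured : array of shape (..., 3) holding the captured R, G, B
               fringe intensities at each pixel
    coupling : assumed 3x3 matrix; coupling[i, j] is the fraction of
               projected channel j observed in camera channel i
               (diagonal entries are the direct responses)

    Returns the crosstalk-corrected intensities, obtained by applying
    the inverse coupling matrix to every pixel's colour vector.
    """
    inv = np.linalg.inv(np.asarray(coupling, dtype=float))
    return np.asarray(captured, dtype=float) @ inv.T
```

Under this model, if the camera observes c = C·p for projected intensities p, then applying C⁻¹ restores p exactly; in practice C is estimated per pixel from the average and background intensities as described above.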
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0812002 (2024)
Optical Device
Investigation of Apodized Chirped Grating on Thin-film Lithium Niobate
Jiaxuan LONG, Kan WU, Minglu CAI, Xujia ZHANG, and Jianping CHEN
The fields of high-performance optical signal processing, optical computing, microwave photonics, and high-speed optical communications are important directions for photonics research. In these systems, dispersion compensation is a key technology for eliminating signal distortion and improving signal quality. There are three typical dispersion compensation methods. The first is dispersion-compensating optical fibre, whose advantage is that very large dispersion values can be obtained and both normal and anomalous dispersion can be compensated, but it is usually bulky and introduces additional nonlinear accumulation. The second is the digital equalizer, which usually provides dispersion compensation for Wavelength-Division-Multiplexing (WDM) systems and is highly flexible with a stable amplitude response, but is complex to design and costly. The last is the fibre Bragg grating, whose greatest advantages are wavelength-coding characteristics and tunability, but it demands relatively high process accuracy. Although all three methods can deliver large dispersion values and good dispersion compensation, for integrated on-chip photonic systems an integrated dispersion compensation scheme is more desirable to match the integration needs of the on-chip system. Recently, lithium niobate has received much attention due to its high-quality material properties.
Its ultra-wide transmission band and, compared with other materials, excellent electro-optic coefficient and second-order nonlinear coefficient make it possible to realize active and passive devices with various functions. As research on thin-film lithium niobate intensifies, soliton lasers and pulse compression on lithium niobate have also been studied more extensively, and dispersion compensation based on thin-film lithium niobate becomes particularly important. In this paper, we explore the application of chirped Bragg gratings for dispersion compensation on the lithium niobate platform, focusing on four critical properties: central wavelength, reflectance spectrum bandwidth, Group Delay Dispersion (GDD), and group delay ripple. Initially, we simulate unapodized chirped Bragg gratings, observing significant group delay ripple due to abrupt changes in the grating envelope. To address this issue, we investigate apodized chirped gratings, which gradually modulate the effective index along the etching depth. Parameters including waveguide initial and ending width (Wb and We), etching depth, grating initial and ending period, grating number, apodization ratio, and Gaussian parameter are systematically scanned. We simulate symmetric linear, asymmetric linear, and Gaussian apodization functions, selecting the first two to maintain the linearity of the group delay spectrum, since a linear apodization function yields a linear chirp that does not affect that linearity. Gaussian apodization offers benefits such as a smooth spectral profile and mild ripple suppression. Simulation results indicate that when the waveguide initial and ending widths are both 1 μm, asymmetric linear apodization yields a significant group delay dispersion (0.75 ps/nm) with a corresponding group delay ripple of 0.26 ps, at the expense of bandwidth (9 nm).
For the other two apodization functions, optimal group delay spectra are achieved with Wb=1 μm and We=1.2 μm, yielding GDDs of 0.21 ps/nm and 0.19 ps/nm, respectively, with a ripple of 0.21 ps. Furthermore, simulations demonstrate improved results with an etching depth of 0.3~0.4 μm, and enhanced ripple suppression with larger apodization ratios, albeit with decreased bandwidth. Following the simulations, we fabricate apodized chirped gratings based on these results, opting for the Gaussian apodization function. Experimental results show a 3 dB bandwidth of 21.2 nm, somewhat lower than the simulated value (41.8 nm), and a group delay dispersion of 0.138 5 ps/nm, closely matching the simulated value (0.136 9 ps/nm). The discrepancies in the measured reflectance spectrum are attributed to fabrication errors, which affect the bandwidth, ripple, and suppression ratio. In conclusion, this paper investigates apodized chirped gratings based on thin-film lithium niobate, analyses the influence of each grating parameter on the four key attributes, and summarizes the optimal selection range of the individual parameters as well as their trade-offs. This work lays a research foundation for dispersion compensation technology based on thin-film lithium niobate.
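Two of the headline attributes, the group delay and its slope versus wavelength (the GDD in ps/nm), can be recovered from a simulated or measured reflection phase with a short post-processing step. This is a generic sketch under standard conventions (τ = −dφ/dω), not the authors' tooling.

```python
import numpy as np

def group_delay_and_gdd(wavelength_nm, phase_rad):
    """Compute the group delay tau = -dphi/domega (in ps) and the GDD,
    taken here as the linear slope of tau versus wavelength (ps/nm)."""
    c = 299792.458                                        # speed of light, nm/ps
    omega = 2.0 * np.pi * c / np.asarray(wavelength_nm)   # rad/ps
    tau = -np.gradient(phase_rad, omega)                  # group delay, ps
    gdd = np.polyfit(wavelength_nm, tau, 1)[0]            # slope of tau(lambda)
    return tau, gdd
```

The group delay ripple is then simply the deviation of τ(λ) from its linear fit over the reflection band.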
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0823001 (2024)
Design of Flexible and Transparent Metamaterial Absorber with Broadband
Xiaojun HUANG, Lina GAO, Miao CAO, Wang YAO, and Helin YANG
Electromagnetic metamaterial absorbers provide indispensable technical support and important application value for communication, radio, radar, stealth technology and medical imaging due to their excellent electromagnetic wave regulation performance. Current research targets wideband absorbers, essential for handling multiple bands and adapting to complex electromagnetic environments. Traditional absorbers face challenges such as complex structures, low transmittance and inflexibility, limiting technological progress. Recently, transparent flexible metamaterial absorbers based on conductive films have played a great role in improving the absorption bandwidth. Nevertheless, most of the transparent flexible metamaterial absorbers proposed so far still do not meet the size and thickness limitations of absorbers in special application scenarios such as mobile communication devices and satellite communication antennas. Therefore, the study of transparent flexible absorbers with high absorption efficiency, thin thickness, light weight and wide absorption band is of great significance for practical application. In this study, a transparent, flexible, small-size and low-profile metamaterial absorber is designed based on indium tin oxide conductive film and polyvinyl chloride. The absorber is composed of three parts: a patterned conductive film layer at the top, a dielectric layer in the middle, and a base layer completely covered by a low-sheet-resistance conductive film. Firstly, applying the impedance matching principle, the CST Studio Suite 2021 simulation software is used to calculate the absorptivity and S-parameters.
The pattern of the top conductive film is adjusted iteratively to determine the final top periodic unit pattern. Secondly, the other structural parameters are optimized by parameter scanning to achieve the best absorption, and the influence of changes in the key parameters on the absorptivity is analyzed. In addition, the absorption performance of the 10×10 array structure at different bending angles is verified and compared with that of the planar case. At the same time, the absorptivity of the unit structure is simulated under different polarization angles and incident angles. Moreover, in order to evaluate the effect of systematic geometric parameter errors that may arise in the actual machining of the designed structure, a robustness analysis of the structural geometric parameters is carried out. The results show that the absorber has excellent structural stability and can be manufactured for practical engineering applications. Finally, the physical mechanism of the wideband absorption is systematically analyzed through the power loss density, the surface current distribution and the electromagnetic field energy distribution. Electromagnetic simulation software is used for full-wave simulation, and the results demonstrate that within the frequency range of 8.22 GHz to 22.76 GHz the absorptivity of the metamaterial exceeds 90%, achieving an impressive absorption bandwidth of 14.54 GHz and a relative absorption bandwidth of 93.9%. In addition, the normalized impedance of the proposed absorber is calculated from the S-parameters in the simulation results; the real part of the normalized impedance is close to 1 and the imaginary part close to 0 in the operating band, showing that the design achieves impedance matching with free space and a near-perfect absorption effect.
In order to obtain the best absorption effect, the structural parameters of the absorber (slit width w, arrow cross width b, dielectric layer thickness tPVC and surface conductive film sheet resistance R) are scanned to determine their specific values. At the same time, the absorptivity of the 10×10 array is simulated under curved and planar conditions; the results show that the absorber maintains excellent absorption performance even when bent. As the polarization angle is adjusted in the range of 0°~45°, the absorptivity does not change, showing excellent polarization-insensitive characteristics. Notably, when the incidence angle of the electromagnetic wave is varied from 0° to 60°, the results show that in TE mode the absorptivity remains above 75% at an oblique incidence angle of 45°, while in TM mode it remains above 75% even at an incidence angle of 60°. Analysis of the surface current distribution shows that the surface conductive film is an important source of electromagnetic resonance: the current loop formed between the top and bottom layers causes magnetic resonance and thus magnetic loss, while the current loop formed at the top layer causes electric resonance and thus electric loss. At the same time, the magnetic field energy is larger than the electric field energy at the same scale, indicating that the absorber is dominated by magnetic resonance. Based on the impedance matching principle, this study proposes an original reflective metamaterial absorber with broadband microwave absorption, high flexibility and high transparency. Notably, the absorber is insensitive to electromagnetic wave polarization and maintains excellent absorption performance at incidence angles below 60°, covering most application scenarios. Moreover, excellent absorption properties are maintained under conformal conditions.
The proposed electromagnetic metamaterial absorber holds significant potential for application in medical devices, aerospace technology, and radar stealth technology.
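The absorption and bandwidth figures above follow from standard S-parameter relations, which can be checked numerically. This is a generic sketch; the sample values in the usage note reproduce the quoted band edges.

```python
import numpy as np

def absorptivity(s11, s21=0.0):
    """A(f) = 1 - |S11|^2 - |S21|^2. With a continuous conductive-film
    backplane the transmission S21 is ~0, so A reduces to 1 - |S11|^2."""
    return 1.0 - np.abs(s11) ** 2 - np.abs(s21) ** 2

def relative_bandwidth(f_low_ghz, f_high_ghz):
    """Fractional bandwidth 2*(fH - fL)/(fH + fL) of the absorption band."""
    return 2.0 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz)
```

With the quoted band edges of 8.22 GHz and 22.76 GHz this gives a relative bandwidth of about 93.9%, matching the abstract; an absorber with |S11|² = 0.1 and negligible transmission absorbs 90% of the incident power.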
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0823002 (2024)
Design and Fabrication of O-band Silicon-based Silicon Dioxide Dense Wavelength-division Multiplexing AWG
Feng HAN, Jiashun ZHANG, Liangliang WANG, Pengwei CUI... and Tianhong ZHOU
The O-band wavelength division multiplexer is a key component for high-speed interconnection in data centers. Thin-film filters and arrayed waveguide gratings are the two commonly used technical solutions. The silica-based arrayed waveguide grating wavelength division multiplexer has the advantages of low loss and integrability, and has become the main wavelength division technology for data centers. This article adopts a silica-based optical waveguide material with a relative refractive index difference of 0.75%. Based on the diffraction equation, an O-band, 48-channel, 120 GHz channel-spacing flat-top dense wavelength division multiplexing arrayed waveguide grating chip is designed, with a single-mode waveguide cross-section of 6 μm×6 μm. Using the beam propagation method, the effective refractive index of the slab waveguide at the center wavelength was calculated to be 1.456 4, the effective refractive index of the array waveguides 1.454 4, and the group refractive index 1.474 7; the spacing between the array waveguides was selected to be 8 μm. The output waveguide spacing is 26 μm, the diffraction order is 31, and the length difference between adjacent array waveguides is 27.708 μm. The focal length of the Rowland circle is 14 256.97 μm, the number of array waveguides is 401, and the designed chip size is 4.4 cm×3 cm. The AWG is fabricated with a planar lightwave circuit process: the silicon substrate undergoes thermal oxidation at 1 050 ℃ to form a 20 μm-thick SiO2 lower cladding, followed by growth of a 6 μm GeO2-SiO2 core layer using Plasma Enhanced Chemical Vapor Deposition (PECVD) technology.
Contact exposure lithography and inductively coupled plasma etching are used to achieve good pattern transfer. Subsequently, a 20 μm borophosphosilicate glass upper cladding is formed by PECVD, whose refractive index is consistent with that of the lower SiO2 cladding. The wafer is cut, and the end face is polished to an angle of 8° to reduce return loss. The AWG chip is mounted on an alloy rack, and a slot is cut at a specific part of the input Rowland circle of the AWG. The input and output waveguides are coupled with fiber arrays. A metal screw on the alloy rack is adjusted to place the central wavelength on the International Telecommunication Union (ITU) grid. The screw contracts or extends with temperature changes, pushing the input Rowland circle up and down to compensate for the temperature-induced drift of the AWG chip's response spectrum; a stable response spectrum is achieved within the temperature range of -5 ℃ to 65 ℃. Using an O-band tunable laser, a polarization controller and a power detector, the output spectrum of the packaged AWG module was tested. The insertion loss was between -5.31 dB and -6.59 dB, with a channel spacing of 120 GHz. The 1 dB and 3 dB bandwidths were 0.41 nm and 0.55 nm, respectively. Adjacent and non-adjacent crosstalk were 29.4 dB and 29.2 dB, respectively, and the polarization-dependent loss was less than 0.67 dB. Using a temperature controller to change the ambient temperature of the AWG module, within the range of -5 ℃ to 65 ℃ the center wavelength drift was reduced from 7 pm/℃ to 0.6 pm/℃, demonstrating good temperature stability. Using a bit-error-rate analyzer, a lithium niobate modulator and a sampling oscilloscope, high-speed signal transmission through the AWG module was carried out with Non-Return-to-Zero (NRZ) and 4-level Pulse Amplitude Modulation (PAM-4) signals.
The results showed that the eye diagrams of the 26.56 Gbps NRZ and 53.12 Gbps PAM-4 transmitted signals were clear, with extinction ratios greater than 5.5 dB and 3.6 dB, respectively, and the total transmission capacity of the 48 channels reached 2.4 Tbps.
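Several of the design numbers above are linked by the basic AWG grating relations, which the following sketch checks using standard formulas with the abstract's values as inputs (the 1.3 μm centre wavelength is inferred from the O band, not stated explicitly).

```python
def awg_path_difference(m, lam_c_um, n_eff):
    """Adjacent-arm length increment dL (um) from the AWG grating
    equation n_eff * dL = m * lambda_c, with lambda_c in um."""
    return m * lam_c_um / n_eff

def channel_spacing_nm(df_ghz, lam_nm):
    """Convert a frequency grid spacing to wavelength spacing at lam:
    dlam = lam^2 * df / c."""
    c_nm_ghz = 2.99792458e8   # speed of light expressed in nm*GHz
    return lam_nm ** 2 * df_ghz / c_nm_ghz
```

With diffraction order m = 31, array-waveguide effective index 1.454 4 and a 1.3 μm centre wavelength, this reproduces the quoted ΔL ≈ 27.708 μm; a 120 GHz grid corresponds to roughly 0.68 nm channel spacing in the O band, consistent with the quoted 0.41 nm and 0.55 nm passband widths sitting inside one channel.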
Acta Photonica Sinica
- Publication Date: Aug. 25, 2024
- Vol. 53, Issue 8, 0823003 (2024)