
Underwater Single-pixel Imaging Method Based on Object Search and Detail Enhancement
Yifan CHEN, Zhe SUN, and Xuelong LI
In the realm of underwater imaging, current Single-Pixel Imaging (SPI) technologies face a substantial challenge when deployed in intricate optical environments. Conventional methods predominantly focus on reconstructing the overall representation of the object, which inherently restricts their ability to optimally restore and emphasize the minute details within the image. This limitation has a profound impact on the overall quality and resolution of the reconstructed images, particularly in scenarios where precise analysis and interpretation require high fidelity. In light of this pressing issue, we propose a methodological approach for underwater single-pixel imaging that integrates two pivotal mechanisms: object search and detail enhancement. The core of the proposed method is bifurcated into two main objectives. Initially, the object search component deploys intelligent algorithms that meticulously analyze the fluctuations in pixel intensities across rows and columns within the reconstructed image. In this way, it discriminates between the object area and its surrounding background, effectively singling out and amplifying the object signal while simultaneously attenuating background noise. This strategic isolation significantly enhances the contrast and visual prominence of the targeted object. On the other hand, the detail enhancement facet of our methodology harnesses state-of-the-art machine learning techniques, specifically leveraging a part-based model embedded within a Convolutional Neural Network (CNN) architecture. This model specializes in discerning and learning the complex, fine-grained features encapsulated within the collected light intensity data. Upon extracting these learned attributes, the methodology proceeds to refine and accentuate the detailed aspects of the object within the reconstructed image, thereby elevating its overall resolution and sharpness. To rigorously substantiate the dependability and efficacy of our technique, we conducted an extensive series of experiments in both free-space and underwater settings. During the preliminary experimental phase, we concentrated on five distinct alphabetical objects, “I”, “O”, “P”, “E”, and “N”, comparing the performance of our method against the Traditional Ghost Imaging (TGI) and Differential Ghost Imaging (DGI) methodologies. We carried out meticulous measurements of the Contrast-to-Noise Ratio (CNR) and spatial resolution of the reconstructed images, as well as closely examining grayscale values at specific points such as the slits within the letters “O” and “P”. The experimental results show that, under free-space conditions, the proposed method surpasses conventional approaches by successfully maintaining and enhancing the intricate detail information of the object, leading to a significant improvement in reconstructed image quality. Additionally, to prove the robustness of our method across various sampling rates, supplementary tests were performed in the free-space environment.
By calculating the CNR and resolution of reconstructed images at different iteration counts, we empirically demonstrated that even at lower sampling rates our method consistently delivers enhanced detail, showcasing its adaptability and versatility. Taking the experimentation one step further, we ventured into highly turbulent water conditions, executing over 1 500 iterations of transmission and reflection SPI experiments. Despite the challenging nature of these dynamic and unpredictable environmental conditions, the proposed method exhibited superior performance, solidifying its resilience and reliability under diverse circumstances. The comprehensive experimental findings provide compelling evidence of the merit and value of the proposed underwater single-pixel imaging method. They demonstrate that, whether the imaging context involves unknown free-space environments or intricate underwater landscapes, our method can reliably and accurately reconstruct high-quality object images, even at limited sampling rates.
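As a rough illustration of the object-search and evaluation steps described in this abstract, the following Python sketch locates an object region from row/column intensity fluctuations and computes a contrast-to-noise ratio. The thresholding rule, the CNR definition, and all numbers are assumptions for illustration, not the authors' exact formulas.

```python
import numpy as np

def object_bounding_box(image, k=1.0):
    """Find the object from row/column intensity fluctuations: rows and columns
    whose summed intensity exceeds mean + k*std are treated as object rows/columns.
    This criterion is an assumption; the paper's actual rule may differ."""
    rows, cols = image.sum(axis=1), image.sum(axis=0)
    r_idx = np.where(rows > rows.mean() + k * rows.std())[0]
    c_idx = np.where(cols > cols.mean() + k * cols.std())[0]
    return r_idx.min(), r_idx.max(), c_idx.min(), c_idx.max()

def cnr(image, object_mask):
    """Contrast-to-noise ratio, here defined as |mean_obj - mean_bg| / std_bg."""
    obj, bg = image[object_mask], image[~object_mask]
    return abs(obj.mean() - bg.mean()) / bg.std()

# Hypothetical reconstruction: a noisy 64x64 image with a bright square "object".
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[24:40, 24:40] += 0.6
r0, r1, c0, c1 = object_bounding_box(img)
mask = np.zeros(img.shape, dtype=bool)
mask[r0:r1 + 1, c0:c1 + 1] = True
print(f"object box: rows {r0}-{r1}, cols {c0}-{c1}, CNR = {cnr(img, mask):.2f}")
```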
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0401001 (2024)
Simulation and Test of Polarization Reflection Characteristics of Marine Oil Film
Di YANG, Yingchao LI, Xiaolei HAN, Haodong SHI... and Xing MING
Oil spills are a common form of marine pollution. In the field of satellite remote sensing, utilizing polarization techniques for oil spill detection holds significant potential. Researchers have conducted tests, simulations, and analyses of the polarization characteristics of oil spills, but these studies focused on the optical feature model of the oil film while neglecting geometric feature modeling, resulting in an incomplete exploration of the polarization characteristics of marine oil spills. Based on these findings, this paper establishes a comprehensive oil spill model that encompasses polarized sky radiation, dynamic oil film reflection, detector performance parameters, and more. Firstly, a model for the polarization distribution of the sky was created, with sunlight as the incident light and the scattered light treated as linearly polarized light. Through the Rayleigh-particle calculation method, the polarization degree and polarization angle of the sky light were obtained, from which the Stokes vector of the polarized sky light was derived. Secondly, a submodule for the wind-wave and swell-wave spectra was constructed, utilizing the JONSWAP spectrum model for wind waves and the Gaussian model for swell waves; these were combined into the wave submodule. Thirdly, a submodule for polarization radiation conversion at the sea surface was generated. Through the ‘global-to-local’ incident Stokes rotation matrix, the transformation matrix was multiplied by the incident vector, then by the local pBRDF matrix, and finally by the ‘local-to-global’ transformation matrix; the calculated result is the global reflected luminance. Finally, a submodule for the polarization distribution of the oil film was built, deriving the polarization degree and polarization angle of quasi-monochromatic light from the boundary conditions of the electric and magnetic vectors based on the refractive index, incidence angle, and emergence angle. The simulation framework in this paper comprises two parts: full digital simulation and semi-physical simulation. The results of the full digital simulations are as follows. Firstly, the azimuth and elevation of the sun are determined by time, latitude, and longitude. The sun is an unpolarized source while the sky is polarized; together they constitute the light source. Based on the elevation and azimuth angles between the sun and a point in the sky, the polarization degree and polarization angle of a point light source were calculated, giving its Stokes values. Secondly, the wave height was obtained through Fourier transform of the wind-wave and swell-wave spectra, and the inclination of the wave surface was calculated by differencing the wave heights in the neighborhood. Thirdly, incident and reflection rotation matrices were constructed based on the wave surface inclination angle, while the Mueller matrix of the pBRDF was constructed based on the refractive index. Through the 'global-local-global' transformation, the polarization degree and polarization angle were obtained.
Semi-physical simulation involves capturing real water surfaces with a measurement camera and replacing part of the water with oil to simulate the oil film. The research process in this paper involved four steps. Firstly, simulating sea surfaces, including the superposition of wind waves and swell waves and the wind-wave fluctuations driven by changes in wind speed. Secondly, simulating optical radiation, where the intensity of the sea surface becomes brighter as the angle between a point in the sky and the sun decreases, reaching saturation in the camera; additionally, as the phase angle gradually increases, the polarization degree follows a ‘small-large-small’ trend. Thirdly, simulating rotations of the observation axis, where rotation of the observation axis has no effect on the intensity or the degree of linear polarization but affects the polarization angle, redistributing the polarized components. Fourthly, semi-physical simulation, where in-water tests show that approaching the Brewster angle maximizes the polarization degree of both water and oil. The ocean, unlike most solids, is a relatively smooth liquid, so the pBRDF distribution of the ocean is concentrated. Even slight waves on the ocean surface cause significant changes when sunlight is reflected, leading to camera saturation, especially as the reflection repeatedly alternates between the sky and the sun. Secondly, intensity is preferred over polarization degree in sea-surface detection, and the polarization degree reaches its maximum at the Brewster angle; the polarization angle mainly reflects the rotation of the sky light source around the sun and the axis rotation of the camera. Finally, to better distinguish oil spills, knowledge of the inclination angle of each micro-surface, the irradiation environment, and 3D imaging algorithms is essential. This approach helps mitigate the influence of waves and improves the probability of oil spill identification.
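The 'global-local-global' Stokes chain described above can be sketched as a matrix product. The rotation-matrix convention and the placeholder pBRDF Mueller matrix below are assumptions for illustration only, not the paper's model.

```python
import numpy as np

def mueller_rotator(theta):
    """Mueller rotation matrix that rotates the Stokes reference frame by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def reflect_global(stokes_in, mueller_local, theta_in, theta_out):
    """'Global -> local -> global' chain described in the abstract: rotate the incident
    Stokes vector into the facet's local frame, apply the local pBRDF Mueller matrix,
    then rotate the result back to the global frame."""
    return mueller_rotator(-theta_out) @ mueller_local @ mueller_rotator(theta_in) @ stokes_in

# Hypothetical example: weakly polarized sky radiance hitting a tilted facet whose
# local pBRDF Mueller matrix is a simple diagonal placeholder.
s_in = np.array([1.0, 0.1, 0.0, 0.0])
m_local = np.diag([0.05, 0.03, 0.03, 0.02])
s_out = reflect_global(s_in, m_local, theta_in=np.deg2rad(15), theta_out=np.deg2rad(-10))
dolp = np.hypot(s_out[1], s_out[2]) / s_out[0]
print(f"reflected degree of linear polarization = {dolp:.3f}")
```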
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0401002 (2024)
Polarization Accuracy Verification and Out-of-band Response Analysis for Aviation Polarization Radiometer
Zhengdong XU, Pingping YAO, Mengfan LI, Xiangjing WANG... and Jin HONG
Aerosols and clouds in the atmospheric environment are important factors affecting image quality from high-altitude platforms. Aerosols exhibit relatively strong polarization effects compared to ground reference objects, especially under adverse conditions such as sand and dust storms. When aerosols are mostly concentrated at low altitudes, they have a significant impact on the image quality of aviation platforms. To meet the application requirements of atmospheric correction for aviation platform images, polarization radiation information is acquired in multiple spectral bands from the visible to the shortwave infrared, making it possible to detect spatial and temporal changes of aerosol and water vapor and to retrieve atmospheric aerosol and water vapor characteristic parameters, thereby addressing the difficulties of atmospheric correction. The Aviation Polarization Radiometer (APR), developed by the Anhui Institute of Optics and Fine Mechanics (AIOFM), Chinese Academy of Sciences (CAS), is compact and lightweight for aerial applications and is mounted on an aviation flight platform. The APR is designed to detect aerosols, land surfaces, and sea surfaces in aerial flight modes, and therefore needs a certain dynamic range in order to acquire effective data from different targets. Firstly, a systematic description of the instrument is provided. The airborne polarization radiometer adopts a design scheme combining a channel-type optical system with unit detectors, achieving wide spectral coverage and high-precision information acquisition. The instrument uses filters for band division; there are four polarization spectral channels from the visible to the near infrared, namely 490 nm, 670 nm, 870 nm, and 1 610 nm. Each polarization spectral channel is detected through a combination of three filters and three polarizers for spectral and polarization information detection. Secondly, to ensure the usability of the instrument data and the accuracy of parameter inversion, laboratory calibration is conducted after the instrument is assembled and adjusted, and the absolute radiometric calibration and polarization measurement accuracy of the APR are verified. To ensure the accuracy of the absolute radiometric calibration, the calibration experiment is performed using a radiometric calibration system based on a spectrometer. This calibration system comprehensively accounts for the uncertainty in absolute response and meets the requirements for radiometric calibration accuracy. In addition, an adjustable polarized light source is used to verify whether the performance of the instrument meets the requirements in the various polarized spectral bands. This validation ensures the accuracy and stability of the instrument's measurements in the different polarization bands, meeting the expected performance criteria. At the same time, the instrument's response to out-of-band effects in the 490 nm band is analyzed and validated. The measurement results indicate that the absolute radiometric calibration uncertainty of the APR is less than 2.51%.
When the degree of polarization of the input polarized light is 20%, the polarization measurement accuracy in the various polarized spectral bands is better than 0.16%. An analysis is also conducted of the unsatisfactory polarization accuracy in the 490 nm band, and a monochromator system is used to verify the influence of the out-of-band response. Comparative experiments confirm that, by removing the influence of the out-of-band response on the polarization channel of the 490 nm band, the polarization accuracy is improved from 2.29% to 0.06%. This improvement provides support for subsequent product optimization, such as optimizing the filters' cut-off wavelengths to enhance instrument performance.
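The APR's polarization channels combine three polarizers per band; a common way to recover the Stokes parameters and the degree of linear polarization from three analyzer readings is sketched below. The 0°/60°/120° analyzer layout is an assumption, not necessarily the APR's actual configuration.

```python
import numpy as np

def stokes_from_three_analyzers(i0, i60, i120):
    """Recover I, Q, U from intensities behind linear analyzers at 0/60/120 degrees,
    using I_theta = 0.5*(I + Q*cos(2*theta) + U*sin(2*theta))."""
    I = 2.0 / 3.0 * (i0 + i60 + i120)
    Q = 2.0 / 3.0 * (2 * i0 - i60 - i120)
    U = 2.0 / np.sqrt(3.0) * (i60 - i120)
    return I, Q, U

def dolp(I, Q, U):
    """Degree of linear polarization."""
    return np.hypot(Q, U) / I

# Hypothetical check: simulate 20% linearly polarized light at a 30-degree polarization angle.
p, aop = 0.20, np.deg2rad(30)
angles = np.deg2rad([0, 60, 120])
intensities = 0.5 * (1 + p * np.cos(2 * (aop - angles)))   # Malus-type analyzer response
print(f"recovered DoLP = {dolp(*stokes_from_three_analyzers(*intensities)):.3f}")
```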
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0401003 (2024)
Recognition of Vortex Beam Orbital Angular Momentum Based on Improved Xception
Yonghao CHEN, Xiaoyun LIU, Jinyang JIANG, Siyu GAO... and Yueqiu JIANG
As an emerging wireless communication technology, optical communication, which uses lasers as the information carrier and combines high communication capacity with high-speed transmission, is gradually gaining widespread attention. Since Allen and co-workers first demonstrated that optical vortices carry Orbital Angular Momentum (OAM) under paraxial conditions, vortex beams have attracted significant attention. OAM has infinitely many orthogonal eigenstates, forming an infinite-dimensional Hilbert space; in theory, this property allows communication capacity to be increased without bound. Owing to these unique characteristics, vortex beams find widespread applications in optical imaging, micro-manipulation, and free-space optical communication. However, when vortex beams propagate in the ocean, turbulence significantly degrades beam quality, leading to distortion and degradation of the light field at the receiving end. In recent years, scholars worldwide have proposed OAM recognition schemes under different conditions. Nevertheless, when vortex beams propagate in complex media such as oceanic turbulence, their OAM spectrum broadens, posing a challenge to OAM recognition, and the lack of effective OAM detection methods hampers the development of oceanic optical communication. With the rise of artificial intelligence, deep learning technologies have developed rapidly in various fields. This paper proposes a method based on an improved depthwise separable network (IXception) that combines deep learning with traditional oceanic optical communication technology, aiming to achieve OAM mode recognition for vortex beams transmitted through oceanic turbulence. Initially, the paper adopts the stepwise phase-screen approach based on the power spectrum inversion method to simulate the transmission of vortex beams with different OAM values in the ocean. The corresponding dataset is established by collecting the degraded and distorted speckle-field images at the receiving end. Subsequently, the dataset is randomly divided into training, validation, and test sets in an 8∶1∶1 ratio, and the IXception model is trained on the training set. IXception adopts the architectural concept of Xception, combining the residual structure of ResNet and the inverted residual structure of MobileNetV2. IXception reduces the number of network parameters, the network weights, and the complexity of the network structure while improving accuracy through partial connections and weight sharing. Additionally, the network can extract deep spatial features, reduce redundancy in the network's structural parameters, and enhance generalization ability. For transmission distances of 20 m and 80 m, IXception is trained for 40 epochs on the corresponding training sets; the training and validation accuracy curves fit well, with validation accuracy reaching 99.20% and 97.9%, respectively. The results indicate that the accuracy and loss curves of the training and validation sets fit well during training, with no signs of overfitting, underfitting, or gradient explosion.
IXception can effectively extract OAM modes from degraded light-field images. In practical applications, turbulence-induced disturbances to the beam pose a significant challenge to oceanic optical communication. To investigate the generalization and robustness of the IXception model, vortex beams are transmitted in seawater over distances of 20 m, 40 m, 60 m, 80 m, and 100 m, and the corresponding degraded light-field data are collected to create a dataset. The training sets from the different transmission distances serve as inputs to the IXception model. Four statistical measures, namely the mean absolute error (EMAE), mean relative error (EMRE), root mean square error (ERMSE), and correlation coefficient (Rxy), are selected to evaluate the performance of the IXception model. The evaluation results show that as the transmission distance increases, the OAM spectrum passing through the oceanic-turbulence phase screens broadens, resulting in stronger distortion of the light field at the receiving end and making it more challenging to extract OAM modal values from the distorted light field. The research indicates that the IXception architecture has strong generalization ability, achieving an Rxy of 91.21% for OAM modes even at a transmission distance of 100 m. To compare the recognition performance of the Xception and IXception models, evaluations are conducted at transmission distances of 40 m and 100 m. IXception outperforms Xception in terms of the EMRE, EMAE, and ERMSE results. Regarding the Rxy results, Xception scores lower than IXception at both transmission distances, by as much as 4.66% at 40 m. IXception reduces the number of network weights through partial connections and weight sharing, achieving a training time per batch (step) 39 ms lower than that of Xception. In conclusion, the overall performance of the proposed IXception model in recognizing OAM modes in distorted light fields caused by oceanic turbulence is superior to that of the Xception model. This research provides theoretical support for the practical engineering application of oceanic optical communication using vortex beams.
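The four evaluation statistics named in this abstract can be computed as below; the exact definitions (particularly of the mean relative error) are assumed, and the sample mode values are hypothetical.

```python
import numpy as np

def evaluation_metrics(l_true, l_pred):
    """Mean absolute error, mean relative error, root mean square error, and the
    Pearson correlation coefficient between true and predicted OAM mode values."""
    l_true = np.asarray(l_true, dtype=float)
    l_pred = np.asarray(l_pred, dtype=float)
    e_mae = np.mean(np.abs(l_pred - l_true))
    e_mre = np.mean(np.abs(l_pred - l_true) / np.abs(l_true))
    e_rmse = np.sqrt(np.mean((l_pred - l_true) ** 2))
    r_xy = np.corrcoef(l_true, l_pred)[0, 1]
    return e_mae, e_mre, e_rmse, r_xy

# Hypothetical example: true OAM modes vs. modes predicted by the classifier.
true_modes = [1, 2, 3, 4, 5, 6, 7, 8]
pred_modes = [1, 2, 3, 4, 5, 6, 8, 8]
print(evaluation_metrics(true_modes, pred_modes))
```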
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0401004 (2024)
Fluorescence Quantization Characterization and Temperature Sensing Properties of Holmium Ion-doped Yttrium Fluoride Oxide Submicron Crystals
Xin ZHAO, Jing YU, Desheng LI, and Hai LIN
YOF-HY submicron crystals prepared by hydrothermal synthesis and high-temperature calcination have been demonstrated to possess pure Upconversion (UC) luminescence and sensitive temperature-feedback properties for laser display and real-time temperature monitoring in complex environments. In recent years, the fine optoelectronics industry, especially in the fields of photonic devices and non-contact temperature sensing, has witnessed a growing requirement for optical temperature-measurement materials with clear presentation and accurate test records. Among the many temperature-sensing materials, rare-earth-doped upconversion phosphors that realize temperature sensing using their own non-thermally coupled energy levels are ideal optical thermometric materials. Compared with traditional contact temperature-measurement techniques, non-contact temperature measurement based on the Fluorescence Intensity Ratio (FIR) is a promising temperature-sensing technique with favorable sensitivity, high accuracy, and low environmental dependence, which avoids spectral loss and excitation-source fluctuation. UC fluorescent materials have attracted widespread interest owing to their excellent optical properties, such as pure emission, rapid response, and real-time feedback, demonstrated in various optoelectronic devices. Rare-earth ions are inherently low in luminescence efficiency, but the introduction of sensitizers produces strong characteristic fluorescence by increasing the absorption of infrared photons and transferring energy to the rare-earth ions. Among the rare-earth ions, Ho3+ has attracted great interest owing to its special energy-level structure, which has been used to achieve strong visible UC luminescence with great temperature-sensing potential; meanwhile, the introduction of Yb3+ ions as a sensitizer can enhance the UC emission intensity of Ho3+-doped materials under ~980 nm laser pumping. Among UC luminescence materials, oxyfluorides, with their lower phonon energies and stronger inversion asymmetry, help improve the probability of radiative transitions and yield more efficient UC luminescence for higher-sensitivity temperature monitoring. In this paper, the fabrication and temperature-sensing luminescence properties of holmium-doped yttrium fluoride oxide submicron crystals are reported. The crystal structures are characterized by Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD), confirming that the powder is YOF with a submicron structure. The UC luminescence performance of the crystal particles is characterized under 977 nm laser pumping, and the power dependence shows that both the green and red UC emissions are two-photon excitation processes. For fluorescence quantum characterization, the spectral power distributions of the samples are measured with a fluorescence spectroscopy test system, and absolute quantum parameters such as the net photon distributions and quantum yields are calculated.
When the excitation power density is increased to 73 mW/mm2, the green and red UC emission spectral powers are 0.31 μW and 0.10 μW, respectively, demonstrating that the Ho3+/Yb3+ co-doped YOF submicron crystals are efficient luminescent materials. The quantum yields (QYs) of the green and red emissions from Ho3+ under 977 nm laser excitation are derived to be 2.97×10-5 and 1.40×10-5, respectively, at a pump power density of 73 mW/mm2, and the high photon-generation efficiency ensures sufficient fluorescence intensity for temperature feedback. For temperature sensing, the thermal behavior of Ho3+ is investigated using a FIR scheme based on two non-thermally coupled energy levels, and the absolute sensitivity (SA) and relative sensitivity (SR) of the YOF submicron crystals are calculated. SR is an indispensable parameter that is independent of material properties and enables direct quantitative comparison of the temperature-sensing properties of different samples. The maximum SR is 0.437% K-1 at 303 K and remains 0.331% K-1 when the temperature is increased to 433 K. Finally, temperature cycling tests conducted on the YOF submicron crystals demonstrate good repeatability, making them excellent candidates for temperature monitoring. Therefore, Ho3+/Yb3+ co-doped YOF submicron crystals provide a potential option as an efficient-luminescence, high-temperature-sensitivity material for the field of temperature sensing.
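A minimal sketch of how the absolute and relative sensitivities quoted above relate to a measured FIR(T) curve. The numerical differentiation and the placeholder FIR data are assumptions for illustration; the paper presumably fits an analytic model for its non-thermally coupled levels.

```python
import numpy as np

def sensitivities(temperature_K, fir):
    """Absolute and relative temperature sensitivities from a measured FIR(T) curve:
    S_A = dFIR/dT and S_R = |(1/FIR) * dFIR/dT|, here evaluated numerically."""
    fir = np.asarray(fir, dtype=float)
    temperature_K = np.asarray(temperature_K, dtype=float)
    s_a = np.gradient(fir, temperature_K)       # K^-1, in FIR units
    s_r = np.abs(s_a / fir) * 100.0             # % K^-1
    return s_a, s_r

# Hypothetical FIR data sampled between 303 K and 433 K (placeholder exponential form).
T = np.arange(303, 434, 10)
fir = 0.8 * np.exp(-900.0 / T)
s_a, s_r = sensitivities(T, fir)
print(f"S_R at {T[0]} K: {s_r[0]:.3f} % K^-1")
```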
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0416001 (2024)
Optical Properties of Tungsten-doped VO2 Films and Microstructure Formation by Nano-powders Using Double-phase-interface Self-assembled Method
Yangyang ZHOU, Jiatong JIANG, Xiaoran ZHANG, Mengjie TIAN... and Yabin ZHU
Vanadium dioxide (VO2) undergoes a phase transition between a high-temperature metallic state and a low-temperature semiconducting state, accompanied by significant changes in optical properties. However, the applicability of pure VO2 is limited by its transition temperature of 68 ℃. Ion doping is an effective way to reduce the VO2 phase transition temperature, and W6+ doping can lower it by 20~30 K/wt%. The dual-phase-interface self-assembly method based on nanomaterials is environmentally friendly, simple to operate, and has the potential for large-scale production. In this study, WxV1-xO2/glass films were prepared by the bi-phase interfacial self-assembly method. Under hydrothermal conditions, pure W-doped VO2(B) nano-powder was synthesized by hydrothermal crystallization using oxalic acid, tungstic acid, and VO2 as raw materials, and the crystalline transformation from VO2(B) to VO2(M) powder was then realized by an annealing process. WxV1-xO2 nano-solutions were synthesized by ultrasonically mixing VO2 nano-powders with tungsten (W) atomic ratios of 0%, 1%, 2%, and 3%, high-purity anhydrous ethanol, and Polyvinylpyrrolidone (PVP, concentration of 98%) at a 1∶1∶28 weight ratio. WxV1-xO2/glass film samples with tungsten concentrations of 0%, 1%, 2%, and 3% were prepared on glass substrates using a vacuum silicone (Polydimethylsiloxane, PDMS) mold coated with self-made Polystyrene (PS), together with the nano-sized tungsten-doped vanadium dioxide solution to which a surfactant had been added. At the same time, the capillary flow and film-formation process at the liquid/solid dual-phase interface, as well as the double “coffee ring” pattern, were directly observed; the complex dynamic evolution of the liquid-solid-gas system follows the laws of fluid dynamics and thermodynamics. On the basis of preparing the WxV1-xO2 film at the bi-phase interface, Polyethylene (PE) wire with a diameter of 1 μm was added into the mold, and a microstructure was fabricated on the WxV1-xO2 film by this simple method. The optical transmittance of the WxV1-xO2/glass thin films was measured using a laser with a wavelength of 980 nm as the light source. A Rigaku Ultima IV diffractometer (Japan) was used to examine the crystal structure of the films by X-ray Diffraction (XRD) and analyze the crystallization of the samples, and the surface morphology of the thin-film samples was measured with a ZEISS GeminiSEM 300 Scanning Electron Microscope (SEM, Germany). Comparison of the XRD patterns with the standard card shows that the main crystal structure of the samples is VO2(M). Although the films are polycrystalline, the (011) crystal plane of VO2(M) gives the strongest diffraction peak and is the component that most strongly influences the results of the subsequent property tests. The SEM photos show that most of the film surface consists of cuboid grains with some small grains attached, indicating a preferentially oriented polycrystalline film, which is mutually confirmed by the XRD results.
Compared with the near-infrared optical transmittance of 2% WxV1-xO2/glass prepared by the spin-coating method, the film prepared by the bi-phase interface self-assembly method shows a better decreasing trend of transmittance with increasing temperature. The diffraction pattern of the microstructure was observed with a He-Ne laser; the visible-light diffraction pattern of the microstructure is similar to a grating diffraction pattern, indicating that this preparation method for the film and microstructure is feasible. The electromagnetic-wave frequency-domain module in COMSOL, which solves Maxwell's equations by the finite element method, is used to simulate the theoretical transmittance of VO2. In the near-infrared band, the transmittance remains relatively high at low temperature but drops sharply at high temperature, consistent with the transmittance curves of vanadium dioxide reported in the literature. By multiplying the experimental transmittance data of the thin-film sample with 0% W doping by a factor of 10 and comparing with the simulation data, it can be seen that the simulation results are consistent with the experimental ones. It is thus demonstrated that this low-cost and easy-to-implement dual-interface self-assembly method can be used to prepare microstructures for tailoring optical properties, providing a new route for the self-assembly preparation of nano-powder films. The results can be applied in the fields of protective coating preparation and microstructured optical control.
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0416002 (2024)
Interaction of Airy Gaussian Beams in Nematic Liquid Crystals with Competing Nonlocality
Siqi REN, Shaozhi PU, Ying LIANG, Mingxin DU, and Meng ZHANG
In this paper, based on the new competing nonlocal model proposed by JUNG P S et al., the interaction of Airy Gaussian beams is numerically investigated with the split-step Fourier scheme. Usually, in nonlocal media with competing nonlinearities, the nonlinear refractive index is induced by two types of independent nonlinearities. However, when a beam propagates in a nematic liquid crystal, the nonlinear refractive index is induced by molecular orientational effects and thermal effects; in this case, beam propagation in the liquid crystal is governed by the model proposed by JUNG P S. According to this new competing nonlocal model, the refractive-index changes induced by the two nonlinear effects are not independent of each other, so the light-induced refractive-index change becomes the product of the refractive-index changes caused by the two nonlinear effects. It is shown that the interaction between Airy Gaussian beams can be controlled by adjusting the distribution factor, the beam amplitude, the beam separation, the phase difference, and the nonlocality. The results show that the distribution factor, the degree of molecular orientational nonlocality, the thermal nonlinearity coefficient, and the initial spacing all affect the interaction between Airy Gaussian beams. Among these factors, the degree of thermal nonlocality has the smallest effect on the beam interaction for both in-phase and out-of-phase interactions. For the in-phase Airy Gaussian beam interaction, the period of the breather obtained from the fusion of the Airy Gaussian beams decreases with increasing distribution factor, and the beams gradually fuse into quasi-solitons. For the out-of-phase Airy Gaussian beam interaction, the two Airy Gaussian beams become two independent solitons, and the angle between the soliton pair increases as the distribution factor decreases. It is found that an increase in the beam amplitude increases the interaction force. An increase in the nonlocality of the reorientational nonlinearity leads to an increase in the width of the Airy Gaussian beam, and also to an increase in the interaction force between out-of-phase Airy Gaussian beams. Interestingly, the thermal nonlocality drastically affects the interaction of out-of-phase Airy Gaussian beams, changing the repulsive force between two out-of-phase Airy Gaussian beams into mutual attraction, which can lead to a balance of repulsive and attractive forces under certain conditions. It is also found that increasing the thermal nonlinear coefficient can give rise to either a repulsive or an attractive force between out-of-phase Airy Gaussian beams. As the initial spacing decreases, the attraction between in-phase Airy Gaussian beams increases and the oscillation period of the formed breathers decreases; in this case, the repulsion between out-of-phase Airy Gaussian beams increases. When the degree of thermal nonlocality increases, the oscillation period of in-phase Airy Gaussian beams decreases.
When the initial beam amplitude is large, the mutually repulsive Airy Gaussian beams attract each other and fuse into a breather. For out-of-phase Airy Gaussian beams, when the degree of thermal nonlocality increases to a certain value, the repulsive and attractive forces between the two Airy Gaussian beams can reach a dynamic balance. In this case, it is also found that changing the thermal nonlinear coefficient in the nematic liquid crystal can make attraction and repulsion coexist between the two Airy Gaussian beams. When the distribution factor is small, adjusting the thermal nonlinear coefficient makes the change in repulsion between the Airy Gaussian beams more pronounced. These theoretical results may provide a basis for experiments investigating the interaction between Airy Gaussian beams. In addition, our theory may be useful for achieving all-optical interconnection through the interaction between Airy Gaussian beams.
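To illustrate the numerical scheme named in the abstract, here is a one-dimensional split-step Fourier propagator. The competing nonlocal nematic-liquid-crystal response is replaced by a simple local Kerr term purely for brevity, so this is a generic sketch, not the paper's model.

```python
import numpy as np

def split_step_nls(u0, x, dz, steps, kerr=1.0):
    """Split-step Fourier propagation of i u_z + 0.5 u_xx + kerr*|u|^2 u = 0:
    half a diffraction step in Fourier space, a full nonlinear step in real space,
    then the remaining half diffraction step."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_linear = np.exp(-0.5j * k**2 * (dz / 2))
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))
        u *= np.exp(1j * kerr * np.abs(u)**2 * dz)
        u = np.fft.ifft(half_linear * np.fft.fft(u))
    return u

# Hypothetical input: two out-of-phase (pi phase difference) Gaussian-apodized beams.
x = np.linspace(-20, 20, 1024)
u0 = np.exp(-(x - 3)**2) - np.exp(-(x + 3)**2)
u_out = split_step_nls(u0, x, dz=0.01, steps=500)
print(f"output peak intensity: {np.max(np.abs(u_out)**2):.3f}")
```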
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0419001 (2024)
Dual-channel Synchronous Calibration VIPA Spectrometer with Optical Waveguide Input
Zhongnan ZHANG, Dong LIN, Xiaoming ZHU, Yutao WANG... and Jinping HE
In the realm of observational astronomy, achieving high-precision spectral detection has become a crucial necessity, particularly for scientific endeavors such as studying terrestrial planets via radial velocity methods, probing cosmological variations in fundamental constants, and measuring the universe's expansion rate. This demand drives the development of spectrometers with high spectral resolution and high wavelength-calibration accuracy. Over the past two decades, several high-resolution astronomical spectrometers tailored for high-precision radial velocity measurements have been developed worldwide. These spectrometers typically employ echelle gratings as the primary dispersion components and are characterized by complex structures, large dimensions, stringent mechanical and thermal stability requirements, and considerable manufacturing and maintenance costs. Compared with gratings, the Virtually Imaged Phased Array (VIPA), which employs a side-entrance Fabry-Perot etalon geometry, features a simple and compact structure, ultra-high angular dispersion, minimal sensitivity to slit-width variations, and ease of calibration when combined with laser frequency combs. These characteristics make it highly promising for astronomical spectrum detection, and research on VIPA spectrometers for astronomical applications has consequently been initiated. A VIPA spectrometer equipped with dual-channel optical fiber input and calibrated with a laser frequency comb was previously developed and its long-term stability investigated. However, significant relative shifts and poor synchronization between the parallel optical fiber channels under environmental disturbances limit the calibration repeatability of that spectrometer, and a substantial gap remains between its dual-channel synchronous calibration accuracy and the photon-noise limit. To mitigate the significant impact of the relative shift between channels on synchronous calibration accuracy, this paper adopts a novel dual-channel optical waveguide input mode. Compared with the side-by-side optical fiber arrangement, the optical waveguide chip exhibits superior stability and reduced spatial position deviation, attributed to its photolithography and reactive-ion-etching fabrication. Additionally, the two optical waveguides are significantly closer together than parallel optical fibers, so under environmental disturbances the relative displacement between waveguides is smaller than between fibers, resulting in enhanced synchronicity. In principle, a VIPA spectrometer with optical waveguide input can therefore achieve superior calibration synchronization. Hence, this study develops a VIPA spectrometer using a dual-channel optical waveguide as the input port and examines the calibration shifts and dual-channel synchronous calibration accuracy of the spectrometer under diverse environmental conditions. The results demonstrate that, under comparable experimental conditions, the VIPA spectrometer with optical waveguide input achieves superior dual-channel synchronous calibration accuracy compared with its counterpart with optical fiber input.
This represents the highest dual-channel synchronous calibration accuracy attained by VIPA spectrometers to date. Furthermore, the stability performance of the VIPA spectrometer has not yet reached its optimal state under the current experimental conditions. Employing an astronomical optical frequency comb with a higher repetition frequency and a flattened spectrum will improve the signal-to-noise ratio of the spectrum detected by the VIPA spectrometer, leading to further improvements in wavelength-calibration accuracy. It is anticipated that the dual-channel synchronous calibration accuracy will then significantly surpass current levels, thereby maximizing the advantages of optical waveguides as innovative input ports for spectrometers.
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0430001 (2024)
Under-sampled Image Quality Measurement Based on Virtual Knife Edge for Next Generation Palomar Spectrograph
Lifeng TANG, Zhongwen HU, Ru CHEN, Nan ZHOU, and Hangxin JI
The Next Generation Palomar Spectrograph (NGPS) is a broadband, high-throughput, medium-low resolution spectrograph with a focal ratio of f/15.7 and a slit field of view of 180″×10″, which will be installed on the Cassegrain focus platform of the Hale telescope. NGPS consists of four channels, U, G, R, and I, covering a wide spectral range of 310~1 040 nm. The spectrograph is equipped with 4 000×2 000 scientific-grade CCDs with a single-pixel size of 15 μm. The imaging quality of an astronomical spectrograph can be characterized by the Point Spread Function (PSF) or Line Spread Function (LSF), but these are usually under-sampled on the spectrograph detector and therefore difficult to measure directly. It is thus necessary to find a way to measure the image quality accurately under CCD under-sampling during the alignment and debugging phase of the spectrograph. In this paper, the virtual knife edge method is introduced into the image quality measurement of the NGPS. A pixel boundary of the CCD is set as a virtual segmentation edge. As the light source of the optical system moves step by step along the slit plane, the image spot is scanned along the direction perpendicular to the virtual knife edge while the CCD takes under-sampled images. By integrating the energy on one side of the virtual knife edge in each image, the intensity profile of the integration region is obtained, and the Edge Spread Function (ESF) is acquired by Gaussian-function fitting. The LSF is the derivative of the ESF, from which the Full Width at Half Maximum (FWHM) can be calculated. The LSF measured by the virtual knife edge method represents the energy distribution of the virtual line light source after passing through all optical elements, which objectively evaluates the imaging quality of the optical system. Simulation and experimental research are carried out on the R-channel optical system of NGPS, and the following work is done in the image quality test: first, the virtual knife edge method is introduced to measure the under-sampled image quality of the spectrograph; second, the virtual knife edge method is applied to measure the scale of an under-sampled light source. The classical straight-edge method and direct oversampled measurements are compared and analyzed to verify the good measurement accuracy of the virtual knife edge method. At 693 nm, the oversampled LSF measured by the straight-edge method has FWHM = 7.05 μm, while the under-sampled result measured by the virtual knife edge method is FWHM = 7.14 μm, a measurement error of 1.3%. For a 0.3″ field-of-view scale, the under-sampled FWHM at 689 nm, 693 nm, and 701 nm measured by the virtual knife edge method is 24.2 μm, 27.7 μm, and 30.8 μm, respectively. Taking the direct oversampled measurement as a reference, the accuracy of the virtual knife edge method reaches 98.3%, 93.9%, and 88.8%, respectively. The experimental results show that the virtual knife edge method can accurately measure the image quality of the spectrograph even when the detector is severely under-sampled. It can be expected that, as the image quality of the spectrograph is further optimized, the measurement accuracy will also improve.
Combined with the dispersion coefficient, the method can also measure the resolution of the spectrograph. The work in this paper provides a reference for applying this method to image quality measurement of astronomical spectrographs.
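The virtual-knife-edge workflow described above (integrate one-sided energy at each scan step, fit a Gaussian-edge ESF, differentiate to an LSF, report the FWHM) can be sketched as follows; the fit model and scan parameters are illustrative assumptions rather than the NGPS pipeline itself.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, amplitude, center, sigma, offset):
    """Edge spread function corresponding to a Gaussian line spread function."""
    return offset + 0.5 * amplitude * (1 + erf((x - center) / (np.sqrt(2) * sigma)))

def fwhm_from_scan(positions_um, edge_energy):
    """Fit the one-sided integrated energy (ESF) with the Gaussian-edge model;
    the LSF is its derivative, so FWHM = 2*sqrt(2*ln 2)*sigma."""
    p0 = [np.ptp(edge_energy), positions_um.mean(), 2.0, edge_energy.min()]
    popt, _ = curve_fit(esf_model, positions_um, edge_energy, p0=p0)
    return 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])

# Hypothetical scan: 0.5 um steps across a spot whose true FWHM is 7 um.
x = np.arange(-15, 15, 0.5)
true_sigma = 7.0 / (2 * np.sqrt(2 * np.log(2)))
esf = esf_model(x, 1.0, 0.3, true_sigma, 0.02) + np.random.default_rng(1).normal(0, 0.005, x.size)
print(f"recovered FWHM = {fwhm_from_scan(x, esf):.2f} um")
```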
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0430002 (2024)
Estimation of Error in Non-linear Least Square for Quantitative Analysis in Fourier Transform Infrared Spectrometry
Xinchun LI, Jianguo LIU, Liang XU, Xianchun SHEN... and Yongfeng SUN
Quantitative analysis of Fourier transform infrared spectrometry using the non-linear least squares method has achieved a wide range of applications. At present, it is common practice to evaluate the fitting performance based on the magnitude of the residuals, which cannot quantify the inversion error of each parameter involved in the fit. In this paper, a parameter-error estimation method for inversion using the non-linear least squares method in quantitative analysis of infrared spectra is proposed, based on the statistical theory of parameter estimation. The inversion errors of each fitted parameter are estimated through the Jacobian matrix of the parameters and an estimate of the measurement-error variance, where the measurement-error variance is approximated by the variance of the fitting residuals. Since the model adopts a series of idealized assumptions and the error estimation is an approximation at the optimal parameters, we conducted experimental validation on toxic and hazardous gases commonly found in ship compartments to verify its applicability and stability for quantitative infrared spectroscopy. The materials used in the experiment are three gases, CBrF3, CH2Cl2, and CHCl3, which exhibit significant absorption-peak overlap in the 725-795 cm-1 spectral range. We conducted a comparative analysis of the single-beam spectra and transmittance spectra commonly used in quantitative analysis, and controlled the noise level of the spectra through the number of averaged scans. The acquisition of transmittance spectra relies on single-beam spectra obtained with high-purity nitrogen as the background. The experimental results indicate that the primary reasons for differences in the inversion results between single-beam spectra and transmittance spectra are spectral drift, baseline-fitting errors, and systematic errors. For the self-developed extractive Fourier transform infrared spectrometer, an average of 8 spectra is sufficient to meet the requirement of an inversion error of less than 3%. When the inversion results from 64 averaged spectra are used in conjunction with the error estimation, 100% coverage of the mean concentration can be achieved. As the noise level decreases, disturbances from factors such as the instrument and the environment become the main contributors to the estimation error. The differences in the convergence values of the relative errors of the various gas components are primarily caused by variations in the spectral accuracy of each component in the spectral database. In practical applications, transmittance spectra or single-beam spectra can be reasonably selected for quantitative analysis according to the specific conditions of the monitoring scene. The estimated error of the inversion results can serve as a reference indicator of their reliability and accuracy and can be used to balance the trade-off between measurement precision and time resolution. At the same time, this method has important application prospects in many areas, such as optimizing the parameter configuration of spectral analysis and guiding the design of spectral instrument systems.
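The error-propagation step described above (parameter covariance from the Jacobian and the residual variance) can be written compactly as cov = s^2 (J^T J)^-1; the two-component synthetic spectra below are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def parameter_errors(fit_result, n_data):
    """1-sigma uncertainties of the fitted parameters: cov = s^2 * (J^T J)^-1,
    with s^2 the residual variance used as an approximation of the
    measurement-error variance."""
    J = fit_result.jac
    dof = max(n_data - fit_result.x.size, 1)
    s2 = np.sum(fit_result.fun ** 2) / dof
    cov = s2 * np.linalg.inv(J.T @ J)
    return np.sqrt(np.diag(cov))

# Hypothetical two-component fit of overlapping absorption bands in 725-795 cm-1.
rng = np.random.default_rng(2)
nu = np.linspace(725, 795, 200)
a1 = np.exp(-((nu - 750) / 8) ** 2)    # reference absorbance of component 1
a2 = np.exp(-((nu - 770) / 10) ** 2)   # reference absorbance of component 2
y = 0.8 * a1 + 0.3 * a2 + rng.normal(0, 0.01, nu.size)
fit = least_squares(lambda p: p[0] * a1 + p[1] * a2 - y, x0=[1.0, 1.0])
print("fitted concentrations:", fit.x, "+/-", parameter_errors(fit, y.size))
```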
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0430003 (2024)
Spectroscopic Mueller Matrix Polarimetry Based on Spectral Modulation and Division of Amplitude Demodulation
Zhongxun DENG, Naicheng QUAN, Siyuan LI, and Chunmin ZHANG
Thin-film and nanostructure measurement technologies have played an important role in production-process monitoring in industries such as integrated-circuit manufacturing, flat-panel displays, and solar cells. Many optics-based measurement techniques have emerged to meet the industrial needs for high-speed and non-destructive measurement. Spectroscopic Mueller Matrix Polarimetry (SMMP) is a typical representative of these techniques and has become an important direction in the research and development of thin-film and nanostructure measurement technology. It uses a Polarization State Generator (PSG) to convert polychromatic light over a certain spectral range into fully polarized light and project it onto the surface of the sample under test, and uses a Polarization State Analyzer (PSA) to detect the polarization state of the light reflected or transmitted by the sample surface, thereby obtaining all 16 Mueller matrix elements of the sample as a function of wavelength; characteristic parameters such as the complex dielectric constant, carrier structure, and film thickness are then analyzed and extracted. SMMP can be divided into time-modulated and frequency-modulated types according to the working principle. The PSG and PSA of the former are both composed of components whose modulation parameters change over time, such as rotating compensators, liquid-crystal phase retarders, and photoelastic modulators, together with fixed linear polarizers. For measurements over a wide spectral range, the SMMP with dual rotating compensators is the most common: the compensators of the PSG and PSA rotate at certain rates to produce different time-modulation frequencies, and Fourier-transform demodulation is then used to obtain all 16 Mueller matrix elements of the sample; this requires a long measurement time and is not suitable for situations where the Mueller matrix elements change rapidly over time. The PSG and PSA of the latter are both composed of two high-order retarders with specified thicknesses and fixed fast-axis directions, together with a fixed linear polarizer. All 16 elements of the measured Mueller matrix are modulated onto 37 different frequency channels, and the spectra of all 16 Mueller matrix elements can be obtained by channel filtering and Fourier transformation. As the system contains no moving components, static real-time measurement can be achieved. However, when the light source or the measured Mueller matrix has sharp spectral features, serious channel crosstalk occurs, which degrades measurement accuracy and precision. According to the principle of Fourier-transform spectroscopy, a large channel bandwidth corresponds to a high restored spectral resolution. Because the total channel bandwidth is limited, increasing the number of channels reduces the bandwidth available for restoring the Mueller matrix spectra, so the spectral resolution of the measured Mueller matrix elements is much lower than the spectral resolution of the spectrometer. This approach is therefore only suitable for situations where the measured Mueller matrix varies slowly with wavelength.
To overcome these limitations, we present a SMMP based on spectral modulation and division-of-amplitude demodulation. Compared with spectroscopic Mueller polarimetry based on time modulation or combined frequency-time modulation, it has no moving components or actively modulated electronic devices and can measure the spectra of all 16 Mueller matrix elements of the sample in real time. Compared with spectroscopic Mueller polarimetry based on frequency modulation, it has higher spectral resolution and a lower probability of channel crosstalk. According to the research results, the choice of high-order retarders and spectrometers can further expand the measured spectral range, and optimizing the calibration method can further improve the accuracy and precision of the optical measurements. This work has scientific significance and potential application prospects for the research and development of high-speed, high-precision, wide-spectral-band generalized spectroscopic ellipsometry in the field of non-destructive testing.
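As a toy illustration of the channeled-spectrum demodulation that the frequency-modulated variant relies on (and whose resolution and crosstalk limits motivate this work), the sketch below modulates one envelope onto a carrier in the wavenumber domain and recovers it by isolating its channel in the Fourier (OPD) domain. All numbers are hypothetical, and this single-channel caricature is not the full 16-element Mueller reconstruction.

```python
import numpy as np

# One Mueller-element envelope modulated onto a carrier set by a high-order retarder,
# then recovered by keeping only its frequency channel and inverse-transforming.
n = 4096
sigma = np.arange(n) / n                                  # normalized wavenumber axis
carrier_cycles = 400                                      # hypothetical channel position
envelope = 0.5 + 0.3 * np.cos(2 * np.pi * 5 * sigma)      # slowly varying element spectrum
signal = envelope * np.cos(2 * np.pi * carrier_cycles * sigma)

spec = np.fft.fft(signal)
freq = np.fft.fftfreq(n, d=1.0 / n)                       # channel axis in cycles
mask = np.abs(freq - carrier_cycles) < 20                 # keep the positive-frequency channel
recovered = 2 * np.abs(np.fft.ifft(spec * mask))
print(f"max recovery error: {np.max(np.abs(recovered - envelope)):.4f}")
```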
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0430004 (2024)
Atomic and Molecular Physics
Time-stamp Camera Centroiding Algorithm and Dissociation Electron/ion Momentum Distribution Simulation
Xiaohong HUA, Yuliang GUO, Tianmin YAN, Shuai LI... and Yuhai JIANG
The time-stamped camera Tpx3Cam is a cutting-edge tool for exploring atomic and molecular dynamics, enabling the detection of photons, electrons, and ions in three dimensions with a time resolution of up to 1.6 ns. Despite its advantages, Tpx3Cam faces inherent challenges, such as the cluster effect. This effect compromises both the temporal and spatial resolution of data acquisition and significantly increases the data volume, posing obstacles for subsequent data processing. To counter this, a centroiding algorithm is crucial to mitigate the impact of the cluster effect, enhance Tpx3Cam's imaging resolution, and reduce the data volume. The existing centroiding algorithm efficiently eliminates unnecessary derived signals within clusters and accurately locates their centers by analyzing their distributions, achieving sub-pixel super-resolution in position. However, existing centroiding algorithms are limited to low counting rates, that is, to isolated clusters, and lack the capability to distinguish spatially connected clusters. At high counting rates, closely spaced clusters can appear within a short time, so traditional centroiding algorithms are inadequate for declustering in such scenarios. A new centroiding algorithm has therefore been developed to address the cluster effect encountered in high-counting-rate imaging. Building on the existing centroiding algorithm, the new method significantly enhances the capability to distinguish clusters in time. It accurately identifies each independent cluster within large datasets and effectively declusters them, reducing the data volume by approximately one order of magnitude while achieving sub-pixel super-resolution of the cluster-center location; a position resolution of about 0.1 pixel is achieved for each signal. Additionally, instead of Gaussian fitting, we use the weighted-average method to determine cluster centers. This choice is supported by its equivalence to Gaussian fitting, as proven in the article, and the weighted-average method is far more efficient, locating cluster centers approximately 103 times faster than Gaussian fitting. To validate the impact of the centroiding algorithm on Tpx3Cam imaging in practical experiments, we conducted simulations using SIMION to replicate the imaging of electrons and ions in a typical Velocity Map Imaging (VMI) system. By simulating the ionization of ns-state electrons and the Coulomb explosion of N2 through the (1,1) channel in VMI experiments, we observed significant improvements: the centroiding algorithm reduced the Full Width at Half Maximum (FWHM) of the electron position distribution by 30%, thereby enhancing the momentum resolution along the detector plane by 30%, and reduced the FWHM of the Time-of-Flight (ToF) distribution of N+ from the Coulomb explosion by 80%, leading to an 80% enhancement in time resolution.
Although variations may occur with changes in the initial conditions of the electrons and ions, the overall improvements in position and time resolution remain consistent. Consequently, the centroiding algorithm demonstrates its efficacy in enhancing momentum resolution in practical electron- and ion-detection experiments. Furthermore, after implementation of the centroiding algorithm, covariance analysis of the ion radius distribution resulting from the Coulomb explosion of CO with background-gas interference successfully revealed the correlation between C+ and O+. The algorithm effectively mitigates the count-fluctuation interference induced by the cluster effect and remains unaffected by background impurities. Finally, the impact of the count rate on the centroiding algorithm is addressed: excessively high count rates pose a risk of data loss when the centroiding algorithm is employed. We are actively addressing this concern and working to resolve this flaw in the algorithm in the near future.
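The weighted-average centroiding step can be sketched as below. The ToT weighting, the earliest-ToA cluster time, and the sample hits are assumptions for illustration, and the spatial/temporal clustering itself is taken as already done.

```python
import numpy as np

def weighted_centroids(hits, labels):
    """ToT-weighted centroid of each cluster of pixel hits.
    hits columns: x, y, ToA (ns), ToT; labels assigns each hit to a cluster."""
    centroids = []
    for lab in np.unique(labels):
        cluster = hits[labels == lab]
        weights = cluster[:, 3]                    # ToT used as the weight
        x = np.average(cluster[:, 0], weights=weights)
        y = np.average(cluster[:, 1], weights=weights)
        toa = cluster[:, 2].min()                  # earliest arrival as the cluster time
        centroids.append((x, y, toa))
    return np.array(centroids)

# Hypothetical Tpx3Cam hits forming two clusters (x, y, ToA in ns, ToT).
hits = np.array([[10, 10, 100.0, 5], [11, 10, 100.8, 9], [10, 11, 101.6, 3],
                 [40, 42, 500.0, 4], [41, 42, 500.8, 7]])
labels = np.array([0, 0, 0, 1, 1])
print(weighted_centroids(hits, labels))
```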
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0402001 (2024)
Fiber Optics and Optical Communications
Study on Scanning Velocity of Germanium Core Fibers with Different Outer Diameters Annealed by CO2 Laser
Yifan DU, Ziwen ZHAO, Shuangqi ZHONG, Zecheng MA, and Shaoye WANG
As the germanium (Ge) core is mostly in an amorphous or polycrystalline state after fabrication, laser annealing is an effective way to improve the properties of semiconductor core fiber. During the laser annealing process, the axial scanning velocity of the laser along the fiber is an important parameter that affects the properties of the annealed fiber. Therefore, it is of great significance to investigate the modification mechanism of laser annealing on the Ge core to improve the properties of annealed fibers. In this study, three sets of Ge core fibers with different Outer Diameters (OD) and the same Inner Diameter (ID) were annealed by CO2 laser at different scanning velocities. The laser annealing experiments were carried out on Ge core optical fibers with an ID of 41~43 μm and ODs of 188 μm, 251 μm, and 270 μm, respectively. The Ge core fibers were annealed by the SK-3D30 CO2 laser; the laser spot is 1 mm in diameter, the output power is 0~30 W, and the laser wavelength is 10.6 μm. The scanning region along the fiber axis is 1 mm×50 mm, which completely covers the Ge core fiber, and the laser scan was reciprocated along the fiber axis within this region during the annealing time. After laser annealing, the samples were analyzed with a spectrometer. Raman measurements were carried out on the cross-section of the Ge core fiber to collect the Raman peak frequency information, and the obtained data were processed into maps with MATLAB. The optical transmission loss of the Ge core fiber was measured by the cutback method, with a system consisting of a laser, a photodetector, and an optical power meter. The samples were cut back by 5 mm each time and measured 3 times per fiber. All measurements were made at room temperature. Three sets of experiments were carried out, with a laser frequency of 50 kHz, a laser power of 20% (6 W), and a laser scanning time of 20 s. 1) The Ge core fiber with an OD of about 188 μm was annealed at scanning velocities of 8 mm·s⁻¹, 10 mm·s⁻¹, 12 mm·s⁻¹, and 14 mm·s⁻¹; the Raman frequency distribution and average value at a scanning velocity of 10 mm·s⁻¹ were closest to those of bulk Ge crystal, and the optical transmission loss was 3.435 dB·cm⁻¹. 2) The Ge core fiber with an OD of about 251 μm was annealed at scanning velocities of 10 mm·s⁻¹, 12 mm·s⁻¹, 14 mm·s⁻¹, 16 mm·s⁻¹, and 20 mm·s⁻¹; the Raman frequency distribution and average value at a scanning velocity of 14 mm·s⁻¹ were closest to those of bulk Ge crystal, and the optical transmission loss was 2.147 dB·cm⁻¹. 3) The Ge core fiber with an OD of about 270 μm was annealed at scanning velocities of 12 mm·s⁻¹, 14 mm·s⁻¹, 16 mm·s⁻¹, and 18 mm·s⁻¹; the Raman frequency distribution and average value at a scanning velocity of 16 mm·s⁻¹ were closest to those of bulk Ge crystal, and the optical transmission loss was 3.578 dB·cm⁻¹. The experimental results show that, for a given OD, the laser annealing effect first improves and then degrades as the laser scanning velocity increases, and the scanning velocity that yields the optimal annealing effect increases with the OD of the fiber.
The temperature variation at fixed points on the laser-irradiated surface of the Ge core during the annealing process was simulated with COMSOL Multiphysics. The simulation results indicate that, for a given OD, a faster scanning velocity produces denser temperature pulses, so the Ge core spends most of the time in the relatively high-temperature region, and the modification effect of this temperature field structure on the Ge core is correspondingly stronger. The experimental results and the simulated temperature variation indicate that the laser scanning velocity is an important factor affecting the annealing effect of Ge core fiber, and that the annealing intensity of the laser-annealed Ge core fiber can be enhanced as the laser scanning velocity is increased.
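A back-of-the-envelope kinematic illustration (not the COMSOL thermal model) of why a faster reciprocating scan gives denser heating pulses at a fixed point on the core; the scan length and spot size are taken from the text, everything else is assumed:

```python
# Reciprocating scan over a 50 mm region with a 1 mm spot: a fixed point on the core is
# reheated once per out-and-back pass, so the interval between "temperature pulses" scales
# as 2 * scan_length / velocity, while the dwell time per pass scales as spot / velocity.
scan_length_mm = 50.0
spot_mm = 1.0

for v in (8, 10, 12, 14, 16, 20):                       # scanning velocities in mm/s
    dwell_ms = 1e3 * spot_mm / v                        # time the spot covers the point per pass
    pulse_period_s = 2 * scan_length_mm / v             # interval between successive passes
    print(f"v = {v:2d} mm/s: dwell per pass ~ {dwell_ms:5.1f} ms, reheated every ~ {pulse_period_s:4.1f} s")
```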
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406001 (2024)
Shape Sensing Based on Brillouin Optical Time Domain Analysis
Zijuan LIU, Jiaqi WU, Lixin ZHANG, Yongqian LI... and Kuan WANG
In recent years, optical fiber shape sensing technology has been studied extensively and applied widely in robotics, medicine, aerospace, structural monitoring of industrial equipment, and submarine cables. With changing application scenarios and steadily rising measurement performance requirements, research on optical fiber shape sensing technology has become increasingly urgent. At present, research on fiber shape sensing falls into two main directions. One is shape sensing based on FBGs, which exploits the wavelength shift of an FBG under strain and realizes shape measurement by writing FBGs into multi-core fiber; it has the advantages of high precision and simple data processing. Some scholars have studied this direction in depth, but the technology is limited by the number and spacing of the inscribed FBGs and cannot achieve long-distance distributed shape measurement. The other direction is shape sensing based on distributed optical fiber measurement systems. As the medium of shape sensing technology, optical fiber is small, lightweight, and highly resistant to electromagnetic interference and corrosion; it can serve as both a transmission medium and a sensing medium. When a light wave propagates in the fiber, its intensity, phase, frequency, and other parameters change with environmental parameters such as strain and temperature. Data processing equipment demodulates the modulated light to recover the strain and temperature information of the fiber. In this paper, Brillouin scattering in the fiber is used to reconstruct the shape of the fiber or of the measured object in contact with it: the strain changes of more than two fiber cores in the shape sensor are measured simultaneously, and a shape reconstruction algorithm then reconstructs the shape of the sensor or the measured object. A BOTDA system with a spatial resolution of 1 m is built, and a homogeneous low-crosstalk seven-core fiber from Changfei Company is selected as the distributed shape sensor. The total length of the fiber is 300 m, the core diameter is 8 μm, the cladding diameter is 150 μm, and the protective layer diameter is 245 μm. The six outer cores are located 42 μm from the center core and are symmetrically distributed around it at 60° intervals. The seven pigtails of the multi-core fiber are labeled and separated by a fan-in/fan-out coupler. Using the BOTDA system, the Brillouin gain spectra of the center core and the outer cores are measured; it is verified that the center core is not affected by bending and that the strain values of each pair of symmetric outer cores are opposite in sign. Three non-symmetric cores with a 120° distribution were selected, and the center core was used for temperature compensation to demodulate the strain of each core at different curvature radii.
Finally, a parallel transport frame shape reconstruction algorithm is used to reconstruct the shape of the seven-core fiber at curvature diameters of 0.112 m and 0.052 m. At a curvature diameter of 0.112 m, the curvature reconstruction error is 0.375%, which is mainly attributable to the low spatial resolution of the constructed system and to twisting introduced during winding. Distributed fiber shape sensing technology has very broad application prospects, but many technical difficulties still need to be overcome. The work in this paper lays a research foundation for subsequent distributed fiber shape sensing and has practical significance.
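A minimal sketch of how local curvature and bend direction can be recovered from the strains of three outer cores spaced 120° apart, as a step preceding the frame-based reconstruction; this is a standard textbook relation, not the authors' code, and the strain sign convention and numerical example below are assumptions:

```python
import numpy as np

CORE_ANGLES = np.deg2rad([0.0, 120.0, 240.0])   # azimuths of the three selected outer cores
CORE_OFFSET = 42e-6                             # radial offset of the outer cores from the center core, m

def curvature_from_strains(strains):
    """strains: length-3 array of bend-induced strains (temperature already compensated
    with the center core). Returns (curvature in 1/m, bend direction in rad)."""
    kx = -2.0 / (3.0 * CORE_OFFSET) * np.sum(strains * np.cos(CORE_ANGLES))
    ky = -2.0 / (3.0 * CORE_OFFSET) * np.sum(strains * np.sin(CORE_ANGLES))
    return np.hypot(kx, ky), np.arctan2(ky, kx)

# Example: a pure bend of radius 0.5 m toward 30°, with eps_i = -kappa * r * cos(theta_b - theta_i)
kappa_true, theta_true = 1 / 0.5, np.deg2rad(30)
eps = -kappa_true * CORE_OFFSET * np.cos(theta_true - CORE_ANGLES)
print(curvature_from_strains(eps))              # ~ (2.0, 0.5236)
```

The recovered curvature and bend angle per fiber segment are then fed to the frame-based algorithm to integrate the 3D shape.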
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406002 (2024)
Temperature Measurement Error and Its Influencing Factors of FBG Sensor under Rotor Whirling Conditions Based on Space-Coupled Transmission Method
Sitong CHEN, Junbin HUANG, Hongcan GU, Gaofei YAO... and Zheyu LI
The increased power of the motor directly results in a higher temperature rise in the rotor. High temperatures can cause turn-to-turn short circuits or permanent demagnetization of the motor rotor, which seriously affect the reliability of generator operation and the stability of the combat system. Therefore, research on online rotor temperature measurement technology is of great significance. Compared with electronic sensors, Fiber Bragg Gratings (FBGs) are resistant to electromagnetic interference, small in size, require no power supply, and can be used for quasi-distributed measurements, which is a great advantage for monitoring motor rotor temperature. However, research on FBG rotor temperature monitoring systems is still relatively rare. The state of motion of the motor rotor is an important factor affecting the accuracy of such a system, and the effect on measurement accuracy of rotor whirling caused by unbalanced mass is difficult to ignore and has not been studied. In this paper, a model of the FBG scanning spectrum under rotor whirling conditions is developed by combining the transmission matrix theory of FBGs with the coupled transmission theory of self-focusing lenses. The scanning error of the center wavelength and its influencing factors are investigated in conjunction with relevant experiments. The results show that whirling of the rotor leads to aberrations in the FBG scanning spectrum, which are mainly manifested as an offset of the reflection peaks and a reduction of the 3 dB bandwidth. The main factors affecting the temperature measurement error of the system are the rotor whirling frequency, the demodulator scanning frequency, the radial displacement of the rotor at the end face, the axial ratio of the axis trajectory, and the deflection angle of the axis. As the ratio of the coupling loss period to the spectral scan time (q-value) increases, the maximum center-wavelength scan error and the peak-seeking error both decrease rapidly at first and then decrease slowly toward stability. The key to keeping the system temperature measurement error low is to ensure that q>10. The peak-finding error of the Gaussian curve fitting method is reduced to the level of the centroid method when q>40. When the rotor radial vibration amplitude is 200 μm, the axial ratio of the axis trajectory is 3, and the axis deflection angle is 0.1°, then for a typical demodulator operating bandwidth of 40 nm and an FBG bandwidth of 0.3 nm, if the measurement error of the polyimide-coated FBG is to be less than 0.5 ℃, the q value should reach 115 or more. The corresponding scanning frequency must be approximately 1.74 times the whirling frequency. When the centroid method or the Gaussian curve fitting method is used for peak finding, it is only necessary to make the scanning frequency about 0.21 and 0.44 times the whirling frequency, respectively.
The centroid method is more advantageous than the Gaussian curve fitting method for rotors with strong whirling. In addition, the effect of the intensity of rotor whirling on the maximum scanning error of the FBG center wavelength and on the system temperature measurement error was investigated. The results show that, as the rotor radial displacement amplitude or axis deflection amplitude increases, the maximum scanning error of the center wavelength increases slowly at first and then linearly, while the peak-finding errors of the centroid and Gaussian curve fitting methods increase slowly and then dramatically. As the axial ratio of the rotor axis trajectory increases, the maximum scanning error and the peak-finding error of the center wavelength both increase dramatically at first and then slowly approach a steady state. When the amplitude of radial displacement is less than 200 μm and the axis deflection angle is less than 0.167°, the maximum temperature measurement error caused by the whirling motion is 2.9 ℃, as reflected by 15 sampling spectra under the condition of q=10 and an axial ratio of n=3. When peak detection is performed using the centroid method or the Gaussian curve fitting method, the temperature measurement error of the system is reduced to 0.6 ℃ and 1.1 ℃, respectively.
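A minimal sketch contrasting the two peak-finding approaches discussed above, centroid (center of gravity) versus Gaussian curve fitting, on a synthetic FBG scanning spectrum; the wavelength grid, bandwidth, and noise level are illustrative assumptions, and SciPy is assumed available:

```python
import numpy as np
from scipy.optimize import curve_fit

def centroid_peak(wl, power):
    # Center-of-gravity estimate over points above a simple threshold: cheap and robust.
    mask = power > 0.1 * power.max()
    return np.sum(wl[mask] * power[mask]) / np.sum(power[mask])

def gaussian_peak(wl, power):
    # Gaussian curve fit: returns the fitted center wavelength mu.
    gauss = lambda x, a, mu, sig: a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    p0 = [power.max(), wl[np.argmax(power)], 0.15]
    popt, _ = curve_fit(gauss, wl, power, p0=p0)
    return popt[1]

wl = np.linspace(1549.0, 1551.0, 401)                       # nm, one spectral scan
true_center, bw = 1550.0, 0.3                               # assumed FBG center and 3 dB bandwidth (nm)
spectrum = np.exp(-0.5 * ((wl - true_center) / (bw / 2.355)) ** 2)
spectrum += 0.02 * np.random.default_rng(0).standard_normal(wl.size)   # measurement noise
print(centroid_peak(wl, spectrum), gaussian_peak(wl, spectrum))
```

With a strongly distorted (whirling-affected) spectrum, the centroid estimate degrades more gracefully than the Gaussian fit, which is consistent with the comparison reported above.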
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406003 (2024)
Fiber Optic Temperature Sensor Based on Harmonic Vernier Effect of Sagnac Interferometer
Yuqiang YANG, Yuying ZHANG, Yuting LI, Jiale GAO... and Lei BI
Fiber optic interferometers have the advantages of small size, light weight, corrosion resistance, immunity to electromagnetic interference, and high sensitivity, and are widely used to measure temperature, humidity, magnetic field, and other parameters. In recent years, researchers have dramatically improved the measurement sensitivity of interferometric fiber-optic sensors by cascading or paralleling two fiber-optic interferometers to produce an optical Vernier effect. When the free spectral ranges of the two interferometers are close but not equal, the resulting Vernier effect is called the normal Vernier effect; when the free spectral range of one interferometer is about an integer multiple of that of the other, the resulting Vernier effect is called the harmonic Vernier effect. In this paper, a parallel optical fiber temperature sensor based on two Sagnac interferometers is proposed, in which interferometers SI1 and SI2 are connected to the two outputs of fiber-optic coupler C3, with SI1 as the reference interferometer and SI2 as the sensing interferometer. When the length of the Panda fiber in SI2 is approximately (i+1) times the length of the Panda fiber in SI1 (i = 0, 1, 2, …), the two interferometers produce an i-order harmonic Vernier effect. When i is 0, a normal Vernier effect is produced and the interference spectrum shows a single envelope; when i is 1, a first-order harmonic Vernier effect is produced and the spectrum shows a double envelope; when i is 2, a second-order harmonic Vernier effect is produced and the spectrum shows a triple envelope; higher orders follow by analogy. We carried out numerical simulations of the theoretical analysis: at constant temperature, the free spectral range of the interference spectrum of interferometer SI1 with a 520 mm Panda fiber is 9.13 nm, while SI2 interferometers with Panda fiber lengths of 572 mm, 953 mm, and 1 430 mm have free spectral ranges of 8.30 nm, 4.98 nm, and 3.32 nm, respectively. When the temperature is increased from T0 ℃ to T0+1 ℃, the interference spectra of the SI2 interferometers with different Panda fiber lengths all shift toward shorter wavelengths by about 1.89 nm, consistent with the theoretical analysis. The parallel interference spectra of SI1 with SI2 at Panda fiber lengths of 572 mm, 953 mm, and 1 430 mm show single, double, and triple envelopes, respectively, indicating that the two interferometers produce the normal Vernier effect and the first-order and second-order harmonic Vernier effects, and the theoretical calculations show that the magnification is 11 times in all three cases. When the temperature increases from T0 ℃ to T0+1 ℃, the single envelope moves toward shorter wavelengths, while the double and triple envelopes both move toward longer wavelengths, opposite to the shift of the single SI2.
In addition, the shifts of the single, double, and triple envelopes are all about 20.7 nm, because the Vernier magnification is the same for the normal Vernier effect and the first-order and second-order harmonic Vernier effects. Experimentally, the interference spectra of SI2 are blueshifted over the temperature range from 40 ℃ to 50 ℃, with shifts of about 1.89 nm, consistent with the theoretical analysis and simulation results. The temperature sensitivity of the sensor corresponding to the normal Vernier effect is -20.67 nm/℃, that corresponding to the first-order harmonic Vernier effect is 21.34 nm/℃, and that corresponding to the second-order harmonic Vernier effect is 21.18 nm/℃; the sensors corresponding to the harmonic and normal Vernier effects thus have nearly the same temperature sensitivity of about 21 nm/℃, consistent with the theoretical analysis and simulation results. These results show that the temperature sensitivity of the SI2 interferometer is independent of the Panda fiber length. Although the magnification is the same, the Panda fiber length detunings corresponding to the harmonic and normal Vernier effects are clearly different: the detuning for the normal Vernier effect is 52 mm, while the detunings for the first-order and second-order harmonics are -87 mm and -130 mm, respectively. This shows that the higher the order, the larger the detuning, increasing approximately in multiples; these experimental results are consistent with the theoretical analysis. Since a larger detuning makes the Vernier magnification easier to control and realize, the harmonic Vernier effect is clearly superior to the normal Vernier effect from the fabrication point of view. This study can provide an important reference for subsequent studies of the optical Vernier effect.
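A quick arithmetic check, an illustration rather than the authors' derivation, of the quoted Vernier magnification using the commonly used normal-Vernier relation M = FSR_ref / |FSR_ref - FSR_sen| and the free spectral ranges above:

```python
# Assumed relation for the normal Vernier effect; the reported magnification of 11 and the
# ~20.7 nm envelope shift per degree are consistent with it.
fsr_ref, fsr_sen = 9.13, 8.30        # nm, SI1 (520 mm) and SI2 (572 mm) free spectral ranges
M = fsr_ref / abs(fsr_ref - fsr_sen)
print(f"magnification M ~ {M:.1f}")                  # ~ 11
print(f"envelope shift ~ {M * 1.89:.1f} nm/degC")    # 1.89 nm/degC single-SI2 shift -> ~ 20.8 nm/degC
```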
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406004 (2024)
Small Torque Detection of Bolt Connection Based on Suspended-FBG
Chunfang RAO, Peng CHEN, Youde HU, Xuefeng ZHAN... and Wenxin YU
Loosening of a bolted connection structure affects its operation and safety. The main causes of loosening are loading, vibration, and friction; consequently, loosening is inevitable and its monitoring is important in applications. At present, the theory and technology for testing the tightness state of bolted connections are still immature, and the detection of small torque is a technical difficulty in this field. In this study, aimed at structures on uneven surfaces in narrow spaces, a Fiber Bragg Grating (FBG) was used as the sensor to identify the small torque of a bolted connection. In the test, a periodic vibration carrying bolt-tightness information was excited in the tested structure and used for identification. One tail of the suspended FBG was bonded to the tested structure, and the vibration produced periodic strains in the FBG tail, which acted as the source of an elastic longitudinal wave propagating along the optical fiber in which the FBG was written. The edge-filter method was used to demodulate the FBG sensor signals so as to accommodate the high-frequency signals. The information from the FBG was then used for identification. First, the Empirical Mode Decomposition (EMD) method was used to decompose the original signal; based on this, unstable components and noise were removed by calculating the correlation between each component and the original signal, and the signal was reconstructed for subsequent identification. The dimensional features (standard deviation, residuals, peak-peak value, and energy) and dimensionless features (skewness, kurtosis, waveform factor, amplitude factor, impact factor, and margin factor) of the signals were extracted and finally input to a recognition system based on the Support Vector Machine (SVM), where ten-fold cross-validation and a Gaussian-kernel SVM were used for higher accuracy. The results show that the recognition accuracy reaches 97.2% and the torque recognition capability is on the order of N·cm. This study proves that optical fiber is a good acoustic waveguide, and the suspended-FBG installation technique effectively mitigates the spectral distortion caused by uneven stress under direct adhesion, thereby reducing the complexity of sensor installation. At the same time, because the optical fiber as an acoustic waveguide does not sense torsional displacement and bending stress waves cannot propagate effectively in the fiber, the FBG senses only the vibration displacement along the fiber axis that produces the longitudinal wave. Therefore, the signal deviation caused by the excitation and sensor placement in the actual test is relatively small and bounded, which reduces the difficulty of signal processing. The study also shows that the signal processing and identification methods are suitable for the nonlinear, non-stationary, small-sample test data in this work. This study presents a new detection method for the bolted state, especially for detecting small torques in small-mass structures on uneven surfaces in narrow spaces.
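A minimal sketch of the feature-extraction and Gaussian (RBF) kernel SVM stage described above, assuming scikit-learn and signals that have already been EMD-denoised and reconstructed; the feature set below is a representative subset, not the authors' exact list:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def features(x):
    # A few dimensional and dimensionless time-domain features used as SVM inputs.
    rms, peak = np.sqrt(np.mean(x**2)), np.max(np.abs(x))
    return np.array([
        np.std(x), peak - np.min(x), np.sum(x**2),                        # std, peak-peak, energy
        ((x - x.mean())**3).mean() / np.std(x)**3,                        # skewness
        ((x - x.mean())**4).mean() / np.std(x)**4,                        # kurtosis
        rms / np.mean(np.abs(x)), peak / rms, peak / np.mean(np.abs(x)),  # waveform/crest/impact factors
    ])

def train(signals, torque_labels):
    """signals: list of 1D arrays; torque_labels: one class label (torque level) per signal."""
    X = np.array([features(s) for s in signals])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("10-fold CV accuracy:", cross_val_score(clf, X, torque_labels, cv=10).mean())
    return clf.fit(X, torque_labels)
```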
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406005 (2024)
Real-time Spectrum Analysis of Wideband RF Signals Based on Fractional Temporal Talbot Effect
Bo YANG, Lei ZHAO, Shuna YANG, and Hao CHI
To enhance the bandwidth and real-time capabilities of Radio Frequency (RF) spectrum analysis, optical real-time Fourier transform methods have been proposed. The optical real-time Fourier transform method based on the temporal Talbot effect has the advantage of a simple structure, using optical pulse sampling and a dispersive delay structure; however, its frequency measurement bandwidth is limited by the optical pulse repetition rate. Addressing this limitation, a real-time spectrum analysis scheme for wideband RF signals based on the fractional temporal Talbot effect is proposed and demonstrated. Based on the sampling and dispersion structure, the scheme maps the RF signal frequency to the optical pulse time interval. At the same time, the repetition rate of the optical pulses before sampling is multiplied by passing them through a dispersive element satisfying a fractional Talbot distance in advance. The frequency measurement bandwidth of the system can thus be significantly improved by exploiting the fractional Talbot effect. A proof-of-concept experiment is carried out to test the performance of the proposed scheme. The repetition period of the optical pulses is set to 151.5 ps, i.e., a repetition frequency of 6.6 GHz. Each pulse has a Gaussian shape with a full width at half maximum of approximately 30 ps. Dispersion-compensating fiber provides the dispersion for the system; the total dispersion of the two fiber sections is about 3 650 ps², which differs from the theoretical value by about 0.1%. The RF signal under test is generated by an RF signal generator. The output optical signal is converted into an electrical signal by a photodetector with a 40 GHz bandwidth and recorded by a sampling oscilloscope with a bandwidth of 50 GHz. Comparing the experimental results under integer-order, 3rd-order fractional, and 9th-order fractional temporal Talbot conditions verifies that the measurement bandwidth of the system increases with the order of the temporal Talbot effect. Real-time spectral analysis of single-tone and two-tone RF signals within a 29.7 GHz bandwidth is achieved using the 9th-order fractional temporal Talbot effect. Numerical simulation is carried out to perform time-frequency analysis of a large-bandwidth linear chirp signal: based on the 3rd-order fractional temporal Talbot effect, a linear chirp signal with a frequency range of 2~13 GHz and a chirp rate of 2.2 GHz/ns is successfully identified. The numerical simulation results further verify that this scheme can effectively analyze frequency-transient signals. The main sources of frequency measurement error include the timing jitter of the input optical pulse train, the limited bandwidth of the pulse detection system, the deviation of the dispersion value from the theoretical value, and higher-order dispersion terms. In the experiment, the root-mean-square timing jitter of the optical pulses is approximately 1 ps, which is 0.066% of the period of the optical pulse train. At a frequency measurement bandwidth of 29.7 GHz, the frequency measurement error caused by the timing jitter is about 200 MHz.
To improve frequency measurement accuracy, methods such as reducing optical pulse jitter, increasing the bandwidth of the pulse detection system, and compensating for high-order dispersion can be used. It should be noted that increasing the frequency measurement bandwidth sacrifices the frequency resolution of the system; in practical applications, the system bandwidth and frequency resolution requirements should both be considered when selecting the order of the fractional temporal Talbot effect. With its simple structure, large bandwidth, and real-time processing, the scheme has potential application value in fields such as broadband radar and cognitive radio.
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406006 (2024)
Design of Self-collimating Fiber FP Cavity and Localization of Sound Source Based on Silver Film
Yilin GUO, Yihao LI, Binbin LUO, Xue ZOU... and Mingfu ZHAO
Sound source localization is a pivotal research area within acoustics, with extensive applications in domains such as Unmanned Aerial Vehicle (UAV) navigation, intelligent traffic systems, medical imaging, and structural health monitoring. Traditional sound source localization methods typically rely on arrays of multiple microphones or on sensor networks, but these conventional approaches suffer from complex installation, intricate data processing, and poor resistance to interference. In recent years, considerable attention has been directed toward optical fiber-based acoustic localization, in which most detection systems employ Fiber Bragg Grating (FBG) sensor arrays because of their wavelength-based multiplexing capability; however, FBG sensors are limited in sensitivity. In contrast, optical fiber Extrinsic Fabry-Perot Interferometer (EFPI) sensors, with their probe-like structure, high sensitivity, and structural simplicity, are better suited to sound source localization. In this work, optical fiber collimators are introduced into an EFPI sensor array to develop a self-collimating optical fiber EFPI acoustic sensor array, with the primary objective of increasing the sound pressure sensitivity and detection range of the array. The designed sensor array exhibits elevated acoustic sensitivity and an expanded spatial detection range, and thus holds great potential for sound source localization and partial discharge detection. First, the optical field distribution of a quarter-pitch graded-index multimode fiber was verified using Rsoft software. An EFPI acoustic sensor with a self-collimating optical fiber was then designed and compared with an EFPI acoustic sensor without the self-collimating feature to assess whether the proposed sensor exhibits enhanced sound pressure sensitivity. Next, three EFPI acoustic sensors with identical structures and self-collimating optical fibers were fabricated for sound source localization experiments. Before the localization experiments, the consistency of the sound pressure sensitivity and sound source directionality of the three sensors was verified. Time-delay signals were then acquired with an intensity demodulation technique and recorded on an oscilloscope; conventional cross-correlation algorithms were used to calculate the time delays between pairs of sensors, and, based on the geometric positions of the sensor array, the approximate sound source location was estimated. The experimental results show that the interference spectrum FSR of the EFPI sensor with a collimator is 5.25 nm and the maximum fringe visibility is 14.96 dB, while the EFPI sensor without a collimator has an FSR of 5.18 nm and a maximum fringe visibility of 9 dB. The FSRs are almost the same, but the interference spectral intensity of the former is increased by about 6 dB.
In addition, the EFPI spectral slopes without and with collimators were 6.5 dB/nm and 10.2 dB/nm, respectively; the spectral slope of the latter is nearly twice that of the former. In the response characterization of a single sensor, the EFPI acoustic sensor with a collimator outperforms the sensor without a collimator in both the sound pressure response waveform and the sound pressure sensitivity test. The EFPI acoustic sensor with a collimator has a sound pressure sensitivity of 185 mV/Pa, a minimum detectable sound pressure of 52.7 μPa/√Hz at 500 Hz, and a signal-to-noise ratio of 62 dB. In the sound source directionality experiment, the designed sensor performed well for different sound source directions: when the sound source was placed directly in front of the sensor, the sound pressure sensitivity reached 185 mV/Pa, and when the sound source was placed to the side of the sensor (90°, 270°), the sound pressure sensitivity still reached 177 mV/Pa, indicating that the sensor array can achieve sound source localization over a wide angular range. In the two-dimensional plane sound source localization experiment, the signal delays in the time-domain signals are extracted by the cross-correlation algorithm, and two-dimensional sound source localization within a 200 cm×200 cm area is finally realized. The theoretical spatial resolution is 0.71 cm, and the maximum positioning error of the system is no more than 2.8 cm. Finally, a performance comparison with other EFPI acoustic arrays shows that the system has the advantages of high sensitivity, low production cost, a simple demodulation system, and a large detection range.
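A minimal sketch of the cross-correlation time-delay estimation step used between sensor pairs; the sampling rate and pulse shape below are illustrative, and this is not the authors' processing code:

```python
import numpy as np

def time_delay(sig_ref, sig_delayed, fs):
    """Arrival-time difference (s) of sig_delayed relative to sig_ref, both sampled at fs (Hz)."""
    corr = np.correlate(sig_delayed, sig_ref, mode="full")
    lag = np.argmax(corr) - (len(sig_ref) - 1)      # lag (in samples) at the correlation peak
    return lag / fs

# Example: the same acoustic pulse reaching a second sensor 0.5 ms later (fs = 1 MHz)
fs = 1e6
t = np.arange(0, 5e-3, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 1e-4) ** 2)
print(time_delay(pulse(1.0e-3), pulse(1.5e-3), fs))   # ~ 5e-4 s
```

Each pairwise delay, multiplied by the speed of sound, constrains the source to a hyperbola; intersecting the hyperbolas defined by the known sensor positions yields the two-dimensional source estimate.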
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0406007 (2024)
Machine Vision
Global Low Bias Visual/inertial/weak-positional-aided Fusion Navigation System
Yufeng XU, Yuanzhi LIU, Minghui QIN, Hui ZHAO, and Wei TAO
In recent years, the rapid development of mobile robots, autonomous driving, drones, and other technologies has increased the demand for high-precision navigation in complex environments. Visual-inertial odometry is widely used in robot navigation because of its low cost and high practicality; however, due to its relative measurement principle, the cumulative error can grow significantly during long-term operation. To solve this problem, a global low-bias visual/inertial/weak-positional-aided fusion navigation system is proposed. The system provides optional interfaces to integrate several unbiased positioning sources, such as raw Global Navigation Satellite System (GNSS) measurements, UltraSonic (US) base-station ranging information, and visual target positioning aids, fully combining the advantages of global information and visual-inertial odometry. High-precision, highly continuous, real-time, indoor/outdoor-integrated low-bias global navigation results are thus obtained. The main framework of the system is a factor graph model built on visual-inertial odometry, which ensures high-frequency pose output and seamless indoor/outdoor switching. The visual-inertial residual factors are defined from the visual reprojection model and the IMU pre-integration model. For different application scenarios, GNSS constraints and ultrasonic constraints are introduced as optional factors and state quantities, and GNSS and ultrasonic residuals are defined: the GNSS factor constructs residuals from pseudorange measurements and Doppler shift information, while the ultrasonic factor constructs residuals from the ultrasonic positioning results and the ultrasonic base-station distance measurements. An optional ArUco visual correction module is also provided: based on prior ArUco marker positions and an ArUco target recognition algorithm, an ArUco-assisted global pose optimization method is defined. A wheeled robot platform equipped with multiple sensors, including cameras and LiDAR, was built to collect data and test the algorithm in an underground parking lot and the connected above-ground building complex. The experimental scene was scanned by a laser scanner to generate a ground-truth map, and the VIO-assisted LiDAR point clouds were registered to the prior map to obtain accurate ground-truth trajectories. The positioning and navigation performance of the proposed method, VINS-Mono, and ORB-SLAM3 was tested in three scenarios: indoor, indoor-outdoor during the day, and indoor-outdoor at night. The test results show that in all three scenarios the RPE and ATE of the proposed method are superior to those of the other methods. In particular, under harsh conditions, the ATE RMSE of the proposed method is 3.495 m in indoor-outdoor scenes at night, significantly better than VINS-Mono (10.77 m) and ORB-SLAM3 (15.02 m).
In addition, the experiments compare the proposed method with the VIO+ArUco module against VINS-Mono with loop closure detection enabled, showing that the introduction of the ArUco module is significant for eliminating global cumulative error and improving global navigation accuracy, and can to some extent address the failure of loop closure detection. In general, this paper presents an extensible, multi-modal, weakly aided visual-inertial navigation system. The experimental results show that the proposed system has excellent global positioning accuracy and generality across different scenes. In future work, the range of multi-modal information can be further expanded to explore the integration of LiDAR, magnetometers, and other sensing modalities.
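A minimal sketch, purely illustrative and not the authors' implementation, of how the optional ranging measurements can enter the factor graph as simple residuals on the corresponding state; the GNSS term below omits atmospheric and satellite clock corrections, and the noise values are assumptions:

```python
import numpy as np

def ultrasonic_range_residual(p_world, base_xyz, measured_range, sigma=0.05):
    """Whitened residual: predicted distance to the ultrasonic base station minus the measurement."""
    return (np.linalg.norm(p_world - base_xyz) - measured_range) / sigma

def gnss_pseudorange_residual(p_ecef, clock_bias_m, sat_ecef, pseudorange, sigma=1.0):
    """Whitened residual: geometric range plus receiver clock bias (meters) minus the pseudorange."""
    return (np.linalg.norm(p_ecef - sat_ecef) + clock_bias_m - pseudorange) / sigma

# In the joint optimization, these scalar residuals are appended as optional factors to the
# stacked visual reprojection and IMU pre-integration residuals of the corresponding states.
```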
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0415001 (2024)
Enhancing Aircraft Object Detection in Complex Airport Scenes Using Deep Transfer Learning
Dan ZHONG, Tiehu LI, and Cheng LI
Within the civil aviation airports of China, intricate traffic scenarios and substantial traffic flow are pervasive, and conventional monitoring methods, including tower observation and scene reports, are vulnerable to errors and omissions. Aircraft object detection in airport scenes remains a challenging computer vision task, particularly under complex environmental conditions: severe occlusion of aircraft objects, the dynamic nature of airport environments, and the variability of object sizes all make accurate detection difficult. In response to these challenges, we propose an enhanced deep learning model for aircraft object detection in airport scenes. Given the practical constraint of limited hardware computing power at civil aviation airports, the proposed method adopts the ResNet-50 model as the backbone network. After pre-training on publicly available datasets, transfer learning is employed for fine-tuning in the specific target domain of airport scenes. Deep transfer learning enhances the feature extraction capability of the model, ensuring better adaptation to the limited aircraft dataset in airport scenarios. Additionally, we incorporate an adjustment module, consisting of two convolution layers with a residual structure, into the backbone network; the adjustment module increases the receptive field of deep feature maps and improves the model's robustness. The proposed method also introduces a Feature Pyramid Network (FPN), establishing lateral connections across the stages of ResNet-50 together with top-down connections. The FPN generates and extracts feature information at multiple scales and fuses the feature maps, improving the accuracy of multi-scale target detection. Furthermore, the detection head, composed of parallel classification and regression branches, has been optimized to balance detection accuracy and real-time performance, so that bounding boxes and classification results are generated quickly and accurately at the model output. The loss function combines weighted target classification loss and localization loss, with GIoU loss used for the localization term. We also construct a comprehensive airport scene dataset, named Aeroplane, to evaluate the effectiveness of the proposed model. The dataset contains real images of diverse aircraft against various backgrounds and scenes, including challenging weather such as rain, fog, and dust, as well as different times of day such as noon, dusk, and night. Most of the color images were captured by camera equipment deployed at terminal buildings, control towers, ground sentry posts, and other locations of a civil aviation airport surveillance system in China. The diversity of the dataset helps improve the generalization performance of the model. The Aeroplane dataset is structured according to standard conventions and is scalable for future expansion.
We conduct experiments on the Aeroplane dataset. The experimental results demonstrate that the proposed model outperforms classic approaches such as RetinaNet, Inception-V3+FPN, and ResNet-34+FPN. Compared with the baseline method, ResNet-50+FPN, our model achieves a 4.9% improvement in average precision for single-target aircraft detection, a 4.0% improvement for overlapped aircraft detection, and a 4.4% improvement for small-target aircraft detection on the Aeroplane dataset; the overall average precision is improved by 2.2%. Experimental validation shows that the proposed model achieves a significant performance improvement in aircraft target detection within airport scenarios and exhibits robust adaptability to various airport environments, including non-occluded, occluded, and complex scenes such as nighttime and foggy weather, validating its practicality in real-world airport settings. The balanced design of real-time performance and accuracy makes the approach feasible for practical applications, providing a reliable aircraft target detection solution for airport surveillance systems and offering useful insights for object detection tasks.
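A minimal sketch of the GIoU loss used for the localization term, in its standard formulation (not necessarily the authors' exact code): L_GIoU = 1 - IoU + |C \ (A ∪ B)| / |C|, where C is the smallest box enclosing prediction A and ground truth B.

```python
def giou_loss(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2); returns the GIoU loss between one prediction and one target."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    cw = max(ax2, bx2) - min(ax1, bx1)          # width of the smallest enclosing box
    ch = max(ay2, by2) - min(ay1, by1)          # height of the smallest enclosing box
    giou = iou - (cw * ch - union) / (cw * ch)
    return 1.0 - giou

print(giou_loss((0, 0, 10, 10), (5, 5, 15, 15)))   # ~ 1.08 for two partially overlapping boxes
```

Unlike a plain IoU loss, the GIoU term still provides a gradient when the predicted and ground-truth boxes do not overlap, which helps regression converge for heavily occluded or small targets.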
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0415002 (2024)
Optical Device
Artificial Photoelectric Neuron Based on Organic/inorganic Double-layer Memristor
Binglin LAI, Zhida LI, Bowen LI, Hongyu WANG, and Guocheng ZHANG
Currently, data processing computing systems rely primarily on the Von Neumann architecture, which employs serial data processing and physically separates the processor from the storage unit, resulting in data transmission delays. These delays not only reduce efficiency but also increase energy consumption. Neuromorphic computing has gained significant attention for its ability to process large amounts of data with minimal power consumption, and artificial neurons, as crucial components of this technology, have been extensively researched. The primary function of these devices is to receive and integrate input signals from synapses and to generate spike outputs when a threshold is exceeded. Artificial neurons typically use a threshold function to determine whether the integrated synaptic signals reach the threshold: they receive postsynaptic current from the previous synapse as input and output a voltage spike to the front end of the next synapse, firing like biological neurons to exchange information. Researching more efficient and precise artificial neural devices is therefore of great significance for processing complex information, and it is important to continue exploring the potential of memristor-based artificial neurons. Memristor-based artificial neurons offer advantages such as high stacking density, low power consumption, and fast switching speed, making them inherently closer to biological neurons. They are currently categorized into three main types: electrochemical-mechanism-based, valence-change-mechanism-based, and phase-change-mechanism-based. To process complex information more efficiently in neuromorphic computing, we propose an artificial neuron device based on an Ag/IDTBT/ZnO/Si memristor. The device exhibits good threshold characteristics, with a switching ratio of about 10²~10³ and a low operating voltage, and can emulate a Leaky Integrate-and-Fire (LIF) neuron model, with the firing time of the device inversely related to the amplitude of the applied pulses. Increasing the applied pulse amplitude from 0.8 V to 1 V decreases the integrated firing time of the device from 5.22 s to 1.19 s; the firing time decreases as the amplitude increases. Notably, if the applied pulse amplitude is too low, the neuron device cannot be activated, while if it is too high, the device breaks down irreversibly and the internal lattice structure of the material is permanently damaged. In complex neuromorphic computation, artificial neurons require tunable performance to adapt to their environment. We therefore investigated the impact of Indacenodithiophene-co-benzothiadiazole (IDTBT) concentration on the performance of the artificial neural device. The results indicate that increasing the IDTBT concentration increases the film thickness, which in turn raises the threshold voltage of the neural device and the pulse amplitude required for integrate-and-fire operation. Currently, most artificial neurons are driven by electrical signals.
However, electrical signals have drawbacks such as high power consumption, limited triggering options, and difficulty in simulating visual systems, which hinder further improvements in computing speed and energy efficiency. In contrast, optical signals offer significant advantages in energy efficiency, bandwidth, crosstalk, and computational speed. To enhance the operating speed of the neuromorphic system, we investigated the photoelectric synergy of the optoelectronic neuromorphic device and the impact of light on device performance. Upon illumination, the device's threshold voltage decreased significantly from 1.99 V to 1.62 V. To assess the device's stability, we retested it after 30 days of storage; comparing the switching ratio and threshold voltage over the two periods, both remain stable at approximately 10³ and 1.79 V, respectively. The overall performance of the device is stable without significant changes, indicating good stability. This work presents an effective strategy for promoting the development of neuromorphic systems.
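A minimal sketch of an idealized leaky integrate-and-fire neuron, not a physical model of the Ag/IDTBT/ZnO/Si device, illustrating why the firing time shortens as the applied pulse amplitude increases; the threshold and leak time constant below are illustrative:

```python
def time_to_fire(pulse_amplitude, threshold=1.0, tau=2.0, dt=1e-3, t_max=20.0):
    """Leaky integrate-and-fire: du/dt = -u/tau + pulse_amplitude; returns the first-crossing time."""
    u, t = 0.0, 0.0
    while t < t_max:
        u += dt * (-u / tau + pulse_amplitude)   # forward-Euler integration of the leaky state
        t += dt
        if u >= threshold:
            return t
    return None   # drive too weak: the state never reaches threshold, so the neuron never fires

for amp in (0.55, 0.8, 1.0):
    print(amp, time_to_fire(amp))                # firing time decreases as the amplitude increases
```

The same qualitative trend, shorter integration time at higher pulse amplitude and no firing below a minimum amplitude, is what the measured device behavior above reflects.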
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0423001 (2024)
Remote Sensing and Sensors
Co-axis Alignment and Radiance Transfer Calibration of Direct Lunar Irradiance Meter
Xinrui WANG, Xin LI, Yan PAN, Ping LI... and Mengmeng QIN
With the continuous development of space remote sensing instruments and the increasing quantitative application requirements of remote sensing products, the necessity and importance of high-precision calibration of space remote sensing instruments are increasingly apparent. The moon's reflectance changes by about 10⁻⁸ per year; this high stability, together with the repeatability of its radiance at the same observation geometry, makes the moon very suitable as a reference radiation source and a hot research topic in radiometric calibration at home and abroad. Realizing this function, however, requires a large amount of ground-based observation data to support the establishment of a high-precision lunar radiation model, so it is urgent to develop radiometric instruments capable of long-term precise tracking and measurement of the moon. Accurate test data and laboratory radiometric calibration are the two key factors that directly affect the acquisition of high-precision ground-based observation data for ground-based lunar radiometric instruments; among them, keeping the tracking camera and the signal detection lens on the same optical axis is the key factor affecting test accuracy, and the low-light radiometric calibration coefficient is the key factor affecting calibration accuracy. The full field of view of the tracking camera of the lunar direct irradiance meter developed in this paper is 2.13°×1.60°, and the full field of view of the signal detection lens is 1°. Because both fields of view are small, the optical axes of the tracking camera and the signal detection lens must be strictly parallel. If they are not aligned to the same optical axis, then even though the tracking camera tracks the moon in real time and keeps it at the center of its field of view, the moon will not be at the center of the detection lens's field of view, resulting in no detected signal or inaccurate data because the signal is too weak. To ensure the accuracy of the test data, this paper combines a laser level with a flat mirror to simulate the moon in the laboratory and align the tracking camera and detection lens on the same optical axis, avoiding inaccurate data caused by axis misalignment. The experimental results show that the error of this laboratory co-axis alignment is within ±0.03°. In addition, the visible, infrared 1, and infrared 2 modules undergo radiation transfer calibration, determining the coefficient relationship between the detector's output Digital Number (DN) values and the irradiance, which also directly affects the establishment of the lunar radiation model. To ensure the accuracy of the instrument's calibration coefficients, this paper uses a standard lamp traceable to the national metrology benchmark to transfer the calibration to a low-light lamp, using a combination of a laser rangefinder, a laser level, and a flat mirror to ensure the accuracy of the transfer calibration and to overcome the detector nonlinearity errors and long-calibration-distance errors caused by direct use of the standard lamp.
In theory, this can improve the accuracy of low-light calibration. This paper aims to provide a universal, feasible solution to the problem that co-axis alignment in the field is limited by environmental conditions, and to the problems of transfer radiometric calibration accuracy and detector nonlinear response when low-light sources are used for the lunar direct irradiance meter in the laboratory. The solution offers an approach for laboratory optical axis alignment and transfer calibration of integrated automatic observation remote sensing instruments with tracking and measurement capabilities, and has important reference value.
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0428001 (2024)
Radiometric Calibration Method for Infrared Camera Based on Power-weighted Least Squares Fitting
Zhanlei JIN, Libing JIN, Jiushuang ZHANG, Lina XU... and Yan LI
Radiometric calibration is an important step in the high-precision quantitative application of infrared remote sensing cameras: relative radiometric calibration is needed to obtain uniform images, and absolute radiometric calibration is needed to obtain the true radiation value of the target by establishing the quantitative relationship between the entrance-pupil radiance and the detector output DN value, from which the true radiance of the detected signal is inverted. In recent years, the requirements on radiometric calibration have become increasingly stringent, so it is necessary to analyze in depth and suppress the main factors affecting the radiometric calibration accuracy of infrared cameras. Many factors affect the calibration accuracy of an infrared system. Analysis of the radiometric calibration accuracy of the infrared camera shows that, in addition to the non-uniformity of the blackbody temperature, the nonlinearity of the system response is an important source of radiometric calibration error, and improving the response linearity is an important means of improving the radiometric calibration accuracy. Least squares is the most commonly used data-fitting method; it treats all data points as equally weighted and finds the best matching function by minimizing the sum of squared errors. The weighted least squares used by the Changchun Institute of Optics, Fine Mechanics and Physics of the Chinese Academy of Sciences reduces the linearity deviation caused by temperature fluctuations of the calibration blackbody. For systematic errors such as the nonlinearity of the low-end response of the detector, however, how to use a better fitting algorithm to balance these errors has not been publicly studied. Since response nonlinearity is an important source of radiometric calibration error for infrared cameras, a ground and on-orbit radiometric calibration method based on weighted least squares fitting is proposed to improve the calibration accuracy. First, the shortcomings of existing linearity calculation methods are analyzed and a new linearity evaluation method based on deviation/measured value is proposed; it shows that the low-end temperature deviation of ordinary least squares fitting is an order of magnitude larger than that at the high end, so weighted fitting is needed to improve the response linearity of the system. Based on an analysis of the temperature and radiance fitting functions of different spectral bands, a ground and on-orbit radiometric calibration method based on least squares linear fitting weighted by a reciprocal power of the radiance is proposed. The fitting is better when the weight power is n=1 or n=2; weighted least squares fits with power n=1 were then established for the outer blackbody linear calibration equation, the inner blackbody linear calibration equation, and the inner-to-outer blackbody radiance conversion model.
For high-precision two-point calibration of the on-orbit blackbody, a radiance conversion model between the inner and outer blackbodies based on weighted least squares fitting was established. The data processing results show that, after weighted linear fitting, the low-end outer blackbody inversion deviation decreases from 1.63 K to 0.52 K and the inner blackbody inversion temperature deviation decreases from 0.83 K to 0.39 K, so the system response linearity is significantly improved. In view of the on-orbit degradation of the blackbody emissivity, a correction method for the on-orbit blackbody radiometric calibration of the camera based on stellar calibration is proposed, and conversion models among the outer blackbody radiance, the inner blackbody radiance, and the stellar radiance are established. After the camera is in orbit and stable, blackbody calibration and stellar calibration can be carried out and a relative-relationship calibration equation established. Since most stellar brightness measurements are made in the bands of interest to astronomical research, which generally do not coincide with the Earth observation bands of remote sensing satellites, and since the stellar radiance is not equal to that of the on-board blackbody, inconsistencies may arise in the calibration equation; however, stellar stability is very good, so stars can be used for long-term stability monitoring and correction of the on-board blackbody.
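A minimal sketch, illustrative rather than the authors' calibration code, of a linear DN-versus-radiance fit with weights proportional to a reciprocal power of the radiance, w_i = 1/L_i^n, which emphasizes the low-radiance points where an ordinary fit deviates most; the synthetic response below is an assumption:

```python
import numpy as np

def weighted_linear_fit(L, DN, n=1):
    """Return (gain, offset) of DN ~ gain * L + offset under weights w = 1 / L**n (n = 0 is unweighted)."""
    w = 1.0 / L**n
    A = np.vstack([L, np.ones_like(L)]).T
    W = np.sqrt(w)[:, None]                      # whitening: solve (sqrt(w) A) x = sqrt(w) DN
    gain, offset = np.linalg.lstsq(W * A, np.sqrt(w) * DN, rcond=None)[0]
    return gain, offset

# Example with a slight low-end droop in the response: the n=1 weighted fit tracks the
# low-radiance points more closely than the ordinary (n=0) fit does.
L = np.linspace(0.5, 10.0, 12)                   # blackbody radiance levels (arbitrary units)
DN = 100 * L + 50 - 8 * np.exp(-L)               # synthetic detector response with low-end nonlinearity
print("unweighted:", weighted_linear_fit(L, DN, n=0))
print("n = 1     :", weighted_linear_fit(L, DN, n=1))
```

Inverting temperatures through the weighted fit rather than the ordinary one is what reduces the low-end inversion deviations quoted above.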
Acta Photonica Sinica
- Publication Date: Apr. 25, 2024
- Vol. 53, Issue 4, 0428002 (2024)