Precise timing synchronization is crucial for large-scale accelerator systems, where fiber length differences and long-distance optical transmission introduce timing discrepancies that significantly impact system stability and device triggering accuracy.
This study aims to develop an automatic delay compensation algorithm for event timing systems to address synchronization issues caused by fiber length differences and improve system stability over long-distance optical fiber transmission.
The delay compensation algorithm employed a depth-first multi-node approach and was implemented on an FPGA-based hardware platform using a 125 MHz event clock. Firstly, the delay compensation process was realized through a combination of hardware and software, with the core algorithm consisting of a time acquisition (TAQ) module responsible for measuring fiber-induced delays and a time delay compensation module for adjusting synchronization. Then, the algorithm was designed to store delay values in hardware registers using shift register technology, enabling immediate compensation without recalibration across power cycles. Finally, the system's performance was validated through comprehensive testing with fiber lengths ranging from 6 m to 30 m under various operational conditions.
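The measure-then-compensate idea described above can be sketched in a few lines. This is an illustrative model, not the authors' FPGA implementation: it assumes a round-trip delay measurement, the nominal 5 ns·m⁻¹ fiber delay, and quantization to the 8 ns event-clock tick; all function names are hypothetical.

```python
# Illustrative sketch (not the authors' FPGA design): estimate each branch's
# fiber-induced delay from a round-trip measurement, quantize it to event-clock
# ticks, and derive per-branch delay-register values that align all branches.

EVENT_CLOCK_HZ = 125e6                  # 125 MHz event clock
TICK_NS = 1e9 / EVENT_CLOCK_HZ          # one clock tick = 8 ns
FIBER_DELAY_NS_PER_M = 5.0              # nominal propagation delay per metre

def one_way_delay_ticks(fiber_length_m):
    """Quantize the one-way fiber delay to whole event-clock ticks."""
    round_trip_ns = 2 * fiber_length_m * FIBER_DELAY_NS_PER_M
    return round(round_trip_ns / 2 / TICK_NS)

def compensation_ticks(lengths_m):
    """Delay-register values that align every branch to the longest fiber."""
    delays = [one_way_delay_ticks(length) for length in lengths_m]
    longest = max(delays)
    return [longest - d for d in delays]
```

For the 6 m and 30 m extremes tested in the study, the sketch yields one-way delays of 4 and 19 ticks, so the short branch is padded by 15 ticks while the long branch needs none, matching the intuition that compensation precision is bounded by the 8 ns tick.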
The test results show that this compensation algorithm achieves 8-ns time precision in delay compensation across different fiber configurations. The delay compensation successfully aligns signal outputs with a precision of approximately 5.5 ns for fiber lengths from 6 m to 30 m, demonstrating effective synchronization maintenance. The measured fiber delay of approximately 4.83 ns·m⁻¹ closely matches the theoretical value of 5 ns·m⁻¹, validating the measurement accuracy. The system demonstrates strong compatibility and flexibility through its adaptability to different accelerator configurations.
The proposed automatic delay compensation algorithm significantly enhances the synchronization accuracy and stability of event timing systems in accelerator applications, reducing dependency on fixed-length fiber installations and improving long-term operational reliability. The algorithm's universal design and compatibility make it suitable for complex large-scale scientific facilities, providing new possibilities for scientific research and engineering applications.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090201 (2025)
With the development of high-current pulsed power devices, intense pulsed γ radiation measurement and diagnostic techniques face new challenges as conventional methods are limited in spatial arrangement and interference resistance under extreme radiation environments.
This study aims to propose a scattering coded imaging system for accurately measuring the dose field intensity distribution of intense pulsed γ radiation, with an optimized design of the coded aperture to enhance the imaging performance of the system.
Firstly, a thin scattering target was introduced to reduce γ-ray intensity, protecting the imaging detector from dose-rate damage while minimizing changes to the pulsed radiation field parameters. Secondly, a ring aperture was selected as the coded aperture for the imaging system, with its inner diameter, ring width, and thickness optimized by genetic algorithms in conjunction with the Monte Carlo N-Particle Transport Code (MCNP). Then, to compare the spatial resolutions of the imaging systems based on the optimized ring aperture, a ring aperture without optimized parameters, and a pinhole aperture, the Maximum Likelihood Expectation Maximization (MLEM) algorithm was employed to reconstruct radiation source images for quantitative evaluation using line-pair sources of varying widths. Finally, bilateral filtering was applied to the reconstructed images to enhance contrast and achieve more uniform distribution under conditions of misalignment and non-uniform source intensity.
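The MLEM reconstruction step mentioned above follows a standard multiplicative update. Below is a minimal sketch on a toy 1-D problem; the system matrix `A` here is invented, standing in for the coded-aperture response that the study computes with MCNP, and the problem size is deliberately tiny.

```python
# Minimal MLEM (Maximum Likelihood Expectation Maximization) sketch.
# Model: detector counts y ≈ A @ x with a non-negative source vector x.
# Update: x_j <- x_j / s_j * sum_i A_ij * y_i / (A x)_i, with s_j = sum_i A_ij.

def mlem(A, y, n_iter=200):
    """Reconstruct a non-negative source x from detector counts y."""
    n_det, n_src = len(A), len(A[0])
    x = [1.0] * n_src                                   # flat positive start
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_src)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_src))
                for i in range(n_det)]                  # forward projection
        for j in range(n_src):
            back = sum(A[i][j] * y[i] / proj[i] for i in range(n_det))
            x[j] *= back / sens[j]                      # multiplicative update
    return x

# Hypothetical 2-detector, 2-pixel forward model.
A = [[0.8, 0.2],
     [0.3, 0.7]]
true_x = [10.0, 5.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x_hat = mlem(A, y)
```

Because the update is multiplicative, non-negativity of the reconstruction is preserved automatically, which is one reason MLEM is a common choice for emission and coded-aperture imaging.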
The genetic algorithm optimization results show that the optimal reconstructed-image correlation coefficient is achieved with an inner diameter of 2.730 cm, a ring width of 2.147 cm, and a thickness of 4.230 cm. The imaging system employing the optimized ring aperture demonstrates superior spatial resolution, with contrast ratios of 100% and 88% for 3 mm and 2 mm line pairs, respectively, compared with both the ring aperture without optimized parameters and the traditional pinhole aperture. Under conditions of misalignment and non-uniform source intensity, the optimized ring aperture maintains higher reconstruction quality, with correlation coefficients ranging from 0.796 4 to 0.862 8 across different source configurations.
The novel scattering coded imaging system proposed in this study achieves high robustness and accuracy within a compact space for measuring the intensity distribution of dose fields in intense pulsed γ radiation environments. The system demonstrates superior performance with the optimized ring aperture parameters, achieving correlation coefficients above 0.84, and maintains excellent imaging quality even under system misalignment conditions. This system provides a new approach for realizing radiation imaging in extreme radiation environments where conventional methods fail to meet the requirements.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090202 (2025)
Currently, the output correction factor (
This study aims to propose a refined method for correcting the output factor (
Firstly, based on the measurement data and technical parameters of the Elekta Synergy accelerator, models for the 10 MV FF and 10 MV FFF accelerator heads were constructed using EGSnrc/BEAMnrc program, and a computational model for the chamber's correction factor (
Calculation results show that significant differences exist in the calculated correction factor (
The method proposed in this study for refining the ionization chamber correction factor (
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090203 (2025)
Medical images inevitably contain noise and artifacts introduced during the imaging process, and segmentation algorithms are susceptible to such interference. To more accurately assist physicians in diagnosing liver diseases and planning surgery, accurate and stable automatic segmentation of the liver region in CT images is an urgent issue to be addressed.
This study aims to develop a novel image segmentation algorithm for accurate segmentation of the liver region from CT images.
Firstly, the Multi-feature Complementary and Adaptive Fusion Network (MCAF-Net) was proposed, with the Multi-feature Complementary Cross-Attention (MCCA) embedded into the bottleneck layer. Multi-feature complementation, generating abundant and complete feature representations, was realized through four different down-sampling modules to reduce information loss, and feature interaction to mitigate the effect of noise and artifacts was realized through cross-attention. Then, the encoder and decoder were connected through the Adaptive Multi-Scale Feature-fusion Module (AMFM) to recognize liver edges more accurately, and the perception of contextual and multi-scale information was enhanced by the Spatial Pyramid Pooling Fusion (SPPF) module together with adaptive feature fusion to achieve fine segmentation of liver edges. Subsequently, MCAF-Net was quantitatively compared with other mainstream segmentation algorithms on the LiTS2017 dataset, and the contributions of the MCCA and the AMFM were evaluated through ablation experiments on the same dataset. Finally, the noise-containing LDCT dataset was denoised before segmentation, and the segmentation performance of MCAF-Net and other mainstream algorithms was evaluated in the noise-containing environment.
The experimental results on the LiTS2017 and noise-containing LDCT datasets show that MCAF-Net outperforms other mainstream algorithms in mitigating the effects of noise and artifacts and in recognizing liver edges. The ablation experiments on the LiTS2017 dataset demonstrate the effectiveness of the AMFM and the MCCA in edge recognition and anti-interference. The DSC and Jaccard indices reach 96.24% and 92.83%, respectively, on the LiTS2017 dataset, and 94.90% and 90.63%, respectively, on the LDCT dataset.
The experimental results on the LDCT dataset show that MCAF-Net has definite anti-noise capability for liver segmentation in CT images, outperforming other mainstream algorithms.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090301 (2025)
Reactive transport processes involved in acid in-situ leaching of sandstone uranium deposits lead to changes in pore structure and reactive transport parameters, potentially causing pore blockage.
This study aims to conduct pore-scale simulations of multi-component reactive transport and to provide a methodological reference for investigating the mechanisms of pore structure evolution and changes in reactive transport parameters during acid in-situ leaching of uranium.
A pore-scale reactive transport model was used to directly simulate the evolution of pore structure and reactive transport parameters caused by the reactive transport process. Firstly, fluid flow and solute transport were modeled using the D2Q9 and D2Q5 schemes of the lattice Boltzmann method (LBM), respectively, and chemical reactions were coupled to simulate the dissolution of calcium carbonate minerals and the formation of gypsum precipitation in porous media. Then, the pore structure was updated based on mineral concentration thresholds, simulating the evolution of solute concentration, pore structure, and flow fields as reactive transport progressed. Finally, benchmark tests for calcium carbonate dissolution and gypsum formation were conducted to validate the model's accuracy, after which image analysis was applied to capture the evolution of key reactive transport parameters in the porous media.
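The D2Q5 solute-transport scheme mentioned above can be illustrated with a minimal collide-and-stream loop. This sketch assumes pure diffusion (zero flow), a plain square domain with periodic boundaries, and no chemistry, unlike the heterogeneous reactive media of the study; it only shows the mechanics of the method.

```python
# Minimal D2Q5 lattice Boltzmann sketch for solute diffusion (zero velocity).
# Each lattice site carries five populations; collision relaxes them toward
# the local equilibrium W[k]*C, and streaming shifts them along CX, CY.

W = [1/3, 1/6, 1/6, 1/6, 1/6]          # D2Q5 weights (rest + 4 neighbors)
CX = [0, 1, -1, 0, 0]
CY = [0, 0, 0, 1, -1]

def lbm_diffusion(nx=16, ny=16, tau=1.0, steps=50):
    """Diffuse a unit point pulse of solute; return the concentration field."""
    conc0 = [[0.0] * ny for _ in range(nx)]
    conc0[nx // 2][ny // 2] = 1.0       # pulse in the centre
    f = [[[W[k] * conc0[x][y] for k in range(5)] for y in range(ny)]
         for x in range(nx)]
    for _ in range(steps):
        # Collision: BGK relaxation toward the local equilibrium.
        for x in range(nx):
            for y in range(ny):
                c = sum(f[x][y])
                for k in range(5):
                    f[x][y][k] += (W[k] * c - f[x][y][k]) / tau
        # Streaming: move each population along its lattice velocity (periodic).
        g = [[[0.0] * 5 for _ in range(ny)] for _ in range(nx)]
        for x in range(nx):
            for y in range(ny):
                for k in range(5):
                    g[(x + CX[k]) % nx][(y + CY[k]) % ny][k] = f[x][y][k]
        f = g
    return [[sum(f[x][y]) for y in range(ny)] for x in range(nx)]
```

The collision step conserves the local concentration by construction, so the total solute mass is preserved exactly while the pulse spreads, which is the property that makes LBM attractive for coupling with mineral dissolution and precipitation updates.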
The simulation results demonstrate that the LBM effectively simulates the evolution of pore structure, flow fields, solute concentration distributions, and reactive transport parameters over leaching time in heterogeneous porous media. Benchmark tests show that the dissolution rate of calcium carbonate under standard conditions is 4.39×10⁻⁸ mol·cm⁻²·s⁻¹, consistent with the reported range of (4.18~4.59)×10⁻⁸ mol·cm⁻²·s⁻¹. The simulation of gypsum formation shows an error of less than 5% compared to theoretical calculations. Additionally, the evolution of reactive transport parameters reveals that porosity increases monotonically, while reactive surface area decreases monotonically. Tortuosity and permeability fluctuate within ranges of 97.2%~100% and 99.9%~112.3%, respectively. However, the specific evolution patterns of reactive transport parameters are influenced by the initial pore structure and mineral distribution.
Pore-scale reactive transport simulations proposed in this study offer deeper insights into the mechanisms behind pore structure evolution and blockage formation during acid in-situ leaching. Furthermore, this approach provides dynamic reactive transport parameters that serve as valuable references for simulating leaching processes in well fields.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090302 (2025)
At present, existing online dose monitoring instruments suffer from problems of non-response and dose leakage when measuring the mixed pulsed radiation fields generated by ultra-short, ultra-intense laser facilities. Hence, developing radiation monitoring instruments suitable for the environment surrounding laser facilities is of great significance.
This study aims to design a high-sensitivity neutron dosimeter with robust anti-interference capability in pulsed neutron-dominated radiation environments by simulation.
Firstly, based on the structure of the A-B rem meter, a boron-coated current-mode ionization chamber was designed, and the detector was optimized using FLUKA simulations. Then, the neutron detection efficiency was simulated for boron coating thicknesses from 0.4 μm to 4.2 μm to obtain the optimized boron thickness, and the neutron sensitivity over a wide energy range was simulated for polyethylene thicknesses from 7.75 cm to 10.75 cm with the boron coating thickness fixed at 4.2 μm. Finally, the neutron and photon fields of the Station of Extreme Light (SEL) at the Shanghai HIgh repetitioN rate XFEL and Extreme light facility (SHINE) were used as the radiation source terms for testing the response performance and the n-γ discrimination capability in this specific radiation field.
The simulation results show that the neutron detection efficiency reaches 5.29% when the polyethylene thickness is 7.75 cm and the boron coating thickness is 1.4 μm. The neutron sensitivity is up to 4.2×10⁻¹⁴ A·(n·cm⁻²·s⁻¹)⁻¹ within the energy range of 0.025 eV~200 MeV. Under the mixed neutron and γ radiation field conditions at SHINE-SEL, the equivalent response of the detector to the neutron field with a dose rate of 71.2 μSv·h⁻¹ is 0.022 pA·(n·cm⁻²·s⁻¹)⁻¹, three orders of magnitude higher than its response to the photon field with a dose rate of 151.2 μSv·h⁻¹, demonstrating good resistance to γ interference.
This study demonstrates the feasibility of applying the boron-coated ionization chamber detector for high-sensitivity radiation dose monitoring in pulsed neutron fields.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090401 (2025)
The measurement and classification of radioactive waste are key to waste minimization management.
This study aims to propose a method for correcting neutron detection efficiency, thereby improving the accuracy of activity measurements for waste drums.
A machine learning-based method for radioactive source localization was proposed. First of all, a typical 200 L waste drum with a diameter of approximately 56 cm and a height of approximately 90 cm was taken as the object, and the spatial detection efficiency of the system was simulated with a Monte Carlo program for drums filled with different matrices. Then, through artificial neural network training, a mapping model between detector response and source position distribution was established. Subsequently, the source position could be predicted from the detector response characteristics, thereby achieving effective correction of the system's detection efficiency. In addition, a simulation of the activity measurement of a ²⁵²Cf source was performed to verify the accuracy of this method.
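The efficiency-correction step at the end of this workflow can be sketched simply: once the network has predicted a source position, the simulated spatial efficiency map is interpolated at that position and used to convert the measured count rate into activity. The map values and function names below are invented for illustration; the actual study derives the map from Monte Carlo simulation.

```python
# Illustrative efficiency-correction step: interpolate a simulated spatial
# efficiency map at the (predicted) source position, then divide the measured
# count rate by that efficiency to obtain the activity. All values are made up.

def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D map at fractional grid index (x, y)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[x0][y0]
            + dx * (1 - dy) * grid[x0 + 1][y0]
            + (1 - dx) * dy * grid[x0][y0 + 1]
            + dx * dy * grid[x0 + 1][y0 + 1])

def activity_from_counts(count_rate, eff_map, pos):
    """Activity = count rate / position-dependent detection efficiency."""
    return count_rate / bilinear(eff_map, *pos)

# Hypothetical 3x3 efficiency map over the drum cross-section
# (efficiency higher near the centre detector arrangement).
eff_map = [[0.010, 0.012, 0.010],
           [0.012, 0.016, 0.012],
           [0.010, 0.012, 0.010]]
```

For example, a count rate of 160 s⁻¹ from a source at the centre of this hypothetical map would be corrected with an efficiency of 0.016 rather than a drum-averaged value, which is the mechanism by which position prediction reduces the activity-measurement deviation.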
The standard deviation σ between the predicted and actual source coordinates is less than 0.25 cm, and the relative deviation between the corrected detection efficiency and the true value is less than 1.30%. After efficiency correction, the relative deviation between the measured and true activity of the ²⁵²Cf source is less than 1.25%.
The efficiency correction method proposed in this study improves the accuracy of activity measurement results, thereby providing technical support for the accurate measurement of radioactive waste drums.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090402 (2025)
Free from filament lifespan limitations and electrode contamination, the high-power radio frequency (RF) ion source is the most promising ion source for achieving steady-state operation in neutral beam injection system research. During operation of the RF ion source, dynamic variations in plasma impedance can increase reflected power, reduce RF coupling efficiency, and, in severe cases, damage core components.
This study aims to design a reflection protection system to address impedance mismatch and reflected power spikes in the high-power RF ion source.
A high-precision analog-to-digital converter (ADC) was applied to real-time monitoring of the reflected power, and reflected power protection and spike pulse shielding functionalities were integrated into a microcontroller unit (MCU) at the field control site, with fast response achieved through MCU interrupts. Meanwhile, a remote monitoring interface was designed using LabVIEW to enable parameter adjustment and data storage on a host computer connected to the MCU via optical fiber through communication units.
The test results demonstrate that the protection system achieves a maximum response delay of 150 μs for protection signal output. The spike shielding time is adjustable between 10 ms and 10 s with 1 ms step precision, meeting real-time operational requirements.
The proposed system ensures safe and stable operation of the high-power RF ion source, offering advantages such as high cost-effectiveness and easy maintenance.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090403 (2025)
The Engineering Material Diffraction (EMD) of China Spallation Neutron Source (CSNS) is currently under construction. It adopts the neutron scintillator detector as its main detection unit.
This study aims to develop an intelligent readout electronics system for the main detectors of the EMD and to realize front-loaded data processing.
A completely new architecture consisting of front-end electronics and SoC electronics was adopted in the design of the EMD readout electronics. The front-end electronics consisted of two preamplifier boards and a digital board. The SoC electronics handled the parameter configuration of the front-end electronics and the aggregation, trigger selection, and packaging of raw data; it also provided the physical hardware, embedded Linux system, drivers, and other support for data processing, completed the analysis and processing of front-end data, and sent the processed data to the back-end system through Gigabit Ethernet. Algorithms originally running on back-end servers were migrated to a System-on-Chip (SoC) based on Field Programmable Gate Array (FPGA) chips to achieve front-loaded data processing, reducing the computational pressure on the server infrastructure. The architecture enabled intelligent automated control and continuous condition monitoring of the detector arrays and their integrated electronic subsystems. Finally, performance testing of the designed readout electronics was conducted in the laboratory and on beamline No. 20 of CSNS.
The test results show that the nonlinear error of the readout electronics is less than 1.2%, the maximum counting rate is 233.6 k·s⁻¹, the time resolution is 13 ns, and the detection efficiency of the detector is approximately 40.7% at 0.1 nm.
The performance of the readout electronics developed in this study meets the design specifications for the main detectors of the EMD. Its successful development provides important technical support for the construction of the EMD.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090404 (2025)
Molten Salt Reactors (MSRs) have emerged as a leading contender among the prospective Generation IV nuclear reactor technologies, primarily attributed to their intrinsic safety features and enhanced economic viability. In 2011, the Chinese Academy of Sciences (CAS) initiated the Strategic Priority Research Program titled "Future Advanced Nuclear Fission Energy", with MSRs being one of the key research foci. Accurate forecasting of key operational parameters for MSRs not only provides real-time insight into the dynamic operational status but also affords operators advance warning of potential anomalies. Such capabilities are crucial for ensuring the safe and stable operation of the reactor, while also providing decision-makers with critical support and technical guidance.
This study aims to propose a deep learning-based model for predicting MSR safety parameters to assist operators in assessing reactor status and enhancing operational safety.
Firstly, the RELAP5-TMSR code was utilized to establish a transient behavior analysis model for dataset generation; the model comprised four coupled parts: the primary loop, the secondary loop, the air cooling system, and the passive residual heat removal system. Subsequently, multiple scenarios were generated across a range of reactor power levels to comprehensively capture system behavior, yielding a total dataset of numerous samples, each consisting of monitoring parameters corresponding to the design of the instrumentation and control system together with the parameters to be predicted. Then, the dataset was divided into training, validation, and test sets to ensure robust model evaluation, and an appropriate input feature subset for MSR parameter prediction was selected using Pearson correlation analysis. Thereafter, two safety parameter prediction models, based on Autoencoder-Long Short-Term Memory (AE-LSTM) and Autoencoder-Gated Recurrent Unit (AE-GRU) methods, were developed by training and validation on these datasets. The model with superior performance was selected as the safety parameter prediction model for the MSR and rigorously tested to assess its generalization capability, with five metrics applied to evaluate performance: mean absolute error, maximum absolute error, mean relative error, maximum relative error, and R². Finally, the robustness of the two models was tested and optimized under noisy conditions.
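The Pearson-based input selection step described above can be sketched compactly: features whose absolute correlation with the target parameter exceeds a threshold are retained. The data, feature names, and threshold below are illustrative only, not the study's dataset.

```python
# Sketch of Pearson-correlation feature selection for parameter prediction.
# Features strongly correlated (|r| >= threshold) with the target are kept.
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def select_features(features, target, threshold=0.5):
    """Keep the names of features strongly correlated with the target."""
    return [name for name, series in features.items()
            if abs(pearson(series, target)) >= threshold]

# Hypothetical monitoring series versus a target safety parameter.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
features = {
    "core_outlet_temp": [1.1, 2.0, 2.9, 4.2, 5.0],   # tracks the target
    "ambient_noise":    [3.0, 1.0, 4.0, 1.0, 5.0],   # weakly related
}
```

Discarding weakly correlated inputs in this way reduces model dimensionality before the AE-LSTM/AE-GRU training stage, which typically shortens training and limits overfitting.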
The test results demonstrate that the prediction model, predicated on the AE-GRU framework, excels in forecasting the safety parameters of molten salt reactors. This model exhibits superior predictive precision and robust generalizability. The mean relative error of the predicted parameters is below 0.04%. Even in noisy environments, it continues to demonstrate high robustness, which can meet the requirements of engineering applications.
The AE-GRU-based safety parameter prediction model satisfies the parameter prediction requirements of the MSR system and can hence be applied to intelligent MSR operation and maintenance, ensuring safe MSR operation.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090601 (2025)
Dipole magnetic field confinement is a crucial configuration in fusion energy and space plasma research. Previous projects, such as MIT's Levitated Dipole Experiment (LDX) and the University of Tokyo's Ring Trap (RT), have explored the behavior of high-temperature plasmas in such configurations. These efforts, although significant, faced challenges in terms of limited design capabilities and funding constraints. The China Astro-Torus (CAT) project was initiated to address these limitations and push the boundaries of magnetic confinement.
This study aims to develop a stable levitation system for dipole magnets of the CAT-1 fusion device and thereafter to validate the use of permanent magnets for magnetic field generation and explore the feasibility of a novel levitation system that can stabilize the dipole magnet effectively.
The experimental setup involved the development of a levitation permanent magnet ring system (LPMR), designed to produce the magnetic dipole field required for plasma confinement. Key components included a laser positioning system for high-precision feedback and a digital control system that adjusted the current in the lifting coil to maintain the magnet's position; real-time position data were used to modulate the current dynamically, ensuring stable levitation.
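The closed-loop idea, laser-measured position feeding a digital controller that trims the lifting-coil current, can be illustrated with a toy simulation. This is a sketch under strong assumptions: a unit mass, a force linear in coil current, and PD gains chosen for critical damping; none of these values come from the LPMR itself.

```python
# Toy digital position-feedback loop for a levitated magnet: a PD controller
# trims the lifting-coil current around the gravity-cancelling equilibrium.
# Plant model and gains are illustrative, not the CAT-1 LPMR parameters.

G = 9.81               # gravitational acceleration, m·s⁻²
FORCE_PER_AMP = 1.0    # hypothetical lifting force per ampere, for unit mass

def levitate(z0, z_set=0.0, kp=100.0, kd=20.0, dt=1e-3, steps=3000):
    """Simulate a unit mass levitated by a current-controlled vertical force."""
    z, v = z0, 0.0
    i_eq = G / FORCE_PER_AMP                    # current that cancels gravity
    for _ in range(steps):
        i = i_eq + kp * (z_set - z) - kd * v    # digital PD control law
        a = FORCE_PER_AMP * i - G               # net vertical acceleration
        v += a * dt                             # semi-implicit Euler step
        z += v * dt
    return z

final_z = levitate(z0=0.01)   # start 1 cm above the setpoint
```

With these gains the linearized error dynamics are critically damped with a natural frequency of 10 rad·s⁻¹, so a 1 cm initial offset decays to a negligible residual within the 3 s simulated; the measured 2.53 Hz oscillation of the real system corresponds to a different, underdamped operating point.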
The system successfully achieves stable and sustained levitation of the permanent magnet, with minimal oscillations observed. The vertical oscillation frequency is measured at 2.53 Hz, which is slightly lower than the theoretical prediction of 2.71 Hz, indicating that there is potential for further optimization of the system.
The results of this study demonstrate the effectiveness of the digital control model implemented for stabilizing the magnet. The system performs well over extended periods, confirming its capability for long-term operation. The successful stabilization achieved by the LPMR system lays a solid foundation for the development of advanced magnetic confinement systems, which are critical for the future of fusion energy research. The findings offer valuable insights for enhancing magnetic levitation and confinement techniques in fusion reactors, contributing significantly to the development of the CAT-1 device. The ability to stabilize dipole magnets through digital control also represents a key advancement in magnetic confinement technology.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090602 (2025)
The global demand for sustainability and flexibility of nuclear power plants has been increasing in recent years. Micro reactors have broad application prospects and irreplaceable advantages, including small size, high flexibility and security, strong adaptability, and low maintenance requirements, and can provide power for areas that large power grids cannot reach. The SiC-based Vehicular Micro Reactor (SVMR) is mainly intended to provide power for remote areas and, owing to its compact core design, has the advantages of high fuel loading and high burnup depth. However, the SVMR exhibits a distortion phenomenon in which the power density rises sharply at the edge of the active zone, causing the radial Power Peaking Factor (PPF) to exceed the safety limit of 2.10 and seriously threatening reactor safety.
This study aims to optimize the power distribution of SVMR to meet the thermal design criteria without compromising the fuel utilization efficiency of the core.
Firstly, the power distribution throughout the lifetime of the SVMR core was calculated using MCNP (Monte Carlo N-Particle Transport Code) under critical conditions, and the rotation of the control drums was found to have a great influence on the radial power distribution. Then, under the critical control-drum rotation conditions, four methods, i.e., changing the reflector thickness, adding burnable poison, fuel enrichment partitioning, and fuel loading partitioning, were adopted to flatten the power distribution of the SVMR over its lifetime. Finally, ORIGEN2 was employed to simulate the in-reactor fuel cycle and calculate the variations in radial PPF produced by these four flattening methods, so as to identify those that met the neutronic and thermal design objectives of the SVMR.
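The figure of merit being flattened here is simply the maximum-to-average ratio of the radial power density. A small sketch makes this concrete; the 3x3 power map below is invented to mimic the edge-peaking pattern described in the text, not an SVMR result.

```python
# Sketch of the radial power peaking factor (PPF): the maximum power density
# in the radial map divided by the core-average power density.

def radial_ppf(power_map):
    """Max-to-mean ratio over fuel cells (non-positive cells are non-fuel)."""
    cells = [p for row in power_map for p in row if p > 0.0]
    return max(cells) / (sum(cells) / len(cells))

# Hypothetical radial map showing power rising toward the core edge.
power_map = [[1.3, 1.0, 1.3],
             [1.0, 0.8, 1.0],
             [1.3, 1.0, 1.3]]
```

Flattening methods such as burnable poison or fuel loading partitioning act by suppressing the edge values in such a map, pulling the ratio down toward 1 without (ideally) sacrificing total energy output.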
The calculation results show that: 1) reducing the reflector thickness leads to increased neutron leakage and seriously deteriorates the depletion characteristics of the SVMR; 2) adding burnable poisons can reduce the radial PPF to less than 2.03 and achieves more than 10 years of lifetime and a burnup depth of 75.53 MWd·(kgU)⁻¹; 3) fuel enrichment partitioning can reduce the radial PPF below 2.10, but the lower neutron utilization of the fuel results in poor burnup performance; 4) fuel loading partitioning can achieve different power flattening targets while ensuring a burnup depth greater than 77.70 MWd·(kgU)⁻¹, with the radial PPF reduced to less than 1.44.
The comparison results of this study demonstrate that fuel loading partitioning offers greater flexibility, allowing adjustments to be made according to the specific needs of the user to achieve the desired power flattening effect, and can provide a theoretical reference for subsequent research.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090603 (2025)
The tritium content in the graphite of thorium-based molten salt reactors (TMSRs) directly affects the distribution, control methods, estimated release rates during operation, and decontamination strategies for the reactor graphite upon TMSR decommissioning.
This study aims to accurately determine the tritium content in TMSR graphite through a deuterium-simulated-tritium experiment investigating the adsorption and desorption behavior of deuterium in nuclear graphite.
The experiment consisted of determining the background deuterium concentration, pre-treating the graphite samples, and performing adsorption and desorption. Firstly, the background deuterium concentration was determined by continuously flowing high-purity argon gas at room temperature without graphite present. After purging the gas in the tube furnace, the stable deuterium concentration in the exhaust gas was measured and taken as the background signal. Then, the NG-CT-50 graphite sample was placed in the uniform temperature zone of the tube furnace and heated to 1 500 ℃. At this temperature, continuous desorption was carried out for 2 h while high-purity argon gas was introduced to remove the desorbed gas from the graphite until the deuterium concentration in the exhaust gas reached the background level. Subsequently, the temperature was set to the operating temperature of the reactor (650 ℃), and the high-purity argon gas was switched to a 0.179 mg·L⁻¹ deuterium-argon mixture at a flow rate of 20 mL·min⁻¹ for adsorption until equilibrium was reached (no further change in the deuterium concentration in the exhaust gas). After adsorption equilibrium, the deuterium-argon mixture was switched back to high-purity argon gas for purging until the deuterium concentration in the exhaust gas returned to and remained at the background level. Finally, high-purity argon gas was continuously introduced at a flow rate of 20 mL·min⁻¹ while the temperature was increased from 650 ℃ to 1 500 ℃ at a constant heating rate of 6 ℃·min⁻¹ for desorption, with the deuterium concentration in the exhaust gas measured continuously in real time by quadrupole mass spectrometry. Deuterium was deemed fully desorbed from the nuclear graphite only when the deuterium concentration in the exhaust gas had reached and consistently maintained the background level.
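The adsorption capacity reported from such a run is obtained by integrating the background-subtracted exhaust concentration over time and scaling by the carrier-gas flow rate and sample mass. The sketch below shows this bookkeeping with an invented triangular desorption trace; only the 20 mL·min⁻¹ flow rate echoes the text, everything else is hypothetical.

```python
# Sketch of the capacity calculation from a thermal desorption trace:
# integrate (C_exhaust - C_background) * flow over time, divide by sample mass.

def adsorbed_mass_per_gram(times_min, conc_mg_per_L, background_mg_per_L,
                           flow_L_per_min, sample_mass_g):
    """Trapezoidal integral of excess concentration times flow, per gram."""
    total_mg = 0.0
    for k in range(len(times_min) - 1):
        c0 = conc_mg_per_L[k] - background_mg_per_L
        c1 = conc_mg_per_L[k + 1] - background_mg_per_L
        dt = times_min[k + 1] - times_min[k]
        total_mg += 0.5 * (c0 + c1) * flow_L_per_min * dt
    return total_mg / 1000.0 / sample_mass_g    # grams per gram of graphite

# Hypothetical trace: one triangular release peak above background.
t = [0.0, 10.0, 20.0]          # minutes
c = [0.001, 0.005, 0.001]      # mg·L⁻¹ deuterium in the exhaust
capacity = adsorbed_mass_per_gram(t, c, background_mg_per_L=0.001,
                                  flow_L_per_min=0.020, sample_mass_g=2.0)
```

Splitting the same integral over the temperature sub-ranges of the heating ramp gives the stage-by-stage desorption fractions quoted in the results.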
The experimental results show that the desorption process can be divided into three stages. In the low-temperature stage, there is no significant deuterium desorption signal. In the 770~1 250 ℃ range, a large amount of deuterium desorption is observed, accounting for approximately 60.9% of the total. In the 1 250~1 500 ℃ range, deuterium desorption accounts for approximately 39.1% of the total. The ratio of deuterium desorption in the second stage to that in the third stage is approximately 3:2. Based on the thermal desorption curve, the average adsorption capacity of deuterium in graphite is determined to be (2.34±0.13)×10⁻⁶ g·g⁻¹.
Without considering isotope effects, the tritium fraction in graphite estimated from the experimental data for the Molten Salt Reactor Experiment (MSRE) (14.21%) closely matches the observed value (14%). Similarly, the estimated tritium content in the graphite of the 2 MWt liquid-fuel thorium-based molten salt reactor (TMSR-LF1) accounts for 14.44% of the total tritium production. In contrast, for 10 MWt and 2 250 MWt TMSRs, the tritium content in the graphite accounts for 7.10% and 11.60% of the tritium production, respectively.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090604 (2025)
As a typical representative of passive safety technology, the automatic depressurization system (ADS) plays a crucial role in enhancing the safety of pressurized water reactor (PWR). After an accident occurs, it accelerates the depressurization of the reactor's primary loop, which is essential for preventing excessive pressure buildup. The ADS effectively connects the high-pressure, medium-pressure, and low-pressure safety injection systems, ensuring that they work together seamlessly. This system actively maintains core cooling by allowing coolant to flow into the reactor core, thereby removing heat and preventing overheating.
This study aims to explore the opening and closing characteristics of the pressure relief valve of the automatic depressurization system and its influence on the reactor systems.
Based on the system analysis code RELAP5, the Chinese advanced PWR was taken as the research object, and typical ADS-triggering accidents were chosen as the initiating events. The different opening speeds of the first three stages of ADS valves and the closing conditions of the fourth-stage pressure relief valve were simulated, and the responses of the primary loop pressure, pressure relief pipeline flow rate, and sparger inlet pressure under different working conditions were analyzed.
The analysis results indicate that the opening speed of the first three stages of ADS valves does not have a significant impact on the depressurization characteristics of the primary loop. The ADS-1 pressure relief valve is most effective in a fast-opening mode, which helps the sparger reach a stable critical jet state more quickly. Within the simulated range of working conditions, the slowest opening speed of the ADS-1 pressure relief valve delays the sparger's attainment of a stable critical jet state by up to 10 s. On the other hand, the ADS-2 and ADS-3 pressure relief valves are most effective in a slow-opening mode, which reduces the impact on the pipeline and sparger of the two-phase ejection process caused by valve opening. It is worth noting that the ADS-4 valve plays a significant role in mitigating small-break LOCA accidents.
Through simulation analysis of the opening and closing characteristics of the ADS pressure relief valves, valuable insights have been provided for the design of the automatic depressurization system. The findings of this study contribute to a better understanding of the functioning and effectiveness of the ADS, thereby improving the overall safety and operational efficiency of advanced nuclear power plants.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090605 (2025)
The He-Xe Brayton cycle system, which adopts a helium-xenon mixture as the working fluid, offers high cycle efficiency, high specific power, and high operational reliability, giving it promising application prospects in the field of special nuclear power. A megawatt-level special nuclear power system combining a helium-xenon Brayton cycle with a nuclear reactor can effectively meet high-power energy supply needs, including deep-space exploration, planetary-base power supply, and unmanned underwater vehicles. At present, research on the operating characteristics of the helium-xenon Brayton cycle system is insufficient, and systematic simulation models urgently need to be developed.
This study aims to develop a steady-state simulation tool for helium-xenon closed Brayton cycles, enabling characterization of system components and overall configurations prior to actual engineering design and operation, thereby reducing research costs.
A simulation tool for steady-state analysis of the helium-xenon closed Brayton cycle was developed by establishing component models of key equipment in the thermodynamic system, including the heater, regenerator, cooler, turbine, and compressor. The accuracy of the simulation software was verified through comparison between design values from the U.S. "Prometheus" project and computational values obtained under identical conditions. With the output power fixed at 200 kW, the influences of critical parameters, i.e., cycle maximum temperature, cycle minimum temperature, cycle maximum pressure, and total thermal conductivity of the regenerator, on the system efficiency and specific power were comprehensively analyzed. Finally, the accuracy and capability of the helium-xenon closed Brayton cycle model were comparatively verified.
The calculation results of the helium-xenon thermodynamic cycle model developed in this work are in good agreement with the Prometheus design values, with a maximum node-parameter error of 0.212% and a maximum system-parameter error of 3.419%, both within the acceptable range. Verification results indicate that there is an optimal pressure ratio for both system efficiency and system specific power, but the two optima are not equal; in engineering design, the pressure ratio at maximum system efficiency should be adopted. A higher cycle maximum temperature and a lower cycle minimum temperature result in higher system efficiency and specific power, and the cycle minimum temperature has a more significant impact on cycle efficiency than the maximum temperature. As the pressure ratio increases, the total thermal conductivity of the regenerator has a smaller impact on cycle efficiency.
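The finding that efficiency and specific power peak at different pressure ratios can be illustrated with a simple recuperated Brayton cycle model. The sketch below uses ideal-gas relations with illustrative component parameters (the temperatures, efficiencies, and He-Xe cp are assumed values, not those of this study):

```python
# Recuperated Brayton cycle: efficiency and specific work vs. pressure ratio.
# All parameter values are illustrative assumptions, not the paper's.
T1, T3 = 400.0, 1150.0                # K, cycle minimum / maximum temperature
eta_c, eta_t, eps = 0.85, 0.90, 0.95  # compressor/turbine efficiency, recuperator effectiveness
k = (5/3 - 1) / (5/3)                 # (gamma-1)/gamma for a monatomic He-Xe mixture
cp = 0.52                             # kJ/(kg*K), approx. for a ~40 g/mol He-Xe mix

def cycle(r):
    T2 = T1 * (1 + (r**k - 1) / eta_c)      # compressor outlet temperature
    T4 = T3 * (1 - eta_t * (1 - r**(-k)))   # turbine outlet temperature
    T2r = T2 + eps * max(T4 - T2, 0.0)      # recuperator outlet (cold side)
    w = cp * ((T3 - T4) - (T2 - T1))        # net specific work, kJ/kg
    q = cp * (T3 - T2r)                     # heat added in the heater, kJ/kg
    return w / q, w                         # (cycle efficiency, specific work)

rs = [1.1 + 0.01 * i for i in range(400)]   # scan pressure ratio from 1.1 to ~5.1
r_eta = max(rs, key=lambda r: cycle(r)[0])  # pressure ratio maximizing efficiency
r_w = max(rs, key=lambda r: cycle(r)[1])    # pressure ratio maximizing specific work
print(f"optimal r for efficiency:    {r_eta:.2f}")
print(f"optimal r for specific work: {r_w:.2f}")
```

The two optima differ, with the efficiency optimum at a lower pressure ratio, consistent with the qualitative conclusion above.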
This study provides a reference and basis for the design and optimization of helium-xenon closed Brayton cycles.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090606 (2025)
During service of a tritium penetration barrier coating system, significant thermal stress is generated inside the system due to the high-temperature environment. The accumulation of thermal stress can lead to cracking and peeling of the coating, which seriously affects the safe operation of the fusion reactor. Therefore, studying the thermo-mechanical behavior of tritium penetration barrier coatings is of great significance.
This study aims to explore an effective method to reduce thermal stress in Er2O3/RAFM (Reduced Activation Ferritic/Martensitic) tritium penetration barrier system, thereby increasing the service life of the coating.
Based on the thermal expansion coefficient, elastic modulus, and yield strength, gradient multi-interlayer coatings for the tritium penetration barrier were designed. Finite element simulation was employed to explore the influence of factors such as substrate roughness and coating thickness on the thermal stress. Moreover, the simulation results were used to investigate the effectiveness of different types of gradient coatings in reducing the thermal stress in the Er2O3/RAFM steel tritium penetration barrier system.
Simulation results show that the thermal stress of the system increases with increasing substrate roughness. Increasing the thickness of the Er2O3 coating or introducing a yield-strength-graded multi-interlayer between the Er2O3 coating and the RAFM steel substrate can effectively reduce the thermal stress of the system.
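The thermal stress at issue arises mainly from the mismatch of thermal expansion between coating and substrate, and its magnitude can be estimated with the classic biaxial film-stress formula. The property values below are illustrative assumptions for Er2O3 and RAFM steel, not the values used in this study:

```python
# Biaxial thermal mismatch stress in a thin coating on a thick substrate:
#   sigma = E_f * (alpha_s - alpha_f) * dT / (1 - nu_f)
# All material properties below are assumed, illustrative values.
E_f = 175e9        # Pa, elastic modulus of the Er2O3 coating (assumed)
nu_f = 0.3         # Poisson ratio of the coating (assumed)
alpha_f = 7.4e-6   # 1/K, thermal expansion coefficient of Er2O3 (assumed)
alpha_s = 12.0e-6  # 1/K, thermal expansion coefficient of RAFM steel (assumed)
dT = 680.0         # K, cooling from ~700 degC processing to room temperature (assumed)

sigma = E_f * (alpha_s - alpha_f) * dT / (1 - nu_f)
print(f"mismatch stress ~ {sigma / 1e6:.0f} MPa")  # hundreds-of-MPa scale
```

Even this rough estimate lands in the hundreds of MPa, which is why graded interlayers that soften the property mismatch are attractive.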
The comparison results of this study indicate that the yield-strength-graded multi-interlayer is more effective in reducing thermal stress.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090501 (2025)
The parameters and discharge level of Xuanlong-50U (EXL-50U), the new-generation room-temperature compact spherical torus fusion research device upgraded and constructed by ENN Science and Technology Development Co., Ltd., have been significantly improved. The maximum discharge current of the toroidal field magnet coils of this upgraded device reaches 150 kA.
This study aims to conduct overall modeling and simulation research on the toroidal field (TF) power supply system, in order to ensure the safety and stability of the system under high parameter discharge experiments.
The EXL-50U real machine was selected as the research object. A parameter decoupling method was used to equivalently model the six-phase pulse generator and the phase-shifting transformer, and a phase-controlled rectification model with phase lag was established together with a time-varying load model accounting for coil heating. A complete simulation model of the toroidal field power supply system was thus constructed. Finally, simulation verification under multiple discharge levels was carried out.
The simulation results show that the constructed simulation model can accurately simulate the changes in electrical characteristics on site, with a simulation error of no more than 9.58% and waveform similarity of no less than 98.76%, meeting the accuracy requirements of on-site analysis. At the same time, it is proven that the system can operate stably according to the maximum design parameters, verifying the correctness of the initial design parameters of the power system.
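The abstract does not specify its waveform-similarity metric; one common choice is the normalized cross-correlation (cosine similarity) between measured and simulated traces. A minimal sketch under that assumption, with synthetic waveforms standing in for the site data:

```python
import math

def waveform_similarity(meas, sim):
    """Cosine similarity between two sampled waveforms (1.0 = identical shape)."""
    dot = sum(m * s for m, s in zip(meas, sim))
    nm = math.sqrt(sum(m * m for m in meas))
    ns = math.sqrt(sum(s * s for s in sim))
    return dot / (nm * ns)

# Illustrative only: a "measured" current trace vs. a simulation with a small
# amplitude error and phase lag (hypothetical data, not the EXL-50U records).
t = [i / 100 for i in range(101)]
meas = [math.sin(2 * math.pi * x) for x in t]
sim = [1.02 * math.sin(2 * math.pi * x + 0.03) for x in t]
print(f"similarity = {waveform_similarity(meas, sim):.4f}")
```

Note that a pure amplitude scaling cancels in this metric, so a similarity threshold would normally be paired with a separate amplitude-error check like the 9.58% bound quoted above.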
The proposed simulation model can provide a simulation platform and data support for the setting of system protection schemes and verification of control strategies in subsequent unknown high parameter experiments.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090502 (2025)
As a crucial input for the neutronics analysis of tokamak devices, the plasma neutron source serves as a bridge connecting the realms of fusion plasma physics and engineering design.
This study aims to depict the spatial distribution of the neutron wall loading (NWL) more accurately by developing a set of independent codes dealing with the neutron source and NWL.
For deuterium-tritium fusion plasma discharges in a tokamak, the types of nuclear fusion reactions and a release model of the neutron source were constructed, and a theoretical model of energetic neutron bombardment of the first wall was also established. The neutron source and NWL distributions were numerically simulated by employing the "multifilament" method in a tokamak configuration. Finally, the effects of the plasma density peaking (DP) factor and the temperature distribution on the neutron source and NWL distributions were investigated.
Simulation results show that the maximum NWL is located near the outboard midplane and exceeds the value at the inboard midplane by roughly 100%; special attention should therefore be paid to radiation shielding and the protection of key components situated on the outboard midplane. As DP increases, both outboard and inboard peak NWL values decrease, but only slightly: the NWL decreases by only 10% when the neutron source peak increases by a factor of about 1.5. As the temperature ratio increases from 0.6 to 1.4, the fusion power increases from 361.4 MW to 1 580.0 MW, the peak NWL at the outboard midplane increases from 0.510 MW·m⁻² to 2.10 MW·m⁻², and the peak NWL at the inboard midplane increases from 0.22 MW·m⁻² to 0.99 MW·m⁻².
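Since the NWL scales essentially linearly with the fusion (neutron) power, the quoted numbers can be cross-checked against each other (a quick sketch; all values are taken from the abstract):

```python
# Cross-check: peak NWL should grow roughly in proportion to fusion power.
p_ratio = 1580.0 / 361.4        # fusion power increase across the temperature scan
outboard_ratio = 2.10 / 0.510   # outboard peak NWL increase over the same scan
inboard_ratio = 0.99 / 0.22     # inboard peak NWL increase over the same scan

print(f"power    x{p_ratio:.2f}")      # x4.37
print(f"outboard x{outboard_ratio:.2f}")  # x4.12
print(f"inboard  x{inboard_ratio:.2f}")   # x4.50
```

All three ratios agree to within about 6%, consistent with near-linear scaling of NWL with fusion power.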
Results of this study demonstrate that the peak NWL is consistently located near the outboard midplane and that density peaking has only a slight effect on the NWL distribution, while both fusion power and NWL increase with temperature, indicating that temperature has a significant impact on the NWL distribution in tokamak devices.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090503 (2025)
The inductively coupled plasma mass spectrometer (ICP-MS) is an important instrument for elemental and isotope-ratio analysis. Among mass analyzers, the time-of-flight mass spectrometer (TOF-MS) has been widely used in organic-molecule and biomedical detection owing to its simple principle, high sensitivity, wide mass detection range, and ability to acquire the full mass spectrum in a single measurement.
This study aims to design and simulate the physical integration of an inductively coupled plasma-time-of-flight mass spectrometer (ICP-TOF-MS) so as to meet the mass analysis requirements for uranium and transuranic elements during the operation of Thorium-based Molten Salt Reactors (TMSR).
A physical model was developed in accordance with the structural principles. Ion optical simulation software SIMION was employed to simulate the configurations of the ion transport system, including the differential cone, deflection lens, collision cell, direct current quadrupoles (DCQ), and single lens. The rationality of these designs was validated through simulation results. The optimal collision pressure for the collision cell was determined by systematically varying the pressure. Additionally, the TOF design parameters were calculated, and simulations were conducted to optimize the voltage settings, thereby enhancing the resolution.
Simulation results show that the optimal guide cone voltage for the differential cone system is determined to be -30 V. The optimal deflection voltage combination for the designed deflection lens system is identified as -56 V and -530 V. The optimal collision pressure within the collision cell is 1.6 Pa. The DCQ coupled with the single-lens system can introduce ions into the TOF acceleration field with an initial kinetic energy of approximately 4 eV in a near-horizontal state. When the pulse field voltage is set to ±200 V, the accelerating voltage is -1 600 V, the first-stage reflection voltage is 48 V, and the second-stage reflection voltage is 680 V, the mass resolution (M/ΔM) of the TOF-MS exceeds 4 000.
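The connection between acceleration voltage, flight time, and the timing precision implied by a given mass resolution can be sketched from the basic TOF relations t = L·sqrt(m/(2qU)) and M/ΔM = t/(2Δt). The flight path length below is an assumed illustrative value, not the instrument's design figure:

```python
import math

# Basic time-of-flight relations for a singly charged U-238 ion.
q = 1.602e-19          # C, elementary charge
m = 238 * 1.6605e-27   # kg, mass of U-238
U = 1600.0             # V, acceleration voltage (from the abstract)
L = 1.0                # m, effective flight path length (assumed, illustrative)

v = math.sqrt(2 * q * U / m)   # ion velocity after full acceleration
t = L / v                      # flight time over the assumed path
dt = t / (2 * 4000)            # timing spread allowed for M/dM = 4 000

print(f"v  = {v:.3e} m/s")
print(f"t  = {t * 1e6:.1f} us")
print(f"dt = {dt * 1e9:.2f} ns")
```

For these assumed numbers the allowable timing spread comes out at a few nanoseconds, illustrating why pulse-field and reflectron voltages must be tuned so carefully to reach M/ΔM > 4 000.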
The results of this study provide an important theoretical reference for the subsequent processing, manufacturing, construction, and commissioning of the instrument.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090504 (2025)
Cavitation bubbles and their collapse processes are important research topics in bubble dynamics. Traditional research methods based on visible light high-speed imaging technology suffer from low spatial resolution, poor signal-to-noise ratio, and weak imaging contrast when observing these processes in real-time.
This study aims to construct a high spatiotemporal resolution X-ray imaging experimental system for real-time observation of underwater bubble dynamics processes.
Firstly, a high spatiotemporal resolution X-ray imaging system was developed based on high-throughput pink light with a flux of 1.31×10¹⁶ phs·s⁻¹ and a fast X-ray imaging detector with maximum frame rate up to 2.1×10⁵ fps and minimum effective pixel size of 1 μm on the fast X-ray imaging beamline (BL16U2) at Shanghai Synchrotron Radiation Facility (SSRF). Then, cavitating bubbles generated by a syringe pump in water at room temperature were used as experimental samples, and real-time dynamic images of the bubble cavitation evolution process were recorded by the fast X-ray imaging detector. Finally, the motion of bubbles was characterized and analyzed using the motion contrast imaging (MCI) method to enhance signal detection and suppress background noise.
The experimental system achieves a temporal resolution of 25 μs with an effective detector pixel size of 4 μm, enabling real-time and clear observation of the complete evolution process including bubble growth, collapse, and jet formation in water. Statistical analysis reveals that bubble jets exhibit velocities ranging from 1.74 m·s⁻¹ to 2.54 m·s⁻¹ with an average of 2.13 m·s⁻¹, while the average energy loss rate during jet formation reaches 96.46%. The motion contrast imaging method successfully characterizes capillary waves on bubble surfaces, revealing wave propagation velocities of approximately 0.66 m·s⁻¹ and wavelengths ranging from 1.04 μm to 800 μm depending on bubble deformation conditions.
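To relate these velocities to what the detector actually records, one can work out the jet displacement per frame at the stated 25 μs frame interval and 4 μm effective pixel size (a quick arithmetic sketch; values from the abstract):

```python
# Jet displacement per frame for the reported imaging parameters.
v_avg = 2.13       # m/s, average jet velocity
frame_dt = 25e-6   # s, temporal resolution (frame interval)
pixel = 4e-6       # m, effective detector pixel size

disp = v_avg * frame_dt  # displacement between consecutive frames
print(f"displacement = {disp * 1e6:.2f} um per frame")  # 53.25 um
print(f"             = {disp / pixel:.1f} pixels")       # ~13.3 pixels
```

A jet front advancing a dozen pixels per frame is easily tracked, which is why the stated spatiotemporal resolution suffices for quantitative velocimetry of these jets.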
The high spatiotemporal resolution X-ray imaging method established provides unprecedented research capabilities for bubble evolution studies, offering quantitative analysis of bubble dynamics with microsecond temporal resolution and micrometer spatial resolution. The motion contrast imaging technique effectively enhances the detection of weak signals such as capillary wave propagation, enabling detailed characterization of bubble surface dynamics that are invisible to conventional imaging methods. This experimental approach provides a new research tool for deeper understanding of bubble-induced microfluidic jet evolution processes and advances the field of high-speed fluid dynamics research.
- Publication Date: Sep. 15, 2025
- Vol. 48, Issue 9, 090101 (2025)





