Chinese Journal of Lasers
Vol. 52, Issue 8, 0802105 (2025)
Jianfeng Yue1, Weiming Li1,*, Lihua Ning2,**, Xingyu Gao3, Yu Li1, Wenlong Wang1, Baiqing Yang1, and Yani Liu1
Author Affiliations
  • 1School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China
  • 2School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China
  • 3School of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, Guangxi, China
    DOI: 10.3788/CJL241223
    Jianfeng Yue, Weiming Li, Lihua Ning, Xingyu Gao, Yu Li, Wenlong Wang, Baiqing Yang, Yani Liu. Real-Time Weld Defect Detection Algorithm Based on YOLO-DEFW[J]. Chinese Journal of Lasers, 2025, 52(8): 0802105

    Abstract

    Objective

    Defects such as cracks, porosity, pits, undercuts, and slag inclusions commonly occur in laser welding and gas-shielded welding processes. However, imaging these defects is challenging, which has led to a scarcity of samples and an imbalance in the frequencies of different defect types. Recent deep learning algorithms also tend to be highly complex, with large parameter counts and heavy consumption of computational resources. To address these challenges, this study targets the scarcity of defect images, the data imbalance that degrades detection accuracy, the high model complexity that hinders real-time detection, and the difficulty of recognizing small features. Data augmentation techniques were applied to the dataset to increase the amount of data and balance the defect types. Simultaneously, an improved YOLOv8 model, named YOLO-DEFW, was proposed to reduce the parameter count, increase detection speed, and improve detection accuracy for small defects and small-sample defect classes, thereby enabling an intelligent visual weld-inspection system to perform real-time detection tasks online.
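    The abstract does not list the specific augmentation techniques, so the following is only an illustrative sketch of how per-image augmentation with YOLO-format box handling can multiply and rebalance a small defect dataset. It uses the albumentations library; the transform choices, probabilities, and the expand_sample helper are the editor's assumptions, not the authors' implementation.

```python
# Hypothetical augmentation pipeline for annotated weld-defect images.
# Each call produces a randomly transformed copy of the image with its
# YOLO-format bounding boxes kept consistent through geometric transforms.
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.GaussNoise(p=0.3),
        A.MotionBlur(p=0.3),
    ],
    # Boxes follow the image through every geometric transform, so the
    # defect labels stay valid after augmentation.
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

def expand_sample(image_path, bboxes, class_labels, copies=25):
    """Generate `copies` augmented variants of one annotated weld image."""
    image = cv2.imread(image_path)
    out = []
    for _ in range(copies):
        t = augment(image=image, bboxes=bboxes, class_labels=class_labels)
        out.append((t["image"], t["bboxes"], t["class_labels"]))
    return out
```

    Generating 25 augmented copies per original image, plus the originals, is consistent with the reported growth of the dataset from 460 to 11960 images; minority defect classes can be given more copies to balance the class distribution.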

    Methods

    In the experiment, the ROBOT_WELD weld defect dataset was compiled and used as the training set for the YOLO-DEFW model. First, images were captured by an MV-HS2000GC camera mounted on a robotic arm, and additional samples were sourced from public datasets. The dataset was then expanded from 460 to 11960 images using 25 image augmentation techniques. The YOLO-DEFW model incorporated the following improvements: DSConv2D was used in place of the standard convolution layers (Conv) at layers 0, 1, 3, 5, 7, 17, and 21, and C2f_DSConv2D replaced C2f at layers 2, 4, 6, 8, 12, 15, 19, and 23, which reduced the number of model parameters; the EMA module was introduced at layers 16, 20, and 24 to enhance the model's ability to recognize weld defects across feature scales; and the composite loss function FWCE Loss was added to the loss.py file, with the weight parameters of Focal Loss and Weighted Cross-Entropy (WCE) Loss adjusted to improve detection accuracy for small-sample weld defects.
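    The abstract does not define DSConv2D or FWCE Loss precisely, nor their integration into YOLOv8's loss.py. The minimal PyTorch sketch below assumes DSConv2D is a depthwise-separable convolution and FWCE Loss is a weighted combination of Focal Loss and class-weighted cross-entropy; the kernel size, gamma, class weights, and mixing coefficient lam are placeholders, not the authors' values.

```python
# Minimal sketch, assuming DSConv2D = depthwise-separable convolution and
# FWCE Loss = lam * Focal Loss + (1 - lam) * weighted cross-entropy.
import torch
import torch.nn.functional as F

class DSConv2D(torch.nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv; this factorization
    uses far fewer parameters than a standard Conv of the same shape."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.depthwise = torch.nn.Conv2d(c_in, c_in, k, s, padding=k // 2,
                                         groups=c_in, bias=False)
        self.pointwise = torch.nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = torch.nn.BatchNorm2d(c_out)
        self.act = torch.nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class FWCELoss(torch.nn.Module):
    """Composite classification loss in the spirit of FWCE Loss."""
    def __init__(self, class_weights, gamma=2.0, lam=0.5):
        super().__init__()
        self.register_buffer("class_weights", class_weights)  # per-class weights
        self.gamma = gamma  # Focal Loss focusing parameter
        self.lam = lam      # mixing weight between the two terms

    def forward(self, logits, targets):
        # Weighted cross-entropy up-weights minority defect classes.
        wce = F.cross_entropy(logits, targets, weight=self.class_weights)
        # Focal term down-weights easy examples so training concentrates
        # on hard, small-sample defects.
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)  # probability assigned to the true class
        focal = ((1.0 - pt) ** self.gamma * ce).mean()
        return self.lam * focal + (1.0 - self.lam) * wce

# Usage with hypothetical weights for five defect classes:
# criterion = FWCELoss(class_weights=torch.tensor([1.0, 2.0, 1.5, 3.0, 2.5]))
# loss = criterion(pred_logits, target_classes)
```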

    Results and Discussions

    The development of the ROBOT_WELD dataset and the data augmentation effectively increased the dataset to 11960 images, thereby enhancing the model's generalizability. Ablation experiments against common convolution modules identified DSConv2D as the most effective module for reducing the parameter count, with the lowest parameter count (32) and the lowest GFLOPs (0.0003) (see Table 3). Different attention mechanisms were also introduced to improve the recognition of small features, and the EMA module yielded the best overall performance (see Table 4). Additionally, the customized FWCE Loss improved the detection accuracy for small-sample defects. Relative to YOLOv8, the improvements resulted in a 13.4% increase in precision, a 17% increase in recall, and a 24.8% increase in mean average precision (mAP) on the ROBOT_WELD test set. Model complexity was also reduced: the number of parameters decreased by 13.5%, GFLOPs decreased by 10%, and the single-image processing time was 3.9 ms. In addition, the accuracy of small-feature recognition increased by 12%. The improved YOLO-DEFW model outperformed the YOLOv8 model on the key performance metrics.

    Conclusions

    This study expanded the original dataset using 25 data augmentation techniques, increasing it from 460 to 11960 images and effectively enhancing the robustness and generalizability of the model. The proposed YOLO-DEFW model uses DSConv2D in place of Conv in YOLOv8, significantly reducing its parameter count and computational load. The introduced EMA module effectively captures features at varying scales within the images, thereby significantly improving the accuracy of the model in detecting small features. Furthermore, the model incorporates a composite loss function (FWCE Loss) and adjusts the weight parameters of Focal Loss and WCE Loss to effectively improve detection of minority categories and imbalanced samples. The YOLO-DEFW model achieves notable optimization in parameter count, model complexity, and detection accuracy; in the present study, the primary evaluation metrics improved by more than 10%. This algorithm can be integrated into the vision sensors of intelligent welding robots for real-time online defect detection in low-arc-noise welding processes and for post-weld inspection in high-arc-noise processes, paving the way for advances in intelligent welding inspection technology.
