
- Infrared and Laser Engineering
- Vol. 50, Issue 10, 2021G004 (2021)
0 Introduction
Powder properties are of great importance in powder bed fusion (PBF), currently one of the most popular metal additive manufacturing (also called 3D printing) methods[1].
Commonly, preparation methods for spherical powder include the plasma rotating electrode process (PREP), gas atomization (GA) and plasma spheroidization (PS)[4].
One of the most widely used methods to measure the particle size distribution (PSD) of metal powder is laser diffraction (LD), which detects and analyses the angular distribution of the scattered light produced by a laser beam passing through a diluted powder layer[6].
With the advance of SEM technology, abundant particle information can be extracted from microscopy images with existing image processing tools, such as ImageJ[13] and cisTEM[14].
In this work, the Mask R-CNN[15] instance segmentation algorithm is employed to build an automatic analysis system for powder microscopy images.
1 Methodology and process
Figure 1 depicts the flow chart of the developed system, which is based on the instance segmentation results of the Mask R-CNN algorithm and consists of particle size distribution, degree of sphericity and spheroidization ratio modules.
Figure 1.Flowchart of the powder microscopy image automatic analysis system
1.1 Training dataset preparation
To train the Mask R-CNN model, powder microscopy images are collected with the requirement that powders of varied sizes and shapes be contained. The un-sifted Ti-6Al-3Nb-2Zr-1Mo alloy powder (provided by the High Performance Powder Synthesis Lab, Fujian Innovation Academy, Chinese Academy of Sciences), prepared by radio frequency plasma spheroidization, was selected to construct the dataset. Before SEM, the powder sample is dispersed on conductive tape, and the unstuck powder is then blown off with a rubber suction bulb. Each SEM image (using a Phenom XL) is magnified 300 times, which allows sufficient particles in a clear image with a field of view of 895 µm. The images were saved at a size of 2048×2048 pixel.
To increase the detection accuracy and facilitate the use of Mask R-CNN, each original SEM image is then cropped into 16 sub-images of equal size (512×512 pixel) (Fig.2(a)). LabelMe is selected as the tool to manually label the sub-images[21].
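A minimal sketch of this cropping step is given below; the file names are illustrative, not from the paper.

```python
# Crop a 2048x2048 SEM image into 16 equal, non-overlapping 512x512 tiles.
import numpy as np
from PIL import Image

def crop_into_tiles(image, tile=512):
    """Split a square image into non-overlapping tile x tile sub-images."""
    arr = np.asarray(image)
    h, w = arr.shape[:2]
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(arr[y:y + tile, x:x + tile])
    return tiles

img = Image.open("powder_sem.png").convert("L")   # hypothetical input file
for i, t in enumerate(crop_into_tiles(img)):
    Image.fromarray(t).save(f"sub_{i:02d}.png")
```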
Figure 2.(a) Original SEM image (2048×2048 pixel), which is cropped into 16 parts; (b) Characteristic image labeled with LabelMe (512×512 pixel); (c) The corresponding image mask of (b)
For ease of post-processing and statistical counting, 4 kinds of labels are adopted (Fig.1(f)): “ins_sphere” for a spherical particle that is more than half occluded, “inc_sphere” for a spherical particle that is less than half occluded, “com_sphere” for a complete spherical particle, and “non_sphere” for a non-spherical particle. A characteristic image labeled with LabelMe is illustrated in Fig.2(b). 100 sub-images (Fig.1(d)), with a total of 14835 labeled particles (“com_sphere”: 7073, “inc_sphere”: 4396, “ins_sphere”: 1973, “non_sphere”: 1393), were used for training. Figure 2(c) shows one mask image, generated from the corresponding labeled image, as the input data to the training process.
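The sketch below shows one way to rasterize the LabelMe polygon annotations into mask images; the JSON layout is standard LabelMe output, and the class names follow the paper. For instance-level training one binary mask per polygon would be kept instead of a single index mask.

```python
# Rasterize LabelMe polygon annotations into a per-class index mask.
import json
import numpy as np
from PIL import Image, ImageDraw

CLASS_IDS = {"com_sphere": 1, "inc_sphere": 2, "ins_sphere": 3, "non_sphere": 4}

def labelme_to_mask(json_path):
    with open(json_path) as f:
        ann = json.load(f)
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:                 # one polygon per labeled particle
        cls = CLASS_IDS.get(shape["label"], 0)
        pts = [tuple(p) for p in shape["points"]]
        draw.polygon(pts, fill=cls)
    return np.asarray(mask)
```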
1.2 Mask R-CNN
The Mask R-CNN, extended from Faster R-CNN[22], is a two-stage instance segmentation framework that outputs a class label, a bounding box and a binary mask for each detected object.
At the first stage, the input images are processed by a feature extraction network, also called the backbone, to construct feature maps containing spatial semantic information at different scales. ResNet-101[23] is adopted as the backbone, offering a good balance between accuracy and computational cost[24]. A region proposal network (RPN) then scans the feature maps and proposes regions of interest (RoIs) that may contain objects.
At the second stage, the feature maps of each RoI proposed by the RPN are cropped by RoI alignment and resized to the same size for the following convolution networks. RoI alignment also fixes the misalignment of low-resolution features in the feature maps. Next, the cropped feature maps that contain objects are fed into a classifier, which conducts classification and bounding box regression, so that each bounding box encloses one object. Finally, the original feature maps are cropped again using these bounding boxes and resized; the newly cropped feature maps are fed into a fully convolutional network to conduct semantic segmentation and predict the binary mask.
To train the model parameters on the prepared training image dataset, a set of model weights pre-trained on the MS COCO dataset[25] is adopted as the initial weights through transfer learning. Figure 3 shows the loss-epoch curve during the training process.
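A sketch of this transfer-learning setup is given below, assuming the widely used Matterport Keras implementation of Mask R-CNN (the paper does not state which implementation was used); the epoch count is illustrative, and the dataset objects are assumed to be prepared elsewhere.

```python
# Transfer learning from MS COCO weights with the Matterport Mask R-CNN package.
from mrcnn.config import Config
from mrcnn import model as modellib

class PowderConfig(Config):
    NAME = "powder"
    NUM_CLASSES = 1 + 4            # background + the four particle classes
    IMAGE_MIN_DIM = 512            # matches the 512x512 sub-images
    IMAGE_MAX_DIM = 512

config = PowderConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs/")

# Initialize with COCO weights; skip the head layers whose shapes depend on NUM_CLASSES.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# dataset_train / dataset_val: mrcnn.utils.Dataset subclasses built from the
# labeled sub-images (preparation omitted here).
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE, epochs=40, layers="heads")
```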
Figure 3.Loss-epoch curve during the training process
1.3 Prediction process
The raw input image (2048×2048 pixel) is cropped into 28 sub-images (Fig.1(b)): 16 sub-images of equal size (512×512 pixel) and 12 sub-images covering the border strips between the 16 tiles (Fig.2(a)), where the green strips are 256×1024 pixel and the blue strips 1024×256 pixel. The width of the border strips can be adjusted according to the maximum powder size in the image.
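The sketch below enumerates one plausible layout of these 28 crop windows, based on our reading of Fig.2(a): two strips per internal seam, in both directions.

```python
# Generate the 28 prediction windows: a 4x4 grid of 512x512 tiles plus
# 12 border strips straddling the internal seams (6 vertical 256x1024 strips
# and 6 horizontal 1024x256 strips).
def prediction_windows(size=2048, tile=512, strip=256):
    wins = []
    # 16 grid tiles, each (x, y, width, height)
    for y in range(0, size, tile):
        for x in range(0, size, tile):
            wins.append((x, y, tile, tile))
    # strips centred on the 3 internal vertical seams (x = 512, 1024, 1536)
    for seam in range(tile, size, tile):
        for y in (0, size // 2):
            wins.append((seam - strip // 2, y, strip, size // 2))
    # strips centred on the 3 internal horizontal seams
    for seam in range(tile, size, tile):
        for x in (0, size // 2):
            wins.append((x, seam - strip // 2, size // 2, strip))
    return wins

assert len(prediction_windows()) == 28   # 16 tiles + 6 + 6 strips
```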
Before the 28 sub-images enter the trained model, each sub-image is transformed into 4 images: the image itself, a 180° rotation, a horizontal flip, and a vertical flip. The 4 output predictions are then roughly merged back into one sub-image. The purpose of this step is to improve the particle recognition rate and reduce wrong classifications, as sketched below. Figure 4 illustrates the transformation and merging process. In the rough merging process, the 4 masks of one particle (possibly fewer than 4) are merged simply according to the intersection-over-union (IoU) and the intersection-over-self (IoS) of their circumscribed rectangles (Fig.5(a)). The IoS here plays the role of preventing mis-merging, as shown in Fig.5(c). Every two masks satisfying the condition IoU_Rec > 0.7 ∩ IoS_Rec-A > 0.7 ∩ IoS_Rec-B > 0.7 are merged. Several particles unrecognized in Fig.4(c) are recognized after rough merging. The merged image still contains some un-merged small masks that belong to the same particle; these are merged correctly in the subsequent precise merging process.
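A minimal sketch of this four-view test-time augmentation follows; `predict_masks` stands in for the trained model's inference call (a hypothetical name), returning a list of binary masks for one image.

```python
# Predict each sub-image under four self-inverse transforms and map the
# resulting masks back into the original frame before merging.
import numpy as np

TRANSFORMS = [
    (lambda a: a,              lambda a: a),               # identity
    (lambda a: np.rot90(a, 2), lambda a: np.rot90(a, 2)),  # 180-degree rotation
    (lambda a: np.fliplr(a),   lambda a: np.fliplr(a)),    # horizontal flip
    (lambda a: np.flipud(a),   lambda a: np.flipud(a)),    # vertical flip
]

def predict_four_views(sub_image, predict_masks):
    all_masks = []
    for forward, inverse in TRANSFORMS:
        for mask in predict_masks(forward(sub_image)):
            all_masks.append(inverse(mask))   # back to the original frame
    return all_masks                          # handed to the rough-merging step
```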
Figure 4.Flowchart of the transformation and rough merging process of one sub-image
Figure 5.Illustration of two kinds of IoU & IoS in rough merging and precise merging processes, respectively. (a) IoU & IoS of two circumscribed rectangles; (b) IoU & IoS of two masks; (c) One example of the usage of IoS
The next step is to merge all the 28 predicted sub-images (Fig.1(f)) via precise merging. The IoU and IoS of the two masks themselves, instead of their circumscribed rectangles, are adopted to conduct a more precise merging (Fig.5(b)), which consumes much more computation than the rectangle-based IoU & IoS, especially when thousands of particles are present in one input image. In this process, every two masks that satisfy the condition IoU_Mask > 0.7 ∩ IoS_Mask-A > 0.7 ∩ IoS_Mask-B > 0.7 are merged into one final mask.
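Below is a minimal sketch of the two overlap tests: the rectangle-based pair used in rough merging and the mask-based pair used in precise merging, with the merge condition reproducing the thresholds stated above.

```python
# IoU and IoS for circumscribed rectangles (rough merging) and binary masks
# (precise merging). IoS normalizes the overlap by one mask's own area.
import numpy as np

def iou_ios_boxes(a, b):
    """a, b: (x1, y1, x2, y2) circumscribed rectangles."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter), inter / area_a, inter / area_b

def iou_ios_masks(ma, mb):
    """ma, mb: boolean mask arrays of equal shape."""
    inter = np.logical_and(ma, mb).sum()
    union = np.logical_or(ma, mb).sum()
    return inter / union, inter / ma.sum(), inter / mb.sum()

def should_merge(iou, ios_a, ios_b, thr=0.7):
    # The condition stated in the text: all three overlaps above the threshold.
    return iou > thr and ios_a > thr and ios_b > thr
```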
1.4 Measurement error and compensation
The error of the particle measurement comes from two sources. The first is the deviation of the boundary calculation caused by the aliasing effect of the square-pixel tessellation[26]. To suppress it, the extracted mask boundary is smoothed with a spline interpolation[27] before the perimeter and projected area are computed (Fig.6(a)). The second is the residual error between the measured and true size that remains after smoothing, which is compensated with a fitted residual function as described below.
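A minimal sketch of the boundary smoothing and the resulting measurements follows, using a periodic smoothing spline for the closed boundary (the paper cites de Boor's spline text; the parameter values here are illustrative).

```python
# Smooth a closed particle boundary with a periodic spline, then measure it.
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_boundary(x, y, n_points=400, smooth=5.0):
    """x, y: ordered pixel coordinates of one particle's closed boundary."""
    tck, _ = splprep([x, y], s=smooth, per=True)   # periodic => closed curve
    xs, ys = splev(np.linspace(0.0, 1.0, n_points), tck)
    return np.asarray(xs), np.asarray(ys)

def perimeter_and_area(xs, ys):
    perimeter = np.hypot(np.diff(xs), np.diff(ys)).sum()
    # shoelace formula for the enclosed (projected) area
    area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
    return perimeter, area
```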
Figure 6.(a) Illustration of particle boundary smoothing and error compensation; (b) Fitted perimeter and area residual function based on scattered deviation values of standard circles
To compensate the second residual error, a set of standard circles with evenly spaced diameters from 5 µm to 100 µm is predicted before predicting the input image. The deviation between the output result and the ground truth of these standard circles is calculated, and a residual function is fitted to the scattered deviation values. Figure 6(b) shows the fitted residual function curves of the perimeter and area. During the statistical process, the residual value (of perimeter or area) is compensated for each mask according to the residual function, evaluated at the mask's equivalent projected area diameter.
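The sketch below shows the residual-compensation idea for the perimeter; a low-order polynomial is assumed as the fitting model, since the paper does not state which one was used.

```python
# Fit a residual function on standard circles of known diameter, then use it
# to correct measurements at statistics time.
import numpy as np

diameters = np.arange(5, 101, 5)                    # 5-100 um standard circles

def fit_residual(measured, diameters):
    """measured: pipeline output (e.g. perimeter) for each standard circle."""
    truth = np.pi * diameters                       # ground-truth perimeter
    residual = measured - truth                     # scattered deviation values
    coeffs = np.polyfit(diameters, residual, deg=2) # fitted residual function
    return lambda d: np.polyval(coeffs, d)

# At statistics time, subtract the predicted residual from each measurement:
# corrected_perimeter = perimeter - residual_fn(equivalent_diameter)
```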
1.5 Statistical analysis
Except for the particles at the edge of the image, each particle in the original image (2048×2048 pixel) is classified, and the information of its mask is extracted. There are two methods to calculate the size of a particular particle. One is the equivalent projected area diameter, i.e., the diameter of a circle with the same projected area as the particle. However, as nearly 30% of the particles in one image are occluded, the equivalent projected area diameter is inaccurate for them. The other is the minimum circumscribed circle diameter. This descriptor can provide the diameter of the “inc_sphere” particles, which account for 90% of the occluded particles. The “ins_sphere” particles, accounting for 3% of all particles, are excluded from the PSD statistics due to the limitation of 2-dimensional images.
The particle’s degree of sphericity is calculated using the following formula (also called the root of form factor)[28]:

$$\psi = \frac{\sqrt{4\pi A}}{P}$$

where $A$ is the projected area of the particle and $P$ is the perimeter of its projected boundary.
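A minimal sketch of these descriptors follows, using OpenCV for the minimum circumscribed (enclosing) circle; `mask` is one particle's binary mask, and the µm-per-pixel scale would come from the field of view (895 µm over 2048 pixels, about 0.44 µm/pixel).

```python
# Size and shape descriptors for one particle mask.
import cv2
import numpy as np

def descriptors(mask, um_per_px):
    cnts, _ = cv2.findContours(mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(cnts, key=cv2.contourArea)             # main particle contour
    area = cv2.contourArea(cnt) * um_per_px ** 2
    perimeter = cv2.arcLength(cnt, closed=True) * um_per_px
    d_equiv = 2.0 * np.sqrt(area / np.pi)            # equivalent projected area diameter
    (_, _), r = cv2.minEnclosingCircle(cnt)
    d_circum = 2.0 * r * um_per_px                   # minimum circumscribed circle diameter
    sphericity = np.sqrt(4.0 * np.pi * area) / perimeter   # root of form factor
    return d_equiv, d_circum, sphericity
```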
The spheroidization ratio (SR) is defined as follows:

$$SR = \frac{N_{\mathrm{total}} - N_{\mathrm{non\text{-}sphere}}}{N_{\mathrm{total}}} \times 100\%$$

where $N_{\mathrm{total}}$ is the total number of detected particles and $N_{\mathrm{non\text{-}sphere}}$ is the number of particles classified as “non_sphere”.
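As a check, the definition reproduces the paper's own figures from Section 2.2:

```python
# Spheroidization ratio from the class counts; the numbers reproduce the
# paper's example (646 non-sphere particles out of 12192 in total).
n_total, n_non_sphere = 12192, 646
sr = (n_total - n_non_sphere) / n_total * 100
print(f"SR = {sr:.2f}%")    # SR = 94.70%
```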
2 Results and discussion
2.1 Output image comparison
The output image of our model is shown in Fig.7(d). Each recognized particle is labeled and colored, where the label gives the class and the probability that the particle belongs to it, and the color represents the class intuitively: green for “com_sphere”, blue for “inc_sphere”, orange for “ins_sphere”, and red for “non_sphere”. We use Particlemetric in the Phenom ProSuite Software, a professional microscopy image processing package, to compare the proposed method with the traditional image segmentation method (Fig.7(c) and 7(e)). The recognition accuracy of the proposed method is 96.95%, higher than the 78.44% of Particlemetric.
Figure 7.Predicted results and comparison with Particlemetric in the Phenom ProSuite Software. (a) Raw image; (b) Output segmentation result of Particlemetric; (c) Four enlarged detail regions of (b); (d) Output result of proposed method; (e) Four enlarged detail regions of (d)
As the non-sphere particles have complicated random surface textures and shapes, the traditional method shows poor recognition ability on them. In comparison, the proposed model can recognize these non-sphere particles correctly by deeply learning their complex features, and some small spherical particles adhering to a non-sphere particle can also be detected. Moreover, it is hard for the traditional method to separate two particles that are close to or overlapping each other (Fig.7(c)), whereas our system can easily segment them and tell which one is un-occluded.
2.2 Statistical results comparison
The statistical results for 8 raw images in terms of PSD and DSD are shown in Fig.8. In total, 9374 particles were detected by Particlemetric, fewer than the 12192 detected by our method. In the traditional method, most of the small particles attached to big ones are detected as part of the latter (Fig.7(e)), which is avoided by our method. Non-sphere particles were recognized by the traditional method as many tiny particles smaller than 5 µm (Fig.7(c)), which do not appear in the corresponding PSD results (Fig.8(a)). This inconsistency is due to the fact that particles below a certain size are considered noise, according to the image resolution setting of the software.
An obvious difference in PSD is shown between laser diffraction (using an LS 13 320 Tornado) and microscopy image particle segmentation (Fig.8(a)). This is attributed to satellite particles, small particles that attach to the surface of larger ones during powder formation[29]: laser diffraction measures a large particle together with its satellites as a single particle, whereas image segmentation can still detect the satellites individually.
Figure 8.Statistical analysis results and comparison. (a) PSD results measured by Particlemetric, our method and the laser diffraction technique, respectively; (b) Degree of sphericity distribution (DSD) results measured by Particlemetric and the proposed method
Unlike the other two methods, the spheroidization ratio can be provided by the proposed model: 646 non-sphere particles are counted among the total 12192 particles in the 8 raw images, corresponding to an SR value of 94.70%.
3 Conclusion
In this study, a spherical particle image segmentation and auto-statistics system is proposed by employing deep learning and mask merging techniques. The proposed method can recognize particles of four typical shapes and extract their feature and size information, before providing the particle size distribution, degree of sphericity and spheroidization ratio of the powder. Superior to the existing methods of image analysis and laser diffraction, the proposed method can also detect overlapped spherical particles with high accuracy, automatically calculate the spheroidization ratio of the powder, and provide an orientation for measuring the satellite ratio of spherical powder. Besides providing accurate particle size and shape information during the production process of spherical powder, the proposed method can also be extended to a large variety of particles.
References
[1] S Cooke, K Ahmadi, S Willerth, et al. Metal additive manufacturing: Technology, metallurgy and modelling. Journal of Manufacturing Processes, 57, 978-1003(2020).
[2] Qian M, Froes F H. Titanium Powder Metallurgy: Science, Technology and Applications[M]. Oxford: Butterworth-Heinemann, 2015.
[3] A Strondl, O Lyckfeldt, H K Brodin, et al. Characterization and control of powder properties for additive manufacturing. JOM, 67, 549-554(2015).
[4] P Sun, Z Z Fang, Y Zhang, et al. Review of the methods for production of spherical Ti and Ti alloy powder. JOM, 69, 1853-1860(2017).
[5] W-H Wei, L-Z Wang, T Chen, et al. Study on the flow properties of Ti-6Al-4V powders prepared by radio-frequency plasma spheroidization. Advanced Powder Technology, 28, 2431-2437(2017).
[6] J A Slotwinski, E J Garboczi, P E Stutzman, et al. Characterization of metal powders used for additive manufacturing. Journal of Research of the National Institute of Standards and Technology, 119, 460(2014).
[7] A B Spierings, M Voegtlin, T U Bauer, et al. Powder flowability characterisation methodology for powder-bed-based metal additive manufacturing. Progress in Additive Manufacturing, 1, 9-20(2016).
[8] ISO 13322-1. Particle size analysis - Image analysis methods - Part 1: Static image analysis methods[S]. Switzerland: [s.n.], 2014.
[9] Thermo Scientific. Thermo Scientific ParticleMetric[OL]. [2021-03-21]. https://www.thermofisher.cn/order/catalog/product/PARTICLEMETRIC?SID=srch-srp-PARTICLEMETRIC.
[10] ISO 14488. Particulate materials - Sampling and sample splitting for the determination of particulate properties[S]. Switzerland: [s.n.], 2007.
[11] Z Chong, M Chaoyang, W Zicheng, et al. Spheroidization of TC4 (Ti6Al4V) alloy powders by radio frequency plasma processing. Rare Metal Materials and Engineering, 48, 446-451(2019).
[12] A B Oktay, A Gurses. Automatic detection, localization and segmentation of nano-particles with deep learning in microscopy images. Micron, 120, 113-119(2019).
[13] C T Rueden, J Schindelin, M C Hiner, et al. ImageJ2:ImageJ for the next generation of scientific image data. BMC bioinformatics, 18, 1-26(2017).
[14] T Grant, A Rohou, N Grigorieff. cisTEM, user-friendly software for single-particle image processing. eLife, 7, e35383(2018).
[15] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017.
[16] Y Yu, K Zhang, L Yang, et al. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Computers and Electronics in Agriculture, 163, 104846(2019).
[17] M Frei, F E Kruis. Image-based size analysis of agglomerated and partially sintered particles via convolutional neural networks. Powder Technology, 360, 324-336(2020).
[18] Y Wu, M Lin, S Rohani. Particle characterization with on-line imaging and neural network image analysis. Chemical Engineering Research and Design, 157, 114-125(2020).
[19] H Huang, J Luo, E Tutumluer, et al. Automated segmentation and morphological analyses of stockpile aggregate images using deep convolutional neural networks. Transportation Research Record, 2674, 285-298(2020).
[20] J Ruiz-Santaquiteria, G Bueno, O Deniz, et al. Semantic versus instance segmentation in microscopic algae detection. Engineering Applications of Artificial Intelligence, 87, 103271(2020).
[21] B C Russell, A Torralba, K P Murphy, et al. LabelMe:a database and web-based tool for image annotation. International Journal of Computer Vision, 77, 157-173(2008).
[22] S Ren, K He, R Girshick, et al. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28, 91-99(2015).
[23] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[24] A Canziani, A Paszke, E Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint, arXiv:1605.07678(2016).
[25] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common objects in context[C]//European Conference on Computer Vision, 2014.
[26] P Vangla, N Roy, M L Gali. Image based shape characterization of granular materials and its effect on kinematics of particle motion. Granular Matter, 20, 1-19(2018).
[27] De Boor C. A Practical Guide to Splines[M]. New York: Springer-Verlag, 1978.
[28] M L Hentschel, N W Page. Selection of descriptors for particle shape characterization. Particle & Particle Systems Characterization, 20, 25-38(2003).
[29] S Özbilen. Satellite formation mechanism in gas atomised powders. Powder Metallurgy, 42, 70-78(1999).
