• Laser & Optoelectronics Progress
  • Vol. 60, Issue 10, 1010021 (2023)
Xiao Yun*, Kaili Song, Xiaoguang Zhang, and Xinchao Yuan
Author Affiliations
  • School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, Jiangsu, China
    DOI: 10.3788/LOP220812
    Xiao Yun, Kaili Song, Xiaoguang Zhang, Xinchao Yuan. Occluded Video-Based Person Re-Identification Based on Spatial-Temporal Trajectory Fusion[J]. Laser & Optoelectronics Progress, 2023, 60(10): 1010021
    Fig. 1. Video-based person re-identification framework based on spatial-temporal trajectory fusion
    Fig. 2. Structural framework of trajectory prediction
    Fig. 3. Example of temporal trajectory fusion model
    Fig. 4. Example of spatial fusion loss calculation
    Fig. 5. Example of large-scale occlusion video sequences
    Fig. 6. Example of sequence label modification in MARS_traj dataset
    Fig. 7. Composition of MARS_traj dataset
    Fig. 8. Parameter analysis experimental results. (a) Influence of temporal fusion loss value T on Rank-1 and mAP; (b) influence of spatial fusion loss value N on Rank-1 and mAP
    Fig. 9. Visualization of video-based person re-identification results

    Algorithm 1: video-based person re-identification based on spatial-temporal trajectory fusion

    Input: MARS_traj dataset; trajectory prediction model Social-GAN; video-based person re-identification model

    Output: mAP and Rank-k

    1) spatial coordinates and temporal information of each person ID in the query set of video sequences are input into the Social-GAN model;

    2) candidate predicted trajectories are generated by the generator in Social-GAN from the spatial coordinates and temporal information;

    3) the discriminator in Social-GAN evaluates the generated trajectories and yields the matched predicted-trajectory set query_pred;

    4) for i = 1:N_1 do

    5)   for j = 1:N_2 do

    6)     the temporal fusion loss l_j^tem and the spatial fusion loss l_j^sap between the j-th video sequence in gallery and the i-th predicted trajectory in query_pred are computed by Eqs. (3) and (4), respectively;

    7)   end

    8)   l_{i-j} = min_j (l_j^tem + l_j^sap), j ∈ [1, N_2];

    9)   the value of j that attains l_{i-j} is assigned to i_j;

    10)  the i_j-th video sequence of gallery is added to query_TP;

    11) end

    12) fusion features of query_TP and gallery are extracted, respectively;

    13) feature distances between query_TP and gallery are calculated, and the feature vectors of all gallery video sequences are ranked according to these distances;

    14) the probability of a correct match within the ranked gallery is calculated with respect to the query;

    15) return mAP and Rank-k.
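
    As a rough illustration of steps 4)-11) above, the following Python sketch matches each predicted trajectory in query_pred to the gallery sequence that minimizes the sum of the temporal and spatial fusion losses. The functions temporal_fusion_loss and spatial_fusion_loss are hypothetical stand-ins for Eqs. (3) and (4), which are not reproduced here, and the dictionary fields (start_frame, coords) are assumed input formats rather than the authors' actual data structures.

    import numpy as np

    def temporal_fusion_loss(pred_traj, gallery_seq):
        # Stand-in for Eq. (3): penalize temporal misalignment between the
        # predicted trajectory and the gallery sequence (hypothetical form).
        return abs(pred_traj["start_frame"] - gallery_seq["start_frame"])

    def spatial_fusion_loss(pred_traj, gallery_seq):
        # Stand-in for Eq. (4): penalize spatial distance between predicted
        # trajectory coordinates and gallery coordinates (hypothetical form).
        pred = np.asarray(pred_traj["coords"], dtype=float)   # shape (T, 2)
        gal = np.asarray(gallery_seq["coords"], dtype=float)  # shape (T', 2)
        n = min(len(pred), len(gal))
        return float(np.linalg.norm(pred[:n] - gal[:n], axis=1).mean())

    def match_by_trajectory(query_pred, gallery):
        # Steps 4)-11): for every predicted trajectory (i = 1..N_1), find the
        # gallery sequence (j = 1..N_2) with the smallest combined loss and
        # collect it into query_TP.
        query_TP = []
        for pred_traj in query_pred:
            losses = []
            for gallery_seq in gallery:
                l_tem = temporal_fusion_loss(pred_traj, gallery_seq)
                l_spa = spatial_fusion_loss(pred_traj, gallery_seq)
                losses.append(l_tem + l_spa)
            i_j = int(np.argmin(losses))       # j minimizing l_tem + l_spa
            query_TP.append(gallery[i_j])      # i_j-th gallery sequence into query_TP
        return query_TP

    The resulting query_TP and the gallery sequences would then be passed to the chosen re-identification backbone (e.g., COSAM or TCLNet) for fusion feature extraction and distance-based ranking, as in steps 12)-15).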

    Subset     ID     Tracklets
    query      90     90
    gallery    119    1271
    total      119    1361
    Table 1. MARS_traj dataset
    Method           MARS               MARS_traj
                     mAP      Rank-1    mAP      Rank-1
    COSAM            80.50    81.20     71.90    69.40
    COSAM+STTF       -        -         93.00    92.90
    STE-NAVE         77.80    85.05     66.15    70.59
    STE-NAVE+STTF    -        -         92.88    96.47
    AP3D             84.10    89.10     62.90    62.40
    AP3D+STTF        -        -         90.10    91.70
    TCLNet           85.10    89.80     69.45    72.94
    TCLNet+STTF      -        -         94.82    96.47
    Table 2. Performance evaluation on MARS and MARS_traj datasets