Laser & Optoelectronics Progress, Vol. 60, Issue 16, 1615008 (2023)
Object Pose Estimation Method Based on Keypoint Distance Network
Meng Xia, Hongzhi Du, Jiarui Lin, Yanbiao Sun*, and Jigui Zhu
Author Affiliations
  • State Key Laboratory of Precision Measurement Technology and Instrument, Tianjin University, Tianjin 300072, China
DOI: 10.3788/LOP223015
Citation: Meng Xia, Hongzhi Du, Jiarui Lin, Yanbiao Sun, Jigui Zhu. Object Pose Estimation Method Based on Keypoint Distance Network[J]. Laser & Optoelectronics Progress, 2023, 60(16): 1615008

    Abstract

    Herein, we present a novel keypoint distance learning network that exploits the geometric invariance of point-to-keypoint distances under pose transformation. By adding distance estimation to the network and selecting robust keypoints, the method improves the accuracy of deep-learning-based six-degree-of-freedom pose estimation. The proposed method consists of two stages. First, a keypoint distance network is designed: it extracts RGB-D image features using a backbone network module and a feature fusion structure, and uses a multilayer perceptron to predict, for each point, its distances to the keypoints, its semantic label, and a confidence score. Second, keypoint coordinates are calculated from the multi-dimensional network outputs using the visual point voting method and the four-point distance positioning method. Finally, object poses are obtained through least-squares fitting. To verify the effectiveness of the proposed method, we evaluate it on the public LineMOD and YCB-Video datasets. Experimental results show that, compared with the ResNet backbone in the original PSPNet framework, the proposed network reduces the number of parameters by 50% while improving accuracy, with gains of 1.1 and 5.8 percentage points on the two datasets, respectively.
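
    The second stage described in the abstract combines distance-based keypoint localization with least-squares pose fitting. The Python sketch below illustrates, under stated assumptions, how these two steps could be realized: locate_keypoint recovers a keypoint by linearizing the distance equations from at least four non-coplanar observed points (a standard multilateration formulation consistent with "four point distance positioning"), and fit_pose computes the rigid transform with a Kabsch-style SVD fit. The function names, the use of plain NumPy, and the synthetic check are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch of the second stage: keypoint recovery from predicted
# distances (multilateration) followed by least-squares 6DoF pose fitting.
# All names and the synthetic data are illustrative assumptions.
import numpy as np

def locate_keypoint(points, dists):
    """Estimate a keypoint x from observed points p_i and predicted distances d_i.

    Uses ||x - p_i||^2 = d_i^2; subtracting the equation of the first point
    cancels the quadratic term ||x||^2 and leaves a linear system A x = b.
    At least four non-coplanar points are required for a unique 3D solution.
    """
    p0, d0 = points[0], dists[0]
    A = 2.0 * (points[1:] - p0)                                   # (N-1, 3)
    b = (d0**2 - dists[1:]**2
         + np.sum(points[1:]**2, axis=1) - np.sum(p0**2))         # (N-1,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def fit_pose(model_kps, scene_kps):
    """Least-squares rigid fit (Kabsch): find R, t minimizing sum ||R m_i + t - s_i||^2."""
    mc, sc = model_kps.mean(axis=0), scene_kps.mean(axis=0)
    H = (model_kps - mc).T @ (scene_kps - sc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = sc - R @ mc
    return R, t

if __name__ == "__main__":
    # Minimal noiseless check with synthetic data.
    rng = np.random.default_rng(0)
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R_true) < 0:
        R_true[:, 0] *= -1.0
    t_true = np.array([0.1, -0.2, 0.5])

    model_kps = rng.uniform(-0.1, 0.1, size=(8, 3))               # keypoints in the model frame
    scene_kps = model_kps @ R_true.T + t_true                     # ground-truth scene keypoints
    scene_pts = rng.uniform(-0.3, 0.3, size=(200, 3)) + t_true    # observed object points

    est_kps = np.array([
        locate_keypoint(scene_pts, np.linalg.norm(scene_pts - k, axis=1))
        for k in scene_kps
    ])
    R_est, t_est = fit_pose(model_kps, est_kps)
    print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```

    In this noiseless synthetic check the recovered rotation and translation match the ground truth; in the pipeline described by the paper, the exact distances used here would be replaced by the per-point distances, semantics, and confidences predicted by the keypoint distance network.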