• Optical Instruments
  • Vol. 42, Issue 4, 33 (2020)
Jianpeng SU, Yingping HUANG*, Bogan ZHAO, and Xing HU
Author Affiliations
  • School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
    DOI: 10.3969/j.issn.1005-5630.2020.04.006
    Jianpeng SU, Yingping HUANG, Bogan ZHAO, Xing HU. Research on visual odometry using deep convolution neural network[J]. Optical Instruments, 2020, 42(4): 33

    Abstract

    Visual odometry uses visual cues to estimate the pose parameters of camera motion and thereby localize an agent. Existing visual odometry relies on a complex pipeline of feature extraction, feature matching/tracking, and motion estimation. This paper presents an end-to-end monocular visual odometry method based on a convolutional neural network (CNN). The method modifies a classification CNN into a sequential inter-frame variation CNN: by virtue of deep learning, it extracts global inter-frame variation features from video images and outputs pose parameters through three fully-connected layers. The method has been tested on the public KITTI dataset. The experimental results show that the proposed Deep-CNN-VO model can estimate the motion trajectory of the camera, demonstrating the feasibility of the approach. While simplifying the complex traditional pipeline, the model also improves accuracy compared with conventional visual odometry systems.
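    The abstract describes the overall data flow: two consecutive frames are presented to the network together, convolutional layers extract a global inter-frame variation feature, and fully-connected layers regress the pose. A minimal NumPy sketch of that forward pass is given below. It is purely illustrative: the layer sizes, random weights, and the `deep_cnn_vo_forward` function name are assumptions, not the trained Deep-CNN-VO architecture from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(x, w):
        # x: (C_in, H, W); w: (C_out, C_in, k, k). Valid convolution, stride 1.
        c_out, _, k, _ = w.shape
        out = np.zeros((c_out, x.shape[1] - k + 1, x.shape[2] - k + 1))
        for o in range(c_out):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
        return out

    def relu(x):
        return np.maximum(x, 0.0)

    def deep_cnn_vo_forward(frame_t, frame_t1, params):
        # Stack the two consecutive frames along the channel axis so the
        # network sees inter-frame variation rather than a single image.
        x = np.stack([frame_t, frame_t1])              # (2, H, W)
        x = relu(conv2d(x, params["conv_w"]))          # variation features
        h = x.mean(axis=(1, 2))                        # global average pooling
        fcs = params["fc"]
        for i, (w, b) in enumerate(fcs):               # three fully-connected layers
            h = w @ h + b
            if i < len(fcs) - 1:
                h = relu(h)                            # linear output on the last layer
        return h  # 6-DoF pose: (tx, ty, tz, roll, pitch, yaw)

    # Toy parameters (random weights; a real model would be trained on KITTI).
    params = {
        "conv_w": rng.normal(size=(8, 2, 3, 3)) * 0.1,
        "fc": [
            (rng.normal(size=(16, 8)) * 0.1, np.zeros(16)),
            (rng.normal(size=(16, 16)) * 0.1, np.zeros(16)),
            (rng.normal(size=(6, 16)) * 0.1, np.zeros(6)),
        ],
    }

    frame_t = rng.random((8, 8))    # two tiny grayscale "frames"
    frame_t1 = rng.random((8, 8))
    pose = deep_cnn_vo_forward(frame_t, frame_t1, params)
    print(pose.shape)  # (6,)
    ```

    Chaining such per-pair pose increments over a video sequence yields the camera trajectory that the paper evaluates on KITTI.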