• Optics and Precision Engineering
  • Vol. 26, Issue 5, 1231 (2018)
LIN Jin-hua1,* and WANG Yan-jie2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.3788/ope.20182605.1231
    LIN Jin-hua, WANG Yan-jie. Three-dimensional reconstruction of semantic scene based on RGB-D map[J]. Optics and Precision Engineering, 2018, 26(5): 1231

    Abstract

    Reconstruction of 3D objects is an important part of a machine vision system, and semantic understanding of 3D objects is a core function of such a system. In this paper, 3D restoration was combined with the semantic understanding of 3D objects, and a 3D semantic scene recovery network was proposed. Semantic classification and scene restoration of a 3D scene were achieved using only a single RGB-D map as input. Firstly, an end-to-end 3D convolutional neural network was established. The input of the network was a depth map; a 3D context module was used to learn the region within the camera view, and 3D voxels with semantic labels were then generated. Secondly, a synthetic dataset with dense volume labels was established to train the deep learning network. Finally, the experimental results showed that the recovery performance was improved by 2.0% compared with the state of the art. It can be seen that the 3D learning network performs well in 3D scene restoration and achieves high accuracy in the semantic annotation of objects in the scene.
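    As an illustration of the pipeline the abstract outlines, the sketch below shows a minimal 3D convolutional network of this general kind: a voxelized depth volume is fed in, a dilated-convolution context module aggregates information across the visible region, and a 1x1x1 convolution head outputs per-voxel semantic class scores. This is a hedged sketch written in PyTorch; the layer names (ContextBlock3D, SemanticSceneNet), channel counts, class count, and the 60x36x60 input grid are illustrative assumptions, not the authors' exact architecture or training setup.

import torch
import torch.nn as nn


class ContextBlock3D(nn.Module):
    """Residual block of dilated 3D convolutions that enlarges the receptive field."""

    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=dilation, dilation=dilation),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection keeps local geometry while adding scene context.
        return self.relu(x + self.conv(x))


class SemanticSceneNet(nn.Module):
    """Voxelized depth map in, per-voxel semantic class scores out (illustrative)."""

    def __init__(self, num_classes=12):
        super().__init__()
        self.encoder = nn.Sequential(  # downsample the input volume once
            nn.Conv3d(1, 16, 7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.context = nn.Sequential(  # "3D context module": growing dilation rates
            ContextBlock3D(32, dilation=1),
            ContextBlock3D(32, dilation=2),
        )
        self.head = nn.Conv3d(32, num_classes, 1)  # 1x1x1 conv -> class scores per voxel

    def forward(self, vox):  # vox: (B, 1, D, H, W) voxel grid from a single depth map
        feat = self.encoder(vox)
        feat = self.context(feat)
        return self.head(feat)  # (B, num_classes, D/2, H/2, W/2)


if __name__ == "__main__":
    net = SemanticSceneNet(num_classes=12)
    depth_volume = torch.randn(1, 1, 60, 36, 60)  # assumed TSDF-style encoding of one depth map
    logits = net(depth_volume)
    print(logits.shape)  # torch.Size([1, 12, 30, 18, 30])

    Training against a synthetic dataset with dense volume labels, as the abstract describes, would then amount to a per-voxel cross-entropy loss between these class scores and a ground-truth label grid downsampled to the output resolution.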