• Optics and Precision Engineering
  • Vol. 32, Issue 10, 1552 (2024)
Guanghui LIU1,2,*, Zhe SHAN1,2, Yuanhai YANG1,2, Heng WANG1,2, Yuebo MENG1,2,* and Shengjun XU1,2
Author Affiliations
  • 1College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
  • 2Xi'an Key Laboratory of Intelligent Technology for Building and Manufacturing, Xi'an 710055, China
    DOI: 10.37188/OPE.20243210.1552
    Guanghui LIU, Zhe SHAN, Yuanhai YANG, Heng WANG, Yuebo MENG, Shengjun XU. Optical remote sensing road extraction network based on GCN guided model viewpoint[J]. Optics and Precision Engineering, 2024, 32(10): 1552

    Abstract

    In optical remote sensing images, roads are easily affected by factors such as occlusions, pavement materials, and the surrounding environment, resulting in blurred features. Even when existing road extraction methods enhance their feature perception capabilities, they still produce a large number of misjudgments in feature-blurred areas. To address this problem, this paper proposed a road extraction network based on a GCN-guided model viewpoint (RGGVNet). RGGVNet adopted an encoder-decoder structure and designed a GCN-based viewpoint guidance module (GVPG) that repeatedly guided the model viewpoint at the connections between the encoder and decoder, thereby enhancing attention to feature-blurred areas. GVPG exploited the feature-weight averaging characteristic of GCN information propagation: the road salience levels of different areas were encoded as a Laplacian matrix that participated in GCN information propagation, thereby guiding the model viewpoint. At the same time, a dense guidance viewpoint strategy (DGVS) was proposed, which densely connected the encoder, GVPG modules, and decoder to ensure effective guidance of the model viewpoint while alleviating optimization difficulties. In the decoding stage, a multi-resolution feature fusion (MRFF) module was designed to minimize the information offset and loss of road features at different scales during feature fusion and upsampling. On two public remote sensing road datasets, the IoU of the proposed method reached 65.84% and 69.36%, and the F1-score reached 79.40% and 81.90%, respectively. Quantitative and qualitative experimental results show that the proposed method outperforms other mainstream methods.
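
    For intuition, the sketch below illustrates the general idea of salience-guided GCN propagation described in the abstract: a 1x1 convolution predicts per-pixel road salience, the salience weights a graph over downsampled spatial nodes, and one step of normalized graph propagation redistributes features toward salient regions before a residual fusion. All module names, tensor shapes, and the salience-to-graph construction here are illustrative assumptions, not the authors' released implementation.

    ```python
    # Hypothetical sketch of a GVPG-style module (PyTorch). Names, shapes, and the
    # salience-to-adjacency construction are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class GVPGSketch(nn.Module):
        def __init__(self, channels: int, grid: int = 16):
            super().__init__()
            self.grid = grid                                        # graph nodes on a grid x grid lattice
            self.salience = nn.Conv2d(channels, 1, kernel_size=1)   # road-salience head
            self.gcn_weight = nn.Linear(channels, channels, bias=False)  # GCN weight W

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            # Downsample the feature map to a small grid of graph nodes.
            nodes_feat = F.adaptive_avg_pool2d(x, self.grid)         # (B, C, g, g)
            sal = torch.sigmoid(self.salience(nodes_feat))           # (B, 1, g, g)

            n = self.grid * self.grid
            feat = nodes_feat.flatten(2).transpose(1, 2)             # (B, N, C)
            s = sal.flatten(2).transpose(1, 2)                       # (B, N, 1)

            # Salience-weighted adjacency: edges between salient nodes are stronger.
            adj = s @ s.transpose(1, 2)                              # (B, N, N)
            adj = adj + torch.eye(n, device=x.device)                # add self-loops
            deg = adj.sum(-1, keepdim=True).clamp(min=1e-6)          # node degrees
            norm_adj = adj / deg                                     # row-normalized propagation matrix

            # One step of GCN propagation; the averaging spreads information
            # between nodes in proportion to their road salience.
            feat = F.relu(self.gcn_weight(norm_adj @ feat))          # (B, N, C)

            # Reshape back, upsample to the input resolution, and fuse residually.
            guided = feat.transpose(1, 2).reshape(b, c, self.grid, self.grid)
            guided = F.interpolate(guided, size=(h, w), mode="bilinear",
                                   align_corners=False)
            return x + guided


    if __name__ == "__main__":
        module = GVPGSketch(channels=64)
        out = module(torch.randn(2, 64, 128, 128))
        print(out.shape)  # torch.Size([2, 64, 128, 128])
    ```

    In the paper's design such a module is placed at the encoder-decoder connections and densely wired to both sides (the DGVS strategy); the sketch only shows a single, self-contained guidance step.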