Spacecraft Recovery & Remote Sensing, Vol. 45, Issue 3, 82 (2024)
Sijun DONG and Xiaoliang MENG*
Author Affiliations
  • School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430000, China
    DOI: 10.3969/j.issn.1009-8518.2024.03.009
    Citation: Sijun DONG, Xiaoliang MENG. Text-Semantics-Driven Feature Extraction from Remote Sensing Imagery[J]. Spacecraft Recovery & Remote Sensing, 2024, 45(3): 82.

    Abstract

    With the rapid development of remote sensing technology, high-precision feature extraction from remote sensing imagery has become increasingly important in fields such as geographic information science, urban planning, and environmental monitoring. However, traditional image-only feature extraction methods often achieve limited accuracy on complex and variable surface features, making it difficult to meet diverse application needs. To address this issue, this paper proposes a novel multimodal remote sensing image semantic segmentation framework (MMRSSEG) that uses deep learning to integrate visual and textual information for high-precision analysis of remote sensing imagery. We conducted a series of experiments on a remote sensing image dataset of buildings; the results show that MMRSSEG significantly improves the accuracy of pixel-level feature extraction compared with traditional image segmentation methods, and in the building recognition task our method outperformed traditional unimodal algorithms. These results demonstrate the effectiveness and promise of incorporating textual information into remote sensing image segmentation.
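    The abstract does not describe MMRSSEG's fusion mechanism, but the core idea it names, conditioning pixel-level segmentation on text semantics, can be sketched with a small cross-attention module. The PyTorch example below is a minimal illustration under assumed choices (cross-attention fusion, a single building class, feature dimensions of 256 and 512, generic encoder outputs); it is not the authors' architecture.

```python
# Hypothetical sketch of text-guided semantic segmentation fusion.
# All module names, dimensions, and the cross-attention design are
# illustrative assumptions, not the MMRSSEG architecture (the abstract
# does not specify it).
import torch
import torch.nn as nn

class TextGuidedSegHead(nn.Module):
    """Fuses token-level text embeddings into per-pixel image features
    via cross-attention, then predicts a binary building mask."""

    def __init__(self, img_dim=256, txt_dim=512, n_heads=8):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, img_dim)  # align text to image space
        self.cross_attn = nn.MultiheadAttention(img_dim, n_heads, batch_first=True)
        self.classifier = nn.Conv2d(img_dim, 1, kernel_size=1)  # 1 class: building

    def forward(self, img_feats, txt_emb):
        # img_feats: (B, C, H, W) from any visual backbone
        # txt_emb:   (B, L, txt_dim) from any text encoder
        b, c, h, w = img_feats.shape
        q = img_feats.flatten(2).transpose(1, 2)   # (B, H*W, C) pixel queries
        kv = self.txt_proj(txt_emb)                # (B, L, C) text keys/values
        fused, _ = self.cross_attn(q, kv, kv)      # each pixel attends to the text
        fused = (q + fused).transpose(1, 2).reshape(b, c, h, w)  # residual, reshape
        return self.classifier(fused)              # (B, 1, H, W) mask logits

if __name__ == "__main__":
    head = TextGuidedSegHead()
    img_feats = torch.randn(2, 256, 64, 64)  # stand-in for backbone features
    txt_emb = torch.randn(2, 16, 512)        # stand-in for text-encoder tokens
    print(head(img_feats, txt_emb).shape)    # torch.Size([2, 1, 64, 64])
```

    In a full pipeline, img_feats would come from a visual backbone such as a ResNet or ViT and txt_emb from a pretrained text encoder; both are stand-in random tensors here.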