• Optics and Precision Engineering
  • Vol. 31, Issue 18, 2752 (2023)
Yuan LI1, Xu SHI1, Zhengchun YANG2, Qijuan TAN3,*, and Hong HUANG1,*
Author Affiliations
  • 1Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
  • 2Women and Children’s Hospital of Chongqing Medical University, Chongqing 401147, China
  • 3Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 40000, China
    DOI: 10.37188/OPE.20233118.2752
    Yuan LI, Xu SHI, Zhengchun YANG, Qijuan TAN, Hong HUANG. Spatial-spectral Transformer for classification of medical hyperspectral images[J]. Optics and Precision Engineering, 2023, 31(18): 2752

    Abstract

    The development of hyperspectral imaging (HSI) technology offers new avenues for non-invasive medical imaging. However, medical hyperspectral images are characterized by high dimensionality, high redundancy, and the property of “graph-spectral uniformity,” necessitating the design of high-precision diagnostic algorithms. In recent years, transformer models have been widely applied in medical hyperspectral image processing. However, medical hyperspectral images obtained with different instruments and acquisition methods differ significantly, which considerably hinders the practical application of existing transformer-based diagnostic models. To address these issues, a spatial–spectral self-attention transformer (S3AT) algorithm is proposed to adaptively mine the intrinsic relations between pixels and bands. First, in the transformer encoder, a spatial–spectral self-attention mechanism is employed to capture key spatial information and important bands of hyperspectral images from different viewpoints, and the self-attention obtained from these views is then fused. Second, in the classification stage, the predictions from different views are fused according to learned weights. Experimental results on in-vivo human brain and blood cell HSI datasets indicate that the overall classification accuracies reach 82.25% and 91.74%, respectively, demonstrating that the proposed S3AT algorithm yields enhanced classification performance on medical hyperspectral images.
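
    To make the two core ideas of the abstract concrete (self-attention computed over a spatial view and a spectral view, followed by fusion with learned weights), the following is a minimal PyTorch sketch. It is not the authors' released code; layer sizes, the mean-pooling step, and the softmax-normalized view weights are illustrative assumptions.

    ```python
    # Minimal sketch of a spatial-spectral self-attention block in the spirit of S3AT.
    # Assumptions (not from the paper): embedding size, pooling, and fusion details.
    import torch
    import torch.nn as nn

    class SpatialSpectralSelfAttention(nn.Module):
        """Self-attention over two views of a hyperspectral patch:
        a spatial view (tokens = pixels) and a spectral view (tokens = bands),
        with the two outputs fused by learned weights."""

        def __init__(self, n_pixels, n_bands, dim=64, heads=4):
            super().__init__()
            # Project pixel spectra / band images into a shared embedding dimension.
            self.spatial_embed = nn.Linear(n_bands, dim)    # each pixel token carries its spectrum
            self.spectral_embed = nn.Linear(n_pixels, dim)  # each band token carries its spatial map
            self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.spectral_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # Learnable fusion weights over the two views (normalized by softmax).
            self.view_logits = nn.Parameter(torch.zeros(2))
            self.head = nn.Linear(dim, dim)

        def forward(self, patch):
            # patch: (B, n_pixels, n_bands), a flattened hyperspectral patch
            spa = self.spatial_embed(patch)                   # (B, n_pixels, dim)
            spe = self.spectral_embed(patch.transpose(1, 2))  # (B, n_bands, dim)
            spa_out, _ = self.spatial_attn(spa, spa, spa)     # spatial self-attention
            spe_out, _ = self.spectral_attn(spe, spe, spe)    # spectral self-attention
            # Pool each view to one descriptor and fuse with learned weights.
            w = torch.softmax(self.view_logits, dim=0)
            fused = w[0] * spa_out.mean(dim=1) + w[1] * spe_out.mean(dim=1)
            return self.head(fused)                           # (B, dim) fused feature

    # Example: a 9x9 patch (81 pixels) with 128 spectral bands.
    if __name__ == "__main__":
        block = SpatialSpectralSelfAttention(n_pixels=81, n_bands=128)
        x = torch.randn(2, 81, 128)
        print(block(x).shape)  # torch.Size([2, 64])
    ```

    In the paper, an analogous weighted fusion is also applied at the classification stage, where predictions from the different views are combined according to learned weights; a classifier head over the fused feature above would play that role in this sketch.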