• Optics and Precision Engineering
  • Vol. 23, Issue 5, 1474 (2015)
ZHAO Chun-Yang1,2,3,* and ZHAO Huai-Ci1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]
    DOI: 10.3788/ope.20152305.1474
    ZHAO Chun-Yang, ZHAO Huai-Ci. Multimodality robust local feature descriptors[J]. Optics and Precision Engineering, 2015, 23(5): 1474.

    Abstract

    Intensity-based local feature matching methods are sensitive to image contrast variations, so their performance declines significantly when they are applied to multimodal image registration. To solve this problem, a multimodality robust local feature descriptor was proposed and a corresponding feature matching method was developed. Firstly, an extraction method for multimodality robust corners and line segments was proposed based on phase congruency and local direction information, both of which are insensitive to contrast variations. Compared with intensity-based methods, more corresponding corners and line segments were extracted between multimodal images with large contrast differences. Then, a feature region consisting of 48 circular sub-regions was selected with the corner as its center, and a 96-dimensional feature vector was generated from the distances of the corners and the lengths of the line segments located in the feature sub-regions. Finally, a feature matching method based on a normalized correlation function was proposed, and a location-constrained RANdom SAmple Consensus (RANSAC) algorithm was used to remove false matching point pairs. The experimental results indicate that the precision and repeatability of the proposed method on multimodal image matching reach 80% and 13%, respectively. Compared with other intensity-based image matching methods, the precision and repeatability of the proposed method are 2-4 times and 4-7 times those of Symmetric Scale-Invariant Feature Transform (S-SIFT) and Multimodal Speeded-Up Robust Features (MM-SURF), respectively. These results show that the proposed method significantly outperforms state-of-the-art methods.
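
    The matching stage described in the abstract can be illustrated with a short sketch. The Python snippet below matches 96-dimensional descriptors by normalized correlation and then prunes false pairs with a plain RANSAC homography fit via OpenCV. It is a minimal sketch, not the authors' implementation: the correlation threshold, the one-way nearest-candidate rule, the homography model, and all function names are assumptions, and the location constraint used in the paper's RANSAC variant is not reproduced.

    ```python
    import numpy as np
    import cv2


    def _normalize(desc):
        """Zero-mean, unit-norm each row so a dot product equals the
        normalized correlation coefficient."""
        d = np.asarray(desc, dtype=np.float64)
        d = d - d.mean(axis=1, keepdims=True)
        d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
        return d


    def match_by_normalized_correlation(desc_a, desc_b, threshold=0.8):
        """Match descriptors of two images by normalized correlation.

        Sketch only: the threshold value and the nearest-candidate rule are
        illustrative assumptions, not the paper's exact matching criterion.
        """
        a, b = _normalize(desc_a), _normalize(desc_b)
        corr = a @ b.T                           # pairwise correlation matrix
        best = corr.argmax(axis=1)               # best candidate in image B
        keep = corr[np.arange(len(a)), best] > threshold
        return np.flatnonzero(keep), best[keep]  # matched index pairs


    def prune_with_ransac(pts_a, pts_b, reproj_thresh=3.0):
        """Remove false matching point pairs with RANSAC.

        Uses OpenCV's plain homography model; the location constraint from
        the paper is not reproduced here.
        """
        H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, reproj_thresh)
        inliers = mask.ravel().astype(bool)
        return pts_a[inliers], pts_b[inliers], H
    ```

    As a usage note (still under the same assumptions), if `corners_a` and `corners_b` hold the corner coordinates as float32 N×2 arrays, the matched indices returned by `match_by_normalized_correlation` index into them, and `prune_with_ransac(corners_a[idx_a], corners_b[idx_b])` yields the inlier point pairs and the estimated homography.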