[1] Zhu L, Tian G, Wang B et al. Multi-attention based semantic deep hashing for cross-modal retrieval[J]. Applied Intelligence, 51, 5927-5939(2021).
[2] Wu J, Xie X, Nie L et al. Reconstruction regularized low-rank subspace learning for cross-modal retrieval[J]. Pattern Recognition, 113, 107813(2021).
[3] Cheng Q R, Gu X D. Bridging multimedia heterogeneity gap via Graph Representation Learning for cross-modal retrieval[J]. Neural Networks, 134, 143-162(2021).
[4] Zhang J, Peng Y. Query-adaptive image retrieval by deep-weighted hashing[J]. IEEE Transactions on Multimedia, 20, 2400-2414(2018).
[5] Ahmad J, Muhammad K, Baik S W. Medical image retrieval with compact binary codes generated in frequency domain using highly reactive convolutional features[J]. Journal of Medical Systems, 42, 1-19(2018).
[6] Lu X, Song L, Xie R et al. Deep binary representation for efficient image retrieval[J]. Advances in Multimedia, 2017, 1-10(2017).
[7] Duan L, Zhao C, Miao J et al. Deep hashing based fusing index method for large-scale image retrieval[J]. Applied Computational Intelligence and Soft Computing, 2017, 250-257(2017).
[8] Ye D, Li Y, Tao C et al. Multiple feature hashing learning for large-scale remote sensing image retrieval[J]. ISPRS International Journal of Geo-Information, 6, 364(2017).
[9] Ding G G, Guo Y C, Zhou J L et al. Large-scale cross-modality search via collective matrix factorization hashing[J]. IEEE Transactions on Image Processing, 25, 5427-5440(2016).
[10] Wang D, Gao X B, Wang X M et al. Semantic topic multimodal hashing for cross-media retrieval[C]. Proceedings of the 24th International Joint Conference on Artificial Intelligence, 3890-3896(2015).
[11] Zhang D Q, Li W J. Large-scale supervised multimodal hashing with semantic correlation maximization[C]. Proceedings of the 28th AAAI Conference on Artificial Intelligence, 2177-2183(2014).
[12] Lin Z J, Ding G G, Hu M Q et al. Semantics-preserving hashing for cross-view retrieval[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3864-3872(2015).
[13] Gao J, Zhang W, Zhong F et al. UCMH: unpaired cross-modal hashing with matrix factorization[J]. Neurocomputing, 418, 178-190(2020).
[14] Xiong H, Ou W, Yan Z et al. Modality-specific matrix factorization hashing for cross-modal retrieval[J]. Journal of Ambient Intelligence and Humanized Computing, 1-15(2020).
[15] Jiang Q Y, Li W J. Deep cross-modal hashing[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3270-3278(2017).
[16] Yang E, Deng C, Liu W et al. Pairwise relationship guided deep hashing for cross-modal retrieval[C]. Proceedings of the 31st AAAI Conference on Artificial Intelligence, 1618-1625(2017).
[17] Cao Y, Liu B, Long M S et al. Cross-modal Hamming hashing[M]//Ferrari V, Hebert M, Sminchisescu C, et al. Computer Vision-ECCV 2018. Lecture Notes in Computer Science, 11205, 207-223(2018).
[18] Liu X, Cheung Y, Hu Z et al. Adversarial tri-fusion hashing network for imbalanced cross-modal retrieval[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 5, 607-619(2021).
[19] Wang X, Zou X, Bakker E M et al. Self-constraining and attention-based hashing network for bit-scalable cross-modal retrieval[J]. Neurocomputing, 400, 255-271(2020).
[20] Yan C, Bai X, Wang S et al. Cross-modal hashing with semantic deep embedding[J]. Neurocomputing, 337, 58-66(2019).
[21] Chatfield K, Simonyan K, Vedaldi A et al. Return of the devil in the details: delving deep into convolutional nets[C]. Proceedings of the British Machine Vision Conference, 1-5(2014).
[22] Zhao H S, Shi J P, Qi X J et al. Pyramid scene parsing network[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6230-6239(2017).
[23] Liu K W, Fang P P, Xiong H X et al. Person re-identification based on multi-layer feature[J]. Laser & Optoelectronics Progress, 57, 081503(2020).
[24] Li C, Jiang M, Kong J. Multi-branch person re-identification based on multi-scale attention[J]. Laser & Optoelectronics Progress, 57, 201001(2020).
[25] Li S Y, Liu Y H, Zhang R F. Fine-grained image classification based on multi-scale feature fusion[J]. Laser & Optoelectronics Progress, 57, 121002(2020).
[26] Li C, Deng C, Li N et al. Self-supervised adversarial hashing networks for cross-modal retrieval[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4242-4251(2018).
[27] Bronstein M M, Bronstein A M, Michel F et al. Data fusion through cross-modality metric learning using similarity-sensitive hashing[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3594-3601(2010).
[28] Kumar S, Udupa R. Learning hash functions for cross-view similarity search[C]. Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 1360-1365(2011).