[3] SOUMEKH M. Synthetic aperture radar signal processing with MATLAB algorithms[M]. New York: John Wiley & Sons, Inc., 1999.
[6] HE Y, KANG G L, DONG X Y, et al. Soft filter pruning for accelerating deep convolutional neural networks[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence. Stockholm: AAAI Press, 2018: 2234-2240.
[7] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-01-15]. https://arxiv.org/abs/1503.02531.
[8] MIRZADEH S I, FARAJTABAR M, LI A, et al. Improved knowledge distillation via teacher assistant[C]//Proceedings of the AAAI Conference on Artificial Intelligence. New York: AAAI Press, 2020: 5191-5198.
[9] JI M, SHIN S, HWANG S, et al. Refine myself by teaching myself: feature refinement via self-knowledge distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 10659-10668.
[10] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 4510-4520.
[11] MA N N, ZHANG X Y, ZHENG H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[C]//Proceedings of the European Conference on Computer Vision (ECCV). Cham: Springer, 2018: 122-138.
[12] MEHTA S, RASTEGARI M, SHAPIRO L, et al. ESPNetv2: a light-weight, power efficient, and general purpose convolutional neural network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 9182-9192.
[13] TURC I, CHANG M W, LEE K, et al. Well-read students learn better: the impact of student initialization on knowledge distillation[EB/OL]. [2024-01-15]. https://arxiv.org/pdf/1908.08962v1.
[14] FANG G F, MA X Y, SONG M L, et al. DepGraph: towards any structural pruning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023: 16091-16101.
[15] HE Y H, ZHANG X Y, SUN J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 1398-1406.
[16] MA X L, YUAN G, LIN S, et al. ResNet can be pruned 60×: introducing network purification and unused path removal (P-RM) after weight pruning[C]//2019 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH). Qingdao: IEEE, 2019: 1-2.
[17] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. [2024-01-15]. https://www.researchgate.net/publication/307536925_Pruning_Filters_for_Efficient_ConvNets.
[19] HUANG L Q, LIU B, LI B Y, et al. OpenSARShip: a dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195-208.
[20] SHAO J Q, QU C W, LI J W, et al. A lightweight convolutional neural network based on visual attention for SAR image target classification[J]. Sensors, 2018, 18(9): 3039.