[4] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. 2020-10-22. https://arxiv.org/abs/2010.11929.
[5] Scaglione L J. Neural network application to particle impact noise detection[C]// Proceedings of 1994 IEEE International Conference on Neural Networks, 1994: 3415-3419.
[6] Tolstikhin I, Houlsby N, Kolesnikov A, et al. MLP-Mixer: An all-MLP architecture for vision[EB/OL]. 2021-05-04. https://arxiv.org/pdf/2105.01601.pdf.
[7] Touvron H, Bojanowski P, Caron M, et al. ResMLP: Feedforward networks for image classification with data-efficient training[EB/OL]. 2021-05-07. https://arxiv.org/pdf/2105.03404v1.pdf.
[8] Liu H X, Dai Z H, So D R, et al. Pay attention to MLPs[EB/OL]. 2021-05-17. https://arxiv.org/pdf/2105.08050.pdf.
[9] Yang Y C, Soatto S. FDA: Fourier domain adaptation for semantic segmentation[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 4084-4094.
[10] Rao Y M, Zhao W L, Zhu Z, et al. Global filter networks for image classification[C]// 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021: 1-19.
[11] Luo W J, Li Y J, Urtasun R, et al. Understanding the effective receptive field in deep convolutional neural networks[C]// NIPS'16: Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016: 4905-4913.
[12] Touvron H, Cord M, Douze M, et al. Training data-efficient image transformers & distillation through attention[EB/OL]. 2020-12-23. https://arxiv.org/pdf/2012.12877v1.pdf.
[13] Xu Z Q J, Zhang Y Y, Xiao Y. Training behavior of deep neural network in frequency domain[C]// International Conference on Neural Information Processing, 2018: 1-12.
[14] Campbell F W, Robson J G. Application of Fourier analysis to the visibility of gratings[J]. The Journal of Physiology, 1968, 197(3): 551-556.
[15] De Valois R L, De Valois K K. Spatial vision[J]. Annual Review of Psychology, 1980, 31(1): 309-341.
[16] Sweldens W. The lifting scheme: A construction of second generation wavelets[J]. SIAM Journal on Mathematical Analysis, 1997, 29(2): 511-546.
[17] Handkiewicz A. Continuous and Discrete Signals[M]. Wiley-IEEE Press, 2009.
[18] Zhang Y, Sun S L, Liu U H, et al. Target state classification by attention-based branch expansion network[J]. Applied Sciences, 2021, 11(21): 10208.
[19] Shi C P, Xia R Y, Wang L G. A novel multi-branch channel expansion network for garbage image classification[J]. IEEE Access, 2020, 8: 154436-154452.
[20] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2261-2269.
[21] Liu Z, Lin Y, Cao Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows[C]// 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 9992-10002.
[22] Bai J, Yuan L, Xia S T, et al. Improving vision transformers by revisiting high-frequency components[C]// European Conference on Computer Vision, 2022: 1-18.