Semiconductor Optoelectronics, Vol. 44, Issue 3, 471 (2023)
WANG Yuchen1,2,3, SUN Shengli1,2,*, CHEN Xianing3, CHEN Baolan3, and MA Yijun1,2,4
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]
  • 4[in Chinese]
DOI: 10.16818/j.issn1001-5868.2022122601
WANG Yuchen, SUN Shengli, CHEN Xianing, CHEN Baolan, MA Yijun. Research on Remainer State Identification Based on Filtering Network Method[J]. Semiconductor Optoelectronics, 2023, 44(3): 471.