[1] Xiong J H, Hsiang E L, He Z Q et al. Augmented reality and virtual reality displays: emerging technologies and future perspectives[J]. Light: Science & Applications, 10, 216(2021).
[2] Xiong L, Yang X, Zhuo G R et al. Review on motion control of autonomous vehicles[J]. Journal of Mechanical Engineering, 56, 127-143(2020).
[3] Yasuda Y D V, Martins L E G, Cappabianco F A M. Autonomous visual navigation for mobile robots: a systematic literature review[J]. ACM Computing Surveys, 53, 13(2021).
[4] Schönberger J L, Frahm J M. Structure-from-motion revisited[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4104-4113(2016).
[5] Engel J, Schöps T, Cremers D. LSD-SLAM: large-scale direct monocular SLAM[M]. Fleet D, Pajdla T, Schiele B, et al. Computer vision-ECCV 2014. Lecture notes in computer science, 8690, 834-849(2014).
[6] Levoy M, Hanrahan P. Light field rendering[C]. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), 31-42(1996).
[7] Liu X M, Du M Z, Ma Z B et al. Depth estimation method of light field image based on occlusion scene[J]. Acta Optica Sinica, 40, 0510002(2020).
[8] Wanner S, Goldluecke B. Variational light field analysis for disparity estimation and super-resolution[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 606-619(2014).
[9] Zhang Y B, Lü H J, Liu Y B et al. Light-field depth estimation via epipolar plane image analysis and locally linear embedding[J]. IEEE Transactions on Circuits and Systems for Video Technology, 27, 739-747(2017).
[10] Tao M W, Hadap S, Malik J et al. Depth from combining defocus and correspondence using light-field cameras[C]. Proceedings of the IEEE International Conference on Computer Vision, 673-680(2013).
[11] Williem, Park I K, Lee K M. Robust light field depth estimation using occlusion-noise aware data costs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 2484-2497(2018).
[12] Peng J Y, Xiong Z W, Zhang Y Y et al. LF-fusion: dense and accurate 3D reconstruction from light field images[C](2017).
[13] Johannsen O, Sulc A, Goldluecke B. On linear structure from motion for light field cameras[C]. Proceedings of the IEEE International Conference on Computer Vision, 720-728(2015).
[14] Vianello A, Ackermann J, Diebold M et al. Robust Hough transform based 3D reconstruction from circular light fields[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7327-7335(2018).
[15] Chen C, Lin H T, Yu Z et al. Light field stereo matching using bilateral statistics of surface cameras[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1518-1525(2014).
[16] Zhang Y L, Yu P H, Yang W et al. Ray space features for plenoptic structure-from-motion[C]. Proceedings of the IEEE International Conference on Computer Vision, 4641-4649(2017).
[17] Song Z X, Wu Q, Wang X et al. 3D reconstruction with circular light field by using 3D Hough transformation[J]. Journal of Northwestern Polytechnical University, 39, 135-140(2021).
[18] Cai Z W, Liu X L, Pedrini G et al. Accurate depth estimation in structured light fields[J]. Optics Express, 27, 13532-13546(2019).
[19] Vollmer J, Mencl R, Müller H. Improved Laplacian smoothing of noisy surface meshes[J]. Computer Graphics Forum, 18, 131-138(1999).
[20] Furukawa Y, Ponce J. Accurate, dense, and robust multiview stereopsis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 1362-1376(2010).
[21] Schönberger J L, Zheng E L, Frahm J M et al. Pixelwise view selection for unstructured multi-view stereo[M]. Leibe B, Matas J, Sebe N, et al. Computer vision-ECCV 2016. Lecture notes in computer science, 9907, 501-518(2016).
[22] Cai Z W, Liu X L, Peng X et al. Structured light field 3D imaging[J]. Optics Express, 24, 20324-20334(2016).
[23] Kazhdan M, Bolitho M, Hoppe H. Poisson surface reconstruction[C]. Proceedings of the Fourth Eurographics Symposium on Geometry Processing, 61-70(2006).
[24] Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms[J]. International Journal of Computer Vision, 47, 7-42(2002).