
Image Processing and Image Analysis (25 Articles)
Multitask learning-powered large-volume, rapid photoacoustic microscopy with non-diffracting beams excitation and sparse sampling
Wangting Zhou, Zhiyuan Sun, Kezhou Li, Jibao Lv, Zhong Ji, Zhen Yuan, and Xueli Chen
Large-volume photoacoustic microscopy (PAM) or rapid PAM has attracted increasing attention in biomedical applications due to its ability to provide detailed structural and functional information on tumor pathophysiology and the neuroimmune microenvironment. Non-diffracting beams, such as Airy beams, offer extended depth-of-field (DoF), while sparse image reconstruction using deep learning enables image recovery for rapid imaging. However, Airy beams often introduce side-lobe artifacts, and achieving both extended DoF and rapid imaging remains a challenge, hindering PAM’s adoption as a routine large-volume and repeatable monitoring tool. To address these challenges, we developed multitask learning-powered large-volume, rapid photoacoustic microscopy with Airy beams (ML-LR-PAM). This approach integrates advanced software and hardware solutions designed to mitigate side-lobe artifacts and achieve super-resolution reconstruction. Unlike previous methods that neglect the simultaneous optimization of these aspects, our approach bridges this gap by employing a scaled dot-product attention mechanism (SDAM) Wasserstein-based CycleGAN (SW-CycleGAN) for artifact reduction and high-resolution, large-volume imaging. We anticipate that ML-LR-PAM, through this integration, will become a standard tool in both biomedical research and clinical practice.
Photonics Research
- Publication Date: Jan. 31, 2025
- Vol. 13, Issue 2, 488 (2025)
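For readers unfamiliar with the attention block named in the abstract above, a minimal NumPy sketch of scaled dot-product attention follows. The shapes and names are illustrative assumptions, not taken from the authors’ SW-CycleGAN implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # (..., n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)     # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (..., n_q, d_v)
```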
Robust polarimetric dehazing algorithm based on low-rank approximation and multiple virtual-exposure fusion
Yifu Zhou, Hanyue Wei, Jian Liang, Feiya Ma, Rui Yang, Liyong Ren, and Xuelong Li
Polarimetric dehazing is an effective way to enhance the quality of images captured in foggy weather. However, images of essential polarization parameters are vulnerable to noise, and the brightness of dehazed images is usually unstable under different environmental illuminations. These two weaknesses reveal that current polarimetric dehazing algorithms are not robust enough to deal with different scenarios. This paper proposes a novel (to our knowledge) and robust polarimetric dehazing algorithm to enhance the quality of hazy images, where a low-rank approximation method is used to obtain low-noise polarization parameter images. In addition, to improve the brightness stability of the dehazed image and thus preserve more details within the standard dynamic range, this study proposes a multiple virtual-exposure fusion (MVEF) scheme to process the dehazed image (usually having a high dynamic range) obtained through polarimetric dehazing. Comparative experiments show that the proposed dehazing algorithm is robust and effective, significantly improving the overall quality of hazy images captured under different environments.
Photonics Research
- Publication Date: Jul. 26, 2024
- Vol. 12, Issue 8, 1640 (2024)
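The low-rank denoising step mentioned in the abstract above can be illustrated with a generic truncated-SVD approximation. This is a minimal sketch assuming the correlated noisy frames are stacked as matrix rows; it is not the authors’ exact formulation.

```python
import numpy as np

def low_rank_denoise(stack, rank):
    """Denoise an image stack by truncated-SVD low-rank approximation.

    stack : (n, H, W) array of correlated noisy frames; each frame is
            flattened into one row of the matrix being approximated.
    """
    n, h, w = stack.shape
    M = stack.reshape(n, h * w)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # keep the top `rank` modes
    return M_lr.reshape(n, h, w)
```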
Screening COVID-19 from chest X-ray images by an optical diffractive neural network with the optimized F number
Jialong Wang, Shouyu Chai, Wenting Gu, Boyi Li, Xue Jiang, Yunxiang Zhang, Hongen Liao, Xin Liu, and Dean Ta
The COVID-19 pandemic continues to significantly impact people’s lives worldwide, emphasizing the critical need for effective detection methods. Many existing deep learning-based approaches for COVID-19 detection offer high accuracy but demand substantial computing resources, time, and energy. In this study, we introduce an optical diffractive neural network (ODNN-COVID), which is characterized by low power consumption, efficient parallelization, and fast computing speed for COVID-19 detection. In addition, we explore how the physical parameters of ODNN-COVID affect its diagnostic performance, and identify the F number as a key parameter for evaluating the overall detection capabilities. Through an assessment of the connectivity of the diffractive network, we establish an optimized range of the F number, offering guidance for constructing optical diffractive neural networks. In numerical simulations, a three-layer system achieves an impressive overall accuracy of 92.64% and 88.89% in binary- and three-classification diagnostic tasks. For a single-layer system, a simulation accuracy of 84.17% and an experimental accuracy of 80.83% are obtained with the same configuration for the binary-classification task, and the simulation accuracy is 80.19% and the experimental accuracy 74.44% for the three-classification task. Both simulations and experiments validate that the proposed optical diffractive neural network serves as a passive optical processor for effective COVID-19 diagnosis, featuring low power consumption, high parallelization, and fast computing capabilities. Furthermore, ODNN-COVID exhibits versatility, making it adaptable to various image analysis and object classification tasks related to medical fields owing to its general architecture.
Photonics Research
- Publication Date: Jun. 12, 2024
- Vol. 12, Issue 7, 1410 (2024)
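As background for the diffractive-network geometry discussed above, the standard forward model between diffractive layers is free-space propagation, commonly computed with the angular-spectrum method. The sketch below is a generic textbook version, not the paper’s code; how the F number is parameterized from the layer spacing and aperture should be taken from the paper itself.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation over distance z via the angular-spectrum
    method, the usual forward model between diffractive layers.

    field : (n, n) complex field sampled at pixel pitch dx.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```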
Diffractive neural networks with improved expressive power for gray-scale image classification
Minjia Zheng, Wenzhe Liu, Lei Shi, and Jian Zi
In order to harness diffractive neural networks (DNNs) for tasks that better align with real-world computer vision requirements, the incorporation of gray scale is essential. Currently, DNNs are not powerful enough to accomplish gray-scale image processing tasks due to limitations in their expressive power. In our work, we elucidate the relationship between the improvement in the expressive power of DNNs and the increase in the number of phase modulation layers, as well as the optimization of the Fresnel number, which can describe the diffraction process. To demonstrate this point, we numerically trained a double-layer DNN, addressing the prerequisites for intensity-based gray-scale image processing. Furthermore, we experimentally constructed this double-layer DNN based on digital micromirror devices and spatial light modulators, achieving eight-level intensity-based gray-scale image classification for the MNIST and Fashion-MNIST data sets. This optical system achieved maximum accuracies of 95.10% and 80.61%, respectively.
Photonics Research
- Publication Date: May. 27, 2024
- Vol. 12, Issue 6, 1159 (2024)
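For reference, the Fresnel number that the abstract above optimizes is conventionally defined, for an aperture of half-width $a$, wavelength $\lambda$, and propagation distance $L$, as

$$N_F = \frac{a^2}{\lambda L},$$

with large $N_F$ indicating near-field (Fresnel) diffraction and small $N_F$ the far-field regime; the paper’s exact parameterization may differ.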
Complex transmission matrix retrieval for a highly scattering medium via regional phase differentiation
Qiaozhi He, Rongjun Shao, Yuan Qu, Linxian Liu, Chunxu Ding, and Jiamiao Yang
Accurately measuring the complex transmission matrix (CTM) of a scattering medium (SM) holds critical significance for applications in anti-scattering optical imaging, phototherapy, and optical neural networks. Non-interferometric approaches, utilizing phase retrieval algorithms, can robustly extract the CTM from the speckle patterns formed by multiple probing fields traversing the SM. However, when an amplitude-type spatial light modulator is employed for probing-field modulation, the absence of phase control frequently results in convergence towards a local optimum, undermining the measurement accuracy. Here, we propose a high-accuracy CTM retrieval (CTMR) approach based on regional phase differentiation (RPD). It incorporates a sequence of additional phase masks into the probing fields, imposing a priori constraints on the phase retrieval algorithms. By distinguishing the variance of speckle patterns produced by different phase masks, RPD-CTMR can effectively direct the algorithm towards a solution that closely approximates the CTM of the SM. We built a prototype of RPD-CTMR modulated by a digital micromirror device. By accurately measuring the CTM of diffusers, we achieved a 3.6-fold enhancement in the peak-to-background ratio of anti-scattering focusing, alongside a 24-fold reduction in the bit error rate of anti-scattering image transmission. Our proposed approach aims to facilitate precise modulation of scattered optical fields, thereby fostering advancements in diverse fields including high-resolution microscopy, biomedical optical imaging, and optical communications.
Photonics Research
- Publication Date: Apr. 08, 2024
- Vol. 12, Issue 5, 876 (2024)
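To illustrate the class of algorithms the abstract above constrains, here is a toy Gerchberg-Saxton-style alternating projection that fits a transmission matrix to intensity-only speckle measurements. It is emphatically not the authors’ RPD-CTMR; all shapes, names, and the iteration count are assumptions for demonstration.

```python
import numpy as np

def retrieve_tm(P, I, n_iter=200, seed=1):
    """Toy alternating-projection retrieval: find T with |T @ P|^2 ≈ I.

    P : (n_in, n_probe) complex probing fields (columns are probes)
    I : (n_out, n_probe) measured speckle intensities
    """
    rng = np.random.default_rng(seed)
    n_out, n_in = I.shape[0], P.shape[0]
    T = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))
    P_pinv = np.linalg.pinv(P)
    A = np.sqrt(I)
    for _ in range(n_iter):
        Y = T @ P                          # predicted output fields
        Y = A * np.exp(1j * np.angle(Y))   # impose measured magnitudes
        T = Y @ P_pinv                     # least-squares back-projection
    return T
```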
Learning the imaging mechanism directly from optical microscopy observations
Ze-Hao Wang, Long-Kun Shan, Tong-Tian Weng, Tian-Long Chen, Xiang-Dong Chen, Zhang-Yang Wang, Guang-Can Guo, and Fang-Wen Sun
Optical microscopy images play an important role in scientific research through the direct visualization of the nanoworld, where the imaging mechanism is described as the convolution of the point spread function (PSF) and emitters. Based on a priori knowledge of the PSF or an equivalent PSF, it is possible to achieve more precise exploration of the nanoworld. However, directly extracting the PSF from microscopy images remains an outstanding challenge. Here, with the help of self-supervised learning, we propose a physics-informed masked autoencoder (PiMAE) that enables a learnable estimation of the PSF and emitters directly from raw microscopy images. We demonstrate our method on synthetic data and in real-world experiments with significant accuracy and noise robustness. PiMAE outperforms DeepSTORM and the Richardson–Lucy algorithm in synthetic data tasks with average improvements of 19.6% and 50.7% (35 tasks), respectively, as measured by the normalized root mean square error (NRMSE) metric. This is achieved without prior knowledge of the PSF, in contrast to the supervised approach used by DeepSTORM and the known-PSF assumption in the Richardson–Lucy algorithm. Our method, PiMAE, provides a feasible scheme for learning the hidden imaging mechanism in optical microscopy and has the potential to uncover hidden mechanisms in many more systems.
Photonics Research
- Publication Date: Dec. 08, 2023
- Vol. 12, Issue 1, 7 (2024)
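As context for the baseline the abstract above benchmarks against, a compact version of the textbook Richardson–Lucy deconvolution (which assumes the PSF is known) looks like this; it is the classic algorithm, not PiMAE itself.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Textbook Richardson-Lucy deconvolution for a known PSF.

    image : nonnegative float array (observed blurry image)
    psf   : nonnegative kernel summing to 1
    """
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        conv = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)      # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```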
Passive imaging through dense scattering media
Yaoming Bian, Fei Wang, Yuanzhe Wang, Zhenfeng Fu, Haishan Liu, Haiming Yuan, and Guohai Situ
Imaging through non-static and optically thick scattering media such as dense fog, heavy smoke, and turbid water is crucial in various applications. However, most existing methods rely on either active and coherent light illumination or image priors, preventing their application in situations where only passive illumination is possible. In this study we present a universal passive method for imaging through dense scattering media that does not depend on any prior information. By combining the selection of small-angle components from the incoming information-carrying scattered light with an image enhancement algorithm that incorporates time-domain minimum filtering and denoising, we show that the proposed method can dramatically improve the signal-to-interference ratio and contrast of the raw camera image in outfield experiments.
Photonics Research
- Publication Date: Dec. 22, 2023
- Vol. 12, Issue 1, 134 (2024)
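The time-domain minimum filtering named in the abstract above can be illustrated with a simple sliding-window minimum over a registered frame stack; a minimal sketch under that assumption, not the authors’ full enhancement pipeline.

```python
import numpy as np

def temporal_min_filter(frames, window=5):
    """Sliding-window minimum over the time axis of a (T, H, W) stack."""
    T = frames.shape[0]
    out = np.empty_like(frames)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        out[t] = frames[lo:hi].min(axis=0)   # per-pixel temporal minimum
    return out
```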
Learning-based super-resolution interpolation for sub-Nyquist sampled laser speckles
Huanhao Li, Zhipeng Yu, Qi Zhao, Yunqi Luo, Shengfu Cheng, Tianting Zhong, Chi Man Woo, Honglin Liu, Lihong V. Wang, Yuanjin Zheng, and Puxiang Lai
Information retrieval from visually random optical speckle patterns is desired in many scenarios yet considered challenging. It requires accurate understanding or mapping of the multiple scattering process, or a reliable capability to reverse or compensate for the scattering-induced phase distortions. In either case, effective resolving and digitization of speckle patterns are necessary. Nevertheless, on some occasions, to increase the acquisition speed and/or signal-to-noise ratio (SNR), speckles captured by cameras are inevitably sampled in the sub-Nyquist domain via pixel binning (one camera pixel contains multiple speckle grains) due to the finite size or limited bandwidth of photosensors. Such a down-sampling process is irreversible; it undermines the fine structures of speckle grains and hence the encoded information, preventing successful information extraction. To retrace the lost information, super-resolution interpolation for such sub-Nyquist sampled speckles is needed. In this work, a deep neural network, namely SpkSRNet, is proposed to effectively up-sample speckles that are sampled below 1/10 of the Nyquist criterion to well-resolved ones that not only resemble the comprehensive morphology of the original speckles (decomposing multiple speckle grains from one camera pixel) but also recover the lost complex information (a human face in this study) with high fidelity under normal- and low-light conditions, which is impossible with classic interpolation methods. These successful speckle super-resolution interpolation demonstrations are essentially enabled by the strong implicit correlation among speckle grains, which is non-quantifiable but can be discovered by the well-trained network. With further engineering, the proposed learning platform may benefit many scenarios that are physically inaccessible, enabling fast acquisition of speckles with sufficient SNR and opening up new avenues for seeing big and seeing clearly simultaneously in complex scenarios.
Photonics Research
- Publication Date: Mar. 30, 2023
- Vol. 11, Issue 4, 631 (2023)
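The sub-Nyquist degradation that SpkSRNet inverts can be simulated by pixel binning, where one camera pixel averages a block of finely sampled speckle grains; a minimal sketch of the forward (degradation) step only, since the learned inverse has no closed form.

```python
import numpy as np

def bin_pixels(img, b):
    """Simulate sub-Nyquist sampling: each camera pixel integrates a
    b x b block of finely sampled speckle grains."""
    h, w = img.shape
    h2, w2 = h - h % b, w - w % b            # crop to a multiple of b
    blocks = img[:h2, :w2].reshape(h2 // b, b, w2 // b, b)
    return blocks.mean(axis=(1, 3))
```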
Deep coded exposure: end-to-end co-optimization of flutter shutter and deblurring processing for general motion blur removal
Zhihong Zhang, Kaiming Dong, Jinli Suo, and Qionghai Dai
Coded exposure photography is a promising computational imaging technique capable of addressing motion blur much better than a conventional camera, via tailoring invertible blur kernels. However, existing methods suffer from restrictive assumptions, complicated preprocessing, and inferior performance. To address these issues, we propose an end-to-end framework to handle general motion blurs with a unified deep neural network, and optimize the shutter’s encoding pattern together with the deblurring processing to achieve high-quality sharp images. The framework incorporates a learnable flutter shutter sequence to capture coded exposure snapshots and a learning-based deblurring network to restore sharp images from the blurry inputs. By co-optimizing the encoding and deblurring modules jointly, our approach avoids exhaustively searching for encoding sequences and achieves optimal overall deblurring performance. Compared with existing coded exposure based motion deblurring methods, the proposed framework eliminates tedious preprocessing steps such as foreground segmentation and blur kernel estimation, and extends coded exposure deblurring to more general blind and nonuniform cases. Both simulation and real-data experiments demonstrate the superior performance and flexibility of the proposed method.
Photonics Research
- Publication Date: Sep. 27, 2023
- Vol. 11, Issue 10, 1678 (2023)
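The invertibility argument behind coded exposure can be illustrated by building a blur kernel from a binary flutter code and inspecting its spectrum. The sketch assumes uniform linear motion and uses an arbitrary example code, not the learned sequence from the paper.

```python
import numpy as np

def coded_exposure_kernel(code, blur_len):
    """Blur kernel of a fluttered shutter under uniform linear motion:
    the binary open/close code stretched to the blur length, normalized."""
    code = np.asarray(code, dtype=float)
    k = np.repeat(code, blur_len // len(code))
    return k / k.sum()

# A kernel is easy to invert when its spectrum has no near-zeros.
code = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1]   # example code only
k = coded_exposure_kernel(code, 64)
mtf = np.abs(np.fft.rfft(k, 256))
print("min |MTF|:", mtf.min())   # a plain box shutter dips much closer to zero
```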
Snapshot spectral compressive imaging reconstruction using convolution and contextual Transformer
Lishun Wang, Zongliang Wu, Yong Zhong, and Xin Yuan
Spectral compressive imaging (SCI) is able to encode a high-dimensional hyperspectral image into a two-dimensional snapshot measurement, and then use algorithms to reconstruct the spatio-spectral data-cube. At present, the main bottleneck of SCI is the reconstruction algorithm, and state-of-the-art (SOTA) reconstruction methods generally face problems of long reconstruction times and/or poor detail recovery. In this paper, we propose a hybrid network module, namely, a convolution and contextual Transformer (CCoT) block, that can simultaneously acquire the inductive bias ability of convolution and the powerful modeling ability of Transformer, which is conducive to improving the quality of reconstruction to restore fine details. We integrate the proposed CCoT block into a physics-driven deep unfolding framework based on the generalized alternating projection (GAP) algorithm, and further propose the GAP-CCoT network. Finally, we apply the GAP-CCoT algorithm to SCI reconstruction. Through experiments on a large amount of synthetic data and real data, our proposed model achieves higher reconstruction quality (>2 dB in peak signal-to-noise ratio on simulated benchmark datasets) and a shorter running time than existing SOTA algorithms by a large margin. The code and models are publicly available at https://github.com/ucaswangls/GAP-CCoT.
Photonics Research
- Publication Date: Jul. 22, 2022
- Vol. 10, Issue 8, 1848 (2022)
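The deep-unfolding backbone named in the abstract above can be illustrated with a plain GAP skeleton for the SCI forward model y = Σ_k Φ_k ⊙ x_k, where `denoiser` stands in for the prior step into which GAP-CCoT plugs its network. This is a rough sketch under those assumptions, not the released implementation (see the linked repository for that).

```python
import numpy as np

def gap_sci(y, Phi, denoiser, n_iter=50):
    """Generalized alternating projection (GAP) skeleton for SCI.

    y        : (H, W) snapshot measurement
    Phi      : (K, H, W) coding masks; forward model y = sum_k Phi_k * x_k
    denoiser : callable prior step (learned network, TV step, etc.)
    """
    Phi_sum = (Phi ** 2).sum(axis=0) + 1e-8          # diagonal of A A^T
    x = Phi * (y / Phi_sum)[None]                    # back-projected init
    y1 = y.copy()
    for _ in range(n_iter):
        yb = (Phi * x).sum(axis=0)                   # forward model A(x)
        x = x + Phi * ((y1 - yb) / Phi_sum)[None]    # Euclidean projection
        x = denoiser(x)                              # prior / network step
        y1 = y1 + (y - (Phi * x).sum(axis=0))        # accelerated GAP update
    return x
```

Passing `denoiser=lambda x: x` reduces this to plain alternating projections; the contribution of GAP-CCoT is the learned prior inserted at that step.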