- Special Issue
- Theme Issue on AI and Photonics
- 17 Article(s)
Editorials
Special Section Guest Editorial: Photonics and AI—a symphony of light and intelligence
Guohai Situ, and Yeshaiahu Fainman
The editorial introduces the joint theme issue of Advanced Photonics and Advanced Photonics Nexus, “Photonics and AI,” which showcases the latest research at the intersection of these two disciplines.
Advanced Photonics
- Publication Date: Oct. 31, 2024
- Vol. 6, Issue 5, 050101 (2024)
Solving partial differential equations with waveguide-based metatronic networks
Ross Glyn MacDonald, Alex Yakovlev, and Victor Pacheco-Peña
Photonic computing has recently become an interesting paradigm for high-speed computation using light–matter interactions. Here, we propose and study an electromagnetic wave-based structure able to calculate solutions of partial differential equations (PDEs) in the form of the Helmholtz wave equation, ∇²f(x, y) + k²f(x, y) = 0, with k the wavenumber. To do this, we make use of a network of interconnected waveguides filled with dielectric inserts. It is shown how the proposed network can mimic the response of a network of T-circuit elements formed by two series impedances and one parallel impedance, i.e., the waveguide network effectively behaves as a metatronic network. An in-depth theoretical analysis of the proposed metatronic structure is presented, showing how the governing equation for the currents and impedances of the metatronic network resembles the finite-difference representation of the Helmholtz wave equation. Different studies are then discussed, including the solution of PDEs for Dirichlet and open boundary value problems, demonstrating the ability of the proposed metatronic-based structure to calculate their solutions.
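The finite-difference representation that the abstract refers to can be made concrete numerically. The sketch below (our illustration, not the authors' metatronic implementation) discretizes the Helmholtz equation with the standard 5-point stencil and solves a small Dirichlet boundary value problem; the function name and grid setup are our own assumptions.

```python
import numpy as np

def solve_helmholtz_dirichlet(boundary, k, h=1.0):
    """Solve the Helmholtz equation f_xx + f_yy + k^2 f = 0 on a square grid
    with prescribed (Dirichlet) boundary values, using the 5-point stencil:
    f[i+1,j] + f[i-1,j] + f[i,j+1] + f[i,j-1] + (-4 + (k*h)^2) f[i,j] = 0.
    `boundary` is an (n, n) array whose edge entries hold the boundary
    values; interior entries are ignored and solved for."""
    n = boundary.shape[0]
    interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)]
    idx = {p: m for m, p in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros(len(interior))
    for (i, j), m in idx.items():
        A[m, m] = -4.0 + (k * h) ** 2              # center of the stencil
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (ni, nj) in idx:
                A[m, idx[(ni, nj)]] = 1.0          # unknown interior neighbor
            else:
                b[m] -= boundary[ni, nj]           # known boundary value
    f = boundary.copy().astype(float)
    sol = np.linalg.solve(A, b)
    for (i, j), m in idx.items():
        f[i, j] = sol[m]
    return f
```

In the paper's analogy, the currents and impedances of the waveguide network play the role of the unknowns and stencil coefficients in this linear system.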
Advanced Photonics Nexus
- Publication Date: Oct. 18, 2024
- Vol. 3, Issue 5, 056007 (2024)
News and Commentaries
Photonics and AI: a conversation with Professor Demetri Psaltis
Guohai Situ
Demetri Psaltis (École Polytechnique Fédérale de Lausanne) discusses advances in optical computing, in conversation with Guohai Situ (Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences), for the Advanced Photonics theme issue on AI and Photonics.
Advanced Photonics
- Publication Date: Sep. 18, 2024
- Vol. 6, Issue 5, 050501 (2024)
Neural network enables ultrathin flat optics imaging in full color
Anna Wirth Singh, Johannes Froch, and Arka Majumdar
The article comments on a recently developed neural network that enables ultrathin flat optics imaging in full color.
Advanced Photonics
- Publication Date: Sep. 17, 2024
- Vol. 6, Issue 5, 050502 (2024)
Reviews
Machine learning for perovskite optoelectronics: a review
Feiyue Lu, Yanyan Liang, Nana Wang, Lin Zhu, and Jianpu Wang
Metal halide perovskite materials have rapidly advanced in perovskite solar cells and light-emitting diodes due to their superior optoelectronic properties. The structure of perovskite optoelectronic devices includes the perovskite active layer, electron transport layer, and hole transport layer, so the optimization process unfolds as a complex interplay between intricate chemical crystallization processes and sophisticated physical mechanisms. Traditional research in perovskite optoelectronics has mainly depended on trial-and-error experimentation, a less efficient approach. Recently, the emergence of machine learning (ML) has drastically streamlined the optimization process. Due to its powerful data processing capabilities, ML has significant advantages in uncovering potential patterns and making predictions. More importantly, ML can reveal underlying patterns in data and elucidate complex device mechanisms, playing a pivotal role in enhancing device performance. We present the latest advancements in applying ML to perovskite optoelectronic devices, covering perovskite active layers, transport layers, interface engineering, and mechanisms. In addition, we offer a prospective outlook on future developments. We believe that the deep integration of ML will significantly expedite the comprehensive enhancement of perovskite optoelectronic device performance.
Advanced Photonics
- Publication Date: Aug. 27, 2024
- Vol. 6, Issue 5, 054001 (2024)
Research Articles
Ultra-wide FOV meta-camera with transformer-neural-network color imaging methodology
Yan Liu, Wen-Dong Li, Kun-Yuan Xin, Ze-Ming Chen, Zun-Yi Chen, Rui Chen, Xiao-Dong Chen, Fu-Li Zhao, Wei-Shi Zheng, and Jian-Wen Dong
Planar cameras with high performance and wide field of view (FOV) are critical in various fields, requiring highly compact and integrated technology. Existing wide-FOV metalenses show great potential for ultrathin optical components, but they face a set of difficult challenges, such as chromatic aberration correction, central bright speckle removal, and image quality improvement across the wide FOV. We design a neural meta-camera by introducing a knowledge-fused data-driven paradigm equipped with a transformer-based network. Such a paradigm enables the network to sequentially assimilate the physical prior and experimental data of the metalens, and thus effectively mitigates the aforementioned challenges. An ultra-wide-FOV meta-camera, integrating an off-axis monochromatic aberration-corrected metalens with a neural CMOS image sensor without any relay lenses, is employed to demonstrate feasibility. High-quality reconstructed results of color images and real-scene images at different distances validate that the proposed meta-camera can achieve an ultra-wide FOV (>100 deg) and full-color images with correction of chromatic aberration, distortion, and central bright speckle, and a contrast increase of up to 13.5 times. Notably, coupled with its compact size (<0.13 cm³), portability, and full-color imaging capacity, the neural meta-camera emerges as a compelling alternative for applications such as micro-navigation, micro-endoscopes, and various on-chip devices.
Advanced Photonics
- Publication Date: May 20, 2024
- Vol. 6, Issue 5, 056001 (2024)
Authentication through residual attention-based processing of tampered optical responses
Blake Wilson, Yuheng Chen, Daksh Kumar Singh, Rohan Ojha, Jaxon Pottle, Michael Bezick, Alexandra Boltasseva, Vladimir M. Shalaev, and Alexander V. Kildishev
The global chip industry is grappling with dual challenges: a profound shortage of new chips and a surge of counterfeit chips valued at $75 billion, introducing substantial risks of malfunction and unwanted surveillance. To counteract this, we propose an optical anti-counterfeiting detection method for semiconductor devices that is robust under adversarial tampering features, such as malicious package abrasions, compromised thermal treatment, and adversarial tearing. Our new deep-learning approach uses a RAPTOR (residual, attention-based processing of tampered optical response) discriminator, showing the capability of identifying adversarial tampering to an optical, physical unclonable function based on randomly patterned arrays of gold nanoparticles. Using semantic segmentation and labeled clustering, we efficiently extract the positions and radii of the gold nanoparticles in the random patterns from 1000 dark-field images in just 27 ms and verify the authenticity of each pattern using RAPTOR in 80 ms with 97.6% accuracy under difficult adversarial tampering conditions. We demonstrate that RAPTOR outperforms the state-of-the-art Hausdorff, Procrustes, and average Hausdorff distance metrics, achieving a 40.6%, 37.3%, and 6.4% total accuracy increase, respectively.
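Two of the baseline metrics named in the abstract compare a measured nanoparticle pattern against a reference as point sets. A minimal sketch of those metrics (our illustration of the standard definitions, not the paper's code; function names are our own):

```python
import numpy as np

def hausdorff(P, Q):
    """Hausdorff distance between 2D point sets P and Q (rows are points):
    the larger of the two directed worst-case nearest-neighbor distances."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(),   # farthest point of P from Q
               D.min(axis=0).max())   # farthest point of Q from P

def average_hausdorff(P, Q):
    """Average Hausdorff distance: mean nearest-neighbor distance,
    averaged over both directions. Less sensitive to single outliers."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())
```

A tampered pattern shifts or removes nanoparticles, which inflates these distances; RAPTOR's reported gain is over thresholding such metrics.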
Advanced Photonics
- Publication Date: Jul. 17, 2024
- Vol. 6, Issue 5, 056002 (2024)
Multiplane quantitative phase imaging using a wavelength-multiplexed diffractive optical processor
Che-Yung Shen, Jingxi Li, Yuhang Li, Tianyi Gan, Langxing Bai, Mona Jarrahi, and Aydogan Ozcan
Quantitative phase imaging (QPI) is a label-free technique that provides optical path length information for transparent specimens, finding utility in biology, materials science, and engineering. Here, we present QPI of a three-dimensional (3D) stack of phase-only objects using a wavelength-multiplexed diffractive optical processor. Utilizing multiple spatially engineered diffractive layers trained through deep learning, this diffractive processor can transform the phase distributions of multiple two-dimensional objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel. These wavelength-multiplexed patterns are projected onto a single field of view at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor. Based on numerical simulations, we show that our diffractive processor could simultaneously achieve all-optical QPI across several distinct axial planes at the input by scanning the illumination wavelength. A proof-of-concept experiment with a 3D-fabricated diffractive processor further validates our approach, showcasing successful imaging of two distinct phase objects at different axial positions by scanning the illumination wavelength in the terahertz spectrum. Diffractive network-based multiplane QPI designs can open up new avenues for compact on-chip phase imaging and sensing devices.
Advanced Photonics
- Publication Date: Jul. 25, 2024
- Vol. 6, Issue 5, 056003 (2024)
Superresolution imaging using superoscillatory diffractive neural networks
Hang Chen, Sheng Gao, Haiou Zhang, Zejia Zhao, Zhengyang Duan, Gordon Wetzstein, and Xing Lin
Optical superoscillation enables far-field superresolution imaging beyond diffraction limits. However, existing superoscillatory lenses for spatial superresolution imaging systems still confront critical performance limitations due to the lack of advanced design methods and limited design degrees of freedom. Here, we propose an optical superoscillatory diffractive neural network (SODNN) that achieves spatial superresolution for imaging beyond the diffraction limit with superior optical performance. SODNN is constructed by utilizing diffractive layers for optical interconnections and imaging samples or biological sensors for nonlinearity. This modulates the incident optical field to create optical superoscillation effects in three-dimensional (3D) space and generate superresolved focal spots. By optimizing diffractive layers with 3D optical field constraints at an incident wavelength of λ, we achieved a superoscillatory optical spot and needle with a full width at half-maximum of 0.407λ at a far-field distance over 400λ, without sidelobes over the field of view, and with a long depth of field over 10λ. Furthermore, the SODNN implements a multiwavelength and multifocus spot array that effectively avoids chromatic aberrations, achieving a comprehensive performance improvement that surpasses the trade-off among performance indicators of conventional superoscillatory lens design methods. Our research will inspire the development of intelligent optical instruments to facilitate applications in imaging, sensing, perception, and beyond.
Advanced Photonics
- Publication Date: Oct. 07, 2024
- Vol. 6, Issue 5, 056004 (2024)
Diffraction casting
Ryosuke Mashiko, Makoto Naruse, and Ryoichi Horisaki
Optical computing is considered a promising solution for the growing demand for parallel computing in various cutting-edge fields that require high integration and high-speed computational capacity. We propose an optical computation architecture called diffraction casting (DC) for flexible and scalable parallel logic operations. In DC, a diffractive neural network is designed for single instruction, multiple data (SIMD) operations. This approach allows for the alteration of logic operations simply by changing the illumination patterns. Furthermore, it eliminates the need for encoding and decoding of the input and output, respectively, by introducing a buffer around the input area, facilitating end-to-end all-optical computing. We numerically demonstrate DC by performing all 16 logic operations on two arbitrary 256-bit parallel binary inputs. Additionally, we showcase several distinctive attributes inherent in DC, such as the benefit of cohesively designing the diffractive elements for SIMD logic operations that assure high scalability and high integration capability. Our study offers a design architecture for optical computers and paves the way for a next-generation optical computing paradigm.
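The "all 16 logic operations" follow from the fact that any two-input Boolean gate is fully specified by a 4-entry truth table, and applying one gate elementwise across a 256-bit input pair is exactly a SIMD operation. A conceptual sketch of that selection mechanism (purely electronic, our illustration; in DC the table choice would instead be set by the illumination pattern):

```python
import numpy as np

def simd_logic(a, b, table):
    """Apply one of the 16 two-input Boolean operations elementwise.
    `a`, `b` are equal-length 0/1 arrays (the parallel binary inputs);
    `table` gives the output for (a, b) = (0,0), (0,1), (1,0), (1,1).
    Selecting `table` selects the gate -- the SIMD 'instruction'."""
    lut = np.asarray(table)
    return lut[2 * a + b]      # index the truth table per bit pair

# Two arbitrary 256-bit parallel inputs, as in the demonstration.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 256)
b = rng.integers(0, 2, 256)
xor_out = simd_logic(a, b, (0, 1, 1, 0))   # truth table of XOR
```

Changing only the 4-tuple switches the network between AND, OR, XOR, NAND, and the other twelve gates without touching the inputs, mirroring how DC reprograms the operation via illumination alone.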
Advanced Photonics
- Publication Date: Oct. 03, 2024
- Vol. 6, Issue 5, 056005 (2024)
Photonics provides AI not only with the tools to sense and communicate more effectively, but also with the instruments to accelerate inference speed. Moreover, AI offers photonics not only the intelligence to process, analyze, and interpret the sensed data, but also the means to solve a wide class of inverse problems in photonics design, imaging, and wavefront reconstruction in ways not possible before.