• Advanced Photonics Nexus
  • Vol. 4, Issue 2, 026001 (2025)
Yanqi Chen1,2,†, Jiurun Chen3, Zhiping Wang4, Yuting Gao1,2, ..., Yonghong He3, Yishi Shi2, and An Pan1,2,*
Author Affiliations
  • 1Chinese Academy of Sciences, Xi’an Institute of Optics and Precision Mechanics, State Key Laboratory of Transient Optics and Photonics, Xi’an, China
  • 2University of Chinese Academy of Sciences, School of Optoelectronics, Beijing, China
  • 3Tsinghua University, Tsinghua Shenzhen International Graduate School, Shenzhen, China
  • 4Biozentrum, University of Basel, Basel, Switzerland
    DOI: 10.1117/1.APN.4.2.026001
    Yanqi Chen, Jiurun Chen, Zhiping Wang, Yuting Gao, Yonghong He, Yishi Shi, An Pan, "Fast full-color pathological imaging using Fourier ptychographic microscopy via closed-form model-based colorization," Adv. Photon. Nexus 4, 026001 (2025)

    Abstract

    Full-color imaging is essential in digital pathology for accurate tissue analysis. Utilizing advanced optical modulation and phase retrieval algorithms, Fourier ptychographic microscopy (FPM) offers a powerful solution for high-throughput digital pathology, combining high resolution, large field of view, and extended depth of field (DOF). However, the full-color capabilities of FPM are hindered by coherent color artifacts and reduced computational efficiency, which significantly limits its practical applications. Color-transfer-based FPM (CFPM) has emerged as a potential solution, theoretically reducing both acquisition and reconstruction time threefold. Yet, existing methods fall short of achieving the desired reconstruction speed and colorization quality. In this study, we report a generalized dual-color-space constrained model for FPM colorization. This model provides a mathematical framework for model-based FPM colorization, enabling a closed-form solution without the need for redundant iterative calculations. Our approach, termed generalized CFPM (gCFPM), achieves colorization within seconds for megapixel-scale images, delivering superior colorization quality in terms of both colorfulness and sharpness, along with an extended DOF. Both simulations and experiments demonstrate that gCFPM surpasses state-of-the-art methods across all evaluated criteria. Our work offers a robust and comprehensive workflow for high-throughput full-color pathological imaging using FPM platforms, laying a solid foundation for future advancements in methodology and engineering.

    1 Introduction

    Using an optical microscope to analyze pathology slides is a benchmark routine in clinical disease diagnosis and nonclinical medical examinations. With the rapid development of computational methods, digital imaging solutions, such as whole slide imaging (WSI), have become increasingly mainstream in pathology.1–3 However, conventional WSI systems employing objectives with a high numerical aperture (NA) suffer from several issues, such as a limited field of view (FOV) and depth of field (DOF).4,5 Solutions involving mechanical lateral scanning and z-axis focusing require highly precise hardware and can potentially distort pathology images. Therefore, alternative solutions for achieving high-resolution (HR) and high-throughput WSI are greatly desired.

    Thanks to advancements in optical modulation and postdigital processing techniques, optical computational imaging offers solutions for biomedical applications with simpler hardware and more robust imaging results. Adapted from microwave synthetic aperture techniques, Fourier ptychographic microscopy (FPM) is a promising candidate for high-throughput pathology imaging.6–8 In a typical implementation, FPM utilizes a programmable LED array for illumination modulation and a phase-retrieval-based aperture synthesis algorithm for superresolution. This optimization framework allows FPM to correct optical aberrations without external setups.9,10 Consequently, FPM can achieve large FOV, HR, and aberration-free imaging using a low NA objective. FPM has been proven feasible in various fields, such as label-free quantitative pathology analysis,11,12 high-throughput cytometric analysis,13,14 and three-dimensional tissue imaging.15,16

    One of the main obstacles to the large-scale application of FPM is the forward model mismatch caused by inaccurate system parameter estimation and the vignetting effect,17,18 which prevents the FPM reconstruction algorithm from outputting full-field, high-quality images. To address this issue, Zhang et al.19 proposed a feature domain backpropagation method (FD-FPM) to bypass the model mismatch, demonstrating nonblocking, full-field FPM reconstruction. Another significant challenge is the high computational demand for FPM reconstruction. In regular pathology, at least three-channel imaging (RGB) is needed for tissue differentiation, and more than three wavelength channels are required when combining spectral imaging techniques. This requirement substantially increases the time needed for FPM data acquisition and reconstruction, compromising FPM’s high-throughput advantage. A straightforward solution involves wavelength multiplexing methods, such as RGB simultaneous illumination with a chromatic camera20,21 or RGB multiplexed illumination with a monochrome camera.22,23 However, the former suffers from degraded raw data due to the Bayer filter, and the latter requires a mixed-state decoupling algorithm, which does not reduce reconstruction time. Overall, wavelength multiplexing-based methods cannot adequately address the problem of high computational demand.

    Instead of reconstructing HR images for each channel and then generating an RGB HR image, color-transfer-based FPM (CFPM) assumes that the low-resolution (LR) color image contains sufficient color information.24–26 By transferring this color information to the HR single-channel image, CFPM can generate high-quality HR color images, thereby easing the computational burden. We classify existing CFPM methods as nonmodel-based and model-based, depending on whether they consider the statistical or spatial properties of the color space (such as RGB space) in the LR color image and HR image. As an example of a nonmodel-based method, wavelet-based FPM image colorization (wavCFPM) fuses the wavelet coefficients of all three channels of the LR RGB image with the green-channel HR reconstructed image.24 However, distortion can occur when the red or blue channel LR image is fused into the mismatched green channel HR image. Another nonmodel-based method conducts similar fusion in the image’s frequency domain (SFCFPM).27 In addition, one can utilize a neural network for FPM colorization,28 but this requires a large-sized labeled training set and suffers from a lack of generality and physical interpretability.29–31 In contrast, model-based CFPM colorizes the HR image according to statistical or spatial properties in the color domain. For example, Gao et al.25 proposed colorizing the HR gray image by matching its RGB histogram to that of the LR RGB image. Although this method involves color statistical properties, it still suffers from decreased color contrast due to the lack of spatial color information. Chen et al.26 proposed a spatially filtered method (CFFPM) to iteratively predict the HR color image using color-space alternating projection (AP).
Inspired by the Gerchberg–Saxton (GS) algorithm in phase retrieval,32 color-space AP enables the predicted image to obtain the identical color distribution of the LR color image in Lab color space while maintaining its green channel consistency with the HR image. However, CFFPM did not reveal its underlying mathematical principles, which likely share roots with GS in optimization theory, thereby hindering its further improvement and perfection. For example, one could expect that model-based CFPM could be solved in a noniterative way with high precision.

    In this work, we report a fast and high-quality colorization method for full-color pathological imaging using FPM through a generalized dual-color-space constrained model. This model addresses the colorization problem from an optimization standpoint. Rather than iterative calculation, we demonstrate that the model has a closed-form solution when linear color transforms are employed, enabling efficient colorization. Additionally, we introduce several preprocessing strategies to further improve the performance of colorization. We refer to the complete framework as generalized CFPM (gCFPM). Both simulations and experiments demonstrate that the proposed method outperforms state-of-the-art techniques in various ways, including time consumption and colorization quality. Moreover, gCFPM offers an extended DOF compared to the traditional methods. We believe that our work will significantly advance the application of FPM in high-throughput pathological imaging.

    2 Results

    2.1 Principle of gCFPM

    In the regular workflow of FPM, hundreds of raw LR images are used to reconstruct an HR complex transmission function of the object using iterative algorithms. To expedite the retrieval of an HR color image, CFPM utilizes an LR color image $X_{RGB}^{LR}$ to colorize the retrieved HR gray image. Without loss of generality, we denote this HR gray image as $K^{HR}$ ($K = R$, $G$, or $B$). Specifically, model-based CFPM aims to predict an HR color image $X_{RGB}^{HR}$ that matches the color distribution of $X_{RGB}^{LR}$ while maintaining consistent spatial resolution information with $K^{HR}$, as illustrated in Fig. 1(a). To extract the color distribution of an image defined in regular RGB space, it must be transformed into another color space where spatial resolution information and color distribution are decoupled. We denote the extracted LR color distribution as $V^{LR} = TX_{RGB}^{LR}$, where $T$ is an arbitrary color transformation. With two known observations in different color spaces—$K^{HR}$ in the RGB space and $V^{LR}$ in the transformed color space—these serve as constraints for predicting $X_{RGB}^{HR}$. This leads to the formulation of a generalized dual-color-space constrained model as follows:
    $$\text{find } \hat{X}_{RGB}^{HR} \quad \text{s.t.} \quad \begin{cases} T\hat{X}_{RGB}^{HR} = V^{LR}, \\ \hat{K}^{HR} = K^{HR}. \end{cases} \tag{1}$$


    Figure 1.Principle of gCFPM. (a) Schematic of generalized dual-color-space-constrained model for color transfer; (b) principle of RGB-to-IHS transform; and (c) flow chart of gCFPM using IHS-RGB space constraints.

    It is important to note that upsampling of $X_{RGB}^{LR}$ is excluded in Eq. (1) for simplicity but should not be omitted in practice. Color-space AP can be regarded as a solution to Eq. (1) that iteratively projects $\hat{X}_{RGB}^{HR}$ onto the two sets defined by $T\hat{X}_{RGB}^{HR} = V^{LR}$ and $A_K\hat{X}_{RGB}^{HR} = K^{HR}$. By measuring the difference between the prediction and the known observations with the Euclidean distance, Eq. (1) can be transformed into an unconstrained optimization problem,
    $$\hat{X}_{RGB}^{HR} = \arg\min L_C = \arg\min \left\| T\hat{X}_{RGB}^{HR} - V^{LR} \right\|_2^2 + \left\| A_K\hat{X}_{RGB}^{HR} - K^{HR} \right\|_2^2, \tag{2}$$
    where $\|\cdot\|_2$ denotes the Euclidean norm and $A_K$ is the operator extracting channel $K$ from an RGB image. To solve Eq. (2), we introduce the intensity–hue–saturation (IHS) color space,33,34 which has been widely used in pan-sharpening tasks within the field of remote sensing.35–38 As shown in Fig. 1(b), the IHS space is connected with the RGB space by a linear coordinate transformation. This transformation rotates the RGB color cube so that the new intensity axis aligns with the gray axis, while its perpendicular plane spans the intensity-independent color space. When represented in a polar coordinate system, hue and saturation correspond to the angular and radial coordinates, respectively. However, Eq. (2) minimizes the Euclidean distance in color space, so we still present it in a Cartesian coordinate system with two orthogonal bases, $V_1$ and $V_2$. Using this linear IHS transform, the cost function $L_C$ in Eq. (2) becomes quadratic, allowing it to be solved in closed form.
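Because the cost $L_C$ is quadratic for any linear transform $T$, the minimizer follows from a single per-pixel 3×3 normal-equation solve. The sketch below illustrates this with NumPy; variable names are illustrative, and the IHS chroma rows from Sec. 3.1 stand in for $T$:

```python
import numpy as np

# Sketch of the closed-form solve of Eq. (2) for a linear color transform.
rng = np.random.default_rng(0)

T1 = np.array([-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3])   # V1 row
T2 = np.array([ np.sqrt(2)/2, -np.sqrt(2)/2, 0.0])            # V2 row
A_G = np.array([0.0, 1.0, 0.0])        # extracts the green channel (K = G)

# Normal-equation matrix of the quadratic cost L_C; it is pixel-independent,
# so one 3x3 solve colorizes the entire image.
M = np.outer(T1, T1) + np.outer(T2, T2) + np.outer(A_G, A_G)

# Toy per-pixel observations: chroma planes from the LR color image and the
# HR gray image K^HR reconstructed by FPM.
h, w = 4, 4
V1_LR = rng.standard_normal((h, w))
V2_LR = rng.standard_normal((h, w))
K_HR = rng.standard_normal((h, w))

rhs = (np.outer(T1, V1_LR.ravel())
       + np.outer(T2, V2_LR.ravel())
       + np.outer(A_G, K_HR.ravel()))             # shape (3, h*w)
X_hat = np.linalg.solve(M, rhs).reshape(3, h, w)  # R, G, B planes

# With three constraints and three unknowns per pixel, both constraints
# are met exactly:
assert np.allclose(X_hat[1], K_HR)                # green channel equals K^HR
assert np.allclose(T1 @ X_hat.reshape(3, -1), V1_LR.ravel())
```

Note that `M` depends only on the transform and the chosen channel, so it can be factorized once and reused for images of any size.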

    In addition to the colorization model described above, we have proposed several strategies to enhance the performance of gCFPM, as outlined in the flow chart in Fig. 1(c). The first step is to record the LR color image. Rather than using the center LED for illumination, which leads to coherent color artifacts,9,28,39 we illuminate all the LEDs in the bright-field region. This incoherent illumination eliminates artifacts and produces color images that are more consistent with human visual perception. The second step is to decide in which channel the raw data of FPM should be recorded. We use image entropy (IE) to measure the information content40 in each channel of XRGBLR; the one with the highest IE is selected for FPM data recording and HR gray image reconstruction. The third step involves applying FPM algorithms to recover KHR, the system’s coherent transfer function HC, and the optical transfer function HO. Assuming a system with low chromatic aberration, the color map VLR is refined by deconvolution with the recovered HO. This constitutes the fourth step. Finally, the HR color image XRGBHR is predicted using the generalized dual-color-space constrained model.
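The first two steps above can be sketched as follows. The helper names `image_entropy` and `select_recording_channel` are hypothetical, and the histogram-based Shannon entropy is a standard stand-in for the IE measure cited in the text:

```python
import numpy as np

def image_entropy(channel, bins=256):
    """Shannon entropy (bits) of an intensity channel normalized to [0, 1]."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

def select_recording_channel(x_rgb_lr):
    """Pick the RGB channel with the highest image entropy (step 2 of gCFPM).

    x_rgb_lr: (H, W, 3) LR color image under incoherent illumination.
    Returns the channel index (0=R, 1=G, 2=B) and all three entropies.
    """
    entropies = [image_entropy(x_rgb_lr[..., c]) for c in range(3)]
    return int(np.argmax(entropies)), entropies
```

The selected channel is then used both for FPM raw-data recording and for the single-channel HR reconstruction.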

    2.2 Full-Color Imaging via gCFPM

    Experiments were conducted using a self-built FPM platform and a personal computer (PC). The FPM platform is equipped with a 17×17 high-brightness programmable RGB LED array, with center wavelengths at 631.23, 538.86, and 456.70 nm, respectively. It also features a 4× objective (Olympus OPLN4×, NA 0.10) and a camera (ZWO ASI178MM, 2.4 μm pixel pitch, 3096×2080). The synthesized NA of the FPM platform is approximately 0.63. The PC is equipped with an Intel Core i5-12400F CPU and an Nvidia GeForce RTX3060 Ti graphics card. Bright-field LEDs are covered by a diffuser to ensure uniform incoherent illumination when recording LR color images. The white balance of LR color images is corrected using a dynamic threshold algorithm.41 FPM raw data are downsampled by a factor of 2 to reduce the computational burden. Then the central region of 1000 pixel × 1000 pixel is cropped for reconstruction using FD-FPM, resulting in an HR image with a resolution of 5000 pixel × 5000 pixel.

    Figure 2(a) shows the full-color imaging result of a cat stomach smooth muscle section, whose photograph is shown in Fig. 2(b). FPM HR image reconstruction is performed in the green channel. For comparison, the LR color image and the HR image obtained by directly fusing the three channels recovered by FPM (3-FPM) are also included. Enlarged regions of interest (ROIs) are displayed in Fig. 2(c). The results demonstrate that gCFPM achieves significant resolution improvement while preserving rich color information. Although the color distribution is derived from the LR color map, it does not compromise spatial resolution, which is comparable to that of 3-FPM. However, the HR color image obtained through 3-FPM shows inconsistent background color tones across the FOV, as evidenced by ROIs 1 and 2. Additionally, the zoomed-in area in ROI 3 reveals that the 3-FPM result is significantly affected by dispersion-like artifacts around the edges, which nearly obscure the details. In contrast, the gCFPM result maintains clarity, allowing for easy recognition of details. Figure 2(d) illustrates the total time consumption of gCFPM and 3-FPM. In our practice, single-channel data recording and image reconstruction of 5000 pixel × 5000 pixel resolution on a GPU typically take around 2 and 7 min, respectively, which triples when the R, G, and B channels are processed separately in 3-FPM. In contrast, gCFPM requires only single-channel reconstruction plus a colorization step taking less than 5 s, resulting in a time savings of approximately two-thirds compared with the traditional method. To further validate our method, we inspect two additional sections, as shown in Fig. 3. The reconstruction channel of the FPM HR image for Figs. 3(c) and 3(d) is red and green, respectively. These two samples also demonstrate the excellent imaging performance of our method.


    Figure 2.Full-color imaging via gCFPM. (a) Imaging result of the sample; (b) photograph of cat stomach smooth muscle section; (c) enlarged view of ROI; and (d) full process time comparison of 3-FPM (three-channel FPM) and gCFPM.


    Figure 3.(a) Photograph of stratified epithelium section; (b) photograph of lycopodium sporophyll spike longitudinal section; (c) imaging result of (a); and (d) imaging result of (b).

    3 Materials and Methods

    3.1 Solving Dual-Color-Space Constrained Model

    This section introduces the detailed solution of Eq. (2) with the IHS transform. One can easily extend the following formulas to other linear color transforms. The RGB-to-IHS transform is given as
    $$\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{3} & \dfrac{1}{3} & \dfrac{1}{3} \\ -\dfrac{\sqrt{2}}{6} & -\dfrac{\sqrt{2}}{6} & \dfrac{\sqrt{2}}{3} \\ \dfrac{\sqrt{2}}{2} & -\dfrac{\sqrt{2}}{2} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \tag{3}$$

    $I$ is the intensity, and $V_1$ and $V_2$ are the two orthogonal bases of the color plane. For simplicity, we denote the $V_1$- and $V_2$-related row vectors in the transform matrix as $T_1 = \left[-\frac{\sqrt{2}}{6}, -\frac{\sqrt{2}}{6}, \frac{\sqrt{2}}{3}\right]$ and $T_2 = \left[\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2}, 0\right]$; that is, $V_i = T_i[R, G, B]^T$, $i = 1, 2$. The cost function in Eq. (2) is then rephrased as
    $$L_C = \left\| T_1\hat{X}_{RGB}^{HR} - V_1^{LR} \right\|_2^2 + \left\| T_2\hat{X}_{RGB}^{HR} - V_2^{LR} \right\|_2^2 + \left\| A_K\hat{X}_{RGB}^{HR} - K^{HR} \right\|_2^2. \tag{4}$$

    To minimize Eq. (4), the corresponding derivative $\partial L_C / \partial \hat{X}_{RGB}^{HR}$ is set to zero. This leads to solving the following linear equation:
    $$\left( T_1^T T_1 + T_2^T T_2 + A_K^T A_K \right) \hat{X}_{RGB}^{HR} = T_1^T V_1^{LR} + T_2^T V_2^{LR} + A_K^T K^{HR}. \tag{5}$$

    The coefficient matrix proves to be full rank and invertible for all three choices of channel $K$. Therefore, $\hat{X}_{RGB}^{HR}$ is given by directly solving Eq. (5). This linear solution to Eq. (5) indicates that each channel of $\hat{X}_{RGB}^{HR}$ can be expressed as a linear combination of $V_1^{LR}$, $V_2^{LR}$, and $K^{HR}$. For example, taking $K = G$, the calculation of $\hat{X}_{RGB}^{HR}$ is detailed as follows:
    $$\begin{bmatrix} \hat{R}^{HR} \\ \hat{G}^{HR} \\ \hat{B}^{HR} \end{bmatrix} = \begin{bmatrix} 0 & \sqrt{2} & 1 \\ 0 & 0 & 1 \\ \dfrac{3\sqrt{2}}{2} & \dfrac{\sqrt{2}}{2} & 1 \end{bmatrix} \begin{bmatrix} V_1^{LR} \\ V_2^{LR} \\ K^{HR} \end{bmatrix}. \tag{6}$$
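This combination matrix can be checked numerically: stacking $T_1$, $T_2$, and the green-channel extractor gives a square per-pixel system, and its inverse is exactly the stated matrix. A short verification sketch (the matrix entries follow from the IHS rows used here):

```python
import numpy as np

s2 = np.sqrt(2)
T1 = np.array([-s2/6, -s2/6, s2/3])    # V1 row of the IHS transform
T2 = np.array([ s2/2, -s2/2, 0.0])     # V2 row
A_G = np.array([0.0, 1.0, 0.0])        # green-channel extractor (K = G)

# Stacked, the constraints form a square map (R, G, B) -> (V1, V2, K); its
# inverse is the combination matrix applied to [V1^LR, V2^LR, K^HR].
C = np.vstack([T1, T2, A_G])
W = np.linalg.inv(C)

W_expected = np.array([[0.0,    s2,   1.0],
                       [0.0,    0.0,  1.0],
                       [3*s2/2, s2/2, 1.0]])
assert np.allclose(W, W_expected)
```

The middle row [0, 0, 1] makes the green-channel constraint explicit: the predicted green plane is the FPM reconstruction itself.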

    The above derivation is based on the linear color transform. However, nonlinear transforms are more common in practical applications because they realize better decoupling of images’ color and resolution information.42–45 To solve a dual-color-space-constrained model based on a nonlinear transform, AP on two color spaces is the most straightforward method. Another potential solution is gradient descent-based optimization,46–48 which requires the nonlinear color transform to be continuously differentiable or have an existing subgradient. Both methods are iterative and do not guarantee convergence. In contrast, the linear transform-based solution is closed form and unique.

    3.2 Solving Deconvolution Model in Transformed Color Space

    Besides the object’s HR image, FPM also recovers the coherent transfer function $H_C$, which characterizes the optical aberration of the system, and the incoherent optical transfer function $H_O$ is then derived as the autocorrelation of $H_C$.49,50 This enables correction and contrast enhancement of the color distribution in recorded observations. Assuming the system’s chromatic aberration is minor, all three channels of $X_{RGB}^{LR}$ are degraded by $H_O$. This can be equally modeled in a linear-transformed color space, resulting in $V_i^{LR} = \mathcal{F}^{-1}\{H_O \odot \mathcal{F}\hat{V}_i^{LR}\}$, where $\hat{V}_i^{LR}$ represents the ideal distribution of $V_i^{LR}$, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, $\odot$ signifies the element-wise product, and $i = 1, 2$. We introduce a Tikhonov-regularized deconvolution model as51
    $$\hat{V}_i^{LR} = \arg\min L_{dec} = \arg\min \left\| \mathcal{F}^{-1}\{H_O \odot \mathcal{F}\hat{V}_i^{LR}\} - V_i^{LR} \right\|_2^2 + \sigma \left\| \Delta \hat{V}_i^{LR} \right\|_2^2, \tag{7}$$
    where $\sigma$ is the regularization weight and $\Delta$ denotes the difference operator. The second term in Eq. (7) restricts the edge energy of $\hat{V}_i^{LR}$ to prevent color artifacts. The Fourier-domain solution to Eq. (7) is given as
    $$\hat{V}_i^{LR} = \mathcal{F}^{-1}\left( \frac{H_O^* \odot \mathcal{F}V_i^{LR}}{|H_O|^2 + \sigma|H_\Delta|^2} \right), \tag{8}$$
    where $H_\Delta$ is the transfer function of $\Delta$ and $^*$ denotes complex conjugation.
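The Tikhonov solution amounts to a one-line frequency-domain filter. A self-contained sketch follows; periodic boundary conditions and a forward-difference $\Delta$ are implementation choices here, not prescribed by the text:

```python
import numpy as np

def tikhonov_deconvolve(v_lr, H_O, sigma=1e-2):
    """Fourier-domain Tikhonov deconvolution of one chroma plane.

    v_lr : observed chroma plane V_i^LR (2D real array)
    H_O  : incoherent OTF on the same grid (2D, zero frequency at [0, 0])
    sigma: regularization weight on the difference operator
    """
    h, w = v_lr.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]

    # Transfer functions of the forward differences along x and y
    # (periodic boundary conditions); their energies sum to |H_Delta|^2.
    H_dx = np.exp(2j * np.pi * fx) - 1.0
    H_dy = np.exp(2j * np.pi * fy) - 1.0
    grad_energy = np.abs(H_dx) ** 2 + np.abs(H_dy) ** 2

    V = np.fft.fft2(v_lr)
    V_hat = np.conj(H_O) * V / (np.abs(H_O) ** 2 + sigma * grad_energy)
    return np.real(np.fft.ifft2(V_hat))
```

With `sigma = 0` this reduces to a plain inverse filter; the gradient penalty damps the high frequencies where $|H_O|$ is small, which is what suppresses ringing and color artifacts.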

    4 Discussion

    4.1 Comparison with Other Methods

    To validate the performance of gCFPM, we compare it with wavCFPM, SFCFPM, and CFFPM. Algorithm parameters are set according to the suggestions in the corresponding papers. All the methods use an incoherent LR image as a color donor for the sake of fairness, although a coherent image with significant color artifacts is adopted in their original versions.

    First, a comparison was conducted using simulation data, with system parameters consistent with those used in the experiment. The ground truth of the color target and the result using different methods are shown in Fig. 4(a). Quantitative comparison using root-mean-squared error (RMSE) and color extension of structure similarity (CSSIM)52 is displayed in Fig. 4(b). The zoomed-in region of the gCFPM result displays rich color with minimal structural and color artifacts. Both RMSE and CSSIM evaluation demonstrate that the result of gCFPM is closest to the ground truth.


    Figure 4.Comparison of different colorization methods. (a) Simulation result; (b) RMSE and CSSIM comparison in (a); and (c) comparison using experimental data. The upper row in (c) shows the result of ROI 3 in the stratified epithelium section; the lower row in (c) shows the result of ROI 1 in lycopodium sporophyll spike L.S.

    Next, a comparison was made using experimental data. Results of ROI 3 of a stratified epithelium section and ROI 1 of lycopodium sporophyll spike L.S. are illustrated, as displayed in Fig. 4(c). We use an incoherent HR color image obtained with a 20× objective of 0.4 NA as a reduced reference. In ROI 3 of the stratified epithelium section, where color tones are monotonous, gCFPM still delivers satisfying visual quality, whereas wavCFPM exhibits noticeable resolution loss and CFFPM shows reduced color contrast. In ROI 1 of the lycopodium sporophyll spike L.S., SFCFPM and CFFPM display significant color loss around detailed structures, failing to distinguish adjacent pink and blue areas. Moreover, CFFPM is disrupted by mosaic-like artifacts resulting from feature-matching-based color transfer, which may fail in regions with minor structures or gradual changes. In contrast, gCFPM maintains HR without losing color details, offering quality comparable to that of the reference image obtained with the high NA objective. Further, we evaluated each method on the entire FOV image of 5000 pixel × 5000 pixel using quantitative indices, including colorization time on CPU, color image sharpness,53 and colorfulness.54 The assessment values are provided in Table 1. For both of the inspected samples, the sharpness value of wavCFPM is noticeably lower than that of the others, while the abnormally high sharpness value of CFFPM is partly due to mosaic-like artifacts. gCFPM demonstrates competitive sharpness compared to other methods while achieving the highest colorfulness value and requiring the least time for processing. Both qualitative and quantitative comparisons demonstrate the superiority of the proposed method over others.

    Method       Time (s)    Stratified epithelium section    Lycopodium sporophyll spike L.S.
                             Sharpness    Colorfulness        Sharpness    Colorfulness
    20×, 0.4 NA  —           17.5961      0.6733              24.2565      0.6535
    wavCFPM      22.9502     14.1853      0.6959              20.0768      0.5937
    SFCFPM       3.7196      19.9778      0.7301              29.4396      0.5546
    CFFPM        1622.7449   25.8384      0.6401              30.7840      0.5964
    gCFPM        3.6868      20.3490      0.8220              28.5747      0.6435

    Table 1. Quantitative comparison of different methods in experiments.

    4.2 Large Depth-of-Field Full-Color Imaging

    Due to the limited DOF of high-NA objectives, traditional systems require a high-precision z-axis controller and supporting algorithms to suppress focus drifting. In contrast, FPM-based systems typically have a large primary DOF due to the use of low-NA objectives, which is further extended by robust aberration correction algorithms. This section demonstrates large DOF full-color imaging using gCFPM. We defocused the sample from −30 to +30 μm in steps of 10 μm, with the imaging results of gCFPM shown in Fig. 5(a). For comparison, the result using a 20× objective is also provided. The gCFPM method presents clear structures and abundant color information within a DOF of 60 μm. The color distribution at −30 μm is slightly disrupted by enhanced chromatic aberration in the large defocus region. In contrast, the 20× objective only produces a clear image at the in-focus position, reflecting its extremely small DOF. To quantitatively assess the extended DOF of gCFPM, the entire FOV was segmented into 100 blocks, and the focus quality55 and sharpness indices were applied, as shown in Figs. 5(b) and 5(c). The solid line represents the mean value across all blocks, and error bars represent the indices’ variation across these blocks. The behavior of gCFPM’s error bars indicates that blocks across the whole FOV are all precisely in focus, while those with sparse or dense image content are still well distinguished. The quantitative assessment is consistent with the qualitative analysis, which confirms the effectiveness of gCFPM in maintaining image quality across an extended DOF.


    Figure 5.(a) Imaging results of gCFPM and 20× objective with defocus ranging from −30 to +30 μm; (b), (c) focus quality and sharpness assessment on two full-color imaging methods.

    4.3 Extending to Other Linear Color Transforms

    In addition to the IHS transform, the gCFPM model can be extended to other linear color transforms. We present an additional result on the simulation image in Table 2, using the YUV transform with three different standards (BT.601,56 BT.709,57 and BT.2020,58) and the pseudo-KL transform (PKLT).35,59 The results show that the quantitative assessment of PKLT is slightly better than that of the other four transforms. However, this improvement is too minor to be noticeable by visual perception.

    Transform        RMSE      1−CSSIM
    IHS              0.0928    0.1355
    YUV (BT.601)     0.0926    0.1354
    YUV (BT.709)     0.0927    0.1354
    YUV (BT.2020)    0.0927    0.1354
    PKLT             0.0891    0.1325

    Table 2. Assessment of gCFPM based on the extended linear transform.
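As a sketch of this extension, any linear transform qualifies as long as its two chroma rows, stacked with the channel extractor $A_K$, form an invertible 3×3 system. The check below uses the standard BT.601 YUV chroma coefficients (an assumption; the paper's exact matrices may differ in scaling):

```python
import numpy as np

# Standard BT.601 YUV chroma rows expressed on (R, G, B).
U_row = np.array([-0.14713, -0.28886,  0.436  ])
V_row = np.array([ 0.615,   -0.51499, -0.10001])

# For each choice of recording channel K, the stacked constraint matrix
# must be full rank so that Eq. (5) has a unique closed-form solution.
for k in range(3):                   # K = R, G, B in turn
    A_K = np.eye(3)[k]
    C = np.vstack([U_row, V_row, A_K])
    assert np.linalg.matrix_rank(C) == 3   # invertible -> unique solution
```

The same loop can be run for any candidate transform (BT.709, BT.2020, a PKLT estimate) before plugging its rows into the closed-form solver.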

    5 Conclusion and Outlooks

    Having broken the trade-off between resolution and FOV, FPM paves the way for the next generation of low-cost, high-precision digital pathology imaging. However, the high computational demands of FPM have hindered its broader application, particularly in high-throughput settings where true-color imaging is essential. CFPM offers a partial solution by transferring color from an LR image to the HR reconstructed gray-scale image. However, previous approaches have not fully harnessed the potential of color transfer, leaving room for improvement in both computational efficiency and colorization quality.

    In this paper, we proposed a fast full-color imaging method for Fourier ptychography microscopy called gCFPM, based on a generalized dual-color-space-constrained model. This model addresses the color transfer problem in FPM from an optimization standpoint, enabling noniterative and precise colorization using only an LR color image and a single-channel FPM reconstruction. This significantly improves the efficiency of high-throughput FPM imaging platforms. Additionally, we introduced several preprocessing strategies to further enhance the colorization quality of the proposed method. Both simulation and experimental results demonstrate that the proposed method outperforms state-of-the-art techniques in terms of colorization quality and computation time. Furthermore, our method supports large DOF full-color imaging without requiring high-precision hardware. These findings indicate that gCFPM fully utilizes both color and resolution information from limited raw data, providing results comparable to traditional optical microscopy.

    In the current framework of CFPM, details beyond the objective’s bandwidth are completely lost, and even gCFPM exhibits some inevitable color distortion in these areas. To overcome these inherent limitations, future improvements may come from enhancing the raw data. With advanced modulation and demodulation techniques, CFPM has the potential to break the trade-off between imaging efficiency and color accuracy, just as it has addressed the trade-off between resolution and FOV. Nevertheless, we are confident that the proposed method has sufficient capability for most applications and will serve as a solid foundation for future developments.

    Yanqi Chen is an MS degree student in optics at Xi’an Institute of Optics and Precision Mechanics (XIOPM), Chinese Academy of Sciences (CAS), China. He received his bachelor’s degree in Optical Information Science and Technology from Nanjing Normal University, China, in 2019. His current research focuses on Fourier ptychographic microscopy and coherent diffraction imaging.

    Jiurun Chen is a PhD student in Electronic Information Engineering at Tsinghua Shenzhen International Graduate School, China. He received his bachelor’s degree in Mechanical Engineering from Central South University, China, in 2020, and his master’s degree in Electronic Information Engineering from XIOPM, CAS, China, in 2023. His current research focuses on biomedical engineering and optical imaging.

    Zhiping Wang is an MS degree student in physics of life at University of Basel, Switzerland. He received his bachelor’s degree in physics from Lanzhou University, China, in 2024. His current research focuses on computational imaging and structural biology.

    Yuting Gao is a PhD student in Electronic Engineering at XIOPM, CAS, China. She received her bachelor’s degree in Electronic Engineering from Xi’an University of Architecture and Technology, China, in 2019. Her current research focuses on Fourier ptychographic microscopy.

    Yonghong He is a professor and PhD supervisor in biomedical engineering and optics at Tsinghua Shenzhen International Graduate School, China. He received his PhD in optics from South China Normal University in 2002 and completed a postdoc at Cranfield University, UK. His current research focuses on biomedical optics, OCT imaging, and high-throughput biochip analysis.

    Yishi Shi is a professor and PhD supervisor in opto-electronic engineering at University of Chinese Academy of Sciences, China. He received his PhD in optics from University of Chinese Academy of Sciences in 2008 and completed a postdoc in opto-electronic engineering in 2010. His current research focuses on opto-electronic information, artificial intelligence, and optical imaging.

    An Pan is an associate professor and a principal investigator at XIOPM, CAS, China, and the head of the Pioneering Interdiscipline Center of the State Key Laboratory of Transient Optics and Photonics. He received his BE degree in electronic science and technology from Nanjing University of Science and Technology (NJUST), China, in 2014, and he obtained his PhD in optical engineering at XIOPM, CAS, China, in 2020. He was a visiting graduate at Bar-Ilan University, Israel, in 2016 and at the California Institute of Technology (Caltech), USA, from 2018 to 2019. He focuses on computational optical imaging and biophotonics and is among the first to work on Fourier ptychography. He was selected as the 2024 Optica Ambassador and is the winner of the 2021 Forbes China 30 Under 30 List, 2021 Excellent Doctoral Dissertation of CAS, 2020 Special President Award of CAS, 2019 OSA Boris P. Stoicheff Memorial Scholarship, the 1st Place Poster Award of the 69th Lindau Nobel Laureate Meetings in Germany (Lindau Scholar), and 2017 SPIE Optics and Photonics Education Scholarship. He has published 40 peer-reviewed journal papers and is a referee for more than 40 peer-reviewed journals. He is an early career member of Optica and SPIE.

    References

    [1] M. K. K. Niazi, A. V. Parwani, M. N. Gurcan. Digital pathology and artificial intelligence. Lancet Oncol., 20, e253-e261(2019).

    [2] N. Kumar, R. Gupta, S. Gupta. Whole slide imaging (WSI) in pathology: current perspectives and future directions. J. Digit. Imaging, 33, 1034-1040(2020).

    [3] R. Brixtel et al. Whole slide image quality in digital pathology: review and perspectives. IEEE Access, 10, 131005-131035(2022).

    [4] K. Guo et al. InstantScope: a low-cost whole slide imaging system with instant focal plane detection. Biomed. Opt. Express, 6, 3210-3216(2015).

    [5] C. Guo et al. OpenWSI: a low-cost, high-throughput whole slide imaging system via single-frame autofocusing and open-source hardware. Opt. Lett., 45, 260-263(2019).

    [6] G. Zheng, R. Horstmeyer, C. Yang. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics, 7, 739-745(2013).

    [7] A. Pan, C. Zuo, B. Yao. High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine. Rep. Prog. Phys., 83, 096101(2020).

    [8] S. Jiang et al. Spatial- and Fourier-domain ptychography for high-throughput bio-imaging. Nat. Protoc., 18, 2051-2083(2023).

    [9] X. Ou, G. Zheng, C. Yang. Embedded pupil function recovery for Fourier ptychographic microscopy. Opt. Express, 22, 4960-4972(2014).

    [10] Y. Chen, J. Xu, A. Pan. Depth-of-field extended Fourier ptychographic microscopy without defocus distance priori. Opt. Lett., 49, 3222-3225(2024).

    [11] R. Horstmeyer et al. Digital pathology with Fourier ptychography. Comput. Med. Imaging Graph., 42, 38-43(2015).

    [12] M. Valentino et al. Beyond conventional microscopy: observing kidney tissues by means of Fourier ptychography. Front. Physiol., 14, 1120099(2023).

    [13] A. C. Chan et al. Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 eyes). Sci. Rep., 9, 11114(2019).

    [14] Y. Shu et al. Adaptive optical quantitative phase imaging based on annular illumination Fourier ptychographic microscopy. PhotoniX, 3, 24(2022).

    [15] R. Horstmeyer et al. Diffraction tomography with Fourier ptychography. Optica, 3, 827-835(2016).

    [16] S. Xu et al. Tensorial tomographic Fourier ptychography with applications to muscle tissue imaging. Adv. Photonics, 6, 026004-026004(2024).

    [17] A. Pan et al. Vignetting effect in Fourier ptychographic microscopy. Opt. Lasers Eng., 120, 40-48(2019).

    [18] T. Feng et al. Linear-space-variant model for Fourier ptychographic microscopy. Opt. Lett., 49, 2617-2620(2024).

    [19] S. Zhang et al. FPM-WSI: Fourier ptychographic whole slide imaging via feature-domain backdiffraction. Optica, 11, 634-646(2024).

    [20] K. Zhang et al. Using symmetric illumination and color camera to achieve high throughput Fourier ptychographic microscopy. J. Biophotonics, 16, e202200303(2023).

    [21] J. Sun et al. Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography. Opt. Lett., 43, 3365-3368(2018).

    [22] M. Wang et al. A color-corrected strategy for information multiplexed Fourier ptychographic imaging. Opt. Commun., 405, 406-411(2017).

    [23] Y. Zhou et al. Fourier ptychographic microscopy using wavelength multiplexing. J. Biomed. Opt., 22, 066006(2017).

    [24] J. Zhang et al. Efficient colorful Fourier ptychographic microscopy reconstruction with wavelet fusion. IEEE Access, 6, 31729-31739(2018).

    [25] Y. Gao et al. High-throughput fast full-color digital pathology based on Fourier ptychographic microscopy via color transfer. Sci. China Phys. Mech. Astron., 64, 114211(2021).

    [26] J. Chen et al. Rapid full-color Fourier ptychographic microscopy via spatially filtered color transfer. Photonics Res., 10, 2410-2421(2022).

    [27] J. Zhen et al. Fast color Fourier ptychographic microscopy based on spatial filtering frequency fusion. Opt. Laser Technol., 181, 112054(2025).

    [28] R. Wang et al. Virtual brightfield and fluorescence staining for Fourier ptychography via unsupervised deep learning. Opt. Lett., 45, 5405-5408(2020).

    [29] Y. Rivenson et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat. Biomed. Eng., 3, 466-477(2019).

    [30] Y. Zhang et al. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue. Light Sci. Appl., 9, 78(2020).

    [31] Y. Wang et al. A virtual staining method based on self-supervised GAN for Fourier ptychographic microscopy colorful imaging. Appl. Sci., 14, 1662(2024).

    [32] G.-Z. Yang et al. Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison. Appl. Opt., 33, 209-218(1994).

    [33] R. S. Ledley, M. Buas, T. J. Golab. Fundamentals of true-color image processing, 791-795(1990).

    [34] W. Carper et al. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens., 56, 459-467(1990).

    [35] T.-M. Tu et al. A new look at IHS-like image fusion methods. Inf. Fusion, 2, 177-186(2001).

    [36] Z.-M. Zhou et al. Joint IHS and variational methods for pan-sharpening of very high resolution imagery, 2597-2600(2013).

    [37] Y. Song et al. An adaptive pansharpening method by using weighted least squares filter. IEEE Geosci. Remote Sens. Lett., 13, 18-22(2015).

    [38] P. Liu, L. Xiao. A novel generalized intensity-hue-saturation (GIHS) based pan-sharpening method with variational Hessian transferring. IEEE Access, 6, 46751-46761(2018).

    [39] Y. Fan et al. Efficient synthetic aperture for phaseless Fourier ptychographic microscopy with hybrid coherent and incoherent illumination. Laser Photonics Rev., 17, 2200201(2023).

    [40] D.-Y. Tsai, Y. Lee, E. Matsuyama. Information entropy measure for evaluation of image quality. J. Digit. Imaging, 21, 338-347(2008).

    [41] C.-C. Weng, H. Chen, C.-S. Fuh. A novel automatic white balance method for digital still cameras, 3801-3804(2005).

    [42] G. M. Johnson, M. D. Fairchild. A top down description of S-CIELAB and CIEDE2000. Color Res. Appl., 28, 425-435(2003).

    [43] S. Sural, G. Qian, S. Pramanik. Segmentation and histogram generation using the HSV color space for image retrieval, II(2002).

    [44] A. R. Weeks, C. E. Felix, H. R. Myler. Edge detection of color images using the HSL color space. Proc. SPIE, 2424, 291-301(1995).

    [45] C. Li et al. A revision of CIECAM02 and its CAT and UCS, 208-212(2016).

    [46] W. W. Hager, H. Zhang. A survey of nonlinear conjugate gradient methods. Pac. J. Optim., 2, 35-58(2006).

    [47] D. P. Kingma, J. Ba. Adam: a method for stochastic optimization. arXiv:1412.6980(2014).

    [48] X. Xie et al. Adan: adaptive Nesterov momentum algorithm for faster optimizing deep models. IEEE Trans. Pattern Anal. Mach. Intell., 46, 9508-9520(2024).

    [49] J. Chung et al. Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography. Biomed. Opt. Express, 7, 352-368(2016).

    [50] J. W. Goodman. Introduction to Fourier Optics(2005).

    [51] A. N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Sov. Math. Dokl., 4, 1035-1038(1963).

    [52] A. Toet, M. P. Lucassen. A new universal colour image fidelity metric. Displays, 24, 197-207(2003).

    [53] C. Shi, Y. Lin, X. Cao. No reference image sharpness assessment based on global color difference variation. Chin. J. Electron., 33, 293-302(2024).

    [54] K. Panetta, C. Gao, S. Agaian. No reference color image contrast and quality measures. IEEE Trans. Consum. Electron., 59, 643-651(2013).

    [55] M. S. Hosseini et al. Focus quality assessment of high-throughput whole slide imaging in digital pathology. IEEE Trans. Med. Imaging, 39, 62-74(2019).

    [56] Studio Encoding Parameters of Digital Television for Standard 4:3 and Wide-Screen 16:9 Aspect Ratios. Rec. ITU-R BT.601(2011).

    [57] Parameter Values for the HDTV Standards for Production and International Programme Exchange. Rec. ITU-R BT.709(2002).

    [58] Parameter Values for Ultra-High Definition Television Systems for Production and International Programme Exchange. Rec. ITU-R BT.2020, 1-7(2012).

    [59] R. M. Haralick, L. Shapiro. Computer and Robot Vision, Vol. 1(1992).