Untrained deep network powered with explicit denoiser for phase recovery in inline holography

The quantitative phase information of biological samples is directly associated with the thickness and refractive index of the sample and, therefore, improves clinical analysis.1,2 Digital inline holographic microscopy (DIHM) enables label-free quantitative imaging of transparent samples and is widely used in the medical and physical sciences. In DIHM, the complex phase information is encoded in an interference pattern known as a hologram, in which the 3D information of the object is recorded in a single 2D image.3 The hologram is then reconstructed by backpropagating it from the detector plane to the object plane. An extended depth of field (DOF) can be achieved during the phase retrieval process to obtain the 3D distribution of the sample.4 Due to the missing phase information in the hologram, the reconstruction suffers from an undesirable effect called the twin image artifact. To suppress this twin image, several phase retrieval methods have been proposed in the literature.

Lensless DIHM (LDIHM) is a simple and cost-effective imaging modality for point-of-care applications.5,6 Here, the object is illuminated by a partially coherent source, and the interference pattern of the diffracted wave Uo and the un-diffracted wave UR is captured by a detector kept at a distance z (≤1 mm) from the sample,

I = |Uo + UR|² = |Uo|² + |UR|² + Uo*UR + UoUR*.  (1)

The un-diffracted wave, known as the reference wave, is recorded without the sample under the same recording conditions. The effect of |UR|² is eliminated by normalizing the hologram, and the self-interference term |Uo|² is treated as the system noise e. Equation (1) can therefore also be written as

I = Uo*UR + UoUR* + e = H(uo),  (2)

where H(·) is the function that maps the object field to the hologram intensity for the known reference wave. The diffracted wave at the detector plane is given as

Uo(x, y) = ∬ uo(x1, y1) exp(ikz) h(x − x1, y − y1) dx1 dy1,  (3)

where h(·) denotes the free-space propagation7 of the diffracted wave over the propagation distance z, k is the wave number (k = 2π/λ), and uo(x1, y1) is the object field. The reconstruction ûobj(x1, y1) at distance z from the detector is computed with the angular spectrum transfer function as

ûobj(x1, y1) = F⁻¹[F(I) · exp(−(2πiz/λ)√(1 − (λfx)² − (λfy)²))],  (4)

where F and F⁻¹ are the Fourier transform and inverse Fourier transform, respectively, and fx and fy are the spatial frequency coordinates.
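As a concrete illustration, the backpropagation of Eq. (4) can be sketched in a few lines of NumPy. This is a minimal sketch under simplifying assumptions (square pixels, a normalized hologram, and the function name backpropagate_asm is ours), not the authors' implementation:

```python
import numpy as np

def backpropagate_asm(hologram, z, wavelength, dx):
    """Backpropagate a normalized hologram to the object plane using the
    angular spectrum method of Eq. (4). z and dx in metres."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequency coordinates
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root; evanescent components (arg < 0) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    mask = arg > 0
    kernel = np.zeros_like(arg, dtype=complex)
    # exp(-(2*pi*i*z/lambda) * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2))
    kernel[mask] = np.exp(-2j * np.pi * z / wavelength * np.sqrt(arg[mask]))
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)
```

For the set-up parameters used later in the Letter (λ = 627 nm, 1.67 μm pixel pitch), a call might look like u = backpropagate_asm(I, z=1e-3, wavelength=627e-9, dx=1.67e-6).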
However, twin image artifacts, caused by the propagation of the conjugated wavefront with missing phase information, contaminate the reconstruction. The missing phase can be retrieved using iterative methods such as the Gerchberg–Saxton (GS) algorithm8 and the hybrid input–output algorithm,9 which perform two-way constrained iterations. Popular object support constraints (OSC) applied in the object plane include the positivity constraint [if ai(x1, y1) < 0, then ai(x1, y1) = 0],10,11 sparsity constraints,12 and total variation (TV).13 Propagation between the hologram and the object plane with a specific object support region yields better phase retrieval but fails to recover the textural and structural details of the object. This can be further improved by using phase diversity, which can be achieved by recording multiple images under varying recording conditions such as multi-angle illumination,14 multi-wavelength,15 and multi-height16,17 acquisition. Recording multiple holograms, however, imposes computational complexity and increases the set-up cost. Furthermore, live-cell imaging is not possible with these set-ups, thereby limiting their use in real-time analysis.

In another approach, the inverse problem (IP) method is applied to obtain the desired reconstruction by solving uo = H⁻¹(I). This is typically posed as a regularized inverse problem,18,19

ûo = argmin_uo ‖I − H(uo)‖² + τ ρ(uo),  (5)

where ρ(·) is a handcrafted prior on the object that promotes certain characteristics in the reconstruction, such as smoothness, sparsity, or low total variation. The most popular prior in holographic imaging is sparsity, as the twin-image-free reconstruction is sparse in nature.
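To make the sparsity-regularized inversion described above concrete, the NumPy sketch below minimizes a data-fidelity term plus an ℓ1 prior with the iterative shrinkage-thresholding algorithm (ISTA). The operators H and Ht are generic placeholders for a forward model and its adjoint (our assumption for illustration; in the Letter, H is the holographic forward operator):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(I, H, Ht, tau=0.1, step=0.5, n_iter=300):
    """Sketch of ISTA for min_u 0.5*||I - H(u)||^2 + tau*||u||_1."""
    u = Ht(I)                                # initial backprojection
    for _ in range(n_iter):
        grad = Ht(H(u) - I)                  # gradient of the data term
        u = soft_threshold(u - step * grad, step * tau)
    return u
```

With an identity forward operator, the iteration converges to the familiar soft-thresholded solution, which is a quick sanity check on the implementation.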
The IP-based methods do not reduce or remove the twin image directly; instead, they search for the reconstructed object that is most consistent with the captured hologram. The reconstruction finds a solution ûo that fits the captured hologram well, given some prior knowledge. However, due to the limited discriminative power of handcrafted priors, they often fail to capture the rich structure of many natural signals. To further improve the reconstruction, several learned priors, such as sparse coding,20 dictionary learning,21 and learned plug-and-play regularizations,22 have been proposed. These priors are learned from a limited dataset. However, the phase retrieval ability of the inverse approach has not been explored extensively. Integrating the alternating-projections strategy with regularized inversion for single-shot phase reconstruction has created new research opportunities.23,24

Inverse problems can also be solved with supervised or unsupervised deep learning methods.18,25,26 The learned inversion-based solution to Eq. (4) learns the mapping from the input intensity image (I) to the reconstructed object (uo). If Rθ is the mapping function with network parameters θ, the learning can be represented as27

Rθ* = argmin_θ Σk ‖Rθ(Ik) − uok‖²,  ∀(Ik, uok) ∈ ST.  (6)

The learned mapping function Rθ* can then map a diffraction pattern I that is not in the training set to the corresponding object ûo. The performance of Rθ* depends heavily on the size and variance of the dataset ST. Gathering input–output image pairs (Ik, uok) for practical applications is time-consuming and often infeasible. Moreover, the test image may differ from the training images due to mechanical and environmental instability.
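As a toy illustration of the supervised mapping in Eq. (6), the sketch below "trains" the simplest possible model, a single linear operator R, from simulated input–output pairs and applies it to an unseen measurement. The linear model, the orthogonal toy forward operator A, and all variable names are our assumptions for illustration; the networks in the cited works are deep CNNs:

```python
import numpy as np

rng = np.random.default_rng(1)

n, K = 32, 200                                    # signal size, training pairs
A, _ = np.linalg.qr(rng.standard_normal((n, n)))  # toy forward model
U = rng.standard_normal((n, K))                   # training objects u_ok
Y = A @ U                                         # corresponding "holograms" I_k

# Least-squares analogue of Eq. (6): min_R sum_k ||R I_k - u_ok||^2
R = np.linalg.lstsq(Y.T, U.T, rcond=None)[0].T

u_new = rng.standard_normal(n)                    # unseen test object
u_rec = R @ (A @ u_new)                           # reconstruct from its hologram
```

The same pipeline degrades when the test-object statistics differ from the training set, which is precisely the data dependence that motivates the untrained approaches discussed next.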
Furthermore, deep networks rely extensively on the morphological characteristics of the sample; hence, separate training is required for each sample. To alleviate this data requirement, the deep image prior (DIP) was introduced as the first untrained network to achieve denoising, image restoration, image inpainting, and super-resolution from a single noisy image.28

Recently, physics-aware untrained neural networks have proved efficient for image restoration by solving standard inverse problems without requiring any prior training.29–33 Phase reconstruction using DIP is achieved by incorporating a physical model (forward propagation), a manifestation of the image formation process, into conventional deep learning. Phase reconstruction for DIHM using DIP29 has shown superior performance compared to compressive sensing (CS) and conventional propagation-based reconstructions, even without prior training. DIP-based reconstruction is formulated as29,30

θ* = argmin_θ ‖H(Rθ(z)) − I‖²,  s.t.  uo = Rθ(z),  (7)

where the network Rθ generates the object uo from either a fixed noise vector z or some initial reconstructed object. The forward propagation model of Eq. (2) is then used to generate a new hologram, and the error between the captured and generated holograms is fed back to the network to update its parameters. The interplay between the network Rθ and the forward model H(uo) fits the network parameters θ, which are then used to reconstruct the amplitude and phase of the object most consistent with the input hologram. Since DIP-based phase retrieval does not require a large dataset for training, it is suitable for single-shot phase retrieval.
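A stripped-down NumPy sketch of this physics-in-the-loop fitting of Eq. (7), with the network reduced to one linear layer W acting on a fixed noise code z and the forward model replaced by a toy orthogonal operator (both our simplifying assumptions for illustration), looks as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
z = rng.standard_normal(n)                            # fixed input noise vector
H_op, _ = np.linalg.qr(rng.standard_normal((n, n)))   # toy linear forward model H
u_true = rng.standard_normal(n)
I = H_op @ u_true                                     # "captured hologram"

W = np.zeros((n, n))                                  # network parameters theta
lr = 0.1
for _ in range(500):
    u = W @ z                                         # R_theta(z): object estimate
    r = H_op @ u - I                                  # hologram-domain error
    W -= lr * np.outer(H_op.T @ r, z) / (z @ z)       # gradient step on ||H(R_theta(z)) - I||^2
u_hat = W @ z                                         # final reconstruction
```

No training pairs are used anywhere: the only data the fit ever sees is the single measurement I, which is the essential point of the untrained approach.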
However, the network parameters are fit through the physical image formation model, which depends on the spatial frequency components (fx, fy), the propagation distance (z), and the wavelength (λ). Fitting the model output to a single measured hologram results in overfitting to the interference-related noise and in weight decay.

In this Letter, we adopt DIP powered with regularization by denoising (RED) by adding an explicit denoiser-based prior ρ(uo).34,35 Henceforth, we refer to the proposed method as DIP-RED. An explicit denoiser can further enrich the regularization effect and reduce noise in the reconstructed image,

θ* = argmin_θ ‖H(Rθ(z)) − I‖² + α ρRED(uo),  s.t.  uo = Rθ(z).  (8)

The RED prior added in Eq. (8) is untrained, and it can potentially remove small defocused components in the feature space, which substantially improves the reconstruction outcome. It is defined as36

ρRED(uo) = (1/2) uoᵀ[uo − f(uo)],  (9)

where f(·) is a denoiser function of choice. RED can incorporate any image-denoising algorithm that satisfies the conditions given in the supplementary material. However, differentiating the denoiser function within the optimization is a daunting task and not recommended for most denoisers. Fortunately, with the aid of the variable-splitting technique known as the alternating direction method of multipliers (ADMM),37 it is possible to separate the data-fidelity term and the regularization term. ADMM allows parallel execution of the denoiser and the network parameter updates. DIP-RED can be represented as

min_{θ, uo} ‖H(Rθ(z)) − I‖² + (α/2) uoᵀ[uo − f(uo)] + (β/2) ‖uo − Rθ(z) − t‖²,  (10)

where uo is the reconstruction obtained in the previous iteration and ûo is the reconstruction of the current iteration. The first term is the standard inverse problem solved by DIP. The second term is the RED function; under some mild conditions on f(·), the gradient of the RED function is uo − f(uo), which avoids unnecessary differentiation of the denoiser.35 The third term is a proximity regularization, which forces the network output Rθ(z) to be close to uo − t; t is the Lagrange multiplier vector, and α and β are regularization parameters. The minimization problem in Eq. (10) can be solved by ADMM37,38 by updating the three variables θ, t, and uo sequentially. First, θ is updated by keeping t and uo constant and solving

θ* = argmin_θ ‖H(Rθ(z)) − I‖² + (β/2) ‖uo − Rθ(z) − t‖².  (11)

Equation (11) introduces a proximity regularization (second term) into the DIP objective (first term), which enhances the stability and robustness of the model.
Next, uo is updated by keeping θ and t constant and solving

ûo = argmin_uo (α/2) uoᵀ[uo − f(uo)] + (β/2) ‖uo − Rθ(z) − t‖².  (12)

Equation (12) is the standard RED objective36 and can be solved in two ways: by a fixed-point strategy or by the steepest-descent method (detailed derivations can be found in Refs. 35 and 36). We apply the simple steepest-descent method, which takes the gradient of the objective and updates uo in steps,

ûo = uo − c[α(uo − f(uo)) + β(uo − Rθ(z) − t)],  (13)

where c is the step size, chosen to guarantee a descent. It is worth noting that the output of the network is not directly taken as the next object estimate; instead, it is used to compute a residual image whose correlation with the object estimate penalizes the cost function. Finally, the Lagrange multiplier vector t is updated by keeping θ and uo constant,

t^(k+1) = t^k − (uo − Rθ(z)).  (14)

The schematic diagram of the proposed DIP-RED method is shown in Fig. 1. The variable-splitting (ADMM) formulation makes it possible to treat the error term and the regularization separately, and it converges faster than conventional optimization methods owing to the iterative multiplier updates. Decoupling the error term from the regularization also provides the flexibility to use a wide variety of denoising models for image reconstruction by modifying only the regularization-related sub-steps.
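The two non-network updates above can be written out directly. The sketch below uses a circular 3-tap moving average as a stand-in denoiser f(·) (our choice for illustration; any RED-compatible denoiser, e.g., NLM or BM3D, could be substituted), with the multiplier update following the standard scaled-ADMM sign convention for the penalty term:

```python
import numpy as np

def box_denoiser(u):
    # Toy denoiser f(.): circular 3-tap moving average
    return (np.roll(u, 1) + u + np.roll(u, -1)) / 3.0

def uo_step(uo, net_out, t, f=box_denoiser, alpha=0.5, beta=0.5, c=0.1):
    """One steepest-descent update of Eq. (13):
    uo <- uo - c [ alpha (uo - f(uo)) + beta (uo - R_theta(z) - t) ]."""
    return uo - c * (alpha * (uo - f(uo)) + beta * (uo - net_out - t))

def t_step(t, uo, net_out):
    """Lagrange-multiplier update of Eq. (14): t <- t - (uo - R_theta(z))."""
    return t - (uo - net_out)
```

Because the denoiser is only evaluated, never differentiated, uo_step stays cheap regardless of how sophisticated f(·) is, which is the practical benefit of the RED gradient identity.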
This decoupling also allows the denoiser to be applied only at fixed intervals, which reduces the computational overhead and prevents the over-smoothing effect.

We have adopted the U-net "hourglass" architecture shown in Fig. 2. The network (Rθ) consists of four convolution blocks in the encoder and four de-convolution blocks in the decoder. Each convolution block is a sequence of two convolution layers followed by a max-pooling layer. We observed that applying batch normalization just after the convolution and before the activation layer, rather than after the activation layer, elevates performance. Each de-convolution block has a transpose-convolution layer followed by two convolution layers. Skip connections link the shallow layers with the deep layers to propagate features. A hologram of size 1000 × 1000 pixels is cropped from the full field-of-view (FOV) image, and the complex object obtained by a first backpropagation to the object plane is given as the input to the network.

The LDIHM set-up used for the experiments consists of a partially coherent LED light source of wavelength 627 nm, butt-coupled to an optical fiber (Model M15L01, Thorlabs).5 The source-to-sample distance is z1 ∼ 3–5 cm, and a CMOS camera with pixel size 1.67 μm and resolution 3840 × 2784 pixels is kept very close to the sample, usually within millimeters, resulting in a large FOV of ∼29 mm².

The proposed DIP-RED method aims to boost the performance of DIP-based single-shot hologram reconstruction by using an explicit denoiser. DIP-RED is first compared with conventional methods, namely the angular spectrum method (ASM)7 and compressive sensing (CS),18 as well as with DIP. Figure 3 shows the reconstruction of dysplastic tonsillar mucosa tissue, reproduced from Ref. 29 for comparison. The first row shows the amplitude reconstruction, the second row shows the phase retrieved by each method, and the third row shows an enlarged amplitude of the region of interest.
