Correction of ring artifacts with Swin-Conv-U-Net for x-ray computed tomography

The advantages of x-ray tomography include high penetrability, improved resolution, and rich sources of contrast.1–3 Its applications have been extended to various fields, including materials science, biology, chemistry, and medicine.4–7 However, owing to the limitations of current area-detector technology, different pixels generally show slightly inconsistent responses to x rays, usually leading to stripes in the sinograms.8–11 After computed tomography (CT) reconstruction, the sinogram stripes are transformed into ring and semi-ring artifacts in the tomographic slices, resulting in a loss of image detail, especially of the sample information around the rotation center.12–15 In addition, because the gray levels in the reconstructed images are influenced by these ring artifacts, quantitative analysis of the measured data is difficult. Post-processing steps, such as binarization or segmentation of image information, are significantly complicated by the presence of such artifacts.

Methods for correcting ring artifacts have been proposed to overcome these problems. These methods can be divided into two main categories. Methods in the first category modify the CT scanning mode.
A typical method slightly translates the detector by a random distance during CT acquisition.16–18 Although this approach can effectively suppress ring artifacts, it requires extra high-precision motion motors and increases the complexity and time cost of CT acquisition. Methods in the other category use image post-processing.19–21 The most commonly used traditional correction method is Fourier wavelet (FW) ring artifact removal,22 which combines the wavelet transform with a Fourier filter to remove ring artifacts. Although this method can correct ring artifacts in a reconstructed three-dimensional (3D) image to a certain extent, it cannot suppress the strong artifacts around the rotation center and also significantly smooths the fine structure of the sample. With the rapid development of artificial intelligence technology, the stripe noise removal neural network (SNRNN) method has also been proposed.23 Similar to the traditional FW method, it suppresses stripe artifacts in sinograms to avoid ring artifacts after reconstruction. However, instead of Fourier filtering, a neural network is used, which achieves better stripe removal owing to its powerful recognition and extraction capabilities. Nevertheless, the SNRNN method is not very effective in regions with strong artifacts, especially around the rotation center.

To correct ring artifacts while maintaining the quality and resolution of the 3D reconstruction, a ring artifact correction method based on Swin-Conv-U-Net (SCUN) is presented here. Simulation and experimental results show that this method offers high accuracy and high robustness. The main contributions of this paper can be summarized as follows:

1. The SCUN method is developed to remove ring artifacts from reconstructed tomographic images while effectively restoring image details. It gives high-quality results even for the strong ring artifacts around the rotation center.

2. A regularizer is incorporated into the loss function, which prevents original image details from being removed along with the ring artifacts and helps to repair the detail lost to the artifacts.

3. The proposed method performs artifact correction directly on the reconstructed slices and is therefore also suitable for medical CT data, where no sinogram can be obtained. Compared with the traditional slice-based correction method, it is also better at preserving and restoring image details.

The detailed network structure of SCUN24 is shown in Fig. 1. The design of the SCUN network is derived mainly from the Swin transformer and U-Net.25,26 Specifically, in the SCUN network, the convolution layers of U-Net are replaced by Swin-Conv blocks. Similar to U-Net, the SCUN network is composed of an encoder and a decoder. The encoder consists of four Swin-Conv blocks connected by 2 × 2 strided convolutions, which down-sample the input to reduce the computational cost. The encoded data are then passed to the decoder, which consists of three Swin-Conv blocks connected by 2 × 2 transposed convolutions,27 which up-sample the data and restore the original size. In addition, the decoder is connected to the encoder through skip connections, which provide sufficient context information during decoding. The detailed structure of the Swin-Conv block, which is the main difference between the SCUN network and U-Net, is shown in Fig. 1(b). First, a 1 × 1 convolution layer splits the input into two parts, which are passed through the W-MSA (window multi-head self-attention) block of the Swin transformer and a residual convolution block,28 respectively. Second, the outputs of the W-MSA block and the residual convolution block are concatenated and fused by a 1 × 1 convolution layer. Finally, the two steps are repeated, except that the W-MSA block is replaced by an SW-MSA (shifted-window multi-head self-attention) block. The integration of the U-Net structure and the Swin-Conv block provides the following advantages. The Swin-Conv block combines the local modeling capability of the residual network (the convolution operation is local, acting only on neighboring pixels) with the non-local modeling capability of the Swin transformer (the attention operation is non-local, relating both adjacent and distant elements within the selected window). This not only helps the network to distinguish artifacts from image information but also allows it to effectively repair damaged pixels based on the values of their neighboring pixels.29–31
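To make the block structure concrete, a minimal PyTorch sketch of one Swin-Conv block is given below. This is an illustrative reconstruction from the description above, not the authors' released code: the window attention is simplified to plain multi-head self-attention over non-overlapping windows, the cyclic shift of the SW-MSA branch is omitted (the second pass simply stands in for it), and the channel count, window size, and number of heads are assumed values.

```python
# Minimal sketch of one Swin-Conv block as described in the text (illustrative only).
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention applied independently inside non-overlapping p x p windows."""
    def __init__(self, channels, window=8, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W); H, W divisible by window
        b, c, h, w = x.shape
        p = self.window
        # partition the feature map into (B * num_windows, p*p, C) token sequences
        t = x.view(b, c, h // p, p, w // p, p).permute(0, 2, 4, 3, 5, 1).reshape(-1, p * p, c)
        t, _ = self.attn(self.norm(t), self.norm(t), self.norm(t))
        # merge the windows back to the image layout
        t = t.view(b, h // p, w // p, p, p, c).permute(0, 5, 1, 3, 2, 4)
        return t.reshape(b, c, h, w)

class ResConvBlock(nn.Module):
    """Residual convolution branch: two 3 x 3 convolutions with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class SwinConvBlock(nn.Module):
    """1x1 conv -> split -> (window attention | residual conv) -> concat -> 1x1 conv,
    repeated twice; the second pass stands in for the shifted-window (SW-MSA) step."""
    def __init__(self, channels=64, window=8):
        super().__init__()
        half = channels // 2
        self.split1, self.split2 = nn.Conv2d(channels, channels, 1), nn.Conv2d(channels, channels, 1)
        self.fuse1, self.fuse2 = nn.Conv2d(channels, channels, 1), nn.Conv2d(channels, channels, 1)
        self.attn1, self.attn2 = WindowAttention(half, window), WindowAttention(half, window)
        self.conv1, self.conv2 = ResConvBlock(half), ResConvBlock(half)

    def forward(self, x):
        a, b = self.split1(x).chunk(2, dim=1)
        x = self.fuse1(torch.cat([self.attn1(a), self.conv1(b)], dim=1)) + x
        a, b = self.split2(x).chunk(2, dim=1)
        return self.fuse2(torch.cat([self.attn2(a), self.conv2(b)], dim=1)) + x
```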
The 1 × 1 convolutions are used to fuse and separate information between the Swin transformer branch and the residual branch, which reduces the computational complexity and the number of parameters.

The specific workflow of the SCUN method is shown in Fig. 2. First, the acquired sinograms are reconstructed by the filtered back projection (FBP) method to obtain a 3D volume that contains ring artifacts.32,33 Second, the reconstructed slices are input directly into the SCUN network. Finally, the SCUN network generates high-quality reconstruction results without ring artifacts. The most distinctive feature that separates ring artifacts from true sample details is their circular structure, whose center lies strictly on the rotation axis. Detecting such a characteristic evidently requires both local and non-local modeling capability. In SCUN, the attention operation of the Swin transformer and the convolution operation of the convolution layers can effectively handle the non-local and local information of the slices, respectively.34,35 These properties meet the requirements of ring removal and enable SCUN to accurately identify and remove ring artifacts without eroding image details.
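As a rough illustration of this two-step workflow, the sketch below reconstructs the volume slice by slice with TomoPy's FBP implementation and then passes each slice through a trained network. The function and variable names (correct_volume, scun_model) and the normalization details are hypothetical; only the structure (FBP reconstruction followed by slice-wise correction) follows the description above.

```python
# Illustrative sketch of the Fig. 2 workflow, assuming TomoPy for reconstruction
# and an already-trained PyTorch model "scun_model" (placeholder name).
import numpy as np
import tomopy
import torch

def correct_volume(projections, scun_model, center=None, device="cuda"):
    """projections: (n_angles, n_slices, n_det) float32 array of normalized projection data."""
    theta = np.linspace(0.0, np.pi, projections.shape[0], endpoint=False)
    # Step 1: FBP reconstruction; the resulting volume still contains ring artifacts.
    volume = tomopy.recon(projections, theta, center=center, algorithm="fbp").astype(np.float32)
    # Step 2: pass each reconstructed slice through the trained network; the sinogram
    # itself is never filtered, so the approach also applies when no sinogram is available.
    scun_model = scun_model.eval().to(device)
    corrected = np.empty_like(volume)
    with torch.no_grad():
        for i, slc in enumerate(volume):
            x = torch.from_numpy(slc)[None, None].to(device)          # (1, 1, H, W)
            corrected[i] = scun_model(x).squeeze(0).squeeze(0).cpu().numpy()
    return corrected
```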

The loss function is also an important part of the neural network, especially during training; it not only evaluates the network output but also guides the parameter updates that optimize each layer of the network. The loss function in this study consists of two terms: a correction loss $L_M$ and a perceptual loss $L_P$.

The mean square error is adopted as the correction loss.36 As one of the most common loss functions, it accurately evaluates the error between the network output and the ground truth:

$$L_M = \frac{1}{N}\sum_{i=1}^{N}\left(P_i - P_i'\right)^2, \qquad (1)$$

where $N$ is the total number of pixels, $P_i$ is the $i$th pixel value of the ground truth, and $P_i'$ is the $i$th pixel value of the network output. The perceptual loss $L_P$ is also introduced to restore and preserve image details to the greatest extent,37

$$L_P = \frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathrm{VGG}(I) - \mathrm{VGG}(I') \right\rVert, \qquad (2)$$

where $\mathrm{VGG}$ denotes the conv4_3 features of the ImageNet-pretrained VGG16 model,38 $I$ is the ground truth, and $I'$ is the corrected slice output by the network. The total loss function of the network in this paper is therefore

$$L = L_M + \lambda L_P, \qquad (3)$$

where $\lambda$ is a regularization coefficient used to balance the two loss terms and prevent the network from under-fitting or over-fitting. The parameter $\lambda$ is determined before network training.

The SCUN method was implemented in Python 3.7, and the network was built on the PyTorch framework.39 To evaluate its performance, the method was tested on a workstation equipped with a 2.2 GHz Intel Xeon Silver 4114 CPU and an NVIDIA Quadro P6000 GPU.

The SCUN method was first evaluated using synthetic data. A large dataset is fundamental for network training, but insufficient experimental data were available to fulfill this demand. Therefore, a synthetic dataset based on DIV2K40 was constructed to train the network and to make a preliminary evaluation. The synthetic dataset was generated as follows. First, images randomly selected from the DIV2K dataset were designated as the ground truth slices R1. Second, image pre-processing and forward projection were applied to the slices R1 to obtain the ground truth sinograms P1. Third, random stripe artifacts were added to the sinograms P1 to obtain the artifact sinograms P2, which were subsequently reconstructed with FBP to obtain the artifact slices R2. Finally, the ground truth slices R1 and the corresponding artifact slices R2 were split into a training set and a test set, used in the training and validation phases, respectively.
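A minimal sketch of this synthesis pipeline is shown below, assuming TomoPy is used for both the forward projection and the FBP reconstruction. The stripe statistics (number of corrupted detector columns and offset amplitude) and the pre-processing are illustrative assumptions, not the paper's exact recipe.

```python
# Hypothetical sketch of the synthetic-data generation described above:
# ground-truth slice R1 -> forward projection P1 -> random stripes P2 -> FBP -> artifact slice R2.
import numpy as np
import tomopy

def make_training_pair(image, n_angles=360, n_stripes=30, max_offset=0.05, seed=None):
    """image: 2D float32 array in [0, 1] derived from a DIV2K image (grayscale, square crop)."""
    rng = np.random.default_rng(seed)
    r1 = image.astype(np.float32)[None]                     # ground-truth slice R1, shape (1, H, W)
    theta = np.linspace(0.0, np.pi, n_angles, endpoint=False)

    # Ground-truth sinogram P1 obtained by forward projection of R1.
    p1 = tomopy.project(r1, theta, pad=False)                # (n_angles, 1, n_det)

    # Artifact sinogram P2: add random column-wise offsets (stripes) that mimic the
    # inconsistent pixel response of the detector.
    p2 = p1.copy()
    cols = rng.choice(p2.shape[-1], size=n_stripes, replace=False)
    p2[..., cols] += rng.uniform(-max_offset, max_offset, size=n_stripes) * p1.max()

    # Artifact slice R2: FBP reconstruction of the striped sinogram (the rings appear here).
    r2 = tomopy.recon(p2, theta, algorithm="fbp")[0]

    # R1 is the training target and R2 the network input; in practice both would be
    # rescaled to a common intensity range before being fed to the network.
    return r1[0], r2
```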
The use of a synthetic dataset instead of experimental data is a more accessible and feasible approach, given the large amount of data required for network training. In addition, a synthetic dataset is more appropriate for quantitative evaluation because the ground truth is available. Moreover, the DIV2K dataset provides not only a large quantity of images but also a great variety, thereby improving the generalization performance of the network.

The simulation data were first used to evaluate the method qualitatively. The ring artifact correction results obtained with the different methods are shown in Fig. 3. Figures 3(a)–3(d) show the unprocessed slice with ring artifacts and the ring removal results obtained with the FW, SNRNN, and SCUN methods, respectively. The residual images in Figs. 3(f)–3(i) represent the differences between the ground truth [Fig. 3(e)] and Figs. 3(a)–3(d). As shown in Fig. 3(a), without correction, the image details are corrupted by ring artifacts, which impede inspection and further quantitative statistical analysis, especially in the central area around the rotation axis [Fig. 3(f)]. As shown in Fig. 3(b), although the FW method removes ring artifacts to a certain extent, the precision of the correction decreases as the intensity of the artifacts increases toward the rotation center [Fig. 3(g)]. Although the SNRNN method [Fig. 3(c)] performs better than the FW method, considerable ring artifacts remain in its residual image [Fig. 3(h)]. In the SCUN result [Fig. 3(d)], the artifacts are almost invisible and the image is nearly consistent with the ground truth; the residuals in Fig. 3(i) are negligible, and the rotation center area shows the best recovery.

After the qualitative analysis, a quantitative analysis was also carried out. The accuracy of correction for ring artifacts of different strengths by the different methods is shown in Fig. 4. As shown in Fig. 4(e), the accuracy of the FW method was relatively low for all noise levels, and when the artifacts were weak, it was even inferior to the raw data because a strong low-frequency bias was introduced. The SNRNN method could not fully remove some thick ring artifacts, especially in the central area, so its accuracy was only superior to that of the FW method. By contrast, the SCUN method maintained the highest accuracy regardless of the level of artifacts. This comparison demonstrates the accuracy, strong robustness, and universality of the SCUN method.

To further verify whether the network trained with the synthetic dataset generalizes to experimental data, the performance of the SCUN method was also tested with micro-resolution x-ray CT data with an effective pixel size of 2.5 μm acquired at the 4W1A beamline of the Beijing Synchrotron Radiation Facility. In the CT experiment, 360 projection images of oil shale were recorded over an angular range of 0°–179.5°, and the sample was illuminated by 8 keV x rays.

Figure 5(a) shows the unprocessed raw slice, and Figs. 5(b)–5(d) show the slices corrected by the FW, SNRNN, and SCUN methods, respectively. Figures 5(e)–5(h) show enlargements of the outlined areas in Figs. 5(a)–5(d). Similar to the results on the simulated data, a large number of artifacts remain in Fig. 5(b), especially in the central area [Fig. 5(f)], seriously damaging the image details. Although most artifacts in Fig. 5(c) are suppressed, strong artifacts can still be found, especially in the central area [Fig. 5(g)]. Almost no artifacts can be found in the correction result of the SCUN method, which also achieves accurate correction and recovery of the central area, where the artifacts are most serious. The intensity profile in Fig. 5(i) quantitatively confirms that the SCUN method also performs well on experimental data and further demonstrates that the proposed method not only provides the strongest artifact suppression but also reasonably restores the original image details.

This study introduced a ring artifact correction method for x-ray CT imaging based on Swin-Conv-U-Net. Traditional methods usually process sinograms to prevent artifacts in the subsequent 3D reconstruction, whereas the proposed SCUN method directly removes ring artifacts from the reconstructed slices. SCUN replaces the convolution layers of U-Net with Swin-Conv blocks, which exploit the local modeling advantages of convolution and the non-local modeling advantages of the Swin transformer. These advantages enable the proposed method to remove artifacts while reasonably repairing the image; in particular, the rotation center region, which suffers the most serious artifacts, can be accurately repaired and corrected. A tomogram corrected by the SCUN method has greatly improved image quality and is more convenient for subsequent quantitative analysis. Validation with simulated and experimental data shows that the SCUN method offers superior correction accuracy and stability compared with traditional methods regardless of the degree of artifacts, implying strong robustness.

We acknowledge the 4W1A beamline of the Beijing Synchrotron Radiation Facility and the BL18B beamline of the Shanghai Synchrotron Radiation Facility for the experimental data and facilities provided.

This work was partly supported by the National Key Research and Development Program of China (Nos. 2022YFA1603600 and 2021YFA1600800) and the National Natural Science Foundation of China (No. U2032107).

Conflict of Interest

The authors have no conflicts to disclose.

Author Contributions

Tianyu Fu and Sen Qiu contributed equally to this work. Y. Tao and Q. Yuan conceived the study. Y. Wang, K. Zhang, J. Zhang, S. Wang, W. Huang, and C. Zhou contributed to the interpretation of the data. T. Fu, S. Qiu, and Y. Wang wrote the manuscript with valuable input from all coauthors.

Tianyu Fu: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Software (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Ye Tao: Data curation (equal); Funding acquisition (equal); Investigation (equal). Qingxi Yuan: Conceptualization (equal); Funding acquisition (equal); Project administration (equal); Supervision (equal); Writing – review & editing (equal). Sen Qiu: Investigation (equal); Project administration (equal); Resources (equal); Validation (equal); Visualization (equal); Writing – original draft (equal). Yan Wang: Conceptualization (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Kai Zhang: Formal analysis (equal); Project administration (equal); Writing – review & editing (equal). Jin Zhang: Data curation (equal). Shanfeng Wang: Investigation (equal). Wanxia Huang: Investigation (equal). Chenpeng Zhou: Validation (equal). Xinyu Zhao: Investigation (equal); Validation (equal).

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

REFERENCES

1. A. Sakdinawat and D. Attwood, Nat. Photonics 4, 840–848 (2010). https://doi.org/10.1038/nphoton.2010.267
2. P. J. Withers, C. Bouman, S. Carmignato, V. Cnudde, D. Grimaldi, C. K. Hagen, E. Maire, M. Manley, A. Du Plessis, and S. R. Stock, Nat. Rev. Methods Primers 1, 18 (2021). https://doi.org/10.1038/s43586-021-00015-4
3. G. B. Zan, G. N. Qian, S. Gul, H. Y. Pan, Q. Li, J. Z. Li, D. J. Vine, S. Lewis, W. B. Yun, P. Pianetta, H. Li, X. Q. Yu, and Y. J. Liu, ACS Mater. Lett. 3, 1786–1792 (2021). https://doi.org/10.1021/acsmaterialslett.1c00600
4. T. Y. Fu, F. Monaco, J. Z. Li, K. Zhang, Q. X. Yuan, P. Cloetens, P. Pianetta, and Y. J. Liu, Adv. Funct. Mater. 32, 9 (2022). https://doi.org/10.1002/adfm.202203070
5. H. R. Lee, L. Liao, W. Xiao, A. Vailionis, A. J. Ricco, R. White, Y. Nishi, W. Chiu, S. Chu, and Y. Cui, Nano Lett. 21, 651–657 (2021). https://doi.org/10.1021/acs.nanolett.0c04230
6. Z. S. Jiang, J. Z. Li, Y. Yang, L. Q. Mu, C. X. Wei, X. Q. Yu, P. Pianetta, K. J. Zhao, P. Cloetens, F. Lin, and Y. J. Liu, Nat. Commun. 11, 9 (2020). https://doi.org/10.1038/s41467-020-16233-5
7. C. Y. Zhang, S. K. Yao, C. Xu, Y. N. Chang, Y. B. Zong, K. Zhang, X. Z. Zhang, L. J. Zhang, C. Y. Chen, Y. L. Zhao, H. D. Jiang, X. Y. Gao, and Y. L. Wang, Anal. Chem. 93, 1237–1241 (2021). https://doi.org/10.1021/acs.analchem.0c04662
8. M. Boin and A. Haibel, Opt. Express 14, 12071–12075 (2006). https://doi.org/10.1364/OE.14.012071
9. L. C. P. Croton, G. Ruben, K. S. Morgan, D. M. Paganin, and M. J. Kitchen, Opt. Express 27, 14231–14245 (2019). https://doi.org/10.1364/OE.27.014231
10. D. Jha, H. O. Sørensen, S. Dobberschütz, R. Feidenhans'l, and S. L. S. Stipp, Appl. Phys. Lett. 105, 4 (2014). https://doi.org/10.1063/1.4897441
11. P. Paleo and A. Mirone, J. Synchrotron Radiat. 22, 1268–1278 (2015). https://doi.org/10.1107/S1600577515010176
12. J. Sijbers and A. Postnov, Phys. Med. Biol. 49, N247–N253 (2004). https://doi.org/10.1088/0031-9155/49/14/N06
13. D. Prell, Y. Kyriakou, and W. A. Kalender, Phys. Med. Biol. 54, 3881–3895 (2009). https://doi.org/10.1088/0031-9155/54/12/018
14. L. X. Yan, T. Wu, S. Zhong, and Q. D. Zhang, Phys. Med. Biol. 61, 1278–1292 (2016). https://doi.org/10.1088/0031-9155/61/3/1278
15. L. Massimi, F. Brun, M. Fratini, I. Bukreeva, and A. Cedola, Phys. Med. Biol. 63, 8 (2018). https://doi.org/10.1088/1361-6560/aaa706
16. Y. N. Zhu, M. L. Zhao, H. W. Li, and P. Zhang, Med. Phys. 40, 14 (2013). https://doi.org/10.1118/1.4790697
17. G. R. Davis and J. C. Elliott, "X-ray microtomography scanner using time-delay integration for elimination of ring artefacts in the reconstructed image," Nucl. Instrum. Methods Phys. Res., Sect. A 394, 157–162 (1997). https://doi.org/10.1016/S0168-9002(97)00566-4
18. D. M. Pelt and D. Y. Parkinson, Meas. Sci. Technol. 29, 9 (2018). https://doi.org/10.1088/1361-6501/aa9dd9
19. F. Sadi, S. Y. Lee, and M. K. Hasan, Comput. Biol. Med. 40, 109–118 (2010). https://doi.org/10.1016/j.compbiomed.2009.11.007
20. N. T. Vo, R. C. Atwood, and M. Drakopoulos, Opt. Express 26, 28396–28412 (2018). https://doi.org/10.1364/OE.26.028396
21. X. K. Liang, Z. C. Zhang, T. Y. Niu, S. D. Yu, S. B. Wu, Z. C. Li, H. L. Zhang, and Y. Q. Xie, Phys. Med. Biol. 62(13), 5276 (2017). https://doi.org/10.1088/1361-6560/aa7017
22. B. Münch, P. Trtik, F. Marone, and M. Stampanoni, Opt. Express 17, 8567–8591 (2009). https://doi.org/10.1364/OE.17.008567
23. J. T. Guan, R. Lai, and A. Xiong, IEEE Access 7, 44544–44554 (2019). https://doi.org/10.1109/ACCESS.2019.2908720
24. K. Zhang, Y. Li, J. Liang, J. Cao, Y. Zhang, H. Tang, R. Timofte, and L. V. Gool, "Practical blind denoising via Swin-Conv-UNet and data synthesis," arXiv:2203.13278 (2022).
25. T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Bohm, J. Deubner, Z. Jäckel, K. Seiwald, A. Dovzhenko, O. Tietz, C. Dal Bosco, S. Walsh, D. Saltukoglu, T. L. Tay, M. Prinz, K. Palme, M. Simons, I. Diester, T. Brox, and O. Ronneberger, Nat. Methods 16, 67 (2019). https://doi.org/10.1038/s41592-018-0261-2
26. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Lecture Notes in Computer Science (Springer International Publishing, 2015), pp. 234–241.
27. Y. P. Zhou, H. Y. Chang, X. L. Lu, and Y. H. Lu, Knowl.-Based Syst. 254, 12 (2022). https://doi.org/10.1016/j.knosys.2022.109658
28. K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 770–778.
29. J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv:1810.04805 (2018).
30. K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked autoencoders are scalable vision learners," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2022), pp. 15979–15988.
31. A. Krull, T. O. Buchholz, and F. Jug, "Noise2Void—Learning denoising from single noisy images," in 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2019), pp. 2124–2132.
32. D. Gürsoy, F. De Carlo, X. H. Xiao, and C. Jacobsen, "TomoPy: A framework for the analysis of synchrotron tomographic data," J. Synchrotron Radiat. 21(5), 1188–1193 (2014). https://doi.org/10.1107/S1600577514013939
33. D. M. Pelt, D. Gürsoy, W. J. Palenstijn, J. Sijbers, F. De Carlo, and K. J. Batenburg, J. Synchrotron Radiat. 23, 842–849 (2016). https://doi.org/10.1107/S1600577516005658
34. Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," arXiv:2103.14030 (2021).
35. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in 31st Annual Conference on Neural Information Processing Systems (NIPS), 2017.
36. C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. H. Wang, and W. Z. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," in 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 105–114.
37. J. Johnson, A. Alahi, and F. F. Li, "Perceptual losses for real-time style transfer and super-resolution," in 14th European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science (Springer International Publishing, 2016), pp. 694–711.
38. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556 (2014).
39. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. M. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
40. E. Agustsson and R. Timofte, "NTIRE 2017 challenge on single image super-resolution: Dataset and study," in 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (IEEE, 2017), pp. 1122–1131.
