Assessing the deep learning-based image quality enhancements for the BGO-based GE Omni Legend PET/CT

The PET-CT scanner

The PET-CT scanner used for this evaluation is the Omni Legend from GE Healthcare installed in June 2023 in the University Hospital of Ghent, UZ Ghent (Fig. 1).

Fig. 1 GE Omni Legend 32, operational at the University Hospital of Ghent

The PET component consists of 6 detector rings, each containing 22 detector units, resulting in an axial field of view (AFOV) of 32 cm. Within each detector unit there are 4 blocks, each containing 6 × 12 BGO crystals coupled to 3 × 6 SiPMs (6 × 6 mm² each). The use of BGO results in a sensitivity of 47.03 cps/kBq at the center of the field of view (FOV) [16]. BGO scintillation crystals have relatively slow scintillation decay times compared to other scintillation materials such as lutetium oxyorthosilicate (LSO) [24]. This slower decay spreads the arrival of scintillation photons at the SiPMs over a longer time frame, slowing the build-up of the SiPM signal and thereby degrading the timing resolution of the detector. This complicates the precise temporal localization of gamma-ray emissions along the line of response [25]. Therefore, the system is not able to perform time-of-flight (TOF) measurements.

The deep learning algorithm implemented on the GE Omni Legend was trained on hundreds of TOF datasets from different sites. It uses a convolutional neural network, more specifically a residual U-Net architecture, to predict the TOF BSREM (block sequential regularized expectation maximization) image from the non-TOF BSREM reconstruction [22]. Three separate models were trained with differing levels of contrast-enhancement-to-noise trade-off: low precision (LP) for more noise reduction, medium precision (MP) as a middle ground, and high precision (HP) for better contrast enhancement. These three models were obtained by training on BSREM reconstructions with different β parameters, where a higher β value corresponds to a higher degree of regularization (and therefore more noise reduction but lower contrast), and vice versa. The LP model was therefore trained on higher β values, and the HP model on lower β values. When the model is used for inference on the GE Omni Legend, it is applied as a post-processing step after the conventional non-TOF BSREM reconstruction. Any of the three models can be used with any β value, but it is logical to use a β value within the range for which the model was trained.
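The residual structure described above can be illustrated with a minimal sketch. This is not GE's PDL network: a simple local-mean filter stands in for the trained residual branch, and only the overall pattern — the output is the non-TOF BSREM input plus a learned correction, via a skip connection — corresponds to the text.

```python
import numpy as np

def residual_postprocess(bsrem_image, kernel=3):
    """Toy residual post-processing: output = input + f(input).

    f() here is a local-mean filter standing in for the trained
    residual branch of the U-Net; the skip connection means the
    network only has to learn the TOF-minus-non-TOF correction.
    """
    pad = kernel // 2
    padded = np.pad(bsrem_image.astype(float), pad, mode="edge")
    smoothed = np.zeros(bsrem_image.shape, dtype=float)
    for dr in range(kernel):          # sum the kernel x kernel neighbourhood
        for dc in range(kernel):
            smoothed += padded[dr:dr + bsrem_image.shape[0],
                               dc:dc + bsrem_image.shape[1]]
    smoothed /= kernel ** 2
    residual = smoothed - bsrem_image  # stand-in for the learned branch
    return bsrem_image + residual      # residual (skip) connection
```

On a uniform image the stand-in branch predicts a zero correction, so the input passes through unchanged — the behaviour the skip connection is designed to make easy to learn.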

Phantom study

For this study we used the NEMA IQ phantom, featuring six spheres with diameters of 10, 13, 17, 22, 28 and 37 mm, along with a lung insert. The phantom was filled with ¹⁸F-fluorodeoxyglucose (FDG) according to the NEMA NU 2-2018 Image Quality test procedure, using a total activity of 20.38 MBq [26]. The activity concentration ratio between the spheres and the background was 4:1. The phantom was scanned using two bed positions with 25% overlap, with the spheres positioned in the overlap region. This was done three consecutive times, each with a duration of 90 s/bed position, to increase statistical confidence. Data acquisition was done in list mode to allow reconstruction of shorter acquisition times (60, 30 and 10 s/bed position). Reconstruction of the images was done with the software of the GE Omni Legend, using an iterative reconstruction (VUE Point HD) with Bayesian penalized likelihood (Q.Clear) and Precision Deep Learning (PDL), using a matrix size of 384 × 384 (\(1.82 \times 1.82\,\hbox{mm}^2\) pixels) and a slice thickness of 2.07 mm. The choice of the Q.Clear beta value depended on the specific deep learning method applied, with values of 350, 650 and 850 for High Precision Deep Learning (HPDL), Medium Precision Deep Learning (MPDL) and Low Precision Deep Learning (LPDL), respectively. For comparative analysis, No Deep Learning (NDL) images were also reconstructed for Q.Clear beta values of 350, 650 and 850. These values were within the midrange suggested by GE for each of the methods.

On each of the reconstructions, regions of interest (ROIs) were drawn using AMIDE [27]. On the central slice, six ROIs with diameters of 10, 13, 17, 22, 28 and 37 mm were drawn on the spheres. Background ROIs were drawn on the central slice and at ± 1 cm and ± 2 cm from the central slice, according to NEMA NU 2-2018 specifications. Each sphere size had 12 background ROIs per slice, resulting in a total of 60 background ROIs per sphere size.
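The ROI measurements underlying the analysis can be sketched as follows. This is a hypothetical helper, not the AMIDE tool itself: it computes the mean pixel value inside a circular ROI on a 2D slice, converting the ROI diameter from mm using the 1.82 mm pixel size of the 384 × 384 matrix.

```python
import numpy as np

def circular_roi_mean(image, center, diameter_mm, pixel_mm=1.82):
    """Mean pixel value inside a circular ROI on a 2D slice.

    center is (row, col) in pixels; diameter_mm is the ROI diameter
    (10-37 mm for the NEMA spheres), converted to pixels with the
    1.82 mm pixel size of the reconstruction matrix.
    """
    rows, cols = np.indices(image.shape)
    radius_px = diameter_mm / pixel_mm / 2.0
    # Boolean mask of all pixels within the ROI radius of the centre
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius_px ** 2
    return image[mask].mean()

# On a uniform synthetic slice every ROI recovers the background value.
slice_ = np.full((384, 384), 5.0)
print(circular_roi_mean(slice_, (192, 192), 37.0))  # 5.0
```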
Image quality was determined using the contrast recovery coefficient (CRC), background variability (BV) and contrast-to-noise ratio (CNR). These values were then averaged over the three acquisitions.

The CRC for sphere \(j\) was determined as:

$$\begin{aligned} CRC_j = \frac{\frac{C_{H,j}}{C_{B,j}}-1}{\frac{a_H}{a_B}-1} \end{aligned}$$

where \(C_{H,j}\) represents the average counts in the ROI of sphere \(j\), \(C_{B,j}\) is the average counts in the background ROIs of the same size as sphere \(j\), and \(\frac{a_H}{a_B}\) represents the activity concentration ratio between the hot spheres and the background.
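In code, the CRC definition reduces to a one-line function; the 4:1 sphere-to-background ratio of this study is used as the default (function and argument names are illustrative):

```python
def contrast_recovery_coefficient(c_hot, c_bkg, ratio=4.0):
    """CRC_j = (C_H,j / C_B,j - 1) / (a_H / a_B - 1).

    c_hot: average counts in the sphere ROI; c_bkg: average counts in
    the same-sized background ROIs; ratio: sphere-to-background
    activity concentration ratio (4:1 in this study).
    """
    return (c_hot / c_bkg - 1.0) / (ratio - 1.0)

# Perfect recovery: a sphere measured at the full 4:1 ratio gives CRC = 1.
print(contrast_recovery_coefficient(4.0, 1.0))  # 1.0
```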

Percent background variability was calculated as:

$$\begin{aligned} BV_j = \frac{SD_j}{C_{B,j}} \times 100\% \end{aligned}$$

where \(SD_j\) is the standard deviation of the counts in the 60 background ROIs of the same size as sphere \(j\).
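The corresponding computation (illustrative naming, same symbols as the formula above):

```python
def background_variability(sd_bkg, c_bkg):
    """BV_j = SD_j / C_B,j * 100%.

    sd_bkg: standard deviation over the 60 background ROIs for this
    sphere size; c_bkg: their average counts.
    """
    return sd_bkg / c_bkg * 100.0

# A background standard deviation half the mean gives 50% variability.
print(background_variability(0.5, 1.0))  # 50.0
```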

The CNR was calculated as:

$$\begin{aligned} CNR = \frac{\left| \mu_i - \mu_j \right|}{\sqrt{\frac{\sigma_i^2 + \sigma_j^2}{2}}} \end{aligned}$$

where \(\mu_i\) and \(\mu_j\) are the mean pixel values of two distinct ROIs in an image, \(\sigma_i\) and \(\sigma_j\) are their standard deviations, and the denominator is the square root of the average of their variances. This metric effectively measures the distinguishability of features in the presence of noise, with higher CNR values indicating superior image quality and contrast resolution [28]. Together, these metrics characterize the contrast recovery, background variability and noise properties of the image. By calculating them, we aimed to evaluate the impact of the different deep learning precision settings offered by the tomograph under investigation on its image quality.
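The CNR definition, written out directly (function name is illustrative):

```python
import math

def contrast_to_noise_ratio(mu_i, mu_j, sigma_i, sigma_j):
    """CNR = |mu_i - mu_j| / sqrt((sigma_i^2 + sigma_j^2) / 2).

    mu_*: mean pixel values of the two ROIs; sigma_*: their standard
    deviations. The denominator pools the two noise levels.
    """
    return abs(mu_i - mu_j) / math.sqrt((sigma_i ** 2 + sigma_j ** 2) / 2.0)

# Two ROIs whose means differ by 3 units, each with unit noise:
print(contrast_to_noise_ratio(4.0, 1.0, 1.0, 1.0))  # 3.0
```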

Patient study

To compare the performance of each reconstruction condition on the GE Omni Legend PET/CT, two patients were selected: a twenty-year-old male with a body mass index (BMI) of \(20\,\hbox{kg/m}^2\) and a twenty-nine-year-old female with a BMI of \(35\,\hbox{kg/m}^2\). Both patients were diagnosed with a lung nodule. The acquired data from each patient were reconstructed using the various deep learning precision levels. The reconstructed images were visualized using the AMIDE software, which enabled the plotting of line intensity profiles over the nodule in the transverse (T), coronal (C) and sagittal (S) planes. This approach allowed a detailed evaluation of the imaging performance across the different reconstruction techniques and their impact on nodule visualization.
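A line-intensity profile of the kind plotted in AMIDE can be extracted from a reconstructed volume with a few lines of numpy. This is an illustrative stand-in, not the AMIDE tool: the mapping of 'T', 'C' and 'S' to array axes assumes a (z, y, x)-ordered volume and depends on the volume orientation.

```python
import numpy as np

def line_profile(volume, voxel, plane):
    """Intensity profile through a voxel of a 3D (z, y, x) volume.

    voxel is the (z, y, x) index of the nodule centre; plane selects
    the direction of the profile ('T' transverse, 'C' coronal,
    'S' sagittal), under an assumed axis convention.
    """
    z, y, x = voxel
    if plane == "T":
        return volume[z, y, :]   # left-right line in the transverse slice
    if plane == "C":
        return volume[z, :, x]   # anterior-posterior line
    if plane == "S":
        return volume[:, y, x]   # head-foot line
    raise ValueError("plane must be 'T', 'C' or 'S'")
```

Plotting such profiles for each precision level through the same nodule voxel makes the peak-height and width differences between reconstructions directly comparable.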
