Undersampling artifact reduction for free-breathing 3D stack-of-radial MRI based on a deep adversarial learning network

Free-breathing abdominal MRI techniques can achieve volumetric coverage, high spatial resolution and high signal-to-noise ratio (SNR) for subjects with breath-hold difficulties, such as pediatric or elderly patients and patients with disabilities, neurological disorders or an inability to comply with operator instructions [1-3]. A promising approach for enabling free-breathing abdominal scans is 3D stack-of-radial MRI [4-8]. However, to reduce respiratory motion artifacts, stack-of-radial abdominal MR images require self-gating and a sufficient number of radial spokes to achieve good image quality, resulting in a relatively long acquisition time compared to breath-hold techniques. The typical acquisition time of the clinically used free-breathing stack-of-radial technique is approximately 7 times that of the clinically used breath-hold Cartesian technique [9,10].

A practical acceleration approach is to undersample the radial k-space and reconstruct images from the acquired radial data using a priori information about the data, e.g., incoherence in the sampling pattern and redundant information in the temporal or channel dimension. Reconstruction methods such as parallel imaging [11,12] and compressed sensing [6,7,13] have been extensively studied. However, parallel imaging and compressed sensing cannot completely remove streaking artifacts at high acceleration rates [14,15].
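To make the undersampling setup concrete, the sketch below generates a 2D golden-angle radial trajectory and retrospectively keeps a subset of spokes. This is a minimal illustration of the general sampling scheme, not the paper's acquisition code; the spoke counts and the every-Nth-spoke selection are illustrative assumptions.

```python
import numpy as np

def golden_angle_spokes(n_spokes, n_readout):
    """Generate 2D golden-angle radial k-space coordinates.

    Each spoke passes through the k-space center; successive spokes
    are rotated by the golden angle (~111.25 degrees), which keeps
    the angular coverage roughly uniform for any number of spokes.
    """
    golden = np.pi / ((1 + np.sqrt(5)) / 2)        # ~1.9416 rad
    angles = np.arange(n_spokes) * golden
    radii = np.linspace(-0.5, 0.5, n_readout)      # normalized k-space radius
    kx = radii[None, :] * np.cos(angles[:, None])  # shape (n_spokes, n_readout)
    ky = radii[None, :] * np.sin(angles[:, None])
    return kx, ky

def undersample(kx, ky, acceleration):
    """Retrospectively keep every `acceleration`-th spoke (illustrative)."""
    return kx[::acceleration], ky[::acceleration]

kx, ky = golden_angle_spokes(n_spokes=400, n_readout=256)
kx_us, ky_us = undersample(kx, ky, acceleration=4)
print(kx.shape, kx_us.shape)  # (400, 256) (100, 256)
```

Reconstructing an image from only the retained spokes is what produces the streaking artifacts that the methods discussed here aim to remove.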

Deep neural networks such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have recently been used to reduce image artifacts and noise [16-21]. In particular, convolutional U-Nets have gained much attention for non-Cartesian image artifact reduction [17-19,22-24] and image reconstruction [25-28]. Hauptmann et al. [17] demonstrated the feasibility of using a residual U-Net to suppress streaking artifacts in undersampled real-time cardiovascular MRI. El-Rewaidy et al. [26] achieved fast and accurate reconstruction of dynamic cardiac MRI by using k-space- and image-domain CNNs together with spatial-temporal information from neighboring time frames. However, these studies all used pixel-wise loss functions (i.e., L1/L2 norm) to train the networks, resulting in image blurring and loss of image details [21,29,30]. In addition, these networks were only tested on data acquired at a single institution, while imaging parameters typically vary across institutions. Their performance on data acquired at different institutions and with different acceleration factors remains to be investigated.

GANs have been shown to improve perceptual sharpness and image quality through an adversarial training process [21,30-34]. Yang et al. [21] demonstrated that a conditional GAN preserved image details in MRI de-aliasing tasks better than CNNs trained solely on pixel-wise losses. Mardani et al. [34] demonstrated that combining adversarial and pixel-wise losses produced high-resolution and visually appealing images. Although denoising and de-aliasing for accelerated Cartesian images have been studied extensively [21,34-39], the performance of the adversarial loss on de-streaking tasks for non-Cartesian imaging, specifically radial k-space sampling trajectories, has not been investigated in depth. Streaking artifacts are high-frequency, incoherent artifacts that are inherently different from noise and from the aliasing artifacts of Cartesian sampling. The challenge is that removing the streaking artifacts may also remove high-frequency image content. Thus, the feasibility of using the adversarial loss to remove streaking artifacts while preserving image details is of interest. Liu et al. [40] proposed training a 2D adversarial network with varying undersampling patterns and showed increased robustness of the network in removing undersampling artifacts. However, the performance of a 3D network that exploits inter-slice information has not been investigated and systematically evaluated.

This study aimed to investigate the feasibility and performance of a GAN for reducing streaking artifacts in free-breathing undersampled stack-of-radial abdominal images. Specifically, a 3D GAN with adversarial and image-content losses, i.e., the structural similarity index (SSIM) and the L2 norm, was developed and trained. A novel data-augmentation strategy based on respiratory gating into multiple respiratory states was proposed and implemented. The 3D GAN was compared with a traditional artifact-reduction pixel-wise U-Net approach for streaking artifact reduction in undersampled radial k-space data with various acceleration factors. Lastly, the 3D GAN was assessed with preliminary testing data from another institution.
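The generator objective described above (adversarial term plus SSIM and L2 image-content terms) can be sketched as follows. This is an illustrative NumPy mock-up, not the paper's implementation: the weights `w_adv`, `w_ssim`, `w_l2`, the non-saturating adversarial term, and the simplified single-window SSIM (no sliding Gaussian window) are all assumptions made for clarity.

```python
import numpy as np

def l2_loss(pred, target):
    """Mean squared error between generated and reference images."""
    return float(np.mean((pred - target) ** 2))

def global_ssim(pred, target, data_range=1.0):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = np.mean((pred - mu_x) * (target - mu_y))
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

def generator_loss(pred, target, d_score_fake,
                   w_adv=0.01, w_ssim=0.5, w_l2=0.5):
    """Weighted sum of adversarial, SSIM, and L2 terms (weights illustrative).

    d_score_fake: discriminator probability for the generated image, in (0, 1).
    """
    adv = -np.log(d_score_fake + 1e-12)  # non-saturating adversarial term
    return float(w_adv * adv
                 + w_ssim * (1.0 - global_ssim(pred, target))
                 + w_l2 * l2_loss(pred, target))
```

For a perfect generator output (`pred == target`), the SSIM and L2 terms vanish and only the adversarial term remains, which is what lets the discriminator continue to push the generator toward sharper, more realistic textures.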
