Three‐dimensional self super‐resolution for pelvic floor MRI using a convolutional neural network with multi‐orientation data training

Purpose

High-resolution pelvic magnetic resonance (MR) imaging is important for the precise evaluation of pelvic floor disorders (PFDs), but its acquisition time is long. Because high-resolution three-dimensional (3D) MR data of the pelvic floor are difficult to obtain, MR images are usually acquired in three orthogonal planes: axial, sagittal, and coronal. The in-plane resolution of the MR data in each plane is high, but the through-plane resolution is low. We therefore aimed to achieve 3D super-resolution with a convolutional neural network (CNN) approach that captures the intrinsic similarity of low-resolution 3D MR data acquired in the three orientations.

Methods

We used a two-dimensional (2D) super-resolution CNN model to solve the 3D super-resolution problem, with the residual-in-residual dense block network (RRDBNet) as the CNN backbone. For a given pelvic floor MR volume with low through-plane resolution, scanned in the axial, coronal, or sagittal plane, we applied RRDBNet sequentially to perform super-resolution on its two projected low-resolution views, as sketched below. Three datasets were used in the experiments: two private datasets and one public dataset. In the first dataset (dataset 1), MR data acquired from 34 subjects in the three planes were used to train the super-resolution model, and low-resolution MR data from 9 subjects were used for testing. The second dataset (dataset 2) comprised a sequence of relatively high-resolution MR data acquired in the coronal plane. The public MR dataset (dataset 3) was used to demonstrate the generalization ability of our model. To show the effectiveness of RRDBNet, we used datasets 1 and 2 to compare it with interpolation and the enhanced deep super-resolution (EDSR) method in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. Because a 3D MR volume scanned in one plane has two projected low-resolution views, different super-resolution orders were also compared in terms of PSNR and SSIM. Finally, to demonstrate the impact of super-resolution on a downstream image analysis task, we used datasets 2 and 3 to compare our method with interpolation on 3D geometric model reconstruction of the urinary bladder.
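A minimal sketch of this two-view procedure, not the authors' released implementation, is shown below. It assumes an axially scanned volume stored as a (Z, Y, X) array with a coarse Z axis and a generic 2D model `sr_2d` standing in for RRDBNet, which here restores detail in a slice whose through-plane axis has already been brought to the target grid by interpolation; the exact up-scaling scheme inside the published network may differ.

```python
# Sketch only: sequential 2D super-resolution of the two projected
# low-resolution views of an axially scanned volume (shape Z, Y, X).
import numpy as np
from scipy.ndimage import zoom


def self_super_resolve(volume, sr_2d, z_scale, order=("sagittal", "coronal")):
    """Super-resolve the through-plane (Z) axis using the two projected views."""
    # Step 1: bring the coarse Z axis to the target spacing by interpolation.
    vol = zoom(volume, (z_scale, 1.0, 1.0), order=3)

    # Step 2: run the 2D model over each projected view, one view at a time.
    for view in order:
        if view == "sagittal":          # slices indexed by X, each of shape (Z, Y)
            for x in range(vol.shape[2]):
                vol[:, :, x] = sr_2d(vol[:, :, x])
        elif view == "coronal":         # slices indexed by Y, each of shape (Z, X)
            for y in range(vol.shape[1]):
                vol[:, y, :] = sr_2d(vol[:, y, :])
    return vol


if __name__ == "__main__":
    # Identity "model" used only to show the call pattern.
    lr_volume = np.random.rand(32, 256, 256).astype(np.float32)  # coarse Z
    hr_volume = self_super_resolve(lr_volume, sr_2d=lambda s: s, z_scale=4)
    print(hr_volume.shape)  # (128, 256, 256)
```

Because the second view is processed on the output of the first, swapping the entries in `order` changes the result, which is the ordering effect compared in the experiments.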

Results

RRDBNet outperformed the interpolation and EDSR methods on dataset 1. With RRDBNet, training on images from all three planes performed better than training on images from a single plane. Our method also produced better smoothness and continuity than the other methods in both the projected and scanned views. When tested on dataset 2, our model again obtained better PSNR and SSIM results in both the projected and scanned views. We also found that performance varied with the order in which the two projected views were super-resolved. The super-resolution results on dataset 3 demonstrated the good generalization capability of our method. Finally, the 3D geometric models of the urinary bladder showed that super-resolution improved the reconstruction results.

Conclusions

A CNN-based method was used to learn the intrinsic similarity among MR acquisitions from different scan planes. Through-plane super-resolution for pelvic MR images was achieved without using high-resolution 3D data, which is useful for the analysis of PFDs.
