An automatic deep learning-based workflow for glioblastoma survival prediction using pre-operative multimodal MR images: a feasibility study

4. Discussion

In this paper, we proposed an automatic workflow for GBM survival prediction based on four pre-operative MR images. We proposed the VGG-Seg and trained it on 105 glioma patients to automatically generate GBM contours from the four MR images. The trained VGG-Seg was then applied to 163 GBM patients to generate autosegmented tumor contours for survival analysis. Using these contours, we extracted handcrafted and DL-based radiomic features from the MR images. Finally, two Cox regression models were trained on the extracted features to construct the handcrafted and DL-based signatures for survival prediction.
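A trained Cox model yields a radiomic signature as the linear predictor of the selected features, where a higher score indicates higher predicted risk. The sketch below illustrates this with hypothetical feature values and coefficients (not the fitted values from this study):

```python
import numpy as np

def cox_signature(features: np.ndarray, coefficients: np.ndarray) -> np.ndarray:
    """Linear predictor of a fitted Cox model: higher score = higher predicted risk."""
    return features @ coefficients

# Hypothetical example: 3 patients, 2 selected radiomic features
# (feature values z-score normalized; coefficients are illustrative only).
X = np.array([[0.5, -1.2],
              [1.1,  0.3],
              [-0.7, 0.8]])
beta = np.array([0.9, -0.4])  # assumed coefficients, not the paper's

risk_scores = cox_signature(X, beta)
print(risk_scores)  # one prognostic score per patient: [0.93, 0.87, -0.95]
```

These scores can then be thresholded (e.g., at the cohort median) to stratify patients into high- and low-risk groups.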

The handcrafted signature achieved a C-index of 0.64, while the DL-based signature achieved a C-index of 0.67. The DL-based signature also achieved numerically higher AUCs, evaluated at an OS of 300 days and 450 days, than the handcrafted signature. Additionally, the DL-based signature, unlike the handcrafted signature, stratified patients into prognostically distinct groups using either the X-tile-generated or the median threshold. Shboul et al. reported an accuracy of 0.52 in classifying GBM patients into three survival outcome groups but did not report a C-index12. However, DL-based radiomic features were not investigated in their study. It is also difficult to know whether significant patient stratification was achieved for their testing GBM patients, since log-rank tests were not conducted.
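The C-index values above are Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the shorter survival has the higher predicted risk. A simplified pure-Python sketch (ignoring tied event times) on toy data:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index (simplified; ignores tied event times).
    A pair (i, j) is comparable if patient i has an observed event
    and an earlier time than patient j. Tied risk scores count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: 4 patients (times in days; event=1 death observed, 0 censored)
times  = [300, 450, 500, 700]
events = [1,   1,   0,   1]
risk   = [2.1, 1.5, 0.9, 0.4]  # higher score = higher predicted risk
print(concordance_index(times, events, risk))  # 1.0 — perfectly concordant
```

A C-index of 0.5 corresponds to random ranking, so 0.64 and 0.67 reflect modest but real prognostic signal.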

The VGG-Seg achieved accurate automatic GBM segmentation, with a mean Dice coefficient of 0.86 across the 163 GBM patients. For comparison, one study reported a mean Dice coefficient of 0.86 between whole tumor contours drawn by two experts on multi-modal MR images27. Recently, many studies have proposed novel 3D CNN architectures for improving glioma segmentation accuracy28–30. The goal of this study was not to benchmark the best segmentation model but to develop an automatic workflow that achieves accurate GBM survival prediction. Other automatic segmentation methods can be integrated into the proposed workflow but were beyond the scope of this study. Potential future work includes selecting the best segmentation model and investigating whether more accurate autosegmented contours result in a better survival prediction model.
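The Dice coefficient used here measures volumetric overlap between the autosegmented and reference masks, 2|A ∩ B| / (|A| + |B|). A minimal sketch with small 2D masks standing in for 3D tumor segmentations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D masks standing in for 3D tumor segmentations
auto   = np.array([[1, 1, 0],
                   [1, 0, 0]])
manual = np.array([[1, 1, 0],
                   [0, 1, 0]])
print(dice_coefficient(auto, manual))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means identical masks; the 0.86 achieved here matches the reported level of inter-expert agreement.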

We included 75 LGG patients for training the VGG-Seg because we found that the VGG-Seg trained with both the 75 LGG patients and 30 GBM patients achieved better performance than the VGG-Seg trained with the 30 GBM patients alone. This is expected, as LGG and GBM have a similar appearance in MR images. The VGG-Seg could generate three tumor subregion labels. However, the accuracy of segmenting these subregions was low, with mean Dice coefficients below 0.75. Hence, we used the whole tumor contours for feature extraction.

Our study has several limitations. First, the number of patients was limited, so we investigated only the transfer learning method for survival prediction. A CNN trained from scratch for survival prediction could learn useful features directly from MR images; however, it would be prone to overfitting and hence require more patient data to achieve robust performance. Other methods, such as training an autoencoder for feature extraction, would also be valuable to explore. Second, the information provided by the MR images alone may not be sufficient for building more accurate models. Future work could include genomic features and investigate whether combining genomic and radiomic features improves prediction performance. Third, we did not consider the treatment status of patients due to data scarcity. Integrating treatment status may improve prediction performance and is worth investigating in the future.

References

1. Louis DN, Perry A, Reifenberger G, et al. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary. Acta Neuropathol. 2016;131(6):803-820. doi:10.1007/s00401-016-1545-1

2. Ostrom QT, Bauchet L, Davis FG, et al. The epidemiology of glioma in adults: a "state of the science" review. Neuro Oncol. 2014;16(7):896-913. doi:10.1093/neuonc/nou087

3. Domingo-Musibay E, Galanis E. What next for newly diagnosed glioblastoma? Futur Oncol. 2015;11(24):3273-3283. doi:10.2217/fon.15.258

4. Tamimi AF, Juweid M. Epidemiology and Outcome of Glioblastoma. In: Glioblastoma. Codon Publications; 2017:143-153. doi:10.15586/codon.glioblastoma.2017.ch8

5. Nicolasjilwan M, Hu Y, Yan C, et al. Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients. J Neuroradiol. 2015;42(4):212-221. doi:10.1016/j.neurad.2014.02.006

6. Sanghani P, Ang BT, King NKK, Ren H. Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images. Med Biol Eng Comput. 2019;57(8):1683-1691. doi:10.1007/s11517-019-01986-z

7. Lao J, Chen Y, Li Z-C, et al. A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme. Sci Rep. 2017;7(1):10353. doi:10.1038/s41598-017-10649-8

8. Pavic M, Bogowicz M, Würms X, et al. Influence of inter-observer delineation variability on radiomics stability in different tumor sites. Acta Oncol (Madr). 2018;57(8):1070-1074. doi:10.1080/0284186X.2018.1445283

9. Fiset S, Welch ML, Weiss J, et al. Repeatability and reproducibility of MRI-based radiomic features in cervical cancer. Radiother Oncol. 2019;135:107-114. doi:10.1016/j.radonc.2019.03.001

10. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol 9351. Springer Verlag; 2015:234-241. doi:10.1007/978-3-319-24574-4_28

12. Shboul ZA, Alam M, Vidyaratne L, Pei L, Elbakary MI, Iftekharuddin KM. Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction. Front Neurosci. 2019;13:966. doi:10.3389/fnins.2019.00966

13. Antropova N, Huynh BQ, Giger ML. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys. 2017;44(10):5162-5171. doi:10.1002/mp.12453

14. Afshar P, Mohammadi A, Plataniotis KN, Oikonomou A, Benali H. From Handcrafted to Deep-Learning-Based Cancer Radiomics: Challenges and opportunities. IEEE Signal Process Mag. 2019;36(4):132-160. doi:10.1109/MSP.2019.2900993

15. Fu J, Zhong X, Li N, et al. Deep learning-based radiomic features for improving neoadjuvant chemoradiation response prediction in locally advanced rectal cancer. Phys Med Biol. 2020;65(7):075001. doi:10.1088/1361-6560/ab7970

16. Menze BH, Jakab A, Bauer S, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging. 2015;34(10):1993-2024. doi:10.1109/TMI.2014.2377694

17. Bakas S, Akbari H, Sotiras A, et al. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci Data. 2017;4:170117. doi:10.1038/sdata.2017.117

18. Bakas S, Reyes M, Jakab A, et al. Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. November 2018. http://arxiv.org/abs/1811.02629. Accessed October 23, 2019.

19. Tustison NJ, Avants BB, Cook PA, et al. N4ITK: improved N3 bias correction. IEEE Trans Med Imaging. 2010;29(6):1310-1320. doi:10.1109/TMI.2010.2046908

20. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.

21. Ulyanov D, Vedaldi A, Lempitsky V. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. Vol 2017-January. Institute of Electrical and Electronics Engineers Inc.; 2017:4105-4113. doi:10.1109/CVPR.2017.437

22. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Vol 2016-December. IEEE Computer Society; 2016:770-778. doi:10.1109/CVPR.2016.90

23. Kingma D, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.

24. van Griethuysen JJM, Fedorov A, Parmar C, et al. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017;77(21):e104-e107. doi:10.1158/0008-5472.CAN-17-0339

25. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009:248-255. doi:10.1109/cvpr.2009.5206848

26. Camp RL, Dolled-Filhart M, Rimm DL. X-tile: A new bio-informatics tool for biomarker assessment and outcome-based cut-point optimization. Clin Cancer Res. 2004;10(21):7252-7259. doi:10.1158/1078-0432.CCR-04-0713

27. Porz N, Bauer S, Pica A, et al. Multi-Modal Glioblastoma Segmentation: Man versus Machine. Strack S, ed. PLoS One. 2014;9(5):e96873. doi:10.1371/journal.pone.0096873

28. Ghosal P, Reddy S, Sai C, Pandey V, Chakraborty J, Nandi D. A Deep Adaptive Convolutional Network for Brain Tumor Segmentation from Multimodal MR Images. In: TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). IEEE; 2019:1065-1070. doi:10.1109/TENCON.2019.8929402

29. Myronenko A. 3D MRI brain tumor segmentation using autoencoder regularization. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol 11384 LNCS. Springer Verlag; 2019:311-320. doi:10.1007/978-3-030-11726-9_28

30. Fu J, Singhrao K, Qi XS, Yang Y, Ruan D, Lewis JH. Three-dimensional multipath DenseNet for improving automatic segmentation of glioblastoma on pre-operative multimodal MR images. Med Phys. 2021. doi:10.1002/mp.14800
