Neural networks pre-trained on an animal camouflage detection task show enhanced accuracy and sensitivity in identifying brain tumors from MRI scans, examining the images much as expert radiologists do.
Study: Deep learning and transfer learning for brain tumor detection and classification.
In a recent study published in Biology Methods and Protocols, researchers examined the use of convolutional neural networks (CNNs) and transfer learning to improve brain tumor detection in magnetic resonance imaging (MRI) scans.
Using CNNs pre-trained on detecting animal camouflage for transfer learning, the study investigated whether this unconventional step could enhance the accuracy of CNNs in identifying gliomas and improve diagnostic support in medical imaging.
Background
Artificial intelligence (AI) and deep learning models, including CNNs, have made significant strides in medical imaging, especially in detecting and classifying complex patterns in tasks such as tumor detection. CNNs excel at learning and recognizing features from images, allowing them to classify unseen data accurately.
Additionally, transfer learning — a process where pre-trained models are adapted to new but related tasks — can enhance the effectiveness of CNNs, especially in image-based applications where data may be limited.
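The article includes no code, but the general transfer-learning recipe it describes is straightforward. As an illustration only, here is a minimal PyTorch sketch: it stands in an ImageNet-pretrained ResNet-18 for the camouflage-pretrained network (which is not public here), freezes the learned feature extractor, and swaps in a new classification head for the target task. The class count is an assumption.

```python
# Illustrative only: a generic transfer-learning recipe in PyTorch.
# The study's actual architecture and camouflage-pretrained weights are not
# available here; an ImageNet-pretrained ResNet-18 is used as a stand-in.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical: three glioma types plus normal brain

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new, related task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optimize only the new head's parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

In practice the frozen layers can later be unfrozen at a lower learning rate for fine-tuning, but the head-replacement step above is the core of the technique.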
While numerous CNNs have been trained on large datasets for brain tumor detection, the inherent similarities between normal and cancerous tissues continue to present challenges.
About the study
The present study used a combination of CNN-based models and transfer learning techniques to explore the classification of brain tumors using MRI scans.
The researchers used a main dataset that consisted of T1-weighted and T2-weighted post-contrast MRI images showing three types of gliomas — astrocytomas, oligodendrogliomas, and oligoastrocytomas — as well as normal brain images.
Data for the glioma MRIs were obtained from online sources, while the Veterans Affairs Boston Healthcare System provided the normal brain MRIs. The researchers preprocessed the images manually by cropping and resizing them; no additional spatial normalization that could introduce bias was performed.
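The article says only that images were cropped and resized. A minimal Pillow sketch of such a step is shown below; the crop box, target size, and file name are assumptions, not the study's actual parameters.

```python
# Illustrative only: the exact crop box and target size used in the study
# are not reported; the values below are placeholders.
from PIL import Image

def preprocess(path, crop_box=(16, 16, 240, 240), size=(224, 224)):
    """Crop away background borders, then resize to the network's input size."""
    img = Image.open(path).convert("L")   # MRI slices are single-channel
    img = img.crop(crop_box)              # (left, upper, right, lower)
    return img.resize(size, Image.BILINEAR)

slice_img = preprocess("glioma_t2_slice.png")  # hypothetical file name
```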
The study was unique in its use of a CNN pre-trained to detect camouflaged animals; the researchers hypothesized that training the CNN on camouflage patterns might enhance the network's sensitivity to subtle features in brain MRIs.
They reasoned that discriminating cancerous tissues and cells from the healthy tissue surrounding a tumor parallels detecting an animal that uses natural camouflage to hide.
This pre-trained model served as the starting point for transfer learning in the study's two networks, T1Net and T2Net, which classified T1- and T2-weighted MRIs, respectively. To analyze the networks' performance beyond headline metrics such as accuracy, the study also employed explainable AI (XAI) techniques.
The study mapped the networks' feature spaces using principal component analysis (PCA) to visualize how the data were distributed, while DeepDreamImage provided visual interpretations of internal patterns and Gradient-Weighted Class Activation Mapping (Grad-CAM) saliency maps highlighted the areas of each MRI scan the network relied on for classification.
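A feature-space map of this kind is typically produced by projecting a network's high-dimensional feature vectors onto their first two principal components. A generic scikit-learn sketch follows; the feature files, their layer of origin, and the array shapes are assumptions for illustration.

```python
# Illustrative only: project CNN feature vectors (e.g., penultimate-layer
# activations, one row per scan) onto the first two principal components.
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

features = np.load("t2net_features.npy")  # hypothetical: shape (n_scans, n_features)
labels = np.load("t2net_labels.npy")      # hypothetical: integer class per scan

coords = PCA(n_components=2).fit_transform(features)
for cls in np.unique(labels):
    mask = labels == cls
    plt.scatter(coords[mask, 0], coords[mask, 1], label=f"class {cls}", s=10)
plt.legend()
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```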
Cumulatively, these methods offered insights into the decision-making processes of the CNN and the impact of camouflage transfer learning on classification outcomes.
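Of these methods, Grad-CAM is the most standardized: it weights the final convolutional feature maps by the gradient of a class score and sums them into a heatmap. The sketch below shows the core computation in PyTorch; it assumes a ResNet-style model whose last convolutional block is `model.layer4`, which may differ from the study's networks.

```python
# Illustrative only: a minimal Grad-CAM, assuming a ResNet-style model whose
# final convolutional block is model.layer4.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """Return a heatmap of the regions driving the target-class score."""
    activations, gradients = [], []
    layer = model.layer4  # assumed final conv block
    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]       # shape (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
    cam = F.relu((weights * acts).sum(dim=1))        # weighted sum over channels
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
```

The resulting low-resolution map is usually upsampled and overlaid on the input scan to show which regions drove the classification.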
Results
The study showed that transferring learning from animal camouflage detection improved the CNNs' performance in brain tumor classification tasks. Notably, transfer learning boosted the accuracy of the T2-weighted MRI model to 92.20%, a significant increase over the non-transfer model's 83.85%.
This improvement was statistically significant with a p-value of 0.0035 and substantially enhanced the classification accuracy for astrocytomas. For T1-weighted MRI scans, the transfer-trained network showed an accuracy of 87.5%, though this improvement was not statistically significant.
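The article reports p = 0.0035 without naming the statistical test. One common way to compare two classifiers evaluated on the same test scans is McNemar's test on their paired right/wrong outcomes; the sketch below shows that approach, with hypothetical input arrays, purely as an example of how such a comparison can be run.

```python
# Illustrative only: the study's actual test is not named in the article.
# McNemar's test compares two classifiers' paired correct/incorrect outcomes.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical boolean arrays: whether each model classified each scan correctly.
baseline_correct = np.load("baseline_correct.npy")
transfer_correct = np.load("transfer_correct.npy")

table = [
    [np.sum(baseline_correct & transfer_correct),
     np.sum(baseline_correct & ~transfer_correct)],
    [np.sum(~baseline_correct & transfer_correct),
     np.sum(~baseline_correct & ~transfer_correct)],
]
print(mcnemar(table, exact=True).pvalue)
```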
Furthermore, the feature spaces generated from both models after transfer learning indicated improved generalization ability, particularly for T2Net.
Compared to baseline models, the transfer-trained networks displayed a clearer separation between tumor categories, with the T2 transfer model showing enhanced distinction, especially for astrocytomas.
The DeepDreamImage visualizations provided additional detail, showing more defined and distinct ‘feature prints’ for each glioma type in transfer-trained networks compared to baseline models.
This distinction suggested that transfer learning from camouflage detection helped networks better identify key tumor characteristics, potentially by generalizing from subtle camouflage patterns.
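DeepDream-style visualizations of this kind are generated by gradient ascent on the input image to amplify a chosen layer's activations. The study used the DeepDreamImage method; the generic PyTorch sketch below is an assumption about the underlying technique, not the authors' code.

```python
# Illustrative only: DeepDream-style visualization via gradient ascent on
# the input image to amplify one layer's activations.
import torch

def deep_dream(model, image, layer, steps=50, lr=0.05):
    """Iteratively adjust the image to strengthen the layer's response."""
    model.eval()
    for p in model.parameters():          # only the image is optimized
        p.requires_grad_(False)
    feats = []
    hook = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    img = image.clone().unsqueeze(0).requires_grad_(True)
    for _ in range(steps):
        feats.clear()
        model(img)
        loss = feats[0].norm()            # strength of the layer's activations
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)
            img.grad.zero_()
    hook.remove()
    return img.detach().squeeze(0)
```

Applied per class, this yields the distinct "feature prints" the article describes: exaggerated renderings of the patterns each network has learned to associate with each glioma type.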
Moreover, the Grad-CAM saliency maps revealed that the T1 and T2 networks focused on tumor areas as well as the surrounding tissue during classification. This resembles the diagnostic process of human radiologists, who examine tissue distortion adjacent to tumors, indicating that the transfer-trained networks could detect more subtle, relevant features in MRI scans.
Conclusions
In summary, the study indicated that transfer learning from networks pre-trained on the detection of animal camouflage improved the performance of CNNs in classifying brain tumors in MRI scans, especially with T2-weighted images. This approach enhanced the networks' ability to detect subtle tumor features and increased classification accuracy.
These findings support the potential for unconventional training sources to enhance neural network performance in complex medical imaging tasks, offering a promising direction for future AI-assisted diagnostic tools.