Diagnostics, Vol. 13, Pages 101: D2BOF-COVIDNet: A Framework of Deep Bayesian Optimization and Fusion-Assisted Optimal Deep Features for COVID-19 Classification Using Chest X-ray and MRI Scans

In earlier years, computer vision researchers developed many algorithms for the identification and classification of COVID-19 using CXR images [18,19]. A few researchers developed innovative deep learning (DL) architectures for the detection and identification of coronaviruses from CXR and CT images, while the majority of studies focused on traditional techniques [20,21]. Muhammad et al. [16] presented a framework for coronavirus classification from CXR images using deep explainable AI. Two deep learning models were used for the training and feature extraction processes. They used canonical correlation analysis to improve feature fusion. Furthermore, a hybrid whale-elephant herding feature selection was used to optimize the fused features. Three publicly available datasets were used by the authors. They achieved accuracies of 99.1, 98.2, and 96.7%, which were better than those of previous techniques. The limitation of this work was the optimization algorithm's static threshold value, which will be resolved in future work. Ameer et al. [22] presented a framework employing a CNN-LSTM for coronavirus classification using CXR images. They developed a novel CNN-LSTM method with a modified EfficientNetB0 for deep feature extraction. Additionally, the extracted features were fused using a serial-based maximum value fusion technique, and an improved moth flame feature selection was employed on the fused vector. The experiments were carried out on three publicly available datasets and yielded accuracy rates of 93.0, 94.5, and 98.5%, respectively. The drawback of this work was the fusion process, which increased the vector size and the computational time. Xiaole et al. [23] introduced a novel two-branch network using transformers and a CNN for the recognition of CT scan images. One branch was built using a CNN and the second using transformer blocks. The features of the two branches were fused using a bi-directional approach.
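The serial-based maximum value fusion used in [22] is described only briefly; a minimal sketch of one plausible interpretation (zero-pad the shorter vector to align the two, then keep the element-wise maximum; the function name and dimensions are illustrative, not the authors' implementation) is:

```python
import numpy as np

def serial_max_fusion(f1, f2):
    """Hypothetical sketch: align two deep-feature vectors of different
    lengths by zero-padding the shorter one, then fuse them by taking
    the element-wise maximum of the aligned pair."""
    n = max(f1.size, f2.size)
    a = np.zeros(n)
    a[:f1.size] = f1
    b = np.zeros(n)
    b[:f2.size] = f2
    return np.maximum(a, b)

rng = np.random.default_rng(0)
# e.g., 1280-D features from one CNN, 2048-D from another
fused = serial_max_fusion(rng.random(1280), rng.random(2048))
print(fused.shape)  # (2048,)
```

The fused vector keeps the stronger activation at each position, which is one reason such fusion can inflate the vector size relative to the smaller input, as the cited work notes.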
They used a large-scale COVID-19 dataset for the experiment and achieved a 96.7% accuracy rate. The limitations of this research were the incomplete patient information and the inadequate number of features. Aksh et al. [24] implemented an efficient CNN model for the detection of COVID-19 using CXR and CT images. They designed a CNN model with several layers and visualized the learned weights through Grad-CAM. The introduced method was implemented on a multiclass COVID-19 CT dataset and achieved a 97.6% accuracy rate. The drawback of this research was the inadequate amount of data used for the training process. Gayathri et al. [25] created a computer-aided mechanism for the diagnosis of COVID-19 via CXR images. The presented method was based on a DCNN and a sparse autoencoder. The experiments were performed on COVID-19 and non-COVID images and attained an accuracy of 95.7%. AbdElhamid et al. [26] developed a COVID-19 multi-classification technique using CXR images. The proposed model used an XceptionNet model pre-trained via transfer learning (TL), with features obtained from the global average pooling (GAP) layer. A three-class, open-source dataset was used in the experiment, which yielded a 99.3% accuracy rate. The limitation of this method was the inadequate number of images in the selected dataset. Rahul et al. [27] employed a framework that diagnosed COVID-19 using deep features and correlation coefficients. In that study, the authors applied a DCNN for feature extraction, and the extracted features were further utilized for classification. Veerraju et al. [28] implemented a novel technique that diagnosed COVID-19 by tuning hyperparameters via a hosted cuckoo optimization algorithm. Samritika et al. [29] designed an automated detection and classification framework for chest images using a CNN. The authors performed two classification tasks (binary and multiclass) and achieved improved accuracy. Vruddhi et al.
[30] presented a method of diagnosing COVID-19 using CT images and deep learning methods. They designed a novel CNN model named CTnet-10. The selected dataset consisted of two classes, and the experiments attained an accuracy of 82.1%. Moreover, they used traditional DCNN networks and achieved a 94.52% accuracy rate. Umut et al. [31] presented an automated and effective method for the detection of coronavirus disease. The authors extracted features using four CNN architectures and fused the information using a ranking-based technique. The proposed method achieved a 98.93% accuracy rate. The disadvantage of this work was the ranking-based fusion, because it could miss important features. Ghulam et al. [32] presented a multi-layer fusion network for the classification of coronavirus disease from lung ultrasound images. The presented model consisted of five main blocks of convolutional connectors and employed a fusion technique. An open-source dataset was selected for the experimental process, and the model achieved a 92.5% accuracy rate. The high number of parameters was the major limitation of this work. Emtiaz et al. [33] presented a deep learning-based classification framework using CXR images. The authors designed a novel 22-layer CNN architecture that was further employed for classification. For a binary dataset, the presented framework obtained 99.1% accuracy, whereas the multiclass accuracy was 94.2%. Dalia et al. [34] presented a deep learning network optimized using the GSO algorithm. The approach was applied to a binary-class dataset and achieved a significant accuracy of 98%. Abirami et al. [35] presented a novel framework based on generative adversarial networks (GANs) for the classification of COVID-19 using medical CXR images. The augmentation process was performed using a GAN, and the generated samples were fed to a newly created network. The described framework achieved 99.78% accuracy. Abirami et al.
[36] presented a framework that automatically segmented and identified COVID-19 lung infection using CT scan images. The created model achieved 98.10% accuracy for classification and a Dice coefficient of 81.1% for GAN-based segmentation. Irfan et al. [37] presented an automated framework for diagnosing COVID-19 using X-ray images. The DenseNet121, ResNet50, VGG16, and VGG19 models were trained using transfer learning. The COVID and normal CXR images were collected from four different publicly available datasets, forming a dataset of two classes (COVID and normal). Using this approach, the presented framework achieved 99.3% accuracy, with VGG16 and VGG19 outperforming the other two models. The limitation of this work was that the authors collected only COVID-19 and normal images from the different datasets; other classes, such as pneumonia, were removed, so the presented framework was unable to diagnose other respiratory infections. Naeem et al. [38] presented a novel method that detected COVID-19 infection using chest radiography images. The described model had nine convolutional layers and one fully connected layer. The architecture used two activation functions: ReLU and Leaky ReLU. The experiments were conducted on a multiclass dataset consisting of three classes (COVID-19, normal, and pneumonia). Using this approach, the authors achieved 98.40% accuracy. Shifat et al. [39] described a technique based on Bayesian optimization of a deep learning approach for the classification of COVID-19 using X-ray images. The presented framework developed a novel DCNN model named COVIDXception-Net, trained by employing Bayesian optimization for the selection of the best-trained model. The authors performed the whole experiment on four publicly available datasets, and the provided framework achieved 99.2% accuracy.
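Bayesian optimization, as employed in [39] to select the best-trained COVIDXception-Net, treats validation performance as a black-box function: a probabilistic surrogate is fitted to past trials, and an acquisition function proposes the next hyperparameter to evaluate. The sketch below illustrates the general technique with a Gaussian-process surrogate and expected improvement over a single hyperparameter (log learning rate); the objective is a synthetic stand-in for validation error, since the paper's model is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in for the real objective: validation error of a
# network trained with learning rate 10**x. A real run would train the
# model at each probe instead of evaluating this closed-form curve.
def val_error(log_lr):
    return (log_lr + 3.0) ** 2 + 0.05 * np.sin(5 * log_lr)

bounds = (-6.0, -1.0)
rng = np.random.default_rng(1)
X = rng.uniform(*bounds, size=4).reshape(-1, 1)   # initial random probes
y = np.array([val_error(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    cand = np.linspace(*bounds, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected-improvement acquisition (minimization form)
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, val_error(x_next[0]))

best_log_lr = X[np.argmin(y), 0]
print(best_log_lr)  # converges near the true minimum around -3
```

Compared with the static hyperparameters criticized later in this section, each probe here is chosen where the surrogate predicts either a low error or high uncertainty, which is what makes the search sample-efficient.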
Finally, they performed a qualitative analysis by utilizing Grad-CAM visualization. In summary, the authors in the literature used pre-trained models with transfer learning concepts for COVID-19 classification. A few of them focused on the binary class problem, and many considered the multiclass problem. The deep models were trained using static hyperparameters, such as the learning rate, network depth, momentum, and number of epochs. In addition, the authors selected relatively small datasets for the training process. There are several challenges in effectively classifying COVID-19 using standard chest X-rays. Individuals with COVID-19 may have radiological imaging that resembles that of patients with bacterial or viral pneumonia, most notably pneumonia caused by SARS and MERS. As a result, the ability to correctly diagnose diseases by examining medical imagery has become a critical challenge. The first challenge is multiclass classification across COVID-19, viral pneumonia, lung opacity, TB, fibrosis patterns, and normal images; this is difficult because there are so many different types of lung diseases. These images are shown in Figure 2, which illustrates the high degree of resemblance between the classes, meaning there is a chance that an incorrect classification will be made. The second challenge is the removal of redundant and useless information, which otherwise lowers classification accuracy while simultaneously increasing computation time.
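The redundancy problem raised above is commonly addressed with a simple correlation filter applied to the feature matrix before classification. The sketch below is a generic illustration of that idea (not the specific selection method of any cited paper): a feature column is dropped when its absolute Pearson correlation with an already-kept column exceeds a threshold.

```python
import numpy as np

def drop_correlated(features, threshold=0.95):
    """Generic redundancy filter: keep a feature column only if its
    absolute Pearson correlation with every previously kept column
    stays at or below the threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return features[:, keep], keep

rng = np.random.default_rng(0)
base = rng.random((100, 3))
# Column 3 duplicates column 0 up to tiny noise, i.e., it is redundant.
X = np.hstack([base, base[:, [0]] + 1e-3 * rng.random((100, 1))])
reduced, kept = drop_correlated(X)
print(kept)  # the near-duplicate column 3 is dropped
```

Shrinking the feature vector this way reduces the classification time noted as the second challenge, at the cost of choosing a threshold, which is itself a hyperparameter.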
