Artificial intelligence method based on multi-feature fusion for automatic macular edema (ME) classification on spectral-domain optical coherence tomography (SD-OCT) images

Introduction

Macular edema (ME) is a common ocular manifestation of fluid infiltration or inflammation in the sensitive macular area of the retina, and is an important cause of visual deterioration (Song et al., 2022). There are several ME-related eye diseases, including diabetic ME (DME), retinal vein occlusion (RVO), age-related macular degeneration (AMD), and central serous chorioretinopathy (CSC). Chronic hyperglycemia in diabetes mellitus (DM) causes damage to capillaries, resulting in retinal ischemia and increased vascular permeability, which leads to DME (Kim et al., 2020). Wet AMD is a result of subretinal choroidal neovascularization, resulting in fragile and leaky blood vessels that penetrate through Bruch’s membrane and cause edema (McGill et al., 2017). Excessive angiogenic growth factors, caused by hypoxia secondary to RVO, lead to vascular leakage and ME (Narayanan et al., 2021). In CSC, choroidal congestion, thickening, and hyperpermeability are considered to cause leakage through the retinal pigment epithelium (RPE) (Schellevis et al., 2019).

Optical coherence tomography (OCT) is a high-resolution, non-contact, and non-invasive biomedical imaging technique that is often used in eye clinics for macular disease. The basic principle of OCT imaging is that a beam of light is emitted into the tissues to be examined and the reflected or back-scattered light from the tissues is detected; this reflected light interferes with light originating from the same source, and the reflectivity profile along the light beam can be derived from the interference signal and used to generate an A-scan (Liu et al., 2019). The combination of multiple A-scans along the horizontal axis produces a brightness scan (B-scan) (Khalid et al., 2017). Compared with fundus cameras, OCT systems have higher contrast and depth-sectioning capability (LaRocca et al., 2014), and high-quality cross-sectional images of the neurosensory retina can be acquired without pupil dilatation in a matter of seconds (Ouyang et al., 2013). OCT is therefore more sensitive for detecting a variety of retinal irregularities.

Macular edema diagnosis by OCT is based on visualization of the retinal structure. Spectral-domain optical coherence tomography (SD-OCT), in particular, can better delineate the different retinal layers, so the histological changes of ME can be shown in more detail. In DME patients, SD-OCT shows mild retinal edema with cystic spaces located only in the outer plexiform layer (OPL), whereas, when edema worsens, the cystic spaces involve both the OPL and the outer nuclear layer (ONL) (Leung et al., 2008). SD-OCT image analysis is also more sensitive than fundus autofluorescence (FAF) for identifying geographic atrophy (GA) in patients treated for exudative AMD (Massamba et al., 2019). For CSC patients, SD-OCT can show shallow serous detachments and provide precise information about the amount and localization of subretinal fluid and RPE abnormalities (Murthy et al., 2016). SD-OCT can also quantify retinal thickness changes in eyes with cystoid macular edema (CME) from central retinal vein occlusion (CRVO), and it is superior to contact lens–assisted biomicroscopy for identifying foveal edema (Decroos et al., 2013).

Currently, ME diagnosis depends on the subjective evaluation of OCT and the clinical experience of ophthalmologists. This process is time-consuming and labor-intensive, requires extensive training, and diagnostic ability varies widely among ophthalmologists of different experience levels. With the application of artificial intelligence in ophthalmology, a large number of machine learning-based computer-aided diagnosis (CAD) models have been developed for the quantitative analysis of OCT images to achieve automatic diagnosis of macular diseases. Alsaih et al. (2017) applied machine-learning techniques to DME classification on SD-OCT images; both the sensitivity (SE) and specificity (SP) of their best result were 87.5%. Chen Y. et al. (2021) applied convolutional-neural-network-based transfer learning to classify AMD; their CNN models with appropriate hyperparameters showed excellent capability in classifying OCT images of AMD and DME. However, these studies performed only binary classification, which limits the application of machine-learning algorithms to the diagnosis of many diseases. Wang et al. (2016) proposed a CAD model to discriminate AMD, DME, and healthy maculae on OCT images; their best model, based on the sequential minimal optimization (SMO) algorithm, achieved an overall accuracy of 99.3% across the three classes. However, the coverage of disease types was still inadequate, and these studies were all based on single features.

Signal fusion methods have recently attracted the attention of many researchers for solving pattern-recognition problems; they are generally divided into three categories: early fusion, intermediate fusion, and late fusion (Verma and Tiwary, 2014). Early fusion, also called feature-level fusion, combines the data before classification (Zhang et al., 2017); it is defined as performing merging and splitting operations on existing feature sets to generate new feature sets. By fusing deep learning and machine-learning features, the abstract features of deep learning and the detailed features of traditional machine learning can complement each other (Wang et al., 2022), and the accuracy of models can be improved (Khan and Hasan, 2020). We therefore introduce an artificial intelligence method that fuses traditional features and deep features, aiming to automatically classify DME, AMD, RVO, and CSC on SD-OCT images (Figure 1).

Figure 1. The flowchart of this study.

Materials and methods

Image collection and pre-processing

A total of 1,213 two-dimensional (2D) cross-sectional OCT images of ME were collected from the Jiangxi Provincial People’s Hospital (China) between 2016 and 2021. According to the OCT reports of a senior ophthalmologist, 300 images with DME, 303 images with AMD, 304 images with RVO, and 306 images with CSC were included. The set was randomly divided into a training set and a test set at a ratio of 7:3. To protect patient privacy, all patient images were anonymized prior to analysis. All OCT images were acquired by the same experienced ophthalmologist on the same machine, a Heidelberg Spectralis OCT (Heidelberg Engineering, Dossenheim, Germany). The macular area was then outlined manually with ITK-SNAP software to produce the mask region-of-interest (ROI) images. The obtained mask files were used for traditional omics feature extraction. Next, each image was cropped to the ROI, and the resulting ROI segments were used for deep learning (DL) feature extraction.
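
The cropping step can be illustrated with a minimal sketch; the paper gives no implementation details, so the mask format (binary images exported from ITK-SNAP) and the file names are assumptions:

```python
import numpy as np
from PIL import Image

def crop_to_roi(image_path, mask_path):
    """Crop an OCT B-scan to the bounding box of its binary ROI mask."""
    image = Image.open(image_path).convert("L")
    mask = np.asarray(Image.open(mask_path).convert("L")) > 0
    rows, cols = np.nonzero(mask)
    # Bounding box (left, top, right, bottom) of the outlined macular area.
    return image.crop((cols.min(), rows.min(), cols.max() + 1, rows.max() + 1))

# Hypothetical file names for one image/mask pair.
roi_segment = crop_to_roi("oct_001.png", "oct_001_mask.png")
```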

Traditional omics feature extraction and selection

Based on the mask files and the original image files, traditional omics features were extracted, covering first-order statistics, shape, size, and texture. The Z-score standardization method was then used to normalize the extracted features, and Spearman correlation coefficients were used to select among the normalized features.
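
The paper does not name its extraction toolkit; the sketch below assumes the open-source pyradiomics package (whose default "original" feature set is consistent with the 107 features reported later), together with a greedy Spearman redundancy filter at an assumed threshold of 0.9. The `pairs` variable is a hypothetical list of (image, mask) file paths:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from radiomics import featureextractor  # pyradiomics

extractor = featureextractor.RadiomicsFeatureExtractor()

# pairs: list of (image_path, mask_path) tuples, assumed to exist.
rows = []
for img_path, mask_path in pairs:
    result = extractor.execute(img_path, mask_path)
    rows.append({k: float(v) for k, v in result.items()
                 if k.startswith("original_")})  # drop diagnostic entries
features = pd.DataFrame(rows)

# Z-score standardization.
features = (features - features.mean()) / features.std()

# Greedy Spearman filter: keep a feature only if it is not strongly
# correlated with any feature already kept (threshold assumed).
rho, _ = spearmanr(features.values)
keep = []
for i in range(features.shape[1]):
    if all(abs(rho[i, j]) < 0.9 for j in keep):
        keep.append(i)
selected = features.iloc[:, keep]
```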

DL feature extraction and model visualization

The ROI segments were input into the alexnet, inception_v3, resnet34, and vgg13 models, respectively, each initialized with pre-trained ImageNet weights, to obtain the DL features. The DL features were then reduced in dimensionality by principal component analysis (PCA). Finally, the selected features of the four DL models were fused.
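
A sketch of how such feature extraction might be implemented with torchvision (≥ 0.13 weights API) is given below, with resnet34 shown; the other three backbones are handled analogously. The preprocessing choices and the use of `torch.nn.Identity` to expose the pooled activations are our assumptions:

```python
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA

# Pre-trained backbone with its classifier removed, so the forward pass
# returns the 512-dim "avgpool" activations directly.
backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # OCT B-scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# roi_segments: list of PIL images from the cropping step.
with torch.no_grad():
    feats = torch.stack([backbone(preprocess(img).unsqueeze(0)).squeeze(0)
                         for img in roi_segments])

# Compress this backbone's features to 31 components, as in the paper;
# repeating for all four backbones and concatenating the results yields
# the 124-dim fused deep feature vector per image.
deep_feats_resnet34 = PCA(n_components=31).fit_transform(feats.numpy())
```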

To evaluate the regions on which the deep learning models focus, the gradient-weighted class activation map (Grad-CAM) method was used. In this method, the gradient information flowing into the last convolutional layer of a convolutional neural network (CNN) is used to generate coarse heat maps of the important regions in the input images (Chen T. et al., 2021). Based on these coarse heat maps, we can see which areas of the segments the DL models most likely focus on.
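
A minimal Grad-CAM sketch using forward/backward hooks is shown below; it assumes a classification model with its head intact (e.g., a fine-tuned resnet34 with `layer4[-1]` as the target layer), not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Coarse heat map: target-layer activations weighted by the
    spatial mean of the class-score gradients, ReLU-ed and upsampled."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # per-channel weights
    cam = F.relu((weights * acts[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max()).squeeze().detach()

# e.g., heat = grad_cam(net, net.layer4[-1], preprocess(img), predicted_class)
```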

Early fusion and lasso model establishment

Feature fusion was performed after the pooling layer in the model: the traditional omics features and the deep-fusion features were concatenated into a composite fusion feature set. A t-distributed stochastic neighbor embedding (t-SNE) algorithm was used to project the high-dimensional feature vectors into 2D space for visualization. The fusion feature set followed the 7:3 training/test split described above, and a lasso model was established on the training set to further select features. We chose the optimal λ based on the minimum criteria under fivefold cross-validation.
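
The fusion and selection steps might be sketched as follows with scikit-learn, reusing `selected` from the omics sketch; `deep_fused` (the 124-column concatenation of the four backbones' PCA outputs), `train_idx`/`test_idx` (the 7:3 split indices), and the integer-coded target `y_train` are assumed names. Plain lasso is a regression model and the paper does not state how the four class labels were encoded for it, so integer coding is an assumption here:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.linear_model import LassoCV

# Early fusion: concatenate the 38 selected traditional omics features
# with the 124 fused deep features, giving 162 columns per image.
fusion = np.hstack([selected.values, deep_fused])

# 2D t-SNE embedding for visual inspection of class separability.
embedding = TSNE(n_components=2, random_state=0).fit_transform(fusion)

# Lasso with fivefold CV on the training split; features with non-zero
# coefficients are retained.
lasso = LassoCV(cv=5).fit(fusion[train_idx], y_train)
kept = np.flatnonzero(lasso.coef_)
X_train = fusion[train_idx][:, kept]
X_test = fusion[test_idx][:, kept]
```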

Classification model establishment

Support vector machine (SVM), K-nearest neighbor (KNN), ExtraTrees, logistic regression (LR), and multilayer perceptron (MLP) algorithms were used to establish classification models on the training set, and the performance of the final classification models was evaluated on the test set. Finally, the classification performance of the different models was assessed and compared.
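
A sketch of the five classifiers with scikit-learn follows; hyperparameters are not reported in the paper, so library defaults are used here, with probability estimates enabled on the SVM for later ROC analysis:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

classifiers = {
    "SVM": SVC(probability=True),
    "KNN": KNeighborsClassifier(),
    "ExtraTrees": ExtraTreesClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)  # selected fusion features, training split
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```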

Statistical analysis

The accuracy, confusion matrix, and receiver operating characteristic (ROC) curve of the classification models were used to evaluate model performance. All statistical analyses and visualizations were performed in Python (version 3.9.7).
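
These metrics correspond directly to scikit-learn routines; a sketch for one model is shown below, with labels 0–3 encoding AMD, DME, RVO, and CSC as in Figure 6:

```python
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc
from sklearn.preprocessing import label_binarize

clf = classifiers["SVM"]
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)

print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))  # rows: actual class, columns: predicted

# One-vs-rest ROC curve and AUC per class.
y_bin = label_binarize(y_test, classes=[0, 1, 2, 3])
for c in range(4):
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```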

Results

Characteristics of OCT images

A total of 1,213 original images of ME were collected, comprising DME (n = 300), AMD (n = 303), RVO (n = 304), and CSC (n = 306). The training set consisted of 849 images: DME (n = 240), AMD (n = 243), RVO (n = 243), and CSC (n = 245). The test set consisted of 364 images: DME (n = 60), AMD (n = 60), RVO (n = 61), and CSC (n = 61). The original image files and corresponding mask files were then used for traditional omics feature extraction, and the segments based on the maximum ROI mask were used for deep learning feature extraction.

Characteristics of traditional omics features

For each ROI, a total of 107 features per image were extracted; after Spearman correlation filtering, 38 features per image were selected, including 6 first-order features, 4 shape-based features, and 28 textural features. The textural features comprised 5 Gray Level Co-occurrence Matrix (GLCM), 4 Gray Level Run Length Matrix (GLRLM), 9 Gray Level Size Zone Matrix (GLSZM), 6 Gray Level Dependence Matrix (GLDM), and 4 Neighboring Gray Tone Difference Matrix (NGTDM) features, as shown in Figure 2.

Figure 2. Pie chart of the traditional omics feature distribution.

Characteristics of deep learning features and model visualization

A total of 9,216, 2,048, 512, and 16,383 deep learning features per image were obtained from alexnet, inception_v3, resnet34, and vgg13, respectively, taken from the “avgpool” layer before the final fully connected (FC) layers. PCA compressed the features of each model to 31 dimensions, so a deep-fusion feature subset containing 124 compressed features was finally obtained. The Grad-CAM heatmaps highlighted the areas on which the deep learning models likely focused, as shown in Figure 3.

Figure 3. Gradient-weighted class-activation map (Grad-CAM) visualization of deep learning feature extraction: CSC (A); AMD (B); DME (C); RVO (D). The blue part that gathers inward from the red part is active, indicating that the model pays particular attention to this area (Huang et al., 2022).

Characteristics of fusion features

After early fusion, a subset of 162 features per image was obtained (38 traditional omics features plus 124 deep-fusion features). The features were visualized by t-SNE for an intuitive perception of how well they distinguish the different types of ME, as shown in Figure 4.

Figure 4. Feature visualization by t-distributed stochastic neighbor embedding (t-SNE): AMD (red); DME (green); RVO (blue); CSC (purple).

Lasso model evaluation

LASSO was used for automated feature selection in this study. Based on the optimal lambda value and the corresponding coefficients in the training set, 53 features were selected to build the final classification models, as shown in Figure 5.

Figure 5. Feature selection in the lasso model: (A) Lasso coefficient profiles of the 162 fusion features, where each curve corresponds to one feature and the vertical black line indicates the optimal λ. (B) Curve of binomial deviance as a function of λ, where the optimal log(λ) value is marked by a vertical dashed line.

Classification models evaluation

The accuracy of the SVM model was the highest, reaching 93.8% in the test set, whereas the KNN, ExtraTrees, MLP, and LR models reached only 90.08, 82.23, 90.50, and 86.77%, respectively, as shown in Table 1.

Table 1. The accuracy of classification models in the training set and test set.

Since our problem is a multiclass classification, the AUC of binary classification cannot be applied directly. Therefore, micro- and macro-averages (Sokolova and Lapalme, 2009) were calculated from the ROC curves; the macro-average gives equal weight to the classification of each label, whereas the micro-average incorporates the frequency of the labels into the weighting. In the test set, the areas under the curve (AUCs) of the micro- and macro-averages of the SVM and MLP models were both 99%, the highest among all models. When the ROC curve of each group in the test set was compared with that of the other groups, each group was clearly distinguished from the others in the SVM model, and the AUCs of the AMD, DME, RVO, and CSC groups were 100, 99, 98, and 100%, respectively, whereas in the MLP model they were only 99, 97, 97, and 100%, respectively. In terms of ROC results, the SVM model therefore had the best performance.
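
The micro- and macro-averages described above can be computed as in this sketch, reusing the binarized labels and class scores from the earlier metrics snippet:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Micro-average: pool all (label, score) pairs, so label frequency matters.
micro_auc = roc_auc_score(y_bin, y_score, average="micro")

# Macro-average: unweighted mean of the per-class one-vs-rest AUCs.
macro_auc = np.mean([roc_auc_score(y_bin[:, c], y_score[:, c])
                     for c in range(4)])
```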

The test set results were arranged in a 4 × 4 confusion matrix according to the true labels and the classification results. The recognition performance of the SVM and MLP models was better than that of the others; for example, the recognition rates for RVO were relatively high in the SVM and MLP models, whereas RVO was easily misrecognized by the other three models. There were also some differences between the two: in the SVM model, the recognition rates of AMD, DME, and RVO were relatively high, whereas in the MLP model the recognition rate of CSC was relatively high, as shown in Figure 6.

Figure 6. Confusion matrix and ROC curve of the different models: (A1–E1): Confusion matrix of the SVM, LR, KNN, MLP, and ExtraTrees model, respectively. Each row of the matrix represents the actual class and each column indicates the predicted class. (A2–E2): ROC curve of SVM, LR, KNN, MLP, and ExtraTrees model, respectively. Label 0 for AMD, label 1 for DME, label 2 for RVO, label 3 for CSC.

Discussion

The current study used a multi-feature fusion method for automatic ME classification on SD-OCT images. It fused traditional omics features with the features of four DL models: alexnet, inception_v3, resnet34, and vgg13. Grad-CAM was used to visualize and explain the DL black-box models. Finally, after the fusion features were screened by the lasso model, the features with non-zero coefficients were used to develop five classification models. According to the accuracy results, as well as the ROC curves and confusion matrices, the SVM model performed best and could be used to classify DME, AMD, RVO, and CSC accurately on SD-OCT images.

Early intelligent diagnosis mainly relied on artificially designed feature templates or single traditional machine-learning methods (Turchin et al., 2009), treating intelligent diagnosis as a classification problem (Srinivasan et al., 2014; Alsaih et al., 2017). Because a single feature is usually sensitive to changes in some image characteristics and insensitive to others, when two kinds of images differ little in the characteristics to which that feature is sensitive, a classifier trained on the single feature cannot output the correct classification. In addition, complex background noise in the image degrades the quality of the feature data, which not only increases the difficulty of classifier training but also reduces classification accuracy. Our proposed feature fusion method, by contrast, realizes feature complementarity and reduces the influence of the inherent defects of single features.

In previous studies, Lu et al. (2018) used ResNet to detect normal images, cystoid ME, serous macular detachment, epiretinal membrane, and macular hole based on a single deep learning feature extraction method. The accuracy of their method for detecting cystoid ME was 84.8%, much lower than our result, which also confirms that feature fusion can improve model accuracy compared with single-feature extraction. Chen Y. et al. (2021) used a convolutional neural network to classify AMD, and Chan et al. (2018) fused information from AlexNet, VggNet, and GoogleNet in a decision model for automatic classification of normal and DME OCT images. Although these models performed well, they lack interpretability. Grad-CAM was introduced in our study to overcome this common drawback of DL models. It uses the gradient of the target class, propagated to the final convolutional layer, to generate a rough localization map that visualizes the features (Yang et al., 2021). Grad-CAM avoids the CAM approach’s requirement to modify the model architecture; compared with other interpretation methods, its computational complexity is lower and the interpretability of the model is increased. It also combines the advantages of fine-grained gradient visualization (which alone cannot localize the relevant region) and class-discriminative localization (which alone cannot provide high resolution). The Grad-CAM heatmaps in our study highlighted the important areas on which the DL models probably focused when extracting features; these are the same areas in which human readers recognize ME. This is a good example of Grad-CAM correctly identifying the pathologic region of an OCT image.

Of course, this study also has shortcomings. First, the OCT images were collected from a single center, so the sample does not represent the entire patient population. Second, single-omics methods were used; for multi-classification, multi-omics data can achieve better accuracy (Lin et al., 2020). Third, the accuracy of our models still needs to be improved. Therefore, in future studies, we will incorporate multicenter data to reinforce our conclusions and combine multi-omics techniques to automate the classification of ME based on SD-OCT images and color fundus photographs.

Conclusion

In this study, an artificial intelligence method based on multi-feature fusion was introduced for automatic ME classification on SD-OCT images. The results showed that the model could accurately classify DME, AMD, RVO, and CSC on SD-OCT images. The Grad-CAM heatmaps highlighted that the important areas from which the DL models extracted features were the same areas in which human readers recognize ME.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the Medical Ethics Committee of the Jiangxi Provincial People’s Hospital. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

FG, F-PW, and Y-LZ contributed to data collection, statistical analyses, and wrote the manuscript. All authors read and approved the final manuscript, contributed to the manuscript, and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Alsaih, K., Lemaitre, G., Rastgoo, M., Massich, J., Sidibé, D., and Meriaudeau, F. (2017). Machine learning techniques for diabetic macular edema (DME) classification on SD-OCT images. Biomed. Eng. Online 16:68. doi: 10.1186/s12938-017-0352-9

Chan, G. C., Kamble, R., Müller, H., Shah, S. A., Tang, T. B., and Meriaudeau, F. (2018). “Fusing results of several deep learning architectures for automatic classification of normal and diabetic macular edema in optical coherence tomography,” in Proceedings of the 2018 40th annual international conference of the IEEE engineering in medicine and biology society (EMBC), Vol. 2018, (Honolulu, HI: IEEE), 670–673. doi: 10.1109/EMBC.2018.8512371

Chen, T., Lim, W. S., Wang, V. Y., Ko, M., Chiu, S., Huang, Y., et al. (2021). Artificial intelligence–assisted early detection of retinitis pigmentosa–the most common inherited retinal degeneration. J. Digit. Imaging 34, 948–958. doi: 10.1007/s10278-021-00479-6

Chen, Y., Huang, W., Ho, W., and Tsai, J. (2021). Classification of age-related macular degeneration using convolutional-neural-network-based transfer learning. BMC Bioinform. 22(Suppl. 5):99. doi: 10.1186/s12859-021-04001-1

Decroos, F. C., Stinnett, S. S., Heydary, C. S., Burns, R. E., and Jaffe, G. J. (2013). Reading center characterization of central retinal vein occlusion using optical coherence tomography during the COPERNICUS trial. Transl. Vis. Sci. Technol. 2:7. doi: 10.1167/tvst.2.7.7

Huang, T., Yang, R., Shen, L., Feng, A., Li, L., He, N., et al. (2022). Deep transfer learning to quantify pleural effusion severity in chest X-rays. BMC Med. Imaging 22:100. doi: 10.1186/s12880-022-00827-0

Khalid, S., Akram, M. U., Hassan, T., Nasim, A., and Jameel, A. (2017). Fully automated robust system to detect retinal edema, central serous chorioretinopathy, and age related macular degeneration from optical coherence tomography images. Biomed Res. Int. 2017:7148245. doi: 10.1155/2017/7148245

Khan, M. U., and Hasan, M. A. (2020). Hybrid EEG-fNIRS BCI fusion using multi-resolution singular value decomposition (MSVD). Front. Hum. Neurosci. 14:599802. doi: 10.3389/fnhum.2020.599802

Kim, J., Park, I. W., and Kwon, S. (2020). Factors predicting final visual outcome in quiescent proliferative diabetic retinopathy. Sci. Rep. 10:17233. doi: 10.1038/s41598-020-74184-9

LaRocca, F., Nankivil, D., Farsiu, S., and Izatt, J. A. (2014). True color scanning laser ophthalmoscopy and optical coherence tomography handheld probe. Biomed. Opt. Express 5, 3204–3216. doi: 10.1364/BOE.5.003204

Leung, C. K., Cheung, C. Y., Weinreb, R. N., Lee, G., Lin, D., Pang, C. P., et al. (2008). Comparison of macular thickness measurements between time domain and spectral domain optical coherence tomography. Invest. Ophthalmol. Vis. Sci. 49, 4893–4897. doi: 10.1167/iovs.07-1326

Lin, Y., Zhang, W., Cao, H., Li, G., and Du, W. (2020). Classifying breast cancer subtypes using deep neural networks based on multi-Omics data. Genes 11:888. doi: 10.3390/genes11080888

Liu, Y., Carass, A., He, Y., Antony, B. J., Filippatou, A., Saidha, S., et al. (2019). Layer boundary evolution method for macular OCT layer segmentation. Biomed. Opt. Express 10, 1064–1080. doi: 10.1364/BOE.10.001064

Lu, W., Tong, Y., Yu, Y., Xing, Y., Chen, C., and Shen, Y. (2018). Deep learning-based automated classification of multi-categorical abnormalities from optical coherence tomography images. Transl. Vis. Sci. Technol. 7:41. doi: 10.1167/tvst.7.6.41

Massamba, N., Sellam, A., Butel, N., Skondra, D., Caillaux, V., and Bodaghi, B. (2019). Use of fundus autofluorescence combined with optical coherence tomography for diagnose of geographic atrophy in age-related macular degeneration. Med. Hypothesis Discov. Innov. Ophthalmol. 8, 298–305.

McGill, T. J., Bohana-Kashtan, O., Stoddard, J. W., Andrews, M. D., Pandit, N., Rosenberg-Belmaker, L. R., et al. (2017). Long-term efficacy of GMP grade Xeno-Free hESC-derived RPE cells following transplantation. Transl. Vis. Sci. Technol. 6:17. doi: 10.1167/tvst.6.3.17

Murthy, R. K., Haji, S., Sambhav, K., Grover, S., and Chalam, K. V. (2016). Clinical applications of spectral domain optical coherence tomography in retinal diseases. Biomed. J. 39, 107–120. doi: 10.1016/j.bj.2016.04.003

Narayanan, R., Kelkar, A., Abbas, Z., Goel, N., Soman, M., Naik, N., et al. (2021). Sub-optimal gain in vision in retinal vein occlusion due to under-treatment in the real world: Results from an open-label prospective study of Intravitreal Ranibizumab. BMC Ophthalmol. 21:33. doi: 10.1186/s12886-020-01757-7

Ouyang, Y., Heussen, F. M., Keane, P. A., Sadda, S. R., and Walsh, A. C. (2013). The retinal disease screening study: Prospective comparison of nonmydriatic fundus photography and optical coherence tomography for detection of retinal irregularities. Invest. Ophthalmol. Vis. Sci. 54, 1460–1468. doi: 10.1167/iovs.12-10727

Schellevis, R. L., Dijk, E. H., Breukink, M. B., Keunen, J. E., Santen, G. W., Hoyng, C. B., et al. (2019). Exome sequencing in families with chronic central serous chorioretinopathy. Mol. Genet. Genomic Med. 7:e00576. doi: 10.1002/mgg3.576

Sokolova, M., and Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 45, 427–437. doi: 10.1016/j.ipm.2009.03.002

Song, R., Jiang, J., and Wang, H. (2022). Macular edema and visual acuity observation after cataract surgery in patients with diabetic retinopathy. J. Healthc. Eng. 2022:3311751. doi: 10.1155/2022/3311751

Srinivasan, P. P., Kim, L. A., Mettu, P. S., Cousins, S. W., Comer, G. M., Izatt, J. A., et al. (2014). Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt. Express 5, 3568–3577. doi: 10.1364/BOE.5.003568

Turchin, A., Shubina, M., Breydo, E., Pendergrass, M., and Einbinder, J. (2009). Comparison of information content of structured and narrative text data sources on the example of medication intensification. J. Am. Med. Inform. Assoc. 16, 362–370. doi: 10.1197/jamia.M2777

Verma, G. K., and Tiwary, U. S. (2014). Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals. Neuroimage 102 Pt 1, 162–172. doi: 10.1016/j.neuroimage.2013.11.007

Wang, H., Zheng, L., Pan, S., Yan, T., and Su, Q. (2022). Image recognition of pediatric pneumonia based on fusion of texture features and depth features. Comput. Math. Methods Med. 2022:1973508. doi: 10.1155/2022/1973508

Wang, Y., Zhang, Y., Yao, Z., Zhao, R., and Zhou, F. (2016). Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images. Biomed. Opt. Express 7, 4928–4940. doi: 10.1364/BOE.7.004928

Zhang, P., Wang, X., Chen, J., and You, W. (2017). Feature weight driven interactive mutual information modeling for heterogeneous bio-signal fusion to estimate mental workload. Sensors 17:2315. doi: 10.3390/s17102315
