J. Imaging, Vol. 8, Pages 321: A Multimodal Knowledge-Based Deep Learning Approach for MGMT Promoter Methylation Identification

Figure 1. Schematic overview of the proposed approach: on the left, the Data Preparation step generates isotropic, normalized acquisitions; in the middle, the Knowledge-Based Filtering (KBF) step leverages medical knowledge to pre-select, in an unsupervised manner, the ROIs corresponding to suspect lesions; on the right, the MGMT promoter methylation identification step adopts a CNN to detect the methylation process.

Figure 2. Illustration of the processes involved in the “Data Preparation” step. The volume retrieval and scaling steps create 3D acquisitions with an isotropic resolution of 1×1×1 mm³; the Rotation step creates a set of volumes in the sagittal projection; the Skull stripping step removes the tissue outside the brain.

Figure 3. An illustrative example of the threshold operations performed during the KBF procedure on FLAIR and T1-w slices (first image), reported in the first and second rows, respectively. The red filter (second image) uses the mode value, and the third image shows the output of this first filtering step. The light-blue filter (fourth image) keeps the highest 25% of values in the FLAIR acquisition and the lowest 25% in the T1-w acquisition. The output of the KBF procedure is shown in the fifth image.
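The two threshold operations in the caption above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the function name `kbf_mask`, the histogram binning used to estimate the mode, and the exact percentile convention are all assumptions; the inputs are assumed to be co-registered 2D slices.

```python
import numpy as np

def kbf_mask(flair, t1w):
    """Sketch of the KBF thresholding: a mode-based filter on FLAIR
    combined with percentile filters on both sequences."""
    # Mode-based filter (red in Figure 3): keep FLAIR voxels brighter
    # than the most frequent non-zero intensity.
    nz = flair[flair > 0]
    hist, edges = np.histogram(nz, bins=64)
    mode_val = edges[np.argmax(hist)]
    mode_mask = flair > mode_val

    # Percentile filter (light blue): highest 25% of FLAIR intensities
    # and lowest 25% of non-zero T1-w intensities.
    flair_hi = flair >= np.percentile(nz, 75)
    t1_nz = t1w[t1w > 0]
    t1_lo = (t1w > 0) & (t1w <= np.percentile(t1_nz, 25))

    # The final preselection mask is the intersection of the filters.
    return mode_mask & flair_hi & t1_lo

rng = np.random.default_rng(0)
flair = rng.uniform(0.0, 1.0, (64, 64))
t1w = rng.uniform(0.0, 1.0, (64, 64))
mask = kbf_mask(flair, t1w)
print(mask.shape, int(mask.sum()))
```

Because the mask is a pure intersection of per-voxel conditions, the step needs no training labels, which is what makes the pre-selection unsupervised.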

Figure 4. An illustrative example of the results produced by the KBF module on two patients: if the slice contains a tumor (top row), a large cluster is generated in the preselection mask, whereas in the opposite case (bottom row) the mask contains only sparse outliers.
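The distinction the caption draws — one large cluster versus sparse outliers — can be made operational by measuring the largest connected component of the preselection mask. The sketch below uses a plain flood fill; the function name and the 4-connectivity choice are illustrative assumptions, not the paper's code.

```python
import numpy as np
from collections import deque

def largest_cluster_size(mask):
    """Size of the largest 4-connected cluster in a binary 2D mask."""
    seen = np.zeros_like(mask, dtype=bool)
    best = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Breadth-first flood fill over the cluster.
                size, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, size)
    return best

tumor = np.zeros((32, 32), dtype=bool)
tumor[10:20, 10:20] = True      # one large cluster, as in the top row
outliers = np.zeros((32, 32), dtype=bool)
outliers[::7, ::11] = True      # scattered single-voxel outliers

print(largest_cluster_size(tumor), largest_cluster_size(outliers))
# → 100 1
```

Thresholding this cluster size gives a simple, unsupervised rule for keeping tumor-bearing slices and discarding slices whose mask contains only noise.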

Figure 5. An illustrative example of how the KBF module operates on three consecutive pairs of FLAIR and T1-w slices: the mode-based threshold is shown in red, while the light-blue line marks the 25% intensity threshold for the FLAIR and T1-w acquisitions.

Figure 6. The MGMTClassifier architecture, consisting of seven convolutional blocks with depth-wise separable convolutions interleaved with batch normalization and ReLU activations, followed by two fully connected layers and a ReLU activation.
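The depth-wise separable convolution named in the caption factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 point-wise channel mixing. A minimal NumPy sketch of the operation, with illustrative shapes and names (this is the generic building block, not the MGMTClassifier weights):

```python
import numpy as np

def depthwise_separable_conv(x, dw, pw):
    """x: (C, H, W) input; dw: (C, k, k) per-channel spatial filters;
    pw: (C_out, C) point-wise 1x1 mixing weights. Valid padding, stride 1."""
    c, h, w = x.shape
    k = dw.shape[1]
    oh, ow = h - k + 1, w - k + 1
    # Depth-wise step: each channel is filtered with its own k x k kernel.
    depth = np.empty((c, oh, ow))
    for ch in range(c):
        for i in range(oh):
            for j in range(ow):
                depth[ch, i, j] = np.sum(x[ch, i:i+k, j:j+k] * dw[ch])
    # Point-wise step: a 1x1 convolution mixes channels at each location.
    return np.tensordot(pw, depth, axes=([1], [0]))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
out = depthwise_separable_conv(x,
                               rng.standard_normal((8, 3, 3)),
                               rng.standard_normal((32, 8)))
print(out.shape)  # → (32, 14, 14)
```

The factorization is what makes the block cheap: for this example a standard 3×3 convolution would need 32·8·9 = 2304 weights, while the separable version needs only 8·9 + 32·8 = 328.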

Figure 7. Results of the Integrated Gradients [28] and Occlusion [29] methods on four different input volumes. The first two rows show negative samples, without the methylation process; the last two rows show positive instances, in which methylation is present.
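The Occlusion method [29] referenced in Figure 7 attributes importance by masking patches of the input and measuring how much the model's score drops. A toy sketch of the idea on a 2D slice, with a stand-in scoring function (the function names, patch size, and baseline value are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, baseline=0.0):
    """Occlusion sensitivity: replace each patch with a baseline value
    and record the score drop; large drops mark influential regions."""
    h, w = image.shape
    ref = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = baseline
            heat[i // patch, j // patch] = ref - score_fn(occluded)
    return heat

# Toy "model": the score is the mean intensity inside a fixed window,
# so occluding that window should dominate the heat map.
score = lambda img: img[8:12, 8:12].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, score, patch=4)
print(np.unravel_index(np.argmax(heat), heat.shape))  # → (2, 2)
```

Integrated Gradients works differently — it accumulates gradients along a path from a baseline to the input — but both methods answer the same question Figure 7 visualizes: which regions of the volume drive the methylation prediction.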

Table 1. 5-fold CV performances of models trained and tested on dataset A. The input sequence is KBF for both the 3D and 2D MGMTClassifier models and T1-w CE for the Tunisia.ai one. For each metric, the best value is reported in bold.

Model               ACC      SPE      SEN      PRE      F1       AUC
3D MGMTClassifier   55.09%   50.34%   59.74%   55.18%   57.37%   55.38%
2D MGMTClassifier   57.77%   54.44%   60.73%   59.93%   60.33%   53.55%
Tunisia.ai          52.31%   33.45%   69.38%   53.52%   60.30%   53.84%
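The metrics reported in Tables 1–5 follow the standard confusion-matrix definitions. A quick sketch (the function name is illustrative; AUC is omitted because it requires continuous scores rather than hard predictions):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """ACC, SPE, SEN, PRE and F1 from binary labels (1 = methylated)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    acc = (tp + tn) / len(y_true)
    spe = tn / (tn + fp)          # specificity: recall on negatives
    sen = tp / (tp + fn)          # sensitivity: recall on positives
    pre = tp / (tp + fp)
    f1 = 2 * pre * sen / (pre + sen)
    return {"ACC": acc, "SPE": spe, "SEN": sen, "PRE": pre, "F1": f1}

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m)
```

Reporting SPE and SEN separately matters here: the tables show models trading one for the other (e.g. Tunisia.ai's high sensitivity against low specificity), which a single accuracy figure would hide.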

Table 2. 5-fold CV performances of models trained and tested on dataset B. The input sequence is KBF for both the 3D and 2D MGMTClassifier models and T1-w CE for the Tunisia.ai one. For each metric, the best value is reported in bold.

Model               ACC      SPE      SEN      PRE      F1       AUC
3D MGMTClassifier   60.06%   74.03%   45.35%   62.40%   52.53%   59.80%
2D MGMTClassifier   55.66%   62.98%   45.31%   46.40%   45.85%   55.57%
Tunisia.ai          55.14%   54.31%   56.52%   42.39%   48.45%   57.56%

Table 3. 5-fold CV performances of models trained on dataset A and tested on dataset B. The input sequence is KBF for both the 3D and 2D MGMTClassifier models and T1-w CE for the Tunisia.ai one. For each metric, the best value is reported in bold.

Model               ACC      SPE      SEN      PRE      F1       AUC
3D MGMTClassifier   48.99%   57.80%   40.16%   48.68%   44.01%   48.78%
2D MGMTClassifier   52.58%   59.41%   42.98%   42.98%   42.98%   51.51%
Tunisia.ai          37.30%   26.72%   55.07%   36.54%   43.93%   49.58%

Table 4. 5-fold CV performances of models trained on dataset B and tested on dataset A. The input sequence is KBF for both the 3D and 2D MGMTClassifier models and T1-w CE for the Tunisia.ai one. For each metric, the best value is reported in bold.

Model               ACC      SPE      SEN      PRE      F1       AUC
3D MGMTClassifier   49.47%   65.94%   33.00%   49.21%   39.51%   50.57%
2D MGMTClassifier   51.66%   51.85%   51.49%   54.55%   52.98%   50.72%
Tunisia.ai          51.93%   28.35%   73.29%   52.90%   61.45%   50.83%

Table 5. 5-fold CV performances of models trained and tested on dataset A+B. The input sequence is KBF for both the 3D and 2D MGMTClassifier models and T1-w CE for the Tunisia.ai one. For each metric, the best value is reported in bold.

Model               ACC      SPE      SEN      PRE      F1       AUC
3D MGMTClassifier   56.81%   65.13%   48.58%   58.44%   53.06%   57.59%
2D MGMTClassifier   53.74%   48.11%   59.63%   52.34%   55.75%   55.17%
Tunisia.ai          56.88%   48.22%   65.96%   54.87%   59.91%   58.63%
