BOA: A CT-Based Body and Organ Analysis for Radiologists at the Point of Care

Computed tomography (CT) is an essential part of medical imaging and is used for diagnosis and disease monitoring in a wide variety of diseases, with ever-new fields of application made possible by new technologies such as photon-counting CT.1 The ability to automatically segment and quantify different body tissues and organs in CT images provides new opportunities for a variety of clinical applications targeting personalized medicine for diagnosis, treatment planning, and monitoring of disease progression.2–4

Recent advances in deep learning have led to the development of segmentation algorithms that can automatically delineate various anatomical structures with a high accuracy.5,6 However, many of these algorithms can only be executed via scripts by dedicated IT personnel and can only segment one anatomical structure at a time, impeding clinical use.7–9

To address these limitations, the body and organ analysis (BOA) was developed, which uses the open-source Orthanc research PACS (Picture Archiving and Communication System) server10 to enable seamless integration with various other PACS systems as a DICOM node. As a result, the BOA can be easily incorporated into existing radiological workflows, allowing for efficient and accurate segmentation of body structures in clinical practice. It combines 2 segmentation algorithms (body composition analysis [BCA]11 and TotalSegmentator12,13) to assign almost all pixels in the CT to an anatomical region. The BCA segments the body composition (muscles, adipose tissue, and bones) in various body regions (eg, abdominal or thoracic cavity) in CT images, whereas the TotalSegmentator delineates numerous organs and bones. Previous studies have demonstrated the potential of body composition–derived parameters in various clinical contexts, such as assessing sarcopenia as a biomarker in oncology patients,14,15 patients with infectious diseases such as SARS-CoV-2,16 or assessing other adipose tissue–derived biomarkers in various diseases.17–19

All models within the BOA are implemented using the nnU-Net architecture,13 which has been shown to be effective for biomedical image segmentation tasks.20–22 Furthermore, the BOA is available as an open-source tool on GitHub (https://github.com/UMEssen/Body-and-Organ-Analysis), facilitating its use and adaptation by the research community.

The goal of this manuscript is to describe the workflow of the BOA and discuss its potential applications and impact on clinical workflows. In addition, an evaluation of the segmentation accuracy of the newly trained BCA models is provided.

MATERIALS AND METHODS

Ethics Statement

This study was approved by the ethics committee of the investigating hospital (approval number 23-11283-BO). Because of the study's retrospective nature, the requirement of written informed consent was waived by the ethics committee. All data were fully anonymized before being included in the study.

BCA Dataset

The original BCA version developed by Koitka et al11 used a dataset consisting of 50 CT scans (40 train, 10 test) randomly selected from abdominal CT studies carried out at the investigating hospital between 2015 and 2019. Specific indications for these studies were not considered. Based on the distribution of clinical studies in the department, over 50% of the scans were likely performed for oncological indications. Each CT volume in the dataset had a 5-mm slice thickness and was reconstructed using a soft tissue convolutional reconstruction kernel. The data contained annotations for 6 distinct labels: background (outside the human body), muscle, bones, subcutaneous tissue, abdominal cavity, and thoracic cavity.11 Since the original publication, the dataset has been continuously expanded to include 360 CT examinations (349 patients) from the investigating hospital between 2013 and 2020. To ensure a diverse and representative dataset, CT images were randomly drawn in equal amounts across various categories: whole body, neck, thorax, and abdomen. The number of CT scans per category and the patient characteristics of the extracted collective are presented in Table 1.

TABLE 1 - Patient Characteristics for the Newly Collected BCA Collective Including 360 CT Scans (349 Patients)

CT Type    | Total | Contrast Enhanced | Age (Male)                          | Age (Female)
All        | 360   | 180               | n = 182 (median, 62.00; IQR, 18.75) | n = 162 (60.15 ± 14.61)
Whole body | 60    | 0                 | n = 27 (61.30 ± 12.53)              | n = 31 (60.13 ± 13.14)
Neck       | 60    | 60                | n = 33 (52.82 ± 18.57)              | n = 26 (58.42 ± 14.42)
Thorax     | 120   | 60                | n = 63 (median, 64.00; IQR, 14.00)  | n = 56 (59.39 ± 15.47)
Abdomen    | 120   | 60                | n = 62 (61.71 ± 11.77)              | n = 54 (61.24 ± 14.64)

For each CT type, the total number of CTs, the number of contrast-enhanced CTs, and the number of male and female patients with the respective age are reported. For normally distributed age, the mean and the standard deviation are reported (in the form mean ± standard deviation); otherwise, the median and the IQR are used.

BCA, body composition analysis; CT, computed tomography; IQR, interquartile range.

The dataset underwent manual annotations, expanding the original annotation to 11 semantic body labels: subcutaneous tissue, muscle, bone, abdominal cavity, thoracic cavity, glands, mediastinum, pericardium, breast implant, brain, and spinal cord. The glands label was defined as the union of the parotid, thyroid, and submandibular glands. The segmentations from Koitka et al11 were used to create suggestions, which were corrected by medical trainees and used to continuously train and evaluate the updated networks. Throughout the annotation process, a quality assurance team of senior annotators and a data scientist, supervised by a consultant radiologist with 7 years' experience in CT imaging, refined and reviewed the segmentations manually and automatically. The manual assessment relied on the expertise of human reviewers and on the adherence to the internal annotation guidelines. The automated quality assessment involved several test scenarios, examining the existence or absence of labels, atypical instance quantities for specific labels, and relationships between neighboring labels. For example, a connected component analysis was executed to detect potential instances with incorrect adjacent elements.
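As an illustration of such an automated check, the following sketch counts connected components per label mask and flags atypical instance counts; the function names and thresholds are illustrative assumptions rather than the BOA's actual quality assurance rules.

```python
import numpy as np
from scipy import ndimage


def count_instances(mask: np.ndarray) -> int:
    """Number of connected components in a binary label mask."""
    _, num_components = ndimage.label(mask)
    return num_components


def flag_atypical_instances(mask: np.ndarray, max_expected: int) -> bool:
    """Flag a label for manual review if it splits into more connected
    components than anatomically plausible."""
    return count_instances(mask) > max_expected


# Illustrative thresholds only: e.g., the pericardium is expected to form a
# single connected region, the glands label (parotid, thyroid, submandibular)
# at most a handful, and breast implants at most two.
EXPECTED_MAX = {"pericardium": 1, "glands": 6, "breast_implant": 2}
```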

CT Acquisition

The CT scans were obtained using various scanners from a single vendor (SOMATOM Definition AS, SOMATOM Definition AS+, SOMATOM Definition Flash, SOMATOM Definition Edge, SOMATOM Force, and Biograph128 [Siemens Healthineers AG, Erlangen, Germany]). All scans had a slice thickness of 5 mm and were obtained using a pitch factor ranging from 0.4 to 1.4. The collimation was set at either 0.6 mm or 1.2 mm, and the tube current varied from 17 to 926 mAs. The reconstructions of the CT images were performed using different kernels, namely, I31f, I30f, I31s, Br32d, or Br32f.

Noncontrast scans were acquired without any delay. In the case of contrast-enhanced thorax CTs, an arterial phase was obtained using bolus tracking with a region of interest placed in the aorta. For contrast-enhanced abdominal CTs, a venous phase was acquired with a delay of 70–90 seconds after the injection of the contrast media. For the acquisition of a neck CT, a dual injection protocol was used with a total delay of 70 seconds. The contrast media was injected at a dose of 1 mL/kg, with a flow rate ranging from 2.50 to 4.50 mL/s.

BCA Models

The nnU-Net framework,13 which was previously used to train the TotalSegmentator models, was also used to train the BCA models. This self-configuring method enables optimization based on the provided dataset, delivering accurate segmentation results. The framework generates tailored U-Net architectures and applies built-in data augmentation and training strategies.13 The proposed BCA network was trained for 1000 epochs using mixed precision training to reduce memory usage and speed up training. The optimizer was stochastic gradient descent with a momentum of 0.99 to reduce the likelihood of getting stuck in local minima during training. The learning rate was decreased over time using a polynomial learning rate schedule. The dataset was stratified based on the type of CT scan and on whether the scan includes a breast implant and was divided into 300 scans for training and 60 scans for testing. In total, 16 patients with breast implants were identified, of whom 12 were used for the training set and 4 for the test set. The training process used 5-fold cross-validation (each fold had 240 scans for training and 60 for validation), and at inference time, ensembling was used to improve the accuracy of the models. In a postprocessing step, the body regions were subclassified as described by Koitka et al11 to create a tissue segmentation with the following classes: muscle, bone, subcutaneous adipose tissue, visceral adipose tissue, intermuscular adipose tissue, epicardial adipose tissue, paracardial adipose tissue, and total adipose tissue. An example of the created segmentations can be seen in Figure 1.
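The following sketch illustrates the training configuration described above (SGD with momentum 0.99, polynomial learning rate decay over 1000 epochs, mixed precision); it is a simplified illustration rather than nnU-Net's actual code, and the initial learning rate and decay exponent are assumptions based on common nnU-Net defaults.

```python
import torch

INITIAL_LR = 0.01   # assumption (typical nnU-Net default), not stated in the text
MAX_EPOCHS = 1000   # as described above
model = torch.nn.Conv3d(1, 12, kernel_size=3)  # placeholder for the generated 3D U-Net

optimizer = torch.optim.SGD(model.parameters(), lr=INITIAL_LR, momentum=0.99, nesterov=True)
scaler = torch.cuda.amp.GradScaler()  # enables mixed-precision training


def poly_lr(epoch: int, exponent: float = 0.9) -> float:
    """Polynomial learning rate decay over the training run."""
    return INITIAL_LR * (1 - epoch / MAX_EPOCHS) ** exponent


for epoch in range(MAX_EPOCHS):
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(epoch)
    # one training epoch would follow here, wrapping the forward pass in
    # torch.cuda.amp.autocast() and using scaler.scale(loss).backward()
```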

FIGURE 1:

Example segmentation output from the BCA. The BCA segmentation comprises 11 regions (A–D): subcutaneous tissue (brown), muscle (yellow), bone (light pink), abdominal cavity (green), thoracic cavity (blue), glands (bright green), mediastinum (light blue), pericardium (pink), breast implant (red, only visible in C), brain (dark pink, only visible in A at the very top and in D), and spinal cord (gray, only in C). In addition, a secondary tissue segmentation is computed: muscle (yellow), bone (pink), subcutaneous adipose tissue (brown), visceral adipose tissue (green), intermuscular adipose tissue (dark green), epicardial adipose tissue (dark pink), and paracardial adipose tissue (light blue). The segmentations are shown in coronal (A and D) and axial (B–D) view. The patient was a 55-year-old woman.

TotalSegmentator

For the organ segmentation networks, the models from the TotalSegmentator12,13 were used. The tool is able to segment up to 104 different labels, each corresponding to a specific structure in the human body, including the spleen, liver, stomach, pancreas, and urinary bladder, as well as the lungs, heart, esophagus, trachea, brain, and small bowel. The dataset also includes annotations for various bones, such as vertebrae, ribs, humerus, scapula, clavicle, femur, hip, and sacrum. Furthermore, the dataset contains annotations for several muscles, including gluteus maximus, gluteus medius, gluteus minimus, autochthon, and iliopsoas. In addition, the model provides segmentations of body parts (extremities and trunk).

A complete label list of the TotalSegmentator model can be found in the Supplementary Materials (https://links.lww.com/RLI/A882), and an example of the segmentation can be seen in Figure 2.

FIGURE 2:

Example patient from the TotalSegmentator dataset. The segmentations were performed for 104 regions and are visualized in coronal (A) and axial view (B–D). The patient was a 63-year-old man.

Package Workflow

The presented workflow outlines a systematic approach for processing and analyzing CT scans, using the BOA end point within a PACS. The initial stage involves a radiologist selecting CT scans, which are then sent to the BOA Orthanc with a DICOM send operation and are stored for further processing. For each series, a unique task is automatically created and placed into a working list. A worker constantly monitors the list for new tasks and starts the computation as soon as a task is ready to be accepted. The Python package Celery23 (version 5.2.7) is used as a distributed task queue, whereas RabbitMQ24 is used as a message broker. The worker initially collects essential information about the CT, including unique identifiers, dates, and DICOM tags. Subsequently, the segmentations of the CT scan are computed using TotalSegmentator and BCA, with computation performed either via Triton25 or directly on the graphics processing unit (GPU). After the segmentation, various measurements are computed for each region, such as volume, statistical information about the Hounsfield units (HU) within the region (including mean, standard deviation, minimum, maximum, median, and 25th and 75th percentiles), and contrast-to-noise ratio (CNR). Let μ_region denote the mean HU of the region, and μ_la and μ_ra represent the mean HU of the left and right autochthonous musculature, respectively. Similarly, let σ_la and σ_ra denote the standard deviations of the HU of the left and right autochthonous musculature. Then, the CNR for each segmented region is computed using the following formula:

$$\mathrm{CNR}_{\mathrm{region}} = \frac{\mu_{\mathrm{region}} - \frac{\mu_{\mathrm{la}} + \mu_{\mathrm{ra}}}{2}}{\frac{\sigma_{\mathrm{la}} + \sigma_{\mathrm{ra}}}{2}}$$
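A minimal sketch of how the per-region HU statistics and the CNR defined above could be computed from a CT volume and boolean label masks (NumPy arrays; the function and variable names are illustrative assumptions):

```python
import numpy as np


def region_stats(ct_hu: np.ndarray, mask: np.ndarray) -> dict:
    """Descriptive HU statistics for one segmented region."""
    values = ct_hu[mask]
    return {
        "volume_voxels": int(mask.sum()),
        "mean": float(values.mean()),
        "std": float(values.std()),
        "min": float(values.min()),
        "max": float(values.max()),
        "median": float(np.median(values)),
        "p25": float(np.percentile(values, 25)),
        "p75": float(np.percentile(values, 75)),
    }


def cnr(ct_hu: np.ndarray, region: np.ndarray,
        autochthon_left: np.ndarray, autochthon_right: np.ndarray) -> float:
    """Contrast-to-noise ratio of a region against the autochthonous back muscles."""
    mu_region = ct_hu[region].mean()
    mu_ref = (ct_hu[autochthon_left].mean() + ct_hu[autochthon_right].mean()) / 2
    sigma_ref = (ct_hu[autochthon_left].std() + ct_hu[autochthon_right].std()) / 2
    return (mu_region - mu_ref) / sigma_ref
```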

Furthermore, the BCA features are computed, which include multiple aggregation regions (eg, whole body, thorax, abdomen) and produce a report on the quantity of fat, muscle, and bone present in each region. A similar report is also generated for each CT scan slice. All computed measurements are stored in an Excel file. An example of such an output can be viewed in the Appendix (see Supplementary example.xlsx, https://links.lww.com/RLI/A878). In addition, the BCA features are stored in a Portable Document Format (PDF) report, which contains the measurements as well as different views of the segmentations (see Supplementary report.pdf for an example, https://links.lww.com/RLI/A879). This is particularly useful for clinicians, who can review the PDF to assess the correctness of the segmentations. The TotalSegmentator also provides a rendering of the segmentations, which is made available to the clinicians (see Supplementary preview_total.jpeg, https://links.lww.com/RLI/A880).

The system offers the optional functionality of storing the Excel file and report in a Samba network share (SMB), whereas computed segmentations can also be uploaded to a DicomWeb-capable PACS in DICOM-seg format, generated by pydicom-seg.26 In addition, a viewer such as OHIF27 may be connected to the DicomWeb-capable PACS to visualize the segmentations and for quality assurance. The described workflow is visualized in Figure 3.
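The task queue described above could be wired up roughly as in the following sketch; the broker URL, task name, and function body are illustrative assumptions and not the BOA's actual implementation.

```python
from celery import Celery

# RabbitMQ acts as the message broker; the URL below is a placeholder.
app = Celery("boa", broker="amqp://guest:guest@rabbitmq:5672//")


@app.task
def process_series(series_instance_uid: str) -> None:
    """One BOA task: fetch the series from the Orthanc PACS, run the
    TotalSegmentator and BCA models, compute the measurements, and store
    the Excel file, PDF report, and DICOM-seg output."""
    ...  # the actual processing steps are omitted in this sketch


# A receiving component would enqueue one task per incoming series, e.g.:
# process_series.delay("<SeriesInstanceUID>")
```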

FIGURE 3:

Exemplary BOA workflow. The process starts with a DICOM send operation from the radiologist's workstation. The CT is received by the Orthanc PACS, which proceeds to create a task. Once processing power is available, some DICOM tags are stored, and the segmentations are computed using BCA and TotalSegmentator. The segmentations may either be computed directly on the GPU or by using an existing Triton server instance. Finally, the results are saved to persistent storage by writing the resulting Excel file with the segmentation features and the BCA report to a Samba network share (SMB) and by uploading the segmentations to a PACS.

The BOA can be downloaded open-source and used for research purposes at the following link: https://github.com/UMEssen/Body-and-Organ-Analysis.

The described workflow provides a comprehensive and systematic approach for processing and analyzing CT scans, offering a range of information on different regions of the body and their compositions.

Evaluation

In the proposed study, the body regions generated with the BCA were compared with those of Koitka et al11 using the Sørensen-Dice score28 for the comparable body regions on the dataset described in Table 1. First, for better comparability, the Sørensen-Dice score was computed in the same manner as in Koitka et al,11 that is, by pooling all predicted voxels of the test set together. Second, the average "per-patient" Sørensen-Dice score over the patients is also reported together with its standard deviation. Moreover, the maximum symmetric Hausdorff distance,29,30 the 95th percentile symmetric Hausdorff distance, and the average symmetric surface distance are also computed, which are commonly used metrics to evaluate segmentations31–33 and are also used for segmentation challenges.34
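A sketch of the 2 Sørensen-Dice aggregation schemes used here, assuming boolean NumPy masks and a single label (simplified with respect to the multilabel evaluation actually performed):

```python
import numpy as np


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Sørensen-Dice overlap of two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom > 0 else 1.0


def pooled_dice(preds: list[np.ndarray], refs: list[np.ndarray]) -> float:
    """Dice over all test voxels pooled together, as in Koitka et al."""
    inter = sum(np.logical_and(p, r).sum() for p, r in zip(preds, refs))
    denom = sum(p.astype(bool).sum() + r.astype(bool).sum() for p, r in zip(preds, refs))
    return 2.0 * inter / denom


def per_patient_dice(preds: list[np.ndarray], refs: list[np.ndarray]) -> tuple[float, float]:
    """Mean and standard deviation of the per-patient Dice scores."""
    scores = [dice(p, r) for p, r in zip(preds, refs)]
    return float(np.mean(scores)), float(np.std(scores))
```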

In addition, a separate dataset of 150 whole-body CT scans was randomly selected from data collected at the investigating hospital between 2013 and 2016 and was used to evaluate the segmentation coverage of the BOA. This dataset contained 150 patients (median, 59.00; interquartile range [IQR], 18.50): 83 male (median, 62.00; IQR, 18.50) and 67 female (57.13 ± 13.21) and was annotated with a whole-body mask. The whole-body mask was generated using the union of the trunk and extremities and body part segmentation of the TotalSegmentator, which were reviewed and manually corrected (Supplementary Fig. S1, https://links.lww.com/RLI/A881). The overall percentage of segmented human body was computed in terms of voxel body coverage, that is, the percentage of whole-body mask that was covered by a segmentation. The voxel body coverage was evaluated across 3 different approaches: only the TotalSegmentator, only the BCA, and their combination (BOA). For the BCA, the tissue segmentation was used to compute the coverage, because the body regions cover a substantial part of the body, but do not offer a fine granular segmentation. Accordingly, the body parts of the TotalSegmentator were not used to calculate the coverage because they do not offer a fine granular segmentation.
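A minimal sketch of the voxel body coverage computation, assuming boolean NumPy masks and illustrative names:

```python
import numpy as np


def voxel_body_coverage(segmentation_union: np.ndarray, body_mask: np.ndarray) -> float:
    """Percentage of the whole-body mask covered by any segmentation label.

    segmentation_union: boolean array, True wherever any of the evaluated
        labels (TotalSegmentator, BCA tissues, or their union for the BOA)
        is present.
    body_mask: boolean whole-body reference mask.
    """
    covered = np.logical_and(segmentation_union, body_mask).sum()
    return 100.0 * covered / body_mask.sum()
```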

Statistical Analysis

For normally distributed variables, the mean and standard deviation as measures of central tendency and variability were reported. The normality assumption was checked using the Shapiro-Wilk test.35 On the other hand, for variables that were not normally distributed, the median and IQR were used. To check whether the results from Koitka et al11 and this version were statistically different, a paired t test36 was performed. Normality was ensured before running the test.

To test for statistical significance for the voxel body coverage, the Mann-Whitney U test37 from the scipy package38 (version 1.9.3) was used. If the P value was less than 0.05, the differences between groups were reported as statistically significant.
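Both tests are available in scipy.stats; the sketch below uses the per-class Dice values from Table 2 for the paired t test and random placeholder arrays for the coverage comparison, so it does not reproduce the exact figures reported in the Results.

```python
import numpy as np
from scipy.stats import mannwhitneyu, shapiro, ttest_rel

# Per-class Dice scores for the 5 comparable classes (from Table 2):
# subcutaneous tissue, muscle, abdominal cavity, thoracic cavity, bone.
dice_original = np.array([0.962, 0.933, 0.973, 0.965, 0.942])  # Koitka et al
dice_nnunet = np.array([0.971, 0.959, 0.983, 0.982, 0.961])    # nnU-Net BCA

_, p_normal = shapiro(dice_nnunet - dice_original)   # normality of the differences
_, p_paired = ttest_rel(dice_nnunet, dice_original)  # paired t test

# Mann-Whitney U test for voxel body coverage; the arrays below are random
# placeholders standing in for the 150 per-patient coverage values per tool.
coverage_boa = np.random.default_rng(0).normal(93, 2, 150)
coverage_total = np.random.default_rng(1).normal(31, 6, 150)
_, p_coverage = mannwhitneyu(coverage_boa, coverage_total)
significant = p_coverage < 0.05
```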

RESULTS

A comparison between the nnU-Net BCA and Koitka et al11 showed that, for the comparable body region classes, the BCA achieved higher Sørensen-Dice scores, as indicated in Table 2. The average Sørensen-Dice score was 0.971 in comparison to 0.955 of the original publication. This suggests that the newly trained BCA is more effective in accurately identifying and segmenting these anatomical regions. A paired t test showed that the segmentation accuracies differed significantly (P = 0.0066).

TABLE 2 - Sørensen-Dice Scores of the Body Region Classes of the BCA Body Regions

Label                             | Koitka et al11               | BCA                          | BCA Sørensen-Dice Per Patient (Average ± SD [CI])
Subcutaneous tissue               | 0.962                        | 0.971                        | 0.956 ± 0.039 [0.946, 0.966]
Muscle                            | 0.933                        | 0.959                        | 0.955 ± 0.014 [0.951, 0.958]
Abdominal cavity                  | 0.973                        | 0.983                        | 0.983 ± 0.006 [0.982, 0.985]
Thoracic cavity                   | 0.965                        | 0.982                        | 0.977 ± 0.018 [0.972, 0.982]
Bone                              | 0.942                        | 0.961                        | 0.958 ± 0.019 [0.953, 0.963]
Glands                            | —                            | 0.766                        | 0.745 ± 0.165 [0.691, 0.798]
Pericardium                       | —                            | 0.964                        | 0.965 ± 0.015 [0.961, 0.969]
Breast implant                    | —                            | 0.943                        | 0.468 ± 0.470 [0.049, 0.888]
Mediastinum                       | —                            | 0.88                         | 0.848 ± 0.101 [0.821, 0.874]
Brain                             | —                            | 0.985                        | 0.979 ± 0.022 [0.969, 0.989]
Spinal cord                       | —                            | 0.896                        | 0.896 ± 0.079 [0.876, 0.917]
Average of Koitka et al11 classes | 0.955 ± 0.017 [0.934, 0.976] | 0.971 ± 0.11 [0.957, 0.985]  | 0.966 ± 0.012 [0.949, 0.982] (P = 0.0066)
Average of all classes            | —                            | 0.935 ± 0.063 [0.891, 0.98]  | 0.885 ± 0.149 [0.78, 0.989]

For each label, the Sørensen-Dice scores from Koitka et al11 and for the BCA trained with an nnU-Net are given. In addition, the average scores per patient together with the SD and the 95% CI are reported. Moreover, the averages over the comparable and for all classes are shown. For the average of Koitka et al11 classes, a paired t test was performed to compare it with the nnU-Net BCA. The 95% CIs are reported in square brackets.

BCA, body composition analysis; CI, confidence interval.

In addition, the BCA achieved an overall good segmentation performance for the newly introduced classes: brain (0.985), breast implant (0.943), glands (0.766), mediastinum (0.880), pericardium (0.964), and spinal cord (0.896). Including the newly segmented classes, the network achieved an average Sørensen-Dice score of 0.935, and per-patient Sørensen-Dice scores of 0.966 and 0.88 were achieved for the existing classes and for all classes, respectively. A comparison with other approaches from the literature is presented in Table S1 of the Supplementary Materials, https://links.lww.com/RLI/A882, using the per-patient Sørensen-Dice score.

In addition, the Sørensen-Dice scores for each type of CT scan were 0.936 for whole-body CTs, 0.909 for neck CTs, 0.938 for thorax CTs, and 0.935 for abdomen CTs. The Sørensen-Dice scores for contrast-enhanced CTs achieved an average of 0.929, whereas the noncontrast CT scans achieved a score of 0.94. The Sørensen-Dice scores for each class separately are reported in Supplementary Tables S2 and S4, https://links.lww.com/RLI/A882, for the Sørensen-Dice score and Supplementary Tables S3 and S5, https://links.lww.com/RLI/A882, for the per-patient average Sørensen-Dice score. The Hausdorff distance, the 95th percentile Hausdorff distance, and the average symmetric surface distance are reported in Supplementary Table S6 and in Tables S7–S12 for the different CT types (https://links.lww.com/RLI/A882).

In the present study, varying performances were observed among the 3 models with respect to voxel body coverage, which serves as an indicator of each model's ability to accurately map the human body's anatomical structure. The TotalSegmentator models produced a mean voxel body coverage of 31%, with a standard deviation of ±6%. This suggests a relatively moderate level of body coverage when compared with other evaluated models.

The BCA model, in contrast, showed a voxel body coverage of 75% with a standard deviation of ±6%. This significant increase indicates the BCA model's ability to capture and render a large portion of the human body's anatomy in regard to soft tissue and bones. The BOA demonstrated the highest performance, achieving a mean voxel body coverage of 93% with a standard deviation of ±2%. This value suggests the BOA model's near-complete coverage, thereby visualizing the good combination of the very different but comprehensive segmentation networks BCA and TotalSegmentator. A graphical representation of the voxel body coverage performance of the different models can be found in Figure 4. Figure 5 further illustrates the capability of the BOA model, showcasing an example of anatomical segmentations produced using BOA segmentations within Siemens' Cinematic Rendering39 (Siemens Healthineers, Munich, Germany).

FIGURE 4:

Body coverage for TotalSegmentator, BCA, and BOA. A, Boxplot visualization of the percentage voxel body coverage for the 3 models TotalSegmentator, BCA, and BOA, including a Mann-Whitney U test (****P ≤ 0.0001). B, Visualization of the voxel body coverage based on the available segmentation masks of each tool on the middle coronal slice of a volume. The patients in B were a 58-year-old man (row 1) and a 71-year-old man (row 2).

FIGURE 5:

Example segmentations viewed using Siemens' Cinematic Rendering. A, 3D rendering of the segmentation of bones, muscles, and fat. B, 3D rendering of the skeleton and organs. C, 3D rendering of the organs. The patient was a 53-year-old woman.

In addition, the time taken to compute all BOA segmentations and all the measurements and to generate the report for the CT scans from the test set of the BCA dataset is presented in Supplementary Table S13, https://links.lww.com/RLI/A882.

DISCUSSION

In this study, we evaluated the performance of an updated BCA algorithm when implemented using the nnU-Net architecture, which has been demonstrated to be effective for a variety of medical image segmentation applications.21,22 Our results demonstrated that the nnU-Net–based BCA outperformed the original BCA implementation from Koitka et al11 in terms of segmentation accuracy. This improvement can be attributed to the enlarged dataset (50 vs 360 CT examinations) and may also stem from the advanced design and optimization techniques used in the nnU-Net framework, which have been shown to enhance generalization and robustness across different datasets.13 In addition, new regions have been added, which provide a more comprehensive picture of the human body. The advantage of this is particularly evident when using chest or neck examinations, as only the abdomen was included in the initial publication. Some of these regions have previously unreported scores, as they are uncommon regions for segmentation, such as the mediastinum, abdominal cavity, thoracic cavity, and breast implant. Across all regions, most classes (subcutaneous tissue, muscle, abdominal cavity, thoracic cavity, bone, pericardium, brain) achieved segmentation accuracies over 0.95. A comparison with other segmentations from the literature also showed that the segmentations provided in this work were either on par with existing ones, achieved higher quality, or could not be compared with existing approaches. This holds for all but the glands label, which contains the parotid, thyroid, and submandibular glands and resulted in an average Sørensen-Dice score of 0.745, which may be due to the fact that the glands regions are comparatively smaller than the other classes and thus carry less weight in the optimization of the model.40 However, the best score reported in the literature for the thyroid gland alone was 0.767,41 which is of similar magnitude. Future work should investigate the impact of separating these 3 classes. The higher standard deviation (0.165) also shows that there is considerable variation among patients, which is accentuated by the small size of the segmented structures. Another class with a very high standard deviation is the breast implant class (0.46). In the test set, there are only 4 patients with a breast implant (Sørensen-Dice scores of 0.933, 0.968, 0.983, and 0.862), but in some cases, the model still predicted some pixels as breast implant even where the label was not present, yielding a Sørensen-Dice score of 0 for those cases and thus a high standard deviation. Additional postprocessing could prove beneficial for scans where a small number of voxels are predicted as breast implants within this class.

For some labels, there are no comparable results in the literature, and they are also part of the contribution of this study. For example, in existing publications, only the subcutaneous fat has been segmented42,43; however, there are no comparable values for the subcutaneous tissue, from which the subcutaneous adipose tissue is derived using thresholding.

Moreover, the BOA also works as an extension of other BCA tools that only provide measurements and biomarkers at specific parts of the body.44,45 The BOA is able to both compute the relevant metrics at a specific slice (eg, the level of the L3 vertebra) and provide a full picture of the patient's health status by computing a summary for the whole body.

Upon comparing the Sørensen-Dice quality achieved for different CT types, it becomes evident that the model achieved better results on the non–contrast-enhanced CT scans (0.94 vs 0.929), that whole-body, abdomen, and chest CTs achieved similar Sørensen-Dice scores (approximately 0.93), and that neck CTs showed a slightly worse performance (0.909).

The evaluation of the Hausdorff distance also showed similar results. The best performing classes were brain and spinal cord, whereas mediastinum and breast implant performed worse. For the contrast-enhanced images, the evaluation of the breast implant class also shows that only a single contrast-enhanced CT scan contained the breast implant label, as the standard deviation is 0. In particular for the neck CTs, the maximum symmetric Hausdorff distance was rather high, but the 95th percentile Hausdorff distance and the average symmetric surface distance were comparable to other regions. In the abdominal CTs, the mediastinum has both lower Sørensen-Dice scores (0.75) and distances with respect to the other CT types, probably because the mediastinum is cropped in this type of scan. In contrast to the Sørensen-Dice scores, the distance scores for the non-enhanced CT scans are better.

The unification of the segmentation algorithms of the TotalSegmentator and our BCA simplifies resource management in a permanent implementation as an on-demand service in the clinical routine. The new structures added to the dataset, such as glands and breast implants, prevent mis-segmentations in the thresholding-based subsegmentation of tissue types (muscle/adipose tissue). Furthermore, newly introduced regions such as the pericardium and mediastinum allow a finer granular segmentation of adipose tissue in different compartments, such as epicardial adipose tissue, and thus a simplified evaluation of these biomarkers.

The combination of BCA and TotalSegmentator into the BOA provides a comprehensive segmentation of body structures in CT images, which is essential for various clinical applications, including treatment planning and monitoring of disease progression.3,46,47 In this way, a unified approach that can segment multiple anatomical structures efficiently and reliably11,12 was created. Although both algorithms address fundamentally different topics such as soft tissue/body composition (BCA) versus organ/vessels (TotalSegmentator), they also share certain overlaps such as the segmentation of bones. Therefore, it was important to evaluate what percentage of the body is segmented by the BCA and TotalSegmentator alone as well as the BOA. The results showed that the BOA segmented significantly more of the body (93% ± 6%) compared with the BCA (74% ± 8%) and TotalSegmentator (31% ± 6%). The significantly higher percentage of segmentation voxel coverage of BOA compared with BCA and TotalSegmentator visualizes the added value of combining and unifying both segmentation algorithms. By combining the 2 segmentation models, the BOA brings us closer to a complete segmentation of the entire body. This is particularly important in light of the current literature, which has identified both volumetry of various organs46,47 and BCA15,19,48 as predictive factors for various diseases. In this context, for example, a meta-analysis in which 3858 cancer patients from 35 studies were examined showed that a low skeletal muscle mass negatively influences the therapy response (objective response rate) and the disease control rate.15 At the same time, there are many studies that try to treat sarcopenia with drugs49 or a change in diet.50 To date, however, there is a lack of studies using CT-derived skeletal muscle volume for the indication of targeted treatment of sarcopenia. However, it has already been shown that the CT BCA has a high agreement with the body composition conventionally measured by dual-energy x-ray absorptiometry and bioelectrical impedance analysis.14 The high availability of the BOA, a publicly available open-source application that can be integrated as a DICOM node, could facilitate such studies in the future. Looking at individual organs, for example, spleen volume has been shown to be a predictive factor for posthepatectomy liver failure in patients with hepatocellular carcinoma51 or for treatment response to immunotherapy in non–small cell lung cancer patients.46

This unification becomes even more important in view of ever-increasing imaging volumes, assuming that these parameters will eventually find their way into the clinical routine.52,53 Moreover, the integration of the BOA as an automated service of the open-source Orthanc research PACS and the resulting capability to integrate it as a DICOM node has the potential to streamline radiological workflows and enhance the reproducibility of quantitative imaging biomarkers. Through this integration, an examination can be sent to the algorithm via any PACS that allows a DICOM send to DICOM nodes. As this is a standard functionality of a PACS system, it is possible with almost all PACS systems, for example, Centricity Universal Viewer PACS (Chicago, IL), Syngo Carbon PACS (Siemens Healthineers AG, Erlangen, Germany), JiveX Enterprise PACS (VISUS Health IT GmbH, Bochum, Germany), and Philips IntelliSpace PACS (Koninklijke Philips N.V., Amsterdam, the Netherlands).

Subsequently, the segmentation is performed automatically, and reports are stored in a defined folder as PDF reports with sample images of the segmentation for quality control and as Excel sheets that can be used easily for research.

In the field of medical image segmentation, there has been a growing interest in developing algorithms that can accurately segment various anatomical structures and pathologies in the body.12,13,54–56 The BOA algorithm fits well into this trend, providing a comprehensive segmentation of the body in CT images. With its open-source availability and easy integration into radiological workflows, the BOA has the potential to make a significant impact in the field of medical image analysis because it combines BCA and organ segmentation in 1 tool.

Despite the promising results, the BOA still has limitations that need to be addressed in future projects. One limitation is that smaller vessels and lymph nodes are still missing from the segmentation, which could be added in future releases of the tool. Another limitation is that the BCA was trained with monocentric data and was not externally validated, which may limit the applicability of the algorithm. In addition, the segmentations of the BCA and of the TotalSegmentator were not repeatedly performed by multiple annotators; therefore, no interobserver variability can be calculated. However, an extensive standardized quality control of the segmentations, carried out by 5 different annotators following internal annotation guidelines, was performed with manual and automated controls to ensure high quality of the BCA segmentations.

Despite these limitations, the BOA has several advantages that make it a valuable tool for medical image analysis. One advantage is its ease of integration into radiological workflows through its integration as a DICOM node. Furthermore, the BOA offers a promising solution for the segmentation of the body in CT images, and its open-source availability makes it accessible to a wide range of users.

CONCLUSIONS

The open-source BOA combines the BCA and TotalSegmentator in 1 tool and makes it available to clinicians via an integration as a DICOM node to provide a comprehensive segmentation of the body in CT images. Despite its limitations, such as missing segmentation of smaller vessels and lymph nodes, the BOA has several advantages, including its ease of integration into radiological workflows and its comprehensive segmentation of nearly all other structures in the body.

REFERENCES

1. Schwartz FR, Samei E, Marin D. Exploiting the potential of photon-counting CT in abdominal imaging. Investig Radiol. 2023;58:488–498.
2. Li H, Zhang H, Johnson H, et al. Longitudinal subcortical segmentation with deep learning. Proc SPIE Int Soc Opt Eng. 2021;11596:115960D.
3. Wong J, Huang V, Wells D, et al. Implementation of deep learning-based auto-segmentation for radiotherapy planning structures: a workflow study at two cancer centers. Radiat Oncol. 2021;16:101.
4. Lenchik L, Heacock L, Weaver AA, et al. Automated segmentation of tissues using CT and MRI: a systematic review. Acad Radiol. 2019;26:1695–1706.
5. Koitka S, Gudlin P, Theysohn JM, et al. Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein. Sci Rep. 2022;12:16479.
6. Kart T, Fischer M, Küstner T, et al. Deep learning-based automated abdominal organ segmentation in the UK Biobank and German National Cohort magnetic resonance imaging studies. Investig Radiol. 2021;56:401–408.
7. Neves CA, Tran ED, Kessler IM, et al. Fully automated preoperative segmentation of temporal bone structures from clinical CT scans. Sci Rep. 2021;11:116.
8. Meddeb A, Kossen T, Bressem KK, et al. Evaluation of a deep learning algorithm for automated spleen segmentation in patients with conditions directly or indirectly affecting the spleen. Tomogr Ann Arbor Mich. 2021;7:950–960.
9. Senthilvelan J, Jamshidi N. A pipeline for automated deep learning liver segmentation (PADLLS) from contrast enhanced CT exams. Sci Rep. 2022;12:15794.
10. Jodogne S. The Orthanc ecosystem for medical imaging. J Digit Imaging. 2018;31:341–352.
11. Koitka S, Kroll L, Malamutmann E, et al. Fully automated body composition analysis in routine CT imaging using 3D semantic segmentation convolutional neural networks. Eur Radiol. 2021;31:1795–1804.
12. Wasserthal J, Breit H-C, Meyer MT, et al. TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol Artif Intell. 2023;e230024.
13. Isensee F, Jaeger PF, Kohl SAA, et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18:203–211.
14. Kroll L, Mathew A, Baldini G, et al. CT-derived body composition analysis could possibly replace DXA and BIA to monitor NET-patients. Sci Rep. 2022;12:13419.
15. Surov A, Strobel A, Borggrefe J, et al. Low skeletal muscle mass predicts treatment response in oncology: a meta-analysis. Eur Radiol. 2023;33:6426–6437.
16. Hosch R, Kattner S, Berger MM, et al. Biomarkers extracted by fully automated body composition analysis from chest CT correlate with SARS-CoV-2 outcome severity. Sci Rep. 2022;12:16411.
17. Li X, Zhang N, Hu C, et al. CT-based radiomics signature of visceral adipose tissue for prediction of disease progression in patients with Crohn's disease: a multicentre cohort study.
