Predicting outcome after aneurysmal subarachnoid hemorrhage by exploitation of signal complexity: a prospective two-center cohort study

The study was approved by the local ethics committees of Zurich and Aachen and was conducted in accordance with the ethical standards laid down in the 2013 Declaration of Helsinki for research involving human subjects. Informed consent was obtained from the patient or their legal medical representative before inclusion. Data from two prospective observational cohorts (University Hospital Zurich, Switzerland; Rheinisch-Westfälische Technische Hochschule Aachen, Germany) were analyzed. The Zurich cohort served as the derivation cohort to establish models and analyses, while the Aachen cohort was used for external validation.

Study population

For the Zurich cohort, a total of 244 consecutively admitted adult patients with aneurysmal subarachnoid hemorrhage (aSAH) were recruited as part of the ICU Cockpit Prospective Cohort Study between 2016 and 2022. All of these received multimodal monitoring data acquisition and were therefore evaluated for inclusion. For the Aachen cohort, a total of 316 consecutively admitted adult patients with aSAH were collected as part of a prospective cohort between 2014 and 2021; 102 of these received multimodal monitoring data acquisition and were therefore evaluated for inclusion. Inclusion criteria were: 1. aSAH due to an angiographically confirmed ruptured aneurysm; 2. admission to the neurocritical care unit (NCCU) and recording of high-resolution monitoring data. The only exclusion criterion was loss to follow-up with missing 12-month outcome. Patients at both centers were treated according to the guidelines of the Neurocritical Care Society and the American Heart Association, and the respective standard therapies of the two centers [19, 20].

Data acquisition

The following relevant clinical data were prospectively entered into the respective databases: demographics, World Federation of Neurosurgical Societies scale (WFNS) [21], modified Fisher score (mFisher) [22], clinical course including aneurysm occlusion modality, occurrence of angiographic vasospasm (defined as narrowing of the vessels on neuroimaging independent of clinical symptoms), delayed cerebral infarction (DCI; infarction on neuroimaging not present on imaging performed within 24–48 h after aneurysm occlusion and not attributable to other causes [23]), and outcome at 12 months (represented by the Glasgow Outcome Scale Extended, GOSE [24]). WFNS was evaluated after neurological resuscitation (i.e. after insertion of an external ventricular drain (EVD) and/or hematoma evacuation). At both centers, outcome was assessed during routine outpatient follow-up consultations or by contacting the patient, their next of kin, or their caregiver by telephone in a structured interview. Physiological high-resolution data (at least 100 Hz: blood pressure (BP), intracranial pressure (ICP), and heart rate (HR)) were collected in Zurich (Moberg Component Neuromonitoring Systems (CNS), Moberg Research Inc, PA, USA) and Aachen (MPR2 logO Datalogger, Raumedic, Helmbrechts, Germany, or, after July 2018, Moberg CNS). Data acquisition was started after admission to the respective NCCU (after neurological resuscitation and generally after securing of the aneurysm) and stopped either when the patient was discharged to the ward or when invasive monitoring was deemed no longer necessary.

Data preprocessing

The high-resolution (i.e. waveform) monitoring data from both centers were converted into HDF5 format for streamlined analysis of the different source formats. NCCU high-resolution waveform data invariably contains artifacts that are not representative of the patients' physiology. Raw waveform data were therefore preprocessed using ICM+® (Cambridge Enterprise Ltd, Cambridge, United Kingdom). Data were curated to remove artifacts using both manual and automated methods. The manual methods were applied to remove sections with arterial line failure (continuous reduction of the arterial blood pressure amplitude followed by flushing) and sections with manipulation or opening of the EVD (high-frequency artifacts with or without sudden changes in ICP level). Automated cleaning of arterial blood pressure consisted of removal of values below 0 or above 300 mmHg and removal of sections with a pulse amplitude of less than 15 mmHg. Automated cleaning of ICP included removal of values below −20 or above 200 mmHg, removal of sections with low amplitude (< 0.04 mmHg) corresponding to noise or EVD opening, and removal of values with a 95% spectral edge frequency above 10 Hz (high-frequency noise). Only the remaining data (termed artifact-free) were used for further processing, mitigating the effect of artificial, non-physiological sections.
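
The automated criteria above amount to simple threshold rules applied to short waveform segments. A minimal sketch of such rules is given below, purely for illustration: the actual curation was performed in ICM+, the per-segment range is used here as a rough proxy for pulse amplitude, and the spectral edge frequency criterion is omitted.

```r
# Illustrative sketch only; the actual artifact curation was performed in ICM+.
# `abp` and `icp` are numeric vectors holding one short waveform segment (mmHg).
is_artifact_free <- function(abp, icp) {
  abp_in_range  <- all(abp > 0 & abp < 300, na.rm = TRUE)    # plausible ABP values
  abp_pulsatile <- diff(range(abp, na.rm = TRUE)) >= 15      # proxy for pulse amplitude >= 15 mmHg
  icp_in_range  <- all(icp > -20 & icp < 200, na.rm = TRUE)  # plausible ICP values
  icp_pulsatile <- diff(range(icp, na.rm = TRUE)) >= 0.04    # excludes noise/EVD-open sections
  # the 95% spectral edge frequency rule (> 10 Hz excluded) is omitted for brevity
  abp_in_range && abp_pulsatile && icp_in_range && icp_pulsatile
}
```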

Data were then processed to obtain 10-s averages of mean arterial blood pressure (ABP), systolic blood pressure (SBP), diastolic blood pressure (DBP), ICP, ICP amplitude (AMP), cerebral perfusion pressure (CPP, the difference between ABP and ICP), and HR. This averaging effectively removed the cardiac and respiratory components.
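
As an illustration of this step, the following minimal sketch computes 10-s averages and the derived CPP from cleaned waveform data; the data frame and column names are hypothetical, and SBP, DBP, and AMP additionally require per-beat detection, which is not shown.

```r
# Illustrative sketch: 10-s averages of mean ABP, ICP, HR and derived CPP.
# `wave` is assumed to be a data frame with columns time (s), abp, icp, hr.
make_10s_averages <- function(wave) {
  wave$window <- floor(wave$time / 10)                              # index of the 10-s window
  agg <- aggregate(cbind(abp, icp, hr) ~ window, data = wave, FUN = mean)
  agg$cpp <- agg$abp - agg$icp                                      # CPP = mean ABP - mean ICP
  agg
}
```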

Multiscale entropy analysis

MSE was calculated as previously described, based on the estimation of sample entropy [13]. Sample entropy describes the probability that matching sequences of length m will exhibit the same behavior (i.e. will also match) when extended by one point. It is estimated as the negative natural logarithm of the ratio of the number of matching patterns of length m + 1 to the number of matching patterns of length m [25]. We estimated sample entropy using m = 2 and a tolerance of 0.15. MSE describes the process of calculating sample entropy over different time scales. A total of 20 scales from 1 to 20 (produced by averaging-based coarse graining, i.e. scale 1: no averaging, scale 2: averaging of 2 consecutive samples, …, scale 20: averaging of 20 consecutive samples), covering the range of slow waves, was used. MSE is the resulting area under the curve (AUC) of the sample entropies plotted against scale. Higher values represent higher signal entropy/complexity. MSE was calculated for each of the 10-s biosignals, resulting in the metrics MSE ABP, MSE SBP, MSE DBP, MSE CPP, MSE HR, MSE ICP, and MSE AMP.
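
For illustration, a minimal R sketch of this calculation follows. It assumes the conventional choice of tolerance r = 0.15 times the standard deviation of the original signal and trapezoidal integration of the entropy-versus-scale curve; these details, and all function names, are assumptions for this sketch rather than the authors' exact implementation.

```r
# Sketch of multiscale entropy for a 10-s averaged biosignal `x` (numeric vector).
sample_entropy <- function(x, m = 2, r) {
  n <- length(x)
  count_pairs <- function(dim, n_templ) {
    tem <- embed(x, dim)[seq_len(n_templ), , drop = FALSE]  # template vectors of length `dim`
    matches <- 0
    for (i in seq_len(n_templ - 1)) {
      # Chebyshev distance between template i and all later templates
      d <- apply(abs(sweep(tem[(i + 1):n_templ, , drop = FALSE], 2, tem[i, ])), 1, max)
      matches <- matches + sum(d <= r)
    }
    matches
  }
  B <- count_pairs(m, n - m)       # matching pairs of length m
  A <- count_pairs(m + 1, n - m)   # matching pairs of length m + 1
  if (A == 0 || B == 0) return(NA_real_)
  -log(A / B)
}

coarse_grain <- function(x, scale) {
  n_blocks <- floor(length(x) / scale)
  colMeans(matrix(x[seq_len(n_blocks * scale)], nrow = scale))  # average `scale` consecutive samples
}

mse_auc <- function(x, scales = 1:20, m = 2) {
  r <- 0.15 * sd(x)                                             # tolerance relative to the original signal
  se <- sapply(scales, function(s) sample_entropy(coarse_grain(x, s), m = m, r = r))
  sum(diff(scales) * (head(se, -1) + tail(se, -1)) / 2)         # area under the entropy-vs-scale curve
}
```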

Statistical analysis

Statistical analysis was performed in RStudio (R version 4.3.2; https://www.r-project.org/) using the packages rstatix, pROC, boot, rms, MASS, ResourceSelection, predtools, and brant.

Descriptive variables are reported as counts/percentages or mean ± standard deviation. The distribution of continuous variables was assessed using the Shapiro–Wilk test. Equality of variances was tested using the Bartlett test or the Levene test. Different statistical methods were explored to assess the association between MSE and outcome. Both univariable and multivariable analyses (covariates: age, WFNS, mFisher, and occurrence of DCI) were performed. A significance level of p < 0.05 was set because of the exploratory nature of the study and the different tests used for exploration; the Bonferroni-corrected significance level would be p = 0.00089.
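
A minimal sketch of these distributional checks, assuming a hypothetical data frame df with an MSE metric mse_icp and a two-level outcome factor gose_fav:

```r
library(rstatix)

shapiro_test(df, mse_icp)                     # Shapiro-Wilk test of normality
levene_test(df, mse_icp ~ gose_fav)           # Levene test of equal variances
bartlett.test(mse_icp ~ gose_fav, data = df)  # Bartlett test as an alternative
```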

Univariable: First, the different MSE variables were compared between outcome groups dichotomized by GOSE (1–4 vs. 5–8) using independent t-tests. To assess the overall diagnostic performance of the different MSE metrics, receiver operating characteristic (ROC) curves were plotted and evaluated by calculating the AUC and its confidence interval (CI), and by estimating the optimal threshold (based on the Youden index) to assess sensitivity, specificity, positive/negative predictive values, and accuracy. MSE metrics were then plotted against outcome grouped into Dead/Vegetative (GOSE 1–2), Severe Disability (GOSE 3–4), Moderate Disability (GOSE 5–6), and Good Recovery (GOSE 7–8) and evaluated by analysis of variance (ANOVA).
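
A minimal sketch of this univariable analysis with pROC, again using the hypothetical names df, mse_icp, gose_fav, and a four-level factor gose_group:

```r
library(pROC)

t.test(mse_icp ~ gose_fav, data = df)              # independent t-test across the GOSE 1-4 vs. 5-8 split

roc_icp <- roc(df$gose_fav, df$mse_icp, ci = TRUE) # ROC curve with CI of the AUC
auc(roc_icp)
coords(roc_icp, x = "best", best.method = "youden",# Youden-optimal threshold and derived measures
       ret = c("threshold", "sensitivity", "specificity", "ppv", "npv", "accuracy"))

summary(aov(mse_icp ~ gose_group, data = df))      # ANOVA across the four GOSE groups
```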

Multivariable: Covariate-adjusted logistic regression models were built with dichotomized GOSE (1–4 vs. 5–8) as the endpoint to assess the independence of the MSE metrics as predictors of outcome. The effect of each metric on the model was described using the odds ratio (OR) and its CI. Diagnostic performance of the models was assessed using the AUC, the Nagelkerke R2 (R2), and the Brier score. The effect of including an MSE metric was evaluated using DeLong's test, comparing the different AUCs to that of a base model without MSE metrics. The established models were validated both internally and externally. Internal validation was performed by bootstrapping (1000 replications with replacement). During this process, prediction models were derived from each bootstrap sample and applied to both the bootstrap sample and the original dataset, allowing estimation of the optimism (i.e. the difference between the AUC/R2/Brier scores obtained on the original dataset and those obtained on the bootstrap datasets). External validity was assessed by (1) evaluating calibration (agreement between predicted and observed outcome, described by its intercept and slope and assessed using the Hosmer–Lemeshow goodness-of-fit test) and (2) evaluating discrimination (AUC) when applying the model built on the derivation dataset to the validation cohort.
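
A minimal sketch of these steps with rms, pROC, and ResourceSelection is shown below; zurich and aachen stand in for the derivation and validation data frames, gose_fav is assumed to be coded 0/1, and all other names are hypothetical.

```r
library(rms); library(pROC); library(ResourceSelection)

# Covariate-adjusted models with and without an MSE metric (Zurich = derivation cohort)
fit_base <- lrm(gose_fav ~ age + wfns + mfisher + dci,           data = zurich, x = TRUE, y = TRUE)
fit_mse  <- lrm(gose_fav ~ age + wfns + mfisher + dci + mse_icp, data = zurich, x = TRUE, y = TRUE)
# print(fit_mse) reports the Nagelkerke R2 and Brier score; ORs with CIs via summary() after datadist()

# DeLong's test: does adding the MSE metric change the AUC relative to the base model?
roc.test(roc(zurich$gose_fav, predict(fit_base, type = "fitted")),
         roc(zurich$gose_fav, predict(fit_mse,  type = "fitted")),
         method = "delong")

# Internal validation: optimism-corrected performance from 1000 bootstrap replications
validate(fit_mse, method = "boot", B = 1000)

# External validation: apply the Zurich-derived model to the Aachen cohort
p_ext <- predict(fit_mse, newdata = aachen, type = "fitted")
roc(aachen$gose_fav, p_ext, ci = TRUE)       # discrimination
val.prob(p_ext, aachen$gose_fav)             # calibration intercept/slope (among other indices)
hoslem.test(aachen$gose_fav, p_ext)          # Hosmer-Lemeshow goodness-of-fit
```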

Ordinal multivariable: Due to the ordinal nature of the outcome score, we additionally performed a proportional odds logistic regression and a sliding dichotomy analysis. Both approaches exploit the full range of the outcome scale, either by assessing the OR across different cutoffs or by using baseline-adjusted outcome definitions, thereby increasing statistical power [26]. Proportional odds logistic regression adjusted for covariates was applied to the same outcome groups as described above, with moving cutoffs (Dead/Vegetative vs. Severe Disability, Severe Disability vs. Moderate Disability, Moderate Disability vs. Good Recovery), to assess the common odds ratio. The proportional odds assumption was tested using the Brant–Wald test. Lastly, a sliding dichotomy approach was used to assess the importance of MSE metrics for a baseline-severity-adjusted outcome definition. For each patient, a prognostic risk probability for unfavorable outcome was estimated from the baseline covariates (age, WFNS, mFisher score, and occurrence of DCI). The resulting scores were then divided into three prognostic groups of roughly equal size corresponding to low, intermediate, and high likelihood of unfavorable outcome. For each prognostic group a separate cutoff was defined to dichotomize outcome into favorable and unfavorable, with the adjusted favorable outcome classified as:

GOSE 7–8 for the group with low likelihood of unfavorable outcome,

GOSE 5–8 for the group with intermediate likelihood of unfavorable outcome,

GOSE 3–8 for the group with high likelihood of unfavorable outcome.

The resulting baseline-severity-adjusted outcome variable was then assessed against the MSE metrics using logistic regression. For both methods, bootstrapping was applied for internal validation and to obtain the CIs.
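
A minimal sketch of both ordinal approaches with MASS and brant follows; gose4 is assumed to be the four-level ordered outcome, gose and gose_unfav the raw GOSE score and the unfavorable-outcome indicator, all names are hypothetical, and the bootstrap for CIs is omitted.

```r
library(MASS); library(brant)

# Proportional odds logistic regression across the moving cutoffs
fit_po <- polr(gose4 ~ age + wfns + mfisher + dci + mse_icp, data = zurich, Hess = TRUE)
exp(coef(fit_po)["mse_icp"])   # common odds ratio for the MSE metric
brant(fit_po)                  # Brant-Wald test of the proportional odds assumption

# Sliding dichotomy: prognostic risk from baseline covariates only,
# then group-specific definitions of favorable outcome
base_risk <- predict(glm(gose_unfav ~ age + wfns + mfisher + dci,
                         family = binomial, data = zurich), type = "response")
band <- cut(base_risk, quantile(base_risk, c(0, 1/3, 2/3, 1)),
            labels = c("low", "intermediate", "high"), include.lowest = TRUE)
fav_adj <- as.integer(ifelse(band == "low",          zurich$gose >= 7,
                      ifelse(band == "intermediate", zurich$gose >= 5,
                                                     zurich$gose >= 3)))
summary(glm(fav_adj ~ mse_icp, family = binomial, data = zurich))  # MSE vs. adjusted outcome
```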

Secondary analysis

Three additional secondary analyses were performed to assess further aspects of the MSE metrics, based on the most promising metrics identified. First, to assess whether early outcome prediction using MSE is feasible, a secondary analysis was performed including only data acquired within the first 48 h after NCCU admission. Second, to evaluate whether MSE was associated with specific clinical aspects of the disease, values were assessed against clinical events. For this purpose, the following additional clinical parameters were extracted from the electronic patient records (occurrence of rebleeding, global cerebral edema, brain herniation, and seizures) and evaluated using t-tests. The raw metrics (ABP, HR, ICP) were also assessed against the derived MSE metric to reveal possible intercorrelations. Third, the stability of the metric was assessed by evaluating how it changed when longer amounts of data were considered within one patient (between 1 and 24 h), as well as by comparing the metric values to the duration of the measurement in the whole cohort.
