Prioritizing evidence-based practices for acute respiratory distress syndrome using digital data: an iterative multi-stakeholder process

Our research team included clinicians and researchers from 4 health systems specializing in pulmonary and critical care medicine, implementation science, learning health systems, and organizational behavior. The larger project was a planning grant from the US National Heart, Lung, and Blood Institute of the National Institutes of Health (U01HL143453, Sales and Gong co-PIs), called Digital Implementation Trials in Acute Lung Care (DIGITAL-C), with the ultimate goal of planning a multi-site hybrid type 2 implementation-effectiveness trial of digital implementation strategies. We report abridged methods here; more detailed background and methods are available in Additional file 1.

We engaged in a 4-step multi-method process to evaluate and prioritize EBPs to assist in clinical decision-making and concentrate future implementation efforts. Throughout this process, we focused on EBPs most relevant to and strongly associated with improved clinical outcomes (i.e., shorter duration of mechanical ventilation and/or lower mortality), as identified in previously established guidelines, among patients receiving invasive mechanical ventilation for acute respiratory failure or acute respiratory distress syndrome. In steps 2–4, we also considered the feasibility of using digital data extracted from electronic health records, rather than manual chart abstraction, to assess EBP performance.

An overview of our 4-step prioritization process is shown in Table 1. In step 1 (see Ervin et al. [8]), clinician experts from our research team identified key guidelines that included several EBPs, and we searched the literature for related reviews.

Table 1 The 4-step prioritization process overview

Step 1 is reported in our previous paper, in which we describe the 20 EBPs that we collated from the literature review [8]. Step 1 was conducted from July to December 2018. The focus of the current report is on steps 2 and 3.

In step 2, clinician researchers on our team (MWS, MNG, TJI, CLH) rated a list of 26 EBPs generated from step 1 using two Qualtrics surveys. The first survey contained the full set of criteria from the Guideline Implementability Appraisal 2.0 (GLIA) tool, which we show in Table 2 [9].

Table 2 Guideline Implementability Appraisal 2.0 (GLIA) variables and definitions

In this and subsequent steps, we used radar graphs (Figs. 1, 2 and 3) to assess responses across all of the GLIA dimensions concurrently. To construct these graphs, we averaged the responses and plotted them on the 11 axes of the GLIA dimensions, using Microsoft Excel. At this point, following discussion focused on the radar graphs, we removed 15 EBPs from the list, leaving 11; our primary criteria were measurability, resource intensiveness, and source credibility. We conducted step 2 in February–May 2019.
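The radar-graph construction described above can be sketched in code. The study used Microsoft Excel; the following is an illustrative Python equivalent in which the per-rater ratings, the 1–5 rating scale, and the generic axis labels are all invented for demonstration, not the study's data or the GLIA dimension names.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

N_DIMS = 11  # the 11 GLIA axes used in the study
dims = [f"GLIA dim {i + 1}" for i in range(N_DIMS)]  # placeholder labels

# Illustrative ratings: 4 clinician raters x 11 dimensions on a 1-5 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(4, N_DIMS))

# Average across raters to get one value per axis, as in the study.
means = ratings.mean(axis=0)

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, N_DIMS, endpoint=False)
vals = np.concatenate([means, means[:1]])
angs = np.concatenate([angles, angles[:1]])

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angs, vals, linewidth=1.5)
ax.fill(angs, vals, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(dims, fontsize=7)
ax.set_title("Radar graph for one EBP (illustrative)")
fig.savefig("ebp_radar.png", dpi=150)
```

Plotting each EBP on the same fixed set of axes makes the graphs directly comparable, which supports the side-by-side discussion used to winnow the list.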

Fig. 1 Radar graphs for phase 1 EBPs

Fig. 2 Radar graphs for phase 2 EBPs

Fig. 3 Radar graphs for phase 3 EBPs

In step 3, frontline clinicians from 8 participating hospital systems evaluated the distilled list of 11 EBPs produced in step 2 using a much-reduced survey instrument. We reduced the survey to 3 GLIA elements (measurability, resource intensiveness, and source credibility) based on feedback from the team clinicians who completed the first 2 rounds of surveys, who found the full GLIA burdensome. We conducted these surveys in June–September 2019.

We surveyed clinicians directly involved in caring for patients receiving invasive mechanical ventilation, including attending physicians, house staff, nurse managers, registered nurses, and respiratory therapists, using Qualtrics as the survey platform. Clinicians were asked whether we should include each EBP in the final list for implementation (yes; no; maybe) and then rated the EBPs on 3 GLIA criteria: measurability, resource intensiveness, and source credibility. These criteria were selected, with input from the clinician members of the research team, to match the specific needs of this project: the ability to extract measurement data from electronic medical records (measurability), feasibility, which we operationalized as “resource intensiveness,” and whether the recommendation came from a credible source. The labels are modified from the original GLIA terms; in particular, we used “source credibility” rather than “evidence validity” to spare clinicians the burden of fully assessing the validity of the evidence behind each recommendation. In future work, we recommend that all the GLIA questions be considered for prioritization, depending on the groups involved in assessing recommendations. Descriptive statistics were calculated in Microsoft Excel; missing data were handled with pairwise deletion.
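The pairwise-deletion approach to missing ratings can be illustrated briefly. The study computed its descriptive statistics in Excel; the sketch below is a Python equivalent with invented ratings, showing that each criterion's mean and count use every available response for that criterion, rather than dropping a whole respondent who skipped one item.

```python
import numpy as np
import pandas as pd

# Invented example: 4 respondents rating one EBP on the 3 GLIA criteria
# used in step 3; NaN marks a skipped item.
ratings = pd.DataFrame({
    "measurability":          [4, 5, np.nan, 3],
    "resource_intensiveness": [2, np.nan, 3, 3],
    "source_credibility":     [5, 4, 4, np.nan],
})

# count() and mean() skip NaN column by column, so each criterion keeps
# all of its available ratings (pairwise deletion). Listwise deletion
# (ratings.dropna()) would instead discard every incomplete respondent.
summary = pd.DataFrame({
    "n": ratings.count(),
    "mean": ratings.mean(),
})
print(summary)
```

With these invented data, every criterion retains 3 of the 4 responses, whereas listwise deletion would leave only the single fully complete respondent.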

The 4th and final step, not addressed in this paper, was to use all of the available data to generate the final list of EBPs on which to focus future implementation efforts. This step required developing metrics for each of the included EBPs using digitally extracted data from electronic health records, which we will report in subsequent papers. We placed heavy emphasis on measurability and on variability in practice within and across ICUs and health systems.

This study was deemed exempt from human subjects oversight by IRBMED at the University of Michigan. Data were gathered throughout 2019.
