Educational and Patient Care Impacts of In Situ Simulation in Healthcare: A Systematic Review

In situ simulation is defined as simulation that “takes place in the actual patient care setting/environment in an effort to achieve a high level of fidelity and realism.”1 While early high-fidelity simulators were not easily transportable, technological advances have significantly improved mannequin portability. This has opened new possibilities for implementing simulation-based educational activities directly at the point of care, permitting higher environmental fidelity than could be achieved with more traditional simulation centers and diminishing the logistical barrier of requiring participants to travel to a location distant from their daily work.2

Several different approaches to in situ simulation have been described. One uses it as a way to conduct simulations at a relatively high frequency without the use of a dedicated center, thus enabling educational goals to be addressed at lower cost.3 A second approach uses it as a means of testing ongoing processes of care and new clinical spaces, enabling the identification of latent safety threats and the avoidance of patient harm.2,4,5 Finally, in situ approaches have been used for “just-in-place” and “just-in-time” training, which involves the deployment of focused, skills-based in situ simulations within care environments in which that skill is likely to be needed.5,6 Education using in situ simulation thus has significant potential to impact a wide range of learner and patient outcomes.

Several reviews have summarized prior research on in situ simulation.7–13 The majority affirm its usefulness for the detection of latent safety threats and note a limited, but evolving, body of evidence supporting its beneficial effects on both learner behaviors and patient outcomes when used for education. Many of these, however, were conducted several years ago and/or are limited in clinical scope, thus missing recent and potentially relevant studies. A comprehensive, systematic evaluation and synthesis are needed to identify the specific situations in which the use of in situ simulation as an educational approach might have the greatest impact and to formulate future research goals.

METHODS

We conducted a systematic review of in situ simulation, using the Preferred Reporting Items for Systematic Review and Meta-analysis framework.14

Question

We sought to answer the following initial question: for healthcare providers, does education using in situ simulation, as compared with other types of training options (no training, nonsimulation educational modalities, and other simulation-based approaches), result in improved outcomes (satisfaction, knowledge, technical and nontechnical skills, behavior change, patient outcomes, cost-effectiveness)? By beginning with a broad question, we hoped to cast a wide net that would catch all relevant literature and allow us to address more focused questions based on the evidence available (definitions of key terms are provided in the Box, Supplemental Digital Content 1, https://links.lww.com/SIH/B20).

Inclusion Criteria

We included original research studies that met the following criteria: a focus on in situ simulation, presence of quantitative data, and the presence of a comparator [including comparison with baseline (1-group pre-post studies)]. The definitions used for in situ simulation, comparison groups, and data types are provided in Supplemental Digital Content 1: Definitions of key terms, https://links.lww.com/SIH/B20.

Search Strategy

Searches were conducted on September 15, 2021, in MEDLINE (using PubMed), Embase, and ProQuest Dissertations and Theses Global, and were inclusive of all prior studies. Because the research questions applied to healthcare students and practitioners across professions and involved a variety of outcomes, the initial strategies were broadly constructed and included several different terms referencing in situ training (eg, “in situ,” “workplace based,” “just-in-place,” “drill”) and simulation (eg, “simulation,” “medical education,” “training,” “education,” “teaching,” “mock code”). Limiters were used to exclude ineligible publication types (eg, conceptual papers, systematic reviews, and letters to the editor) and articles describing in situ laboratory methods unrelated to simulation. Articles from all languages were included (the full search string is provided in Document, Supplemental Digital Content 2: Initial literature search string, https://links.lww.com/SIH/B21). Database searches were supplemented by hand searches of healthcare simulation journals not listed in PubMed (BMJ Simulation & Technology Enhanced Learning, Clinical Simulation in Nursing) and a review of the bibliographies of all included articles.

Potential articles were initially screened via evaluation of the title and abstract by 2 members of the research team working independently. Screening was conducted in Rayyan, a free online management resource for systematic reviews.15 All articles identified by at least one screener as potentially meeting inclusion criteria were read in full by 2 research team members to make a final inclusion decision. In the event of conflict, the primary investigators (A.C., Y.L.) served as tiebreakers.

Data Extraction

Each included article was reviewed by 2 independent authors, who extracted data from each study using a form developed and piloted by the author team. Elements in this extraction form included the following:

- Study design
- Study population
- Number of participants and sites
- Location of in situ simulation
- Educational content and theory
- Debriefing content and theory
- Comparator
- Study outcome
- Statistical significance and magnitude of results

Each article was also evaluated for methodological quality by both authors using the Medical Education Research Study Quality Instrument (MERSQI) tool.16 After the initial extraction process, a third author reviewed the extraction forms and MERSQI assessments for each article, resolving discrepancies if needed via a third extraction.

Study outcomes were then examined for overall patterns to assist in synthesis. The following 8 outcome categories were identified:

- Mortality
- Clinical metrics of care (including both checklist and time-based measurements)
- Mitigation of latent safety threats
- Diagnostic decision making
- Technical and nontechnical skill–related behaviors directly measured during patient care
- Participant reactions (for articles in which in situ was compared with another simulation-based approach only)
- Knowledge change (for articles in which in situ was compared with another simulation-based approach only)
- Physiologic measures of stress

Individual study outcomes were classified according to the Kirkpatrick Framework for levels of evidence as adapted and commonly used within the field of simulation-based medical education.17,18 In this framework, Level 1 evidence represents participant reactions and perceptions, Level 2 includes both knowledge (2a) and skill (2b) demonstrated within a simulated environment, Level 3 represents provider behaviors outside the educational context, and Level 4 represents effects on patients or institutions.

After data extraction was completed, the primary investigators organized the studies according to the outcomes and the comparison intervention. This led to the inductive identification of a subset of studies that were felt to be of potentially high importance for our research question due to use of clinical outcomes (provider behaviors and/or patient and institutional effects) or comparison with an intervention that clarifies the value of in situ versus traditional simulation approaches (direct comparison between in situ and non–in situ simulation modalities). Articles within this subset addressed 1 of 3 focused research subquestions.

1. How does the addition of in situ simulation to training methods already in use alter clinical outcomes (provider behaviors during patient care and/or patient outcomes)?
2. How does frequency of in situ simulation events affect clinician behaviors and/or patient outcomes?
3. How does in situ simulation compare with non–in situ simulation, using any quantitative outcome (as defined previously)?

Several studies within this subset used as outcomes the detection of latent safety threats, which are defined as “errors in design, organization, training, or maintenance that may contribute to medical errors and have a significant impact on patient safety.”19 The goal of latent safety threat detection is to determine whether such threats exist within a healthcare system and mitigate them before negative patient impact occurs. We classified the detection of latent safety threats as a Kirkpatrick Level 3 outcome. Although these outcomes are measured during a simulated event, the in situ environment is authentic, and the outcomes have direct implications for future clinician behaviors. Moreover, latent safety threats reflect more than the knowledge and skills of the individual participants (ie, more than Kirkpatrick Level 2).

Studies outside the high importance subset all focused on provider perceptions, knowledge, or skills and made comparison with either baseline performance (1-group pre-post design) or a nonsimulation-based educational modality (eg, lecture). These methodological features prevent us from making direct inferences about differences in outcome attributable to the in situ element because of confounding.

Data Analysis and Synthesis

Studies falling within the high importance subset were then categorized within the 3 subquestions presented previously, and between-group differences were converted to odds ratios (ORs) (for event counts), standardized mean differences (SMDs) (for means and standard deviations), or percent change. When calculating SMDs, we used means and standard deviations whenever reported. When these were unavailable, we estimated the SMD using the P value. Random effects meta-analysis was then performed. For those studies with a 2-group historical control design that did not report a separate provider sample size, the control group was assumed to be numerically comparable with the intervention group for purposes of study weighting. Outcomes were calculated such that smaller numbers (ie, OR < 1 or SMD < 0) indicated results favorable to in situ simulation. Meta-analysis was done using SAS 9.4.20 Studies falling outside this high importance group were synthesized descriptively.
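The conversion-and-pooling step can be sketched as follows. This is our own minimal illustration in Python (the review itself performed its meta-analyses in SAS 9.4), applying DerSimonian-Laird random-effects weights to study-level log odds ratios computed from purely hypothetical event counts:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects (eg, log ORs or SMDs) using DerSimonian-Laird
    random-effects weights; returns the pooled effect and its 95% CI."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q drives the between-study variance estimate tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical (deaths, total) pairs per study: intervention vs. historical control
studies = [((30, 400), (45, 400)), ((12, 150), (20, 150)), ((8, 120), (11, 120))]
log_ors, variances = [], []
for (a, n1), (c2, n2) in studies:
    b, d = n1 - a, n2 - c2
    log_ors.append(math.log((a * d) / (b * c2)))       # log odds ratio
    variances.append(1 / a + 1 / b + 1 / c2 + 1 / d)   # Woolf variance
pooled, ci = dersimonian_laird(log_ors, variances)
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(ci[0]):.2f} to {math.exp(ci[1]):.2f})")
```

By the sign convention described above, an exponentiated pooled effect below 1 favors in situ simulation.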

RESULTS

The initial search resulted in 13,213 potential articles, of which 62 were included. Twenty-four of these were deemed high importance and are analyzed in-depth. The remaining 38 are summarized descriptively. The trial flow is depicted in Figure 1.

FIGURE 1: Trial flow of the systematic review search and screening process.

High Importance Studies

Most of these 24 studies addressed some form of resuscitation or critical event as the clinical focus, although a few addressed communication and subacute diagnostic decision making. Twenty-one focused on interprofessional teams, 2 on maternity staff, and 1 on physicians. Specialties centered mostly on acute care environments (trauma, pediatric and adult intensive care, neonatology, OB-GYN), although one did address outpatient practice (overall characteristics of these studies are provided in Table, Supplemental Digital Content 3: Individual study details: High importance studies, https://links.lww.com/SIH/B22).

Study Quality

The methodological quality of the high-importance studies varied widely. Studies involved 1 of 4 designs: randomized controlled trials (RCTs) (4 studies, 17%), studies in which 2 different groups were compared prospectively but no randomization was conducted (2-group nonrandomized comparative) (3 studies, 13%), trials in which in situ simulation was implemented at an organizational level and the effects of that training on patient care were compared with historical patient data obtained before implementation (2-group historical controls) (12 studies, 50%), and single-group studies in which outcomes during and after in situ simulation were measured (1-group pre-post) (5 studies, 21%). We considered those studies using historical controls to be 2-group comparisons as the historical patient outcomes measured would likely be attributable (at least in part) to providers not included in the in situ educational cohort. It is worth noting, however, that none of these studies identified the providers who cared for the patients serving as historical controls and thus some overlap is possible. Eighteen studies (75%) gathered at least some objective patient-level outcome measures (mortality, clinical care metrics, technical and nontechnical skill–related behaviors measured during patient care, etc), 5 (21%) assessed latent safety threats present within the care environment, and 3 (13%) measured learner-level outcomes, such as knowledge and preference. Many measured multiple outcome levels. The median MERSQI score was 13.75 [interquartile range (IQR), 12–14; range, 8.5–16.5].
Of the 19 articles with a 2-group design (RCT, 2-group nonrandomized comparative, or 2-group historical control), 12 (63%) were found to have unit of analysis errors because of their use of patient-level sample sizes without correction for potential within-provider correlations (ie, “nesting” of patients within specific providers).21 In the results that follow, the sample size is reported, whenever possible, in terms of the number of providers trained by the in situ intervention. We again note that our rationale for designating these as high importance studies was not methodological but was based on the degree to which the outcomes reported or the comparison described assisted us in answering our primary research question. Their overall ability to address this question must therefore be evaluated in light of the above quality assessment. Table 1 summarizes relevant aspects of high importance study quality, and Table, Supplemental Digital Content 4: Study quality details—high importance studies, https://links.lww.com/SIH/B23, provides further methodological details.
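The inflation introduced by such unit of analysis errors can be illustrated with the standard Kish design-effect correction for clustered observations; the patient counts and intraclass correlation below are hypothetical and not drawn from any included study:

```python
def effective_sample_size(n_patients, patients_per_provider, icc):
    """Patients 'nested' within the same provider are correlated, so the
    effective number of independent observations shrinks by the Kish
    design effect: deff = 1 + (m - 1) * ICC."""
    deff = 1 + (patients_per_provider - 1) * icc
    return n_patients / deff

# 1000 patients cared for by 50 providers (20 each) with a modest ICC of 0.05
print(round(effective_sample_size(1000, 20, 0.05)))  # → 513
```

Analyzing such data at the patient level treats all 1000 observations as independent and thereby understates the standard error of any provider-level intervention effect.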

TABLE 1 - Quality of Included High Importance Studies

Quality Category (max score) | Characteristic (MERSQI score) | No. Present (%), n = 24
Study design (max 3) | Single group pretest and posttest (1.5) | 5 (21%)
 | Nonrandomized, 2 groups (2) | 15 (63%)
 | RCT (3) | 4 (17%)
Sampling: institutions studied (max 1.5) | 1 (0.5) | 19 (79%)
 | ≥3 (1.5) | 5 (21%)
Sampling: follow-up (max 1.5) | Not applicable | 20 (83%)
 | <50% or unreported (0.5) | 1 (4%)
 | >75% (1.5) | 3 (13%)
Type of data (max 3) | Assessment by participants (1) | 2 (8%)
 | Objective measurement (3) | 22 (92%)
Validity of evaluation instrument (max 3) | Not applicable | 12 (50%)
 | Internal structure (1) | 4 (17%)
 | Content (1) | 7 (29%)
Data analysis: appropriate (max 1) | Inappropriate (0) | 12 (50%)
 | Appropriate (1) | 12 (50%)
Data analysis: complexity (max 2) | Descriptive only (1) | 3 (13%)
 | Beyond descriptive (2) | 21 (88%)
Outcomes (max 3) | Satisfaction, attitudes, perceptions, opinions, general facts (1) | 2 (8%)
 | Knowledge, skills (1.5) | 1 (4%)
 | Behaviors (2) | 7 (29%)
 | Patient/healthcare outcome (3) | 14 (58%)

This table provides detail on study quality as broken down by MERSQI scoring component. Those studies in which follow-up and/or evaluation instrument was deemed not applicable used patient-level data to evaluate outcomes. Data analysis was deemed inappropriate in most cases because of unit of analysis errors (ie, treating the patient or the event as the sample size rather than the provider educated). Quality categories are derived from the MERSQI. Numbers given to specific characteristics in the second column represent the MERSQI scores associated with each item. Specific details regarding individual study quality features as classified by MERSQI characteristic are located in Supplemental Digital Content 4, https://links.lww.com/SIH/B23.

Subquestion 1: In Situ Simulation as an Addition to Training Already in Use

Nineteen of the 24 articles addressed the impact of in situ simulation on behavioral and patient effect outcomes when combined with training methods already in use.22–42 In 5 (26%), this took the form of clinically relevant latent safety events measured within the in situ environment using a 1-group pre-post design. Twelve other studies (63%) used a historical group comparison design, in which clinical metrics gathered before the intervention were compared with clinical metrics after a period of in situ simulation (ie, 2-group historical controls). The remaining 2 studies (11%) were RCTs comparing patient metrics at centers receiving in situ simulation-based interventions focused on intrapartum and postpartum care with centers that did not. The median MERSQI score for these articles was 14 (IQR, 12–14). Many articles examined 2 or more outcome subtypes. Results overall provide tentative support for in situ simulation. Findings related to specific outcomes are explored hereinafter.

Mortality

Eight of the 24 studies assessed patient mortality.22,24,26,30,31,36,37,42 Meta-analysis of these studies revealed a small but statistically significant association between the use of in situ simulation and reduced risk of death (OR, 0.66; 95% CI, 0.55 to 0.78), although individual studies were heterogeneous in terms of quality and the specific implementation of in situ simulation used. Figure 2 displays the results of this meta-analysis. Six (75%) had a 2-group historical control design and two (25%) were RCTs. Clinical tasks assessed included neonatal, pediatric, and adult resuscitation. Provider sample sizes ranged from 24 to 445 (median, 132). Two of these studies demonstrated this association in a low- to middle-income country context using low-fidelity, low-technology approaches to in situ simulation.22,26 One study reported costs of approximately $50,000 to initiate their program and $130 per person for facilitator training.36

FIGURE 2: Meta-analysis of mortality data in studies addressing in situ simulation as an addition to training already in use. This forest plot depicts the results of the meta-analysis synthesizing the mortality data from all studies addressing in situ simulation as an addition to current institutional training methods as compared with current training practices alone among historical controls. Meta-analysis of these studies revealed a statistically significant reduction in risk of death (OR, 0.61; 95% CI, 0.47 to 0.79) after in situ simulation. For studies with a 2-group historical control design that did not include a provider control group sample size, this was assumed to be numerically comparable with the intervention group for purposes of study weighting; therefore, the pooled sample size in this figure exceeds the sum of the individual study sample sizes. Of note, the sample size used for van den Broek et al31 is presented at the center level, as a provider sample size was not given and could not be inferred from the study data. 2-Grp Hist, 2-group historical controls; Neo Resusc, neonatal resuscitation; Resusc, resuscitation.

Clinical Metrics of Care

Nine of the 24 studies assessed metrics of clinical care such as time to initiate cardiopulmonary resuscitation, frequency of use of key clinical interventions, and scores on clinical quality checklists.22,24,26,28–30,33,34,37,39,42 Meta-analysis of these results showed a small but statistically significant pooled standardized mean difference (SMD) of −0.34 (95% CI, −0.45 to −0.21) in favor of in situ simulation. Figure 3 graphically displays these results. Eight of these studies (88%) used a 2-group historical control design and one (11%) was an RCT. Clinical tasks included neonatal resuscitation, obstetric/gynecological care, neurocritical care, trauma, pediatric and adult resuscitation, and community drug prescribing. Provider sample sizes ranged between 21 and 445 (median, 39).

FIGURE 3: Meta-analysis of clinical metrics of care in studies addressing in situ simulation as an addition to training already in use. This forest plot depicts the results of the meta-analysis synthesizing the data pertaining to clinical metrics of care. All studies included compared in situ simulation as an addition to current institutional training methods to current training practices alone among historical controls. Meta-analysis of these studies revealed a statistically significant improvement in clinical metrics of care (SMD, −0.34; 95% CI, −0.45 to −0.24) after in situ simulation. For studies with a 2-group historical control design that did not include a provider control group sample size, this was assumed to be numerically comparable with the intervention group for purposes of study weighting; therefore, the pooled sample size in this figure exceeds the sum of the individual study sample sizes. 2-Grp Hist, 2-group historical controls; Adv Event, adverse event avoidance; Clin Perf, clinical performance checklist; Guide Adh, clinical guideline adherence; Neo Resusc, neonatal resuscitation; Outpt Presc, outpatient prescribing; Resusc, resuscitation; Stroke Mgmt, stroke management.

Mitigation of Latent Safety Threats

Four of the 24 studies reported data on safety metrics, all of which were 1-group pre-post studies in which clinically relevant latent safety threats were measured in the simulated environment.23,27,29,41 We did not pool results with meta-analysis because data were insufficient to calculate SMDs and also because of disparate designs and outcomes. One study showed a median decrease of 2.5 latent safety threats per simulation.23 Two studies demonstrated heightened detection of latent safety threats (49 more latent safety threats detected during in situ simulation)27 and enhanced latent safety threat mitigation (10 latent safety threats mitigated after in situ simulation).41 The final study used in situ simulation both to implement and to evaluate a massive transfusion protocol and noted a decrease from 31 transfusion-related threats before implementation to 4 afterward.29 This study also reported a $10,183.42 total cost for the simulation aspects of the project.29 Clinical tasks included neonatal resuscitation, trauma, and obstetric/perinatal care. Provider sample sizes ranged between 27 and 218 (median, 126.5).

Nontechnical Skill–Related Behaviors

Four of the 24 studies reported data on measurements of nontechnical skill–related behaviors obtained during actual clinical care, all of which showed an association between in situ simulation and improved behaviors.32,35,39,40,42 Three of these studies (75%) used a 2-group historical control design and one (25%) used a 1-group pre-post design. Clinical tasks included interprovider communication, trauma, and obstetric/perinatal care. Provider sample sizes ranged from 15 to 279 (median, 80). Meta-analysis of 3 of the studies showed a small but statistically significant association between in situ simulation and improved skill (SMD, −0.52; 95% CI, −0.99 to −0.05), although, again, there was substantial heterogeneity in quality and implementation.35,40,42 Figure 4 graphically displays these results. The fourth study did not report a provider sample size and thus could not be included in the meta-analysis but reported similar findings.39

FIGURE 4: Meta-analysis of nontechnical skills assessments in studies addressing in situ simulation as an addition to training already in use. This forest plot depicts the results of the meta-analysis synthesizing the data pertaining to nontechnical skills. All studies included compared in situ simulation as an addition to current institutional training methods to current training practices alone among historical controls. Meta-analysis of these studies revealed a statistically significant improvement in nontechnical skills (SMD, −0.47; 95% CI, −0.87 to −0.07) favoring in situ simulation. For studies with a 2-group historical control design that did not include a provider control group sample size, this was assumed to be numerically comparable to the intervention group for purposes of study weighting; therefore, the pooled sample size in this figure exceeds the sum of the individual study sample sizes. 1-Grp pre-post, 1-group pre-post; 2-Grp Hist, 2-group historical controls; Prov Comm, provider communication patterns.

Diagnostic Decision Making

Two of the 24 studies assessed the effect of in situ simulation on diagnostic decision making in the perinatal context, one using an RCT design and the other a 2-group historical control design.25,31 The first demonstrated an association with improved recognition of need for emergency obstetric care in the in situ group (SMD, −0.02; n = 1272 births), and the second reported an association between in situ simulation and improvement in the decision-to-deliver interval (SMD, −0.62; n = 102 births). Neither study reported a provider sample size, precluding meta-analysis.

Subquestion 2: Effects of In Situ Simulation Frequency

We found one 2-group nonrandomized comparative study assessing the effect of in situ simulation frequency on behaviors and patient-level outcomes. This multisite study was conducted across 26 hospitals and reported a significant association between higher frequency of in situ simulation and improved survival after cardiac arrest (OR, 0.62; n = 572 teams).38 The higher frequency hospitals performed 177 in situ simulations per 100 beds, whereas the lower frequency hospitals performed 3.2 in situ simulations per 100 beds.

Subquestion 3: In Situ Simulation Compared With Other Simulation Modalities

Four of the 24 studies compared in situ simulation to other simulation-based educational modalities, using a variety of outcomes.43–46 Neither modality demonstrated a consistent advantage in terms of satisfaction (preference), knowledge, or skill. Two of these (50%) were 2-group nonrandomized comparative studies, and 2 (50%) were RCTs. The median MERSQI score for these articles was 12.75 (IQR, 11.5–13.63). Supplemental Digital Content 3: Individual study details—high importance studies, https://links.lww.com/SIH/B22, provides a full description of these studies. Findings related to specific outcomes are explored hereinafter.

Mitigation of Latent Safety Threats

One RCT assessed the effect of in situ simulation, as compared with simulation conducted off-site, on latent safety threat detection within a group of 97 obstetric anesthesia providers.45 Within this study, 51 latent safety threats were detected in the in situ group, compared with 40 latent safety threats in the off-site simulation group. Of note, this study addressed a variety of outcomes, which are reported hereinafter.

Participant Reactions and Preference

Three of the 24 studies (two 2-group nonrandomized comparative studies and the RCT conducted among obstetric anesthesia providers referenced previously) examined the preferences and reactions of participants in in situ simulations compared with traditional simulation approaches; 1 of the 2-group nonrandomized comparative studies favored in situ simulation (SMD, −0.08; n = 1415 providers)44 and the other favored traditional simulation (SMD, 4.26; n = 120 providers).46 The RCT also favored traditional simulation overall (SMD, 0.068; n = 97 providers) but noted higher perceived authenticity in in situ simulation (SMD, −0.49; n = 97 providers).45

Knowledge

The RCT conducted among obstetric anesthesia providers referenced previously found lower knowledge in the in situ group (SMD, 0.07; n = 97 providers).45

Measures of Stress

The RCT conducted among obstetric anesthesia providers referenced previously found higher levels of salivary cortisol (a measure of stress) in the in situ group (SMD, −0.42; n = 97 providers).45

Technical Skills

Two of the 24 studies (one RCT focused on airway management and one 2-group nonrandomized comparative study focused on global resuscitation) compared technical skills after in situ simulation versus traditional training; the RCT favored in situ simulation (SMD, −0.06; n = 57 providers)43 and the 2-group nonrandomized comparative study favored traditional simulation (SMD, 0.67; n = 120 providers).46

Additional Studies

Thirty-eight of the included studies did not fit within the high importance group as defined previously and hence were not included in meta-analysis. Thirty-two (84%) were 1-group pre-post, 4 (10%) were 2-group nonrandomized comparative, and 2 (5%) were RCTs. Of the 6 comparative studies presented, 2 compared in situ simulation to no intervention,6,48 and 3 to other educational methods (video review, self-study, lecture).49–51 The sixth study examined first-year resident levels of confidence in taking call after either traditional simulation followed by in situ simulation or the reverse order; improvement was descriptively noted in both groups, but no formal statistical comparison was done.47 Twenty-two studies (58%) assessed participant preference and reaction, 8 (21%) knowledge, and 19 (50%) skill. Results are descriptively evaluated by directionality of findings. Of the 1-group pre-post studies, 31 (97%) reported results in favor of in situ simulation, and in 1 (3%) the direction of effect could not be ascertained from the included information. Both studies comparing in situ simulation with no intervention reported results in favor of in situ simulation.6,48 Of the 3 studies comparing in situ simulation with other educational methods, 2 (comparison with video and lecture) reported results in favor of in situ simulation.50,51 Direction of effect was not ascertainable in the final study49 (the full references for each of these 38 studies are provided in Text Document, Supplemental Digital Content 5: Additional study listing, https://links.lww.com/SIH/B24).

DISCUSSION

The evidence synthesized in this systematic review suggests that in situ simulation as an adjunct to current training methods at an institutional level may lead to improvements in clinician behaviors and patient effect outcomes. These improvements were seen in provider technical and nontechnical skill–related behaviors, and decision making, as well as in patient-level variables, such as time to administer key interventions and mortality. While the changes were not large, they were generally consistent in the direction of effect.

Two of the studies addressing mortality also used well-known low-fidelity, low-cost approaches to in situ simulation (ie, “Helping Babies Breathe” and “Helping Mothers Survive Bleeding After Birth”) that are specifically designed for use in low- and middle-income settings.22,26 Both demonstrated significant associations with improved outcomes, and one was able to apply the intervention at scale (61 centers). The development of similar low-fidelity in situ interventions targeting other critical health issues has potential for impact on patient outcomes.

Higher frequency exposure to in situ simulation also may result in improved outcomes. While only one study examined this question, it is of relatively high quality and focuses on a high-level outcome (mortality).38 The study only addresses, however, a relatively narrow clinical domain (ie, resuscitation training for providers likely to take part in in-hospital cardiac arrest) and uses short, skills-focused simulations. Therefore, these results may not be generalizable to other types of in situ simulation. Moreover, the exact frequency of in situ simulation required to achieve the documented effects cannot be confirmed from the study, as the difference in simulation frequency between the high and low dose groups was greater than a factor of 50.

The studies synthesized in this review suggest that in situ simulation, in general, does not seem to offer a consistent advantage or disadvantage over traditional simulation approaches in terms of learner perception, knowledge acquisition, and technical skill improvement. For many outcomes, the SMDs were quite small, suggesting relative equivalence in practical terms. This is unsurprising, as cognition and skill acquisition would logically be more dependent on the instructional content of the simulation than the setting. One study, however, did report higher participant scores for in situ simulation in terms of authenticity, which suggests that situations in which environmental fidelity is especially important may benefit from in situ approaches.

It is important to note that only 2 of the included studies explicitly reported cost information29,36 (although 2 others used well-known low-cost simulation programs from which cost can be reasonably assumed22,26) and none quantitatively measured the impact on bed flow and staffing. The issue of cost in health profession education is receiving increased scrutiny.52 A 2013 systematic review noted that only 6.1% of comparative studies addressing simulation report on cost,53 and 2 subsequent reviews identified similar shortcomings for other fields of health professions education.54,55 The emotional impact of these simulations on patients and their families who may witness them was also unexamined.56 We thus recommend that future studies of in situ modalities strive to incorporate measures of cost and impact on local care. This would be particularly useful in studies examining the appropriate frequency of in situ simulation, as resource use could then be directly correlated with frequency-based differences in effectiveness.

Finally, we acknowledge the 38 studies not included in the high importance group. Nearly all of these studies found results favorable to in situ simulation, but the vast majority made comparison with no intervention (1-group pre-post, or a no-intervention control group). Studies of this type add little to our understanding of education science.57 The remaining studies made comparison with nonsimulation interventions, which means we cannot isolate the effect of the in situ element (as a key feature of simulation) from the effect of simulation itself (broadly construed). We believe that future research exploring how and when to use in situ simulation (ie, seeking to determine best practices for this modality) will be particularly useful.58 Such studies could use a variety of outcomes.

Limitations

There are significant quality limitations in the literature assessed. Those studies addressing patient-level outcomes used historical control data, raising concern for bias and confounding in linking specific interventions with observed outcomes. Many of these studies also chose to use the patient as the unit of analysis, a statistically inappropriate approach as the educational intervention was delivered at the provider level (ie, nesting). Such analyses erroneously inflate the effective sample size and increase the possibility of type 1 (alpha) error.21 The total number of studies that merited deeper analysis (24) was small, and only 4 explicitly compared in situ simulation to traditional approaches. Finally, study reporting was inconsistent and frequently poor, which made it difficult to extract the variables needed for many of the meta-analyses. In some cases, differences in how key variables were presented also made it difficult to pool results, and some readers might question the appropriateness of some of the groupings we did pool for analysis. Because of this, both the meta-analysis results and our outcome synthesis of the 24 high importance studies must be considered provisional at best.

In terms of the review methodology, a key limitation concerns the nature of our original research question, which was framed primarily in terms of educational efficacy. This restricts our ability to generalize about domains peripheral to this, such as patient safety and the detection of latent safety threats. It is quite likely that a search strategy focused on patient safety would have uncovered more articles addressing these issues. We also acknowledge that our research team used a large number of reviewers (18), which could have introduced unwanted variation into the screening and selection process; however, all conflicts were addressed by either the first or senior author (A.W.C., Y.L.), which should mitigate this to some extent.

Integration

Our findings affirm the results of prior reviews of in situ simulation. Two reviews noted potential impacts on patient outcomes, although in many cases this was difficult to disentangle from other interventions.8,11 A further review noted in situ simulation's value as a means of establishing an authentic care environment but highlighted a lack of data regarding cost and effects on the clinical environment.10 Finally, 3 additional reviews noted the efficacy of in situ simulation in identifying latent safety threats.7,9,12 Our review further supports these observations using a search strategy with broader scope.

IMPLICATIONS

This systematic review suggests that adding in situ simulation-based interventions in an institutional context could improve patient morbidity and mortality. Moreover, low-cost, low-fidelity versions of these interventions have potential to impact health issues in low- and middle-income settings. Higher frequency exposure to skills-based in situ training seems to enhance its beneficial effect, although the frequency threshold for this phenomenon has yet to be determined and it has only been demonstrated for “just-in-place” approaches. While overall learner preference, knowledge acquisition, and skill/behavior acquisition seem similar in both in situ and traditional simulation approaches, in situ simulation may have value in situations where environmental fidelity is critical. Additional systematic review is needed to comprehensively summarize current results on the use of in situ simulation to address noneducational outcomes, such as latent safety threats. Further studies are needed to determine whether in situ simulation poses any risk in terms of patient psychological reaction and overall hospital bed flow and to examine cost-effectiveness.

ACKNOWLEDGMENTS

The authors acknowledge the contributions of the following individuals to the systematic review process: Ellen Deutsch, MD; Mary Patterson, MD; Jihai Liu, MD; Stephanie Barwick, RN, RM, MA, DNP; David Grant, MD; and Kiran Hebbar, MD.

REFERENCES

1. Lioce L, Lopreiato J, Downing D, et al. Healthcare Simulation Dictionary. 2nd ed. Rockville, MD: Agency for Healthcare Research and Quality; 2020.
2. Patterson MD, Blike GT, Nadkarni VM. In situ simulation: challenges and results. In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches (Vol 3: Performance and Tools). Rockville, MD: Agency for Healthcare Research and Quality; 2008.
3. Calhoun AW, Boone MC, Peterson EB, Boland KA, Montgomery VL. Integrated in-situ simulation using redirected faculty educational time to minimize costs: a feasibility study. Simul Healthc 2011;6(6):337–344.
4. Auerbach M, Kessler DO, Patterson M. The use of in situ simulation to detect latent safety threats in paediatrics: a cross-sectional survey. BMJ Simul Technol Enhanc Learn 2015;1(3):77–82.
5. Posner GD, Clark ML, Grant V. Simulation in the clinical setting: towards a standard lexicon. Adv Simul (Lond) 2017;2:15.
6. Niles D, Sutton RM, Donoghue A, et al. “Rolling refreshers”: a novel approach to maintain CPR psychomotor skill competence. Resuscitation 2009;80(8):909–912.
7. Gomez-Perez V, Escriva Peiro D, Sancho-Cantus D, Casana Mohedo J. In situ simulation: a strategy to restore patient safety in intensive care units after the COVID-19 pandemic? Systematic review. Healthcare (Basel) 2023;11(2):263.
8. Goldshtein D, Krensky C, Doshi S, Perelman VS. In situ simulation and its effects on patient outcomes: a systematic review. BMJ Simul Technol Enhanc Learn 2020;6(1):3–9.
9. Truchot J, Boucher V, Li W, et al. Is in situ simulation in emergency medicine safe? A scoping review. BMJ Open 2022;12(7):e059442.
10. Armenia S, Thangamathesvaran L, Caine AD, King N, Kunac A, Merchant AM. The role of high-fidelity team-based simulation in acute care settings: a systematic review. Surg J (N Y) 2018;4(3):e136–e151.
11. Fent G, Blythe J, Farooq O, Purva M. In situ simulation as a tool for patient safety: a systematic review identifying how it is used and its effectiveness. BMJ Simul Technol Enhanc Learn 2015;1(3):103–110.
12. Villemure C, Tanoubi I, Georgescu LM, Dube JN, Houle J. An integrative review of in situ simulation training: implications for critical care nurses. Can J Crit Care Nurs 2016;27(1):22–31.
13. Owei L, Neylan CJ, Rao R, et al. In situ operating room-based simulation: a review. J Surg Educ 2017;74(4):579–588.
14. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 st
