Simulation is being used for the training of healthcare workers in many settings and to address a variety of clinical issues. Available evidence suggests that performance improvements achieved through simulation training translate into improvements in the clinical care of patients. However, current practices in simulation training vary widely, and no existing guidelines based on systematic synthesis of the best available evidence are available to guide practice. In this manuscript, we present the first evidence-based guidelines for simulation training, developed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) methodology.
How to Use These Guidelines
These guidelines are primarily intended to help trainers using simulation to make decisions about the optimal training of healthcare professionals. Other purposes are to educate, to inform policy and advocacy, and to define future research needs. The guidelines are applicable to all who face the simulation training uncertainties addressed herein, without regard to specialty, training, or interests. Because of the complexity of the healthcare environment, these guidelines are intended to indicate the preferred, but not necessarily the only, acceptable approach to simulation training. Guidelines are intended to be flexible depending on individual circumstances. Given the wide range of practices in healthcare, educators must always choose the course best suited to the individual learner and the variables in existence at the moment of decision.
Interpretation of Strong and Conditional Recommendations
The strength of these evidence-based recommendations is either “strong” or “conditional,” as per the GRADE approach and as previously described.1,2 The words “the guideline panel recommends” are used for strong recommendations and “the guideline panel suggests” for conditional recommendations.1 Strong recommendations can be adopted as policy in most situations. Conditional recommendations require shared decision-making between trainers and learners. When insufficient evidence existed to inform recommendations, expert consensus opinion was documented.
Key questions (KQ) addressed by these guidelines and recommendations:
KQ 1: Should in situ simulation vs the education accrued during typical organizational practice be used for training healthcare professionals to improve clinician behaviors during patient care and/or patient outcome?
For healthcare provider training, we suggest that participation in in situ simulations should be considered to improve healthcare professionals' performance, patient outcomes, and healthcare system quality and safety compared with the education accrued during typical organizational practice (conditional recommendation, moderate evidence).
KQ 2: Should a higher frequency of short in situ simulation events with structured debriefing vs a lower frequency of short in situ simulation events with structured debriefing be used for training healthcare professionals to improve clinician behaviors during patient care and/or patient outcomes?
For healthcare provider training, we suggest more frequent participation in short skill-oriented in situ simulations to benefit patient outcomes (conditional recommendation, moderate evidence).
KQ 3: Should in situ simulation vs another non–in situ simulation modality be used for training healthcare professionals to improve perceptions, knowledge, skills, clinician behaviors, and patient care outcomes?
For healthcare provider training, we suggest the use of in situ simulation to uncover or mitigate latent safety threats in the healthcare environment and to enhance the environmental authenticity and fidelity of the experience (conditional recommendation, moderate evidence).
KQ 4: Should just-in-time training (JIT) vs no JIT be used for simulation training of healthcare professionals (trainees or practitioners)?
The panel suggests that 5 to 30 minutes of just-in-time simulation training (within 24 hours of performance) should be implemented with healthcare professionals (trainees or practitioners) engaged in high-stakes medical or surgical procedures particularly when there has been a prolonged period of no training (>1–2 weeks) (conditional recommendation, very low certainty evidence).
KQ 5: Among healthcare professionals (trainees or practitioners) engaged in simulation, does spaced training (separation of training into several discrete sessions over a prolonged period with measurable intervals between training sessions) compared with massed training (all training occurs during the same session) improve skill acquisition?
The panel suggests either spaced or massed simulation training for procedural skill acquisition using simulation (conditional recommendation, very low level of evidence).
KQ 6: Should higher physical realism simulators/task trainers vs lower physical realism simulators/task trainers be used for healthcare simulation training of individuals in lower and middle resource settings?
The panel suggests the use of lower physical realism as opposed to higher physical realism simulators and task trainers for healthcare professionals and/or healthcare trainees/students in low- and middle-income (LMIC) settings (conditional recommendation, very low certainty of evidence).
KQ 7: Should high-fidelity simulation vs low-fidelity simulation be used for team training for healthcare professionals and/or healthcare trainees/students?
The panel suggests the use of either higher- or lower-fidelity simulation for team training by healthcare professionals and/or healthcare trainees/students (conditional recommendation, very low certainty of evidence).
KQ 8: Should distance simulation vs in-person simulation vs mixed distance simulation (ie, in-person and distance) be used for the training of healthcare professionals?
The panel suggests the use of either distance, in-person, or mixed distance simulation for the training of healthcare professionals (conditional recommendation, very low certainty of evidence). The panel also suggests that distance simulation may be preferable for specific purposes (eg, geographic limitations) and motivations (eg, convenience) (expert consensus recommendation).
KQ 9: For the team-based training of healthcare professionals, do any specific conditions in the clinical environment before or after training (eg, leadership support, positive work culture, staff huddles) compared with other conditions or no such conditions lead to improved learning outcomes and patient outcomes?
When implementing team-based training with healthcare professionals to improve patient safety, we suggest implementing facilitated discussions, coaching, wider communication of learning objectives to staff members, or other leadership initiatives to facilitate transfer of skill to the clinical environment (conditional recommendation, very low certainty evidence).
KQ 10: Does a specific method or content of team training improve learning outcomes of healthcare professionals participating in simulation-based training of teamwork competencies?
We suggest that debriefing after simulation training of teamwork competencies may be conducted as traditional instructor-led debriefing or by use of alternate methods such as rapid-cycle deliberate practice (RCDP), peer-led debriefing, or video-assisted debriefing (conditional recommendation, very low certainty evidence). We suggest the use of either low- or high-fidelity simulators when training teamwork competencies (conditional recommendation, very low certainty evidence). We suggest that combining the training of teamwork competencies using simulation with other learning modalities such as classroom activities or e-learning modalities may be of added benefit (conditional recommendation, very low certainty evidence).
KQ 11: For healthcare professionals training teamwork competencies, does interprofessional team training compared with single-professional team training lead to improved learning outcomes and patient outcomes?
We suggest conducting training of teamwork competencies with interprofessional teams in situations where professionals are expected to work together across specialties or disciplines in clinical practice (conditional recommendation, very low certainty evidence).
KQ 12: Is competency-based simulation procedural training superior to non–competency-based approaches in improving skill acquisition and patient outcomes?
The panel suggests that competency-based simulation training methods be used for procedural skill training of healthcare professionals (conditional recommendation, moderate level of evidence).
KQ 13: Does the use of virtual reality (VR), augmented reality (AR), or extended reality (XR) simulation improve healthcare professional learning and patient outcomes compared with traditional simulation methods?
Either XR or traditional simulation can be used for the training of healthcare professionals, as both have comparable learning outcomes (conditional recommendation, very low certainty of evidence). The panel suggests that VR experiences should be proctored, include debriefing, have a backup plan for when learner cybersickness or myopia is encountered, and document time and costs (expert consensus recommendation).
KQ 14: Does the use of XR simulation improve surgical/procedural learning and patient outcomes compared with standard training methods?
The panel suggests that XR simulation modalities may be an effective training modality for surgical and procedural training (expert consensus recommendation).
KQ 15: In healthcare professionals, does the use of 1 debriefing or feedback intervention, compared with a different debriefing or feedback intervention, improve educational and clinical outcomes in simulation-based education?
For healthcare provider training using simulation, we suggest that structured debriefing and feedback should be included (conditional recommendation, very low certainty of evidence).
KQ 16: Among healthcare professionals, does the use of simulated participants (SPs) methodology related to communication skills have an effect on improving learner knowledge, skills, attitudes, or patient outcomes compared with other simulation methodologies?
The panel suggests an integrated approach to teaching communication knowledge, skills, and attitudes in healthcare education. This approach should prioritize the use of SPs for hands-on skill development, incorporate role-play scenarios for practical application, and include reflective exercises to nurture the growth of empathetic and patient-centered attitudes among healthcare professionals (expert consensus recommendation).
Introduction
Aim of These Guidelines and Specific Objectives
These evidence-based guidelines from the Society for Simulation in Healthcare (SSH) aim to support healthcare professionals in decisions on the most effective methods for simulation training in healthcare. The key target audiences include clinician and nonclinician educators and their learners. Other stakeholders involved in delivering simulation training, as well as in the delivery of patient care that may be affected by such training, may also consider these recommendations in their deliberations.
Description of the Training Problem
Over the past 30 years, there has been increasing use of simulation training modalities for the training of healthcare workers.3 Compared with traditional training, simulation enables experiential training that augments learning and knowledge and skill retention4 and has been shown to positively impact patient outcomes and the delivery of care.5 However, there is variability in teaching practices that affects training quality and impacts training outcomes, and therefore modulates the potential benefit patients may derive from simulation training. Guidelines can assist educators and learners in choosing the most appropriate training methods based on systematic synthesis of the best available evidence.
METHODS
The development of these guidelines was conceived by the research committee of the SSH and was conducted in conjunction with the Society's 2023 research summit. After committee and SSH board approval, a steering group was formed to oversee this project (see Table, Supplemental Digital Content 1, https://links.lww.com/SIH/B26, which lists all the contributors to this project). After numerous deliberations, the steering group defined 12 topics of relevance to simulation training: (1) mastery learning/deliberate practice/technical skills; (2) feedback/debriefing; (3) spaced learning/booster training/warm up/JIT; (4) self-guided learning/regulated learning/peer to peer learning; (5) team training/nontechnical skills training; (6) in situ training (for training vs process improvement); (7) VR/AR/hybrid; (8) remote simulation/telesimulation (consider training and assessment); (9) standardized patients; (10) simulation/simulator fidelity (task resemblance of reality); (11) faculty development; and (12) low/high stakes assessment/formative/summative. The steering committee chose 2 coleads for each of these topics based on their background and expertise in the area (see Table, Supplemental Digital Content 1, https://links.lww.com/SIH/B26, which lists all the contributors to this project). The leads of each group in collaboration with the steering group determined and invited the expert panel members for guideline development. The expert panels deliberated and prioritized topic-specific guideline questions and formed corresponding systematic review working groups. Reviewers included either members with prior methodological experience and expertise or members who underwent methods pretraining. Findings of the systematic reviews were summarized in GRADE evidence profiles and summary of findings tables. Guideline recommendations were developed with the GRADE Evidence to Decisions (EtD) approach.6,7 When evidence was lacking, expert panels provided consensus opinion.
The Essential Reporting Items for Practice Guidelines in Healthcare (RIGHT) checklist was used to draft this guideline.8
Guideline Panel Organization
The guideline panel (expert panel) was composed of the topic coleads and volunteers with subject matter expertise. The guideline panel also included steering committee members. A nonvoting guideline development methodologist (M.T.A.) and learners (A.C., S.-M.K.-M.) also participated in panel meetings. All guidelines contributors and their roles are listed in Supplemental Digital Content 1, https://links.lww.com/SIH/B26.
Guideline Funding and Declaration and Management of Competing Interests
All committee members and voting members of the guideline panel were volunteers. Funding for the methodologists, the librarian, and partial salary support for the research fellow was provided by SSH. There was no monetary or other support from industry. All guideline panel members were required to declare conflicts of interest. The guideline leads and steering committee chair evaluated these declarations for any pertinent conflicts. All disclosed potential conflicts of interest are listed in Supplemental Digital Content 2 (see document, Supplemental Digital Content 2, https://links.lww.com/SIH/B27, conflicts of interest).
Selection of Questions and Outcomes of Interest
Under the guidance of the steering committee, topic coleads, and guideline methodologist, the expert panel created a list of KQs relevant to each topic using the PICO (population, intervention, comparator, outcome) format. The outcomes were clearly defined by the expert panel using the Kirkpatrick levels of educational outcomes,9 and those deemed “critical” or “important” to decision-making for each KQ were included. The importance of these outcomes was revisited by panel members after they had reviewed the systematic review evidence. Outcomes included learner satisfaction with training; knowledge and skills improvement; behavior/performance improvement in a clinical environment as a result of training; and a variety of patient and process outcomes that changed as a result of training. Cost to the patient was included as an additional consideration when data were available.
Evidence Synthesis and Grading the Certainty of Evidence
A standard systematic review approach using 2 independent reviewers (with third-party arbitration) was adopted to synthesize the best available evidence for each KQ. A librarian with expertise in the area searched multiple databases, including PubMed, Cochrane Library, and Embase, in May 2021. Systematic reviews and the bibliographies of select included studies were hand searched for additional studies missed in the literature search. Given the potential paucity of data, both randomized controlled trials (RCTs) and observational studies addressing the KQs of interest were eligible for inclusion. Only peer-reviewed English language studies, which formed the bulk of the existing literature, were included during study selection. Retrieved records were deduplicated and then screened for eligibility at 2 levels (title and abstract, and full-text review) against the aforementioned eligibility criteria.
Study data were extracted using Covidence digital software for general study characteristics and outcomes.10 The Cochrane Risk of Bias Tool for RCTs and the Newcastle-Ottawa Scale for non-RCTs were used to assess study risk of bias as appropriate.11,12 Meta-analysis was conducted in RevMan using the Mantel-Haenszel random-effects model.13 Heterogeneity between studies was measured by I² and chi-square statistics and was explored against the risk of bias and clinical covariates across the studies. Publication bias could not be assessed because of the general inadequacy of the evidence. When direct comparative evidence was lacking, evidence from noncomparative studies was used to make indirect comparisons (albeit with lower certainty). For each outcome, the certainty of evidence was graded as per the GRADE approach based on the overall risk of bias, inconsistency, indirectness, imprecision, and other considerations and summarized in evidence tables in the online GRADEPro tool.14,15 Randomized controlled trial evidence was preferred over non-RCT evidence with the intent of generating higher certainty.
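To illustrate how random-effects pooling and the I² heterogeneity statistic described above relate the per-study estimates to a pooled odds ratio, the Python sketch below implements the DerSimonian-Laird inverse-variance method. This is a simplified stand-in for the Mantel-Haenszel weighting that RevMan applies at the fixed-effect step, and the study data are hypothetical; it is intended only as a conceptual illustration, not a reproduction of the analyses in these guidelines.

```python
import math

def pool_random_effects(log_ors, variances):
    """DerSimonian-Laird random-effects pooling of study log odds ratios.

    Returns the pooled OR, its 95% CI (on the OR scale), and I^2 (%).
    Illustrative simplification of the Mantel-Haenszel random-effects
    model used in RevMan.
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # I^2: proportion of total variability due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, i2

# Hypothetical study data: log odds ratios and their variances
or_pooled, (lo, hi), i2 = pool_random_effects(
    [math.log(0.6), math.log(0.8), math.log(0.7)],
    [0.04, 0.09, 0.06],
)
print(f"pooled OR {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```

When the studies agree (Q below its degrees of freedom), tau² shrinks to zero and the random-effects result coincides with the fixed-effect estimate; larger between-study disagreement inflates tau², widening the pooled confidence interval.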
Development of Clinical Recommendations
The panel took an individual perspective, using learner-based values to formulate recommendations for a target audience composed of clinician educators and learners. We used the GRADE EtD framework in the GRADEPro tool.14,15 The EtD framework requires panel members to make deliberated judgments about the magnitude of desirable and undesirable effects across the important and critical outcomes, and about the values (and associated variability in values) learners and patients place on those outcomes, as they make subsequent judgments about the balance of desirable and undesirable effects, the overall certainty of evidence across the critical outcomes, the potential for inequities in health, and the acceptability and feasibility of the intervention. These EtD judgments inform the final recommendation. As no literature was known to investigate the relative values and preferences patients and/or learners assign to the various outcomes of interest, the panelists used their trainer and clinical experience as proxies for learner and patient values and preferences. Dissenting judgments and views were documented. The final recommendation and its wording required agreement of >80% of the panel. All EtD tables relevant to the presented guidelines are included as Supplemental Digital Content and referenced hereafter under each separate KQ.
Limitations of These Guidelines
The limitations of these guidelines are inherent to the very low certainty of the evidence we identified for the majority of our KQs. Multiple identified research priorities aim to improve the certainty and quality of the evidence for which recommendations were made, so future recommendations on these KQs can be based on more robust evidence.
Guideline Monitoring and Updating
The impact of these guidelines and their use will be studied in 5 years through a literature review and surveys, at which time the guidelines will also be updated.
Guideline Document Review
After composition of these guidelines, this manuscript was reviewed and revised as appropriate by steering group members, topic coleads, panelists, and the methodologist before submission for publication. The AGREE-II tool16 was used by 2 independent reviewers (S.-M.K.-M. and A.C.) to assess the quality of these guidelines and revealed a score of 6.1 of 7.
KQs and Recommendations
Topic: In Situ Simulation
KQ 1: Should in situ simulation vs the education accrued during typical organizational practice be used for training healthcare professionals to improve clinician behaviors during patient care and/or patient outcome?
For healthcare provider training, we suggest that participation in in situ simulations should be considered to improve healthcare professionals' performance, patient outcomes, and healthcare system quality and safety compared with the education accrued during typical organizational practice (conditional recommendation, moderate evidence).
Problem statement: In situ simulation (ie, simulation at the point of care) is increasingly being used to impact provider skills and behavior. The overall benefit of this on provider behaviors in the environment of care and/or patient outcomes, however, has not been firmly established.
Summary of the evidence: Nineteen studies17–36 addressed a range of in situ simulation outcomes, including mortality, clinical metrics of patient care delivery, nontechnical skill levels as measured during actual patient care, latent safety threat mitigation, and diagnostic decision-making. Clinical areas addressed included neonatal resuscitation, pediatric and adult resuscitation, obstetric care, outpatient care, stroke care, and trauma.
Benefits: Meta-analysis of included study data revealed that use of in situ simulation reduced risk of death [odds ratio, 0.66; 95% confidence interval (CI), 0.55–0.78], improved metrics of care delivery (standardized mean difference [SMD] = −0.34; 95% CI, −0.45 to −0.21), and improved nontechnical skills (SMD = −0.52; 95% CI, −0.99 to −0.05). Although meta-analysis was not feasible for diagnostic decision-making or latent safety threats, all presented outcomes were in favor of in situ simulation.
Harms and burden: Harms remained largely unaddressed by the identified studies, and only 2 reported on the cost of the intervention.
Certainty in the evidence of effects: Overall level of certainty was low and varied by outcome (high for the measurement of diagnostic decision-making; low for clinical metrics of care; and very low for mortality, nontechnical skill measurements, and effect on latent safety threats).
Decision criteria and additional considerations: The panel considered the desirable impact of in situ simulation on patient outcome and provider behavior in relation to the lack of data regarding potential deleterious effects at the patient, provider, or institutional level; the panel judged that undesirable effects were likely trivial. Consideration was also given to issues of accessibility, equity, and inclusion, which was of particular importance given that several included studies addressed low-cost in situ methods deployed in LMIC settings. Despite the limitations of the literature, the panel's opinion was influenced by the consistent benefit of in situ simulation in the reviewed studies. The panel considered providing a strong recommendation but decided not to because of the lack of evidence on undesirable effects and low level of certainty.
Conclusions: The panel suggests that in situ simulation be implemented in addition to current educational methods with the goal of improving patient care and mortality. The most robust findings were seen in postpartum hemorrhage, postpartum sepsis recognition, and in neonatal resuscitation skills. Accessibility to this type of training in LMIC settings is enhanced by low-cost, low-fidelity simulators promoting health equity. Institutions implementing this guideline should consider associated costs to ensure sustainability.
Research Priorities: The panel recommends the following research priorities be pursued:
High-quality studies focused on the impact of in situ simulation on simulation program resource use, the financial costs to the institution, and return on investment considering the potential cost savings due to avoided harmful events.
Studies that measure the effect of in situ simulation (especially unannounced in situ simulation) on the emotions of participating professionals and on the care provided to other patients in nearby areas that may be disrupted.
Please also refer to the relevant tables in Supplemental Digital Contents 3 and 4 (see tables, Supplemental Digital Content 3, https://links.lww.com/SIH/B28, KQ GRADE evidence table; and Supplemental Digital Content 4, https://links.lww.com/SIH/B29, EtD table).
KQ 2:Should a higher frequency of short in situ simulation events with structured debriefing vs a lower frequency of short in situ simulation events with structured debriefing be used for training healthcare professionals to improve clinician behaviors during patient care and/or patient outcomes?
For healthcare provider training, we suggest more frequent participation in short skill-oriented in situ simulations to benefit patient outcomes (conditional recommendation, moderate evidence).
Problem statement: Given the time and resources required to implement in situ simulations, it is important to determine whether a dose-response relationship exists between frequency of exposure and beneficial effects and what the ideal exposure frequency is.
Summary of the evidence: One study was found37 that addressed this question: a comparative, nonrandomized study conducted with 572 teams across 26 hospitals that examined differences in survival after cardiac arrest in hospitals with low exposure to in situ simulation (3.2 in situ simulations per 100 beds) compared with those with high exposure (177 in situ simulations per 100 beds).
Benefits: Improved survival was noted in those hospitals with higher levels of in situ simulation (odds ratio, 0.62; n = 572 teams).
Harms and burden: No data were found that addressed harms and burden.
Certainty in the evidence of effects: Level of certainty was deemed to be moderate for this study.
Decision criteria and additional considerations: Given the reliance on 1 study with a relatively narrow focus (the study examined only short, skills-based in situ simulations focused on cardiac arrest care), the panel's recommendation specifically applies to skills-based in situ simulations. It was also felt that the survival benefits likely outweigh simulation costs especially given the use of low-cost low-tech mannequins and the short length (5 minutes) of the proposed simulations. The panel further opined that the findings of this study likely apply to other types of in situ simulation as well; issues of accessibility and equity were also considered.
Conclusions: The panel suggests that hospitals should engage in higher frequencies of short, skills-based in situ simulations to improve cardiac arrest outcomes. Given that the differences in simulation frequency between the 2 groups in this study were large, no recommendation can be provided on the optimal frequency of in situ simulation exposure.
Research Priorities: The panel recommends the following research priorities be pursued:
High-quality studies examining the level of exposure to in situ simulation needed to enhance patient outcomes that is balanced against measures of cost, resource use, and workflow (ie, the level of diminishing returns).
High-quality studies addressing the impact of frequent in situ simulations on participants' psychological responses.
Please also refer to the relevant tables in Supplemental Digital Contents 5 and 6, respectively (see tables, Supplemental Digital Content 5, https://links.lww.com/SIH/B30, KQ GRADE evidence table; and Supplemental Digital Content 6, https://links.lww.com/SIH/B31, EtD table).
KQ 3:Should in situ simulation vs another non–in situ simulation modality be used for training healthcare professionals to improve perceptions, knowledge, skills, clinician behaviors, and patient care outcomes?
For healthcare provider training, we suggest the use of in situ simulation to uncover or mitigate latent safety threats in the healthcare environment and to enhance the environmental authenticity and fidelity of the experience (conditional recommendation, moderate evidence).
Problem statement: It is currently unknown whether in situ simulation offers any benefit over traditional, simulation center-based approaches. The answer to this question will inform the educator's choice of preferred training modality.
Summary of the evidence: Four relevant studies were included.38–42 Outcomes measured included participant preference and satisfaction, participant knowledge, participant stress, participant skill as evaluated in the simulated environment, and latent safety threat mitigation. Clinical areas addressed included infection prevention, airway management, perinatal resuscitation, and general resuscitation.
Benefits: The heterogeneity of the studies did not allow meta-analysis; instead, standardized mean differences were reported. Regarding participant performance, 1 study favored in situ simulation (SMD = −0.06, N = 57 providers), whereas another favored traditional simulation (SMD = 0.67, N = 120 providers). Regarding participant preference and satisfaction, some favored in situ simulation (SMD = −0.08, N = 1415 providers) and some favored traditional simulation with varying effect sizes (SMD = 4.26, N = 120 providers; SMD = 0.068, N = 97 providers). Participant knowledge acquisition was largely equivalent between in situ and traditional simulation (SMD = 0.07, N = 97 providers). In situ simulation was perceived as having higher authenticity (SMD = −0.49, N = 97 providers), and higher levels of salivary cortisol (a measure of stress) were found in the in situ group (SMD = −0.42, N = 97 providers). More latent safety threats (51 vs 40) were detected by in situ simulation.
Harms and burden: No measures of harm or burden were assessed in the identified studies.
Certainty in the evidence of effects: The level of certainty was deemed to be high for technical skill, knowledge, and latent safety threat measurements because of low risk of bias.
Decision criteria and additional considerations: The panel was not surprised by the equivalency of both modalities in terms of preference or knowledge, given that similar content can be taught using both. However, the panel considered the evidence that in situ simulation enables higher levels of environmental fidelity and uncovers more latent safety threats as adequate to offer a recommendation. The limited number of available comparative studies, presence of confounders, and lack of evidence on potential undesirable effects such as cost, effect on personnel, and effect on institutional efficiency made the panel offer a conditional recommendation.
Conclusions: Although in situ simulation has similar impact on provider preferences, knowledge, and skill compared with traditional simulation, its use may enhance training authenticity and improve latent safety threat detection.
Research Priorities: The panel recommends the following research priorities be pursued:
High-quality comparative studies addressing the effectiveness of in situ simulation vs other traditional simulation and non–simulation-based educational approaches that also evaluate potential undesirable effects such as cost, resource use, and workflow disruption.
Systematic reviews focused on in situ simulation as a means of detecting latent safety threats.
Please also refer to the relevant tables in Supplemental Digital Contents 7 and 8, respectively (see tables, Supplemental Digital Content 7, https://links.lww.com/SIH/B32, KQ GRADE evidence table; and Supplemental Digital Content 8, https://links.lww.com/SIH/B33, EtD table).
Topic: Just-in-Time (JIT) Training
KQ 4: Should JIT vs no JIT be used for simulation training of healthcare professionals (trainees or practitioners)?
The panel suggests that 5–30 minutes of just-in-time simulation training (within 24 hours of performance) should be implemented with healthcare professionals (trainees or practitioners) engaged in high-stakes medical or surgical procedures, particularly when there has been a prolonged period of no training (>1–2 weeks) (conditional recommendation, very low certainty evidence).
Problem statement: Just-in-time simulation training, defined as training that is conducted in temporal or spatial proximity to performance, may be an effective method to improve performance and patient outcomes. Such training, however, is resource intensive, and its benefits should be weighed against its risks.
Summary of the evidence: Sixteen studies were eligible for inclusion.43–59 Just-in-time simulation training has been evaluated for a variety of medical, resuscitation, and surgical procedures. Most JIT simulation training occurred immediately before procedures and lasted between 5 and 30 minutes. In assigning relative values and preferences to outcomes, the panel gave greater weight to time (ie, efficiency) because this outcome was applicable across various training contexts.
Benefits: All examined outcomes were in favor of JIT. The effect sizes ranged from small to large and the panel decided that the overall effect was moderate.
Harms and burden: Research examining the undesirable effects of JIT is lacking. Panel members noted that some of the undesirable effects of this intervention might be the time, resource intensiveness, and disruption of regular care processes related to implementing JIT before performance.
Certainty in the evidence of effects: All evidence for each outcome was deemed to be of very low certainty.
Decision criteria and additional considerations: The panel weighed the desirable effects of JIT simulation training against any potential undesirable effects. The panel felt that even if there was evidence of high cost or resource use related to JIT simulation training, most relevant stakeholders and decision-makers would still favor its use given the anticipated improvements in patient outcomes. The panel also opined that implementation of JIT simulation training is likely to initially target university hospitals and tertiary care centers potentially giving rise to disparities in patient outcomes between these centers and other healthcare-providing facilities. Although the disparities in patient outcomes may provide empiric evidence of the effectiveness of the implementation of these guidelines in the real-world settings, it is hoped that subsequent implementation interventions would be adopted to minimize such disparities.
Conclusions: The panel judged that given the moderate desirable benefits and unknown but likely trivial undesirable effects, use of JIT simulation training should be suggested.
Research Priorities: The panel recommended additional research to address the following areas:
Studies examining the effectiveness of JIT simulation training in nonphysicians and physicians in practice.
Studies examining the impact of JIT simulation training on patient outcomes such as patient morbidity and mortality.
Studies using interrupted time series analysis (where the effectiveness of JIT simulation training on patient outcomes is evaluated before and after JIT simulation training implementation).
Studies examining the undesirable effects of JIT simulation training (costs, resources, and time) to better understand the overall balance of desirable and undesirable effects.
Please also refer to the relevant tables in Supplemental Digital Content 9 (see tables, Supplemental Digital Content 9, https://links.lww.com/SIH/B34, KQ GRADE evidence table and EtD table).
Topic: Spaced Training
KQ 5: Among healthcare professionals (trainees or practitioners) engaged in simulation, does spaced training (separation of training into several discrete sessions over a prolonged period with measurable intervals between training sessions) compared with massed training (all training occurs during the same session) improve skill acquisition?
The panel suggests either spaced or massed simulation training for procedural skill acquisition (conditional recommendation, very low certainty of evidence).
Problem statement: Spaced training, defined as the separation of training into several discrete sessions over a prolonged period with measurable intervals between training sessions, has been proposed to be a more effective method than massed training for skill acquisition. However, it is unclear whether this holds for spaced training using simulation across settings and skills, and whether the potential benefits of spaced training outweigh the potential drawbacks.
Summary of the evidence: Fifteen RCTs were included, comparing simulation-based spaced vs massed training.60–73 Most of the studies involved physician trainees doing procedures or operations. Outcomes measured were heterogeneous but primarily at the T1 level (based on the translational outcomes framework), in the simulated setting. In the highest prioritized outcomes (time to complete a procedure and final product assessment scores) measured after a retention interval, there was a signal that spaced training may be advantageous over massed training. However, findings were quite heterogeneous across outcomes and settings, especially for lower weighted outcomes (such as global rating scales assessment of performance and procedure-specific measures) measured immediately post-training.
Benefits: Although there was significant heterogeneity in the reported outcomes of the included studies, there were moderate potential desirable effects from spaced training to improve the acquisition of competence. For the outcomes of time to complete a procedure (efficiency) and global rating scales of performance after a retention interval, there was a moderate potential benefit found for spaced training. For the outcome of final product assessment scores immediately after training, there were trivial to large potential benefits. For the outcomes of global rating scales of performance immediately after training and objective procedure-specific metrics at immediate and at retention assessment, the findings were inconclusive with some studies demonstrating outcomes favoring massed training.
Harms and burden: There were no reported harms related to spaced training.
Certainty in the evidence of effects: There was very low certainty of evidence. All research evidence had high risk of bias, with inconsistency across studies and significant imprecision.
Decision criteria and additional considerations: The panel weighed the reported benefits of spaced training against the very low certainty of the evidence, the heterogeneity of reported outcomes, and the fact that some studies revealed no difference or favored massed training. It further considered the absence of any evidence on harms and burden; thus, the panel decided to offer a recommendation for either spaced or massed training.
Conclusions: Given the very low certainty of evidence in favor of spaced training, and that some studies revealed no difference or favored massed training, the panel decided to offer a recommendation for either spaced or massed training.
Research Priorities: The panel recommended additional research to address the following areas:
Quality studies that explore for which settings, procedures, trainees, and outcomes spaced training may be superior to massed training.
Studies that assess the impact of spaced training on the acquisition of competence for healthcare professionals other than physician trainees.
Larger quality studies of spaced training measuring outcomes in the patient care setting (ie, impact of spaced training on patient morbidity, mortality, cost, and resource use).
Please also refer to the relevant tables in Supplemental Digital Content 10 (see tables, Supplemental Digital Content 10, https://links.lww.com/SIH/B35, KQ GRADE evidence table and EtD table).
Topic: Simulation Fidelity
KQ 6: Should higher physical realism simulators/task trainers vs lower physical realism simulators/task trainers be used for healthcare simulation training of individuals in low- and middle-income countries (LMICs)?
The panel suggests the use of lower physical realism as opposed to higher physical realism simulators and task trainers for healthcare professionals and/or healthcare trainees/students in LMICs (conditional recommendation, very low certainty of evidence).
Problem statement: Although simulation in LMICs can be an effective teaching methodology, it is unclear whether higher-fidelity simulation should be used in these countries because of the human resources required and financial costs involved. It is also unclear whether the level of physical realism of simulators impacts clinical, educational, and procedural outcomes in LMICs.
Summary of the evidence: Of 2311 initially identified and screened articles, 8 randomized studies relevant to this KQ were included.74–81 Studies frequently considered animal models or VR simulators to be of high fidelity and realism and benchtop simulators to be of low fidelity and realism.
Benefits: The majority of reviewed studies demonstrated no statistically significant difference in skill acquisition or clinical performance of medical students and residents when trained using higher-fidelity vs lower-fidelity simulators.74–80 Only 1 study found that the higher-fidelity models were better than the lower-fidelity models for skill acquisition of intramuscular injections by midwifery students.81
Harms and burden: No evidence was found for any undesirable effects of higher-fidelity simulation. The panel opined, however, that the associated resources and costs might prohibit high-fidelity simulation training in some LMIC healthcare settings or come at the expense of other healthcare educational interventions.
Certainty in the evidence of effects: The certainty of evidence was judged to be very low for all outcomes, downgraded for very serious risk of bias and inconsistency.
Decision criteria and additional considerations: The panel considered the limited to absent evidence in favor of higher physical realism simulation in the context of its use in LMICs with their associated significant cost and resource limitations. The panel also discussed sustainability considerations (eg, repair of equipment) and scalability of simulation use across LMIC practice settings.
Conclusions: Given the lack of strong evidence for high-fidelity simulation and the resource limitations of LMICs, the panel concluded that lower physical realism, lower-cost simulators would be preferable for the training of healthcare professionals and/or healthcare trainees/students in LMICs.
Research Priorities: The panel recommended additional research to address the following areas:
Quality studies that consider the balance between physical realism and cost, equity, impact of resources, sustainability, and scalability.
Studies in LMICs that focus on appropriate study populations and interventions and are adequately powered to address relevant learning and patient outcomes.