Debriefing Methods for Simulation in Healthcare: A Systematic Review

Healthcare simulation-based training contributes to enhanced patient care.1–4 Debriefing is a critical component of most simulation experiences, and many scholars consider the debriefing of simulated cases essential for learning.5–11 However, there is little empiric evidence to guide its use. Debriefing is an art that is difficult to learn.12–15 With the growing number of debriefing concepts, approaches, and tools, we need to understand clearly how to debrief most effectively.16–20

In simulation-based education, debriefing is defined as a guided conversation among participants that aims to explore and understand the relationships among events, actions, thought and feeling processes, and performance outcomes of the simulated situation.21–25 Educational debriefings are associated with large, positive effects on clinical performance—in simulation and beyond.19,26–28 For example, the World Health Organization recommends considering educational debriefings “as part of continuous learning and improvement.”29(p27) However, if done poorly, formative feedback and debriefings can have a negative impact on learning.30

Many clinicians and educators report struggling with leading debriefings.12–15 Steinwachs31 provided an early framework for debriefing after simulation (in that case, a simulation game), identifying 3 phases that occur organically in many debriefings: description, analysis/analogy, and application. In the years since, healthcare simulation education scholars have built on this work and developed a variety of debriefing approaches and tools, for example, “Debriefing with Good Judgment,”10,21 “PEARLS,”32 “The Debriefing Diamond,”33 Gather-Analyze-Summarize,34 Rapid Cycle Deliberate Practice (RCDP),35 and “TeamGAINS.”36 Many of these frameworks focus on creating and maintaining ideal debriefing conditions,37–39 managing challenging debriefing situations,38,40 and the complexity and dynamics among team members.11,17,24,41,42 Many different variables interact during debriefings, making them complex events that can be difficult to understand well.27

In the simulation literature, empiric studies of debriefing impact are limited.42–45 Studies comparing the use and effectiveness of different approaches are currently lacking. This is problematic because decades of team science have demonstrated that the way team members interact and reflect strongly impacts their performance.43,46–49 Evidence-driven knowledge of effective processes is often required for meaningful, timely, and effective interventions for improving the quality and safety of patient care.50–52 This systematic review explores the current literature on debriefing in healthcare simulation education to understand the evidence behind practice and clarify gaps in the literature.

METHODS

This review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement.53 Electronic searches for published literature were conducted by a medical informationist (M.L.) using Ovid MEDLINE (1946 to present), Embase.com (1947 to present), Web of Science (1900 to present), Cochrane Central Register of Controlled Trials via Ovid (1991 to present), ERIC via EBSCO (1907 to present), ProQuest ABI/INFORM Collection (1971 to present), and ClinicalTrials.gov (1999 to present). The searches were conducted in April 2022. The PICO question for this review was defined as “In healthcare providers [P], does the use of one debriefing or feedback intervention [I], compared to a different debriefing or feedback intervention [C], improve educational and clinical outcomes [O] in simulation-based education?”

The search strategy incorporated controlled vocabulary and free-text synonyms for the concepts of improvement, teams, training, debriefing, comparison, and simulation. The full database search strategies are documented in Supplemental Digital Content (see Table, Supplemental Digital Content 1, Search Strategy, https://links.lww.com/SIH/B6). No restrictions on language or other search filters were applied. All identified studies were combined and deduplicated in a single reference manager (EndNote, Clarivate) and then uploaded into Covidence systematic review software (Covidence, Veritas Health Innovation).
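
As a rough illustration of how such concept blocks combine, the following Python sketch joins hypothetical synonym lists with OR within each concept and AND across concepts; the terms shown are placeholders, not the actual strategy, which is documented in Supplemental Digital Content 1.

```python
# Illustrative sketch only: hypothetical synonym lists for three of the review
# concepts, combined with OR within a concept and AND across concepts.
# The real, database-specific strategies appear in Supplemental Digital Content 1.
concepts = {
    "debriefing": ["debrief*", "after action review*", "feedback"],
    "simulation": ["simulat*", "manikin*", "standardized patient*"],
    "comparison": ["compar*", "versus", "controlled trial*"],
}

def or_block(terms):
    """Join the synonyms for a single concept with OR."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(terms) for terms in concepts.values())
print(query)
```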

Inclusion and Exclusion Criteria

All primary research publications comparing one form of debriefing or feedback with another in a healthcare simulation teaching intervention were included. Synthesis articles, systematic reviews, and meta-analyses were reviewed for appropriate references. Editorials, research protocols, abstracts, and reports of conference presentations were excluded. We also excluded articles in which feedback was provided by a device, such as a CPR feedback device (augmented feedback), as we wanted to focus on approaches performed by facilitators.

Data Screening

Using Covidence, 2 authors (J.P.D. and M.K.) independently screened and reviewed titles and abstracts in duplicate. They discussed results with the larger review team and resolved disagreements through discussion. Four authors (K.M., J.C.S., J.P.D., M.K.) reviewed the full texts to identify articles for data extraction. Each article was reviewed by 2 authors independently. Consensus was achieved by discussion. When that was not possible, the article was reviewed by one of the lead authors (J.P.D. or M.K.) for a final assessment.

Data Extraction

Five authors (J.P.D., M.K., J.S., K.M., I.T.G.) extracted data about study characteristics into a data collection tool, including details about the study design, outcomes, and debriefing characteristics (see Table, Supplemental Digital Content 2, data extraction tool, https://links.lww.com/SIH/B7). Data for each study were extracted independently by 2 of the researchers involved in data extraction (with each article reviewed by one of the lead authors). Disagreements were discussed until agreement was reached. In case of missing data, we left the respective extraction field empty.

Data Terms

We extracted data on the study characteristics and methods, characteristics of the simulation being performed, details about the debriefing (including the framework used and the structure of the debrief), and information about the debriefers themselves. Simulations and debriefs were classified as multiprofessional (different professions present, such as physicians, nurses, and respiratory therapists), multidisciplinary (same profession but different disciplines, such as surgical and anesthesiology learners), or neither. We defined learning and debriefing group size as individual, dyad (2 learners), medium (3–5 learners), or large (>5 learners) as per Salas et al.54 Task complexity was rated as low (a simple psychomotor task like chest compressions), medium (a more complex or more cognitively challenging task, such as management of a stable arrhythmia), or high (crisis management in a polytrauma patient). The structure of the debrief was also assessed using a framework described by Keiser and Arthur27 in their meta-analysis on debriefs. A debrief was determined to have high administrative structure if each debriefing in the study followed a specified set of steps. For example, if each debrief in the study used the PEARLS framework and went through each of the 4 stages (reactions, description, analysis, summary), it was rated as having high administrative structure. Similarly, each debrief's content structure was appraised. If each debrief in the study contained the same content (such as teamwork or a focus on a particular medical procedure), the study was rated as having high content structure.27
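
As an illustration of how these categories could be applied consistently during extraction, the short Python sketch below codes group size using the thresholds above; the function names, field names, and example values are hypothetical and are not part of the actual extraction tool (Supplemental Digital Content 2).

```python
# Illustrative only: hypothetical helper for coding debriefing group size
# during extraction, following the categories defined above (per Salas et al.).
def code_group_size(n_learners):
    """Map a reported learner count to the group-size categories used in this review."""
    if n_learners is None:
        return "not reported"
    if n_learners == 1:
        return "individual"
    if n_learners == 2:
        return "dyad"
    if 3 <= n_learners <= 5:
        return "medium"
    return "large"  # more than 5 learners

# Hypothetical anchors mirroring the low/medium/high task-complexity ratings above.
TASK_COMPLEXITY_ANCHORS = {
    "low": "simple psychomotor task (eg, chest compressions)",
    "medium": "more complex or cognitively challenging task (eg, stable arrhythmia)",
    "high": "crisis management (eg, polytrauma patient)",
}

for n in (1, 2, 4, 8, None):
    print(n, "->", code_group_size(n))
```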

Quality Assessment

For each study, 2 authors, one of whom was a lead author, independently assessed study quality using the Medical Education Research Study Quality Instrument (MERSQI) tool.7,55,56 It includes 10 items assessing study design, sampling, type of data collected, validity, data analysis, and outcomes.

Statistical Analysis

We report descriptive statistics of the studies in the dataset. Given the heterogeneous nature of the included studies, we elected not to perform a meta-analysis.

RESULTS

Results of the Search

We performed our search in April 2022. We retrieved a total of 1604 citations, reducing them to 1572 after removal of duplicates. After title and abstract screening, we selected 110 articles for full-text review with 70 included in the systematic review (Fig. 1).

FIGURE 1:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram.

The full list of articles in the dataset is included in Supplemental Digital Content (see Table, Supplemental Digital Content 3, included studies, https://links.lww.com/SIH/B8).

Study Characteristics

The continent and country of origin of the studies in the dataset are described in Table 1. Four studies (5.7%) did not report the study origin. Forty-four studies (62.9%) were published in educational journals (eg, Simulation in Healthcare, Nurse Educator), and 26 (37.1%) were published in discipline journals (eg, JAMA Pediatrics, Surgery). There has been a steady increase in the number of publications, from 2 published in 2006 to 15 published in 2021 (Fig. 2).

TABLE 1 - Geographic Origin of the Studies

Geographic Origin: No. Studies (% of Total Studies; N = 70)
 North America: 32 (45.7%)
  United States: 19 (27.2%)
  Canada: 11 (15.7%)
  Both Canada and United States: 2 (2.9%)
 Europe: 15 (21.4%)
  Germany: 6 (8.6%)
  United Kingdom: 2 (2.9%)
  France: 2 (2.9%)
  The Netherlands: 2 (2.9%)
  Switzerland: 1 (1.4%)
  Spain: 1 (1.4%)
  Ireland: 1 (1.4%)
 Asia: 15 (21.4%)
  Korea: 9 (12.9%)
  China: 3 (4.3%)
  Iran: 1 (1.4%)
  Japan: 1 (1.4%)
  Hong Kong: 1 (1.4%)
 South America: 2 (2.9%)
  Brazil: 1 (1.4%)
  Colombia: 1 (1.4%)
 Australia: 1 (1.4%)
 Africa: 1 (1.4%)
 Not reported: 4 (5.7%)

FIGURE 2:

Year of publication of the included studies.

Type and Quality of Studies

We categorized 56 of the 70 included studies (80.0%) as randomized controlled trials (RCTs) and 14 (20.0%) as nonrandomized studies. Most of the studies were single-center studies (64 studies; 91.4%), 2 (2.9%) were 2-center studies, and 4 (5.7%) were conducted at 3 or more institutions. In 53 studies (75.7%), objective measurement was used; 17 (24.3%) relied solely on assessment by participants. In 9 studies (12.9%), satisfaction, attitudes, perceptions, or opinions were measured; in 59 studies (84.3%), knowledge or skills were measured; and behaviors and patient/healthcare-related outcomes were each measured in 1 study (1.4%). Adjusted MERSQI scores [ie, final scores excluding the response rate item due to lack of reported data; maximum achievable points = 16.5 instead of 18] ranged from 6.5 to 14 (median, 11).
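
The adjustment itself is simple arithmetic: dropping the response rate item (worth up to 1.5 points of the standard 18-point MERSQI maximum) leaves a maximum of 16.5 points. The sketch below shows that calculation and a hypothetical rescaling of the median score; it is not an implementation of the full instrument.

```python
# Illustrative arithmetic only: adjusted MERSQI maximum after excluding the
# response rate item (up to 1.5 points) from the standard 18-point maximum.
MERSQI_MAX = 18.0
RESPONSE_RATE_ITEM_MAX = 1.5
ADJUSTED_MAX = MERSQI_MAX - RESPONSE_RATE_ITEM_MAX  # 16.5, as used above

def percent_of_adjusted_max(adjusted_score):
    """Express an adjusted MERSQI score as a percentage of the 16.5-point maximum."""
    return 100 * adjusted_score / ADJUSTED_MAX

# Example: the median adjusted score in this dataset was 11 of 16.5 points.
print(round(percent_of_adjusted_max(11), 1))  # ~66.7
```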

Participants Studied

Most of the studies were uniprofessional, with 6 studies (8.6%) enrolling learners from more than 1 profession (ie, medicine and nursing). Of the uniprofessional studies, 12 (17.1%) had learners from different disciplines (ie, anesthesiology and surgery). The remaining 50 studies that reported learner characteristics enrolled learners from a single discipline and profession. The number of participants ranged widely, from 12 in the smallest study to 450 in the largest, with a median of 50 participants per study. Most studies involved learners in training, either students (46 studies) or residents/fellows (25 studies), with a minority enrolling postlicensure providers (6 studies).

Simulation Interventions

Half of the studies in the dataset used a mannequin-based modality (35 studies). Of the remainder, 12 (17.1%) used a task trainer, 9 (12.9%) used standardized patients, and 2 (2.9%) used a hybrid of multiple modalities. One study (1.4%) used a screen-based simulation and 1 (1.4%) used an extended reality technique. Simulation modality was not reported in 9 studies. Simulated tasks were rated for task complexity as low (eg, a basic procedural skill such as CPR; 24 studies), medium (all other tasks; 22 studies), or high (eg, healthcare crisis or resuscitation; 23 studies). Only 27 studies (38.6%) included explicitly stated hypotheses, and these rarely went beyond proposing simple main effects.

Study Outcomes

In most cases, outcome measures were poorly reported. Many studies focused primarily on low-level Kirkpatrick outcomes. In 7 studies (10.0%), reactions were used exclusively as an outcome measure, while 12 studies (17.1%) assessed knowledge. Skills performance was assessed in 54 studies (77.1%). One study examined a systems-level outcome.57 In this study, the authors reviewed data from a previous study and determined the cost-effectiveness and a willingness-to-pay value of instructor-led versus self-debriefing. Eight studies measured any form of retention (ie, any assessment of learning performed after some period had passed since the initial education).58–65 Of the 32 studies that looked at dyad/team debriefings, only 1 (1.4%) controlled for dyad/team membership via multilevel analysis.
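
To clarify what controlling for dyad/team membership involves, the sketch below fits a random-intercept (multilevel) model in Python with statsmodels so that learners from the same team are not treated as independent observations; the data frame and column names are hypothetical and are not drawn from any included study.

```python
# Illustrative only: a random-intercept model accounting for learners nested in teams.
# The data and column names (score, condition, team_id) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":     [72, 68, 81, 79, 65, 70, 88, 84, 74, 77, 69, 83],  # outcome per learner
    "condition": ["A", "A", "B", "B", "A", "A", "B", "B", "A", "A", "B", "B"],
    "team_id":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],              # shared team membership
})

# The random intercept for team_id models the clustering of learners within teams,
# rather than treating each learner as an independent observation.
model = smf.mixedlm("score ~ condition", data=df, groups=df["team_id"])
result = model.fit()
print(result.summary())
```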

Debriefing Interventions

Although all of the studies examined different debriefing interventions, the actual debriefing or feedback session itself was often poorly described.

Debriefing Intervention

Fifty-four of the studies (77.1%) reported a particular debriefing intervention; 16 (22.9%) did not. Of the 54 studies reporting a particular intervention, 43 (61.4%) reported the use of a debriefing framework (eg, PEARLS, RCDP). One study tested the effect of a relaxation phase between the simulated case and the debriefing and found a benefit.66

Debriefing Duration

Only 50 studies (71.4%) reported the amount of time spent in debriefing, which ranged from 10 to 100 minutes (median, 20 minutes).

Debriefer Characteristics

Few studies reported details about the number of debriefers per session or about their level of training and expertise with debriefing. Only 29 studies (41.4%) reported that debriefers were trained, whereas 41 (58.6%) did not. Debriefing characteristics are reported in Table 2. In 22 studies, who did the debriefing was the independent variable, for example, comparing self-led with facilitator-led debriefing. Debriefers were content experts in 31 (44.9%, intervention) and 33 (47.8%, control) studies; their expertise was not reported in 24 (44.3%, intervention) and 22 (31.4%, control) studies.

TABLE 2 - Debriefing Characteristics

No. Studies (% of Total Studies, N = 70): Intervention / Comparison
Debriefing timing
 After: 55 (78.5%) / 64 (91.4%)
 During: 15 (21.4%) / 5 (7.1%)
 Not reported: 0 / 1 (1.4%)
Debriefing group size
 Individual: 22 (31.4%) / 22 (31.4%)
 Pairs: 5 (7.1%) / 5 (7.1%)
 Medium (3–5 learners): 15 (21.4%) / 15 (21.4%)
 Large (>5 learners): 13 (18.6%) / 13 (18.6%)
 Not reported: 15 (21.4%) / 15 (21.4%)
Group composition
 Multidisciplinary: 6 (8.5%) / 6 (8.5%)
 Multiprofessional: 12 (17.1%) / 12 (17.1%)
 Not reported: 5 (7.1%) / 0
No. debriefers
 Zero (self): 14 (20.0%) / 10 (14.2%)
 1 debriefer: 25 (35.7%) / 27 (38.6%)
 2 debriefers: 10 (14.3%) / 9 (12.9%)
 ≥3 debriefers: 3 (4.3%) / 2 (2.9%)
 Not reported: 18 (25.7%) / 22 (31.4%)
Script used: 14 (20.0%) / 9 (12.9%)
Administrative structure
 High: 44 (62.9%) / 42 (60.0%)
 Low: 6 (8.6%) / 6 (8.6%)
 Not reported: 20 (28.6%) / 22 (31.4%)
Content structure
 High: 35 (50.0%) / 27 (38.6%)
 Low: 6 (8.6%) / 7 (10.0%)
 Not reported: 29 (41.4%) / 36 (51.4%)
Video-assisted
 Yes: 25 (35.7%) / 14 (20.0%)
 No: 12 (17.1%) / 19 (27.1%)
 Not reported: 33 (47.1%) / 37 (52.9%)

Selected Specific Debriefing Interventions

Although many different debriefing interventions were studied in the dataset, several were common; we report these below.

Rapid Cycle Deliberate Practice

Seven studies examined the use of RCDP59,67–72; all of these were in pediatric or neonatal resuscitation or sepsis management. Six studies showed an immediate improvement in performance in groups taught using an RCDP methodology,59,67–69,71,72 although this was not consistent across all studies.70 Only one study examined retention of learning, and there was no difference between the RCDP and standard debriefing groups.67 However, interestingly, the RCDP group had the largest decline in assessed performance 4 months after training.

Instructor-Guided Versus Self-debriefing

Seventeen studies examined the effect of self-led or peer-led debriefing compared with an instructor-facilitated debrief; the independent variable was the presence or absence of an instructor during the debrief. In many of these studies, learners were provided with a guide (such as specific goals73 or a debriefing framework74) or a video (eg, Halim et al75 and Lee et al76) to facilitate the self-debrief, with mixed results. Most of the studies used a video of the simulation,61,65,76–79 a checklist,80,81 a debriefing guide82,83 or exemplar video,84 or some combination of these tools57,75,85 to guide the students through their debrief. Only 4 studies on self-debriefing were conducted with postlicensure learners,57,61,77,85 with the majority of the rest recruiting nursing students. Eight studies examined higher-level outcomes (such as skill development), and the results were mixed. Only one study reported a benefit of self-debriefing in postgraduate year 1 (PGY-1) interns; Oikawa and colleagues81 noted that teams that self-debriefed using a scenario-specific checklist and debriefing guide reported higher teamwork skills on self-assessment. Boet and colleagues77,85 demonstrated no difference in individual or team skills after self-debriefing (guided by video) versus instructor debriefing in anesthesia residents and teams. In a subsequent analysis of their data, they also demonstrated the cost-effectiveness of self-debriefing, the only study in the dataset to examine the cost of an intervention.57 However, other studies examining skill development showed improved performance after instructor-led debriefing when compared with self-debriefs, even when guided by a video or checklist.61,82,84 In a study examining the effect of video on self-debriefing, Ruesseler and colleagues79 found that using a video during a peer-led feedback session improved history-taking and informed consent skills in a group of medical students during their surgical rotation. One study by Kang and colleagues74 examined, in nursing students, the effect of an initial group reflection without an instructor present followed by an instructor-led debrief, compared with a control group that received only an instructor-led debrief. Interestingly, students who were able to discuss among themselves before meeting an instructor demonstrated significant improvement in problem-solving process and debriefing satisfaction compared with those who had only instructor-led debriefing.

Video-Assisted Debriefing

Nine studies examined the benefits of using video during debriefing compared with no video. Results were mixed, with 5 studies showing no benefit of the addition of video to debriefing sessions in neonatal resuscitation skills,86,87 intravenous (IV) medication administration in nursing students,88 or nontechnical skills in anesthesia trainees.89 However, other studies demonstrated a benefit of using video during debriefing compared with no video. One study found that feedback with video was associated with the largest improvement in novices' laparoscopic suturing skills.75 The previously mentioned study by Ruesseler and colleagues79 found that, compared with oral-only feedback, students receiving video-assisted feedback performed significantly better at taking patient histories. Grant and colleagues90 found that although behavior scores were higher with video than without, video had no impact on the number of nursing behaviors performed. Prakash and colleagues91 found better overall and nontechnical performance on delayed assessment in the video group but no differences in technical performance and reactions to the debriefing. One study examined the use of first-person video in debriefing but found no significant gains in CPR or teamwork learning outcomes.92

DISCUSSION

The aim of this systematic review was to answer the question of whether educational and clinical outcomes in health professions simulation-based education are improved if facilitators use one debriefing intervention compared with a different debriefing intervention. Based on the results of our review, there is insufficient evidence at present to adequately answer this question. Our findings highlight that debriefing impact research comprises very few studies with high-quality evidence, very few multicenter studies, and very few studies explicitly testing a debriefing model with measurable, robust outcomes. However, interest in debriefing research (based on number of publications) is growing. In what follows, we will highlight current debriefing research foci, discuss challenges we see, and point to future research needs.

Current Debriefing Research Foci

Many of the debriefing and feedback variables examined in current simulation research follow Raemer and colleagues' proposed framework of “who, where, when, what, why”51 and address selected questions around (a) who should debrief/provide feedback (self, peer, facilitator), (b) when to debrief/provide feedback (during or after the simulated case), (c) with what device (video, script), and (d) based on what approach (eg, PEARLS). Other independent variables were debriefing duration, group size, and debriefing structure. Surprisingly, although debriefer expertise and skill are considered crucial in the debriefing literature,13,93 in the studies we reviewed, debriefer training and skill level were rarely or only superficially described. Based on our review, we find it hard to draw clear-cut conclusions with respect to the potential superiority of any of the studied independent variables because the outcomes they were tested against were mostly reactions, attitudes, or selected skills. Thus, while we may conclude that some debriefing methods (eg, instructor-led over self-debriefing) were generally favored more than others, we cannot make comparable conclusions with respect to their effectiveness.

Certain independent variables were emphasized more than others in the studies we reviewed.

Timing of the intervention was a common independent variable that has received more interest in the last few years, specifically through RCDP-type debriefing frameworks. Since its development by Hunt and colleagues,35 RCDP has been tested in multiple domains. Most feedback experts recognize that timely feedback is important, and the RCDP model allows facilitators to step in quickly to correct critical errors rather than waiting for the end of the event (which could be 15–30 minutes later). After a brief coaching session, learners then get an opportunity to repeat the scenario to apply what they have learned, consistent with experiential learning principles.94 Six of 7 studies examining RCDP59,68–70 found a benefit on early skill learning, with 2 demonstrating improved retention.71,72 However, more research is required to explore the potential challenges with RCDP (eg, the instructor wrongly assuming that the reason the learners are struggling is a knowledge gap).

The availability of well-trained simulation facilitators can be a challenge in most programs, so understanding when an instructor should be present to guide debriefing is important. While the presence of an instructor was a second common independent variable, self-led debriefing was frequently introduced with the exclusive aim of saving costs rather than for an educational purpose. Providing some form of structure to the self-debrief generally improved outcomes compared with self-debriefs without a guide.

The use of video to assist recall during the debrief was a third common intervention. Human recall of stressful events is often poor,95 and the use of short snippets of video to refresh memory and point out issues can be powerful. Our review found mixed results. In some studies, the use of video to help learners reflect was associated with improved learning outcomes,79,91 but in others, there was no significant improvement with the addition of video.86–89 As with many debriefing interventions, there are likely specific contexts in which video-assisted debriefing can and should be used, but more work is needed to elucidate these.

Current Gaps and Challenges

In 2011, Raemer and colleagues52 identified large gaps in the simulation debriefing literature. They noted at the time that research in the area was sparse despite a widespread belief that debriefing is a key component in simulation-based learning. Over 10 years later, there has been little change in the debriefing research landscape, which remains a major challenge to simulation educators as we attempt to develop interventions that will maximize learning. Although it is likely oversimplistic to suggest that all learning in simulation occurs in debriefing, it is undoubtedly a key component of any simulation intervention.89 Learners may recognize behaviors that are successful or unsuccessful as they receive feedback during the scenario itself (such as vital signs improving on a mannequin as an intervention is performed or a standardized patient responding positively to a line of discussion), but cementing that knowledge with some form of reflection is likely critical. A recent meta-analysis on educational debriefing (ie, after-action reviews), which was not focused on simulation, highlighted the importance of exploring task and training characteristics.27 For tasks that offer limited intrinsic feedback and are of high complexity (eg, managing a simulated complex trauma), debriefing was associated with a higher impact on performance than for tasks with more intrinsic feedback and low complexity (eg, simulated chest compressions).27 Such relationships were rarely tested in the studies we reviewed; fewer than half of the studies stated even simple main-effect hypotheses.

What happens in a debrief or feedback session also has an impact. An analysis of debriefing interactions showed that combining advocacy with inquiry, asking open-ended questions, and paraphrasing supported learners' reflections, whereas stand-alone appreciations did not.42 Studies of team decision making and meetings, some outside of healthcare, have revealed that what team members do and say during meetings significantly impacts their performance, for example, decision quality.96–103 We have limited evidence relating specific interactions in a debrief to outcomes.

Insights into which debriefing interventions are most effective for improving educational and clinical outcomes in simulation-based education are required to guide debriefing facilitators. These insights will inform debriefing faculty development efforts.104–106 They will also help mitigate workload during debriefing,15 enhance debriefing skills and debriefing quality, and thus contribute to safe patient care. Studies examining the direct effect of specific debriefing interventions that clearly link to outcomes will be helpful in improving our delivery of simulation-based education. In their work on trauma education, Brazil and colleagues107 examined the use of a relational coordination framework in trauma simulation and highlighted some of the more subtle changes in culture that can be enhanced by simulation and debriefing.

Without empirical evidence from the healthcare simulation literature to guide us, currently available debriefing techniques and frameworks have been drawn from educational and other social sciences research. However, there are signs from that literature that currently established practices in simulation debriefing may not be effective. As an example, Keiser and Arthur27 in their meta-analysis found that debriefings were less effective when they included an initial reactions phase, a key component in many debriefing frameworks (though not all; the Diamond model, for example, does not include it).33 We are not aware of any empirical evidence in healthcare simulation debriefing on the role of the reactions phase. However, as discussed in the Results, one study demonstrated the benefit of a “relaxation phase” before debriefing. Studies like this are important because they demonstrate how very simple and feasible changes in our debriefing strategies can have large impacts on learning outcomes.

One of the largest challenges we face is poor reporting. Even in studies explicitly examining a debriefing intervention, details of the intervention that would allow replication of the study were missing. The publication of reporting frameworks for simulation-based research108 will hopefully help with this issue, although it is important that authors and editors refer to these frameworks to ensure adequate detail is present. With the large number of potentially influencing variables, each with its own challenges of operationalization and measurement, understanding how debriefing influences outcomes will be difficult. What is a good intervention in one context might not be helpful in another, even where there may be only subtle differences between the two.

Study Strengths and Limitations

Studying debriefing can be a challenge, with multiple variables in play. Some of these can be measured (learner training level, debriefing framework), but many are more difficult to assess (learner/debriefer emotional state, tone of questioning, debriefer-learner interactions, cultural implications). This complexity makes summarizing this literature especially difficult. We have done our best to mitigate this by gathering a large group of experts (both within clinical simulation and outside of it) with diverse academic and geographic backgrounds to review the studies and our results. Not only were relevant simulation and debriefing variables extracted from the data, but the quality and risk of bias of included studies were also assessed.

However, one of the major limitations of our study stems from the state of the currently available literature. There are few high-quality empiric studies that would allow us to provide clear recommendations on best practice in debriefing in simulation education. In addition, there were large gaps in reporting in the included studies, making extraction challenging.

CONCLUSIONS

Debriefing and feedback are important components of simulation-based education. Despite this, our current debriefing strategies, frameworks, and techniques are not based on robust empirical evidence. We believe that intentional simulation debriefing research is of the utmost importance to ensure that we are providing the best possible learning experiences.

ACKNOWLEDGMENT

The authors thank the 2023 Society for Simulation in Healthcare Research Summit participants who reflected on our results and provided valuable insights. The authors also thank Dr Rachel Elkin for her final review of the manuscript.

REFERENCES

1. Yucel C, Hawley G, Terzioglu F, Bogossian F. The effectiveness of simulation-based team training in obstetrics emergencies for improving technical skills: a systematic review. Simul Healthc 2020;15(2):98–105. doi:10.1097/sih.0000000000000416.
2. Fransen AF, van de Ven J, Banga FR, Mol BWJ, Oei SG. Multi-professional simulation-based team training in obstetric emergencies for improving patient outcomes and trainees' performance. Cochrane Database Syst Rev 2020;12:CD011545. doi:10.1002/14651858.CD011545.pub2.
3. Buljac-Samardzic M, Dekker-van Doorn CM, van Wijngaarden JD, van Wijk KP. Interventions to improve team effectiveness: a systematic review. Health Policy 2010;94(3):183–195. doi:10.1016/j.healthpol.2009.09.015.
4. Cook DA, Brydges R, Hamstra SJ, et al. Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis. Simul Healthc 2012;7(5):308–320. doi:10.1097/SIH.0b013e3182614f95.
5. Eppich WJ, Hunt EA, Duval-Arnould JM, Siddall VJ, Cheng A. Structuring feedback and debriefing to achieve mastery learning goals. Acad Med 2015;90:1501–1508. doi:10.1097/acm.0000000000000934.
6. Cheng A, Grant V, Robinson T, et al. The Promoting Excellence and Reflective Learning in Simulation (PEARLS) approach to health care debriefing: a faculty development guide. Clin Simul Nurs 2016;12(10):419–428. doi:10.1016/j.ecns.2016.05.002.
7. Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ 2014;48:657–666. doi:10.1111/medu.12432.
8. Cheng A, Hunt EA, Donoghue A, et al. EXPRESS—examining pediatric resuscitation education using simulation and scripting. The birth of an international pediatric simulation research collaborative—from concept to reality. Simul Healthc 2011;6(1):34–41. doi:10.1097/SIH.0b013e3181f6a887.
9. Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief: a critical review of healthcare simulation debriefing methods. Simul Healthc 2016;11:209–217. doi:10.1097/sih.0000000000000148.
10. Rudolph JW, Simon FB, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med 2008;15:1010–1016.
11. Rudolph JW, Foldy EG, Robinson T, Kendall S, Taylor SS, Simon R. Helping without harming. The instructor's feedback dilemma in debriefing—a case study. Simul Healthc 2013;8:304–316.
12. Seelandt JC, Walker K, Kolbe M. “A debriefer must be neutral” and other debriefing myths: a systemic inquiry-based qualitative study of taken-for-granted beliefs about clinical post-event debriefing. Adv Simul (Lond) 2021;6(1):7. doi:10.1186/s41077-021-00161-5.
13. Cheng A, Eppich W, Kolbe M, Meguerdichian M, Bajaj K, Grant V. A conceptual framework for the development of debriefing skills: a journey of discovery, growth, and maturity. Simul Healthc 2020;15(1):55–60. doi:10.1097/sih.0000000000000398.
14. Sweeney RE, Clapp JT, Arriaga AF, et al. Understanding debriefing: a qualitative study of event reconstruction at an academic medical center. Acad Med 2020;95(7):1089–1097. doi:10.1097/acm.0000000000002999.
15. Fraser KL, Meguerdichian MJ, Haws JT, Grant VJ, Bajaj K, Cheng A. Cognitive load theory for debriefing simulations: implications for faculty development. Adv Simul (Lond) 2018;3:28. doi:10.1186/s41077-018-0086-1.
16. Kolbe M, Rudolph JW. What's the headline on your mind right now? How reflection guides simulation-based faculty development in a master class. BMJ Simul Technol Enhanc Learn 2018;4(3):126–13
