Health Care Simulation in Person and at a Distance: A Systematic Review

The delivery of health care simulation through a virtual environment spans more than 4 decades,2,3 yet the Coronavirus Disease 2019 (COVID-19) pandemic served as a catalyst for the move to online methods of simulation-based education (SBE), posing new challenges. The wide acceptance of distance simulation, along with the historical success of online instruction,3 confirms the learning potential of distance simulation.4 The ability to deliver simulations remotely holds many advantages, notably a broader reach across geographic locations and access to experts globally.6–9 Hayden et al5 additionally concluded that distance simulation enables a larger number of trainees and more simulation sessions in less time, thereby saving on staffing costs.

Despite its advantages, the rapid adoption of distance simulation during COVID-19 exposed a large gap in knowledge and raised a profusion of new questions. One such question was: What are the different methods of in-person and distance simulation?9 Many distance simulations, seen primarily during the pandemic, describe participants and facilitators synchronously participating in the simulation virtually while located in different physical places.9 We refer to this as "distance-only simulation." Another form of distance simulation involves a group of learners and/or facilitators situated at 1 site with facilitator(s) and/or other learners joining them remotely. This combined in-person and distance-synchronous form of simulation is the focus of this study, and we refer to it as "mixed-distance simulation."

As the field of distance simulation expands, the questions that persist are: What is its effectiveness, what are the best delivery methods, and what are the challenges encountered? This systematic review aims to determine the following: In the distance health care simulation literature, what are the methodological characteristics and effectiveness as measured by descriptions of platforms, technology, or teaching methods and learning outcomes? Subaims include the examination of faculty training, theoretical frameworks used, implementation methods, and areas for future research inquiry.

METHODS

This descriptive analysis systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines10 and is registered with the International Prospective Register of Systematic Reviews, Prospero (ID: 274842). This study was initially conceptualized to systematically review the distance simulation literature under the auspices of the 2023 Society for Simulation in Healthcare (SSH) Research Summit and is a study stemming from the work of a scoping review conducted in 2019.11

Search Databases

The complete search strategy for the review can be found in the Supplemental Digital Content (SDC) (See Table, Supplemental Digital Content 1, Search Strategy, https://links.lww.com/SIH/A993). To ensure the inclusion of the most current articles possible, this study included 3 sequential searches conducted between September 2020 and January 2022, each sharing identical search strategies. Three librarians assisted with this process. The authors performed ancestry and hand searches where appropriate. Figure 1 displays a PRISMA flowchart of the selection process.

FIGURE 1:

PRISMA flowchart.

The searches encompassed all distance simulation. During the screening process, however, it became apparent to the researchers that the distance-only learning environment possessed a very different dynamic than mixed-distance. For this reason, a final screening was performed to label each study “distance-only” or “mixed-distance” to separate the subsumed database into 2 separate databases. This article includes the analysis of only the mixed-distance simulation dataset.

Inclusion Criteria

We included articles that met the following criteria:

▪ A simulation activity that had a distance element (in at least 1 arm of the study if multiarm) where at least 1 simulation participant was distant/remote and at least 2 other participants were together in person (eg, learners, observers, simulated patients or embedded participants, facilitators, operators, etc)
▪ Simulations that were conducted synchronously
▪ Peer-reviewed, research-based studies (quantitative, qualitative, and mixed methods)
▪ Outcomes were reported from at least 1 analysis in any form
▪ Any year of publication
▪ Any language

Exclusion Criteria

Subsequently, articles with the following criteria were excluded:

▪ Articles with no specific qualitative or quantitative outcomes12
▪ Non–peer-reviewed articles, abstracts, and poster presentations
▪ Simulations that were conducted fully at a distance (no in-person element)
▪ Simulations that were conducted in person where students were in another observation room physically located away from the simulation room and where they were located together both before and after the simulation (no distance element consistent throughout the active simulation activity)

Review and Extraction Process

Data collection and extraction were conducted using Covidence systematic review software (Veritas Health Innovation, 2022). Interrater reliability training was performed before abstract screening. Each title and abstract was then screened by 2 reviewers independently using the aforementioned inclusion and exclusion criteria, with consensus reached for all articles. The same process was followed for full-text screening, resulting in a Cohen kappa of 0.82 across the 10 reviewers. Two reliability training sessions were conducted before extraction, focusing mainly on item clarity and the addition of instructional notes where needed. After training, 2 reviewers independently extracted each article, with 1 reviewer (N.B.) extracting every article to provide consistency in data collection. Conflicts were resolved via consensus between the 2 reviewers and brought forward to the group in weekly meetings to ensure awareness and reliability throughout the team. All authors assisted in table cleaning and analysis.
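For readers unfamiliar with the statistic, the Cohen kappa (κ) reported above quantifies agreement between reviewers beyond what chance alone would produce. The following is a minimal sketch of the calculation; the counts shown are hypothetical, for illustration only, and are not data from this review.

```latex
% Cohen kappa: chance-corrected agreement between 2 reviewers
% p_o = observed proportion of screening decisions on which the reviewers agree
% p_e = proportion of agreement expected by chance, computed from each
%       reviewer's marginal include/exclude rates
\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]
% Hypothetical illustration: if 2 reviewers agree on 92% of abstracts
% (p_o = 0.92) and their marginal rates imply chance agreement of
% p_e = 0.55, then kappa = (0.92 - 0.55)/(1 - 0.55) ~ 0.82,
% a value comparable to the one reported for this review.
```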

Quality Assessment

Before extraction of the literature, we chose several quality assessment tools that our expert librarian recommended for pilot testing. Pilot testing included the use of different risk-of-bias tools to assess 3 randomly selected articles. With the guidance of 3 librarians, we chose the Critical Appraisal Skills Programme (CASP),13 based on medical literature and endorsed by the Cochrane Qualitative and Implementation Methods Group,14 because it was the best fit for our study and produced the highest reliability among our reviewers. Because many included studies used a pretest-posttest design and CASP does not have a pre/post risk-of-bias tool, we chose the NIH Before & After with No Control Group Tool to assess those articles.15 We conducted interrater reliability testing and training for bias appraisal. The GRADE approach was not used because less than 5% of our studies were comparative in design (See Table, SDC 2, Data Extraction Item Table, https://links.lww.com/SIH/A994; see Table, SDC 3, Characteristics of Mixed-Distance Simulation Studies, https://links.lww.com/SIH/A995; see Table, SDC 4, Extraction Table, https://links.lww.com/SIH/A996). Unlike GRADE or other quality assessment approaches, CASP and the NIH tool do not recommend assigning a score to correspond to the degree of bias; instead, they ask the assessor to first evaluate each study on the following elements: appropriateness of the study design, whether it is methodologically sound, the quality of reporting of results, and whether the results provide value to practice. Assessors then make an overall gestalt appraisal of the study using the collective data from the tool. We measured this overall appraisal by asking the reviewers to indicate "Yes" or "No" to the question: Does this study overall meet quality standards as listed in CASP/NIH?

Data Extraction

The data extraction table was created by the researchers over 4 meetings, guided by reporting guidelines16 and reviews,17,18 the emerging data from the scoping review,11 and expert opinion. Extracted data items included aim of study, demographic data (eg, information on the learners, the professions of learners and facilitators, and country of study), simulation design and delivery (eg, specific scenario details, facilitator training, and distance modalities used), assessment approaches, and outcomes. Studies were also examined to determine whether simulation was the object of the study (ie, studies "on" simulation) or was used as a method to study a non–simulation-related topic (ie, studies "using" simulation).19 The level of evaluation for each study was categorized using the Kirkpatrick Levels of Evaluation.12 The presence or absence of a "pictogram," a diagrammatic or photographic representation of the overall configuration of the simulation or research, was also noted. This aligns with a directive of The Distance Simulation Collaborative Group, who created a Pictogram Group on realizing that the presence of pictograms assists in understanding distance simulation methodology and eliminates confusion often experienced with descriptive text.20 A complete list of the extracted data and supporting guidance for the researchers can be found in SDC 2 – Data Extraction Items, https://links.lww.com/SIH/A994.

Data Analysis and Synthesis

Due to the high heterogeneity of our data, a meta-analysis of the systematic review results was not conducted. Instead, we used a descriptive analysis over a series of 10 meetings to explore patterns and trends in the evidence, identify gaps in the literature, and inform future research directions. Analysis of each column, and across the table as a whole, led to the identification of further descriptive analyses. Themes from each descriptive analysis were recorded and used later for discussion. The level of outcomes of the included studies was categorized according to the Kirkpatrick Levels of Evaluation.12

RESULTS

Study Flow

Figure 1 depicts the PRISMA flowchart for this review. After the identification of more than 8000 articles from 3 searches, a total of 34 studies met the inclusion criteria for the current review.

Study Demographics

The included studies span a period of 20 years, with the earliest mixed-distance simulation article published in 2001.21 The studies were conducted in a total of 22 countries, with the United States holding the largest number of publications (n = 26, 76%), followed by Canada (n = 5, 15%) (Fig. 2). A third of the studies (n = 12, 35%) were transnational in their delivery8,21–32 (See SDC 3, Characteristics of Mixed-Distance Simulation Studies, https://links.lww.com/SIH/A995). The most common learner group was physicians at different levels of training (n = 13, 38%), followed by nurses (n = 8, 24%) and 1 study of prehospital care providers (3%). Approximately one third (n = 12, 35%) were interprofessional.21,22,26,27,29–36

FIGURE 2:

Depiction of the countries involved in mixed-distance simulation research.

The aims of these simulations are diverse and focus on different modalities. In 14 studies (41%), the objective is to train procedural skills, including skills such as preparing staff to use telemedicine techniques and telepresence robots. The remainder of the studies focus on teaching communication, clinical reasoning, problem-solving, and team-building skills. Relative to the aims, 20 studies (59%) focus "on" simulation, studying the methods used, whereas the remaining 14 classify as research "using" simulation, studying a concept with simulation as the method.

Terminology

The studies reflect significant variation in the terminology used for distance simulation, as shown in the Terminology Table (see Table, SDC 5, https://links.lww.com/SIH/A997). Many terms describe distance simulation as "remote" simulation or "telesimulation," and at times a single article uses several terms interchangeably.

Study Outcomes

A detailed list of study outcomes can be found in Table 1, SDC 3 (Characteristics Table, https://links.lww.com/SIH/A995), and SDC 4 (Full Extraction Table, https://links.lww.com/SIH/A996). Of the 34 studies, 21 (61.8%) are categorized at Kirkpatrick Level II and 13 (38.2%) at Kirkpatrick Level I, of which 11 (32.3%) are quantitative and 2 (5.9%) qualitative. None of the studies provide an evaluation at Kirkpatrick Level III or IV.

Qualitative studies that explored learners' experiences demonstrated a clear preference for in-person teaching.26,27,37

A summary of the reported findings related to simulation is presented in SDC 4 (https://links.lww.com/SIH/A996), including the few studies that compare mixed-distance methods. Eight studies (23.5%) mention using a tool with published validity evidence for the outcome being assessed, whereas 25 report no validity evidence for their assessment tool and use other forms of author-designed assessment.

When assessing for risk of bias, the NIH tool was used in 16 (47%) studies and CASP in 18 (53%). Table 1 demonstrates the risk of bias for each tool. Nineteen (55.9%) studies demonstrated a low risk of bias and 15 (44.1%) demonstrated a high risk of bias.

TABLE 1 - Reporting Summary of Simulation Design and Delivery

Simulation design (n/34, %)

Study design
  Quasi-experimental: 14 (41.2)
  Pretest-posttest design: 9 (26.4)
  RCT: 6 (17.6)
  Qualitative: 2 (5.9)
  Case-control: 2 (5.9)
  Retrospective study: 1 (2.9)

Theoretical framework
  Not mentioned: 28 (82.4)
  Mentioned: 7 (20.6)
    Social presence model: 2 (5.9)
    Social practice: 2 (5.9)
    Deliberate practice: 1 (3)
    Community of inquiry: 1 (3)
    Cognitive load theory: 1 (3)
    Situated learning theory: 1 (3)
    Technology acceptance model: 1 (3)
    Guthrie and Wigfield engagement model: 1 (3)
    Bandura self-efficacy theory: 1 (3)
    Social learning theory: 1 (3)

Outcome
  Kirkpatrick I: 13 (38.2)
    Quantitative studies: 11 (32.3)
    Qualitative studies: 2 (5.9)
  Kirkpatrick II: 21 (61.8)
  Kirkpatrick III: 0 (0)
  Kirkpatrick IV: 0 (0)

Assessment
  Not reported: 25 (73.5)
  Tools with appropriate validity: 8 (23.5)
  Interview: 1 (2.9)

Risk of bias*
  CASP tool
    High risk of bias: 8 (23.5)
    Low risk of bias: 10 (29.4)
  NIH before-and-after with no control tool
    High risk of bias: 7 (20.6)
    Low risk of bias: 9 (26.5)

Simulation delivery (n/34, %)

Faculty training
  Yes: 12 (35.3)
  Not reported: 23 (67.6)

Debriefing
  Debriefing delivered: 17 (50)
    Debriefing tool not mentioned: 13 (38.2)
    Debriefing tool mentioned: 4 (11.8)
  Debriefing not mentioned: 13 (38.2)
  Feedback: 5 (14.7)

*CASP/NIH do not assign a score corresponding to a degree of bias; risk of bias is determined by an overall appraisal of each study.

There were 6 randomized controlled trials (RCTs) in our dataset. Of those 6 RCT studies, only 2 directly compared mixed-distance simulation and in-person skills training. Altieri et al23 found no statistical difference between mixed-distance and in-person simulation when training surgical residents on electrocautery devices. Lin et al38 similarly found no difference in the quality of chest compressions between online- and in-person–trained learners.

Use of Theory

Seven (21%) articles in our review describe a theoretical framework that informed their analysis, as shown in Table 1. Some theories explored were Cognitive Load Theory,39 Situated Learning Theory,40 Community of Inquiry,41 Social Presence Theory,42 the Guthrie and Wigfield Engagement Model,43 and the Technology Acceptance Model.44

Debriefing

Nearly half of the studies (n = 16, 47%) mention a debriefing process, and an additional 5 (15%) refer to a feedback process. However, only 4 (12%) studies describe the use of a recognized debriefing method such as PEARLS (Promoting Excellence and Reflective Learning in Simulation)45 (n = 2),33,46 Plus Delta (n = 2), or Pause and Coach (n = 2).26,27 More than one third of the studies (n = 13, 38%) do not mention a debriefing or feedback process.

Mixed-Distance Simulation Configurations

Five mixed-distance simulation configurations were identified, as shown in Figure 3. Configuration A is the most common (n = 20 studies, 57%) and consists of a remote instructor who facilitates a joint group of learners. These learners frequently have access to on-site simulation modalities. Configuration A was the most appropriate for linking expertise to underserved areas and was well suited for procedural skills training. Configuration B describes 2 sets of learners interacting at a distance with an on-site facilitator. The direction of discussion can be unilateral, with 1 group directing on-site students with a simulation modality, or bidirectional, with both sets of students working together collaboratively. Configuration C resembles a distance-only simulation in that all learners are remote; however, the instructor is accompanied in person by another learner, an instructor, simulation operations personnel, or a simulated participant. Configuration D depicts a group of learners who are located together and interact remotely with a facilitator at another site with a set of simulation modalities. Finally, configuration E describes a sequential mixed-distance simulation in which the simulation is fully remote, but a debriefing or postconference is in person for all participants. Sixteen (47%) studies include a pictogram in their article. Some articles (n = 5, 14.7%) have screenshots that include picture-in-picture (a small video overlay showing a participant or facilitator), indicating that a person was at a distance.

FIGURE 3:

Configuration of participants during mixed-distance simulation.

Challenges

Challenges with distance simulation were reported in 13 (38%) studies. The most common challenges are technical problems, particularly Internet speed, bandwidth, and familiarity with the use of technology (eg, operating the simulator and interacting with the online software). Other challenges include scheduling due to differences in time zones and weekend schedules, faculty training, and fluency in English.

As a preventative approach to anticipated challenges, facilitator training was reported in 12 (35%) studies; most relate to instruction on remote technology, simulation skills, or assessment tool reliability training.

DISCUSSION

The expansiveness of our extraction table (Table, SDC 4, https://links.lww.com/SIH/A996) brings to light many discussions, old and new. Several highlights are familiar to health care simulation research, including the need for higher-quality studies (high methodological rigor, intentional measurements, tools with validity and reliability evidence), higher Kirkpatrick evaluation levels (ie, III and IV), standardization of terminology, and reporting guidelines for the purpose of study replication and literature searches. Through our analysis, we distinguish new characteristics (eg, implementation of distance simulation), challenges, limitations, and future directions. We discuss the most salient points here.

Taxonomy and Terminology

A consistent challenge in health care simulation is discrepancy in the terminology used, a challenge only amplified in the distance simulation literature. Our review reveals 8 main headings, and multiple variations within these headings, used to describe mixed-distance simulation, demonstrating an absence of standardization. The terms most often used were variations of "tele-" and "remote" simulation (Table, SDC 5, https://links.lww.com/SIH/A997). These terms are similar to those proposed at a recent consensus summit on distance simulation taxonomy convened by The Healthcare Distance Simulation Collaborative Group.47 However, the group ultimately settled on "distance" because it was the most neutral term that did not connote other unrelated interpretations.

Duff et al48–50 describe distance simulation as "a novel offshoot of an established discipline" (p. 185), an emerging field in which the creation of vocabulary, with its expected variance, and the process toward standardization are integral parts of the developmental phases. Reaching agreement on terminology is necessary to ensure standardization, increase literature access, and help advance this field of simulation. Authors and simulationists may feel that their choice of words is the most accurate description, or may be unaware that other terminology exists. In this study, as an example, the authors encountered this same dilemma in determining the best-fit term for what this article calls "mixed distance." After using the term "hybrid" during the 2-year period of data collection and analysis, the authors ultimately decided to avoid adding to the confusion around that term because it is already used in an assortment of descriptions throughout the health care simulation literature. After reviewing the relevant literature, we suggest that "mixed-distance simulation" best describes the educational environments we examined while causing the least confusion. We acknowledge, however, that adding a new term may contribute to the confused lexicon of our field. For this reason, it is imperative that mechanisms be considered for introducing such new terms through a formal process and consensus.

Quality of Studies

The methodological approaches used by the studies in this review were diverse and underreported. This is reflected in the high percentage (44%) of included studies with a high risk of bias, as determined by the CASP and NIH quality assessment tools. We found minimal reference to simulation standards of best practice in both design and implementation within the included studies. Similarly, there is a lack of description regarding guiding theory, study design, and debriefing approach. Finally, less than one quarter of the studies use assessment tools with published validity evidence supporting their described use. Taken together, these inconsistencies significantly limit our ability to draw firm conclusions and should be addressed in future studies on mixed-distance simulation.51

Study Outcomes

This review aimed to explore the effectiveness of mixed-distance simulation, including an evaluation of outcomes. The Kirkpatrick Framework provides a structure to evaluate the impact of outcomes in which each level builds on the previous one (ie, level I builds to II, etc), with the expectation that effectiveness at each level leads to meaningful changes in patient care and the organization.12 Most studies (n = 21, 62%) achieved a Kirkpatrick Level (KL) II outcome, demonstrating improved knowledge, skills, or attitudes; however, none reached KL III or higher, such as transfer of training to on-the-job performance.12 This is not surprising given how recently this form of simulation has emerged compared with in-person simulation. By studying the efficacy of mixed-distance simulation at the levels of clinical practice and patient care outcomes, through both quantitative and qualitative methodologies, we can substantially enhance our capacity to evaluate the effectiveness of mixed-distance simulation modalities.

It is important to note that KLs are intended to provide a framework for categorizing a study's level of outcomes, not its quality,12 whereas quality assessment tools (such as CASP and NIH) are intended to examine a study's quality through the assessment of risk of bias. The observation that most of the studies were at lower KLs is not a critique of quality (ie, many studies at levels I and II were deemed to have used high-quality methods), but rather an observation that, with an abundance of level I and II evidence, the field should theoretically be able to build toward higher levels of impact (KLs III and IV) using quantitative or qualitative methods. However, the suboptimal methodological quality revealed in this study, as indicated by the high risk of bias, may provide an inherently flawed foundation of KL I and II studies on which subsequent research endeavors at KLs III and IV may build.

When the purpose of the research was examined, most of the studies (nearly 60%) were "on" simulation. These findings suggest that mixed-distance simulation is in its early stages of adoption, where educators are still exploring methods and looking to establish proof of concept and feasibility. Considering the novelty of mixed-distance simulation and the spirit of early discovery palpable throughout the studies, it is logical that many research foci concentrate on investigating the intricacies of mixed-distance simulation modalities rather than using these, as yet unstudied, approaches to study other concepts.

Interestingly, the most common modality in our review was procedural skills training with task trainers. This finding addresses an emerging topic in distance simulation: Is distance simulation appropriate for training procedural skills?52 Although our findings suggest the adequacy of procedural skills training at a distance, an alternate school of thought holds that, although these skills can be delivered remotely, caution must be taken when suggesting equivalence in training and assessment compared with in-person teaching. There is not yet strong evidence to settle this debate, because this review contained only 6 RCTs. Although we are considering only 2 sources of data,23,38 the notion that, when implemented thoughtfully, mixed-distance simulations can lead to learning outcomes equivalent to those of in-person simulation is encouraging.27

The nuances of online observation are critical to providing feedback and, hence, to learning outcomes. Mikrogianakis et al8 addressed the issue of mixed-distance simulation assessment and found a positive correlation between assessment scores completed in person and remotely. Although these data are promising, there remains a need for further comparative studies to expose new areas of consideration for assessment (eg, how reliable or valid are remote assessments? What is missing from the in-person assessment? What are the advantages and disadvantages?). The convenience of online remote access not only offers an advantage for participation and faculty recruitment, but also may potentially address assessor recruitment challenges.

Challenges

The most common challenge encountered in mixed-distance simulation was technical problems; poor Internet speed and bandwidth and a lack of familiarity with technology are frequently cited. Although technical issues are recognized as a challenge, viable solutions are seldom explored in the studies. The mitigation of technical challenges could be achieved through faculty and staff development and a dedicated commitment to comprehensive preparation and meticulous planning.33 Studies in distance simulation highlight the importance of faculty development in delivering distance simulation.53 Our review found that only one third of the studies reported faculty training, which was often not specific to distance simulation. Distance simulation educator guidelines, developed using the Delphi method, were recently published by Bajwa et al51 and may serve as a guide for distance educator development.54

Yet, even with the most thorough planning, technical glitches may occur in any online environment. Acknowledging such innate challenges at the onset of a mixed-distance activity may temper potential technological frustration.

Other challenges related to the transnational nature of these simulations. The included studies encompassed 22 countries, and a third of the studies were transnational simulations. Recognizing learners' travel limitations, particularly for those in medium- to low-income countries restricted by cost, time, and challenges with visa approvals,8 mixed-distance simulation provides a unique opportunity to access high-quality training.20,33 The reach and access provided by these simulations, while an incredible opportunity, also underline geographic challenges, because different time zones created problems with scheduling. The transnational attribute also produced cross-cultural challenges, most notably language differences as a barrier to understanding. This can be doubly challenging when the aim of the activity is not focused on heightened awareness of culture and diversity. Interestingly, this was not addressed in any of the studies in this review; however, justice, equity, diversity, and inclusivity (JEDI) have been identified as necessary competencies for distance simulation educators, enabling them to interact with the required respect and awareness when dealing with learners of diverse backgrounds.39,42,51,55

Theoretical Frameworks

Gross et al52 describe the need for theoretical and educational foundations to underpin research in the expanding field of distance simulation. Although only 7 studies reported the use of a theoretical framework, they offer valuable insights regarding the specific theories used in mixed-distance simulation. Notably, through distance simulation, the SBE literature is now exploring theories based in technology, informatics, and online experiences in combination with theories commonly used in SBE (eg, Kolb Experiential Learning Theory, Situated Learning, and Deliberate Practice). The use of these theories is central to anticipating challenges for faculty and learners when engaging remotely, and the combination of common SBE theories and newer technology-based theories is essential to the development of quality mixed-distance simulations.53

Mixed-Distance Simulation Configurations

This review identified 5 configurations of mixed-distance simulation (Fig. 3) that emphasize the advantage of global and expert access. There were positive and negative themes within each configuration. In configuration B, learners in the distance arm had an inferior experience compared with their in-person counterparts, with difficulties arising from the technology and from interacting in the virtual environment. One study used configuration D to scale expertise: 153 experts in their fields, located remotely, managed an on-site bioterrorism case.32 In addition to easier access to educator or simulation expertise, all configurations have the potential for scalability of participants. An ongoing area of interest in SBE is how to deliver high-quality mass education (>200 students) through the medium of simulation. This dilemma is further compounded by the resource-intense nature of SBE and the inherent role limitations in natural patient care teams, where each profession or specialty is typically represented by a single individual at a given time. Mixed-distance simulation potentially allows for in-person teams that mirror actual clinical practice while scaling the number of observing participants remotely. This critical advantage was identified as a priority area for future research at the recent Research Summit at the International Meeting on Simulation in Healthcare 2023.56

Configurations A to D portray prevalent configurations, yet they offer a very rudimentary depiction of arrangements with the potential for significantly greater complexity, particularly when considering sequential methodologies as demonstrated in configuration E. Continuous evaluation of evolving configurations may lead to descriptions of implementation methods from basic to complex. These classifications may assist in educator development, research reporting, and communication within the literature.

Pictograms

As the reviewers analyzed the configurations and methods of each study, it became apparent that the presence of a pictogram conveyed configurations and methods more efficiently than written descriptions. Less than half of the included studies provided a pictogram, making it challenging to identify the remaining studies as mixed-distance simulations and frequently requiring verification of methods with study authors. We believe that creating a standardized description through imagery assists in the reporting of methodology and allows for a shared understanding of these simulations.

Debriefing

Debriefing is a guided process in which students and faculty engage in reflective thinking to examine what happened and what can be learned from the experience.57 Debriefing is a critical element of simulation-based education, having been identified as the most crucial factor in learning,58 and best practice dictates that it be incorporated regardless of how the simulation activity is delivered (in person vs distance). The findings in this study reveal that more than one third of the studies failed to mention any debriefing. Of the studies that performed debriefing, only 4 discussed using a specific debriefing method.

Limitations

Several limitations emerged throughout this review. The most prominent is the high risk of bias of the included studies. Consequently, the findings and conclusions drawn from this review should be treated as preliminary and require further study to establish firm evidence. Further limitations include the challenge of identifying mixed-distance simulation studies: without a visual depiction (ie, a pictogram), some studies may not have been identified as mixed-distance simulations by the reviewers and were omitted. In addition, this review examined only peer-reviewed literature, whereas many mixed-distance simulations conducted during the COVID-19 pandemic have not been disseminated in the literature or are yet to be published. Our findings, when used in combination with the COVID-19 Distance Simulation Survey findings,59 may provide a more accurate view of the existing landscape of mixed-distance simulation. Finally, the relatively small number of articles, along with the absence of comparative studies, did not allow for the study of different subsets of mixed-distance simulation, such as differences in study design, learning objectives, purpose, or modality that may produce other effects between these groups, including greater effectiveness or further challenges.

Future Directions

This review identifies areas that require further investigation. Flowing from the limitations experienced in this review, there is a demand for higher quality research, comparative studies, higher levels of evaluation impact (ie, KLs III and IV), reporting guidelines for mixed-distance simulations, and simulationist development in distance and mixed-distance technologies.

One recurring theme was learner preference for in-person interactions over distance methods; however, the reasons for this preference are not reported, and the nuances of each method (ie, were there ways to improve distance engagement?) could not be inferred. Studies that evaluate and identify best practices in distance simulation or in online and virtual environments will inform best practices in mixed-distance simulation design and delivery.

The ability of distance simulation to have a transnational reach and to cross national urban and rural divides highlights the importance of mindfulness toward differences between cultures and a greater understanding of justice, equity, diversity, and inclusivity. Further research in this field is vital to understand the new challenges produced by mixed-distance simulation, as well as perspectives of the learners and educators to ensure effective cross-cultural learning.

The differences in the dynamics between distance-only and mixed-distance simulation need to be explored with further comparative studies. More importantly, are learning outcomes in distance-only or mixed-distance simulation comparable with in-person simulation? This study, along with a systematic review on distance simulation,60 represents foundational work in this field, particularly as we seek to discern differences between the in-person and distance environments.61

No study mentioned the use of a particular reporting guideline. Stapleton et al61 propose such reporting guidelines and call for further inquiry into feasibility and the information needed for replication and understanding of data. This study found minimal reference to simulation standards of best practice in the design and implementation of distance simulation activities,55 as well as a lack of description of the underlying theory used to guide the simulation activity. Although such standards and theories may help guide simulationists in the development of quality mixed-distance simulations, existing standards and theories must be reviewed from the perspective of distance simulation to identify gaps for future study.

Finally, no articles address the issue of cost. Given the required preparation, faculty development, and equipment investment at local sites, it is unclear whether there are any measurable cost savings in mixed-distance simulation; costs may shift rather than decrease. Further studies in this area will help inform the adoption of mixed-distance simulation at a broader scale.

CONCLUSIONS

With the growth of technology, there has been an increase in the ability to deliver SBE at a distance. Mixed-distance simulation represents a growing approach to the SBE of health care professionals in which learners and facilitators are in different remote and in-person configurations. This systematic review identifies 5 types of configurations for mixed-distance simulation and discusses their potential applications. Furthermore, this review found that mixed-distance simulation can deliver procedural and team-based communication training.
