The study was conducted at the Zucker School of Medicine at Hofstra/Northwell (ZSOM). The ZSOM employs self-directed learning with a case-based/problem-based pedagogy and early clinical experiences. The curriculum introduces students to core clinical skills (communication skills for gathering a complete history and physical examination skills) during their first 7-week course. During the subsequent course, students are introduced to clinical reasoning during curricular sessions in which small groups of students work together to gather a patient history, generate diagnostic hypotheses, and plan a hypothesis-driven physical examination to test their diagnostic hypotheses. These sessions occur three times during each of the three remaining first-year (MS1) courses and the three second-year (MS2) courses. All students complete a clinical skills examination (CSE) at the end of each of the MS1 and MS2 courses. The CSEs are conducted at the Clinical Skills Center at the Northwell Health Center for Learning Innovation (CLI). The Clinical Skills Center at CLI consists of 14 rooms designed to resemble outpatient examination rooms. CLI has a regular pool of previously screened standardized patients (SPs) trained to the case and to the use of checklists to assess student performance. As part of standard operating procedures, SP educators conduct annual video reviews of CSE encounters to ensure reliability of checklists by course, case, and SP.
Curricular Context
The MS1 year consists of four integrated courses: From the Person to the Professional: Challenges, Privileges, and Responsibilities (CPR), The Biologic Imperative (BI), Fueling the Body (FTB), and Continuity and Change: Homeostasis (HOM). The MS2 year consists of three integrated courses: Interacting with the Environment (IE), Host-Microbe Interaction (HMI), and The Human Condition (HC).
For the academic years spanning 2016 to 2021 (i.e., the classes of 2020 through the first year of the Class of 2024), all end-of-course CSE, except for CPR, were conducted as summative examinations to prepare students for Step 2 CS. At the end of the CPR course, students completed an end-of-course, single-station formative assessment during which they practiced their communication and physical examination skills. The summative CSE resulted in an end-of-course grade of pass or fail and did not include any feedback or coaching. During the academic year 2021–2022, we introduced formative clinical skills examinations (FCSE) to two additional courses (BI and FTB) in the MS1 year and to the first course in the MS2 year (IE).
Participants
First- and second-year medical students in the Class of 2024 (C24) and the Class of 2025 (C25) participated in the FCSE during the academic years 2021–2022 and 2022–2023 as part of standard end-of-course assessments (Table 1). C24 was introduced to the FCSE in their MS2 year; C25 was introduced to the FCSE in their MS1 year. The Class of 2022 (C22) completed all CSE prior to this intervention and served as a historical control. Because several CSE for the Class of 2023 (C23) were run on a virtual platform due to the COVID-19 pandemic, C23 could not serve as a historical control for most of the examinations; C23 data were included as a historical control only for the one examination for which comparable C22 data were not available.
Table 1 Courses and clinical skills examination formats

We included performance data only from students who had previously consented at matriculation.
Educational Intervention
The FCSE consisted of a “linked encounter” followed by real-time, learner-centered feedback and coaching with a trained SP and a clinical skills faculty member, with a built-in opportunity to re-practice identified areas for improvement. The linked encounters (Fig. 2), employed on all CSE since 2016, were developed to assess students’ communication, physical diagnosis, and clinical reasoning skills. During the first part of the linked encounter, students gathered a history from an SP and completed a post-encounter exercise to assess their diagnostic hypotheses and their plan for the hypothesis-driven physical examination. During the second part, students continued the encounter with the same SP and conducted the hypothesis-driven physical examination. The linked encounter concluded with post-encounter documentation of the patient’s leading diagnosis and history of present illness. During the FCSE, faculty observed the linked encounter from a computer lab and documented what the student did effectively and opportunities for practice improvement. The FCSE concluded with a 25-min individualized, learner-centered debrief with feedback and coaching facilitated by the faculty and SP.
Fig. 2Linked station format. White boxes represent the format of summative clinical skills exams. Grey boxes indicate elements added for formative clinical skills exams. SP, standardized patient; HDPE, hypothesis-driven physical examination; HPI, history of present illness
We developed a protocol for learner-centered feedback and coaching within a student-SP-faculty triad that employed elements of a plan-do-study-act (PDSA) cycle, including principles of psychological safety, self-regulated learning theory, and deliberate practice [1, 10, 28] (Fig. 1). The protocol established psychological safety through introductions, a check-in on the student’s emotional state, and a scripted, transparent plan for the process. After safety was established, learners were asked to reflect on the encounter to identify their strengths and areas for improvement, or “learning edge,” and worked with faculty to brainstorm a plan for improvement (PLAN). Learners then re-practiced the skill identified as their learning edge (DO). Students reflected on the re-practice with reinforcing feedback from the faculty and SP (STUDY). At the end of the feedback session, learners reflected on the process and identified an action plan for ongoing practice improvement (ACT).
Students’ clinical skills performance during the encounters was assessed on SP checklists completed after each encounter. Clinical skills faculty graded all student post-encounter questions anonymously using a grading rubric. About 2 weeks after the FCSE, learners received an individualized feedback report including performance data from the SP checklists, the faculty-graded post-encounters, and faculty narrative feedback.
Clinical Skills Faculty
Nineteen faculty, recruited from a pool of ZSOM faculty, participated in the FCSE during the academic years 2021–2022 and 2022–2023. We developed a faculty development curriculum to prepare faculty with the knowledge, skills, and attitudes needed to apply a uniform, learner-centered approach to our FCSE feedback and coaching model. Faculty attended a 90-min interactive, skills-based faculty development session before each FCSE.
Outcome Measures
To assess student perception and experience, students were invited to complete a brief, voluntary, anonymous paper exit survey. Students were given the paper exit survey immediately after completing the CSE, and surveys were collected in real time.
FCSE Exit Survey
C24 completed the FCSE exit survey after their first FCSE. This survey consisted of one 4-point Likert-style question (strongly disagree, disagree, agree, strongly agree) asking students whether it was clear which clinical skills they needed to work on, and two open-ended questions eliciting their personal take-home point and any additional feedback.
Summative CSE Exit Survey
C25 students completed the summative CSE exit survey after their first summative CSE in the MS1 year. Both C24 and C25 completed this exit survey after the first summative CSE in the MS2 year. The summative exit surveys consisted of three 4-point Likert-style questions (strongly disagree, disagree, agree, strongly agree) and one open-ended question. The Likert questions asked students to reflect on whether they were able to apply their take-home points and the lessons learned from the individualized written report from the prior FCSE to that day’s summative CSE. The open-ended question asked for additional thoughts about the clinical skills assessment.
CSE Performance
To assess student performance on the CSE, we used student performance on SP checklists assessing communication and clinical reasoning. The communication checklists, based on the ZSOM core communication curriculum, included 20 items that were consistent across all CSE. The clinical reasoning checklists were case-specific, based on the patient’s chief concern, and included 16–18 items assessing the data gathered during both the history and the hypothesis-driven physical examination. The number of checklist items assessing physical examination techniques varied between courses; these items were therefore not included in our analysis. The pandemic affected exam administration: data gathered from CSE run in a virtual format were excluded from the study. We therefore analyzed data from 4 of the 7 clinical skills examinations (Table 1).
The Hofstra University Institutional Review Board deemed this project exempt from full review.
Statistical Analysis
Data were analyzed using IBM SPSS Statistics, Version 28.0 (IBM Corp., Armonk, NY, USA). Descriptive statistics are presented as the frequency and percent of agree and strongly agree responses for the exit survey questions (4-point Likert scale) and as the percent correct for the SP communication and clinical reasoning checklist performance. Mann–Whitney U tests were used to evaluate group differences (C24 vs. C25) in responses on the summative CSE exit surveys completed in the MS2 year. Analysis of variance (ANOVA) was used to determine group differences in communication and clinical reasoning checklist performance. Post hoc independent-sample t-tests were performed to follow up significant findings. A p value ≤ 0.05 was considered statistically significant, except for the post hoc t-tests, for which we applied a Bonferroni correction requiring a p value < 0.016 for significance.
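The analytic plan above can be sketched in code. This is an illustrative example only, run on synthetic placeholder data (the study itself used SPSS, not this code): a Mann–Whitney U test for ordinal Likert responses, a one-way ANOVA across three cohorts on checklist percent-correct scores, and Bonferroni-corrected post hoc t-tests for the three pairwise comparisons. All sample sizes and score distributions below are invented for demonstration.

```python
# Illustrative sketch of the described analyses on synthetic data,
# using scipy.stats rather than SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # seeded for reproducibility

# Synthetic 4-point Likert responses (1 = strongly disagree ... 4 = strongly agree)
c24_likert = rng.integers(2, 5, size=40)
c25_likert = rng.integers(1, 5, size=40)
u_stat, u_p = stats.mannwhitneyu(c24_likert, c25_likert, alternative="two-sided")

# Synthetic checklist percent-correct scores for the three cohorts
c22_scores = rng.normal(80, 6, size=40)  # historical control
c24_scores = rng.normal(84, 6, size=40)
c25_scores = rng.normal(85, 6, size=40)
f_stat, anova_p = stats.f_oneway(c22_scores, c24_scores, c25_scores)

# Post hoc pairwise t-tests with Bonferroni correction: three comparisons,
# so the per-test significance threshold is 0.05 / 3 (~0.0167)
alpha_bonferroni = 0.05 / 3
pairs = {
    "C22 vs C24": (c22_scores, c24_scores),
    "C22 vs C25": (c22_scores, c25_scores),
    "C24 vs C25": (c24_scores, c25_scores),
}
posthoc_significant = {
    name: bool(stats.ttest_ind(a, b).pvalue < alpha_bonferroni)
    for name, (a, b) in pairs.items()
}

print(f"Mann-Whitney U p = {u_p:.3f}, ANOVA p = {anova_p:.3f}")
print(posthoc_significant)
```

Note that the paper's stated post hoc threshold of p < 0.016 is a slightly conservative rounding of 0.05/3 ≈ 0.0167, so a test significant at the printed threshold is also significant under the exact Bonferroni bound.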
Qualitative Analysis of Narrative Responses on Exit Surveys
A thematic analysis was conducted on the narrative responses to the exit surveys. Two of the researchers (GG and EF) used an iterative, inductive approach to familiarize themselves with the data, generate initial codes, search for themes, and define and name the themes [29]. Responses to the initial exit survey were reviewed independently to generate initial codes, which were then applied to each subsequent survey with clarification of wording and addition of new codes; this process allowed codes to be added until theoretical saturation was reached. The researchers then met to reach consensus on the final codes. After consensus was reached, the two researchers independently coded all responses and met again to reach a final consensus on the coding of individual responses and the identification of themes. A third researcher (JJ) was available to review any codes or responses that lacked initial agreement. Independent coding of the survey data supported trustworthiness and reflexivity [30, 31]. The researchers maintained code books to capture impressions and reflections on the coding process and to note any potential biases.