Development and Psychometric Testing of the Clinical Reasoning Scale Among Nursing Students Enrolled in Three Types of Programs in Taiwan

Introduction

Clinical reasoning (CR) is a “complex cognitive process that uses formal and informal thinking strategies to gather and analyze patient information, evaluate the significance of this information and weigh alternative actions” (Simmons, 2010, p. 1155). CR is a required and essential nursing competency emphasized in nursing professional guidelines (National Organization of Nurse Practitioner Faculties, 2017), certification/licensure requirements (National Council of State Boards of Nursing, 2019), and educational standards (American Association of Colleges of Nursing [AACN], 2021). Moreover, CR is a vital attribute that distinguishes professional nurses from ancillary care providers (AACN, 2021; Simmons, 2010). Good CR in nurses has been associated with high-quality nursing care, patient well-being, and positive patient outcomes (Jang et al., 2021; Kao et al., 2022; Manetti, 2018). The importance of CR in nurses is especially amplified in complex healthcare environments characterized by rapidly changing system and patient needs, high-stakes practice, technological innovation, and the explosion of healthcare information (AACN, 2021; Kavanagh & Szweda, 2017; Kim et al., 2022).

The recent COVID-19 pandemic has further highlighted the critical need for CR competency in nurses (Mariani, 2021). To meet both professional and ethical requirements, nurses should have CR skills before entering practice settings. An instrument that accurately identifies the gaps in CR is critical for faculty and students to reach consensus on CR learning needs. Therefore, the purpose of this study was to inductively develop an instrument based on H. M. Huang et al.'s (2018) Framework of Competencies of Clinical Reasoning for Nursing Students to assess the CR of nursing students studying in different types of nursing programs and to test the psychometric properties of this instrument using confirmatory factor analysis (CFA).

Literature Review

In the nursing literature, CR is often used synonymously with critical thinking and clinical judgment (Klenke-Borgmann et al., 2020). However, Klenke-Borgmann et al. contended that CR, critical thinking, and clinical judgment are three distinct terms. Specifically, critical thinking has been described as a broad term “that includes reasoning both outside and inside of the clinical setting. Clinical reasoning and clinical judgment are key pieces of critical thinking in nursing” (Alfaro-LeFevre, 2019, p. 6). Critical thinking has been defined as a knowledge-based, situation-independent cognitive process used to analyze empirics to make clinical judgments and solve problems (Mohammadi-Shahboulaghi et al., 2021), whereas CR has been defined as a specific concept addressing the clinical thinking processes nurses use at points of care (Alfaro-LeFevre, 2019). CR has been further described as a nonlinear “cyclical nursing process within the limits of patients' circumstances and nurses' knowledge or experience” (Hong et al., 2021, p. 1). The term “clinical judgment” has been defined as the result and end point of clinical thinking and CR (Klenke-Borgmann et al., 2020) and thus refers to the context-specific results and outcomes achieved at points of care. Despite scholars' efforts to distinguish them, these three concepts remain intertwined, with overlapping elements that address the process of reaching a context- and circumstance-specific clinical decision at the point of care.

Although the cultivation of CR in nursing education is required by nursing accreditation organizations (AACN, 2021) and critical to promoting patient outcomes, many new nursing graduates and nursing students still lack the preparation and confidence necessary to apply CR in practice settings (Kavanagh & Szweda, 2017; Killam et al., 2011). In Kavanagh and Szweda's study of 5,000 newly graduated nurses, 23% were unable to identify a change in patient condition or distinguish levels of urgency most of the time, whereas 54% were unable to manage patient problems. Nursing students have been found to lack practice-ready competency (Al-Moteri et al., 2019; Jarvelainen et al., 2018). Al-Moteri et al.'s systematic review synthesized findings from seven countries on nurses' recognition of and responses to patient deterioration in the presence of antecedents. Nurses' failure to recognize the antecedents of patient deterioration and the related judgment errors were issues across all seven countries, highlighting the importance of adequately preparing nursing students in CR before they enter practice. Studies on nursing students' CR ability have likewise identified significant deficits in their abilities to manage deteriorating patients in practice settings (Jarvelainen et al., 2018). The effective application of CR requires nursing students to possess an ability “to gather the right cues, based on the right reason to execute the right action for right patient at the right time” (Levett-Jones et al., 2010, p. 515). The CR literature highlights the necessity of valid and reliable CR assessment tools to accurately identify students' learning needs in CR and effectively guide the design of focused nursing curricula (Menezes et al., 2015).

Tanner's (2006) clinical judgment model and Levett-Jones et al.'s (2010) clinical reasoning model are theoretical models that have been used to develop nursing curricula and andragogical approaches that facilitate growth in CR and promote improvements in clinical decision making. Both models have also been used extensively to guide the development of CR instruments. The Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007) is a frequently used tool in CR assessment. The LCJR was developed within the framework of Tanner's clinical judgment model, based on the responses of 24 baccalaureate nursing students to simulated scenarios, to assess clinical judgment. The resultant LCJR is a 5-point Likert scale instrument comprising 11 dimensions, four developmental phases (exemplary, accomplished, developing, and beginning), and 44 descriptors covering each dimension at each developmental phase. The instrument has been translated into Dutch (Vreugdenhil & Spek, 2018), Chinese (Yang et al., 2019), Spanish (Román-Cereto et al., 2018), Korean (Shin et al., 2015), and other languages to assess nursing students' clinical judgment competency in simulation experiences. Although the LCJR has been shown to be effective in assessing the clinical judgment competency of students in simulated experiences, researchers have noted that rater training is essential to ensure interrater reliability (K. A. Adamson et al., 2012). Other CR instruments are relatively more limited in their validity for student self-evaluation (e.g., Nurses' Clinical Reasoning Scale [CRS; Liou et al., 2016]; LCJR [Lasater, 2007]). Furthermore, CR instruments have been utilized for diverse and broad purposes such as skills testing for specific content areas (e.g., key-feature questions [Nayer et al., 2018] and script concordance tests [Aubart et al., 2021]), nursing admission (e.g., reasoning skills test [Vierula et al., 2021]), and simulations (e.g., LCJR [Lasater, 2007]).
However, assessment of the inherent CR process in nursing has been limited. Nursing education systems differ across countries in both program length and structure. A clinically validated, nursing-discipline-specific instrument for assessing CR in students of diverse types of nursing programs is therefore critically needed (Griffits, 2017).

Conceptual Framework

H. M. Huang et al.'s (2018) framework of CR competencies for nursing students was used to guide this study. CR competency in nursing students is described as “an ongoing, interactive and dynamic process that continuously undergoes adjustments and modifications depending on changes in the clinical context” (H. M. Huang et al., 2018, p. 115). The conceptual framework used in this study consists of four domains of CR, including awareness of clinical cues, confirmation of clinical problems, determination and implementation of actions, and evaluation and reflection. The two to four indicators in each domain result in 13 interconnected and interwoven indicators of CR competencies that together may be used to assess the CR competency of the respondent. The first domain “awareness of clinical cues” includes four indicators: “possession of keen observation, application of past life experiences, possession of professional healthcare knowledge and skills, and willingness to facilitate patients with problem-solving” (H. M. Huang et al., 2018, p. 112). The second domain “confirmation of clinical problems” consists of four indicators: “search for clinical cues, interpret the meaning of clinical cues, connects theories with practice, and recognize important clinical problems” (H. M. Huang et al., 2018, p. 112). The third domain “determination and implementation of actions” encompasses three indicators: “determination of priority, verification of hypothetical answers, and solution to patients' problems” (H. M. Huang et al., 2018, p. 112). The fourth domain “evaluation and reflection” includes two indicators: “evaluation of the effectiveness of problem-solving and self-evaluation and improvement” (H. M. Huang et al., 2018, p. 112).

Methods

Development of the CRS involved four phases (see Table 1), including (a) developing the CR domains and the CRS items, (b) testing content validity, (c) testing construct validity, and (d) testing reliability. The data collection period was from November 2016 to June 2018.

Table 1 - Development of the Clinical Reasoning Scale (CRS)

Phase 1: Developing the CR domains and the CRS items
  Step 1: Explore clinical reasoning ability indicators and framework (literature review).
  Step 2: Develop preliminary items for the new instrument; assess and identify items. First version: four domains, 44 items.

Phase 2: Testing content validity
  Step 1: Assess the appropriateness, representativeness, and explicitness of the first version's items and content (first round of Delphi study, n = 7). Second version: four domains, 34 items.
  Step 2: Assess the appropriateness, representativeness, and explicitness of the second version's items and content (second round of Delphi study, n = 7). Third version: four domains, 30 items.

Phase 3: Testing construct validity of the third version (confirmatory factor analysis, n = 1,504)
  Step 1: Model 1, domains not correlated: four domains, 30 items.
  Step 2: Model 2: four domains, 25 items (five items deleted), poor fit.
  Step 3: Model 3: four domains, 23 items (seven items deleted), poor fit.
  Step 4: Model 4, items deleted and domains correlated: four domains, 16 items (fourth version), goodness of fit.

Phase 4: Testing reliability of the fourth (final) version (n = 1,504)
  Cronbach's α: total scale .894; Domain 1 .801; Domain 2 .789; Domain 3 .830; Domain 4 .839.
  Item–total correlations: .627–.728.
Phase 1: Developing the CR Domains and CRS Items

The CRS items were developed based on H. M. Huang et al.'s (2018) framework of competencies of clinical reasoning for nursing students.

Phase 2: Testing Content Validity

Two Delphi study rounds involving seven experts in nursing education were conducted. All of the experts were seasoned clinicians and educators with 14–30 years of experience in nursing higher education (six full professors and one assistant professor). Six of the seven experts held a doctorate in nursing. The experts were asked to evaluate whether the CRS items accurately reflected the four domains, with the content validity index (CVI) showing the extent to which they collectively agreed upon the “representativeness” (items appropriately reflect the CR domains), “suitability” (items suitably measure nursing students' CR), and “clarity” (items are clearly stated and easy to understand) of the CRS items (Polit & Beck, 2017). The experts ranked the representativeness, suitability, and clarity of each CRS item on a 5-point Likert scale (1 = irrelevant and should be deleted; 2 = seemingly relevant but large-scale revision required; 3 = relevant but in need of small adjustments; 4 = relevant, but needs rewording; 5 = relevant, clear, and precise). Items with a mean score of 4.0 or above were retained, items with a score range of 3.1–3.9 were modified, and items with a mean score less than 3.0 were deleted.
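As an illustration of the content-validity procedure described above, the sketch below applies the stated mean-score retention rules and also computes an item-level CVI as the proportion of experts rating an item 4 or 5. The cutoff and the expert ratings are assumptions for illustration; the article reports CVI values without specifying the formula used.

```python
from statistics import mean

# Hypothetical ratings from 7 experts on the 5-point relevance scale
# (illustrative only; not the study's actual Delphi ratings).
ratings = {
    "item_1": [5, 5, 4, 5, 4, 5, 5],
    "item_2": [3, 4, 3, 3, 4, 3, 2],
    "item_3": [2, 3, 2, 2, 3, 2, 3],
}

def decide(item_ratings):
    """Apply the retention rules as described: retain if mean >= 4.0,
    modify if 3.1-3.9; means below that fall to delete (the published
    rules leave the 3.0-3.1 interval unaddressed)."""
    m = mean(item_ratings)
    if m >= 4.0:
        return "retain", m
    if m >= 3.1:
        return "modify", m
    return "delete", m

def item_cvi(item_ratings, cutoff=4):
    """One common I-CVI definition (an assumption here): proportion of
    experts rating the item at or above the cutoff."""
    return sum(r >= cutoff for r in item_ratings) / len(item_ratings)

for name, r in ratings.items():
    action, m = decide(r)
    print(name, action, round(m, 2), round(item_cvi(r), 2))
```
A scale-level CVI can then be taken as the average of the item-level values across all retained items.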

Phase 3: Testing Construct Validity

The factor structure of the instrument was tested using CFA run in LISREL (Linear Structural Relations) software, Version 9.30 (Scientific Software International, Inc., Lincolnwood, IL, USA). CFA may be used to verify whether an instrument's factor structure is consistent with theory, with results providing stronger evidence than exploratory factor analysis in support of the construct validity of the factor structure (Brown, 2015). CFA was performed to assess the structure of the CRS and identify the optimal model. The main goodness-of-fit indicators for the CRS were the goodness-of-fit index (GFI), adjusted GFI (AGFI), root mean square error of approximation (RMSEA), and Akaike information criterion (AIC). Goodness of fit is indicated by GFI and AGFI values > .90, an RMSEA < .05, and an AIC close to zero (Grave & Cipher, 2017). Items were deleted if their factor loadings were less than .4 or greater than .75 (F. M. Huang, 2009).

Sample size was determined as at least 10 cases per variable based on Nunnally's (1967) principle for adequate sample size. Thus, 440 students from each nursing program type, including 5-year associate degree in nursing (ADN) programs, 4-year bachelor's degree in nursing (BSN) programs, and 2-year RN-to-BSN programs, were necessary to ensure good reliability and validity. Allowing for an estimated 20% attrition rate, the researchers intended to recruit 528 students from each nursing program type (1,584 participants in total). Ultimately, a convenience sample of 1,550 nursing students was recruited across the three types of nursing programs from 10 universities in Taiwan. The inclusion criteria were nursing students who were (a) enrolled in the last semester of their nursing program, (b) of full-time status, and (c) aged 20 years or older.
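The sample-size arithmetic above can be reproduced directly. The 20% inflation of the 440-student minimum (440 × 1.2 = 528 per program) is inferred from the figures reported:

```python
items = 44           # first-version CRS items (variables)
cases_per_item = 10  # Nunnally's rule of thumb
program_types = 3

minimum_per_program = items * cases_per_item             # 440 students
# Inflate the minimum by the anticipated 20% attrition:
recruit_per_program = round(minimum_per_program * 1.20)  # 528 students
total_target = recruit_per_program * program_types       # 1,584 students

print(minimum_per_program, recruit_per_program, total_target)  # 440 528 1584
```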

Phase 4: Testing Internal Consistency and Reliability

The researchers tested internal consistency and reliability using Cronbach's α and item–total correlations. A Cronbach's α ≥ .8 indicates good reliability, and item–total correlations ≥ .3 indicate acceptable internal consistency (Burns et al., 2020).
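For readers who wish to reproduce these reliability statistics on their own data, a minimal standard-library sketch of Cronbach's α and corrected item–total correlations follows. The "corrected" form (each item correlated with the sum of the remaining items) is an assumption, as the article does not specify which variant was used; the data are toy values.

```python
from statistics import pvariance, mean

def cronbach_alpha(items):
    """items: list of per-item score lists, each of length n_respondents."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    item_var = sum(pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def pearson(x, y):
    """Pearson product-moment correlation."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def item_total_correlations(items):
    """Corrected item-total r: each item vs. the sum of the other items."""
    rs = []
    for i, item in enumerate(items):
        rest = [sum(col) - col[i] for col in zip(*items)]
        rs.append(pearson(item, rest))
    return rs

# Toy data: 3 items, 4 respondents.
demo = [[1, 2, 3, 4], [2, 4, 6, 8], [1, 2, 3, 4]]
print(round(cronbach_alpha(demo), 3))  # 0.938
```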

Ethical Considerations

This study was approved by the Research Ethics Committee of National Taiwan University (201509ES002) before data collection. The study was conducted in accordance with ethical principles. The researchers explained the study procedures and obtained informed consent from the participants. Participation was voluntary, and the participants could withdraw at will from the study at any time. None of the participants were enrolled in the researchers' courses.

Results

Phase 1

Forty-four items were developed for the CRS, including 11 items under the “awareness of clinical cues” domain, 13 items under the “confirmation of clinical problems” domain, 11 items under the “determination and implementation of actions” domain, and nine items under the “evaluation and reflection” domain.

Phase 2

The first round of the Delphi study resulted in 12 items being deleted to avoid abstraction and duplication and two new items being added to more comprehensively address the domain of “determination and implementation of actions,” leaving 34 items in the instrument. Item-level CVIs ranged from .42 to 1.0, and the scale-level CVI was .87.

The second round of the Delphi study confirmed that the items accurately reflected the concept and domains of CR for nursing students. Four items were deleted because they lacked specificity. Item-level CVIs ranged from .85 to 1.0, and the scale-level CVI was .98. The resulting CRS included 30 items across four domains (refer to Table 1 for details).

Phase 3

The 30-item CRS was examined for construct validity using CFA. Of the 1,550 recruited students, 1,504 (response rate = 97%) completed the 30-item CRS. The participants were students from 5-year ADN programs (n = 548), 4-year BSN programs (n = 478), and 2-year RN-to-BSN programs (n = 478). A strong majority of the participants were female (91.7%, n = 1,379), and the average age of the sample was 21.25 (SD = 1.05) years. All were full-time students aged 20 years or older. The Kaiser–Meyer–Olkin measure was .95, and Bartlett's test of sphericity was significant for the entire scale (p < .001), indicating that the sample was adequate for factor analysis (Kline, 2015).

Four models were tested (Table 2). Model 1, with 30 items, showed poor model fit (χ2 = 3871.38, p < .001, GFI = .85, AGFI = .82, RMSEA = .08, and AIC = 930). Five items were deleted because of ambiguous wording and factor loadings < .40. The remaining 25 items were tested using a four-factor framework in Model 2, which still showed misfit. Thus, the correlations within each domain were checked, and two items with factor loadings > .75 were deleted. The remaining 23 items were tested using a four-factor framework in Model 3, which showed a better but still inadequate fit (χ2 = 1627.02, p < .001, GFI = .91, AGFI = .89, RMSEA = .07, and AIC = 552). On the basis of these results, to correct the variables with the largest modification index values and, in keeping with theory, to eliminate items with modification index values > 3.84 (F. M. Huang, 2009), seven items and one indicator (“willingness to facilitate patients with problem-solving”) were deleted. The final model showed significantly better goodness of fit (Figure 1; χ2 = 435.38, p < .001, GFI = .97, AGFI = .95, RMSEA = .049, and AIC = 272), meeting the required criteria for construct validity. The results of the RMSEA, root mean square, GFI, AGFI, nonnormed fit index, normed fit index, and AIC further supported the acceptable fit of Model 4 (Table 2). The final version of the CRS includes 16 items in a four-factor framework. The error variances ranged from approximately .15 to .24, with no negative values. All factor loadings exceeded .4, indicating that the items effectively detect the latent variables of clinical reasoning (F. M. Huang, 2009). The 16-item solution accounted for 49.03% of the variance in clinical reasoning competence.
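The reported RMSEA values can be checked against the χ² statistics using the standard point-estimate formula. The sketch below reproduces the reported Model 1 and Model 4 values (n = 1,504):

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate from a chi-square model test:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported chi-square values and degrees of freedom (Table 2, n = 1,504):
print(round(rmsea(3871.38, 399, 1504), 3))  # 0.076 (reported as .08)
print(round(rmsea(435.38, 94, 1504), 3))    # 0.049 (reported as .049)
```
A model whose χ² does not exceed its degrees of freedom yields an RMSEA of zero, which is why the formula clamps the numerator at zero.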

Table 2 - Goodness-of-Fit Statistics for the Comparative Models of the Clinical Reasoning Scale

Model 1: χ2 = 3871.38, df = 399, p < .001, RMSEA = .08, RMS = .04, GFI = .85, AGFI = .82, NNFI = .97, NFI = .97, AIC = 930
Model 2: χ2 = 2715.11, df = 269, p < .001, RMSEA = .08, RMS = .04, GFI = .88, AGFI = .85, NNFI = .97, NFI = .97, AIC = 650
Model 3: χ2 = 1627.02, df = 224, p < .001, RMSEA = .07, RMS = .04, GFI = .91, AGFI = .89, NNFI = .98, NFI = .98, AIC = 552
Model 4: χ2 = 435.38, df = 94, p < .001, RMSEA = .05, RMS = .01, GFI = .97, AGFI = .95, NNFI = .99, NFI = .99, AIC = 272

Note. Model 1 = four factors, correlated factors (items deleted: 5, 8, 9, 14, and 15); Model 2 = four factors, deleted items, correlated factors (items deleted: 11 and 12); Model 3 = four factors, deleted items, with items whose modification index values were > 3.84 removed (items deleted: 3, 7, 18, 19, 20, 25, and 29); Model 4 = four factors. RMSEA = root mean square error of approximation; RMS = root mean square; GFI = goodness-of-fit index; AGFI = adjusted goodness-of-fit index; NNFI = nonnormed fit index; NFI = normed fit index; AIC = Akaike information criterion.


Figure 1:

Measurement Model of the Clinical Reasoning Scale (CRS)

Phase 4

During Phase 4, the researchers tested the internal consistency of each domain. The Cronbach's α for the entire scale (N = 1,504) was .894. The Cronbach's α for the four domains were .801, .789, .830, and .839. The item–total correlations were between .627 and .728 (p < .01). The result of the CRS item analysis met the required criteria for internal consistency.

Therefore, the final CRS is a four-domain, 16-item instrument that measures nursing students' CR using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree; Table 3). Each domain consists of four items, and the CRS takes approximately 5–10 minutes to complete. Total scores range from 16 to 80, with higher scores indicating better clinical reasoning readiness. All items are worded positively; none are negatively worded or reverse-scored.
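A small scoring helper illustrates how the final instrument might be tallied. The mapping of items 1–4, 5–8, 9–12, and 13–16 to the four domains follows the ordering in Table 3 but is an assumption, as the article does not list item–domain assignments explicitly:

```python
# Assumed domain layout: consecutive blocks of four items, in the
# order the domains are presented in the article.
DOMAINS = {
    "awareness of clinical cues": range(0, 4),
    "confirmation of clinical problems": range(4, 8),
    "determination and implementation of actions": range(8, 12),
    "evaluation and reflection": range(12, 16),
}

def score_crs(responses):
    """responses: 16 Likert answers, each 1-5. Returns the total score
    (16-80) and the mean item score per domain."""
    if len(responses) != 16 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 16 responses on a 1-5 scale")
    total = sum(responses)
    domain_means = {name: sum(responses[i] for i in idx) / 4
                    for name, idx in DOMAINS.items()}
    return total, domain_means

total, means = score_crs([4] * 16)
print(total)  # 64
```
Reporting per-domain means alongside the total mirrors how the study compares domain-level strengths and weaknesses across program types.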

Table 3 - Clinical Reasoning Scale for Nursing Students (each item rated 5 4 3 2 1)

1. I can notice a patient's needs when I come into contact with the patient.
2. I can notice a patient's potential health concerns based on the clinical clues I have observed.
3. I can use various data collection methods (such as medical history, physical assessment) to collect clues pertinent to the problem.
4. My clinical practical experiences can help me detect a patient's health concerns.
5. I can collect all the data on an abnormality before I confirm a patient's health problems.
6. I can explain the connection between observed clues and a patient's health problems.
7. I can identify a patient's health problems by synthesizing the clues collected.
8. I can use theories and nursing knowledge to interpret clinical clues to determine a patient's health problems.
9. I can think through the problem-solving steps before resolving patient issues.
10. I can set a goal for problem solving based on a patient's condition.
11. I can find the most appropriate solution based on a patient's condition.
12. I can provide theory- and evidence-based nursing interventions.
13. I can evaluate whether a patient's problems are resolved.
14. I can evaluate the effectiveness of problem solving from a variety of aspects.
15. I can reevaluate a patient's needs if the problem is not resolved.
16. I can reflect on the steps of problem solving for improvement whether the problem is resolved or not.

Note. Score: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree.

A total of 1,504 nursing students completed the CRS. The item average score was 3.96. The item average scores for students from the 5-year ADN programs, 4-year BSN programs, and 2-year RN-to-BSN programs were 3.81, 3.96, and 4.11, respectively. The total CRS scores for the 2-year RN-to-BSN students were significantly higher than the scores for the 5-year ADN and 4-year BSN students (F = 10.07, p < .001; see Table 4). Scores of the four CRS domains revealed a consistent pattern across the three types of nursing programs. For all of the participants, the domain with the highest scores was the “awareness of clinical cues” domain (4.01 ± 0.48), followed by the “evaluation and reflection” (3.98 ± 0.52), “determination and implementation of actions” (3.93 ± 0.51), and “confirmation of clinical problems” (3.88 ± 0.49) domains. This means that the participants perceived being aware of clinical cues as their most proficient CR ability. Notably, “confirmation of clinical problems” was the weakest CR competency for participants in all program types (see Table 4).
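The between-group comparisons reported here rest on one-way ANOVA. For illustration, a standard-library sketch of the F statistic with small hypothetical groups (not the study's data) follows:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F = MS_between / MS_within for k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mini-example with three program groups:
rn_bsn = [66, 65, 67, 64]
adn = [60, 61, 62, 61]
bsn = [63, 62, 64, 63]
print(round(one_way_anova_f([rn_bsn, adn, bsn]), 2))  # 20.33
```
A significant F only indicates that at least one group mean differs; a post hoc test such as Scheffé's, as used in the study, identifies which pairs differ.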

Table 4 - Clinical Reasoning Scale Scores in Nursing Students of Three Types of Nursing Programs

Values are mean ± SD by program type (① 2-year RN–BSN, ② 5-year ADN, ③ 4-year BSN), followed by the overall mean ± SD, the F statistic, and Scheffé's post hoc result.

Total scores: ① 65.46 ± 6.71; ② 61.09 ± 6.86; ③ 63.08 ± 6.67; overall 63.15 ± 6.99; F = 10.07*; Scheffé: ① > ②, ③
Average scores: ① 4.11 ± 0.41; ② 3.81 ± 0.49; ③ 3.96 ± 0.51; overall 3.96 ± 0.09; F = 65.24*; Scheffé: ① > ②, ③
Awareness of clinical cues: ① 4.15 ± 0.46; ② 3.87 ± 0.46; ③ 4.02 ± 0.46; overall 4.00 ± 0.48; F = 47.99*; Scheffé: ① > ②, ③
Confirmation of clinical problems: ① 4.02 ± 0.47; ② 3.75 ± 0.46; ③ 3.88 ± 0.48; overall 3.88 ± 0.48; F = 43.80*; Scheffé: ① > ②, ③
Determination and implementation of actions: ① 4.06 ± 0.49; ② 3.81 ± 0.49; ③ 3.92 ± 0.51; overall 3.93 ± 0.51; F = 32.53*; Scheffé: ① > ②, ③
Evaluation and reflection: ① 4.13 ± 0.52; ② 3.86 ± 0.51; ③ 3.96 ± 0.51; overall 3.98 ± 0.52; F = 35.68*; Scheffé: ① > ②, ③

Note. BSN = bachelor's degree in nursing; ADN = associate degree in nursing.

*p < .001.


Discussion

In this study, the four-domain, 12-indicator, 16-item CRS was developed to assess nursing students' CR competency. CR instruments in the literature differ widely in purpose and scope. Because of the unique scope of the nursing profession, only nursing CR instruments are considered in this discussion. Among the nursing CR assessment tools currently available, the LCJR is a frequently used instrument in nursing education, particularly for student simulation experiences. There are similarities and differences between the LCJR and the CRS. The four domains of the CRS are similar to the components of the LCJR (noticing, interpreting, responding, and reflecting; Lasater, 2007). However, the indicators used in the two instruments differ. The 11 indicators of the LCJR were derived from observations of students' simulation experiences (Lasater, 2007), whereas the 12 indicators of the CRS were developed inductively from a qualitative study involving clinically and academically experienced nursing educators and were validated for goodness of fit with the theoretical model using CFA. One challenge of employing the LCJR is the need for extensive rater training (K. Adamson, 2016; Victor-Chmil & Larew, 2013). The LCJR consists of four domains, 11 indicators, and four developmental phases (exemplary, accomplished, developing, and beginning), which yield 44 cells describing each indicator at each developmental phase. Most cell descriptions include more than one behavioral manifestation of the indicator, which may make it difficult to select the cell that best represents a student's CR ability. Compared with the LCJR, the CRS is clear and succinct and has been validated by CFA. Faculty and students may complete the instrument in 5–10 minutes, with results immediately applicable to identifying areas for improvement in CR in all nursing courses, especially practicums.

For all nursing program types, the lowest mean domain score was for “confirmation of clinical problems.” In the CR process, confirming clinical problems allows nursing students to obtain pertinent clinical cues and effectively implement focused interventions to achieve optimal patient outcomes. Deficits in this ability may present challenges to care provision and contribute to unsafe practice. Similar findings have been reported in the literature. In an integrative literature review conducted by Killam et al. (2011), knowledge- and skill-related incompetence (particularly deficits in cognitive abilities), critical thinking, problem identification, and clinical problem solving were the principal characteristics identifying unsafe undergraduate nursing students in clinical practice. In Hunter and Arthur's (2016) qualitative exploratory study, graduate nursing students were also found to lack the CR skills necessary for safe practice. Educators must take these inadequacies into consideration when designing and implementing nursing curricula. In addition, the CRS may be used to refine pedagogical approaches so they match students' learning needs. For example, students with weak “awareness of clinical cues” may be supported with approaches that help them identify and recognize important clinical cues, make appropriate clinical judgments, take proper actions, and engage in self-reflection.

Limitations

The psychometric properties of the developed CRS instrument were tested on nursing students from three different types of nursing programs in Taiwan. The results may be influenced by cultural differences. Future studies should be conducted that consider the impact of culture on nursing students' CR processes. In addition, pedagogical approaches and modalities that may improve performance in the “confirmation of clinical problems” domain should be investigated to strengthen nursing students' CR ability. Studies that examine the application of the CRS on practicing nurses should also be conducted.

Conclusions

The 16-item CRS is a valid and reliable tool for assessing CR in nursing students. Nursing educators may use the CRS to identify strengths and weaknesses in the CR of their students and facilitate student growth with regard to CR competency. Future studies are recommended to investigate educational approaches that cultivate improved clinical reasoning in nursing students.

Author Contributions

Study conception and design: HMH, SFC

Data collection: HMH

Data analysis and interpretation: KCL

Drafting of the article: CYH

Critical revision of the article: SFC, CHY

References

Adamson, K. (2016). Rater bias in simulation performance assessment: Examining the effect of participant race/ethnicity. Nursing Education Perspectives, 37(2), 78–82. https://doi.org/10.5480/15-1626

Adamson, K. A., Gubrud, P., Sideras, S., & Lasater, K. (2012). Assessing the reliability, validity, and use of the Lasater clinical judgment rubric: Three approaches. Journal of Nursing Education, 51(2), 66–73. https://doi.org/10.3928/01484834-20111130-03

Alfaro-LeFevre, R. (2019). Critical thinking, clinical reasoning, and clinical judgment: A practical approach (7th ed.). Elsevier.

Al-Moteri, M., Plummer, V., Cooper, S., & Symmons, M. (2019). Clinical deterioration of ward patients in the presence of antecedents: A systematic review and narrative synthesis. Australian Critical Care, 32, 411–420. https://doi.org/10.1016/j.aucc.2018.06.004

American Association of Colleges of Nursing. (2021). The essentials: Core competencies for professional nursing education. https://www.aacnnursing.org/Portals/42/AcademicNursing/pdf/Essentials-2021.pdf

Aubart, F. C., Papo, T., Hertig, A., Renaud, M. C., Steichen, O., Amoura, Z., Braun, M., Palombi, O., Duguet, A., & Roux, D. (2021). Are script concordance tests suitable for the assessment of undergraduate students? A multicenter comparative study. Revue de Medecine Interne, 42(4), 243–250. https://doi.org/10.1016/j.revmed.2020.11.001

Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.

Burns, N., Grove, S., & Sutherland, S. (2020).
