The Emerging Role of Reinforcement in the Clinician's Path from Continuing Education to Practice

Clinicians pursue a variety of learning activities to enhance their ability to provide the best possible patient care. An area of ongoing concern for continuing education (CE) researchers and practitioners is how clinicians translate information learned in CE into practice. Commitment to change is one part of a complex process of learning and change, and studies examining commitment or intention to change have shed light on this element of the change process. Several studies suggest that when clinicians make commitments to change clinical behavior, they are more likely to incorporate what they have learned into practice, with appropriate modifications for local conditions.1–3 Additional studies suggest that self-efficacy, a personal belief in one's capability to organize and execute the courses of action required to attain designated types of performances,4 may be a moderator (a variable that influences the relationship between another variable and commitment to change) or a mediator (the variable through which another variable affects commitment to change) of commitment to change.5–8 These studies have shown that increases in knowledge and competence contributed to increases in self-efficacy.5–8 However, clinicians do not always learn something “new” when they participate in CE activities; rather, their existing knowledge or skills are reinforced. This study examined whether the status of knowledge/competence post-CE (reinforced vs improved) moderates the association between post-CE knowledge/competence score and self-efficacy: is self-efficacy greater when clinicians reinforce what they already knew than when they learn something new?

METHODS

This study was exempt from institutional review board approval because it used deidentified secondary data and is not considered human subjects research.9

Sample

A total of 153 online continuing medical education–certified activities addressing knowledge and competence learning objectives were included. Activities offered 0.25 to 1.0 credits and were text- or video-based. A total of 48,243 learners provided complete data; 86% (n = 41,774) provided usable data to answer the research question, and 22% of these learners (n = 9190) participated in >1 activity.

Measures

Knowledge/competence was assessed using pre-post knowledge- or competence-based multiple-choice questions. The questions underwent a substantial development process, including four layers of review by a multidisciplinary team. Three questions were asked before and after the point of learning (herein “pre-CE” and “post-CE”). Post-CE, learners received the correct response with a rationale. The post-CE score was calculated as the number of questions answered correctly (range 0–3).

Change in knowledge/competence was calculated for each learner for each activity. For purposes of analysis, we designated learners as “improved” if they answered ≥1 more question correctly post-CE than pre-CE and “reinforced” if they answered the same question(s) correctly pre- and post-CE and were not in the “improved” category. Learners who scored 0, had a decreasing post-CE score, or incorrectly answered a question post-CE that they had answered correctly pre-CE were excluded.
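As an illustration only, the classification rule described above can be expressed as a small function. The data structures (sets of question identifiers answered correctly pre- and post-CE) and all names are hypothetical and are not taken from the study's analysis code.

```python
from typing import Optional

def classify_change(pre_correct: set, post_correct: set) -> Optional[str]:
    """Classify a learner for one activity from the sets of question IDs
    answered correctly pre-CE and post-CE (hypothetical illustration)."""
    if len(post_correct) == 0:
        return None  # scored 0 post-CE: excluded
    if len(post_correct) < len(pre_correct):
        return None  # decreasing post-CE score: excluded
    if pre_correct - post_correct:
        return None  # missed a question post-CE that was correct pre-CE: excluded
    if post_correct - pre_correct:
        return "improved"    # >=1 additional question correct post-CE
    return "reinforced"      # same question(s) correct pre- and post-CE

# Example: correct on question 1 pre-CE, correct on questions 1 and 2 post-CE.
print(classify_change({"q1"}, {"q1", "q2"}))  # "improved"
```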

Post-CE self-efficacy (herein “self-efficacy”) was assessed with a question related to the activity learning objectives (eg, How confident are you right now in your ability to diagnose familial hypercholesterolemia?). Responses were rated on a 5-point Likert scale ranging from 1 = “not confident” to 5 = “very confident.” “Confidence” was used because it reads more clearly to the learner than “self-efficacious.”

Statistical Analysis

Hierarchical linear modeling (HLM) using SAS 9.410 was conducted to evaluate the research question. Individual data were nested within activities. Self-efficacy was a continuous dependent variable; post-CE score (continuous) and knowledge/competence change (dummy-coded; improved = 1, reinforced = 0) were independent variables. A sensitivity analysis determined whether participating in one versus >1 activities yielded different results. Post hoc analyses calculated the predicted means of self-efficacy by knowledge/competence change status and post-CE score. The effect size between self-efficacy ratings for the various combinations of knowledge/competence change and post-CE score was calculated using Cohen's d from the sample means and standard deviations for each group.
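The analysis itself was run in SAS 9.4. As a minimal sketch of the model structure only, an analogous random-intercept model can be specified in Python with statsmodels; the column names (self_efficacy, post_score, improved, activity_id) and file name below are hypothetical, not the study's.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("learner_level_data.csv")  # hypothetical learner-level data

# Self-efficacy regressed on post-CE score, change status (improved = 1,
# reinforced = 0), and their interaction, with a random intercept per
# activity to account for learners nested within activities.
model = smf.mixedlm("self_efficacy ~ post_score * improved",
                    data=df, groups=df["activity_id"])
result = model.fit()
print(result.summary())
```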

RESULTS

Descriptive Results

Fifty-two percent of learners experienced improvement in knowledge/competence, and 48% experienced reinforcement; the mean self-efficacy rating was 3.19 (SD = 1.10).

HLM Results

The intraclass correlation of the intercept-only model was 0.1052, indicating that 10.5% of the variance in self-efficacy was attributable to nesting within activities. The results of the full model show that the intercept (β = 2.4660, P < .0001), post-CE knowledge/competence score (β = 0.3596, P < .0001), improved knowledge/competence (β = 0.07175, P < .05), and the interaction term (β = −0.08226, P < .0001) were significant. Learners with higher post-CE scores reported higher self-efficacy ratings, and the strength of this relationship differed depending on whether knowledge/competence was reinforced or improved.
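To illustrate the interaction, the reported fixed-effect estimates can be combined into predicted self-efficacy values (ignoring the activity-level random intercept). This is a worked example using the coefficients above, not additional study output.

```python
# Worked example using the fixed-effect estimates reported above:
# predicted self-efficacy by change status and post-CE score.
def predicted_self_efficacy(post_score: int, improved: int) -> float:
    return (2.4660
            + 0.3596 * post_score
            + 0.07175 * improved
            - 0.08226 * post_score * improved)

for score in (1, 2, 3):
    print(score,
          round(predicted_self_efficacy(score, improved=0), 2),  # reinforced
          round(predicted_self_efficacy(score, improved=1), 2))  # improved
# The reinforced-improved gap widens as the post-CE score rises,
# reflecting the negative interaction term.
```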

Post hoc Analysis of Effect Size for Self-Efficacy

Cohen's d provided an effect size between the mean self-efficacy ratings at each post-CE score but with different knowledge/competence change statuses (Figure 1). The effect size increases as post-CE score increases. Thus, when post-CE score is higher, reinforcement has a larger association with self-efficacy.
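For reference, a minimal sketch of a Cohen's d calculation using the pooled standard deviation of two independent groups follows; the group statistics in the example are placeholders, not values from the study.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Pooled standard deviation of two independent groups.
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Placeholder example: reinforced vs improved learners at the same post-CE score.
print(round(cohens_d(3.6, 1.0, 500, 3.3, 1.1, 500), 2))  # 0.29
```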

FIGURE 1. Predicted and Sample Mean Self-Efficacy Ratings and Cohen's d. Post-CE score ranges from 1 to 3 for reinforced or improved knowledge/competence; post-CE self-efficacy ranges from 1 to 5. Cohen's d between the same post-CE scores with different knowledge/competence change statuses is depicted in boxes above the bars.

Sensitivity Analysis

HLM models were applied separately to single- and multiactivity participants. The direction of the relationships and their statistical significance were equivalent to those found in the full-sample model.
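Under the same assumptions as the earlier model sketch (and with a hypothetical n_activities column indicating how many activities each learner completed), the sensitivity analysis amounts to refitting the model on each subset:

```python
# Refit the same random-intercept model separately for single- and
# multi-activity participants; assumes df, smf, and the column names
# from the earlier sketch, plus a hypothetical n_activities column.
for label, subset in (("single-activity", df[df["n_activities"] == 1]),
                      ("multi-activity", df[df["n_activities"] > 1])):
    fit = smf.mixedlm("self_efficacy ~ post_score * improved",
                      data=subset, groups=subset["activity_id"]).fit()
    print(label)
    print(fit.summary())
```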

DISCUSSION

Clinicians who received confirmatory information about their knowledge of patient management reported greater self-efficacy than clinicians who learned something new. The self-efficacy rating was highest for those who experienced reinforcement and had the highest knowledge/competence scores. This suggests that clinicians who confirmed what they knew or their current practice approach have a greater sense of agency related to topics covered in the CE activities than those who learned something new.

This study contributes to an expanded understanding of the path from CE to practice. There was benefit to self-efficacy for learners who reinforced but did not improve their knowledge/competence.

This study also suggests that the pre-post questions administered can be considered part of the learning process. The literature from other stages of the medical education continuum suggests that assessment contributes both to the measurement of learning and to learning itself.11 This study suggests similar findings for CE and that improvement and reinforcement moderate the relationship between knowledge/competence and self-efficacy.

These results have implications for CE design. If reinforcement of learning is associated with self-efficacy, self-efficacy strengthens commitment to change, and commitment to change increases the likelihood that clinicians will change practice behavior, then increasing the opportunities for positive reinforcement as part of the learning activity has the potential to help learners achieve desired clinical results. Other ways to increase opportunities for learning include incorporating additional sessions that promote deliberate practice of clinical case management and provide expert feedback.12 Kerfoot et al13 suggest that longer learning activities, possibly spaced over several days or weeks, strengthen learning. Longer CE activities that provide opportunities for multiple exposures to content have been associated with improved clinician performance and, in some cases, improved patient health.14

Study findings support a broader perspective on and understanding of pre-post CE knowledge/competence scores. Assessment can be conceptualized both as an outcome measurement and as a means of facilitating learning. Although the mean knowledge/competence score is typically expected to increase in “effective” education, clinicians seem to benefit when they have similarly high pre- and post-CE scores. The results suggest that something positive is occurring for these learners even without an improvement in their scores. This may have implications for educational design because learners experiencing knowledge/competence reinforcement may benefit from different educational approaches than those experiencing improvement. Another way to assess the process of behavior change is by analyzing knowledge/competence scores in conjunction with self-efficacy measures. Ultimately, this study suggests there is potential benefit in helping learners confirm existing knowledge/competence, replicating the results of previous research.8 Although assessment for learning is discussed in undergraduate and graduate education, there has been little discussion in the CE literature.11 The results support the potential value of pre-post questions as a reinforcement experience.

Limitations of the Study

This analysis focused on CE from one online platform; clinicians may consult many different types of CE and other sources of information to learn.15 A second limitation is that we did not analyze knowledge-focused activities separately from those focused solely on competence; to address this, we included activities that focused on both. Knowledge or competence improvement or reinforcement may have differential effects on self-efficacy.

Despite these limitations, this study examined a large sample of learners who used a free online platform for CE. We analyzed data from the target specialties/professions for each individual activity, which suggests that the results are representative of others in those specialties/professions.

CONCLUSION

This study explored whether self-efficacy was higher when clinicians reinforced what they already knew compared with when they learned something new. The results contribute to the evolving understanding of how clinicians translate what they learn in CE activities into practice by highlighting the importance of reinforcement in learning. Future studies should tease apart the development of new knowledge and competence from the reinforcement of existing knowledge and competence in conjunction with self-efficacy. How do change in what one knows and reinforcement each affect change in self-efficacy? In addition, reinforcement can happen in multiple contexts at different times; thus, it is important to account for other situations where learning may occur. Further exploration of how CE participation on the same topic over time affects self-efficacy is warranted. Furthermore, reinforcement and improvement are not a dichotomy; one may experience reinforcement and also learn something new, which should be investigated. This study has shed light on one part of the process of behavior change, from knowledge and competence to self-efficacy, but future studies are needed before we fully understand the mechanisms of change, including how a reinforcement experience in CE leads to behavior change.

Lessons for Practice

■ When participating in CE activities, clinicians can learn something new and/or reinforce what they already knew or could do, and this can be measured quantitatively.

■ A clinician's self-efficacy is greater when learning is reinforced.

■ CE planners should consider providing opportunities for clinicians to reinforce what they are learning during a CE activity.

ACKNOWLEDGMENTS

Paul Mazmanian was a key member of our research group; the authors acknowledge his significant contributions during the conceptualization of this research project. The authors thank Cong Zhang for conducting the main model's analysis under the direction of Katie Lucero and for her attention to detail in this effort, and Nathaniel Williams for keeping us connected.

REFERENCES

1. Wakefield J, Herbert CP, Maclure M, et al. Commitment to change statements can predict actual change in practice. J Contin Educ Health Prof. 2003;23:81–93.
2. Armson H, Kinzie S, Hawes D, et al. Translating learning into practice: lessons from the practice-based small group learning program. Can Fam Physician. 2007;53:1477–1485.
3. Mazmanian PE, Johnson RE, Zhang A, et al. Effects of a signature on rates of change: a randomized controlled trial involving continuing education and the commitment-to-change model. Acad Med. 2001;76:642–646.
4. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215.
5. Evans JA, Mazmanian PE, Dow AW, et al. Commitment to change and assessment of confidence: tools to inform the design and evaluation of interprofessional education. J Contin Educ Health Prof. 2014;34:155–163.
6. Williams BW, Kessler HA, Williams MV. Relationship among practice change, motivation, and self-efficacy. J Contin Educ Health Prof. 2014;34(suppl 1):S5–S10.
7. Williams BW, Kessler HA, Williams MV. Relationship among knowledge acquisition, motivation to change, and self-efficacy in CME participants. J Contin Educ Health Prof. 2015;35(suppl 1):S13–S21.
8. Lucero KS, Chen P. What do reinforcement and confidence have to do with it? A systematic pathway analysis of knowledge, competence, confidence, and intention to change. J Eur CME. 2020;9:1834759.
9. US Department of Health and Human Services. 45 CFR 46.102(e)(1)(ii) and 45 CFR 46.102(e)(4)-(6). Washington, DC: US Government Printing Office; 2018.
10. SAS Institute. SAS 9.4 Statements: Reference. Cary, NC: SAS Institute, Inc.; 2013.
11. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33:478–485.
12. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79:S70–S81.
13. Kerfoot BP, Fu Y, Baker H, et al. Online spaced education generates transfer and improves long-term retention of diagnostic skills: a randomized controlled trial. J Am Coll Surg. 2010;211:331–337.e1.
14. Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. J Contin Educ Health Prof. 2015;35:131–138.
15. Stabel LS, McGrath C, Björck E, et al. Navigating affordances for learning in clinical workplaces: a qualitative study of general practitioners' continued professional development. Vocat Learn. 2022:1–22. doi:10.1007/s12186-022-09295-7.
