JBI's approach to evidence implementation: a 7-phase process model to support and guide getting evidence into practice

What is known about the topic?

Evidence implementation is an evolving field designed to address the challenges of getting evidence into practice. Several highly theoretical approaches, models, and frameworks have been described within the field. Evidence-based clinical audit and feedback are valuable and effective approaches to implementing evidence into practice.

What does this paper add?

This paper formally describes JBI's approach to evidence implementation and provides practical, step-by-step guidance to assist and support health professionals with implementation. The JBI approach to evidence implementation highlights the importance of evaluation using evidence-based criteria, the role of context and contextualization, and facilitation and clinical change agents as key components of the implementation process. Central to JBI's approach to implementation is the GRiP (Getting Research into Practice) process, a mechanism of action for practice change that assists in identifying barriers and enablers and in developing targeted strategies.

Introduction

Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice and, hence, to improve the quality and effectiveness of health services and care” (p. 1).1 The goal of implementation science is to address ongoing challenges that impact getting evidence into practice.2

The implementation of evidence into practice is complex and can take many years,3 with no guarantee of success. Despite advances in this field and an increased focus on strategies and methods to get evidence into practice, it is estimated that only 60% of care is delivered in line with evidence.3 Closing the gap between evidence and the uptake of research into clinical practice has been a point of ongoing debate and study over the years.3,4 Barriers to the uptake of evidence have been well documented. In a systematic review, barriers and facilitators to the implementation of hospital-based interventions were identified and grouped into three domains: system, staff, and intervention.5 System barriers and facilitators included environmental context, culture, communication processes, and external requirements (such as reporting, standards, and guidelines); staff barriers and facilitators included staff attitude, skills, ability and confidence, role identification, and understanding and awareness; intervention barriers and facilitators included ease of integration, face validity/evidence base, safety/legal/ethical concerns, and supportive components such as education and training, audit/feedback, and involvement of end users.5 Implementation science seeks to understand these barriers and facilitators and to empower health professionals to utilize evidence-based approaches, with the end goal of improving the quality of healthcare services.6

Theoretical approaches, models, and frameworks adopted for, or developed within, the field of implementation science assist in understanding and explaining why and how implementation succeeds or fails.7–16 Table 1 provides an overview of selected theories, models, and frameworks that may be considered relevant when implementing evidence. The list is by no means exhaustive, but it highlights the increasing options and complexity of choice involved in mobilizing theories, models, and frameworks for getting evidence into practice. A review examining the differences and similarities of models and frameworks developed to close the gap between research knowledge and its application to practice identified 41 different models and frameworks; the four most published and cited were the Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) framework, the Knowledge to Action (KTA) framework, knowledge translation continuum models (or “T” models), and the PARiHS framework.17 All identified models and frameworks described the gap that exists between research knowledge and its application to policy and practice, and all acknowledged the importance of, and difficulty in, closing this gap.17

Table 1 - Description of implementation theories, models and frameworks

Diffusion of innovation model11
• Knowledge phase: involves learning about the innovation to be implemented (such as a guideline, or best practice recommendation)
• Persuasion phase: relies on opinion leaders with good knowledge, who are credible, approachable, can effectively influence practice and encourage others to take up new evidence in practice by personal example – facilitating individuals to form positive (or sometimes negative) attitudes to the innovation
• Decision phase: the point in time where the acceptability of the changes is determined by stakeholders as either worthwhile or not worth pursuing
• Adoption or rejection phase: reflects the outcome of the decision phase and is the ultimate decider as to whether evidence is implemented in practice

Knowledge to action (KTA) framework63
• Consists of two interconnected cycles (knowledge creation and action)
• At the center of the model is knowledge creation, which includes the three phases of knowledge inquiry (primary research), synthesis (systematic reviews), and products/tools (guidelines, algorithms, etc.)
• Surrounding knowledge creation is the action cycle, which consists of seven phases. These phases may occur sequentially or simultaneously (identify problem; adapt knowledge to local context; assess barriers to knowledge use; select, tailor and implement interventions; monitor knowledge use; evaluate outcomes; sustain knowledge use)

PARiHS model44,64
Research implementation expressed as a function of the relationships among evidence, context, and facilitation:
• Evidence (research, clinical experience, patient experience)
• Context (culture, leadership, and evaluation)
• Facilitation (purpose, role, skills, and attitudes)

PDSA model65
The model is cyclic, comprising four stages:
• Plan – the change to be tested or implemented
• Do – carry out the test or change
• Study – based on the measurable outcomes agreed before starting, collect data before and after the change, and reflect on the impact of the change and what was learned
• Act – plan the next change cycle or full implementation

Pipeline model66
• Evidence enters the pipeline and flows through a variety of stages, from awareness of the evidence to adherence by patients/clients
• Between these are stages of acceptance of the evidence, applicability of the evidence, and the ability to implement into the area of practice
• Finally, there are stages of acting on the evidence, reaching agreement between practitioners and patients, and sustained adherence. It is only at this stage that patient outcomes will be affected

RE-AIM67
• Reach (proportion of the target population that participated in the intervention)
• Efficacy or effectiveness (success rate if implemented as in guidelines; defined as positive outcomes minus negative outcomes)
• Adoption (proportion of settings, practices, and plans that will adopt intervention)
• Implementation (extent to which intervention is implemented as intended in the real world)
• Maintenance of intervention effects in individuals and settings over time

Theoretical domains framework (TDF)15
• A theoretical framework that targets behavior change in health professionals and comprises 14 domains that encompass factors likely to influence healthcare professional behavior change: knowledge; skills; social/professional role and identity; beliefs about capabilities; optimism; beliefs about consequences; reinforcement; intentions; goals; memory, attention, and decision processes; environmental context and resources; social influences; emotion; and behavioral regulation

The triple C model16
• Stage 1: Consultation
• Stage 2: Collaboration
• Stage 3: Consolidation

Translation research continuum or “T” models17
• Description and discovery
• From discovery to health application
• From health application to evidence guidelines
• From guidelines to health practice
• Evaluation of effectiveness and cost-effectiveness of such interventions in the real world and in diverse populations

Consolidated Framework for Implementation Research (CFIR)43,68
• Informed by a review of constructs from theories and other frameworks to form a consolidated framework
• Consists of five main domains, which have a number of constructs: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation

With an abundance of published information available, it is not surprising that selecting an appropriate approach can be difficult and overwhelming.18 As Curran highlights, there is a need to simplify and to ensure that clinicians understand what implementation science and practice actually are.19 JBI, an international evidence-based healthcare organization with an extensive history of assisting healthcare professionals and organizations with the uptake of evidence into practice, offers a pragmatic and practical approach to implementing evidence into practice.20 JBI's approach to evidence implementation draws on pragmatism and change theory while focusing on clinicians21 as the change agents for the activation of knowledge in clinical and policy settings, and assists with the ‘planning’ and ‘doing’ of getting evidence into practice. The purpose of this article is to formally describe JBI's approach to evidence implementation and to present the seven-phase process model that guides it.

JBI and evidence implementation

In the JBI Model of evidence-based healthcare (EBHC) (Fig. 1), evidence implementation is defined as a “purposeful and enabling set of activities to engage key stakeholders with research evidence to inform decision making and generate sustained improvement in the quality of healthcare” (p. 67).22

Figure 1: The JBI model for EBHC. EBHC, evidence-based healthcare.

The JBI model for EBHC highlights context analysis, facilitation of change, and evaluation of process and outcome as core components for operationalizing the implementation of evidence (represented by the outer wedge segments of the circle).22 Context analysis, and understanding the issues that impact and influence practice within a local setting, is important to enable the movement of knowledge into practice.23 Facilitation of change is a key component for supporting and promoting implementation projects and requires a skilled approach to enable and engage others in practice change.24 Lastly, evaluation is based upon the collection of local data where implementation occurred and is a direct measure of the local impact of a change process; without evaluation, there is no evidence of improvement.25

For the past two decades, JBI has been delivering and facilitating evidence implementation training programs and successfully improving the delivery of healthcare. The JBI Evidence Implementation Program (formerly known as the Clinical Fellowship Program) is a six-month, evidence-based, facilitated implementation program comprising two 5-day face-to-face intensive training workshops and an evidence implementation project undertaken in the workplace.26,27 The program is grounded in the clinical audit and feedback process, where context analysis, facilitation, and evaluation are key components. This approach has been found to be successful, with more than 600 healthcare professionals from over 34 countries having successfully implemented and changed practice following this process.28–30 Driving the development of JBI's approach to evidence implementation is the JBI Implementation Methodology Group, an international, independent body of experts working in a volunteer capacity. In 2018, the JBI Implementation Methodology Group embarked on a review of the methods and methodology underpinning JBI's approach to evidence implementation. The group concluded that while the JBI EBHC Model provides a high-level conceptualization of evidence implementation, more detailed methodological guidance reflecting JBI's pragmatic and practical approach was required. A working group was established to lead the development, and a series of teleconferences and face-to-face meetings were held between 2018 and 2019, supplemented with e-mail communication. A cyclic process of feedback and review was used at all stages of development until consensus was reached, resulting in the JBI Evidence Implementation Manual31 and a process model (described herein) that complements the existing JBI EBHC Model. Both were submitted to the JBI Scientific Committee32 on June 6, 2019, and formally reviewed and ratified at a meeting of the Scientific Committee on September 4, 2019.

JBI's approach to evidence implementation

JBI's approach to evidence implementation is grounded in the principles of evidence-based healthcare and focuses on aligning current practices with best practice principles. The approach, outlined in a seven-phase process model, guides the implementation of evidence into practice, with context analysis, facilitation, and evaluation as key components of the entire process (Fig. 2). This approach complements and expands the JBI EBHC Model (with the color of the framework aligned with the implementation wedge of the model), providing additional practical and pragmatic guidance on the process of implementing evidence into practice.

Figure 2: JBI's approach to evidence implementation.

JBI's approach to evidence implementation is informed by Donabedian's approach to quality improvement via evidence-based clinical audit and feedback, with the central tenet that collaborative partnerships and an understanding of the context in which the project is being conducted underpin each step of the implementation cycle. As with the JBI approach, the Donabedian approach to quality improvement involves continuous cycles of practice improvement. Donabedian defines the operational parameters of quality improvement suited to clinician-led change in relation to the structures and processes internal to an organization for the delivery of quality care.33 The key stakeholders in the evaluation of structures and processes of care are the healthcare workers, professionals, managers, and administrators who support care delivery with appropriate structures, including the supply and resourcing of facilities, equipment, administrative structures, and operational programs. Processes, including appropriateness, acceptability, and competency in relation to care delivery, are also formal domains of organizational and professional responsibility that lend themselves to evaluation using audit and feedback, with health professionals as the primary stakeholders.

Phase 1: identify the practice area

Healthcare delivery is a collaborative activity, with people working together (in teams, units, departments, and divisions) to combine different strengths, skills, and expertise to best serve the needs of people.16 As such, identifying areas for improvement should be a collaborative, multidisciplinary process.16,34 Implementation is far more likely to be successful when the questions being answered are relevant to key stakeholder groups (policy makers, managers, clinicians, patients/consumers).4,35 Identifying areas of practice may be supported by data such as hospital reports, adverse events, clinical pathway variance reports, and morbidity and mortality data. Additionally, the area of focus may arise from an awareness that evidence-based practice is not being followed, or that better results are being achieved elsewhere, such as on another ward or in another organization.35 Some approaches for identifying practice areas for evidence-based implementation projects include aspects of care that are:

• High cost
• High frequency (regardless of cost)
• High risk (known poor process or outcome)
• Topic of local concern
• Known variability in practice
• Flagged through critical incident review
• Practice area addressed by recent evidence-based guidelines

It is important that the practice area identified is relevant and of interest to the project team (who are conducting the evidence implementation), the organization (in which the project is being conducted), and the clinical team (whom the practice improvement initiative/evidence implementation project will impact). If there is a lack of support from the outset, achieving sustainable change will be difficult. Therefore, identifying and engaging with key stakeholders to develop an agreed focus on the practice area is an essential first step in implementing evidence into practice. Key stakeholders include patients and consumers (among others), who may assist in identifying priority areas for improvement and implementation of evidence. Where feasible and practical, ensuring patient input prior to, during, and following any evidence implementation activity is recommended.

Phase 2: engage change agents

A change agent can be defined as an “individual or group that undertakes the task of initiating and managing” (p. 1).36 Change agents may be leaders, facilitators, or clinical champions; their involvement is pivotal to successful change.37,38 Different kinds of change agents exist and knowing these can assist in identifying whom to inform, whom to engage, and whom to invite as active members of the implementation project group. For example, roles can be classified as:

• System leadership: This person is an influencer who leads across departments, organizations, or sectors. System leaders can overcome barriers and facilitate progress towards change.
• Technical expertise: This person has exceptional knowledge of the subject matter, together with a degree of enthusiasm and expertise. If the subject is particularly broad, such as vital signs, technical experts with different backgrounds are warranted.
• Day-to-day leadership: This responsibility should rest with someone who has the time and the professional interest or enthusiasm to devote to the project. This person should be able to recruit others and encourage relevant activities.

In evidence implementation projects, there is also a need for a person to play the leading role in facilitating change at the point of care. Kitson et al. defined facilitation as “a technique by which one person makes things easier for others” (p. 152).9 During the change process, these facilitators need to “guide, motivate, set norms and standards; maintain open communication, invite and listen openly and actively to the opinions, attitudes and ideas of others; continuously re-evaluate facts, beliefs and positions; encourage two-way feedback; integrate the efforts of others; promote and sustain efficient performances; and delegate power and authority” (p. 424).39

Facilitators should ensure they are communicating the purpose of change clearly to all stakeholders.40 This should be in the form of open consultation and dialogue.41 As Deegan et al. state, “people need to be involved in aspects of change that affect them, because they will only accept changes that fit into their cultures. This implies that, because stakeholders ‘own’ the changes, the processes and the outcomes, they are more likely to accept and sustain the changes” (p. 26).41 Findings from evaluation studies on facilitation suggest that facilitation that provides face-to-face communication and uses a range of enabling techniques can lead to positive changes in clinical and organizational practice.42

Once the topic has been identified, it is crucial to set up an implementation project group and appoint a project lead whose role is to coordinate the group, clarify the role and responsibilities of those involved in the project, provide access to EBHC resources, and oversee all aspects of the implementation project. It is important to engage people who can facilitate change and create positive changes in clinical and organizational practices.24

Phase 3: assess context and readiness to change

Undertaking a contextual analysis and assessment of readiness to change is a key component of the initial phase of any implementation project. The characteristics of a context involve the environment or setting in which the proposed change is to be implemented, and the unique circumstances that surround a particular implementation project.9,43 An organization's readiness to change encompasses whether there is a commitment to change, in addition to the ability to change.44,45

Context analysis is a diagnostic process, the purpose of which is to understand issues within the local context that are important to practice change and to identify factors likely to influence the proposed change. There are a variety of ways to undertake a context analysis; for example, a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis46 can be a quick and effective framework at this stage. An analysis of the context is vital when attempting to stimulate a process of change in policy or practice, and it recognizes that interventions to promote change need to be tailored. Eight factors should be considered in an assessment of the context; these are presented below, along with guiding questions to assist the team in this process. A SWOT analysis can be applied to each of these factors.

1. Structure: To what extent does decision-making occur in a decentralized manner, and are there enough staff to support the change process?
2. Workplace culture: To what extent is the proposed change consistent with the values and beliefs of the practice environment, and to what degree does the culture support change and value evidence?
3. Communication: Are there adequate communication systems to support information exchange related to the change and implementation processes?
4. Leadership: To what extent do leaders within the practice environment support (visibly and behind the scenes) implementation?
5. Resource availability: Are necessary human, physical, and financial resources available to support implementation?
6. Knowledge, skills, and attitudes: Do staff members have the necessary knowledge and skills? Which potential groups are open to change and new ideas, and to what extent are they motivated to implement the change?
7. Commitment to quality management: Do quality processes and systems exist to measure the implementation results?
8. Interdisciplinary relationships: Are there positive relationships and trust between disciplines that will be involved or affected by the change?
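
For teams that want to document the context analysis in a structured, reusable form, the eight factors and their guiding questions can be recorded alongside SWOT notes. The following is a minimal illustrative sketch only; the ContextFactor class and the example entries are hypothetical and are not part of the JBI approach itself.

```python
from dataclasses import dataclass, field

@dataclass
class ContextFactor:
    """One of the eight contextual factors, assessed with SWOT-style notes."""
    name: str
    guiding_question: str
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

# The eight factors, with abridged guiding questions from the list above.
CONTEXT_FACTORS = [
    ContextFactor("Structure", "Is decision-making decentralized, and is staffing sufficient to support the change?"),
    ContextFactor("Workplace culture", "Is the change consistent with local values, and does the culture support change and value evidence?"),
    ContextFactor("Communication", "Are communication systems adequate to support the change and implementation processes?"),
    ContextFactor("Leadership", "Do leaders support implementation, visibly and behind the scenes?"),
    ContextFactor("Resource availability", "Are the necessary human, physical, and financial resources available?"),
    ContextFactor("Knowledge, skills, and attitudes", "Do staff have the required knowledge and skills, and which groups are open and motivated to change?"),
    ContextFactor("Commitment to quality management", "Do quality processes and systems exist to measure implementation results?"),
    ContextFactor("Interdisciplinary relationships", "Are there positive, trusting relationships between the disciplines affected by the change?"),
]

# Example (hypothetical) SWOT notes captured for the Leadership factor.
CONTEXT_FACTORS[3].strengths.append("Ward manager has championed previous audit cycles")
CONTEXT_FACTORS[3].threats.append("Senior medical staff turnover expected mid-project")
```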

These eight factors were identified through process modeling using the JBI Implementation approach. Frameworks such as the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF) offer a determinant perspective on such factors; they are complementary to the JBI Implementation approach and can be integrated into planning for practice change to identify additional determinants that influence implementation outcomes. Cultural changes in the workplace may also be needed to facilitate the implementation of evidence. The literature clearly indicates that culture and climate affect organizational performance and, in the context of healthcare, this can have serious effects on patient care as well as on staff.47 It is therefore important to be able to recognize, understand, and subsequently develop effective cultures in the workplace.48

Phase 4: review practice against evidence-based audit criteria

JBI's approach to implementation is grounded in the clinical audit and feedback process, using evidence-based audit criteria and the assumption that healthcare practice should be based on the best available evidence.24 Clinical audit can be defined as “a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change” (p. 1).49 It is a basic but critical process in continuous quality assessment.50 A clinical audit cycle includes a baseline audit of practice, feedback of results to healthcare staff, reflection on barriers and potential strategies to address them, and then follow-up data collection to enable comparison with the baseline data.

Audit criteria are defined as “well defined standards set on the principles of evidence-based healthcare” (p. 250)51 and an audit can have several goals:50

• It can broadly address the components of clinical effectiveness in the ongoing goal of improving the quality of healthcare.
• It can provide a means by which clinical units and organizations can assess and compare their work with established guidance. This may also be useful for benchmarking within or across organizations.
• It can promote self-assessment in practitioners, which can provide professional and practical development, as well as add to an overall quality agenda.

JBI's evidence-based audit criteria are developed following the principles of evidence-based healthcare. Every audit criterion is developed and informed by a rigorous review of the evidence and aligned to a best practice recommendation.52 This stage requires expert skills to search subject-matter databases and critically appraise research publications. Once evidence-based audit criteria have been established, a baseline audit of current practice is conducted, and the findings are compared with the standards indicated in the evidence-based audit criteria.
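
To illustrate the arithmetic behind this comparison, compliance with each audit criterion is typically summarized as the percentage of audited cases that met that criterion. The sketch below assumes a simple yes/no observation per case per criterion; the criteria and figures are invented for illustration and are not taken from the paper.

```python
def compliance_rate(observations: list[bool]) -> float:
    """Percentage of audited cases that met a criterion (yes/no observations)."""
    if not observations:
        raise ValueError("No observations recorded for this criterion")
    return 100.0 * sum(observations) / len(observations)

# Hypothetical baseline audit: 20 patient records audited against two criteria.
baseline = {
    "Pain assessed with a validated tool on admission": [True] * 9 + [False] * 11,
    "Pain reassessed within 30 minutes of analgesia": [True] * 14 + [False] * 6,
}

for criterion, observed in baseline.items():
    # e.g. 45% and 70% compliance, to be fed back to the team in Phase 5 (GRiP).
    print(f"{criterion}: {compliance_rate(observed):.0f}% compliance")
```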

It is important to highlight that an audit is a systematic, not an ad hoc, change process. As such, it is recommended that the team leading the implementation project develop a proposal or plan that clearly and transparently outlines the objective(s) and methods to be undertaken.

Phase 5: implement changes to practice using Getting Research into Practice

After completing a baseline clinical audit, the results are evaluated, and these findings are used to assist in identifying barriers and enablers to the utilization of evidence and planning and implementing change. Grimshaw et al. wrote that “unfortunately, our evidence on likely effectiveness of different strategies to overcome specific barriers to knowledge translation remains incomplete. Individuals involved need to: identify modifiable and nonmodifiable barriers relating to behavior; identify potential adopters and practice environments; and prioritize which barriers to target” (p. 5).53 This approach to identifying barriers and targeting strategies to these barriers forms the basis of the JBI practice change method Getting Research into Practice (GRiP).50,54–57

The GRiP method aims to compare the findings of the audit, identify barriers and enablers to the utilization of evidence, and assist in developing implementation strategies to reduce the evidence-to-practice gap. The GRiP method can be undertaken in three stages: evaluating baseline audit findings, identifying barriers and enablers to evidence utilization, and developing and implementing strategies for change.

Evaluate baseline audit findings

Before any implementation changes to practice commence, it is important to compare current practice with best practice standards (using the findings from the baseline audit). These findings (both good and bad) should be presented to everyone involved in the project (stakeholders, working groups, and clinical staff). Feedback can be delivered one-on-one with individuals, although there may be benefits in providing feedback to a group in a constructive way.58 Feedback may also be provided in a number of forms – verbal, printed, or electronic – and it may be that a variety of these forms are used (when possible). When reviewing the findings of current practice compared to best practice standards, it is important to remember that there are many contributing factors that might impact practice and, in turn, whether practice is based on evidence. Generally, a single group or individual is not responsible for these factors; therefore, any process of change must encompass a no-blame attitude.

Identify barriers and enablers to evidence implementation

Using the findings from the baseline clinical audit, the implementation project group, led by the project leader, should aim to identify barriers and enablers to implementing evidence into practice. It is important to identify barriers as well as strengths and enablers to achieving best practice, as these will help in developing strategies for practice change. A systematic review identified common barriers and enablers to the implementation of hospital-based interventions and categorized them as system (environmental context, culture, communication processes, and external requirements such as reporting standards and guidelines), staff (commitment and attitude, awareness and understanding, role identity, skills, ability, and confidence), and intervention (ease of integration, evidence base, safety/ethical concerns, and supportive components) factors.5 Although there are common barriers to evidence implementation, there are also differences across contexts. The authors noted the importance of design and planning, and of considering context, to achieve sustainable change.5

A variety of methods are available to identify barriers and enablers to change, such as in-depth interviews, surveys, and focus groups with key personnel involved in the practice, observing clinical practice, and brainstorming with key stakeholders. How barriers and enablers are established will be guided by local circumstances and may be affected by available time and resources.

Developing strategies for change

Identifying barriers and targeting strategies to barriers forms the basis of the JBI practice change method, namely, GRiP. The GRiP approach provides a clear, transparent structure for those implementing evidence into practice to evaluate the audit findings, identify the barriers and enablers of evidence utilization, and develop and implement strategies for change (Table 2). A comprehensive description of the implemented strategies is required to allow others to reproduce similar strategies and promote change.

Table 2 - Getting Research into Practice matrix

Barrier: What was the barrier?
Strategy: What was the action to overcome the barrier (e.g. development of tool, delivering educational sessions, development of pamphlets)?
Resources: What resources did you use to achieve a desirable outcome (e.g. tool, charts, educational package, seminars, extra staff)?
Outcomes: What was the result? How was an improvement measured?
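
Some teams find it helpful to keep the GRiP matrix as a structured record so that each barrier stays linked to its strategy, resources, and outcome throughout the project. The sketch below simply mirrors the columns of Table 2; the GripEntry class and the example row are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GripEntry:
    """One row of the Getting Research into Practice (GRiP) matrix (Table 2)."""
    barrier: str                                          # What was the barrier?
    strategy: str                                         # What action was taken to overcome it?
    resources: list[str] = field(default_factory=list)    # Resources used to achieve the outcome
    outcome: str = ""                                      # Result, and how improvement was measured

grip_matrix = [
    GripEntry(
        barrier="Staff unaware of the updated pain-assessment recommendation",
        strategy="Short in-service education sessions delivered on each shift",
        resources=["Educational package", "Pocket prompt cards", "Backfill for attendees"],
        outcome="Follow-up audit showed criterion compliance rose from 45% to 82%",
    ),
]
```
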
Phase 6: re-assess practice

Evaluation is a key component of implementing evidence into practice as it provides a systematic method to collect “information about the activities, characteristics, and outcomes of programs, services, policy, or processes, in order to make judgments about the program/process, improve effectiveness, and/or inform decisions about future development” (p. 23).59 Determining the ideal timing of re-assessment requires careful consideration of the type of strategies implemented (simple versus complex), the number of staff involved in the process, the experience level of those staff, and whether the culture is hostile or accommodating. The evaluation process should mirror the pre-implementation (baseline) evaluation process to ensure comparable results and data. Once the results have been collected, a careful review of the differences between the baseline and follow-up results must be undertaken by the project team.
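
Because the follow-up audit uses the same criteria and approach as the baseline, the comparison largely reduces to the change in compliance for each criterion. A minimal sketch, assuming compliance percentages have already been calculated for both cycles (the figures are illustrative only):

```python
# Hypothetical compliance percentages for each audit criterion.
baseline_compliance = {"Criterion 1": 45.0, "Criterion 2": 70.0}
followup_compliance = {"Criterion 1": 82.0, "Criterion 2": 75.0}

for criterion, before in baseline_compliance.items():
    after = followup_compliance[criterion]
    # Report the absolute change in percentage points for the project team to review.
    print(f"{criterion}: {before:.0f}% -> {after:.0f}% ({after - before:+.0f} percentage points)")
```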

Phase 7: plan for sustainability

Sustainability refers to the ability to maintain evidence-based practices beyond the duration of an implementation project; it cannot be achieved without the change being embedded in the organizational norms and culture.16,60–62 Consideration should be given to who will be responsible for continuing to evaluate and improve the practice area, and to when it is feasible and appropriate to repeat a review of the practice area. There is no gold standard for when another audit should be conducted, and the decision is influenced by many factors. Table 3 provides an approximate guide for determining the frequency of subsequent audits, aligned with the percentage of compliance with the standards.

Table 3 - Generic guide for determining timing of subsequent audits

Compliance with standards <50%: audit every 3 months
Compliance with standards 50–<80%: audit every 6 months
Compliance with standards 80–100%: audit every 12 months
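
Read as a simple rule, Table 3 maps the level of compliance to a suggested interval before the next audit. The sketch below encodes that guide; the function name is illustrative, and local factors may justify auditing sooner or later than the guide suggests.

```python
def months_until_next_audit(compliance_percent: float) -> int:
    """Approximate re-audit interval from Table 3 (a generic guide, not a rule)."""
    if not 0 <= compliance_percent <= 100:
        raise ValueError("Compliance must be between 0 and 100")
    if compliance_percent < 50:
        return 3
    if compliance_percent < 80:
        return 6
    return 12

# e.g. months_until_next_audit(82.0) -> 12
```
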
Conclusion

JBI's approach to evidence implementation offers practical and pragmatic guidance to assist healthcare professionals and organizations in navigating the complexity surrounding the implementation of evidence into practice. Developed by a group of international experts, the seven-phase process model complements the existing JBI EBHC Model and assists healthcare professionals with the ‘planning’ and ‘doing’ of getting evidence into practice. We acknowledge that movement between the seven phases is often dynamic, as implementation is complex and often does not follow a linear process. Further research and methodological development are required to formally evaluate the robustness of the approach and to better understand the complex nature of evidence implementation, including the involvement of patients in evidence implementation.

Acknowledgements

We would like to acknowledge the members of the JBI Implementation Methodology Group for their review of and feedback on the JBI Manual for Evidence Implementation.

Declarations

Ethics approval and consent to participate: not applicable.

Consent for publication: All authors provide consent for publication.

Availability of data and materials: not applicable.

Funding: No funding support.

Authors’ contributions: K.P.: Contributed conceptually to and led the development of the JBI Implementation Framework and the JBI Manual for Evidence Implementation, and wrote sections of the manual. Drafted the article and made amendments to the article in line with feedback provided. Provided final approval for submission.

A.M.: Contributed conceptually to the development of the JBI Implementation Framework and the JBI Manual for Evidence Implementation, wrote sections of the manual, and provided critical revision of the article. Provided final approval for submission.

C.L.: Contributed conceptually to the development of the JBI Implementation Framework and the JBI Manual for Evidence Implementation, wrote sections of the manual, and provided critical revision of the article. Provided final approval for submission.

Z.M.: Contributed conceptually to the development of the JBI Implementation Framework and the JBI Manual for Evidence Implementation, wrote sections of the manual, and provided critical revision of the article. Provided final approval for submission.

Conflicts of interest

There are no conflicts of interest.

References

1. Eccles M, Mittman B. Welcome to Implementation Science. Implement Sci 2006; 1:1.
2. Bauer M, Kirchner J. Implementation science: what is it and why should I care? Psychiatry Res 2020; 283:112376.
3. Lang E, Wyer P, Haynes R. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med 2007; 49:355–363.
4. Pearson A, Jordan Z, Munn Z. Translational science and evidence-based healthcare: a clarification and reconceptualization of how knowledge is generated and used in healthcare. Nurs Res Pract 2012; 2012:792519.
5. Geerligs L, Rankin NM, Shepherd HL, Butow P. Hospital-based interventions: a systematic review of staff-reported barriers and facilitators to implementation processes. Implement Sci 2018; 13:36.
6. Tabak R, Khoong E, Chambers D, Brownson R. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012; 43:337–350.
7. Ayanian J, Markel H. Donabedian's lasting framework for healthcare quality. N Eng J Med 2016; 375:205–207.
8. Gardner G, Gardner A, O'Connell J. Using the Donabedian framework to examine the quality and safety of nursing service innovation. J Clin Nurs 2014; 23:145–155.
9. Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Healthcare 1998; 7:149–158.
10. Rycroft-Malone J, Bucknall T. Using theory and frameworks to facilitate the implementation of evidence into practice. Worldviews Evid Based Nurs 2010; 7:57–58.
11. Rogers E. Diffusion of innovations. 4th ed. New York: Free Press; 1995.
12. Brown D, McCormack B. Developing postoperative pain management: utilising the promoting action on research implementation in health services (PARIHS) framework. Worldviews Evid Based Nurs 2005; 2:131–141.
13. Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual framework. Qual Saf Healthcare 2002; 11:174–180.
14. Graham I, Logan J, Harrison M, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006; 26:13–24.
15. Atkins L, Francis J, Islam R, et al. A guide to using the theoretical domains framework of behaviour change to investigate implementation problems. Implement Sci 2017; 12:77.
16. Khalil H. The triple C (consultation, collaboration and consolidation) model: a way forward to sustainability of evidence into practice. Int J Evid Based Healthc 2017; 15:40–42.
17. Milat A, Li B. Narrative review of frameworks for translating research evidence into policy and practice. Public Health Res Pract 2017; 27:e2711704.
18. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci 2015; 10:53.
19. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun 2020; 1:27.
20. Jordan Z, Lockwood C, Aromataris E, et al. JBI series paper 1: introducing JBI and the JBI Model of EHBC. J Clin Epidemiol 2022; S0895-4356(22)00091-9.
21. Lockwood C, Pearce A, Sfetcu R, Jordan Z. Behaviour science: theoretical domains framework representation within nursing implementation studies. Manage Health 2019; XXIII:3–6.
22. Jordan Z, Lockwood C, Munn Z, Aromataris E. The updated Joanna Briggs Institute model of evidence-based healthcare. Int J Evid Based Healthc 2019; 17:58–71.
23. Lockwood C, Aromataris E, Munn Z. Translating evidence into policy and practice. Nurs Clin North Am 2014; 49:555–566.
24. Lizarondo L, McArthur A. Strategies for effective facilitation as a component of an evidence-based clinical fellowship program. J Contin Educ Nurs 2017; 48:458–463.
25. Lockwood C, Munn Z, Jordan Z, et al. JBI series paper 3: the importance of people, process, evidence, and technology in pragmatic, healthcare provider-led evidence implementation. J Clin Epidemiol 2022; S0895-4356(22)00090-7.
26. Lockwood C, Stannard D, Jordan Z, Porritt K. The Joanna Briggs Institute clinical fellowship program: a gateway opportunity for evidence-based quality improvement and organizational culture change. Int J Evid Based Healthc 2020; 18:1–4.
27. McArthur A, Munn Z, Lizarondo L, Porritt K. The ripple effect of evidence implementation: a descriptive analysis of JBI's Evidence-based Clinical Fellowship Program. J Evid Implement 2021; 19:142–148.
28. Bayuo J, Munn Z, Campbell J. Assessment and management of burn pain at the Komfo Anokye Teaching Hospital: a best practice implementation project. JBI Database Syst Rev Implement Rep 2017; 15:2398–2418.
29. Sykes PK. Prevention and management of postoperative delirium among older patients on an orthopedic surgical unit: a best practice implementation project. J Nurs Care Qual 2012; 27:146–153.
30. Mwita CC, Akello W, Sisenda G, et al. Assessment of cardiovascular risk and target organ damage among adult patients with primary hypertension in Thika Level 5 Hospital, Kenya: a criteria-based clinical audit. Int J Evid Based Healthc 2013; 11:115–120.
31. Porritt K, McArthur A, Lockwood C, Munn Z. JBI handbook for evidence implementation. JBI; 2020. Available at: https://implementationmanual.jbi.global. https://doi.org/10.46658/JBIMEI-20-01.
32. Aromataris E, Stern C, Lockwood C, et al. JBI series paper 2: tailored evidence synthesis approaches are required to answer diverse questions: a pragmatic evidence synthesis toolkit from JBI. J Clin Epidemiol 2022; S0895-4356(22)00089-0.
33. Donabedian A. Evaluating the quality of medical care. Milbank Q 2005; 83:691–729.
34. Cranley LA, Norton PG, Cummi
