Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem

The six themes discussed below, which were identified through the pre-mortem (Table 1), are also challenges in the fields in which we initially trained. These themes threaten forward movement if we are not thoughtful about the field’s evolution and growth. Framing them using prospective hindsight highlights their complexities and points toward potential opportunities.

Table 1 Threats that might stymie progress in the field of implementation science: Themes, causes, and solutions

Theme 1: We did not impact population health or health equity

Threats

Impact is foundational to implementation science. We considered impact from the equity perspective of deploying discoveries that are relevant, appropriate, and feasible across diverse populations and settings for widespread health and societal benefits, while acknowledging the complexity of defining impact [12]. The literature has only a few examples of the field having broad impact (e.g., implementation of patient safety checklists) [3]. This scarcity of success may be due to many implementation studies having null results, implementation efforts taking many years to influence public health, or a misalignment between reporting impact broadly and metrics such as papers and grants used to evaluate researchers and the quality of their research [13]. Regardless, as the field coalesces and grows, funding, uptake, and scaling of implementation approaches require that they demonstrate societal and population health impact and economic value. Below, we outline tensions we can address to demonstrate impact as the field continues to develop and prove its utility [14, 15].

Our mission to improve EBP implementation is more complex than instituting a discrete strategy [16]. The field’s relatively focused endeavor to improve the widespread, routine adoption, implementation, and sustainment of EBPs has therefore evolved to be more all-encompassing. This is partly attributed to findings that organizational factors such as culture predict much of the ability of health service organizations to provide high-quality care and implement EBPs [17, 18] and that policy significantly shapes health and inequities, partially through financing and incentives for changing the healthcare status quo [19]. Additionally, as part of context, upstream societal and structural factors such as structural racism and social determinants of health are recognized as critical for shaping health inequities and inequitable implementation [20]. Only recently, however, has the field more explicitly included and measured these determinants and their impact on implementation and health outcomes [20,21,22,23]. Given the important role of multilevel context in implementation, understanding the real-world complexity and interconnected nature of these determinants is critical. Yet inclusion of these complexities in our models and solutions takes more time and resources than was originally thought for a field whose mission is to hasten the deployment of science to practice.

Opportunities

As implementation researchers, our publications ought to detail impact: both empirical evidence about health outcomes in our studies (including whether outcomes were equitably improved across all groups) and impact at organizational or policy levels resulting from research and partnerships (e.g., results leading to state funding for EBP delivery, or partners reporting that implementation challenges were addressed). Measuring health outcomes is often challenging when study resources are allocated to rigorous evaluation of implementation strategies and outcomes, but it may offer the greatest opportunity to demonstrate impact. Increasingly, we need to leverage routinely collected health outcome or administrative data and other pragmatic measures [24].

Another potential solution to increase impact is better defining an implementation strategy’s scope. Some strategies focus on proximal implementation and clinical outcomes and should acknowledge their inability to meaningfully affect system-level outcomes; others are designed for system-level effects and should state their limitations for individual-level impact. This suggestion stems from our experience studying phenomena ranging from individual clinician behavior to state and national policies, and our realization that balancing breadth and depth is important for the future of implementation. It also underscores the importance of being explicit about how and why an implementation strategy is intended to work (i.e., specifying hypothesized mechanisms) [16, 25, 26].

Because of the need to consider context, multilevel system variation, and other complexities while accelerating the implementation of EBPs in communities, team science is essential for equitable impact [27]. Examples include applying implementation science to examine and address social and structural determinants (e.g., structural racism) as part of contextual assessments, thereby advancing understanding of barriers to implementation and informing the selection, refinement, or adaptation of EBPs and implementation strategies [20, 28]. This work, conducted in collaboration with community members and leaders, intervention developers, prevention scientists, policymakers, and other scientific and practitioner partners, can provide a foundation and strategies for shared responses to inequities or uneven EBP implementation, informed by implementation and policy development that prioritizes health equity [29, 30]. Implementation scientists can also prioritize EBPs and strategies with potential to promote health equity, highlighting the value and impact of the field while avoiding inadvertently reinforcing inequities. We can measure and track the equitable delivery of EBPs and implementation strategies across populations and settings and the extent to which these approaches alter health inequities [20, 21].

Areas in which we ought to generate more evidence to demonstrate impact include (a) investigating the relationship between implementation outcomes and health outcomes [31] and prioritizing both sets of variables, as suggested by hybrid designs [16]; (b) demonstrating improvement in population health, including promoting health equity and reducing health inequities [22]; and (c) demonstrating the economic impact of EBP implementation and of poor/ineffective implementation [14] (i.e., return on investment and value). Demonstrating the economic costs of effective strategies is critical [14, 16, 32, 33]. Without compelling evidence that implementation science-informed approaches yield a favorable return, policymakers and administrators may be reluctant to invest time and resources in complex approaches. Identifying the best approach to economic analysis and ensuring that these data are collected during implementation efforts are critical to building a business case for funding implementation.

Finally, and perhaps most importantly, translating our scientific knowledge into usable knowledge for the public is a way forward for impact. This can be accomplished through multiple avenues. The recently published National Cancer Institute practitioner guide to implementation science [34] is one example of a product that can translate research to practice. We recommend that implementation scientists also clearly communicate the value of the field to the public and policymakers. The COVID-19 pandemic underscores the value of an implementation science-informed approach: an influential simulation paper published before emergency-use approval of COVID-19 vaccines suggested that implementation, rather than vaccine effectiveness, would be the major challenge to global population vaccination [35], precisely predicting the vaccine rollout challenges that followed. If more people in the public and in healthcare knew what implementation science offers, we could have greater impact and add more value in meeting public needs. As implementation scientists, our responsibility is to shape the narrative about the value of our field [36]. This includes communicating our work in understandable ways and answering, in lay venues including op-eds, the key questions that policymakers, the public, and our broader communities have for us [37, 38].

Theme 2: We over-anchored on becoming a “legitimate” science

Threats

The past 15 years have seen a flurry of activity around codifying and legitimizing the science of implementation. This pattern is consistent with the emergence of a new field: lacking a common body of facts, scientists converge on conceptual frameworks, terminology, methods, and designs to answer research questions [39]. A shared lexicon and tools are laudable goals and can legitimize implementation science, but they could undermine the future of the field if not approached thoughtfully.

First, we observe a tendency in the field to reify commonly used frameworks, approaches, and ways of thinking. Using similar terminology has clear communication advantages, but we see a disadvantage when all studies apply the same conceptual frameworks, designs, and methods without critical thinking, which can contribute to stagnancy and limit innovation. For example, while Proctor and colleagues’ influential 2011 paper substantially advanced the field by defining implementation outcomes [40], scholars rarely posit outcomes beyond this initial set, and a few outcomes (e.g., fidelity) are over-represented compared to others.

A second example is the idea that implementation science-related inquiries require an EBP rather than simply an existing innovation or program that meets a community’s need [41]. The COVID-19 pandemic demonstrated how quickly implementation science might become obsolete if we only get involved when there is an EBP [42, 43]. Furthermore, approaches that over-prioritize scientific evidence over community-defined evidence can disempower community partners [41, 44]. This might manifest as EBPs that do not reflect or involve populations that experience historical or ongoing mistreatment, discrimination, or injustices from public health and/or medical institutions, presenting foundational challenges in our ability to equitably reach, implement, and sustain EBPs [21].

A third challenge is related to our borrowing from disciplines such as organizational theory, behavioral science, and systems science. One danger, since funders and reviewers prioritize novelty [45], is borrowing from other fields to maximize innovation but doing so in a superficial manner that does not reap the benefits of deep interdisciplinary or transdisciplinary work.

Opportunities

Healthy critiques, reflection, and dismantling of current thinking are needed for scientific field development. We have opportunities to innovate in our methodologies and theories before settling on what is “widely accepted” [46, 47]. Although we have 150 published implementation frameworks [48] and must carefully consider the value of adding more, frameworks remain opportunities to shift paradigms and advance theory. Deeper application and evolution of methods from adjacent fields are opportunities to harness well-vetted theory, advance our science, and increase rigor and impact, particularly in promoting health equity. For example, we have seen recent innovations in adapting existing theories, models, and frameworks to focus more on equity (e.g., see [23, 49–51]). We also note opportunities to learn from and integrate theories and frameworks from fields with a long history of health equity scholarship, including anthropology, sociology, and public health [52]. Simultaneously, we cannot overpromise the benefits of implementation science: we will quickly become disillusioned if we are not circumspect about the potential benefit, or lack thereof, of the products of our implementation work.

Theme 3: We recreated the research-to-practice gap

Threats

Although implementation science was created to reduce the research-to-practice gap, recent critiques suggest we may be recreating it [53]. This could undermine the forward movement of the field [5], including work to reach populations experiencing health inequities [22]. More bidirectional partnership between implementation research and practice is needed [54].

Because implementation science requires multilevel and partnered approaches (theme 1), it is complex by nature. Input from multiple sources, which often prioritizes researcher perspectives, may lead to implementation strategies developed without “designing for implementation.” In other words, many strategies are designed for maximal theoretical effect, with the unintended consequence of limiting fit, feasibility, and/or affordability [55]. Additionally, because implementation science is relatively new, investigators may feel pressure to develop their own approach to push the field forward, especially given the premium that funders and reviewers place on innovation. The resulting innovations may be less responsive to partners and too complex for, or incompatible with, many practice settings. Access to an implementation strategy may also be limited when there is little capacity to train others in complex, “proprietary” strategies. As we have been advocating for years that intervention developers design for implementation [56], we might consider heeding our own advice.

Second, the state of implementation frameworks is challenging because of both their sheer number and their limited utility for pragmatic application. The multitude of implementation frameworks [48] makes it considerably difficult for researchers and community partners to select a framework to guide their work and to pragmatically apply the findings.

Third, a key tension we hear from partners is that implementation science should balance adaptation for context with the generation of generalizable knowledge. While context is key [57, 58], tailoring solutions for particular sites or efforts may not always be possible with limited resources. We ought to balance pragmatism, the creation of generalizable knowledge, and finite resources.

Opportunities

To avoid recreating the research-to-practice gap, we should balance advancing implementation science theory and general knowledge with serving research and community partners, all with finite resources. Solutions may include refining commonly used frameworks to enhance pragmatism and facilitate application. Examples include the sixth domain added to the Consolidated Framework for Implementation Research (CFIR) focused on patient needs [59] and the adaptation of CFIR for the context of low- and middle-income countries [59].

Developing modular implementation approaches (i.e., menus of common implementation strategies that can be tailored to setting) is an opportunity for innovation and for creating broadly useful strategies. These solutions are opportunities for both implementation researchers and practitioners, who apply research to transform practice. As humans, we are prone toward dichotomous thinking, but implementation science will be stronger if we prevent the emergence of separate “research” and “practice” ideologies. Another opportunity is a lesson from intervention development: avoid assuming “if you build it, they will come” [60].

To avoid a research-to-practice gap in implementation, we should assemble the voices of all key partners, including community members, implementation researchers, and practitioners. The most effective way forward is true partnership to advance knowledge quickly and ensure rigor, relevance, and translatability. One solution comes from the Society for Implementation Research Collaboration (SIRC), which proposed an integrated training experience for implementation researchers, practitioners/intermediaries, practice leaders, and policy leaders to reduce the implementation research-practice gap [60]. Building on principles of pragmatic research, team science (theme 1), and interprofessional education, this approach could be a model for integrated professional development.

Theme 4: We could not balance making implementation science available to everyone while retaining the coherence of the field

Threats

A major challenge for the field relates to capacity building [61], with the goal of making implementation science more broadly available for implementation research and practice. Pressures to create traditional niches of expertise have resulted in a sometimes insular field that often requires individuals to be perceived as “card-carrying” implementation scientists to obtain funds for large-scale implementation research. If we want to meet demand, have broader impact, and formalize as a scientific field, we need more implementation scientists [62]. However, having everyone “do implementation science” has the potential to dilute the field and lessen perceived innovation and scientific coherence. The epistemology of science offers many theories about the tensions in how fields grow and thrive [63]. If we, as implementation scientists, act as gatekeepers to retain field coherence, we lose opportunities to partner with adjacent fields, such as improvement science [64] and intervention science, and to grow synergistically rather than in parallel silos. We also give up the chance to embed in learning health systems [65] and other organizations available internationally, in the US [66], UK [67], and Australia, and repeatedly proposed in low- and middle-income countries [68, 69], to create synergy between implementation science-informed approaches and quality improvement, clinical informatics, and innovation [70].

Opportunities

There is a growing understanding that more implementation science capacity is needed while retaining field coherence [71, 72]. One way we have come to think about this is to consider the needs of three groups. First, basic scientists and early-stage translational researchers should be aware of implementation science but will likely not incorporate its approaches into their work without partnering with an implementation scientist. This group benefits from awareness of implementation science methods, which can be built into graduate and/or postdoctoral training. The second group includes those who might add implementation science to their toolkit (e.g., health services researchers, intervention developers, clinical trialists) and use established methods (e.g., hybrid designs, co-design) in their projects. This group requires foundational training. The third group comprises dedicated implementation science methodologists who advance the field with their work. These individuals require advanced specialized training. They may be most interested in a particular disease (e.g., cancer) and setting (e.g., acute care or schools), or they may be disease and setting agnostic, instead answering the most impactful implementation science questions. The intentional development of these different groups will promote the full range of implementation science, from basic science focused on theory and method development to applied, real-world approaches [73].

We envision a future in which all institutions, including large health systems, have a division or department of implementation scientists from the third group. We envision departments having individuals with expertise in implementation science germane to the department’s area, akin to the biostatistician model [74, 75]. Exploration of additional models for supporting institutional implementation science capacity, including leveraging Clinical and Translational Science Award programs [73, 76, 77], is needed. This kind of growth will both democratize implementation science and promote the paradigm-shaping work the field needs.
