Evidence for objects of implementation in healthcare: considerations for Implementation Science and Implementation Science Communications

The journals Implementation Science and Implementation Science Communications focus on the implementation of evidence into practice and policy. The evidence concerns objects of implementation, such as clinical or public health interventions, guidelines, medical technologies (medicines, devices), and healthcare delivery models (e.g. structured diabetes care). We have never operationalized the concept of evidence in healthcare in detail, but in the journal editorial that launched Implementation Science, Eccles and Mittman made clear that it refers to research evidence rather than evidence from practical experience or other sources [1]. Multiple related terms are commonly used in healthcare, including evidence-based practice, evidence-based intervention, and evidence-based medicine. Even the concept of “practice-based evidence” implies the need to garner rigorous evidence of the value of practice-based experience. The assumption underlying these terms is that practices, programmes, devices, or innovations more generally need to be shown to have benefits, and no unacceptable harms, before they are widely implemented in practice or policy. Our working operationalization of evidence in healthcare has included well-developed, trustworthy clinical practice guidelines [2], systematic reviews and meta-analyses of primary studies [3], and, in rare instances, single rigorous studies (typically randomized trials) of the practice of interest. It is incumbent on study authors to be clear about the research evidence supporting the “thing” [4] or “implementation object” being implemented and how that evidence was assessed. Some submissions are rejected because the thing being implemented lacks a sufficient evidence base or is poorly operationalized and assessed.

However, some researchers perceive that this threshold is not entirely explicit, is difficult to reach, or is inappropriate in certain contexts. Our journals will remain focused on the implementation of evidence-based practices, programmes, or policies in healthcare, but we believe there are several reasons to reflect on what is meant by this. First, in many cases a set of practices is implemented, some of which are clearly evidence-based while others are not or have mixed evidence. For instance, many clinical practice guidelines contain recommendations with a strong evidence base alongside recommendations with weak or conflicting evidence. Furthermore, using guidelines requires clinical judgement, ideally in partnership with patients [5]. Second, research evidence needs to be understood in context, so the transportability of research findings from one setting to another can be an issue [6]. For instance, is research conducted in the USA relevant for implementation in Africa or Asia? Does evidence from clinical trials with predominantly White populations also apply to Hispanic or Indigenous populations? Third, some practices depend on the presence of technical, legal, and other arrangements, which are interventions in their own right that need to be created before the practices can be evaluated for benefits and harms. For instance, health-related digital devices depend on information technology infrastructures, and advanced nursing roles require educational programmes and regulatory arrangements. In these situations, developing the necessary structures precedes the generation of evidence on intervention outcomes. Fourth, models of “precision medicine” or “precision healthcare” may imply the simultaneous conduct of patient care, research, and decision support. For instance, genetic profiling of individual patients can guide their medical treatment in oncology while simultaneously generating data for research and decision support. This challenges the assumption that evidence generation precedes implementation. Fifth and last, we observe that strong research evidence (i.e. from randomized trials) is not always perceived to be required or appropriate. For instance, several health authorities require professional assessment, but not necessarily clinical trials, for approval of some interventions (e.g. medical devices and healthcare delivery models). This implies that such interventions, particularly those with low risk, are approved for use even though their evidence base does not include randomized trials.

In the remainder of this commentary, we discuss three specific cases and how we deal with them in our journals. Before turning to these cases, we summarize the arguments for focusing on evidence-based practices for implementation.

Why is evidence important?

Our rationale for a focus on the implementation of evidence-based practices, programmes, devices, and policies (rather than those of unproven value) is linked to the argument for the use of evidence in healthcare generally. Historically, this focus implied a departure from authority-based practice and was motivated by examples of outdated practices that remained in use and of new practices of proven value that were implemented only after many years, or not at all [7]. The use of evidence to guide healthcare practice has become a broadly accepted ideal. Several decades after the introduction of evidence-based healthcare, we perceive that many innovations implemented in healthcare practice or policy have unproven value or proven lack of value, require resources, and may cause harm. For example, there is debate regarding the benefits and harms of healthcare practices such as breast cancer screening [8] and HIV self-testing [9]. Likewise, the majority of new medical technologies used in German hospitals were not supported by convincing evidence of added value [10], which may not represent the best use of resources to optimize the health and well-being of individuals and populations.

Implementation science focuses heavily on implementation strategies, whose selection, optimization, and effectiveness require dedicated research. The field also examines the context(s) in which an object is implemented, as the application of objects does not happen in a vacuum but involves complex interactions with many contextual factors (such as service systems, organizations, clinical expertise, patient preferences, and available resources). Here, however, we focus on the evidence for the objects of implementation themselves, which also influences the uptake of those objects into practice. Beginning with the ground-breaking work of Everett Rogers [11], factors related to the object being implemented (the innovation, intervention, evidence-based practice, and so on) have been theorized to be critical in influencing the success of implementation efforts. Since Rogers’ work, most consolidated determinant frameworks (e.g. [12,13,14,15,16]), as well as many others derived from empirical and theoretical approaches, have included domains related to the thing being implemented, including the perceived strength of its evidence. In Rogers’ model, the effectiveness of an innovation is seen as influential in users’ decisions to adopt it, along with several other factors related both to the innovation and to its adopters.
