The Evolving Economics of Implementation

Decision makers in healthcare systems strive to implement evidence-based, high-quality care. A lack of economic data is often cited as a barrier to implementation, especially when decision makers are asked to allocate finite resources and face competing demands.1–3 Studies that evaluate the cost of implementation strategies remain rare, and they often estimate implementation costs without connecting those investments to patient outcomes. In this issue of BMJ Quality & Safety, in a cost-effectiveness evaluation of a quality improvement project to improve thrombolysis door-to-needle times in a large Norwegian stroke centre, Ajmi and colleagues4 report that their implementation strategies cost US$44 802 in fixed costs and US$2141 per month in recurring costs. Further, they report an incremental cost per life-year saved ranging from US$4961 to US$10 543 over a 5-year period, with the range reflecting different assumptions made in a sensitivity analysis.
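To make the arithmetic behind such figures concrete, the sketch below computes the total implementation cost over a 5-year horizon and an incremental cost-effectiveness ratio. The cost inputs come from the paper; the life-years-gained figure is a purely illustrative assumption, not the authors' estimate.

```python
# Minimal sketch of the incremental cost-effectiveness arithmetic.
# Cost inputs are from Ajmi and colleagues; the life-years gained
# are a hypothetical assumption for illustration only.

FIXED_COST = 44_802       # one-off implementation cost (US$)
MONTHLY_COST = 2_141      # recurring implementation cost (US$/month)
HORIZON_MONTHS = 5 * 12   # 5-year analytic horizon

total_cost = FIXED_COST + MONTHLY_COST * HORIZON_MONTHS

# Hypothetical incremental life-years saved over the horizon, chosen
# only so the ratio falls inside the reported US$4961-US$10 543 range.
assumed_life_years_gained = 25.0

icer = total_cost / assumed_life_years_gained  # US$ per life-year saved
print(f"Total implementation cost: ${total_cost:,.0f}")
print(f"ICER (illustrative): ${icer:,.0f} per life-year saved")
```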

The authors should be commended for estimating the incremental cost per life-year saved, incorporating costs of implementation of the intervention. This information alone may help in allocating scarce resources. However, we suspect that many decision makers will remain uncertain about whether they should try to replicate these efforts in their own systems.

One challenge with economic evaluations is providing enough context for other decision makers to easily determine the generalisability of the findings to their own systems. Such knowledge is necessary if there is a desire to spread and replicate this work elsewhere. In the study on which Ajmi and colleagues’ analysis is based, the door-to-needle time was cut in half (from a median of 27 min to 13 min). On one hand, that is a triumph; on the other hand, these hospitals were already doing relatively well at the outset. In contrast, Get With The Guidelines efforts in the USA reported door-to-needle time improvements from a median of 77 min to 67 min.5 We anticipate many decision makers will want to know whether they can reasonably expect a relative reduction in door-to-needle time of 50% by investing a similar amount of money (US$44 802 in fixed costs, plus US$2141 per month) in a similar intervention.

As Lewis and colleagues6 highlight, the mechanisms that drive changes in short-term and overall outcomes must be explicitly described for others to feel confident in implementing the intervention themselves. Other decision makers can then try to extrapolate from the model to guide decisions in their own healthcare systems. However, using economic models to make individual decisions, whether for a single patient or a whole hospital, is fraught with difficulty.7 In the paper, Ajmi and colleagues4 discuss reasons why they likely saw favourable outcomes as a result of their implementation efforts but stopped short of exploring in more detail how these outcomes may be affected by varying mechanisms. Efforts to link the economic results more tightly to mechanisms of action will help decision makers decide whether replication is warranted, which depends, in part, on whether they can expect similar results if they invest in similar interventions and implementation strategies.

The issue of context comes up again with the authors’ decision to exclude additional healthcare costs (eg, additional CT scans, the price of additional thrombolytic medication, critical care stays). Ajmi and colleagues did not describe Norway’s approach to healthcare financing or whether cost containment is a priority. However, implementation scientists should consider the incentives created by the broader financial landscape. Health systems financed through a ‘fee for service’ model may not be concerned by the exclusion of these downstream costs, but those working in systems financed through capitation or global payments may feel differently. In the USA, a growing number of decision makers are asking for budget impact analyses to understand how potential interventions will affect the allocation of funds and resources in the near future given existing budget constraints.8 9
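As a rough illustration of the question a budget impact analysis answers, the sketch below tallies the near-term spending change from adopting such an intervention, including the downstream treatment costs the authors excluded. Every quantity here is a hypothetical placeholder; a real analysis would use local volumes, prices and payer mix.

```python
# Hedged sketch of a simple year-1 budget impact calculation.
# All figures are hypothetical assumptions, not data from the paper.

patients_per_year = 400            # assumed annual stroke volume
thrombolysis_rate_before = 0.15    # assumed treatment rate before
thrombolysis_rate_after = 0.20     # assumed treatment rate after

drug_cost = 6_000                  # assumed cost per thrombolytic course (US$)
implementation_cost_year1 = 44_802 + 2_141 * 12

# Downstream cost of treating additional patients: the kind of cost
# excluded from the authors' analysis but relevant to a budget holder.
extra_treated = patients_per_year * (thrombolysis_rate_after
                                     - thrombolysis_rate_before)
downstream_cost = extra_treated * drug_cost

budget_impact_year1 = implementation_cost_year1 + downstream_cost
print(f"Year-1 budget impact: ${budget_impact_year1:,.0f}")
```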

Although the issues we discuss could be viewed as critiques, they partly reflect the learning curve for researchers who are grappling with economic evaluations applied to implementation research with little guidance on best practices. We hope there will be more economic evaluations of implementation efforts in the future. In that vein, we offer a few suggestions.

First, jargon can be an impediment, especially in multidisciplinary areas such as implementation science. Readers with varying experience of implementation science or economic evaluation often talk past each other owing to a lack of mutual understanding of discipline-specific language.10 Terms such as opportunity costs, fixed costs and variable costs should therefore always be defined.11 For example, fixed costs do not vary with the scale of production. Purchasing an X-ray machine represents a fixed cost: the cost is fixed for the life of the machine, regardless of how many scans it produces. Personnel (ie, labour) typically represents a variable cost because the cost varies with the scale of production. Ajmi and colleagues4 reported that the majority of costs were fixed and attributable to personnel, which may confuse some readers. It is possible that Ajmi and colleagues used the term programme costs synonymously with fixed costs, but programme costs often vary at some level, and for interdisciplinary work, clearly explaining key terms will pay dividends.
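A small numeric sketch may make the distinction concrete. With a hypothetical machine price and per-scan cost (neither drawn from the paper), the fixed cost is spread over however many scans are produced, while the variable cost scales with volume:

```python
# Illustrative sketch of fixed versus variable costs.
# All figures are hypothetical.

FIXED_COST = 200_000         # e.g. purchasing an X-ray machine (US$)
VARIABLE_COST_PER_SCAN = 25  # e.g. technician time and consumables per scan

def average_cost_per_scan(n_scans: int) -> float:
    """Average cost falls as the fixed cost is spread over more scans."""
    return (FIXED_COST + VARIABLE_COST_PER_SCAN * n_scans) / n_scans

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} scans -> ${average_cost_per_scan(n):,.2f} per scan")
```

At 1000 scans the machine dominates the average cost; at 100 000 scans the average cost approaches the variable cost per scan, which is exactly why the fixed/variable distinction matters when judging scale.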

Second, to estimate the cost of implementation strategies, most researchers rely on microcosting approaches, which involve identifying the quantities of inputs used in the production of a service. In this study, the authors used staff surveys, with input from programme directors, to estimate the total cost. Microcosting approaches differ in precision and accuracy depending on the timing and method of data collection. As a field, we need to find more accurate ways to track staff effort. Periodic surveys asking staff to recall their effort in the past month may be feasible, but whether these estimates are accurate remains untested. Some researchers try to handle this uncertainty with sensitivity analyses, yet those ‘in the trenches’ often feel that economic evaluations impute a considerable amount of data in ways that cannot be easily addressed with a sensitivity analysis. Building time-tracking systems into the workflow is perhaps the best way to ensure accurate and precise time estimates.12 13 Time-tracking systems are standard in many industries that bill clients for direct services (eg, legal services, fixed-price contracts), and integrating them into existing clinical management systems can reduce the additional burden of time tracking for health professionals.
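For readers unfamiliar with microcosting, the sketch below shows the basic bottom-up arithmetic: sum input quantities multiplied by their unit costs. The roles, hours and wage rates are invented for illustration and are not the authors' survey data.

```python
# Minimal microcosting sketch: implementation cost as the sum of
# input quantities times unit costs. All roles, hours and wage
# rates are hypothetical assumptions.

staff_effort = {
    # role: (hours spent on implementation, hourly wage in US$)
    "neurologist":       (40, 120),
    "nurse coordinator": (160, 45),
    "radiographer":      (30, 40),
}

labour_cost = sum(hours * wage for hours, wage in staff_effort.values())

other_inputs = {
    "simulation training materials": 1_500,  # hypothetical
    "protocol printing and posters":   300,  # hypothetical
}

total_cost = labour_cost + sum(other_inputs.values())
print(f"Labour cost: ${labour_cost:,.0f}")
print(f"Total cost:  ${total_cost:,.0f}")
```

The fragile step is the hours column: recalled-effort surveys and prospective time tracking can produce very different quantities for the same roles, which is why the method of data collection drives the precision of the whole estimate.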

Finally, as noted above, studies should ideally shed light on the mechanisms of action. Including more information on the implementation process, such as the time and effort spent changing the minds of stakeholders and delays due to staff turnover, would give other decision makers advance insight into decision points that may require additional resources. For many implementation activities, time-tracking systems can also reveal the kinds of effort needed for future implementation planning, whether those efforts are small and steady or rare but large.

Creating tables of implementation costs, while helpful, provides only part of the information needed to make thoughtful decisions about implementation. The other component is understanding the benefits gained; rarely are healthcare systems focused solely on costs or solely on benefits. Integrating economic information into implementation science is appreciably difficult and made harder by the lack of guidance for researchers. As these details are worked out and best practices are identified, we look forward to more studies like that of Ajmi and colleagues informing this growing and evolving field.

Acknowledgments

We appreciate the comments from Bryony Dean Franklin and Robert E Burke on an earlier draft.
