What can Safety Cases offer for patient safety? A multisite case study

Case study: the Safer Clinical Systems programme

Our analysis draws on an evaluation we conducted of a programme known as Safer Clinical Systems, which is designed to improve the safety and reliability of clinical pathways based on learning adapted from a range of hazardous industries. It seeks to enable organisations to make improvements to local clinical systems and pathways through a structured methodology for identifying risks and re-engineering systems to control risk and enhance resilience.25 26 Use of the principles of the Safety Case approach is a centrepiece of the Safer Clinical Systems programme, although outside a regulatory context.

Funded by the Health Foundation, the Safer Clinical Systems programme was developed by a team at Warwick University and tested over a number of phases. Following initial development, a ‘testing phase’ involving eight NHS hospital sites (seven in England, one in Scotland) ran from 2011 to 2014. An ‘extension phase’ (2014 to 2016) involved further work by five of these sites and one new site.

Each participating hospital site (table 1) was required to establish a multidisciplinary clinical team. Sites in the testing phase were advised by a support team of clinicians and experts, received in-person training, had access to other resources (such as a reference manual and telephone support) and were required to report their progress regularly. Sites in the extension phase had less bespoke support and were expected instead to build on their previous learning.

Table 1

Sites involved in the programme

A requirement of participating teams was that they use the Safer Clinical Systems approach to proactively assess risks and hazards in their clinical pathways and that they produce Safety Cases at the end of their projects describing the risks and how they were being mitigated. The Safety Cases were expected to be similar in format to those used in other sectors,27 comprising a description of the clinical pathway covered, the key hazards identified through structured analysis using prescribed tools, the risk controls implemented, and, critically, a ‘safety claim’ and associated ‘confidence argument’—a pronouncement on the current safety of the system concerned, and a statement explaining how risks had been made as low as reasonably practicable (ALARP). Rather than being presented to an external regulator, as would be the case if the Safety Case were being used as a regulatory technique, the principal intended audience in this programme was the senior leadership (executive and board level) within organisations.

Evaluation methods

To study the testing and extension phases of the Safer Clinical Systems programme, we used a mixed-methods, longitudinal design, involving interviews, ethnographic observations, and documentary analysis across the nine participating sites. The analysis we report here is based primarily on interviews and documentary analysis. Ethnographic observations (over 850 hours) provided valuable data on how clinical teams carried out their Safer Clinical Systems projects in practice in the context of existing and competing demands, but are not reported in detail here.

Across the nine sites, we conducted 89 semistructured interviews in the testing phase and 39 in the extension phase with participating clinical team members and programme leaders. Sampling at the sites purposefully included a range of different roles in the programme, including the clinical leaders of each project and others. We also conducted 5 semistructured interviews in the testing phase, followed by 10 in the extension phase, with organisational senior leadership, comprising executive team/board members. Interviews explored general experiences of the programme as well as the use of the Safety Case approach specifically. Participants were informed of the aims and commissioners of the evaluation. All interviews were conducted by experienced social scientists using topic guides (online supplemental material 1). Interviews were conducted either in person or by telephone, between November 2012 and June 2016, and were digitally audio recorded and then transcribed for analysis.

Analysis, conducted by EL and guided by the wider team, was based on the constant comparative method28 combining inductive and deductive approaches. We coded interviews and observations using an inductive approach, deriving codes directly from each interview and then progressively clustering codes in higher order categories and themes. To strengthen explanatory power, this inductive strategy was complemented by theoretical concepts drawn from the wider literature.

GL and EL conducted a documentary analysis of the Safety Cases prepared by the clinical teams (table 2). We used recommendations and guidelines for writing and maintaining safety cases in other sectors,29–31 to organise the Safety Cases’ content thematically, and identified their main strengths and weaknesses in terms of completeness, presence of appropriate evidence and analyses to support the claims, consistency with the site’s safety improvement objectives, readability, and presence of a safety claim and confidence argument.

Table 2

Format and content of 13 Safety Cases reviewed

Finally, we organised our higher order themes and overall reflections using concepts and themes proposed by recent works on the topic.19 32 Regular team meetings and correspondence provided oversight of the analytical approach, consistency and adequacy of codes, and reporting. Given the nature of the programme, we did not undertake a formal test for theoretical saturation for the interviews or the Safety Cases.

Findings

Across the testing and extension phases of Safer Clinical Systems, we undertook 143 interviews with participants across programme leadership, clinical teams and organisational leadership. We analysed 13 submitted Safety Cases; although 14 should have been developed, one site from the extension phase struggled to implement the programme in full and did not produce a Safety Case.

In presenting our analysis below, we consider, first, participants’ views on the Safety Case as a novel approach to understanding and managing safety risk in healthcare, and second, the work that went into developing Safety Cases. We then turn to the analysis of Safety Cases themselves.

Views on the value of safety cases

By the end of the programme, members of the project teams and senior leadership in the participating organisations had largely come to see the Safety Case as a valuable approach, with the potential to make hazards visible in an accountable, systematic and scientific way. The analytical steps required to compile a Safety Case, such as process mapping the patient pathway, were seen to be particularly useful in proactively identifying threats to safety, rather than reactively managing incidents once they had happened. The role of Safety Cases in enabling an overarching, system-wide view of the hazards, rather than focusing on what happens in particular segments of the pathway, was also welcomed. Broadly, teams valued the possibilities of new ways of thinking about risk.

I like the idea that you just have one document that you can hand to somebody and say how safe is your system. I like the concept that you can say ‘Well this is what our system is like just now’. (Project participant)

Some organisational senior leaders agreed, at least in principle, that Safety Cases could offer value, and recognised the importance of a prospective approach to safety.

We have immensely complex systems which could be simplified and therefore made a bit more reliable. […] So something which looks at that could certainly be a useful thing, because it’s saying ‘Well actually here is a little nest of complexity which you can reduce, but it’s also a significant risk to the patient, because you’re missing information or you’re hurrying things through.’ […] (Senior leader)

Other senior leaders, however, were not always clear on the practicalities of the approach, and some found it difficult to identify the added value of Safety Cases. They suggested, for example, that existing risk management tools performed very similar functions.

If you look at our risk register, mitigation is the last box, we spend a good amount of time on the other things, but if we were to spend any time on a particular risk it would be on mitigation […]. And so that sounds like a very similar process, and so I’m back to what the delineation is between Safety Case and risk register. (Senior leader)

Some project teams saw the Safety Case as useful for a secondary reason: that of securing the attention and interest of senior leaders in their organisations. Their hope was that, by providing new evidence and analysis of the riskiness of clinical systems, senior management attention, support, and resources might be solicited.

So they’ve [senior management] actually kind of bought into it, so I think they will feel pressure to deliver. (Project participant)

However, as we explain below, the exact fit of Safety Cases into the existing ecology of tools and documents in healthcare was not clear to all participants.

Preparing safety cases

Project teams were required to learn new techniques to prepare the Safety Cases, including use of systematic methods to identify and assess risks in their clinical pathways, to propose risk controls and to identify metrics that could be used to monitor systems. Production and communication of Safety Cases also required skills in making persuasive claims, structuring arguments and presenting evidence compellingly. The participating teams were, understandably, unfamiliar with many of these skills, and expressed uncertainties about the expected structure, content and style of the Safety Case itself, especially in terms of what issues to emphasise and how to evidence them. Participants described compiling and drafting the Safety Case as labour-intensive and difficult.

I think the other bit that we have been challenged by is the actual writing of the Safety Case and again it is because it is fairly new to healthcare in general. I think we are going to go through a few reiterations before we fully understand what it is and how to use it. (Project participant)

Notwithstanding the training and support received in the ‘testing’ phase, teams continued to report difficulties with preparing and drafting Safety Cases well into the extension phase. A recurrent source of ambiguity related to the size and scope of the clinical system that the Safety Cases should target. The first, diagnostic, step in the Safer Clinical Systems process involved defining the clinical pathway of focus. However, determining the boundaries of the pathway was far from straightforward. Furthermore, clinical pathways typically involved dozens of technological systems (eg, infusion pumps, IT systems) and sociotechnical processes (eg, guidelines, multidisciplinary meetings). Each might be amenable to risk assessment and management individually, but making sense of their connections, aggregate risks and potential interactions was a much more complex task.

It’s not a linear process and you do go back trying to understand another bit of the process that you thought you understood, but actually didn't as (…) you had hoped. (Project participant)

Once the pathways and their components had been determined (or at least approximated), project teams used a range of methods recommended by the Safer Clinical Systems programme, mostly derived from similar activities in other industries, to assess hazards and risks. The teams found the processes often challenging and time-consuming, with much discussion about the relative merits of different sources of data and evidence. Despite the challenges, teams generally concluded that conducting a systematic risk assessment using structured tools offered important new insights about clinical pathways.

What I’ve loved doing is, is talking to the staff and actually understanding what goes on, because it’s only when you understand what goes on that you can put it right… You’ve worked in the hospital for years and there’s still things you didn’t realise actually went on and things that people did that you didn't realise that they actually did. That was quite an eye-opener. (Project participant)

This new understanding through structured risk assessment enabled teams to identify multiple shortcomings that had potential to harm patients. The hazards they unearthed varied greatly in scale, level of risk posed and tractability to intervention. Some problems identified were amenable to resolution by the project teams, typically those with their roots in suboptimal service planning and pathway design, failures in communication among staff, or unclear distribution of responsibility or ownership of key processes. In response to these, most, but not all, sites designed or implemented some risk controls and documented them in their Safety Cases.

[Staff are] given the freedom and the autonomy to go ahead and do whatever things they think might be necessary to make things better. And that’s what people do, there is very much a culture of promoting change there, so they talked about small cycles of change, doing PDSA [Plan Do Study Act] cycles, and there’s a number of different projects that are running (Observation notes)

The extent to which these risk control interventions were consistent with the principles of the Safer Clinical Systems programme varied by site. Some project teams were able to draw on extensive experience, while others foundered at this stage. Common to all sites, however, was the identification of issues that were well beyond the scope of control of the front-line teams themselves. These vulnerabilities tended to originate from deep-rooted institutional and organisational pathologies or constraints. The importance of these problems, including, for example, staffing levels, was beyond doubt. Exactly what to do about them was less clear. Some project teams made valiant attempts to at least mitigate the risks through local work, but others appeared to accept that standard quality improvement efforts would not solve the issues. Some teams described the ongoing failure to mitigate the risks in their Safety Cases, in part, as noted above, in the hope that action from senior level might be provoked.

There were other things that were discussed at the [meeting] that they thought would be good as a team to change… but with some of them, they just knew it would be impossible to do so, so actually they didn't even bother to write them down. (Observation notes)

And the team very bravely went to the board and said, you know, our Safety Case is showing and we're telling you that our processes are unsafe, so it alerted people to the issues. […] So that was the strength of it. (Project participant)

However, as we now describe, for senior organisational leaders, both the imperative offered by the Safety Case and their own ability to act were less clear.

Content of, and responses to, safety cases

Our documentary review showed that submitted Safety Cases were highly variable in format and length (table 2). Some were highly structured, clearly written and precise in the use of evidence; others were harder to follow, lacking in clarity and less well organised. Our review also found that the descriptive elements (analysis of risk and hazards) were much better achieved than the assurance components (the safety claim and the confidence argument). Perhaps indicative of the intractability of some of the hazards to local-level intervention, or of the lack of expert safety science input in the project teams, most Safety Cases focused more on what had been done to determine the risk than on the level of safety that had been achieved in mitigating it. The documents also varied in the extent to which they reported the residual risks—those that remained despite the implementation of risk controls—in a clear and transparent way. For instance, one Safety Case noted that the diagnostic process had found 99 ways in which the pathway could fail, that the level of reliability in the microsystem remained lower than acceptable, and that radical re-design was needed. Others were more circumspect. Accordingly, while they documented sometimes-extensive mitigations, none of the Safety Cases could make an unambiguous safety claim supported by a powerful confidence argument. Some teams were not clear about how the evidence gathered and analyses conducted would contribute to the safety claim. Some sites listed project activities in lieu of offering an actual safety claim, reporting what they had done rather than the level of safety they had reached.

It was a useful, […] a really good repository for all the stuff we've done in the project, which I find really good. And has been good when people ask ‘What did you do?’ then you can say that this is what we did, so that’s useful. I'm not sure about whether people use it for what it is meant to be, which is to prove the pathway is now safe, I’m not sure whether it is used for that really. (Project participant)

Sometimes, safety claims were reported for each identified hazard (comparing levels of risk before and after the interventions they had implemented) rather than at the level of the clinical system. No site explicitly discussed whether risks had been reduced ‘as low as reasonably practicable’. Some sites claimed improvements as a result of the interventions they had implemented, but these did not always stand up to statistical scrutiny.33

The response of senior leadership to the Safety Cases submitted by teams varied. Some focused on the potential of the Safety Case for supporting organisational-level decision making in relation to risk reduction, resource allocation and strategic prioritisation.

I think it would be easier to respond to a Safety Case rather than more so the [other quality and safety] data I get. Because it’s back to first principles, what are we actually here to do… Then if we have an unsafe system everything else needs to fall in behind that, no matter cost pressures, no matter personal opinion, no matter all the other complexities in a big system. If an element is at risk, then that will always be made a priority. (Senior leader)

Not all senior leaders, however, were so confident that the insight offered by Safety Cases would or should inevitably lead to action. Some of the issues identified in the Safety Cases were beyond the ability not only of front-line teams to solve, but also of organisational leaders. Issues such as staffing levels, IT interoperability, and securing timely discharge required at least interorganisational coordination, resourcing, and support across the whole healthcare system. Additionally, the prevailing approach to risk management, and the perceived unavoidability of risks in the complex systems of healthcare, meant that the insights offered by a Safety Case might be unwelcome or not necessarily candidates for priority attention. In a system that relied primarily on retrospective risk management approaches, such as incident reporting and investigations, the need to tackle risks of recurrence (where problems had already manifested as serious incidents or ‘near misses’, and might do again) could easily take precedence over addressing seemingly ‘theoretical’ risks (problems identified through a detailed prospective analysis but yet to occur).

Because you’re saying actually ‘That was a potential harm on our risk management system, and we knew about it, and we were accepting that we don’t have enough money to address all of these issues at one time’. So there is, if you like, a prioritisation and rationing of where we put money according to the level of risk. […] It’s a bit like county councils putting crossings on roads, or a zebra crossing. You’re waiting for the fatality to occur before actually that will get the funding. (Senior leader)

Some feared that, given the legal obligation of boards to take action in response to safety risks that were revealed to them, an unintended consequence of the Safety Case approach might be to distract organisational focus from areas that were at least as worthy of attention but lacked the spotlight offered by the Safety Case. There was a perception that to have a Safety Case for every pathway or area of practice would likely be impossible, and that too many Safety Cases would be overwhelming.

The complexity of health care is such that there are hundreds of complex connected pathways that patients are on and so… You in theory could write hundreds [of Safety Cases] and that would then become meaningless because if you write hundreds no one would ever read them. So, I think it might be helpful in some specific examples… Rather than being something that could cover everything that we do to patients. (Senior leader)

Consequently, Safety Cases might serve not to assure about control of risks, but to unnerve—and unnerve leaders who were not always well placed to act, given the scope of their control and the other priorities they faced. In a system where Safety Cases were new, without an established function in safety management, and covering only a small proportion of safety-critical activity, the information they provided was not always readily actionable from a managerial perspective and, moreover, had potential to create uncontrolled reputational risk.

The danger is that what you have is a legal requirement to spend money on a Safety Case that actually is of low, relative risk to harms that are occurring in the absence of Safety Cases. So what you get is a spurious diversion of money to a wheel that has been made very squeaky, but actually isn’t causing harm… There’s the risk of diversion to get a perfect patch in one part of the system while everything else is actually terrible. (Senior leader)

(A danger) is, you know, if it does get into the wrong hands, particularly with the media, because there’s not the openness and the ability to manage some of this data, which needs explanation. But we do pride ourselves on being a very open and transparent board. (Senior leader)
