'You can't do quality between surgical cases and tea time': barriers to surgeon engagement in quality improvement

Much has been written about the challenges of engaging surgeons in quality and safety improvement work. In Taitz and colleagues’ seminal 2011 work, one interviewee alluded to the difficulty of engaging surgeons in quality improvement by opining that ‘you can’t do quality between surgical cases and tea time’.1 Several factors may explain why surgeons have historically been difficult to engage in quality improvement work, including a lack of improvement culture, limited training and skills, the inconvenient daytime scheduling of most improvement work, lack of remuneration and inadequate feedback on surgeon performance across various quality metrics.2 Understanding and targeting these factors may improve the culture of quality improvement among surgeons.

Considering these challenges with surgeon engagement in quality improvement, we read with much interest the article by van Schie and colleagues3 describing a multifaceted quality improvement initiative (QII), including monthly audit and feedback, education and an action implementation toolbox, aimed at improving patient outcomes after total hip and knee arthroplasty. The authors are to be commended for completing this multicentre randomised controlled trial using a relatively novel registry-nested design. We were encouraged to learn that there was enthusiasm among Dutch orthopaedic surgeons to receive feedback and participate in quality improvement work. The study used surgeon-specific data including length of stay, readmissions, complications and revisions. These metrics were compared with national registry data on expected events based on patient mix (found in appendix I of the paper). The authors found that the education meetings were attended by 85% of orthopaedic surgeons and that 90% of orthopaedic surgeons completed the surveys, indicating that they received and read their personalised feedback. Four of the intervention hospitals requested additional educational explanations of funnel plots and cumulative sum (CUSUM) charts, suggesting a knowledge gap among orthopaedic surgeons in interpreting audit and feedback data. Both intervention and control hospitals significantly increased the proportion of patients achieving a textbook outcome (absence of 1-year revision, 30-day readmission, 30-day complications and prolonged length of stay). However, intervention hospitals improved significantly more than control hospitals (ratio of adjusted ORs 1.24, 95% CI 1.05 to 1.48). Interestingly, intervention hospitals that introduced QIIs improved significantly more than control hospitals (1.32, 95% CI 1.10 to 1.57), whereas intervention hospitals that did not introduce any QII showed changes similar to control hospitals (0.93, 95% CI 0.67 to 1.30). While the authors conclude that their intervention was successful in improving patient outcomes, this study raises interesting questions about the involvement of surgeons in quality improvement and the impact of performance feedback on surgeons’ practice patterns.
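
To illustrate the kind of chart the intervention hospitals asked about, the following is a minimal observed-minus-expected (O-E) CUSUM sketch in Python. It is not drawn from the paper: the expected risks and outcomes are entirely hypothetical, and registry CUSUMs rely on formal case-mix adjustment models and control limits.

```python
# Minimal sketch of an observed-minus-expected (O-E) CUSUM for a binary
# adverse outcome (eg, 30-day readmission). All numbers are hypothetical;
# real registries derive expected risks from case-mix adjustment models.

# Predicted probability of the adverse outcome for each consecutive case.
expected_risk = [0.05, 0.12, 0.08, 0.20, 0.06, 0.15, 0.10, 0.09]

# Observed outcome for each case: 1 = adverse event, 0 = no event.
observed = [0, 1, 0, 1, 0, 0, 1, 0]

cusum = []
running_total = 0.0
for obs, exp in zip(observed, expected_risk):
    # Each case contributes (observed - expected); a value above zero means
    # more adverse events so far than the case mix predicted.
    running_total += obs - exp
    cusum.append(running_total)

for case_number, value in enumerate(cusum, start=1):
    print(f"case {case_number}: cumulative O-E = {value:+.2f}")
```

A curve that drifts persistently upwards signals outcomes worse than case mix would predict, whereas one fluctuating around zero suggests performance in line with expectation; in practice, formal control limits are set before any signal triggers review.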

A 2019 overview of various national joint replacement registries by Varnum and colleagues highlights the value of large registry datasets in identifying underperforming implants and driving practice change.4 In one example, data from the Swedish Hip Arthroplasty Register drove significant shifts in implant selection, cementing technique and surgical approach. This has led to more homogeneous use of high-performing implants and methods, resulting in excellent long-term implant survivorship for hip replacements performed for arthritis and femoral neck fracture. Reporting of adverse events in the Danish Hip Arthroplasty Register has led to national guidelines on thromboprophylaxis and to reductions in the use of blood transfusion following total hip replacement. While the various national joint registries have had a clear impact on surgeon decision-making with regard to implant selection and perioperative management, the use of surgeon-level feedback is more novel and less studied.

The Australian Orthopaedic Association National Joint Replacement Registry and the National Joint Registry for England, Wales, Northern Ireland and the Isle of Man have recently begun providing surgeon-level comparative reporting, including implant survivorship, complications and, more recently, patient-reported outcome measures (PROMs). This reporting is publicly available, allowing surgeons and patients to look up a practice profile and compare an individual surgeon’s results against the national average. Studies of surgeon-level comparative reporting have shown promise for improving the quality of patient care, including improved surgical margins during radical prostatectomy, reduced surgical site infections in orthopaedic trauma surgery and decreased operating room costs.5
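
As a rough sketch of how an individual surgeon’s event rate might be compared against a national average while accounting for case volume (one common basis for such comparisons, alongside the funnel plots mentioned above), the Python below uses entirely hypothetical figures and a simple binomial approximation; published registries use more sophisticated, risk-adjusted limits.

```python
import math

# Hypothetical comparison of individual surgeons against a national average.
national_rate = 0.04                 # national average complication rate (hypothetical)
surgeons = {                         # surgeon id -> (number of cases, number of events)
    "A": (50, 1),
    "B": (120, 11),
    "C": (300, 10),
}

for surgeon, (n_cases, n_events) in surgeons.items():
    observed_rate = n_events / n_cases
    # Approximate 95% control limits around the national rate for this case
    # volume (binomial standard error); smaller volumes get wider limits.
    half_width = 1.96 * math.sqrt(national_rate * (1 - national_rate) / n_cases)
    lower, upper = national_rate - half_width, national_rate + half_width
    flag = "within limits" if lower <= observed_rate <= upper else "outside limits"
    print(f"surgeon {surgeon}: rate {observed_rate:.3f} "
          f"(95% limits {max(lower, 0):.3f}-{upper:.3f}) -> {flag}")
```

The point of the volume-dependent limits is to avoid flagging low-volume surgeons whose rates fluctuate by chance, while still identifying genuine outliers among higher-volume surgeons.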

Providing timely data and feedback to surgeons is a crucial driver of surgical quality improvement.6 However, empowering surgeons to actively participate in quality improvement also depends critically on teaching the skills required to interpret and act on individual feedback and on addressing surgeons’ fears about the impact of publicly reported surgeon-level outcomes.7 Perceived barriers to providing surgeon feedback include concerns from surgeons about collecting outcome data and making the reports publicly available, particularly in private healthcare systems where negative outcomes could affect a surgeon’s marketability and referral patterns, and the potential for public reporting to foster risk-averse behaviour, with surgeons avoiding more difficult cases to improve their performance results.5 8 Passive data and feedback programmes such as the US National Surgical Quality Improvement Program are promising, but require surgeon engagement and an extensive change-management strategy to have a meaningful or sustainable impact on quality of care.9 10 Furthermore, there are questions about the selection of meaningful and accurate outcomes and the ability of these databases to discern important individual differences between surgeons.5

Several theories have been proposed to explain the lack of success with audit and feedback among surgeons, including lack of awareness of available quality data, misinterpretation of data, and ‘explaining away’ or scepticism of suboptimal quality metrics.11 Suspicion about the generalisability of published evidence to one’s own practice and institutional inertia may also impede the implementation of meaningful change.12 Successful surgical quality improvement work requires a comprehensive change culture and commitment to surgeon engagement,13 and clearly benefits from a coordinated effort at the macro, meso and micro levels.14 Audit and feedback has shown promise as an effective approach for driving improvement in surgical outcomes, but further exploration is needed to increase surgeon openness to this form of improvement. Strategies to empower surgeons to trust and use individual-level feedback include choosing common metrics that can be tracked automatically, developing a plan for sharing data and improving outcomes with hospitals and administrators, and using electronic medical records and dashboards to provide real-time feedback.5 Emerging technologies such as the operating room Black Box Research Program15 and robotic-assisted surgery16 may also provide novel strategies for technical improvement, including objective measurement of surgical performance and its impact on patient outcomes and surgical safety.

PROMs represent another feedback mechanism for surgeons, but research to date has not shown a link between PROMs feedback and changes in surgeon practice. Studies have shown that while surgeons appreciated PROMs feedback as reassurance that their practice was similar to that of their peers, they did not consider it sufficient to change their practice.17 Furthermore, there was no difference in outcomes between surgeons who did and did not receive PROMs feedback.18 The authors identified several themes that may explain why surgeons rarely changed practice based on PROMs feedback, including a lack of faith in the accuracy of the collected data, discordance between subjective and objective outcomes, and the lack of meaningful and actionable feedback that could improve PROMs scores.

Surgeon feedback can also be obtained from 360-degree evaluations, which gather feedback from many sources within a surgeon’s work environment (eg, fellow surgeons, department chairs, trainees, and clinic and operating room staff). This type of feedback has been shown to result in positive behavioural change among surgeons, including improved professional interactions and communication with coworkers.19 A 2018 study of 360-degree evaluations of orthopaedic surgeons at a single US academic medical centre concluded that both practising surgeons and trainees could benefit from this type of feedback as a way to promote empathic behaviours with patients and team members.20 Despite their accuracy and effectiveness in improving non-technical skills, 360-degree evaluations are time-intensive and costly, and many surgeons complain of ‘survey burnout’.19

Once again, we applaud the authors for their innovative study investigating the effects of monthly feedback on patient outcomes. Future investigation into the involvement of surgeons in quality improvement work and the impact of feedback on surgeons’ practice patterns should encompass a range of approaches, including strategies targeted at engaging surgeons in quality improvement projects, data-based feedback (as used in this paper), PROMs, 360-degree evaluations and technical assessment.

Ethics statements
Patient consent for publication
Ethics approval

Not applicable.
