Grand Rounds in Methodology: a new series to contribute to continuous improvement of methodology and scientific rigour in quality and safety

In clinical practice, ‘grand rounds’ are well known as a method for continuing medical education. In the early 1900s, grand rounds involved bedside teaching, but as they gained popularity, teaching sessions moved to the auditorium to accommodate a larger audience.1 Nowadays, grand rounds are generally targeted at a diverse audience and cover topics with broad appeal, but may also be organised for specific specialties, for example, medical,2 surgical,3 nursing4 or diagnostic5 grand rounds. Grand rounds are a way to help doctors and other healthcare professionals keep up to date in evolving areas that may be outside their core practice. While bedside rounding with a senior physician is an important part of on-the-job training, with the primary focus on immediate patient care, grand rounds are often multidisciplinary and present the ‘bigger picture’, as well as the newest research and treatments in a given area.

Similarly, some of the methodological approaches used in quality and safety research may not be well understood by clinicians and researchers, or may have been forgotten, depending on how long ago they completed their primary training. There may also be new methods available, or new insights regarding their use. Therefore, in this issue of BMJ Quality & Safety, we introduce a new series of Grand Rounds in Methodology, in which each paper will discuss a specific methodological topic relevant to the quality and safety field and point out the critical issues or decision points that affect the interpretation of results. The papers are intended as educational resources for teaching or mentoring purposes, as well as a resource to improve methodological rigour and guide reporting of quality improvement research more broadly. We aim to update healthcare professionals and researchers on:

Relatively new study designs and research methods—exploring when these might be most useful in practice.

Methods that have been around for decades but where mistakes are frequently encountered—considering how we can do better.

Current debates in methodology—exploring why and how these are relevant for the quality and safety field.

Each paper will start with a typical practical situation, akin to starting a clinical grand round with a case study, to point out in what type of situation this methodological issue will be relevant; each will end with recommendations to guide choices that can or should be made, including their effect on the interpretation of results.

Relatively new designs and methods

New study designs are often received with enthusiasm: people mainly see the advantages and do not always consider whether the design is in fact the best possible one for the question at hand. For instance, the stepped-wedge design is often suggested when the intervention is thought to do more good than harm, so that everybody should ultimately receive it (since it would be unethical to withhold it), and/or when it is impossible to deliver the intervention simultaneously to every cluster or site. An early systematic review showed that the first use of this design dated back to 1987, but neither the motivation for the design nor its methodological requirements were consistently reported prior to 2005.6 As outlined in a recent paper in our journal, a stepped-wedge design may not always be the best choice: rather than asking whether you should give the intervention to all sites, the question should instead be how long you can reasonably ask every site to wait for it.7 The authors of that paper also showed the practical constraints associated with a stepped-wedge design, which could lead to an alternative design being preferred that also has sufficient power and allows all sites to receive the intervention after the study has ended. It is this type of trade-off in choices, and the associated practical considerations and critical decision points, that we aim to highlight in papers published as part of the Grand Rounds in Methodology series. Similarly, we frequently see mixed methods studies without a justification of why the particular design is best suited to the research question, or an explanation of how the results from the qualitative and quantitative components of the study were ‘mixed’ or connected. These issues were also highlighted in a recent systematic review of mixed methods studies published in 15 health service management journals, showing that about a third of papers did not give a justification for the study design and almost 40% provided inadequate or no information about the connection between qualitative and quantitative findings.8
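Returning to the stepped-wedge example above, the sketch below builds a simple cluster-by-period treatment schedule to make the rollout logic, and the waiting time it imposes on the last clusters, concrete. The number of clusters, the number of periods and the one-cluster-per-period rollout are illustrative assumptions for the sketch, not a recommendation for any particular study.

```python
import numpy as np

# Illustrative stepped-wedge rollout: 6 clusters, 7 periods (1 baseline + 6 steps).
# Each period, one more cluster crosses from control (0) to intervention (1);
# by the final period every cluster receives the intervention.
n_clusters, n_periods = 6, 7

schedule = np.zeros((n_clusters, n_periods), dtype=int)
crossover_order = np.random.permutation(n_clusters)  # randomise when each cluster switches
for step, cluster in enumerate(crossover_order, start=1):
    schedule[cluster, step:] = 1  # once switched, a cluster stays on the intervention

print(schedule)
# Each row is a cluster, each column a period; the last cluster waits 6 periods
# before receiving the intervention, which is exactly the trade-off discussed above.
```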

Another relatively new design is the adaptive clinical trial (including platform trials used in comparative drug evaluations),9 for which the statistical literature has highlighted potential advantages over the traditional fixed design. This approach is not (yet) commonly employed in clinical research,10 which may in part be due to the more complicated procedures, including specialised statistical methods, needed to draw inferences. However, given the adaptations to quality improvement interventions frequently made as part of plan–do–study–act cycles, it seems worth exploring the utility of adaptive trial designs in a quality improvement context, as they would allow stronger inferences about intervention effectiveness without the need to fully develop the intervention first. For instance, when considering different elements and combinations to be included in an intervention bundle, a multiarm, multistage trial may allow simultaneous comparison of all of these against a common control.
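As a purely hypothetical illustration of the multiarm, multistage idea (not drawn from any cited paper), the simulation below compares three candidate bundle components against a common control and drops arms that look futile at an interim analysis. The effect sizes, stage sizes and futility boundary are invented values for the sketch; a real design would prespecify these and adjust for the interim look and multiple comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical stage sizes and true effects (common control plus three bundle components).
n_stage = 100
true_means = {"control": 0.0, "A": 0.10, "B": 0.30, "C": 0.05}
futility_z = 0.5  # arms with an interim statistic below this are dropped (illustrative only)

def stat_vs_control(arm_data, control_data):
    """Two-sample t statistic of an arm against the common control."""
    return stats.ttest_ind(arm_data, control_data).statistic

# Stage 1: recruit to all arms, then test each arm against control at the interim analysis.
data = {arm: rng.normal(mu, 1.0, n_stage) for arm, mu in true_means.items()}
kept = [a for a in ("A", "B", "C")
        if stat_vs_control(data[a], data["control"]) >= futility_z]
print("Arms continuing to stage 2:", kept)

# Stage 2: recruit only to the control and the remaining arms, then analyse.
# NB: these p values are not adjusted for the interim look or for multiple arms,
# which a real multiarm, multistage analysis would need to address.
for arm in kept + ["control"]:
    data[arm] = np.concatenate([data[arm], rng.normal(true_means[arm], 1.0, n_stage)])
for arm in kept:
    t, p = stats.ttest_ind(data[arm], data["control"])
    print(f"Final comparison {arm} vs control: t = {t:.2f}, p = {p:.3f}")
```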

On the analysis side, time series methods other than statistical process control (SPC) charts can be used to evaluate quality improvement interventions, such as interrupted time series or difference-in-differences analysis. But what exactly are the differences in design between these time series methods, what are the underlying assumptions that need to be met, and how do these affect the interpretation of results? Other examples of new analytical techniques include machine learning and text mining, where it may be helpful to know when these can be useful in quality and safety research, as well as their pitfalls.
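As one concrete illustration of the interrupted time series approach mentioned above, the sketch below fits a standard segmented regression to simulated monthly data. The variable names, the simulated effect sizes and the Newey–West adjustment for autocorrelation are assumptions made for the example; a real analysis would also need to consider seasonality, the shape of the underlying trend and the choice of baseline period.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical monthly outcome rates: 24 months before and 24 after an intervention.
months = np.arange(48)
post = (months >= 24).astype(int)                  # 1 from the intervention month onwards
time_since = np.where(post == 1, months - 24, 0)   # months elapsed since the intervention
rate = 10 - 0.05 * months - 1.5 * post - 0.1 * time_since + rng.normal(0, 0.5, 48)

df = pd.DataFrame({"rate": rate, "month": months, "post": post, "time_since": time_since})

# Segmented regression: pre-existing trend (month), immediate level change at the
# intervention (post) and change in trend after the intervention (time_since).
model = smf.ols("rate ~ month + post + time_since", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 3}  # Newey-West errors to allow for autocorrelation
)
print(model.params)
```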

Familiar methods where mistakes frequently occur

There are multiple examples of papers giving guidance on how specific methods should be used. The ‘statistics notes’ series11 in The British Medical Journal is a well-known example, discussing a wide range of issues related to statistical analyses (eg, correlation in restricted ranges of data),12 statistical testing (eg, how to obtain the confidence interval from a p value)13 and more general methodological issues (eg, units of analysis14 and the cost of dichotomising variables).15 Examples from BMJ Quality & Safety include previously published primers16 and ‘How to’ papers.17

What the Grand Rounds in Methodology series can add is guidance on statistical issues specific to quality and safety topics. For instance, SPC techniques are typically used in quality improvement projects to evaluate whether an intervention is effective in improving outcomes, using longitudinal measurements. There have been many tutorials describing specific issues related to the use of SPC, but little has been written to bring these together into a process for using SPC in quality improvement projects, highlighting the critical decision points, how these are inter-related and how they affect the inferences that may be drawn. As above, there is a trade-off between the choices made and the conclusions that can be drawn. As editors, we often encounter papers that have missed these critical decision points or have not reported them, which affects the results and reduces the chances of publication. Besides improving the quality of submitted papers, these critical decision points also highlight what should be reported in a quality improvement report and can therefore be seen as supplementing the SQUIRE guidelines.18
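To illustrate the kind of decision point referred to above, the sketch below computes control limits for a simple p-chart from monthly proportions. The denominators and event counts are invented for the example, and choices such as the chart type, the baseline period used to set the centre line, and which special-cause rules to apply would all need to be justified in a real project.

```python
import numpy as np

# Hypothetical monthly data: numbers of patients (denominator) and adverse events.
n = np.array([120, 135, 128, 140, 150, 132, 145, 138, 142, 136])
events = np.array([12, 15, 11, 14, 13, 12, 16, 10, 13, 12])

p = events / n
p_bar = events.sum() / n.sum()  # centre line: overall proportion across all months

# Three-sigma control limits for a p-chart; the limits vary with each month's denominator.
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = np.clip(p_bar + 3 * sigma, 0, 1)
lcl = np.clip(p_bar - 3 * sigma, 0, 1)

for i, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
    flag = "special cause?" if (pi > hi or pi < lo) else ""
    print(f"month {i:2d}: p = {pi:.3f}  limits = [{lo:.3f}, {hi:.3f}]  {flag}")
```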

Methodological issues are equally important and common for qualitative research. For example, qualitative studies are often inadequately grounded in existing theory, which limits our ability to discern how new findings add to knowledge and understanding. We have therefore commissioned a Grand Rounds in Methodology paper to highlight common mistakes and the critical decision points specifically related to qualitative research.

Current debates in methodology

There is often more than one type of analysis that can be used to address a particular question, and these may give similar or different results. Where the difference matters, it tends to generate lively debate among those interested in methodology. What we aim to do here is highlight issues generating debate that are particularly relevant for the quality and safety field. For instance, when comparing hospitals in terms of readmission rates, as currently done in the Hospital Readmission Reduction Programme (HRRP) in the USA, should we take into account the competing risk of mortality after discharge, which prevents people from being readmitted? This example was used in a recently published paper in our journal, which presents a framework to clarify the situations in which competing risks should be taken into account and those in which they should not, guided by the research question and the perspective from which it is asked.19 Another example concerns statistical code sharing: two studies reached opposite conclusions on whether the HRRP was effective in reducing readmissions, a discrepancy that is likely explained by different analytical choices and that can only be reconciled by having access to the statistical code used.20 That paper also presents a helpful approach to statistical code sharing, similar to the well-known plan–do–study–act cycle, consisting of three integrated cycles of code development, code review and code release. As editors, we support statistical code being shared, for example, as a supplemental file or on one of the available platforms. Replication of research is clearly important, and different results on, for example, the effectiveness of an intervention are often a starting point for further in-depth exploration to advance our understanding and knowledge.21
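To make the competing risks point above concrete, the standard definitions (a sketch in usual notation, not taken from the cited paper) contrast the cumulative incidence of readmission with a naive ‘one minus Kaplan–Meier’ estimate that censors deaths:

\[
F_{\text{readm}}(t) \;=\; \Pr(T \le t,\ \text{cause = readmission}) \;=\; \int_0^t S(u^-)\,\lambda_{\text{readm}}(u)\,\mathrm{d}u,
\]

where \(\lambda_{\text{readm}}(u)\) is the cause-specific hazard of readmission and \(S(u^-)\) is the probability of being alive and not yet readmitted just before time \(u\). The naive \(1-\text{Kaplan–Meier}\) estimate, which treats deaths as censored observations, ignores the fact that \(S(u^-)\) is reduced by deaths and therefore overestimates the cumulative risk of readmission; the framework in the cited paper helps decide when this distinction matters for the question being asked.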

As part of Grand Rounds in Methodology, we aim to contribute to such debates on the use of different methodological approaches. We also aim, more broadly, to add guidance on how variables related to race/ethnicity and social determinants of health should be reported and used in analyses,22 so that published research will increase our understanding of how to achieve equitable care and health, as part of our continued efforts to promote diversity, equity and inclusion in all aspects of quality and safety.

Summary and next steps

Choosing an appropriate design and methods aligned with the research question at hand is critical to ensure that the results are valid and will advance our knowledge. In the complex adaptive environment of healthcare settings, professionals may perceive a tension between methods being sufficiently pragmatic to improve care in clinical practice and being designed and applied to ensure scientific rigour and generalisability. With the Grand Rounds in Methodology series, we hope to make healthcare professionals and researchers more aware of the different choices and trade-offs in the methods used, as well as their impact on generalisability, in order to advance rigour in quality and safety research, benefit patient care and stimulate debate. We have given examples of topics already on our radar and have commissioned papers on some of these; we would also welcome suggestions, emailed to the senior methods editor (PJM-vdM), regarding other topics that fit the three categories outlined above.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.
