Scientist practitioners in complementary medicine practice: A case study in an N-of-1 trial

This paper first addresses the need to develop practitioners as scientists in complementary medicine and an epistemology grounded in the messy world of clinical practice as much as in the highly controlled world of the laboratory. It outlines the evolution of the scientist-practitioner model and then describes the N = 1 experimental model, its history and its conceptual fit with the practice of complementary medicine. Finally, using Gibbs' reflective cycle, two scientist-practitioners reflect on their experiences of involvement in an N-of-1 trial and how it has impacted their practice.

The scientist-practitioner model was conceived in the mid-twentieth century, as the discipline of psychology evolved from a mostly academic pursuit of understanding human behaviour to a concern with everyday clinical applications [1]. The model is founded on the premise that practitioners should be knowledgeable in both research and clinical practice. However, the model carries further assumptions: that skilled researchers will facilitate effective psychological services, that research is imperative to developing sound clinical practice and that researchers involved in clinical practice will produce better and more relevant research [2]. Today ‘evidence-based’ practice has become a lynch-pin concept in health care policy and delivery [3]. All health professionals are expected to be critical consumers of research and to provide care and treatment informed by extant evidence. However, there is often a rift between the producers of research and practitioners, and a considerable problem of translating research findings into practice [4,5]. The scientist-practitioner is perhaps needed more now than ever before.

There have been criticisms of the preoccupation with evidence-based practice as a guiding principle in messy areas of practice involving complex psychosocial processes, such as mental health, in which notions of values-based practice need to intersect with the limited empirical evidence of what is helpful [6,7]. In medicine, the assumption that practitioners are merely the recipients and enactors of evidence-based guidelines has also been criticised on numerous grounds [6]. Green [8] argued for a greater emphasis on ‘practice-based evidence’ and practice-based research production, given the inherent problems with much research focused on narrowly defined problems in homogeneous samples, and with guidelines synthesised largely from meta-analyses [6]. Greenhalgh and Wieringa [6] proposed that research should address a richer agenda which includes practical wisdom, the tacit knowledge developed by a community of practitioners, the links between power and knowledge, and a more macro level of knowledge shared by those with various interests in health care.

In general practice, primary care and mental health provision, heterogeneity, comorbidity and complexity are the norm [9], and the challenge is to translate the available evidence for each patient and family. Today's practitioners need not only to be intelligent consumers of research but also to adopt, at least in part, a scientific attitude to the available evidence and to their role as clinicians in relation to individual patients. The scientific spirit has been defined as ‘science versus tradition, experiment versus conformity to convention, scrutiny versus blind faith, reason versus custom’ ([10], p. 551). This spirit, when applied to the art of clinical practice, demands a rigorous and open-minded assessment of the constellation of presenting problems, careful and informed prescription of interventions and rigorous evaluation of those interventions. The scientist-practitioner model, as applied to clinical psychology, emphasised first a rigorous training as a researcher before embarking on practice. Now, in the age of evidence-based practice and with the proliferation of research and authoritative guidelines, health practitioners may need innovative pathways to foster a genuine scientist-practitioner approach to practice.

N-of-1 trial methodology arose from clinical practice through trial and error observations of the effectiveness of treatment. The first known N-of-1 trial was a comparison between the purgative effects of three types of rhubarb in 13 patients from a hospital in Bath, UK in 1786 [11]. Although there was no description of the methods, the reporting of the results reflected a cross-over design with multiple exposures of various durations [12]. The experiment concluded that the Turkish rhubarb was not inferior to the more expensive English varieties.

The first account of how to conduct unbiased N-of-1 trials was not published until the early part of the twentieth century. In 1932, an eight-chapter book [13] was published in German, detailing protocols for experimental methods including blinding, inclusion of control groups, matched placebos, establishing baseline conditions, sample size considerations, the use of rating scales and statistical methods to test the effectiveness of treatments; it was never translated into English [14]. In a chapter from this book, Therapeutic research as science: Criticism on the current situation, Martini raised concerns about the lack of objective evaluation of the effectiveness of pharmaceuticals in medical practice and the lack of concern within the medical profession about this [15].

Martini coined the term ‘clinical pharmacology’, and his contributions to treatment evaluation through experimental methods have latterly been widely acknowledged [14]. While Martini set out the controlled experimental method for individual trials, the focus was largely on providing generalisations for populations [11]. In the years since Martini's work was published, group clinical trials have become the gold standard controlled experimental method [16].

The next major development in the recognition of N-of-1 trial methodology came with the publication of an N-of-1 trial demonstrating that an individual was not responsive to a medication whose effectiveness had previously been established through clinical trial methodology [17]. The authors argued that group clinical trials focus on inferences to theoretical populations and completely overlook individual variability, yet it is individuals whom clinicians treat.

Hogben and Sim were the first to use N-of-1 trial methodology to optimise treatment for an individual patient, incorporating patient-reported outcomes in addition to allocation concealment and double-blinding. They designed a lengthy N-of-1 trial with eight exposures to each of three conditions (including two control conditions) in a single subject. The study clearly demonstrated that the patient was not responsive to the medication conventionally prescribed for the condition, although the authors noted that the patient had not completed 50% of the diary that formed the primary outcome. While this study has the hallmarks of a landmark study, it did not have an impact on practice [11], perhaps because it was not published in a clinical journal [18].

The contemporary surge of interest in N-of-1 trial methodology was stimulated by a publication linked to an N-of-1 trial service in Canada [19]. This was followed by a companion article guiding physicians in how to conduct N-of-1 trials in practice and reporting further on the service [20]. Since then, there has been considerable interest in N-of-1 trial methodology, including a CONSORT extension [21] on best-practice reporting of N-of-1 trials and the elevation of N-of-1 trial methodology to level 1 evidence by the Oxford Centre for Evidence-Based Medicine [22].

Despite this re-emergence of N-of-1 trial methodology, it is still not widely accepted as a routine method of clinical research [11]. Glasziou postulates that this could be due both to intrinsic limits associated with N-of-1 trials and to the logistics of implementing them. The intrinsic limits include: (i) the health condition should be chronic and non-terminal, rather than acute or degenerative; (ii) the treatment response should be short-term, to allow for several cross-overs; and (iii) there should be a sufficient withdrawal effect (i.e. the treatment is symptomatic rather than curative). Logistical issues relate to every aspect of organising and implementing the trial. For instance, arranging matched placebos for the intervention involves thinking through what kind of placebo is required and how and where to obtain it. Compounding chemists may be available to develop a placebo for ingestible interventions and help with blinding the placebo and active intervention. Planning and setting up a trial in practice can be time-consuming for practitioners, including the time needed to prepare and revise an ethics application for studies involving drug or nutritional interventions. Data collection and analysis also require considerable time beyond the practitioner's usual clinical workload.
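To illustrate the cross-over structure these requirements imply, the following minimal sketch (Python) builds a blocked, randomised active/placebo schedule for a single patient and summarises per-cycle differences in a symptom score. The function names, cycle count and scores are hypothetical examples and are not drawn from the trial described in this paper.

```python
import random
from statistics import mean, stdev

def nof1_schedule(n_cycles=4, seed=None):
    """Blocked, randomised schedule for a two-condition N-of-1 trial.

    Each cycle holds one active and one placebo period in random order,
    giving the repeated cross-overs that the design relies on.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_cycles):
        block = ["active", "placebo"]
        rng.shuffle(block)  # randomise the order within each cycle
        schedule.extend(block)
    return schedule

def cycle_differences(scores):
    """Per-cycle difference (active minus placebo) in a symptom score."""
    return [c["active"] - c["placebo"] for c in scores]

if __name__ == "__main__":
    print(nof1_schedule(n_cycles=4, seed=1))
    # Hypothetical per-cycle symptom scores for one patient (lower = better).
    scores = [
        {"active": 3.0, "placebo": 5.0},
        {"active": 2.5, "placebo": 4.5},
        {"active": 3.5, "placebo": 4.0},
        {"active": 2.0, "placebo": 5.5},
    ]
    diffs = cycle_differences(scores)
    print(f"mean active-placebo difference = {mean(diffs):.2f} (SD {stdev(diffs):.2f})")
```

In practice the schedule would be generated and held by a party blinded to the patient and practitioner (for example, the compounding chemist), and the analysis would typically be more formal than the simple per-cycle summary shown here.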

The N = 1 approach continues to have relevance in generating practice-based evidence [23]. N-of-1 trials that focus on individual outcomes are consistent with patient-centred models of care and the holistic approach characteristic of complementary medicine practice [24]. Holistic practice for complementary medicine practitioners means treating the whole person [25], that is, treating multiple aspects of the person both symptomatically and constitutionally. Consequently, treatments in complementary medicine may be complex, and not readily evaluated using group randomised controlled trial (RCT) methodology [26]. However, N-of-1 designs have been adapted in allied health for interventions, such as behavioural interventions, that aim to produce lasting change in behaviour and therefore cannot easily be withdrawn. In such circumstances, it has been argued that multiple baseline designs (MBDs), in which the time to treatment is randomised, are highly suitable for evaluating an intervention [27]. Exploring N-of-1 trial methodology and the related family of single case experimental designs (SCEDs), including MBDs, may be a way forward for complementary medicine research.
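As a minimal illustration of how the time to treatment can be randomised in an MBD, the sketch below (Python) assigns each participant a randomly chosen, staggered start week for the intervention. The participant labels and baseline lengths are invented for the example and carry no connection to any study cited here.

```python
import random

def multiple_baseline_starts(participants, possible_start_weeks, seed=None):
    """Randomly assign each participant a staggered treatment start week.

    In a multiple baseline design the intervention is introduced at different,
    randomly chosen time points; change that coincides with each participant's
    own start point, rather than with calendar time, supports a treatment effect.
    """
    rng = random.Random(seed)
    starts = rng.sample(possible_start_weeks, k=len(participants))
    return dict(zip(participants, starts))

if __name__ == "__main__":
    # Hypothetical example: three participants with baselines of 3, 5 or 7 weeks.
    print(multiple_baseline_starts(["P1", "P2", "P3"], [3, 5, 7], seed=2))
```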
