Response to: 'Diagnostic accuracy of novel ultrasonographic halo score for giant cell arteritis: methodological issues' by Ghajari and Sabour

We thank Ghajari and Sabour for their interest in our work and appreciation of our study.1 We have reported that the extent of vascular inflammation on ultrasound, as quantified by the halo score, is associated with ocular ischaemia in patients with giant cell arteritis (GCA).2 Furthermore, we investigated the diagnostic accuracy of the halo score for a clinical diagnosis of GCA, as well as a positive temporal artery biopsy.2 Here, we discuss the points raised by the authors.

First, the authors propose that our study was focused on ‘test accuracy’. We fully agree with the authors on this point, as we included the term ‘diagnostic accuracy’ in our title and used it throughout our manuscript. Our definition of ‘diagnostic accuracy’ was similar to that reported in the references provided by the authors, that is, the ability of a test to discriminate between patients with the target condition and those without.3 4 It appears that the authors use a slightly distinct definition for ‘diagnostic accuracy’, that is, ‘a test’s added contribution to estimate the diagnostic probability of disease presence or absence’. This is actually the definition of ‘diagnostic yield’, as indicated by the reference provided by the authors.3 Sackett and Haynes have previously described four stages of diagnostic research.5 In essence, our study falls within the third stage of diagnostic research, that is, determining whether the test distinguishes between patients with and without the target condition among those that were suspected to have the condition. We believe that Ghajari and Sabour point to the fourth and final stage of diagnostic research, that is, determining whether patients undergoing the test are doing better than similar untested patients. As emphasised in the conclusions and key messages of our study, we believe our findings warrant further investigation and validation. We agree with the authors that the investigation of the ‘diagnostic yield’ should be part of future research.3

Second, the authors indicate that we might have ‘misinterpreted’ the likelihood ratios (LRs) reported in our study. The authors state that the LRs obtained in our study (eg, 6.41 and 2.00) are ‘clear evidence for inaccuracy of the tests’. The authors refer to a review article, which reports that good diagnostic tests have an LR of >10 or <0.1.4 These particular LR cut-off points appear to be derived from a seminal report by Jaeschke et al.6 We certainly agree that diagnostic tests with such LRs are good, as they have a strong effect on the post-test probability of the target condition. However, tests with an LR closer to 1.0 might still have an important impact on the post-test probability, as also emphasised by Jaeschke et al.6 Diagnostic tests with LRs of >2.0 or <0.5 may still alter the post-test probability to a small or moderate degree.6–8 For example, a positive test with a positive LR of 6.41 can increase a putative pretest probability of 50% to a post-test probability of 87%.6–8 As recognised by clinical guidelines for GCA,9 10 it is well known that imaging tests for GCA do not provide absolute evidence for the presence or absence of this condition. The same is true for symptoms, physical signs and laboratory tests, none of which have LRs of >10.0 or <0.1 for a diagnosis of GCA.11 Overall, we do not agree with the authors’ claim that an LR between 2.0 and 10.0 should be considered ‘clear evidence for inaccuracy’ of a test. We therefore believe that the term ‘misinterpretation’ is not correct in this context.
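For readers who wish to verify the 87% figure quoted above, a minimal worked calculation using Bayes' theorem in odds form is sketched below, assuming the illustrative 50% pretest probability and the positive LR of 6.41 from our example; the general relationship (post-test odds = pretest odds × LR) is as described by Jaeschke et al.6

```latex
% Worked sketch: post-test probability from a positive LR of 6.41,
% assuming an illustrative pretest probability of 0.50.
\[
\text{pretest odds} = \frac{0.50}{1 - 0.50} = 1.0
\]
\[
\text{post-test odds} = \text{pretest odds} \times \text{LR}^{+} = 1.0 \times 6.41 = 6.41
\]
\[
\text{post-test probability} = \frac{6.41}{1 + 6.41} \approx 0.87 \;(\text{ie, } 87\%)
\]
```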

The third point raised by the authors suggests that we should have investigated the calibration of the halo score. As described in the reference provided by the authors, calibration is the ability of a test to correctly estimate the risk or probability of a future event.12 Thus, calibration is important for prognostic studies rather than diagnostic studies.12 We presume that the definition of our reference standard, that is, the final clinical diagnosis after 6 months of follow-up, might have caused the impression that we performed a prognostic study. The follow-up in the context of our study, however, was performed to verify that the diagnosis at baseline was correct. Clinicians sometimes have doubt about the clinical diagnosis early in the disease, and alternative diseases explaining the symptoms occasionally become overt during the first months after the initial diagnosis. The reference standard used in our study is therefore common practice in diagnostic research on GCA.

Although we commend Ghajari and Sabour for critically evaluating our work, we believe that the points raised by the authors are not indicative of ‘methodological issues’ or ‘misinterpretation’ in our study. As emphasised in our report, the ultrasonographic halo score awaits further validation by prospective, multicentre studies.

Ethics statements

Patient consent for publication

Ethics approval

The original study was performed in accordance with the Declaration of Helsinki. All patients provided written informed consent. The study was approved by the Berkshire Research Ethics Committee (REC#09/H0505/132).
