Statistical concern on evaluating the consistency between gargle samples and nasopharyngeal (NP) swabs in an RT-qPCR-based mass screening approach for the diagnosis of COVID-19

Dear editor,

We read with interest the article entitled "Clinical performance and accuracy of a qPCR-based SARS-CoV-2 mass-screening workflow for healthcare-worker surveillance using pooled self-sampled gargling solutions: a cross-sectional study" by Olearo et al. [1]. This study reports real-life performance data for an RT-qPCR-based mass screening approach in a large cohort of asymptomatic healthcare workers (HCWs), using pooled gargling solution as a non-invasive sample type. The article shows that screening self-sampled gargling solutions by pooled RT-qPCR testing is highly effective in identifying SARS-CoV-2 RNA-positive HCWs, owing to the high accuracy of the RT-qPCR-based approach and the unmatched resource efficiency of sample pooling strategies. It thus represents a promising alternative to rapid antigen testing.

Although this article provides valuable information, we believe that some results of the authors' evaluation of the consistency between gargle samples and nasopharyngeal (NP) swabs (reference samples) are worth discussing. According to the authors' evaluation, the overall accuracy between gargle samples and NP swabs (N = 521) was 99.4% (95% CI 98.3–99.9%). We note, however, that the agreement between gargle samples and NP swabs was not assessed by the authors. It should be kept in mind that overall accuracy is not always an appropriate measure of the consistency between two observers (here, two sample types), because it depends on the prevalence of each category for each observer. For example, Table 1 shows that under both conditions (a) and (b), the prevalence of concordant data is 95.0% and that of discordant data is 5.0%, and the overall accuracy is 95.0% in both cases. Nevertheless, the corresponding Cohen's kappa values differ markedly: 0.260 (minimal agreement) versus 0.900 (strong agreement).

Table 1. Limitation of overall accuracy for assessing the consistency of two observers with different prevalences in two categories.
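To make this point concrete, the following sketch uses two hypothetical 2×2 contingency tables chosen by us for illustration; they are not the data underlying Table 1 or the study by Olearo et al., but they reproduce the pattern described above. Both tables contain 95 concordant and 5 discordant pairs out of 100, so the overall accuracy is 95.0% in both cases, yet Cohen's kappa (computed here with scikit-learn's cohen_kappa_score) is about 0.26 in one case and 0.90 in the other.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def expand(table):
    """Turn 2x2 counts [[a, b], [c, d]] into paired label vectors.

    Rows index the category assigned by observer 1, columns by observer 2.
    """
    y1, y2 = [], []
    for i, row in enumerate(table):        # observer 1 label
        for j, count in enumerate(row):    # observer 2 label
            y1 += [i] * count
            y2 += [j] * count
    return np.array(y1), np.array(y2)

# Hypothetical counts (not the study's data): both tables have 95/100 concordant pairs.
table_a = [[1, 2], [3, 94]]    # rare positives, skewed prevalence
table_b = [[47, 2], [3, 48]]   # balanced categories

for name, table in [("(a)", table_a), ("(b)", table_b)]:
    y1, y2 = expand(table)
    print(name,
          "accuracy = %.3f" % accuracy_score(y1, y2),
          "kappa = %.3f" % cohen_kappa_score(y1, y2))
# Expected output (approximately):
# (a) accuracy = 0.950 kappa = 0.260
# (b) accuracy = 0.950 kappa = 0.900
```

The label-vector form is used only because cohen_kappa_score expects paired observations; the same values can be obtained directly from the contingency table, as shown after Eq. (1) below.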

Cohen's kappa analysis [2] is suitable for evaluating the consistency between two observers and is calculated as follows:

$$\kappa \;=\; \frac{\sum_{i=1}^{n}\left(p_{ii} - p_i q_i\right)}{1 - \sum_{i=1}^{n} p_i q_i}, \qquad (1)$$
where $\kappa$ is the kappa value, $p_{ii}$ is the observed proportion of paired observations in which both observers assign category $i$, and $p_i$ and $q_i$ are the marginal sample frequencies of category $i$ for the two observers. According to McHugh [3], Cohen's kappa should be interpreted as follows: 0–0.20 indicates no agreement, 0.21–0.39 minimal agreement, 0.40–0.59 weak agreement, 0.60–0.79 moderate agreement, 0.80–0.90 strong agreement, and 0.91–1.00 almost perfect agreement.
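As a minimal sketch of how Eq. (1) can be applied in practice, the helper functions below (written by us for illustration, not taken from the cited works) compute kappa directly from an n × n contingency table of counts and attach McHugh's verbal interpretation. On the hypothetical tables above they reproduce the values given by scikit-learn.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from an n x n contingency table of counts, per Eq. (1).

    Rows are the categories assigned by observer 1, columns by observer 2.
    """
    t = np.asarray(table, dtype=float)
    p = t / t.sum()                  # joint sample frequencies
    p_ii = np.diag(p)                # observed agreement per category
    p_i = p.sum(axis=1)              # marginal frequencies, observer 1
    q_i = p.sum(axis=0)              # marginal frequencies, observer 2
    return (p_ii - p_i * q_i).sum() / (1.0 - (p_i * q_i).sum())

def interpret(kappa):
    """McHugh's interpretation bands for Cohen's kappa."""
    bands = [(0.20, "no agreement"), (0.39, "minimal"), (0.59, "weak"),
             (0.79, "moderate"), (0.90, "strong")]
    for upper, label in bands:
        if kappa <= upper:
            return label
    return "almost perfect"

# Hypothetical 2x2 tables from the sketch above: identical accuracy, different kappa.
for table in ([[1, 2], [3, 94]], [[47, 2], [3, 48]]):
    k = cohens_kappa(table)
    print("kappa = %.3f (%s)" % (k, interpret(k)))
# Expected output (approximately): kappa = 0.260 (minimal), kappa = 0.900 (strong)
```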

Therefore, we recommend combining Cohen's kappa analysis with overall accuracy when assessing the consistency between gargle samples and NP swabs.

CRediT authorship contribution statement

Ming Li: Formal analysis. Tianfei Yu: Writing – original draft.

Declaration of Competing Interest

All authors declare no competing interests regarding the present study.

Funding

This research was supported by a grant (LH2020C110) from the Joint Guidance Project of the Natural Science Foundation of Heilongjiang Province of China and a grant (YSTSXK201881) from the Fundamental Research Funds in Heilongjiang Provincial Universities.

Ethics approval

Not applicable.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

1. Olearo F, Nörz D, Hoffman A, Grunwald M, Gatzemeyer K, Christner M, et al. Clinical performance and accuracy of a qPCR-based SARS-CoV-2 mass-screening workflow for healthcare-worker surveillance using pooled self-sampled gargling solutions: a cross-sectional study. J Infect 83:589-593.
2. De Raadt A, Warrens MJ, Bosker RJ, Kiers HAL. Kappa coefficients for missing data. Educ Psychol Meas 79:558-576.
3. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med 22:276-282.

