Best Practices for Identifying Hospitalized Lower Respiratory Tract Infections Using Administrative Data: A Systematic Literature Review of Validation Studies

Study Description

Our database search identified 1697 unique references and 838 studies via hand searching, of which 26 studies were eligible and contributed to the final analysis (Fig. 1). These were conducted in ten high-income countries (Table 1), including 14 (56%) from the USA [11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Six (23%) studies included a separate analysis for patients with comorbidities [14, 18, 22, 23, 25, 26], including chronic obstructive pulmonary disease and immune suppression. Overall, 24 studies reported on all-cause LRTI, including 17 on any or community-acquired LRTI (two on LRTI only, five on any pneumonia, nine on CAP, and one on empyema) [11, 12, 14,15,16, 18,19,20,21,22, 25, 27,28,29,30,31,32] and seven on HAP [23, 24, 26, 33,34,35,36] (Table S4). Three studies involved pathogen-specific LRTI (one study covered both all-cause and pathogen-specific LRTI; Table 3) [13, 17, 32].

Fig. 1

Table 1 Characteristics and validation measures of ICD algorithms to identify LRTI in adults (excluding those specific to hospital-acquired infection)

Among the 53 algorithms, the reference standards used to confirm LRTI were based either on review of medical files (63% of algorithms), including clinical, laboratory, and/or radiological data, or on the diagnosis established by the treating physician (Table S4). Ten studies used reference standards based on explicit clinical criteria.

Overall, 22 of the 26 included studies were graded as having a risk of bias: 16 had an unclear risk of bias in at least one domain, and 11 had a high risk of bias in at least one domain (see Table S5 and Fig. S1).

Characteristics and Accuracy of ICD Algorithms

The algorithms included ICD-9 (n = 17 studies) or ICD-10 (n = 9) codes, alone or in combination with additional criteria such as length of stay (n = 2) or free-text search, including natural language processing (n = 2). All studies except three HAP studies included ICD codes from the “classical” Pneumonia and influenza ICD group (ICD-9 480–488 or ICD-10 J10–J18), referred to here as classical pneumonia codes. These were the only codes in seven studies, while ten studies also included codes for pneumonia due to specific pathogens (ICD-9 001–139 or ICD-10 A and B codes) and/or other respiratory codes (ICD-9 460–519 or ICD-10 J codes other than the pneumonia codes above). Some algorithms included (n = 7: five CAP and two HAP studies) or explicitly excluded (n = 2 CAP studies) “aspiration pneumonia” codes (ICD-9 507 or ICD-10 J69). Classical pneumonia codes were required to be in the primary position only (n = 7), in any position (n = 12), or in a position that was not stated (n = 4). Three studies additionally accepted codes for disease severity (e.g. respiratory failure or sepsis) in the primary position when pneumonia codes were in a secondary position. HAP studies included additional criteria, such as specific codes for nosocomial infection and/or pneumonia codes not present on admission (see below).
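To make the structure of these code-based case definitions concrete, the sketch below shows how a classical-pneumonia-code algorithm with a code-position requirement might be applied to a discharge record. It is a minimal illustration only: the record layout, the function names, and the inclusion of aspiration pneumonia codes are hypothetical assumptions, since the included studies did not publish executable code.

```python
# Minimal sketch of an ICD-based case-finding rule (hypothetical field and
# function names; illustrative only, not taken from any included study).

CLASSICAL_PNEUMONIA = {f"J{n}" for n in range(10, 19)}   # ICD-10 J10–J18
ASPIRATION = {"J69"}                                     # optionally included or excluded

def has_code(record_codes, code_set, primary_only=False):
    """record_codes: list of (icd10_code, position), position 1 = primary diagnosis."""
    for code, position in record_codes:
        if code.split(".")[0] in code_set and (position == 1 or not primary_only):
            return True
    return False

def flag_lrti(record_codes, primary_only=True, include_aspiration=False):
    """Return True if the discharge record meets this illustrative LRTI definition."""
    target = CLASSICAL_PNEUMONIA | (ASPIRATION if include_aspiration else set())
    return has_code(record_codes, target, primary_only=primary_only)

# Example: pneumonia coded only in a secondary position
record = [("A41.9", 1), ("J18.1", 2)]            # sepsis primary, lobar pneumonia secondary
print(flag_lrti(record, primary_only=True))      # False – the primary-position-only variant misses it
print(flag_lrti(record, primary_only=False))     # True  – the 'any position' variant captures it
```

The “severity code in primary position” variants described above would add a further branch that accepts a sepsis or respiratory-failure code in the primary position whenever a pneumonia code appears in a secondary position.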

Among the 26 studies, eight reported only PPV and/or NPV and were excluded from the analysis of algorithm performance (see “Methods”); their data are presented in Table S6. The remaining 18 studies reported sensitivity and/or specificity, and the performance of their algorithms is described below by clinical outcome. These include 16 studies on all-cause LRTI (six on HAP only) and two on pathogen-specific LRTI.

Any or Community-Acquired LRTI, All Causes

Among the ten studies on non-HAP all-cause LRTI (i.e. excluding those focusing only on HAP), two, three, and five studies involved any LRTI, any pneumonia, and CAP, respectively (Table 1). The 18 algorithms did not differ across these outcomes and included either ICD-9 (n = 15) or ICD-10 (n = 3) codes. Sensitivity was at least 80% in 10 of the 18 algorithms reporting it, while specificity was above 90% in 9 of the 16 reporting it. LR+ was high (≥ 5) in about three-quarters of the algorithms (13/17), and LR− was low (≤ 0.20) in about one-third (6/17).
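For readers less familiar with likelihood ratios, LR+ and LR− can be derived directly from sensitivity and specificity (LR+ = Se / (1 − Sp); LR− = (1 − Se) / Sp). The short computation below uses made-up values for illustration, not figures from any included study.

```python
# Likelihood ratios from sensitivity and specificity (illustrative values only).
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # LR+ = Se / (1 - Sp)
    lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - Se) / Sp
    return lr_pos, lr_neg

# Hypothetical algorithm with 85% sensitivity and 95% specificity
lr_pos, lr_neg = likelihood_ratios(0.85, 0.95)
print(round(lr_pos, 1), round(lr_neg, 2))      # 17.0 0.16 – meets LR+ >= 5 and LR- <= 0.20
```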

The three algorithms based on classical pneumonia codes in the primary position only (with or without aspiration pneumonia codes) yielded low sensitivity (range 55–72%) and high specificity (> 93%), with varying LR+ and LR− [11, 21, 32]. Sensitivity increased with minimal loss of specificity when algorithms also accepted codes of pneumonia severity (sepsis or respiratory failure) in the primary position with pneumonia in a secondary position: by 15% compared with the primary-position-only algorithm [11], and by 5% compared with that algorithm combined with codes for other pathogens and other respiratory codes [15]. Sensitivity increased by 13% when infection or other respiratory codes (such as empyema, pleurisy, or lung abscess) were added to pneumonia codes in the primary position only, while specificity remained high [11]. In the four algorithms with pneumonia ICD codes in any position, sensitivity (57–98%), specificity (62–97%), LR+ (2–32), and LR− (0.02–0.46) varied widely [27,28,29, 32], and no major change was observed when other infection or respiratory codes were added [27]. Sensitivity was high (85–89%) and LR− was below 0.2 when text search (text mining or natural language processing) was added to ICD codes, while specificity (78–98%) and LR+ varied [19, 31]. More complex algorithms using predictors identified through analysis (such as length of hospital stay) reached high sensitivity (81–89%) and a lower LR− (around 0.2), but also lower specificity (63–82%) and LR+ (2–5) [21].

The influence of the reference standard used for case confirmation was illustrated in one study, in which confirmation by radiological data alone led to a drop in both sensitivity (from 98% to 89%) and specificity (from 97% to 62%) compared with review of the medical charts for the same algorithm [29]. The patient population also affected algorithm performance when different groups were included, with higher sensitivity and lower specificity in older adults (≥ 65 years) than in younger adults (18–64 years) [21, 29], and higher sensitivity in hospitalized patients than in those seen in emergency departments [19].

Hospital-Acquired Pneumonia, All Causes

Among the six studies (nine algorithms) on HAP (Table 2), five involved any HAP (two using ICD-9 and three using ICD-10) and one, based on ICD-9, included ventilator-associated pneumonia (VAP) only [23, 24, 26, 33,34,35]. Six algorithms included specific ICD codes for HAP or VAP [24, 34, 35], and five included classical pneumonia codes (without HAP-specific codes in three algorithms) [23, 24, 26, 33]. All nine algorithms accepted ICD codes in a secondary or any position (or the position was not stated), and six required the ICD codes to be not present on admission [24, 26, 34]. The three algorithms that used HAP/VAP-specific codes alone with the present-on-admission criterion had very low sensitivity (≤ 25%), high specificity (≥ 98%), high LR+ (range 83–233), and poor LR− (0.75–0.77) [24, 34]. When classical pneumonia codes were added to the specific VAP code and the present-on-admission criterion, sensitivity increased to 61% and LR− improved (0.42–0.47), while specificity declined (83–93%) and LR+ dropped to 4–9 [24]. The three algorithms including only classical pneumonia codes displayed higher sensitivity (35–100%) than those using specific HAP/VAP codes, high specificity (99–100%), high LR+ (44–333), and varying LR− (0.00–0.65) [23, 26, 33]. Algorithms showed similar performance in patients on continuous invasive mechanical ventilation to that in the total patient population [24].

Table 2 Characteristics and validation measures of ICD algorithms to identify hospital-acquired pneumonia in adults

Pathogen-Specific LRTI

All three studies assessing pathogen-specific LRTI included pneumococcal pneumonia, and one study covered ten other pathogens [13]; together they included 26 different algorithms (Table 3), after excluding data for which the reference standard included possible or probable cases [17]. Pathogen-specific ICD-9 codes were included in all algorithms, in the primary position (n = 3 algorithms), in the primary or secondary position with severity codes (n = 12), or in any position (n = 11). General sepsis or bacteremia codes were added in 16 algorithms. The reference standard was based on laboratory tests, with or without clinical and radiological criteria.

Table 3 Characteristics and validation measures of ICD algorithms to identify pathogen-specific CAP in adults

The performance of algorithms varied across specific pathogens [13], with sensitivity ranging from 14% for parainfluenza to 96% for influenza, while specificity was consistently high (≥ 98%). In the three studies involving pneumococcal pneumonia, the sensitivity of the pneumococcus-specific code (ICD-9 481) alone was low to moderate (35–58%), while specificity was high (98–99%). In one study, sensitivity was 45% when the pneumococcal pneumonia code was in the primary position and 58% when it was in any position. In the same study, sensitivity increased from 58% to 89% when codes for pneumonia due to an unspecified organism (ICD-9 485–486, any position) were added [17]. However, specificity then declined from 98% to 45%. The addition of the acute respiratory failure code (518.81) did not improve performance [17].
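The trade-off reported above, where broadening the code set raises sensitivity at the cost of specificity, can be reproduced on a hypothetical 2 × 2 table. In the sketch below, the counts are invented and merely chosen so that the resulting proportions echo the values reported in [17]; they do not correspond to that study's actual data.

```python
# Sensitivity and specificity from a 2x2 table for a narrow vs. a broadened code set
# (all counts are hypothetical and chosen only to illustrate the trade-off).
def se_sp(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Narrow code set (pathogen-specific code only): misses many true cases
print(se_sp(tp=58, fn=42, tn=980, fp=20))    # (0.58, 0.98)

# Broadened code set (adds unspecified-pneumonia codes): catches more true cases,
# but also flags many patients without the pathogen of interest
print(se_sp(tp=89, fn=11, tn=450, fp=550))   # (0.89, 0.45)
```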

Best Practices Learned from Included Studies

Distinguishing Community-Acquired from Hospital-Acquired Pneumonia

In the included studies, CAP and HAP cases were distinguished at the level of the inclusion of suspected pneumonia cases, of the algorithms applied to them, and/or of the reference standard. In all 26 studies, including those not providing sensitivity and specificity, the case definitions for the inclusion of suspected HAP or CAP frequently included a time threshold, i.e. the diagnosis or medical information being obtained or reported within, or more than, 24 or 48 h after hospital admission. At the algorithm level, seven of the nine CAP studies (10 algorithms) included codes (pneumonia codes, or severity codes with pneumonia as secondary) in the primary position only [15, 16, 18,19,20,21, 32]. Other criteria used to exclude HAP were antibiotic prescription within 72 h after admission [22] and the exclusion of patients with major trauma or elective surgery [15]. In HAP studies, the algorithm criteria most frequently used to exclude CAP were pneumonia ICD codes in a secondary or any position and/or not present on admission [24, 26, 34,35,36], and the use of HAP-specific codes, such as U69.00 (hospital-acquired pneumonia, classified elsewhere) [34, 35] and 997.31 for VAP [24, 33, 36]. However, the performance of these differentiating criteria has not been evaluated.
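As a concrete illustration of these differentiating criteria, the sketch below combines a 48-hour time threshold with a present-on-admission (POA) flag. The field names and the rule itself are a hypothetical interpretation of the criteria described above, not a published or validated algorithm.

```python
from datetime import datetime, timedelta

# Illustrative CAP/HAP split for a record carrying a pneumonia code, using a 48 h
# threshold and a present-on-admission (POA) indicator; hypothetical interpretation
# of the criteria described in the included studies.
def classify_pneumonia(admission_time, diagnosis_time, poa_flag):
    """Return 'CAP' or 'HAP' for a hospitalization with a coded pneumonia diagnosis."""
    within_48h = diagnosis_time - admission_time <= timedelta(hours=48)
    if poa_flag == "Y" or within_48h:
        return "CAP"            # present on admission or documented early
    return "HAP"                # coded later and not present on admission

admit = datetime(2023, 1, 1, 8, 0)
print(classify_pneumonia(admit, datetime(2023, 1, 1, 20, 0), poa_flag="N"))  # CAP
print(classify_pneumonia(admit, datetime(2023, 1, 4, 9, 0),  poa_flag="N"))  # HAP
```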

Algorithms Including Code Position

In the five non-HAP LRTI studies comparing the sensitivity and/or specificity of algorithms [11, 15, 19, 21,
