Primary Care Asthma Attack Prediction Models for Adults: A Systematic Review of Reported Methodologies and Outcomes

Introduction

During an asthma attack, people with asthma experience a temporary exacerbation of their symptoms, including wheezing, coughing, breathlessness, and chest tightness, which can result in the need for emergency treatment to prevent fatality.1 There are many possible triggers of asthma exacerbation, including viruses, allergies, irritants, adverse drug reactions, and air pollutants.2–7 Asthma presents with high heterogeneity,8–10 so early identification of worsening of symptoms or lung function is a challenge for clinicians and patients, but there is great hope that machine learning tools may be able to assist and create pathways for early intervention. A recent report by Asthma and Lung UK estimated that respiratory conditions, including asthma, cost the UK economy £188 billion in 2019, highlighting the value in investing in the development of efficient tools to improve clinical outcomes and implement timely interventions.11

In recent years, two systematic reviews have explored asthma attack prediction models with slightly different characteristics and objectives.12,13 The 2017 systematic review by Loymans et al12 focused on investigating asthma attack predictors and assessing model performance. In contrast, Bridge et al13 (2020, but only including papers up to 2017) focused on comparing the methodology used in the development of prediction models for future asthma attacks, primarily reporting the impact of different model algorithms on predictive performance. The only major difference in their inclusion and exclusion criteria was that Bridge et al13 included studies of patients aged 12 years and over, whereas Loymans et al12 included studies with a mean population age over 18.

Both reviews place a strong focus on how to achieve the highest model predictive performance. While strong predictive performance is clearly important for maximizing patient benefit and increasing user trust in the tool, which is required to promote integration into existing care pathways, there are elements of a model's specification that may be even more influential on its impact. Important considerations include the explainability of the results (either overall, or for specific patients), the target population, and the outcome definition (including the time horizon of prediction).

The aim of this study was to provide an updated review of the literature, including studies published since 2020 exploring more complex and intensive machine learning approaches. In addition, we aimed to reflect on the differences between these models with a view to their implementation in clinical practice, and the balance of desirability and usability against predictive performance.

Methods

The methods and results of this systematic review were reported in line with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 statement.14 The checklist, and the location of information pertaining to each item, is presented in Appendix A.

Search Strategy

Two bibliographic electronic databases were searched in May 2023: PubMed and Embase. The search strategy is given in Appendix B. Reference lists of all included papers were checked for potentially contributory papers, and any relevant papers not identified from our search that were included in the Loymans et al and Bridge et al reviews12,13 were added. All papers had to be available in the English language.

Inclusion Criteria

Table 1 lists the inclusion and exclusion criteria by study design, population, outcome, setting, and metric. The research population of this systematic review is adults with asthma; however, studies were included if the mean age of the population was over 18 years and no patients under the age of 12 were included. Studies which reported on adults and children separately were included, but only the analyses of adult patients are described herein.

Table 1 Systematic Review Inclusion and Exclusion Criteria

Studies not related to the development or validation of multivariate prediction models were excluded, such as mechanistic studies or those which only reported association measures (such as odds ratios) rather than performance measures (such as sensitivity and specificity).

This systematic review only included longitudinal studies as cross-sectional studies are unable to assess the ability of prognostic models to predict future asthma attacks. In addition, letters, pre-prints, conference abstracts, protocols, book chapters, and literature reviews were excluded.

The outcome of the prediction model was the onset of an asthma attack, and studies that only reported other asthma-related events (such as post-asthma attack hospital discharge) or statuses (such as uncontrolled asthma or asthma severity) were excluded. Additionally, we excluded studies that aimed to detect, rather than predict, the clinically evident onset of an asthma attack – that is, the point at which there is contact between the patient and the healthcare provider. The distinction between detection and prediction was ascertained by proxy of whether the time between the index date and the asthma attack event was more than one week (if not, the study was excluded).
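The one-week proxy above amounts to a simple date comparison. The sketch below is illustrative only (the function name and interface are our own, not from any included study), assuming the gap is measured in whole days:

```python
from datetime import date

# Illustrative sketch (not the authors' code) of the one-week proxy used to
# separate prediction from detection studies: if the gap between the index
# date and the asthma attack event is one week or less, the study is treated
# as detecting an already clinically evident attack, and is excluded.
def is_prediction_study(index_date: date, attack_date: date) -> bool:
    """Return True if the study qualifies as prediction rather than detection."""
    return (attack_date - index_date).days > 7

# Example: a six-month gap between index date and event counts as prediction,
# whereas a four-day gap counts as detection.
print(is_prediction_study(date(2020, 1, 1), date(2020, 7, 1)))  # True
print(is_prediction_study(date(2020, 1, 1), date(2020, 1, 5)))  # False
```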

Finally, non-clinical predictive models were excluded, such as self-management and home-use prediction models.

Study Selection

The search results of the two databases were imported into Covidence and duplicates were automatically deleted. The study selection, including title/abstract screening and full-text review, was done independently by both authors, and discrepancies were resolved jointly by consensus.

There were 9067 studies included in the title and abstract screening, of which 46 were passed to full-text screening. Twenty-five studies met the review criteria for the data extraction and quality assessment steps, including all twelve studies from the review by Loymans et al12 and three of the nine studies from the review by Bridge et al13 (all three of which were included in both reviews). The studies excluded at each stage and the reasons for their exclusion are provided in Appendix C.

Data Extraction

Data extraction was carried out independently by the authors. The data extraction form included elements pertaining to population characteristics, model characteristics and methodology, and model performance evaluation. The full list of data extracted is presented in Appendix D. Where studies reported sufficient information for additional (unreported) binary classification measures to be calculated, these values were highlighted as derived rather than reported.

Results

Study Characteristics

Twelve studies identified in the review by Loymans et al12 were published between 1997 and 2016. Thirteen additional studies were added in our review, including one additional study in 2004 (which used a three-level risk stratification, but discussed outcomes in which the top risk level is compared to the lowest and middle risk levels15) and twelve between 2017 and 2023 (Table 2). Henceforth, studies will be referred to by the surnames of the lead authors. There were two papers each by Luo et al (denoted [A]16 and [B]17) and Schatz et al (denoted [A]15 and [B]18).

Table 2 Summary of Included Studies

Sixteen (64%) studies were conducted in the US. There was a single study from each of the United Kingdom,22 Sweden,21 New Zealand,38 Netherlands,29 Japan,32 China,24 Canada,20 and Belgium.27 One study30 was a secondary analysis of three multinational clinical studies, conducted across several different regions, including South Africa, Germany, Canada, France, Australia, the United Kingdom, Spain, and Sweden.

Study Population

The exact cohort inclusion and exclusion criteria differed between studies, but there was a clear distinction that some focused primarily on severe asthma. Most studies (n=20) included patients across all severity levels, with key variations including asthma ascertainment (including by diagnoses, prescriptions, and related healthcare encounters), requirements for stability (such as no exacerbations or infections in the last month), and exclusions for related respiratory diseases, such as COPD. For the remaining five studies, some additional filtering was applied to select only the patients with the highest risk of asthma attacks. Two studies specified the need for recent emergency care encounters (inpatient or emergency department): in the last year for Inselman19 and in the last three years for Peters.35 Bateman30 only included patients with moderate-to-severe asthma which was considered currently uncontrolled, whereas Eisner31 recruited currently stable patients with moderate-to-severe asthma with a history of positive allergy test. Finally, Miller34 recruited patients considered “difficult-to-treat” by their physician. For Loymans,29 the development dataset contained people with varying severity of asthma, but the external validation set specifically contained participants with poor symptom control and low lung function. Schatz [B]18 included all people with asthma, but tested the models in both the full population and in those with prior emergency department utilization.

Study Outcome

Outcome Ascertainment

Twenty-two studies only considered a single outcome, or composite outcome, while three studies investigated models for multiple different outcomes. These were Zein25 (three outcomes), Eisner31 (five outcomes), and Yurk36 (four outcomes). As such, there were 34 study-outcome combinations. While there were nuanced differences between the outcomes considered in each study, such as exact clinical code-lists, they can be broadly grouped into seven categories.

The first three categories relate to single source outcomes. The first category (n=3) was asthma attacks ascertained from primary care data – either through dispensing of systemic steroids (Zein25 and Eisner31) or a composite of systemic steroid prescribing and/or marked decline in Forced Expiratory Volume in one second (FEV1; Ellman38). The second category was asthma-related emergency department visits, as investigated by Zein25 and Eisner31 (n=2). The third outcome group was asthma-related inpatient admissions, as investigated by Noble,22 Zein,25 Eisner,31 Schatz [B],18 and Grana39 (n=5).

The next three categories relate to composites of the first three, in which any event is considered an attack. The fourth category, and the most common outcome, was a composite of systemic steroids, emergency department presentation, and/or hospitalization, investigated by nine studies (Inselman,19 Jiao,20 Lisspers,21 Wu,24 Martin,26 Xiang,28 Loymans,29 Bateman,30 and Sato32). This definition is aligned with the American Thoracic Society and European Respiratory Society (ATS/ERS) joint task force definition of a severe exacerbation.40 The fifth category was a composite of emergency department presentation and/or hospitalization, used by seven studies (Tong,23 Luo [A],16 Luo [B],17 Osborne,33 Schatz [A],15 Peters,35 and Lieu37). The sixth category was a composite of systemic steroids and/or hospitalization, used only by Schleich.27

Finally, there were seven miscellaneous outcomes. Eisner31 considered both asthma attacks explicitly stated as patient-reported in primary care data, and unscheduled primary care visits. Miller34 used self-reported asthma emergency department visits or hospitalizations. Yurk36 used self-reported emergency department visits, self-reported hospitalizations, “having missed work five or more days in the past month due to asthma” (self-reported), or “five or more asthma attacks per week in the past month or having symptoms most of the time between attacks” (self-reported).

Outcome Prediction Horizon

The outcome prediction horizon is the maximum duration from the index date (the start of the observation period) to the outcome. Most studies used one year as the prediction horizon (n=18/25). Osborne33 was the only study to use a longer prediction horizon (30 months), while six studies used shorter or variable horizons. Three studies used six months (Bateman,30 Inselman,19 and Miller34), Ellman38 used 20 weeks (which they refer to as a “treatment period”), and Lisspers21 used only 15 days. The Zein25 study could not explicitly report its prediction horizon, as it varied by participant.

Outcome Incidence

Only 2.9% of study participants had an emergency department visit in the Zein25 study, and between 1.2 and 1.8% of participants had an inpatient admission in the studies by Noble,22 Zein,25 Schatz [B],18 and Grana39 (Table 2). In the composite of emergency department presentation and/or hospitalization, the five studies looking at the general asthma population over 12 months reported an incidence of between 1.7% and 6.9% (Tong,23 Luo [A],16 Luo [B],17 Schatz [A],15 and Lieu37), while the study looking at the severe asthma population in the same time frame reported an incidence risk of 8.5% (Miller34). The Osborne33 study, which looked at the general asthma population over 30 months, reported an incidence risk of 18.2%.

The incidence risk of asthma attacks ascertained from primary care data (not necessarily requiring emergency care) was estimated as 32.8% in the Zein25 study and 27.5% in the Ellman38 study. In the studies using a composite outcome of systemic steroids, emergency department presentation, and/or hospitalization, the results were more mixed. The Lisspers21 study, which used an event horizon of only 15 days, found an incidence risk of only 0.04% – the lowest of any of the studies. Six studies looked at a 12-month horizon in a general asthma population, with incidence risks ranging from 0.31% to 54.8% (median 16.8%).

Three studies did not provide any information about the outcome incidence in their publications: Schleich,27 Bateman,30 and Eisner31 (for all outcomes).

Modelling

Statistical Methods

Across the 25 studies, 16 developed and compared multiple prediction models (Table 3). Multiple algorithms were tested in six studies (Inselman,19 Lisspers,21 Tong,23 Luo [A],16 Zein,25 and Xiang28), feature sets in seven studies (Loymans,29 Osborne,33 Miller,34 Peters,35 Eisner,31 Lieu,37 and Xiang28), algorithm hyper-parameters in one (Sato,32 and possibly others which tested multiple algorithms but did not explicitly state this), populations in two (Schatz [B]18 and Lieu37), and outcomes in three (Eisner,31 Zein,25 and Yurk36). Additionally, Schleich27 compared models predicting the incidence risk of at least one and of at least two asthma attacks, although only the former is discussed herein. Similarly, in studies which investigated multiple age groups, only the adult (or non-paediatric) population models are discussed herein.

Table 3 Methodology of Included Studies

Of these studies, six used the AUC to rank their developed models (Miller,34 Sato,32 Tong,23 Luo [A],16 Eisner,31 and Xiang28). Additionally, Lisspers21 used the Area Under the Precision-Recall Curve (AUPRC), an alternative to the AUC which may be preferred in cases with low outcome incidence. Unlike the other five studies, which simply selected the model with the highest AUC, Eisner31 compared two models for each outcome with different sets of features, and selected the model with the fewest features so long as it did not have a significantly lower AUC than the model built with the larger feature set. Lieu37 selected two final models to present, based on “clinical face validity”, while the other eight studies did not rank their developed models.

In the 19 studies which only considered a single algorithm, logistic regression was the primary choice (n=14, including with LASSO or elastic net for feature selection in Jiao20 and Wu24). Bateman30 used Cox regression, Osborne33 used Poisson regression, Luo [B]17 used gradient boosting trees, and Sato,32 Lieu37 and Peters35 used classification and regression trees (CART).

There were six studies which considered multiple algorithms. Both Tong23 and Luo [A]16 used gradient-boosting trees and the 39 algorithms native to the WEKA software.42 Lisspers21 used random forest, gradient boosting trees, recurrent neural network, and logistic regression (with multiple regularization methods). Inselman19 used elastic-net logistic regression, random forest, and gradient boosting trees. Zein25 used logistic regression, random forest, and gradient boosting trees. Xiang28 used logistic regression, multilayer perceptron, long short-term memory (LSTM), and Time-Sensitive Attention Neural Network (TSANN).

Zein25 and Inselman19 presented the results (unranked) for each algorithm. Tong23 and Luo [A]16 ranked gradient boosting trees (specifically XGBoost) highest according to AUC, Lisspers21 ranked them highest according to AUPRC, and Xiang28 ranked the TSANN highest according to AUC.

Model Validation

Four studies only carried out model development without validation, providing performance estimates in the same data that were used to train the model (Ellman,38 Eisner,31 Yurk,36 and Peters35; Table 3). Sixteen studies conducted internal validation. Eight used a random split sample (Inselman,19 Jiao,20 Lisspers,21 Martin,26 Xiang,28 Bateman,30 Osborne,33 and Lieu37), five used a temporal split – reserving data from a later year for testing (Tong,23 Luo [A],16 Luo [B],17 Miller,34 and Grana39), two used cross-validation (Wu24 and Sato32), and one used bootstrap resampling (Schatz [B]18). Six studies used external validation (Wu,24 Schatz [A],15 Noble,22 Zein,25 Schleich,27 and Loymans29).

Model Performance

Seventeen studies calculated the AUC of their model(s), of which 12 also reported binary classification performance measures, and 5 did not (Miller,34 Loymans,29 Eisner,31 Schleich,27 and Xiang28). Lisspers21 reported the AUPRC instead of the AUC. Seven studies presented neither: Bateman,30 Ellman,38 Peters,35 Grana,39 Schatz [A],15 Lieu,37 and Osborne.33 The highest reported AUC was 0.93, by Schleich.27

Of the twenty studies that reported at least one binary classification performance measure, fifteen presented both the sensitivity and specificity (for three studies, these were the only measures presented: Jiao,20 Wu,24 and Grana39). Of the five that did not, three presented only the PPV (Bateman,30 Ellman,38 and Osborne33), one the PPV and NPV (Peters35), and one the PPV and sensitivity (Lisspers21). Sato32 presented the sensitivity, specificity, and positive and negative likelihood ratios. The other eleven studies all presented the sensitivity, specificity, and PPV. Of these eleven, all but Lieu37 additionally presented the NPV, and three (Martin,26 Tong,23 and Luo [A]16) additionally presented the accuracy. Finally, of the twenty studies that reported at least one binary classification performance measure, seven presented a confusion matrix, a data table, or a figure which would allow other binary classification measures to be calculated (Tong,23 Luo [A],16 Luo [B],17 Ellman,38 Peters,35 Lieu,37 and Sato32).

In the prediction of rare outcomes, which is the case for many of these models, the majority of test cases will be predicted not to have the outcome, and correctly so, resulting in high specificity. However, correctly identifying the cases in which an asthma attack will occur, especially when the data on these cases are outweighed by the negative cases, can be more of a challenge. As such, the sensitivity may be more pertinent to the real-world “cost”. As shown in Table 4, the highest reported model sensitivity was 86% (Zein25), and the lowest was 7% (Lisspers21). However, higher sensitivity often comes at the price of lower specificity, so we calculated the balanced accuracy for all studies which provided sufficient information. The results ranged from 50% to 81%, with the highest value in the Tong study.23
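For readers unfamiliar with the measure, balanced accuracy is simply the mean of sensitivity and specificity. A minimal sketch (our own illustration, with hypothetical input values not taken from any included study):

```python
# Balanced accuracy: the mean of sensitivity and specificity. Unlike raw
# accuracy, it does not reward a model that predicts "no attack" for
# (almost) everyone when the outcome is rare.
def balanced_accuracy(sensitivity: float, specificity: float) -> float:
    return (sensitivity + specificity) / 2

# A model that never predicts an attack: sensitivity 0.0, specificity 1.0.
print(balanced_accuracy(0.0, 1.0))  # 0.5 — no better than chance

# Hypothetical model with high sensitivity but imperfect specificity.
print(round(balanced_accuracy(0.86, 0.70), 2))  # 0.78
```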

Table 4 Binary Classification Measure Performance of Included Studies

Perhaps even more crucially, with rare-outcome prediction, high sensitivity typically trades off against high PPV, as the easiest way to increase the sensitivity is to lower the classification threshold. The ratio of the misclassification costs between a false negative (missing an attack) and a false positive (flagging a low-risk patient) depends on the model setting, including the population and the event horizon (and the corresponding suggested intervention). As a simple investigation, however, we calculated the F1 measure, which is the harmonic mean of the sensitivity and PPV. The results ranged from 2% to 82% (Inselman19).
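The F1 calculation above can be sketched as follows (our own illustration; the input values are hypothetical, not drawn from any included study). Because it is a harmonic mean, F1 is dragged down by whichever of sensitivity or PPV is lower:

```python
# F1 measure: the harmonic mean of sensitivity (recall) and PPV (precision).
# It is low whenever either component is low, so a model that achieves high
# sensitivity only by flooding the predictions with false positives scores poorly.
def f1_score(sensitivity: float, ppv: float) -> float:
    if sensitivity + ppv == 0:
        return 0.0
    return 2 * sensitivity * ppv / (sensitivity + ppv)

print(round(f1_score(0.9, 0.05), 3))  # 0.095 — high sensitivity, poor PPV
print(round(f1_score(0.8, 0.8), 3))   # 0.8  — balanced components
```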

Discussion

Summary of Findings

This systematic review of 25 asthma attack prediction models aimed to review key differences in model design and investigate their impact on model predictive performance. The primary distinctions between the models related to the target populations, the outcomes (including the events and the future time horizons for prediction), and the statistical modelling approaches, but there were also fundamental variations in how predictive performance was evaluated, which affected the ability to directly compare model performance.

Most models (n=20/25) did not restrict the prediction to patients with severe asthma, and the most common outcome (n=9/25) was a composite including systemic steroid prescription, emergency department presentation, and/or hospitalization, in line with the American Thoracic Society and European Respiratory Society (ATS/ERS) joint task force definition of a severe exacerbation.40 Most studies used one year as the prediction horizon (n=18/25), but horizons ranged from 15 days to 30 months. Logistic regression was the most common algorithm, used in 20/25 studies, including six which tested multiple algorithms (however, it was not the highest-performing algorithm in any of these studies). Seventeen studies calculated the area under the curve (AUC), and 20/25 reported at least one binary classification performance measure.

Results in Context

The performance of a model should not be considered in isolation and can only be directly compared to models with the same study design: primarily the population, the outcome, the algorithm, and the model validation procedures.

Model performance may be affected by the outcome definition in multiple ways. Firstly, the prediction event horizon which gives the highest model performance will depend on the data used to make the prediction. For example, records from sporadic GP or secondary care contacts will likely not be sufficiently granular to detect a change in risk from one week to the next, especially if there was no contact between those dates. Secondly, the outcome ascertainment may result in very heterogeneous events all being labelled as an attack, despite great differences in severity and speed of onset. This is particularly likely to affect logistic regression models, which depend on low outcome heterogeneity,43,44 whereas tree-based models may fare better.

Finally, the incidence of the outcome (itself a function of the ascertainment, event horizon, and study population) is likely to affect performance, particularly if there are limited data on asthma attacks, or if the model is not appropriately set up to avoid trivially maximising accuracy (which, for rare outcomes, is most easily achieved by rarely predicting the event45). Generally, asthma attacks identified in both primary and secondary care, a longer event horizon, and/or a population with higher-severity asthma would be expected to yield a higher incidence. However, substantial variation was observed in the reported incidence of events, even in studies with comparable endpoints (by population and definition). This may be due to variable ascertainment of asthma attacks, affected by local clinical coding procedures or population demographics. The validity of administrative health data for purposes secondary to those of their initial collection must always be considered, and evaluated where possible through manual review or linkage to other data sources.
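The accuracy pitfall described above can be made concrete with a small sketch (our own illustration, with a hypothetical cohort): on a rare outcome, a degenerate model that never predicts an attack still attains high accuracy while identifying no attacks at all.

```python
# Illustrative sketch of why raw accuracy misleads with rare outcomes.
# On a hypothetical cohort with 2% attack incidence, an "always negative"
# model scores 98% accuracy but 0% sensitivity.
def evaluate_always_negative(n_patients: int, n_attacks: int) -> tuple[float, float]:
    true_negatives = n_patients - n_attacks  # every non-attack is correctly unflagged
    accuracy = true_negatives / n_patients
    sensitivity = 0.0                        # no attack is ever flagged
    return accuracy, sensitivity

acc, sens = evaluate_always_negative(10_000, 200)
print(acc, sens)  # 0.98 0.0
```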

Strengths and Weaknesses

This review has two main strengths. Firstly, we have provided an updated review highlighting papers since 2017 which have made use of improved computing capabilities to test a wider array of statistical methodologies.

Secondly, we have compared the performance of asthma attack prediction models by both the measures used in the studies’ evaluation, and the values themselves, contextualized with study design specifics. This allows the models to be contrasted with the nuance of the potential clinical use of the models.

However, the review has limitations. First, we made the decision to exclude studies which we considered to be asthma attack detection rather than prediction. Detection studies typically aimed to generate an automated system for asthma attack ascertainment and validate it against physician-diagnosed attacks. For example, a study by Kupczyk et al used electronic patient diary data to identify the start of an asthma attack.46 While the data used were from at least 2 days prior to the attack “start” (defined by when the attack was reported), the clinical onset had clearly already commenced, as evidenced by the change in patient-reported symptoms and measurements. The distinction between the two was ascertained on the basis of whether the time between the index date and the asthma attack event was more than one week.

Additionally, this review does not identify a “best” model to use as a benchmark, or indeed to identify the threshold for implementation in clinical practice. A clinically applicable model not only needs to have good predictive ability, but also be accepted by clinicians and patients. For example, a model which could be applied to anyone with asthma, had an outcome which was clinically meaningful and aligned well with a feasible intervention, was easily understandable and explainable, but had 1% lower accuracy than a model with none of these characteristics would almost certainly be preferable. More work is required to identify the most clinically important outcomes (relative to available interventions), populations, and model explainability mechanisms. This can only be conducted by consulting with clinicians, patients, and other stakeholders.

Finally, a risk of bias assessment was not included in this review, as our primary aim was to contrast the study designs and modelling methods used. However, we note several relevant reporting guidelines which highlight common pitfalls in study set-up and reporting: PROBAST (A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies, by Wolff et al47), TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis, by Collins et al48), and RECORD (Reporting of studies Conducted using Observational Routinely collected health Data, by Benchimol et al49). In particular, the RiGoR (Reporting Guidelines to address common sources of bias in Risk model development, by Kerr et al50) guidelines detail common sources of bias, including leakage between the training and testing data partitions.

Conclusion

Predictive performance is heavily influenced by the study design, including the population, the outcome definition, the algorithm, and the model validation procedures. Identifying the most clinically meaningful model characteristics is necessary to enable a “best” model to be identified and to highlight routes for future development. This will boost the likelihood of successful translation, adoption, and implementation at scale of clinical prediction models, bringing benefits to patients.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Asthma UK. UK asthma death rates among worst in Europe; 2017.

2. Tan WC, Xiang X, Qiu D, Ng TP, Lam SF, Hegele RG. Epidemiology of respiratory viruses in patients hospitalized with near-fatal asthma, acute exacerbations of asthma, or chronic obstructive pulmonary disease. Am J Med. 2003;115(4):272–277. doi:10.1016/S0002-9343(03)00353-X

3. Barr RG, Woodruff PG, Clark S, Camargo CA, Behalf of the Multicenter Airway Research Collaborati (Marc) Investigators ON. Sudden-onset asthma exacerbations: clinical features, response to therapy, and 2-week follow-up. Eur Respir J. 2000;15(2):266–273. doi:10.1034/j.1399-3003.2000.15b08.x

4. Woodruff PG, Emond SD, Singh AK, Camargo CA. Sudden-onset severe acute asthma: clinical features and response to therapy. Acad Emerg Med. 1998;5(7):695–701. doi:10.1111/j.1553-2712.1998.tb02488.x

5. Mazurek JM, Syamlal G. Prevalence of asthma, asthma attacks, and emergency department visits for asthma among working adults - national health interview survey, 2011–2016. MMWR Morb Mortal Wkly Rep. 2018;67(13):377–386. doi:10.15585/mmwr.mm6713a1

6. Alhanti BA, Chang HH, Winquist A, Mulholland JA, Darrow LA, Sarnat SE. Ambient air pollution and emergency department visits for asthma: a multi-city assessment of effect modification by age. J Expo Sci Environ Epidemiol. 2016;26(2):180–188. doi:10.1038/jes.2015.57

7. McCarville M, Sohn MW, Oh E, Weiss K, Gupta R. Environmental tobacco smoke and asthma exacerbations and severity: the difference between measured and reported exposure. Arch Dis Child. 2013;98(7):510–514. doi:10.1136/archdischild-2012-303109

8. Agusti A, Bel E, Thomas M, et al. Treatable traits: toward precision medicine of chronic airway diseases. Eur Respir J. 2016;47(2):410–419. doi:10.1183/13993003.01359-2015

9. Siroux V, Basagan X, Boudier A, et al. Identifying adult asthma phenotypes using a clustering approach. Eur Respir J. 2011;38(2):310–317. doi:10.1183/09031936.00120810

10. Skloot GS. Asthma phenotypes and endotypes: a personalized approach to treatment. Curr Opin Pulm Med. 2016;22(1):3–9. doi:10.1097/MCP.0000000000000225

11. Asthma + Lung UK. Investing in Breath; 2023. Available from: https://www.asthmaandlung.org.uk/research-health-professionals/research-influencing/true-cost-lung-conditions. Accessed October 17, 2023.

12. Loymans RJB, Debray TPA, Honkoop PJ, et al. Exacerbations in adults with asthma: a systematic review and external validation of prediction models. J Allergy Clin Immunol Pract. 2018;6(6):1942–1952.

13. Bridge J, Blakey JD, Bonnett LJ. A systematic review of methodology used in the development of prediction models for future asthma exacerbation. BMC Med Res Method. 2020;20(1):1–12. doi:10.1186/s12874-020-0913-7

14. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):89. doi:10.1186/s13643-021-01626-4

15. Schatz M, Nakahiro R, Jones CH. Asthma population management: development and validation of a practical 3-level risk stratification scheme; 2004. Available from: https://www.ajmc.com/view/jan04-1674p25-32. Accessed September 11, 2023.

16. Luo G, He S, Stone BL, Nkoy FL, Johnson MD. Developing a model to predict hospital encounters for asthma in asthmatic patients: secondary analysis. JMIR Med Inform. 2020;8(1):e16080–e16080. doi:10.2196/16080

17. Luo G, Nau CL, Crawford WW, et al. Developing a predictive model for asthma-related hospital encounters in patients with asthma in a large, integrated health care system: secondary analysis. JMIR Med Inform. 2020;8(11):e22689. doi:10.2196/22689

18. Schatz M, Cook EF, Joshua A, Petitti D. Risk factors for asthma hospitalizations in a managed care organization: development of a clinical prediction rule. Am J Manag Care. 2003;9(8):538–547.

19. Inselman JW, Jeffery MM, Maddux JT, et al. A prediction model for asthma exacerbations after stopping asthma biologics. Ann Allergy Asthma Immunol. 2022:S1081-1206(22)01972-X. doi:10.1016/j.anai.2022.11.025

20. Jiao T, Schnitzer ME, Forget A, Blais L. Identifying asthma patients at high risk of exacerbation in a routine visit: a machine learning model. Respir Med. 2022;198:106866. doi:10.1016/j.rmed.2022.106866

21. Lisspers K, Ställberg B, Larsson K, et al. Developing a short-term prediction model for asthma exacerbations from Swedish primary care patients' data using machine learning - based on the Arctic study. Respir Med. 2021;185:106483. doi:10.1016/j.rmed.2021.106483

22. Noble M, Burden A, Stirling S, et al. Predicting asthma-related crisis events using routine electronic healthcare data: a quantitative database analysis study. Br J Gen Pract. 2021;71(713):e948–e957. doi:10.3399/bjgp.2020.1042

23. Tong Y, Messinger AI, Wilcox AB, et al. Forecasting future asthma hospital encounters of patients with asthma in an academic health care system: predictive model development and secondary analysis study. J Med Internet Res. 2021;23(4):e22796. doi:10.2196/22796

24. Wu WW, Zhang X, Li M, et al. Treatable traits in elderly asthmatics from the Australasian Severe Asthma Network: a prospective cohort study. J Allergy Clin Immunol Pract. 2021;9(7):2770–2782. doi:10.1016/j.jaip.2021.03.042

25. Zein JG, Wu C-P, Attaway AH, Zhang P, Nazha A. Novel machine learning can predict acute asthma exacerbation. Chest. 2021;159(5):1747–1757. doi:10.1016/j.chest.2020.12.051

26. Martin A, Bauer V, Datta A, et al. Development and validation of an asthma exacerbation prediction model using electronic health record (EHR) data. J Asthma. 2020;57(12):1339–1346. doi:10.1080/02770903.2019.1648505

27. Schleich FN, Malinovschi A, Chevremont A, Seidel L, Louis R. Risk factors associated with frequent exacerbations in asthma. Respir Med. 2020;2:100022. doi:10.1016/j.yrmex.2020.100022

28. Xiang Y, Ji H, Zhou Y, et al. Asthma exacerbation prediction and risk factor analysis based on a time-sensitive, attentive neural network: retrospective cohort study. J Med Internet Res. 2020;22(7):e16981. doi:10.2196/16981

29. Loymans RJB, Honkoop PJ, Termeer EH, et al. Identifying patients at risk for severe exacerbations of asthma: development and external validation of a multivariable prediction model. Thorax. 2016;71(9):838–846. doi:10.1136/thoraxjnl-2015-208138

30. Bateman ED, Buhl R, O’Byrne PM, et al. Development and validation of a novel risk score for asthma exacerbations: the risk score for exacerbations. J Allergy Clin Immunol. 2015;135(6):1457–1464. doi:10.1016/j.jaci.2014.08.015

31. Eisner MD, Yegin A, Trzaskoma B. Severity of asthma score predicts clinical outcomes in patients with moderate to severe persistent asthma. Chest. 2012;141(1):58–65. doi:10.1378/chest.11-0020

32. Sato R, Tomita K, Sano H, et al. The strategy for predicting future exacerbation of asthma using a combination of the asthma control test and lung function test. J Asthma. 2009;46(7):677–682. doi:10.1080/02770900902972160

33. Osborne ML, Pedula KL, O’Hollaren M, et al. Assessing future need for acute care in adult asthmatics: the profile of asthma risk study: a prospective health maintenance organization-based study. Chest. 2007;132(4):1151–1161. doi:10.1378/chest.05-3084

34. Miller MK, Lee JH, Blanc PD, et al. TENOR risk score predicts healthcare in adults with severe or difficult-to-treat asthma. Eur Respir J. 2006;28(6):1145–1155. doi:10.1183/09031936.06.00145105

35. Peters D, Chen C, Markson LE, Allen-Ramey FC, Vollmer WM. Using an asthma control questionnaire and administrative data to predict health-care utilization. Chest. 2006;129(4):918–924. doi:10.1378/chest.129.4.918

36. Yurk RA, Diette GB, Skinner EA, et al. Predicting patient-reported asthma outcomes for adults in managed care. Am J Manag Care. 2004;10(5):321–328.

37. Lieu TA, Capra AM, Quesenberry CP, Mendoza GR, Mazar M. Computer-based models to identify high-risk adults with asthma: is the glass half empty or half full? J Asthma. 1999;36(4):359–370. doi:10.3109/02770909909068229

38. Ellman MS, Viscoli CM, Sears MR, Taylor DR, Beckett WS, Horwitz RI. A new index of prognostic severity for chronic asthma. Chest. 1997;112(3):582–590. doi:10.1378/chest.112.3.582

39. Grana J, Preston S, McDermott PD, Hanchak NA. The use of administrative data to risk-stratify asthmatic patients. Am J Med Qual. 1997;12(2):113–119. doi:10.1177/0885713X9701200205

40. Reddel HK, Taylor DR, Bateman ED, et al. An official American Thoracic Society/European Respiratory Society statement: asthma control and exacerbations - Standardizing endpoints for clinical asthma trials and clinical practice. Am J Respir Crit Care Med. 2009;180(1):59–99. doi:10.1164/rccm.200801-060ST

41. Shaw DE, Sousa AR, Fowler SJ, et al. Clinical and inflammatory characteristics of the European U-BIOPRED adult severe asthma cohort. Eur Respir J. 2015;46(5):1308–1321. doi:10.1183/13993003.00779-2015

42. Frank E, Hall M, Witten I. The WEKA workbench. In: Data Mining: Practical Machine Learning Tools and Techniques. 4th ed. Morgan Kaufmann; 2016.

43. Begg CB, Zabor EC. Detecting and exploiting etiologic heterogeneity in epidemiologic studies. Am J Epidemiol. 2012;176(6):512–518. doi:10.1093/aje/kws128

44. Sun B, VanderWeele T, Tchetgen Tchetgen EJ. A multinomial regression approach to model outcome heterogeneity. Am J Epidemiol. 2017;186(9):1097–1103. doi:10.1093/aje/kwx161

45. Rahman MM, Davis DN. Addressing the class imbalance problem in medical datasets. Int J Mach Learn Comput. 2013;3(2):224–228. doi:10.7763/IJMLC.2013.V3.307

46. Kupczyk M, Haque S, Sterk PJ, et al. Detection of exacerbations in asthma based on electronic diary data: results from the 1-year prospective BIOAIR study. Thorax. 2013;68(7):611–618. doi:10.1136/thoraxjnl-2012-201815

47. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170(1):51–58. doi:10.7326/M18-1376

48. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement. BMC Med. 2015;13(1):1. doi:10.1186/s12916-014-0241-z

49. Benchimol EI, Smeeth L, Guttmann A, et al. The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement. PLoS Med. 2015;12(10):e1001885. doi:10.1371/journal.pmed.1001885

50. Kerr KF, Meisner A, Thiessen-Philbrook H, Coca SG, Parikh CR. RiGoR: reporting guidelines to address common sources of bias in risk model development. Biomarker Res. 2015;3:2. doi:10.1186/s40364-014-0027-7
