Evaluating key performance indicators of the process of care in juvenile idiopathic arthritis

The access to care and safety KPIs were documented more frequently than the measurement of patient outcomes KPIs. In terms of overall documentation for the patient outcomes KPIs, the joint and pain assessment KPIs (#1 and #5) were documented more than 80% of the time, which is in line with benchmarks proposed by Lovell et al. [15], and these KPIs can readily be used in future analyses. Although the current documentation frequencies for each of the 10 JIA KPIs are sufficient to develop benchmarks of care, there is a significant opportunity for better clinical documentation and more consistent data collection for KPIs during clinical visits, which aligns with current clinical guidelines for JIA management [3].

A joint assessment was the most frequently documented KPI in this study. This finding was likely facilitated by the standardized layout of the SCM form, in which the physical examination has its own section that includes a description of the joint assessment. It is unknown whether the frequent documentation of this KPI will continue with the transition to Epic, a new comprehensive electronic health record (EHR) being implemented across Alberta [16, 17].

The data for the patient outcomes KPIs (physician's global assessment (PGA), assessment of functional ability, and measurement of clinical disease activity) were minimally documented in SCM. As noted earlier, the Childhood Health Assessment Questionnaire (CHAQ) and other patient-reported outcome measures are currently documented in paper clinic charts. Moving to Epic's electronic system may provide an opportunity to increase the frequency of documented CHAQ and PGA values if the assessments can be completed electronically rather than requiring a transfer of information from paper forms to the EHR.

In a prospective UK JIA study, data for the cJADAS-10 (in which an active joint count > 10 contributes 10 points) were available for 96%, 77%, 94%, 87%, and 80% of patients at baseline and at the 6-month, 1-year, 2-year, and 3-year follow-ups, respectively [18]. However, the UK study excluded patients for whom no cJADAS score could be calculated at any point, which could explain the large difference in data availability between the UK study and the present study.
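
For clarity, the cJADAS-10 is the sum of three components: the physician's global assessment (0–10), the patient/parent global assessment (0–10), and the active joint count truncated at 10. A minimal sketch of the calculation (the function name is illustrative) is:

```python
def cjadas10(physician_global: float, parent_global: float, active_joints: int) -> float:
    """cJADAS-10: physician global (0-10 VAS) + patient/parent global (0-10 VAS)
    + active joint count truncated at 10 (a count > 10 contributes 10 points)."""
    if not (0 <= physician_global <= 10 and 0 <= parent_global <= 10):
        raise ValueError("Global assessments must be on a 0-10 scale")
    return physician_global + parent_global + min(active_joints, 10)

# Example: PGA 3.5, parent global 2.0, 14 active joints -> 3.5 + 2.0 + 10 = 15.5
print(cjadas10(3.5, 2.0, 14))
```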

In a previous study of rheumatoid arthritis (RA), a disease activity performance measure was defined as the "percent of RA patients with ≥ 50% of total number of outpatient encounters per year with assessment of disease activity using a standardized measure", and 100% of patients met this measure [19]. This contrasts dramatically with the present study, in which only 12% of all JIA clinic visits had the cJADAS documented in SCM. The higher level of disease activity reporting by any acceptable composite measure (such as the Disease Activity Score 28 or the Clinical Disease Activity Index) in the RA study may reflect the use of Rheum4U, a data platform developed for inflammatory disease patients and implemented in both study clinics, which includes a patient-facing platform for collecting patient-reported outcomes. This suggests that these data are not routinely documented unless patients are part of a specific RA registry in which patient outcome data are explicitly recorded. Making it easy to monitor whether the required data for each KPI have been entered, via a platform that retrieves data for arthritis patients from EHR systems such as Epic, should be a key priority for the implementation of performance measures.
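
As an illustrative sketch of how this performance measure could be computed from visit-level records (the field names are hypothetical and not drawn from Rheum4U or any cited platform):

```python
from collections import defaultdict

def disease_activity_performance(visits) -> float:
    """Fraction of patients with a standardized disease activity assessment
    documented at >= 50% of their outpatient encounters in a year. Each visit
    is a dict with illustrative keys 'patient_id' and 'activity_documented'."""
    total = defaultdict(int)
    documented = defaultdict(int)
    for v in visits:
        total[v["patient_id"]] += 1
        documented[v["patient_id"]] += bool(v["activity_documented"])
    meeting = sum(1 for p in total if documented[p] / total[p] >= 0.5)
    return meeting / len(total) if total else 0.0

visits = [
    {"patient_id": "A", "activity_documented": True},
    {"patient_id": "A", "activity_documented": False},
    {"patient_id": "B", "activity_documented": False},
]
print(disease_activity_performance(visits))  # 0.5: patient A meets the measure, B does not
```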

Epic is one of the EHRs commonly used with the Rheumatology Informatics System for Effectiveness (RISE) registry in the United States [20]. The registry automatically collects data from EMRs and helps clinicians monitor quality of care by tracking patient-level performance on various measures, as well as allowing clinicians to compare themselves with their peers nationally [21]. A study using the RISE registry for RA found a performance rate for disease activity of 55.2% [22], and another found a performance rate of 53.6% in a random sample of RA patients [23], using the same KPI definition of documentation in ≥ 50% of outpatient encounters per year. The documentation of these KPIs in our context could be improved with a streamlined transfer of data from the paper chart to the electronic note. This could be facilitated through standardized headers for each clinician note, requiring data to be entered before the note can be completed, and having the software automatically calculate scores for assessments such as the cJADAS.
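
As an illustration of this kind of structured note logic, the following sketch blocks note completion until the KPI-relevant fields are entered and derives the cJADAS-10 automatically (the class and field names are hypothetical and do not represent Epic's actual template functionality):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicNote:
    """Hypothetical structured clinic note with required KPI fields."""
    physician_global: Optional[float] = None   # 0-10 VAS
    parent_global: Optional[float] = None      # 0-10 VAS
    active_joint_count: Optional[int] = None

    def can_sign(self) -> bool:
        # Block sign-off until every KPI-relevant field is documented.
        return None not in (self.physician_global, self.parent_global, self.active_joint_count)

    def cjadas10(self) -> float:
        # The score is derived by the system rather than transcribed from paper.
        return self.physician_global + self.parent_global + min(self.active_joint_count, 10)

note = ClinicNote(physician_global=2.0, parent_global=1.5)
assert not note.can_sign()     # a missing joint count blocks completion
note.active_joint_count = 3
print(note.cjadas10())         # 6.5, calculated automatically
```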

The assessment of arthritis-related pain was documented at a high frequency across the total number of visits in the cohort, but the frequency dropped when requiring documentation at every visit for every patient. This could be because there was no standard section for pain analogous to the physical examination section for joint assessment. Pain assessment is typically written at the start of the note, where anything that has occurred since the last clinic visit is described, and it is possible that pain information is not transferred to SCM if the patient's pain was not significant at that visit. A specific section for pain in the SCM notes would be a step toward improved documentation patterns. Adoption of standardized measurements for pain in JIA could also help in this regard. In addition to the pain visual analog scale in the CHAQ, there are other validated tools available for pediatric use in JIA, such as the SUPER-KIDZ tool and the Iconic Pain Assessment Tool (IPAT) [24].

Compliance with documentation of the access to care KPIs was high, except for the waiting times KPI. In SCM, the waiting time from referral was mentioned for only 17% of the cohort. After 2019, another database (Clinibase) contained the referral letters for each patient; however, as noted earlier, our cohort was diagnosed between 2016 and 2018. Visits during the first year after diagnosis and annual visits demonstrated strong compliance: all applicable patients had a visit during the first year after diagnosis, and 77% of patients had annual follow-up visits. Performance may be even higher if some visits were not entered into SCM.

Of the two safety KPIs, tuberculosis (TB) screening was documented in SCM more consistently than the KPI for laboratory monitoring for DMARDs. The TB screening KPI was documented in SCM for 95% of eligible patients. A noted limitation of the SCM data is that the dates of TB tests were not recorded; consequently, it could not be determined whether TB testing occurred prior to the patient's biologic therapy unless documented in the administrative databases. The Consolidated Laboratory Repository contains the tests for patients who received a TB screening blood test, which is typically only done for patients who recently had vaccinations or had prior TB exposure; this would explain the small number of patients with this test reported in the Consolidated Laboratory Repository. The TB skin test is an ACR-recommended screening test for latent TB and is the predominantly used test for TB screening in Calgary [25]. NACRS or Practitioner Claims were used to identify TB screening; however, it has been consistently found that using ICD codes to identify TB screening and diagnoses has a relatively low positive predictive value compared with other communicable diseases such as meningococcal and pneumococcal meningitis [26]. A TB skin test can also be performed in the Infectious Disease Clinic at the hospital, which may not appear in the NACRS data. A more accurate method of identifying TB screening should be a focus moving forward.

Although the laboratory monitoring KPI for patients on methotrexate and leflunomide was well documented in SCM, the Consolidated Laboratory Repository contained more accurate data. Even so, this study was unable to determine the exact biologic start date when no patient interaction was documented. Accuracy for start and stop dates could be improved by using the Pharmaceutical Information Network database to track the date a prescription was dispensed, by collecting patient feedback on their start and stop dates, or by using more explicit description headers in the clinic visit note. The lower compliance levels for laboratory testing demonstrate that there is an opportunity to improve compliance with, and documentation of, this KPI.

Quality measurement is dependent upon the availability of relevant data; this was cited as the greatest factor that facilitated or impeded the use of quality measures in the National Quality Forum report [27]. Data infrastructures need to be able to "talk to each other" and EHRs need to be "sufficiently robust" to generate the required information for "measure construction" [27]. Performance in practice has been shown to be highest when the EHR system includes rheumatology-specific templates, as these enable the collection and monitoring of key measures [28]. EHRs should be used to guide which process-related quality indicators are easily assessed in clinical care [29]. An important next step in the implementation of these KPIs is to align the measures across Canada and have them endorsed by the Canadian Rheumatology Association. Implementing nationally aligned and endorsed measures with a system similar to RISE or Rheum4U in Epic would provide the highest likelihood of physician uptake and potential for quality improvement.

There are four data sources for performance measurement in health care: administrative data, chart review (paper and electronic documents), surveys of patients/families/staff, and data generated and extracted from EHRs [30]. Measurement using administrative data necessitates the assumption that the diagnosis and procedure coding is accurate and that the medication prescribed matches the medication taken [30]. Chart reviews are labour intensive and are used to validate measures derived from administrative data and EHRs [30]. The use of electronic health records provides an "opportunity to access patient-centric clinical data and the ability to efficiently measure quality performance outcomes measures" [30]. Technological advances have enabled data extraction from both discrete and free-text fields in EHRs [30]. Calgary's new system, Epic, has the potential to capture the required data from a variety of locations and consolidate them in a single electronic database system, increasing the ease with which physicians and decision makers can monitor KPIs.
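
As a minimal sketch of what free-text extraction can look like (the pattern and note text below are invented for illustration; production pipelines would rely on validated NLP tooling rather than a single regular expression):

```python
import re

# Hypothetical pattern: recover a documented "pain ... X/10" value from a narrative note.
PAIN_PATTERN = re.compile(r"pain(?:\s+score)?[:\s]+(\d+(?:\.\d+)?)\s*/\s*10", re.IGNORECASE)

def extract_pain_score(note_text: str):
    """Return the first pain score out of 10 found in free text, else None."""
    match = PAIN_PATTERN.search(note_text)
    return float(match.group(1)) if match else None

print(extract_pain_score("Reports mild morning stiffness; pain score: 3/10 today."))  # 3.0
print(extract_pain_score("No complaints at this visit."))                             # None
```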

In addition to the previously mentioned RISE registry in the U.S., there are other international efforts to link clinical registries to electronic health records and to generate core minimal datasets. The CAPTURE-JIA (Consensus derived, Accessible (information), Patient-focused, Team-focused, Universally-collected (UK), Relevant to all and containing Essential data items) electronic dataset is in a pilot phase to determine the feasibility of data collection and move toward a core national dataset in the UK [31, 32]. In the EU, there are also endeavors to generate core minimal datasets that identify core data elements to "facilitate better co-operative use of such data sets for research and health system administration" [33]. For example, a core dataset in juvenile dermatomyositis was developed for clinical settings that can later be incorporated into larger registries at the national and international level [34]. Local efforts at standardized data collection for minimal core datasets should align with international efforts to allow global research collaboration to improve disease understanding. The glossaries accompanying these minimal core datasets can also be helpful in training clinicians who are less familiar with the standardized data collection for the corresponding disease [34].
