Capturing critical data elements in Juvenile Idiopathic Arthritis: initiatives to improve data capture

Patient selection

Patients with any JIA subtype, based on International League of Associations for Rheumatology (ILAR) criteria, seen for an in-person or virtual return visit were included. Telephone-only visits were excluded. Other exclusions included new patient visits, patients without a clear diagnosis of JIA, and patients with arthritis related to other diseases such as systemic lupus erythematosus, mixed connective tissue disease, or scleroderma. Three patients diagnosed with systemic JIA without significant arthritis and who had smoldering macrophage activation syndrome were also excluded.

Stakeholders

The stakeholders included pediatric rheumatologists and pediatric rheumatology fellows, whom we collectively denote as providers in this paper. Between 6 and 10 providers collected data during the 68 weeks. Variation in the number of providers over the course of the study reflected personal leave, retirement, and new hires.

Outcome measures

The primary outcome measure was the proportion of JIA patients seen each week whose CDE were captured in an extractable form within the EMR. Our institution uses Epic, an EMR that contains a data entry tool called a SmartForm. The SmartForm allows data to pull into clinic notes and to be extracted for use in registries such as the PR-COIN Registry. The SmartForm contains standardized data components chosen by PR-COIN, which include arthritis pain (0–10 scale with 1.0 increments), PtGA (0–10 scale with 0.5 increments), AJC, and PrGA (0–10 scale with 0.5 increments). Secondary outcome measures included differences in data entry between in-person and virtual visits.

Data source and collection

We screened problem lists for all patients scheduled with pediatric rheumatology between 9/14/2020 and 12/31/2021 at our institution. For patients who met inclusion criteria, we completed a manual chart review of both the visit documentation and the CDE found in the SmartForm. Data found in the provider note but not in the SmartForm did not count toward the numerator, as these data could not be extracted in an automated way. However, these data elements were used for feedback during individual provider meetings and in provider reports, two of our interventions detailed below.

For in-person visits, arthritis pain and PtGA were completed by patients and/or parents/guardians on paper intake forms; during virtual visits, providers had to ask for these two CDE verbally. Active joint count and PrGA were assessed by providers for both visit types. All data entered into the SmartForm depended on provider entry.

Interventions

Sixteen interventions were tested beginning at week 13, as illustrated in Figs. 1 and 2. The initial intervention was an email asking providers to document arthritis pain and PrGA in ≥ 80% of visits. Baseline data collection prompted a meeting with an individual provider to assess barriers to data collection. A few weeks later, we held a group meeting (M1) to discuss weekly data collection rates via a run chart and to hold a formalized discussion about our aim of ≥ 80% collection for arthritis pain and PrGA. At a second meeting (M2), we presented the evidence base for T2T to provide relevance for CDE collection and learned that providers wanted frequent, individualized feedback. Over the next several weeks, individual meetings with providers occurred to review each provider's weekly data collection rate and to discuss provider-specific processes, barriers, and suggestions for data collection.

Fig. 1

Control P-Charts of Documentation for Patient-Reported Outcome Measures. Data points are charted over time. Dashed lines represent means calculated based on 12 consecutive points, if available, for each interval: P(a) = weeks 1-12, P(b) = weeks 21-32, and P(c) = weeks 52-63. Dotted lines represent aims. Solid lines represent control limits. Upward shifts occurred at weeks 21 and 52 for both A and B. E = email; I = individual discussions; M = group meetings; R = feedback reports

Fig. 2

Control P-Charts of Documentation for Provider-Assessed Measures. Data points are charted over time. Dashed lines represent means calculated based on 12 consecutive points, if available, for each interval: P(a) = weeks 1-12, P(b) = weeks 45-56, and P(c) = weeks 61-68. Dotted lines represent aims. Solid lines represent control limits. Upward shifts occurred at weeks 45 and 61 for both A and B. E = email; I = individual discussions; M = group meetings; R = feedback reports

In a third virtual group meeting (M3), we discussed themes from the individual provider meetings, reviewed our improvements in data collection, revised our aims, and elicited group input on preferences for feedback reports. We established new aims to document arthritis pain and PtGA for ≥ 90% of visits and PrGA and AJC for ≥ 80% of visits. New feedback reports included group and provider-specific data on the percentage of patient visits each week with arthritis pain ≤ 3 and a breakdown of disease activity based on the cJADAS10 cutoffs for oligoarticular and polyarticular JIA. Disease activity for the other ILAR JIA subtypes was reported by PrGA, with inactive defined as 0 and active as ≥ 0.5.
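The feedback-report classification can be sketched as follows. The course-specific cJADAS10 cutoffs are left as caller-supplied parameters: the published oligoarticular and polyarticular values should be substituted, and none are hardcoded here; the numbers in the test of this sketch are placeholders only.

```python
def classify_cjadas10(score: float, cutoffs: tuple[float, float, float]) -> str:
    """Map a cJADAS10 score to a disease activity category.

    cutoffs = (inactive_max, low_max, moderate_max) for the relevant
    course (oligoarticular vs. polyarticular); scores above moderate_max
    are classified as high activity.
    """
    inactive_max, low_max, moderate_max = cutoffs
    if score <= inactive_max:
        return "inactive"
    if score <= low_max:
        return "low"
    if score <= moderate_max:
        return "moderate"
    return "high"

def classify_by_prga(prga: float) -> str:
    """Other ILAR subtypes: inactive when PrGA equals 0, active at >= 0.5."""
    return "inactive" if prga == 0 else "active"
```

Keeping the cutoffs as parameters mirrors the report's design: the same logic serves both JIA courses, with only the threshold triple differing.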

Analysis

Minitab 20.3 software was used. Statistical process control charts evaluated change over time. Baseline included the first 12 data points, which were used to calculate the first centerline, the thick dashed line P(a) shown in Figs. 1 and 2. Per control chart rules, a shift in the centerline occurs when there is a statistically significant change in the process, defined as ≥ 8 consecutive points above or below the centerline. New centerlines were then calculated based on 12 data points, if available.

Control charts were initially separated into virtual and in-person visits for each CDE to analyze how the interventions affected documentation by visit type. However, the proportion of virtual visits decreased substantially in the post-intervention period, which made the virtual-visit control limits widely variable. Final control chart analysis therefore combined virtual and in-person visits.
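The instability of the virtual-visit limits follows directly from the conventional 3-sigma p-chart formula, whose limit width scales with 1/sqrt(n) of the weekly denominator (a sketch of the standard formula, not the Minitab output):

```python
import math

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma lower/upper control limits for a p-chart with weekly
    denominator n, clamped to the valid proportion range [0, 1]."""
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)
```

For example, with a centerline of 0.8, a week of 25 visits gives limits of roughly (0.56, 1.0), while a week of only 4 virtual visits gives roughly (0.2, 1.0): sparse virtual-visit denominators widen the limits dramatically.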

Inferential statistics included confidence intervals, ANOVA testing, and two-sample t-tests. We separated the visit-type comparisons into three phases to minimize bias from the changing mix of visit types over time. ANOVA was conducted on the weekly ratios of virtual to total visits for baseline (phase 1), post-intervention weeks 13–32 (phase 2), and post-intervention weeks 33–68 (phase 3). Two-sample t-tests compared the weekly proportions of in-person and virtual visit data capture by phase.
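For illustration, the per-phase comparison can be sketched with Welch's two-sample t statistic computed from the standard library alone (a sketch, not the Minitab procedure; obtaining a p-value would additionally require the t distribution's CDF):

```python
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> tuple[float, float]:
    """Welch's t statistic and degrees of freedom for two samples of
    weekly capture proportions (unequal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    se2 = va / na + vb / nb            # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Here each sample would be one phase's weekly capture proportions for one visit type, so the test compares in-person against virtual weeks within a phase.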
