Methods for capturing and analyzing adaptations: implications for implementation research

Case example: the Invested in Diabetes study

The Invested in Diabetes study is a pragmatic cluster-randomized comparative effectiveness trial designed to compare two models of diabetes shared medical appointments (SMAs) in 22 primary care practices. Practices were randomly assigned to one of two models for delivery of diabetes SMAs: “standardized” or “patient driven.” Practice care team members received training in a diabetes SMA curriculum (Targeted Training in Illness Management; “curriculum”) [18]. Implementation was supported with practice facilitation [19]. Both models involve six diabetes self-management education group sessions using the curriculum; patients also have a visit with a prescribing provider. Patient driven differs from standardized in that patient-driven sessions are delivered by a multidisciplinary care team including health educators, behavioral health providers, and peer mentors (vs a health educator alone in standardized), and patients in patient driven select curriculum topic order and emphasis (vs a set order and prescribed time on each topic in standardized). The enhanced Replicating Effective Programs (REP) framework serves as the dissemination and implementation (D&I) process framework guiding the implementation. Enhanced REP includes a “maintenance and evolution phase” in which practice-level fidelity and adaptations to the intervention are tracked. Adaptations during this phase are generally considered reactive and are decided upon by the local practice implementation teams, as opposed to pre-implementation adaptations, which are planned/proactive and decided upon in partnership by the research and practice teams [4]. The pre-implementation adaptations to the intervention are largely described elsewhere [4, 19].

Overview

During the REP maintenance and evolution phase of the project — that is, once practices were actively delivering diabetes SMAs — we used several methods to assess fidelity and adaptations. Table 1 outlines the data collection methods and associated instruments, the timing, participants, and the analytic and interpretive process for each of the multiple data sources used to assess fidelity and adaptations. While this paper focuses on methods used to assess and characterize practice-level adaptations made post-implementation, some interview findings reflect pre-implementation adaptations [4] because practice representatives did not necessarily know when the adaptation occurred relative to the REP phases.

Table 1 Data collection methods and use for studying adaptations

Data collection

Interviews

Interviews investigated implementation progress and probed specifically for any adaptations made since the beginning of implementation. Each individual interview lasted approximately 60 min, and participants included medical providers, health educators, behavioral health providers, and SMA coordinators. Semi-structured interview guides included specific questions on changes made to either the process of delivering SMAs or the curriculum content delivered during the SMAs. Questions were based on guidance within FRAME and from other investigators studying adaptations [14, 20].

Fidelity observations

Research staff observed SMA sessions to capture fidelity to both the study protocol and the curriculum (i.e., personnel used, timing of sessions, content covered) as well as elements of facilitation style and group interaction. The study protocol was to observe one randomly selected session at each participating practice per quarter over the six quarters of the study implementation period; the goal was to observe at least one of each of the six curriculum sessions at each practice over the course of the study [19]. This sampling plan was developed in accordance with qualitative data collection standards [21]. Research staff documented fidelity using a structured template that contained checklists to track fidelity to core components of the curriculum and its delivery, as well as narrative field notes.

Facilitator field notes

The facilitators used templates to document facilitation sessions with the practice site contacts, including implementation challenges and changes made to improve implementation. While the initial four facilitation calls followed pre-planned agendas, additional facilitation sessions that produced written field notes were from ad hoc meetings or emails [4]. The number of facilitation calls (and thus field notes) varied per practice. Field notes were captured in a narrative form in a running shared online platform.

Data analysis

A core qualitative team analyzed the data. Our team included the qualitative lead (JH), a physician researcher (AN), a postdoctoral fellow (PPB), and a research assistant (DG). All had intimate knowledge of the study protocol and prior experience with qualitative data collection and analysis, and all conducted the interviews. The data analysis was completed in a series of steps, organized in Table 2 and explained below.

Table 2 Analytic steps and rationale

Identifying adaptations from multiple data sources (step 1)

We used a variety of approaches to analyze the data. Not all data from the methods in Table 1 pertained to adaptations, and data that did not were not used to capture adaptation information. First, we conducted a traditional qualitative thematic analysis [22] of the interview data. Audio recordings were transcribed into text documents and uploaded into ATLAS.ti (version 8, Scientific Software Development GmbH). We identified codes using a collaborative process. One of the codes was adaptation, defined as any instance of the respondent noting a change from the intended curriculum or process, whether explicitly stated in response to the question — “From when you started, did you make any changes to how you were conducting the sessions or the process?” (e.g., “Yes, we changed the prescribing provider from one of our physicians to the clinical pharmacist”) — or inferred from knowing the protocol and noting that the explanation differed from the intended protocol (e.g., “We utilized our clinical pharmacist as the prescribing provider during the SMAs”). Any change from the original plan was considered an adaptation; however, each was classified as fidelity consistent or fidelity inconsistent per FRAME, based on the published study protocol [19]. Adaptations were recorded in a Microsoft Excel-based adaptations tracking log, described below.

We reviewed each completed observation form and field note from the fidelity observations and facilitator check-ins to identify any adaptations evident in the data. To capture adaptations evident in the facilitator and observer notes, we thoroughly reviewed the notes to determine whether practices had made any changes to the study content/curriculum, the processes of implementation at the practice, or elements of the study protocol.

Categorizing adaptations using a tracking log (steps 2 and 3)

To organize the data collected from all methods, a tracking log structured according to FRAME was created in Microsoft Excel (Appendix Fig. 1) [2, 11]. This log allowed us to characterize each adaptation by FRAME components (Table 3). Component response choices were taken directly from FRAME; however, some choices were left open-ended or modified to better fit the specifics of the study while keeping to the spirit of FRAME. For example, for “what was modified,” instead of FRAME’s content, contextual, training and evaluation, or implementation or scale-up activities, we used program content, who was involved, etc. In addition to FRAME components, we added two descriptive categories, “implications of the adaptation” and “what made the adaptation work well/not work well,” so that raters could add narrative comments on the implications and outcomes of the adaptations. Team members worked iteratively through meetings to refine our common understanding of the terms describing each aspect of FRAME.
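For illustration only, a single row of such a tracking log could be represented as a simple record like the sketch below. The field names and example values are ours, not prescribed by FRAME or the study protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdaptationLogEntry:
    """One illustrative row of a FRAME-based adaptations tracking log."""
    practice_id: str                 # which practice made the adaptation
    data_source: str                 # interview, fidelity observation, or facilitator field note
    what_was_modified: str           # e.g., program content, who was involved, scheduling
    adaptation_type: str             # process/implementation vs. content/sessions
    why_adapted: str                 # stated reason for the adaptation
    when_adapted: str                # timing relative to implementation
    planned: bool                    # planned/proactive vs. unplanned/reactive
    fidelity_consistent: bool        # consistent with the published study protocol?
    implications: Optional[str] = None           # added descriptive category
    worked_well_or_not: Optional[str] = None     # added descriptive category

# Hypothetical example entry:
entry = AdaptationLogEntry(
    practice_id="practice_03",
    data_source="interview",
    what_was_modified="who was involved",
    adaptation_type="process/implementation",
    why_adapted="staff turnover",
    when_adapted="mid-implementation",
    planned=False,
    fidelity_consistent=True,
    implications="sessions continued without interruption",
)
```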

Table 3 FRAME constructs and additions/clarifications for this study

After several rounds of this calibration and cross-checking to achieve a high rate of consistency in scoring across the research team, we divided the data by practice and completed the tracking log until all adaptations found in all forms of data collection (interviews, observation templates, and facilitator notes) were tracked for each practice. Each adaptation was then reviewed and de-duplicated by practice and data source by DG (explained in Table 2), reducing the data set to one description for each unique adaptation found per data source for each practice. The resulting document included each adaptation with descriptive information. Other team members (AN and PPB) reviewed the document for accuracy.

Comparing the data sources (step 4)

We created a second spreadsheet to determine the concordance/discordance (similar to agreement/disagreement) of the adaptation information revealed by each data source. Using our modified categories for “what was adapted” from FRAME (follow-up or tracking, program content, recruitment, resources, scheduling, time devoted, who is involved, and other), we completed a table noting the areas of adaptation found in each data source and then scored (from 1 to 4) the degree to which the data found in each source were the same, somewhat the same, or different. Data were scored by all team members and reviewed at team meetings. Once consensus was achieved on scoring, a single reviewer (DG) finished rating all adaptation differences, and any uncertainty was brought to the full team for review. The scores were then summarized across all practices (mean, count), and we wrote a summary of how data in each category differed or converged across the data sources. This produced a summary of total adaptations and concordance across the data sources, represented in Table 4 in the “Results” section below.
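The scoring and summarizing described above were done in a spreadsheet; purely as a sketch, an equivalent summary (mean and count per category) could be computed programmatically as below. The data frame, column names, and scores are hypothetical.

```python
import pandas as pd

# Hypothetical long-format export of the concordance spreadsheet:
# one row per practice x "what was adapted" category, with the 1-4 concordance score.
scores = pd.DataFrame({
    "practice": ["p01", "p01", "p02", "p02", "p03"],
    "category": ["program content", "scheduling", "program content", "recruitment", "scheduling"],
    "concordance": [4, 2, 3, 1, 4],   # rating of how similar the data sources were
})

# Summarize across practices: mean score and count of rated adaptations per category.
summary = (
    scores.groupby("category")["concordance"]
          .agg(mean_score="mean", n_rated="count")
          .reset_index()
)
print(summary)
```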

Table 4 Concordance/discordance of adaptations found across data sources

Identifying the adaptation components and clusters (steps 5 and 6)

To understand what adaptation components were observed, we summarized each FRAME component of each adaptation separately and reviewed how often each component appeared across the data sources. Next, we used three analytic approaches to categorize adaptation components into meaningful types. First, we created a co-occurrence table (cross-tabulation) to assess the frequency with which each component coexisted with one other component in a 2 × 2 table. Table 5 shows the co-occurrence of adaptation type (process/implementation versus content/sessions) with all other adaptation components (who, what, when, etc.). Other configurations of adaptation components could be considered as needed.
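As a minimal sketch of this step, a co-occurrence table of adaptation type against another component can be computed as a cross-tabulation; the records and values below are hypothetical.

```python
import pandas as pd

# Hypothetical adaptation-level data: one row per adaptation, with FRAME-style components.
adaptations = pd.DataFrame({
    "adaptation_type": ["process/implementation", "content/sessions",
                        "process/implementation", "content/sessions"],
    "what_was_modified": ["scheduling", "program content",
                          "who was involved", "program content"],
})

# Co-occurrence of adaptation type with "what was modified".
co_occurrence = pd.crosstab(adaptations["adaptation_type"],
                            adaptations["what_was_modified"])
print(co_occurrence)
```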

Table 5 Data captured in the adaptations tracking log

Second, we used statistically based k-means clustering to identify patterns in the adaptation data by grouping adaptation components into a pre-specified “k” number of clusters, resulting in groups of adaptations with similar adaptation constructs that could be reviewed and interpreted by the study team [23, 24]. The resulting clusters helped describe patterns of how adaptation components fit together. At first, only the elements of FRAME deemed most critical were included (adaptation type, why it was adapted, when it was adapted, planning level, and level of fidelity), which capture the adaptation as well as crucial contextual factors (e.g., when and why it occurred). Other components were then added to see whether they produced additional insight into the results, i.e., whether the newly created groups seemed to go together well. In Tables 6 and 7, we present cluster models with five and seven components, identified qualitatively as the most important to include while keeping clusters more or less homogeneous. Components not selected for inclusion in the clustering were deemed less likely to influence eventual outcomes (e.g., for how long the adaptation occurred, delivery level).
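The study does not specify the software or encoding used for clustering; one common way to run k-means on categorical FRAME components is to one-hot encode them first, as in this sketch. All component values and the choice of k below are illustrative, not the study's actual data or parameters.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical adaptation records with the five components used in the first model.
components = pd.DataFrame({
    "adaptation_type": ["content/sessions", "process/implementation",
                        "process/implementation", "content/sessions"],
    "why_adapted":     ["improve fit", "improve feasibility",
                        "improve reach", "improve outcomes"],
    "when_adapted":    ["early", "throughout", "early", "unknown"],
    "planning_level":  ["unplanned", "unplanned", "unplanned", "planned"],
    "fidelity":        ["consistent", "inconsistent", "consistent", "consistent"],
})

# One-hot encode the categorical components so k-means can operate on them.
X = pd.get_dummies(components)

# Fit k-means with a pre-specified number of clusters (k chosen by the analyst).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
components["cluster"] = kmeans.fit_predict(X)
print(components)
```

The study team would then review the component profiles within each resulting cluster to judge whether the groupings hang together substantively.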

Table 6 Five adaptation components cluster model. Clusters shown in Table 6 can be roughly summarized as follows: Cluster 1: Unplanned program content changes for a variety of reasons that could go against study protocol. Cluster 2: Planned program content changes early on to improve outcomes. Cluster 3: Unplanned changes to practice processes (recruitment and scheduling) early on to improve reach/engagement. Cluster 4: Unplanned implementation changes of various sorts, made throughout implementation to improve feasibility. Cluster 5: Unplanned reactionary changes to study personnel throughout the implementation. Cluster 6: Unplanned changes to a variety of process areas with the goal of improving feasibility throughout the implementation process

Table 7 Seven adaptation components cluster model. Clusters shown in Table 7 can be roughly summarized as follows: Cluster 1: Unplanned program content changes for a variety of reasons, at unknown times, that were mostly outside of study protocol. Cluster 2: Unplanned changes to study personnel (who was involved) early on that were mostly outside of study protocol. Cluster 3: Unplanned changes to practice processes (recruitment and scheduling) early on to improve reach/engagement. Cluster 4: Unplanned program content changes for a variety of reasons at a variety of times that were within protocol. Cluster 5: Unplanned reactionary changes throughout the implementation to improve feasibility that largely affected who was involved. Cluster 6: Unplanned changes to follow-up or tracking with the goal of improving feasibility throughout the implementation process

Finally, we performed a taxonomic analysis, a configurational-comparative technique that identifies all possible combinations of components and how they interact. Adaptation component combinations were entered into a table identifying all possibilities present in the data [25]. To produce a table of a reasonable size to interpret, we narrowed the included adaptation components to those in the k-means cluster model in Table 6 so that comparisons could be made across methods. We separated the analysis by process/implementation and sessions/content, allowing us to identify patterns of how components of adaptations clustered together (i.e., how types of adaptation components paired with others) from different perspectives. The resulting tables showed each possible combination of adaptation components present in the data, along with the count of how often each occurred. These results are displayed in Tables 8 and 9.
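As an illustration only (not the authors' actual procedure), the combination counts underlying such a table can be produced by splitting the records by adaptation type and counting every observed combination of the included components; the data and component names below are hypothetical.

```python
import pandas as pd

# Hypothetical adaptation records limited to a few of the included components.
adaptations = pd.DataFrame({
    "adaptation_type": ["content/sessions", "content/sessions",
                        "process/implementation", "process/implementation"],
    "why_adapted":     ["improve fit", "improve fit",
                        "improve feasibility", "improve reach"],
    "planning_level":  ["unplanned", "unplanned", "unplanned", "planned"],
})

# Separate the analysis by adaptation type, then count every combination present.
for adaptation_type, subset in adaptations.groupby("adaptation_type"):
    combos = (subset.groupby(["why_adapted", "planning_level"])
                    .size()
                    .reset_index(name="count"))
    print(adaptation_type)
    print(combos)
```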

Table 8 Adaptations within classes/content

Table 9 Adaptations within process/implementation
