Can an evidence-based mental health intervention be implemented into preexisting home visiting programs using implementation facilitation? Study protocol for a three variable implementation effectiveness context hybrid trial

Formative evaluation

Our objective is to amplify the voices of pregnant/birthing people and to empower front-line implementers as active partners in our implementation research. We will conduct formative evaluation using participatory approaches to ensure engagement of stakeholders in the adaptation of the strategy as well as to improve adoption and fidelity. The context and evidence domains of iPARIHS, in conjunction with specific determinant domains of CFIR 2.0, will provide the framework to better understand the contextual factors that impact implementation of MB, to account for different stakeholders’ perspectives, and to confirm the contextual variables to be measured in the trial. The context-specific adaptation of implementation facilitation will be based on a shared understanding of the complex patchwork of HVPs in each state [44] and the current elements of psychosocial support. The research team will work with stakeholders to develop a shared understanding of the context of funding, policy, home visitors, training, recipients (as well as their families and communities), and the resources of existing HVPs in which MB will be implemented.

Key informant interviews

To gather context-specific information on the structural and social factors that may impact the participation of BIPOC birthing people in home visiting services, our research team will conduct brief key informant interviews with a purposefully sampled group of BIPOC birthing people. Interviews will focus on participants’ experiences, preferences, and barriers and opportunities to engage with HVPs with an interview guide informed by iPARIHS and CFIR 2.0.

Survey of HVPs

MIECHV services are tracked at the state level, but less is known about locally or state-funded HVPs. We will conduct a brief REDCap survey of all HVPs within Iowa and Indiana, asking each program to provide the numbers of 1) home visiting supervisors, 2) home visitors, and 3) pregnant HVP recipients served each year. We will also assess which, if any, mental health services each program currently provides and gauge initial interest in our study.

Stakeholder meetings

There will be two groups of HVP and maternal mental health stakeholders for each state who will meet within the same timeframe. To ensure all voices are heard equally and not affected by power dynamics, one group per state will consist of multilevel care delivery stakeholders and one group will consist of program recipient stakeholders. The care delivery group will include representatives from state departments of health (e.g., HVP Directors, home visiting epidemiologists, applied research coordinators, professional development coordinators), individual HVPs, and front-line implementers (e.g., home visitors, home visiting supervisors, social workers, embedded mental health providers, doulas). The other group will consist of program recipient stakeholders (e.g., birthing people, families, and community members). All conversations will be audio recorded and research team members will take field notes.

We will use constructs of CFIR 2.0 organized by evidence, context, and facilitation (iPARIHS domains) that might influence differential uptake of MB (including local conditions and attitudes, assessing needs, setting goals, and adaptations) [30, 36]. At the beginning of each meeting, the research team will briefly describe MB and the purpose of the meeting to participants. To identify the mechanisms of change responsive to specific contextual elements, we will present a review of annual MIECHV Program Reports, results of the HVP survey and key informant interviews, and relevant literature. The discussion will be framed by broad questions such as “What would it take to integrate MB into your current workflow?”; “What resources would you need to successfully facilitate implementation of MB?”; and “How can we best support you and your family during pregnancy?” to allow for breadth and depth of responses.

The transcripts of the discussions will inform the development of a context-adapted implementation facilitation strategy through implementation mapping between the identified items and the standard elements of implementation facilitation [45]. For example, home visitors might identify a need for ongoing training in MB, which would result in facilitation adaptations that include additional education and capacity-building sessions, while a program recipient might highlight stigma around mental health or a need for participant education materials. Such participant-identified issues will allow the facilitation-trained home visiting supervisors to incorporate support and capacity building for the home visitors specifically to address them. We will reconvene the groups virtually to present the findings and propose the adapted facilitation for feedback and adjustment.

Three variable implementation effectiveness context Cluster Randomized Controlled Trial (CRT)

We propose a parallel two-arm CRT in Iowa and Indiana. Randomization will occur at the home visiting supervisor level (Fig. 2). We will randomly assign home visiting supervisors in a 1:1 ratio to a control arm or an implementation strategy arm: 1) the control arm will receive standard MB training and implementation support and 2) the implementation facilitation arm will receive standard MB training plus adapted facilitation. Our 65% participation rate is based on a recent survey of home visitor well-being in Iowa that had a response rate of 60%; we expect a higher rate for our study because it offers a training opportunity rather than completion of a survey. Outcome data to be used in Aims 2 and 3 will still be captured through administrative records from home visitors and HVP recipients whose supervisors decline to participate in the trial but who receive MB training. Therefore, adoption and fidelity outcomes will still be captured for home visitors, and clinical outcomes for HVP recipients, who are not part of the CRT.

Fig. 2 Three-variable hybrid design

Three-variable hybrid designs, while relatively new, seek to make explicit what many hybrid implementation effectiveness trials already do implicitly, which is take into account the role of contextual determinants in implementation [46,47,48]. Recent implementation research identifies the need to assess context explicitly as it impacts both implementation strategies and the intervention itself [49]. As the science moves toward more fully embedding equity into implementation work, it is critical that researchers incorporate context and stakeholder engagement more fully. Doing so will allow us to better measure and track the equitable delivery of evidence-based strategies and practices, in particular through a better understanding of structural factors (e.g., environmental exposures, systemic bias, etc.) [49,50,51]. The three-variable hybrid design is uniquely suited for the proposed implementation trial (expanding on a type 3) because 1) both the intervention and the implementation strategy are evidence-based; therefore, it is critical to test their relationship to context; and 2) the heterogeneity of contextual determinants (HVPs, funding, training, population, structural and environmental variables) requires explicit consideration.

Home visitors in control arm and outside the CRT

Home visitors in the control group and those not part of the CRT (i.e., those whose supervisors declined participation) will receive standard MB training as developed and tested in an RCT and a hybrid type 2 trial, which involves a 1.5-day group training. Following the training, control arm home visitors will receive standard support from their supervisors, while those outside the trial will receive no support from their supervisors with respect to the MB intervention.

Adapted implementation facilitation arm

Home visitors in the adapted facilitation group will receive standard MB training plus adapted facilitation delivered by home visiting supervisors trained in adapted implementation facilitation. We will use facilitation adapted based on the findings from Aim 1 and on evidence that, while dynamic and flexible, facilitation has core activities identified as effective at different stages of implementation and for different types of implementers [52, 53]. The home visiting supervisors (internal facilitators) will receive a half-day training in adapted facilitation and ongoing, self-identified support from the external facilitators (i.e., research team members). In the pre- and early implementation phases, the external facilitators will employ core components of facilitation, including rapport and trust building, priority and goal setting, and clarifying roles and responsibilities, to support home visiting supervisors’ training as internal facilitators [54]. The training and ongoing support will be shaped by the findings from Aim 1 and may address issues such as the different needs and preferences of rural vs. urban HVP recipients; additional mental health resources for facilitators beyond MB-related issues; whether the HVP has existing mental health provision; and other issues home visitors may face such as stigma, obstetric deserts, and food insecurity.

During implementation, the adapted facilitation-trained home visiting supervisors will draw upon core components of the strategy to support the home visitors in the front-line implementation of MB. These may include addressing resistance to change in specific home visiting contexts (e.g., rural) and supporting accountability by providing real-time feedback on implementation performance [54]. Additionally, internal facilitators will employ interactive problem-solving focused on supporting the home visitors to implement MB – working with pregnant people, challenges with resources, additional training needs, etc. – based on understanding the individual communities, contexts, and recipients’ and home visitors’ needs over the course of the project [35, 55, 56]. In the sustainment phase, the external and internal facilitators will continue to provide context-specific support as needed [54].

Randomization

Randomization will occur at the level of the home visiting supervisor. Supervisors will be randomized by study personnel who will not complete follow-up data collection. Using a sequence generated by the study statistician, randomization will occur in permuted blocks of 4 and 6 and will be stratified by state. Allocation to the intervention arm will remain concealed until inclusion and exclusion have been determined. This approach minimizes the potential for experimenter and participant bias by protecting the randomization sequence and maintaining concealment of intervention allocation until the last moment. The home visiting supervisors will be aware of the trial and whether they are receiving training in adapted facilitation or are in the control group. However, the home visitors and HVP recipients will not be aware there is a CRT in process, so they will not know to which treatment group they have been allocated or whether they are not included in the CRT.
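As an illustration, the stratified permuted-block scheme described above can be sketched in a few lines of Python. This is a minimal sketch under assumptions: the supervisor IDs, arm labels, and seed are hypothetical, and the actual sequence will be generated by the study statistician.

```python
# Sketch of 1:1 permuted-block randomization with blocks of 4 and 6,
# stratified by state. Supervisor IDs and seed are hypothetical.
import random

def permuted_block_sequence(n, rng, block_sizes=(4, 6), arms=("C", "F")):
    """Generate an allocation sequence of length n from balanced permuted blocks."""
    seq = []
    while len(seq) < n:
        size = rng.choice(block_sizes)
        block = list(arms) * (size // len(arms))  # each block is balanced 1:1
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

def randomize_by_state(supervisors_by_state, seed=2024):
    """Randomize each state stratum independently; returns {supervisor: arm}."""
    rng = random.Random(seed)
    allocation = {}
    for state, supervisors in sorted(supervisors_by_state.items()):
        seq = permuted_block_sequence(len(supervisors), rng)
        allocation.update(dict(zip(supervisors, seq)))
    return allocation

# Example with made-up supervisor IDs for the two states
alloc = randomize_by_state({
    "IA": [f"IA-{i:02d}" for i in range(1, 13)],
    "IN": [f"IN-{i:02d}" for i in range(1, 11)],
})
```

Because each block is internally balanced, per-stratum arm imbalance never exceeds half a block, which is what makes permuted blocks preferable to simple randomization with small cluster counts.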

Trial data collection and analysis

The first objective is to analyze the uptake of MB by home visitors who receive adapted facilitation from their supervisors compared to home visitors who receive standard support and to home visitors whose supervisors declined participation. This rapid ethnographic assessment (REA) will involve collecting qualitative and quantitative data (i.e., semi-structured interviews, observations, focus group discussions (FGDs), questionnaires, and geospatial data) concurrently and iteratively and triangulating the results [27, 57, 58]. The questionnaires will allow us to measure implementation outcomes in real time, and the qualitative data will provide a more nuanced understanding of why and how MB will be more readily adopted by certain home visitors and in certain HVP models, and whether delivery of the intervention varies for populations with different characteristics. As a secondary outcome, we will investigate which contextual factors identified in Aim 1 mediate or moderate the impact of adapted facilitation on implementation outcomes. The geospatial data will provide measurable indicators that will allow us to investigate the mediating or moderating effect of structural and systemic factors. We will also be able to examine factors that impact the choice of home visiting supervisors to participate in the CRT and how that impacts adoption and fidelity of MB among home visitors who are not part of the CRT.

Qualitative component

Semi-structured interview and FGD guides

The research team will refine guides for the FGDs and interviews based on a literature review, the iPARIHS and CFIR 2.0 frameworks, and any in vivo topics that emerge from the Aim 1 formative evaluation. The guides will then be pilot tested with nonparticipants at partner organizations familiar with the topic and experiences of home visiting. This pilot testing will ensure that the questions are comprehensible and relevant to the Iowa and Indiana contexts. Research team members will conduct in-person or virtual interviews based on participant preferences and scheduling availability. The FGDs and interviews will be recorded, transcribed, and reviewed for accuracy. All transcripts will be analyzed using MAXQDA, a secure qualitative data analysis software package [59].

Ethnographic observations

We will leverage existing meetings and site visits currently being conducted by IDHHS and IDOH with a combination of in-person and audio-recorded observations of: 1) weekly meetings between supervisors and visitors across all clusters in both arms to assess supervisor practice of adapted facilitation compared to standard implementation; 2) the existing biannual site visits conducted by the departments of health across a sample of models and staff, one of which is for monitoring, compliance, and needs assessment and the other for shadowing home visits, attending parent group meetings, etc. We will also attend or record monthly contractor calls, quarterly home visitor expert panel meetings, and quarterly participant advisory meetings.

Data analysis

The qualitative analysis team will conduct qualitative analyses using both deductive rapid and inductive in-depth methods. The analysis team will complete a rapid ethnographic analysis (e.g., template analysis, document review) throughout the data collection period to allow for real-time, actionable adaptations to ongoing facilitation and a timely understanding of processes around MB and other main themes [27, 57, 58]. Using a template built from deductive, a priori iPARIHS and CFIR 2.0 domains, particularly inner setting variables including access to knowledge, culture, relational connections, compatibility, and structural characteristics, the analysis will also capture inductive, emergent themes. The analysis team will read a subset of transcripts to pilot the template and establish agreement and consistency among team members. Each transcript will be summarized independently by two members, and the summaries will be reconciled until consensus is reached. The summaries will be entered into a matrix organized by the same domains, and each domain will be analyzed vertically through analytic memos shared and discussed with the team. Ongoing feedback will be shared through external facilitation with HV supervisors (i.e., internal facilitators) to inform real-time adaptations to facilitation.

In addition, the research team will conduct a thematic analysis allowing for greater depth that can inform our understanding of the implementation outcomes by incorporating questions about differences in delivery and health outcomes for different populations in different contexts. The research team will read a subset of transcripts to generate a preliminary codebook, which the team will use to code three interviews independently, comparing coding and examining agreement. Once an acceptable level of intercoder agreement is reached, all subsequent coding will be performed by two coders, and we will conduct thematic analysis of each transcript to identify theoretical domains, constructs, and patterns.

Quantitative component

Education and implementation questionnaires

We will use the validated baseline and follow-up education and retention questionnaire developed by the MB trainers [25] to assess knowledge and attitudes about MB and perceived self-efficacy among all home visitors. We will also use a 5-item REDCap questionnaire for the home visiting supervisors participating in the CRT that will include closed questions about the number of home visitors supervised and hours spent on supervision/facilitation, and open-ended questions about challenges and opportunities around supervision and additional support and resources needed. Fidelity will be assessed using the MB Home Visitor Fidelity Rating Form, which will be completed by the home visitors after each session.

Home visit documentation

We will leverage the DAISEY data system currently used by IDHHS which contains a home visit record form, completed within 48 h of a home visit [60]. We will develop a similar system to collect the same data from the Indiana HVPs.

Geospatial data

We will use the publicly available environmental justice screening and mapping tool (EJSCREEN), developed and maintained by the Environmental Protection Agency (EPA). This nationally consistent dataset and methodology will allow us to calculate "EJ indexes," which assess the cumulative impact of environmental exposures (i.e., pollution, land use, green space, etc.) together with health and social vulnerabilities, based on participant zip codes [61, 62]. EJ indexes will be our proxy variable for structural bias, which encompasses the complex interplay of systems that reinforce and perpetuate discrimination, including housing, health care access, environmental exposures, etc. [63]. These geospatial indicators will provide quantifiable, intersectional contextual data as the critical and inextricable third variable in our model, helping us understand how environmental and structural factors mediate or moderate both our implementation outcomes and our clinical health outcomes [64]. These variables align with contextual variables within the CFIR outer setting domain, including financing, systemic conditions, policies and laws, and partnerships and connections [30].

In the analysis of the cluster randomized trial, a generalized linear mixed model with the logit link function will be used to assess the effect of the adapted facilitation on the implementation outcomes of adoption and fidelity. We will include context-specific variables (Table 1) in the model, including type of HVP model, funding mechanism, EJ indexes, and patient characteristics (race, age, geographic location), to identify mediating or moderating effects. To account for intracluster correlations, we will include random effects at the home visitor and recipient levels in the model. Intent-to-treat analyses will be conducted on all those in the CRT. We will also examine implementation outcomes among home visitors who are not part of the CRT because their supervisor declined to participate, and determine which contextual factors contribute to 1) participation in the trial; and 2) adoption and fidelity outcomes compared to the control and intervention arms of the CRT.
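To make the model concrete, one plausible form it could take is sketched below; the notation is assumed, not taken from the protocol. Let $Y_{ijk}$ be the binary adoption or fidelity indicator for recipient $i$ of home visitor $j$ under supervisor $k$, $\mathrm{Facilitation}_k$ the arm indicator, and $\mathbf{x}_{ijk}$ the context-specific covariates:

```latex
\operatorname{logit}\,\Pr(Y_{ijk}=1) = \beta_0 + \beta_1\,\mathrm{Facilitation}_k
  + \boldsymbol{\gamma}^{\top}\mathbf{x}_{ijk} + u_{jk} + v_{ijk},
\qquad u_{jk} \sim N(0,\sigma_u^2),\quad v_{ijk} \sim N(0,\sigma_v^2),
```

where $u_{jk}$ and $v_{ijk}$ are the home visitor and recipient random effects accounting for intracluster correlation, and $\beta_1$ is the adapted facilitation effect on the log-odds of adoption or fidelity.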

Table 1 Measures, definitions, instruments, and sources

Power analysis

We defined cluster size (K) as the total number of HV recipients across home visitors within each supervisor. Given K = 25, we calculated power based on different intracluster correlation coefficients (ICCs). We provide the total number of home visiting supervisors (N) required to achieve at least 90% power at a 5% Type I error rate (Table 2). Based on the work of Tandon et al., where fidelity was ~35%, we estimated the power needed to detect an increase in fidelity to 50%. With the number of home visiting supervisors currently estimated in each state (~100 per arm), we are well powered. The required numbers were not drastically affected when K was changed to 15 or 20.
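A back-of-the-envelope version of this calculation can be written with the standard two-proportion sample size formula inflated by the design effect 1 + (K − 1) × ICC. This is illustrative only and assumes a simple design-effect approach; the values in Table 2 come from the study statistician's own calculation.

```python
# Clusters (supervisors) per arm to detect a fidelity increase from 35% to 50%
# with 90% power, two-sided alpha = 0.05, and K recipients per supervisor.
import math

Z_ALPHA = 1.959964  # normal quantile for two-sided 5% alpha
Z_BETA = 1.281552   # normal quantile for 90% power

def clusters_per_arm(p1, p2, k, icc):
    # Individually randomized sample size per arm for two proportions
    n_ind = (Z_ALPHA + Z_BETA) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (k - 1) * icc  # design effect for clustering
    return math.ceil(n_ind * deff / k)

for icc in (0.01, 0.05, 0.10):
    print(f"ICC={icc}: {clusters_per_arm(0.35, 0.50, 25, icc)} supervisors per arm")
```

The required number of supervisors grows roughly linearly in the ICC once K is fixed, which is why the protocol reports power across a range of ICC values.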

Table 2 Power analysis for aim 2

Clinical outcomes

We will test the hypothesis that recipients in the adapted facilitation arm have better clinical outcomes (e.g., lower depressive symptoms and perceived stress) compared to recipients receiving standard implementation of MB.

Data collection

Depression screening and perceived stress questionnaires. We will use baseline and follow-up surveys similar to those used in the effectiveness trials conducted by Tandon et al. and will harmonize with the depression and stress screening practices currently used among home visiting programs. We will ask home visitors to deliver both the Edinburgh Postnatal Depression Scale (EPDS) and the 4-item Perceived Stress Scale (PSS) to their home visiting recipients as part of standard practice.

Statistical analysis

In the analysis of the cluster randomized trial, a linear mixed model will be used to assess the effect of the adapted facilitation on each of two outcomes, depressive symptoms and perceived stress, at month 6, controlling for the baseline value. We will include the context-specific variables mentioned in Aim 2, including the EJ indexes, and random effects at the home visitor and recipient levels in the model. Intent-to-treat analyses will be conducted on all those in the CRT. We will also compare EPDS and PSS outcomes of those not in the CRT to both the control and intervention arms and identify the contextual factors that affect these outcomes.
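A minimal sketch of this analysis, using simulated data, might look as follows. All variable names are assumptions, the data are fabricated for illustration, and for brevity only a supervisor-level random intercept is fit (the protocol's full model also nests home visitors and recipients and includes the contextual covariates):

```python
# Linear mixed model of 6-month depressive symptoms on trial arm, adjusting
# for baseline, with a random intercept per home visiting supervisor.
# Simulated data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sup, n_per = 20, 10                      # 20 supervisors, 10 recipients each
sup = np.repeat(np.arange(n_sup), n_per)
arm = sup % 2                              # alternate supervisors between arms
baseline = rng.normal(12, 4, n_sup * n_per)
sup_effect = rng.normal(0, 1, n_sup)[sup]  # supervisor-level clustering
epds_6m = (8 + 0.5 * baseline - 1.5 * arm  # true facilitation effect of -1.5
           + sup_effect + rng.normal(0, 3, n_sup * n_per))

df = pd.DataFrame({"supervisor": sup, "arm": arm,
                   "epds_baseline": baseline, "epds_6m": epds_6m})

model = smf.mixedlm("epds_6m ~ arm + epds_baseline", df, groups=df["supervisor"])
result = model.fit()
print(result.params["arm"])  # a negative estimate = lower symptoms under facilitation
```

Fitting the baseline score as a covariate rather than differencing is the usual ANCOVA-style choice for randomized trials, since it typically yields more power than analyzing change scores.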

Power analysis

We provide the total number of home visiting supervisors (N) required to achieve at least 90% power at a 5% Type I error rate (Table 3). We based our power calculations on the Tandon et al. study, which found that those receiving MB had Beck Depression Inventory II (BDI) and PSS scores that were 1.014 and 0.639 points lower, respectively, than the control group that did not receive the intervention. Because all groups in this study receive the intervention, we hypothesize that the adapted facilitation group will have depressive symptom and PSS scores lower than the control arm by approximately half the differences observed in the original study. We are well powered at all ICCs to detect the difference in depressive scores and at minimal ICCs for PSS.

Table 3 Power analysis for aim 3

Data triangulation

We will interrogate relationships between administrative, qualitative, quantitative, and geospatial data with a focus on understanding local perspectives on the variables to inform our analysis. We will develop preliminary matrices in our initial review of the data and will conduct pattern analysis to capture convergent, latent themes across all data sources. During this phase of analysis, our full research team will meet frequently to integrate qualitative and quantitative data using side-by-side comparisons and joint displays, which include qualitative themes and selected dimensions from the quantitative data. The resulting triangulation will identify the effects of structural and programmatic determinants on both the fidelity and adoption of the intervention and then more clearly disentangle how contextual determinants impact clinical outcomes for program recipients. We will use participatory interpretation with our stakeholders by explaining our methods, presenting them with the results, and making meaning of them together [65, 66]. For example, we will show relationships between lack of resources and fidelity and ask, ‘This is happening in your program/community, why do you think that is?’. By interpreting the results together, we will be able to disseminate findings quickly and effectively to relevant stakeholder groups in the form of public presentations, state reports, white papers, conference papers, and manuscripts in peer-reviewed journals.
