Exploring the importance of predisposing, enabling, and need factors for promoting Veteran engagement in mental health therapy for post-traumatic stress: a multiple methods study

Study purpose

The purpose of this study is to advance a contextually rich understanding of how predisposing, enabling, and need factors promote Veteran engagement in PTSD therapy using a multiple perspective and methods approach. Our research questions are:

1. Which predisposing, enabling, or need factors are the most influential drivers of Veteran PTSD therapy initiation and completion of an adequate dose (or treatment retention)?

2. In what ways do Veterans and their family members perceive predisposing, enabling, and need factors to promote engagement in PTSD therapy?

Methods overview

Given the lack of clear justification from past studies and theory about which enabling factors are most important for PTSD care-seeking, we applied a machine learning approach, which allows for data-driven exploration when there is limited information to inform hypotheses. We applied this method to a quantitative dataset that includes a rich set of predisposing, enabling, and need factors to learn which factors most influence Veteran initiation of and retention in PTSD therapy. We also considered economic resources as an explicit enabling factor. In this way, we can elucidate how these factors operate within the constellation of other drivers of treatment engagement. Second, economic and social enabling factors are promising levers for promoting mental health care-seeking, as are family-level or household constructs. Yet the perspectives of family members are rarely captured, and existing knowledge is based primarily on Veteran perspectives. We therefore included quantitative data collected from family members and friends, whom we refer to as “support partners”. Third, because our quantitative dataset did not capture factors such as treatment attitudes, social support, and social norms, we supplemented it with qualitative data gathered from Veterans and their support partners to explore how participants talk about predisposing, enabling, and need factors and thereby augment what we learned from the quantitative data. All study activities were reviewed and approved by the Durham VA Institutional Review Board (Protocol #02227).

Study design

We applied a multiple methods design to enhance our understanding of determinants of treatment engagement and to overcome the limitations of quantitative-only and qualitative-only approaches. For example, our quantitative approach generates insights from a large sample with many potential determinants of treatment engagement, but its reductionist approach does little to elucidate nuances in how these determinants function [17]. While our qualitative analysis provides a nuanced understanding built on Veteran and family member perspectives, we were limited in the number of constructs we could explore. Our quantitative study is a secondary analysis of VA medical record data from an existing cohort of Veterans [49] merged with survey data from an associated support partner (e.g., partner, spouse). This dataset was originally used to evaluate the VA Program for Comprehensive Assistance for Family Caregivers (PCAFC) [49]. While this choice limits our sample to Veterans/support partners who enrolled in PCAFC, the dataset includes a rich set of support partner-reported information matched to the medical records of the Veterans for whom they provided care, which we would have been unable to collect otherwise because support partners are not systematically identified in VA medical records [29]. Furthermore, the dataset included information about predisposing, enabling, and need factors and the sample size needed to address the research questions. We analyzed the quantitative data using machine learning algorithms to explore which drivers were most strongly related to initiating a new mental health treatment episode and completing an adequate dose, defined as at least 8 sessions [41]. In parallel, we interviewed 18 Veterans and 13 associated support partners about factors that promoted or hindered engaging in PTSD mental health therapy to provide more breadth and depth for understanding the quantitative results [22]. Data from both studies were analyzed separately according to constructs of the Andersen model. Table 1 provides an overview of the design for the quantitative and qualitative studies.

Table 1 Overview of qualitative and quantitative arms of the study

Quantitative procedures

Sample

Quantitative data are from a prior study of dyads of Veterans and an associated support partner who had applied to the VA Program for Comprehensive Assistance for Family Caregivers (PCAFC) between May 1, 2010 and September 1, 2015 and subsequently enrolled in the program for at least 90 days (REFERENCE BLINDED FOR REVIEW). PCAFC is a national program of the VA Caregiver Support Program and supports eligible Veteran VA users who served during the Iraq and Afghanistan conflicts and were injured during military service. The support partners in our sample had completed a self-administered, web-based survey in September or October 2015 (n = 1,407); the recruitment rate was 14%, and the sampling process is briefly described in Additional file 1: Methodological Appendix 1.1. Full details about the survey methodology and cohort are published elsewhere (REFERENCE BLINDED FOR REVIEW). The sample for the present study was restricted to dyads in which the Veteran had an International Classification of Diseases, 9th Revision (ICD-9) diagnosis of PTSD in their VA medical records at any time between January 1, 2000 and September 2, 2015 and had not attended a PTSD psychotherapy visit during the baseline period, defined as September 4, 2014 through September 3, 2015; 170 dyads were removed under these exclusion criteria, yielding an analytical sample of n = 1,237 dyads.
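As a minimal illustration of how these inclusion and exclusion criteria might be applied, the sketch below uses hypothetical R data frames (`dyads`, `diagnoses`, `pt_visits`) and column names; it is not the authors' code, and the ICD-9 code for PTSD (309.81) appears only for concreteness.

```r
# Illustrative sketch with assumed table and column names (not the authors' code)
library(dplyr)

# Veterans with a PTSD diagnosis recorded between Jan 1, 2000 and Sep 2, 2015
ptsd_veterans <- diagnoses %>%
  filter(icd9_code == "309.81",
         dx_date >= as.Date("2000-01-01"),
         dx_date <= as.Date("2015-09-02")) %>%
  distinct(veteran_id)

# Veterans with a PTSD psychotherapy visit during the baseline period
baseline_therapy <- pt_visits %>%
  filter(visit_date >= as.Date("2014-09-04"),
         visit_date <= as.Date("2015-09-03")) %>%
  distinct(veteran_id)

analytic_sample <- dyads %>%
  semi_join(ptsd_veterans, by = "veteran_id") %>%   # require a PTSD diagnosis
  anti_join(baseline_therapy, by = "veteran_id")    # exclude baseline therapy users
```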

Data sources

The existing dataset merged information from three data sources: VA electronic health records, administrative data from the Caregiver Support Program, and survey data from support partners of Veterans. Administrative data sources included variables such as drive time to the nearest VA facility, PTSD medication refills, Veteran healthcare need risk score (the Nosos score), and Veteran diagnoses. Additional file 2: Table S1 provides more information on the data source for each variable.

Survey

The survey included 100 questions about support partner and Veteran socio-demographics, emotional, health, and financial wellbeing (e.g., level of education, income, depressive symptoms, health status, perceived financial stress), caregiving experiences (e.g., time spent caregiving, subjective burden, positive attitudes towards caregiving), and use of and satisfaction with services offered through the PCAFC. The survey used validated measures when possible, including the Center for Epidemiological Studies-Depression (CESD-10) scale for depressive symptoms [2], the Zarit subjective burden scale for support partner burden [3], a subscale of the Caregiver Reaction Assessment for perceived financial strain [13], Positive Aspects of Caregiving [44], and VR-12 health status [23].

Outcomes

The main outcomes of interest were Veteran initiation of and retention in an episode of VA-provided mental health therapy for PTSD. Mental health therapy visits were identified using clinic visit codes, provider classification, and ICD-10 codes indicating that the Veteran received individual or group behavioral counseling for PTSD (Spoont, personal communication 12/6/19) (Additional file 1: Methodological Appendix 2.1). If a Veteran had more than one qualifying visit on the same day, we counted those visits as a single visit. Initiation of a treatment episode was defined as at least two sessions of therapy received on different days within 21 days of one another between December 1, 2015 and September 30, 2017 (determined through conversations with VA clinicians and researchers on the project Advisory Board, 12/10/19). We required two visits to maximize the likelihood that Veterans had actually engaged in therapy, because the first visit could be an evaluation visit rather than therapy, and we limited the time between visits to 21 days because visits occurring 30 days apart might indicate case management rather than a therapy episode (Spoont, personal communication 12/6/19). We used the same definition as prior studies for completion of an adequate dose of treatment (referred to as “retention”), specified as receipt of at least 8 sessions of therapy [41] within 180 days between December 1, 2015 and September 30, 2017.
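To make the outcome construction concrete, the sketch below derives both flags from a hypothetical `visits` data frame containing one row per Veteran per qualifying visit date; reading the retention criterion as 8 visits within 180 days of an index visit is our interpretation for illustration, not the authors' code.

```r
# Illustrative derivation of initiation and retention flags (assumed column names)
library(dplyr)

window_start <- as.Date("2015-12-01")
window_end   <- as.Date("2017-09-30")

outcomes <- visits %>%
  distinct(veteran_id, visit_date) %>%               # same-day visits count once
  filter(visit_date >= window_start, visit_date <= window_end) %>%
  arrange(veteran_id, visit_date) %>%
  group_by(veteran_id) %>%
  summarise(
    # Initiation: at least 2 visits on different days within 21 days of one another
    initiation = n() >= 2 && any(diff(visit_date) <= 21),
    # Retention: at least 8 visits within 180 days of some index visit (our reading)
    retention  = any(sapply(visit_date, function(d)
      sum(visit_date >= d & visit_date < d + 180) >= 8))
  )
```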

Treatment drivers

We modeled 55 treatment drivers that aligned with constructs in the Andersen model (i.e., predisposing, enabling, and need). The variables are presented in Additional file 2: Table S1, organized by model construct and specified as they were entered into the models. Briefly, predisposing factors included age, gender, marital status, etc.; enabling resources included marital status, financial strain, family member wellbeing, etc.; and need factors included health service use, medical diagnoses, Nosos score (i.e., a risk score denoting general need for health services), etc. Baseline was defined as the date the survey was deployed (September 3, 2015).

Quantitative data analysis

The quantitative analysis aimed to identify the most influential Veteran- and family-level drivers of Veteran initiation of and retention in mental health therapy for PTSD. This study is exploratory in that there are no well-defined, evidence-based or theoretical a priori hypotheses about which drivers are most influential, especially when information reported by support partners is considered. Machine learning algorithms search for patterns in the data and can identify complex relationships that support new insights and hypothesis development. We applied well-known algorithms and accepted measures of rigor specific to machine learning approaches, which include training the algorithms (objectivity), reporting predictive fit (validity), and assessing the consistency (reliability) of model results; for details see Additional file 1: Methodological Appendix 3. We used a classification random forest algorithm for binary outcomes [5]. Random forests identify the relative importance of variables in the model; they do not produce estimates of effect or assign a direction of effect.
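As a brief sketch of this approach (not the authors' implementation), the snippet below fits a classification random forest with the randomForest R package and extracts relative variable importance; the training data frame `train` and the binary factor outcome `initiation` are assumed names.

```r
# Minimal classification random forest sketch (assumed data frame and outcome name)
library(randomForest)

set.seed(2020)
rf_fit <- randomForest(initiation ~ ., data = train,
                       ntree = 1000, importance = TRUE)

# Random forests rank drivers by relative importance (e.g., mean decrease in
# accuracy) but do not return effect estimates or directions of effect
imp <- importance(rf_fit)
head(imp[order(-imp[, "MeanDecreaseAccuracy"]), ], 10)
varImpPlot(rf_fit)
```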

We created a dataset for each outcome. Missing data were imputed for each dataset using a machine learning algorithm to predict missing values [21]. We applied a sampling rebalancing technique to the retention outcome dataset [40]. Following accepted convention, we split each outcome dataset into a 70% training dataset and a 30% testing dataset. For each outcome, we first tuned the parameters in the training dataset, then ran the best-fitting model in the testing dataset to assess its predictive performance, and finally ran the best model in the full dataset to estimate the relative importance of each variable in driving the outcome [27]. Random forests were estimated using 1,000 bootstrapped trees. To select the most influential variables presented in this paper, we ran the outcome models using different specifications of the random forest algorithm, and two analysts compared the results across algorithms to identify the most influential variables that appeared consistently. We estimated the bivariate associations of these variables using regression models to understand the direction of effect. We conducted a robustness check using cross-validation to assess the consistency of the results of our primary model. R (version 4.0.2, run in RStudio) and SAS (version 9.4) were used. See Additional file 1: Methodological Appendix 4.1 for details.
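The surrounding pipeline might look roughly like the sketch below, again under assumed object and column names; missForest is shown only as a stand-in for the imputation algorithm cited above, and the rebalancing and parameter-tuning steps are omitted for brevity.

```r
# Illustrative pipeline sketch (not the authors' code); `analysis_df` holds the
# outcome `retention` and the candidate drivers
library(missForest)
library(randomForest)

set.seed(2020)

# Impute missing values with a machine learning imputer (stand-in choice)
imputed <- missForest(analysis_df)$ximp

# 70/30 train/test split
train_idx <- sample(seq_len(nrow(imputed)), size = floor(0.7 * nrow(imputed)))
train <- imputed[train_idx, ]
test  <- imputed[-train_idx, ]

# Fit in the training data (parameter tuning omitted here)
rf_train <- randomForest(retention ~ ., data = train, ntree = 1000, importance = TRUE)

# Report predictive fit in the held-out 30%
pred <- predict(rf_train, newdata = test)
mean(pred == test$retention)        # simple classification accuracy

# Refit in the full dataset to estimate relative variable importance
rf_full <- randomForest(retention ~ ., data = imputed, ntree = 1000, importance = TRUE)
importance(rf_full)
```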

Qualitative procedures

Participants and data sources

Interviews were conducted with a parallel sample recruited for this study [7]. Qualitative participants were identified from Veterans Health Administration administrative data. Veterans had an ICD-10 PTSD diagnosis and had applied to PCAFC. The target support partner was the individual associated with the most recent application to PCAFC between May 1, 2010 and May 7, 2019. In one case, the support partner participant was not the person who applied to PCAFC but was the Veteran’s spouse. The unit of observation was the dyad, regardless of whether both individuals enrolled in the study. To qualify, Veterans also had to have a referral for a VA mental health visit in the 18 months prior to the data pull in December 2019, to recall receiving that referral, and to report being in touch with the support partner who applied with them to PCAFC. Eligible Veterans were randomly sorted to receive letters with study details and an opt-out number. If potential participants did not opt out within 7 days, they were contacted by telephone by study staff, who explained the purpose of the research, conveyed the risks and benefits of participation, and assessed eligibility. Once the study team enrolled the Veteran, they attempted to enroll the support partner using the same process. Enrolled participants provided verbal consent and received $25 for completing the interview. See Fig. 2 for the qualitative study flow.

Fig. 2 Qualitative study flow. Note: SP = support partner; NIS = number not in service

Data collection

Interview guides were developed to understand barriers to and facilitators of engaging in PTSD therapy, both currently and in the past, and were modified after an initial review of the first set of interviews. Veterans were asked about predisposing factors (e.g., treatment attitudes, past treatment experiences, motivations, social norms), enabling factors (e.g., economic and social facilitators of and barriers to treatment initiation and engagement), and need factors (e.g., descriptions of mental health needs, desired outcomes for treatment). Veterans were also asked about the social support they received for their mental health, both from all sources and from the support partner named on their PCAFC application, and about the role of social support in treatment. Support partners were asked to describe their understanding of and perspective on the Veteran’s mental health treatment and how involved they were in the Veteran’s past and current mental health treatment episodes.

Data were collected between February 2020 and February 2021 by trained interviewers (AS, MSB, HW); participant recruitment paused between April and August 2020 due to challenges related to the COVID-19 pandemic. Interviews (n = 31) were semi-structured and lasted approximately 30 minutes. Veterans (n = 18) and support partners (n = 13) were interviewed separately. Interviews were digitally recorded and transcribed (n = 26) or documented through detailed notes (n = 5). One participant requested that their data not be analyzed, so the analytical sample was n = 30.

Qualitative data analysis

Transcripts were reviewed and summarized. We used Hamilton’s rapid analysis approach to structure the qualitative inquiry [16]. The qualitative analyst (AS) created a summary template for each dyad (or individual, if only one member of the dyad was interviewed) that identified barriers to and facilitators of treatment engagement, organized by the main constructs of the Andersen model (i.e., predisposing, enabling, and need). One of three members of the qualitative team summarized each transcript or set of interview notes into this template, and another member reviewed each summary for accuracy. To establish credibility, the team met weekly to discuss “similarities within and across study participants”, emerging themes, and their reactions to the data [45]. The data from each summary template were then transferred into a single matrix, and coders summarized the information by construct across dyads/cases, with each construct reviewed by two coders. To establish dependability and confirmability, the coding team met to compare summaries and then met again to review all summaries, identify emerging themes, and discuss their interpretation of the data [45]. Once the team developed a written narrative of the themes, the data in the matrices and the transcripts were reviewed again to ensure that the themes reflected participants’ reporting.

Comparing qualitative and quantitative findings

Once the qualitative and quantitative data were analyzed, the coders applied a deductive approach that mapped insights from the qualitative data (on therapy engagement) and findings from the quantitative data (on therapy initiation and retention) onto the Andersen model constructs. To do this, the coding team compared findings that emerged from the quantitative analysis with themes from the qualitative analysis to identify how the quantitative findings could be better understood in light of what participants reported during the interviews. We supplemented this analysis with quotes from the transcripts to illustrate participants’ perspectives. We then compared findings across data sources to identify common themes and areas where findings diverged. We developed overarching narrative summaries of results that incorporated insights from both the qualitative and quantitative data. Finally, we reread the transcripts to ensure that the integrated findings aligned with the interview data.
