Findings from an Organizational Context Survey to Inform the Implementation of a Collaborative Care Study for Co-occurring Disorders

Overview of study design

The purpose of the survey was to develop a pre-implementation understanding of the organizational context of the clinics participating in the CLARO RCT. The target population was staff in the participating primary care clinics, including clinicians. The intent was to survey as many staff members from each healthcare system as possible and to gather data from staff working in a range of clinic roles, rather than to take a sampling approach.

The survey included informed consent language that explained the survey to respondents and specified that participation was voluntary, that respondents could skip items or stop taking the survey at any time, that personal data would be kept confidential, and that data were for research use only and would not be shared with outside parties. Those invited to take the survey were entered into a sweepstakes for the chance to win one of 13 $100 gift cards, with at least one winner per clinic.

To pre-test the survey, selected CLARO team members and representatives from the healthcare systems were invited to take it and provide feedback on question wording and completion time.

Recruitment procedures and survey administration

A total of 541 staff at the 13 clinics were sent emails inviting their voluntary survey participation. Inclusion criteria encompassed all staff members, including clinicians and other clinic staff, who were likely to interact with patients who might qualify for CLARO. Exclusion criteria were staff who worked in dental, facilities, or other areas in which interaction with potential CLARO patients would be unlikely. The healthcare system leaders most directly involved with the CLARO research team also did not take the survey because of their close involvement with the CLARO RCT; in some cases, they were instead asked to provide input on the survey prior to launch (e.g., to test the length of time the survey would take). Staff who took the survey included medical providers (physicians, physician assistants, and nurses), behavioral health providers, and other non-clinician staff (including medical assistants and key administrators). This approach allowed for a relatively comprehensive view of each clinic.

For the three healthcare systems, CLARO RCT start dates (i.e., the dates when patient enrollment into CLARO was scheduled to begin) ranged from November 2020 to February 2021. The pre-implementation survey was administered to clinic staff in the 3–4 weeks prior to the date on which RCT enrollment began at each clinic. Because of COVID-19 restrictions, no in-person activities in the clinics were possible, so all facets of survey administration were conducted virtually.

In each system, a clinic champion or administrator emailed all staff the week prior to survey launch to let them know they might receive a legitimate (i.e., not spam) survey invitation email connected to CLARO. These emails also included language that mirrored the IRB-approved survey language to emphasize the voluntary nature of the survey. Invited staff then received an email from the RAND Survey Research Group containing a unique URL for the survey. Staff who had not yet completed the survey received a weekly email reminder while the survey remained open.

Response rates

Among the three healthcare systems, response rates ranged from 50 to 68% across systems and from 56 to 85% across roles (Table 1), which is considered acceptable for a voluntary organizational survey.24,25 Table 7 in Appendix 1 provides demographic information about the respondents.

Table 1 Response rates (RR) by role and healthcare system

System 1, a Federally Qualified Health Center (FQHC), contained clinics located in both urban and rural parts of the state. System 2 was also an FQHC, with clinics located in a rural part of the state different from that served by system 1. In system 3, which was affiliated with a university, clinics were located in an urban area.

Measures

This study adapted previously validated CFIR scales from Fernandez et al.20 and Kegler et al.26 and incorporated the Brief Opioid Stigma Scale from Yang et al. (Table 2).27 At least one scale was related to each of the five CFIR domains. The scales from Kegler et al.26 were developed and validated by those authors to help researchers increase the likelihood of implementation success during any stage of implementation, including the planning phases. Their set of measures addressed each of the five CFIR domains and was easily adaptable. Like Kegler et al.,26 Fernandez et al.20 developed measures focused on the Inner Setting CFIR domain; their study provided scales useful for examining facets of culture, climate, leadership, and resourcing. Yang et al.’s27 Brief Opioid Stigma Scale, a 6-item scale focusing on perceptions of individual- and community-level stigma, was also included in the survey. Though not part of the CFIR, the stigma scale was relevant given the focus of the CLARO RCT: its individual-level section contributes to the understanding of the Inner Setting, and its community-level section to the Outer Setting. Together, these measures provided coverage of concepts across the major CFIR domains. Additionally, the authors reviewed concepts to ensure they made sense in the context of the CLARO RCT. For example, the survey did not include items on constructs such as Design and Quality of Packaging (a CFIR Intervention Characteristic construct) or External Policies and Incentives (a CFIR Outer Setting construct) because these were less salient to the CLARO RCT and to the aim of this research, which focused on understanding the clinic setting.

Table 2 Summary of survey measures

All scales used in the survey were 5-point Likert scales (strongly disagree to strongly agree). To adapt these scales to the context of CLARO, minor edits were made to item wording (e.g., using the term “clinic”). Additionally, given the pre-implementation timing of the survey, the constructs chosen reflected the types of antecedent constructs that Damschroder et al.22 later identified; evaluative or retrospective items (e.g., the intervention was “used by colleagues who were happy with it”26(p.4184)) were excluded because this survey was administered prior to the start of the intervention.

Survey analysis

In the survey, all respondents were asked to complete the CFIR scales and demographic items. Behavioral health providers (BHPs, including those with LADAC, LCSW, LMHC, LMSW, LPCC, or PsyD credentials) and medical providers (MPs, including those with MD, PA, NP, or RN/LVN credentials) were asked additional questions at the end of the survey about their clinic workflow (e.g., the extent to which their appointments were in-person or virtual). Respondents who answered 80% or more of the core questions were included in the analysis. This 80% cutoff was used because, when missing data were analyzed, it typically retained respondents who had answered at least some items in the main sections of the survey (e.g., respondents who answered the CFIR questions but did not complete the demographic items located at the end of the survey).
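As an illustration of this inclusion rule, the following Python sketch applies an 80% completion cutoff to a respondent-level data table. This is a minimal sketch under assumed conventions: the DataFrame layout, column names, and function name are hypothetical and are not taken from the study’s actual analysis code.

```python
import pandas as pd

def apply_completion_cutoff(responses: pd.DataFrame,
                            core_items: list[str],
                            cutoff: float = 0.80) -> pd.DataFrame:
    """Keep respondents who answered at least `cutoff` of the core items.

    Assumes one row per respondent and one column per core survey item,
    with NaN marking a skipped item (hypothetical layout).
    """
    answered_share = responses[core_items].notna().mean(axis=1)
    return responses[answered_share >= cutoff]

# Toy example: three respondents, five core items.
responses = pd.DataFrame({
    "item_1": [5, 4, None],
    "item_2": [4, None, None],
    "item_3": [3, 4, None],
    "item_4": [5, 5, 2],
    "item_5": [4, 4, None],
})
analytic_sample = apply_completion_cutoff(responses, list(responses.columns))
# Respondents 0 (5/5 items answered) and 1 (4/5 = 80%) are retained;
# respondent 2 (1/5 = 20%) is excluded.
```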

Survey questions asked about the respondent’s clinic environment, not about the healthcare system as a whole. Responses were nonetheless aggregated to the healthcare system level to protect the confidentiality of respondents (e.g., some clinics had fewer than 30 staff in total) and to facilitate comparison across the three distinct systems. Furthermore, each system relied on a system-wide top leadership team, and in some instances, staff members worked at more than one site within a system.

Constructs were analyzed based on the percent of favorable responses, with ratings of agree or strongly agree considered favorable, in order to gauge relatively strong versus weak context areas. Most survey items were phrased in an affirming way (e.g., “This clinic does a good job…”). Therefore, in most instances, construct scores that fell below 50% favorability (i.e., less than 50% of respondents agreed or strongly agreed) were considered areas for improvement. Three constructs, complexity and the two stigma scores, were exceptions: due to question wording, high percent-favorable ratings indicated deficits or areas for improvement. Scores were not reversed for these three constructs because keeping the items as-is was simpler and clearer; for example, it was unclear whether reverse-scored items would reflect respondents’ survey selections (e.g., disagreeing or strongly disagreeing that “most people believe that a person who is addicted to opioids cannot be trusted” may not be the same as agreeing or strongly agreeing that “most people believe that a person who is addicted to opioids can be trusted”).
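To make the scoring rule concrete, the sketch below computes percent favorable for each construct, aggregated to the healthcare system level as described above, and flags potential areas for improvement while treating the three reverse-worded constructs differently. The column names, construct labels, and toy data are illustrative assumptions; only the agree/strongly-agree definition of favorability and the 50% threshold come from the text.

```python
import pandas as pd

# Constructs where high favorability signals a deficit (per the text:
# complexity and the two stigma scores); labels here are hypothetical.
REVERSE_WORDED = {"complexity", "stigma_individual", "stigma_community"}

def percent_favorable(scored: pd.DataFrame) -> pd.DataFrame:
    """Percent of agree/strongly agree responses (ratings of 4 or 5 on the
    5-point scale) per healthcare system and construct, with a flag for
    possible areas for improvement."""
    scored = scored.assign(favorable=scored["rating"] >= 4)
    summary = (scored.groupby(["system", "construct"])["favorable"]
                     .mean()
                     .mul(100)
                     .reset_index(name="pct_favorable"))
    reverse = summary["construct"].isin(REVERSE_WORDED)
    # Below 50% flags affirming constructs; above 50% flags the
    # reverse-worded ones (whose scores were deliberately not reversed).
    summary["needs_attention"] = (
        (~reverse & (summary["pct_favorable"] < 50)) |
        (reverse & (summary["pct_favorable"] > 50))
    )
    return summary

# Toy example: one system, one affirming and one reverse-worded construct.
scored = pd.DataFrame({
    "system": [1, 1, 1, 1],
    "construct": ["leadership", "leadership", "complexity", "complexity"],
    "rating": [5, 2, 4, 5],
})
print(percent_favorable(scored))
# leadership: 50% favorable (not flagged); complexity: 100% (flagged).
```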
