Development and psychometric evaluation of the Implementation Support Competencies Assessment

Measurement development process

Our development of the ISCA followed the systematic and rigorous measurement-development process described by DeVellis [17]. To begin, we leveraged recent scholarship that offers clear and rich descriptions of the constructs intended for measurement: the 15 core competencies posited to undergird effective implementation support [1,2,3,4,5]. Recently developed practice guide materials intended to inform the work of ISPs also include operationalizations, or core activities, for each core competency [18]. These operationalizations, together with the empirical and conceptual work noted above, provided an initial item pool (116 items across the 15 competencies) and a foundation for advancing measurement development from a largely confirmatory (rather than exploratory) perspective.

Next, we sought to identify an optimal format for measurement. This process was informed by other extant competency measures and our desire to balance parsimony (low respondent burden) with informativeness. Ultimately, we selected an ordinal-level response-option set whereby individuals could self-report their level of perceived competence with respect to each item. Consistent with other existing competency self-assessments [19], the response options were: 1 = not at all competent, 2 = slightly competent, 3 = moderately competent, 4 = very competent, and 5 = extremely competent. The research team then initiated a three-stage process of item review and refinement. In the first stage, members of the research team identified opportunities to simplify and consolidate items in the item pool, which reduced the pool from 116 to 113 items and simplified item wording.

The second stage involved use of modified cognitive interviewing with three experienced ISPs. The three participants were invited to review the assessment items in preparation for their interview, and during their interview (about 60 minutes) they were asked the following questions for each competency item set: (a) how clear are the items for this competency? (b) how accessible do the items feel for potential users? (c) what changes, if any, would you recommend for these items? Feedback from respondents led to several minor edits, shifts in terminology (e.g., use of “partner” instead of “stakeholder”), and opportunities to further clarify language used in some items (e.g., defining “champions”). All potential item revisions were reviewed and accepted by two research team members with extensive implementation research and practice experience.

The third stage involved pilot-testing the assessment with a group of professionals who were enrolled in a university-based certificate program focused on cultivating ISP core competencies. Prior to the delivery of certificate program content, participants were asked to complete the ISCA. Following the completion of each competency-specific item set, participants were given the following open-ended prompts: (a) please identify any items that felt unclear or confusing; (b) please identify any language used in these items that was difficult to understand; and (c) please provide any other thoughts or insights you would like to share about these items. The assessment was completed by 39 individuals, enabling us to tentatively assess internal consistency reliability for each competency item set (Cronbach’s alpha values ranged from .70 to .94; McDonald’s omega values ranged from .70 to .95), as well as the distributional properties of item responses (results indicated that item responses were not substantially affected by skewness or kurtosis). We were also able to leverage open-ended feedback to incorporate several minor item edits, which were again reviewed and approved by the same two members of the research team.
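As a rough illustration of these pilot-stage checks (not the exact analysis code), the sketch below computes Cronbach's alpha and item-level skewness and kurtosis for a simulated competency item set; the data and item names are hypothetical, and McDonald's omega would additionally require standardized loadings from a single-factor model.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot responses: 39 respondents x 6 co-learning items, coded 1-5
rng = np.random.default_rng(0)
pilot = pd.DataFrame(
    rng.integers(1, 6, size=(39, 6)),
    columns=[f"colearning_{i:02d}" for i in range(1, 7)],
)

print(f"alpha = {cronbach_alpha(pilot):.2f}")

# Distributional checks for each item
for col in pilot.columns:
    print(col,
          f"skew = {stats.skew(pilot[col]):.2f}",
          f"kurtosis = {stats.kurtosis(pilot[col]):.2f}")
```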

Our next step was to prepare the assessment for validation analyses. In addition to the assessment items, we developed a set of items intended to measure two core constructs posited to be associated with the ISP core competencies [2]. One construct represented ISP gains, or the extent to which ISPs report receiving recognition, credibility, and respect from those who receive their implementation support. The second construct represented recipient benefits, or the extent to which ISPs perceive the recipients of their support experiencing increases in (a) relational capacities with the ISP, (b) implementation capability, (c) implementation opportunities, and (d) implementation motivation [2]. More details about the specific items used to measure these constructs and the ISCA are provided in the Final Measures subsection.

Data collection and sample

To recruit a sample for validation analyses, we leveraged a listserv of nearly 4,000 individuals who had registered for or expressed interest in various events and trainings focused on implementation practice offered by an implementation science collaborative housed within a research-intensive university in the Southeast region of the United States. A series of emails was sent to members of this listserv describing our efforts to validate the ISCA and inviting them to participate. Voluntary responses (no participation incentives were offered) were collected between June and November 2023 using Qualtrics, a web-based survey platform. The survey included informed consent materials, items collecting information about respondent sociodemographic and professional characteristics, the ISCA items, and validation items. The median completion time for the survey was 22.7 minutes among the 357 participants in our final analytic sample.

Table 1 features an overview of participant characteristics. The majority of participants identified as women (84%), with 15% identifying as men, 1% identifying as gender nonconforming, and 1% preferring not to provide information about their gender identity (percentages are rounded, so totals may exceed 100%). Participants could select all racial and ethnic identities that applied to them; 76% identified as White, 11% identified as Black, 9% identified as Asian, 7% identified as Hispanic, 1% identified as Native American/American Indian/Alaska Native, 0.3% identified as Pacific Islander, 3% identified as other, and 2% preferred not to provide information about their racial/ethnic identity. Six continents of residence were represented among participants, with 78% of participants residing in North America, 7% in Europe, 6% in Australia, 4% in Asia, 4% in Africa, and 2% in South America. Thirty-eight percent indicated having more than 15 years of professional experience, 23% indicated having one to five years of experience, 22% indicated having six to ten years of experience, and the remaining 17% indicated having between 11 and 15 years of experience. The following service types were well represented among participants (more than one type could be indicated): public health (32%), health (31%), mental and behavioral health (26%), child welfare (22%), and K-12 education (18%), among others. The three most common work settings were non-profit organizations (36%), higher education (27%), and state government (20%; more than one setting could be indicated). See Table 1 for more details.

Table 1 Participant characteristics (N = 357)

Final measures

Implementation Support Competencies Assessment (ISCA)

Rooted in recent scholarship and foundational steps of measurement development described earlier, the ISCA included item sets (ranging from 5 to 15 items and totaling 113 items) intended to measure each of 15 core competencies posited to undergird effective implementation support, with competencies nested within one of three overarching domains: co-creation and engagement, ongoing improvement, and sustaining change. The co-creation and engagement domain included items designed to measure the following five competencies: co-learning (6 items), brokering (6 items), address power differentials (7 items), co-design (6 items), and tailoring support (7 items). See Appendix 1 for a list of all items associated with this domain. The ongoing improvement domain included items designed to measure the following six competencies: assess needs and assets (6 items); understand context (6 items); apply and integrate implementation frameworks, strategies, and approaches (5 items); facilitation (9 items); communication (6 items); and conduct improvement cycles (6 items). See Appendix 2 for a list of all items associated with this domain. The sustaining change domain included items designed to measure the following four competencies: grow and sustain relationships (11 items), develop teams (15 items), build capacity (8 items), and cultivate leaders and champions (9 items). See Appendix 3 for a list of all items associated with this domain. Information about internal consistency reliability for each item set is featured in the Results section as a key component of the psychometric evaluation of the ISCA.
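As a compact summary of the structure just described, the following sketch encodes the domains, competencies, and item counts of the ISCA (totaling 113 items); it is a convenience representation rather than part of the measure itself.

```python
# Domains, competencies, and item counts of the ISCA (113 items total)
ISCA_STRUCTURE = {
    "co-creation and engagement": {
        "co-learning": 6,
        "brokering": 6,
        "address power differentials": 7,
        "co-design": 6,
        "tailoring support": 7,
    },
    "ongoing improvement": {
        "assess needs and assets": 6,
        "understand context": 6,
        "apply and integrate implementation frameworks, strategies, and approaches": 5,
        "facilitation": 9,
        "communication": 6,
        "conduct improvement cycles": 6,
    },
    "sustaining change": {
        "grow and sustain relationships": 11,
        "develop teams": 15,
        "build capacity": 8,
        "cultivate leaders and champions": 9,
    },
}

total_items = sum(n for domain in ISCA_STRUCTURE.values() for n in domain.values())
assert total_items == 113
```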

When completing the ISCA, participants were instructed to reflect on their experiences supporting implementation in various settings, review each item, and assess their level of competence by selecting one of the following response options: not at all competent (1), slightly competent (2), moderately competent (3), very competent (4), or extremely competent (5). If participants did not have direct experience with a particular item, they were instructed to indicate how competent they would expect themselves to be if they were to conduct the activity described in the item.
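For analysis, these labeled response options map onto ordered integer codes. The minimal sketch below (item name hypothetical) shows one way such responses might be coded in Python, treating them as ordered categories in keeping with the ordinal treatment of items described under Data analysis.

```python
import pandas as pd

# Ordinal response options used throughout the ISCA
RESPONSE_OPTIONS = {
    1: "not at all competent",
    2: "slightly competent",
    3: "moderately competent",
    4: "very competent",
    5: "extremely competent",
}

# Hypothetical responses to a single item from three respondents
responses = pd.Series([3, 5, 2], name="colearning_01")

# Treat responses as ordered categories rather than interval-scaled scores
ordinal = pd.Categorical(responses, categories=list(RESPONSE_OPTIONS), ordered=True)
print(ordinal)
```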

Validation constructs

Consistent with the mechanisms of implementation support articulated by Albers et al. [2], we developed and refined multi-item scales intended to measure two constructs theorized to be byproducts of ISPs possessing proficiency across the 15 core competencies of implementation support provision. Specifically, we developed three items intended to measure ISP gains, or the extent to which ISPs receive recognition, credibility, and respect from those who receive their implementation support. Participants were asked to indicate their level of agreement (ranging from 1 = Strongly Disagree to 5 = Strongly Agree) with the following three statements: “I have credibility among those who receive my implementation support,” “I am respected by those who receive my implementation support,” and “My expertise is recognized by those who receive my implementation support.”

We also developed ten items intended to measure recipient benefits, or the extent to which ISPs perceive the recipients of their support experiencing increases in (a) relational capacities with the ISP, (b) implementation capability, (c) implementation opportunities, and (d) implementation motivation [2]. Specifically, participants were asked to indicate their level of agreement (ranging from 1 = Strongly Disagree to 5 = Strongly Agree) with the following ten statements: “I am trusted by those who receive my implementation support;” “Those who receive my implementation support feel safe trying new things, making mistakes, and asking questions;” “Those who receive my implementation support increase their ability to address implementation challenges;” “Those who receive my implementation support gain competence in implementing evidence-informed interventions in their local settings;” “I provide opportunities for continued learning to those who receive my implementation support;” “I promote implementation friendly environments for those who receive my implementation support;” “Those who receive my implementation support strengthen commitment to their implementation work;” “Those who receive my implementation support feel empowered to engage in their implementation work;” “Those who receive my implementation support demonstrate accountability in their implementation work;” and “Those who receive my implementation support develop an interest in regularly reflecting on their own implementation work.” Information about internal consistency reliability for item sets related to the two validation constructs is featured in the Results section.

Data analysis

To generate evidence of the internal consistency reliability of competency-specific item sets, we estimated Cronbach’s alpha, McDonald’s omega, and Raykov’s rho coefficients for each of the 15 competencies [20, 21]. To generate evidence of the factorial and construct validity of the ISCA, we then employed confirmatory factor analysis (CFA) in Mplus 8.6 [22]. Consistent with our hypothesized model, we estimated three separate second-order CFA models, one for each of the three competency domains: co-creation and engagement, ongoing improvement, and sustaining change. The first CFA model specified the co-creation and engagement domain as a second-order latent factor with the following five competencies specified as first-order latent factors: co-learning, brokering, address power differentials, co-design, and tailoring support. The second CFA model focused on the ongoing improvement domain as a second-order latent factor with the following six competencies specified as first-order latent factors: assess needs and assets; understand context; apply and integrate implementation frameworks, strategies, and approaches; facilitation; communication; and conduct improvement cycles. The third CFA model focused on the sustaining change domain as a second-order latent factor with the following four competencies specified as first-order latent factors: grow and sustain relationships, develop teams, build capacity, and cultivate leaders and champions. In all three models, ISP gains and recipient benefits were regressed on the second-order domain factor, and the error terms for the validation constructs were allowed to covary.
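The second-order models were estimated in Mplus; as a hedged structural illustration only (not the authors' actual syntax), the sketch below specifies the co-creation and engagement model in lavaan-style syntax using the Python semopy package. Item names are placeholders and item sets are abbreviated; in the reported analyses, factor variances were fixed to 1 for identification and the WLSMV estimator with a polychoric input matrix was used, which semopy's defaults do not replicate.

```python
import pandas as pd
import semopy

# First-order competency factors, a second-order domain factor, and the two
# validation constructs regressed on the domain factor with covarying residuals
MODEL_DESC = """
colearning =~ cl_1 + cl_2 + cl_3
brokering  =~ br_1 + br_2 + br_3
power      =~ pw_1 + pw_2 + pw_3
codesign   =~ cd_1 + cd_2 + cd_3
tailoring  =~ ts_1 + ts_2 + ts_3

cocreation =~ colearning + brokering + power + codesign + tailoring

isp_gains =~ gain_1 + gain_2 + gain_3
benefits  =~ ben_1 + ben_2 + ben_3 + ben_4 + ben_5

isp_gains ~ cocreation
benefits  ~ cocreation
isp_gains ~~ benefits
"""

data = pd.read_csv("isca_validation_sample.csv")  # hypothetical item-level data file

model = semopy.Model(MODEL_DESC)
model.fit(data)                  # default estimator; the reported analyses used
                                 # WLSMV with polychoric correlations in Mplus
print(model.inspect())           # parameter estimates
print(semopy.calc_stats(model))  # fit statistics (chi-square, CFI, TLI, RMSEA, ...)
```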

For purposes of model identification and calibrating the latent-factor metrics, we fixed first- and second-order factor means to a value of 0 and variances to a value of 1. To accommodate the ordinal-level nature of the ISCA items (and the items used to measure the validation constructs), we employed the means- and variance-adjusted weighted least squares (WLSMV) estimator and incorporated a polychoric correlation input matrix [23]. Some missing values were present in the data, generally reflecting a steady rate of attrition as participants progressed through the ISCA. Consequently, the analytic sample for each second-order factor model varied: the model for the co-creation and engagement domain included all 357 participants, the model for the ongoing improvement domain included 316 participants, and the model for the sustaining change domain included 296 participants. Within each model, pairwise deletion was used to handle missing data, which enables the flexible use of partial responses across model variables to estimate model parameters. Missing values were shown to meet the assumption of Missing Completely at Random (MCAR) per Little’s multivariate test of MCAR (χ²(94) = 83.47, p = 0.77), a condition under which pairwise deletion performs well [24, 25].
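To illustrate how pairwise deletion draws on partial responses, the following sketch (simulated data, hypothetical item names) computes a pairwise-complete correlation matrix and the number of cases contributing to each pair; Pearson correlations stand in here for the polychoric correlations used in the actual analyses, which require specialized routines.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical item responses with missingness concentrated in a later item,
# mimicking attrition as participants progressed through the ISCA
items = pd.DataFrame(rng.integers(1, 6, size=(357, 4)).astype(float),
                     columns=["item_1", "item_2", "item_3", "item_4"])
items.loc[rng.random(357) < 0.15, "item_4"] = np.nan

# Pairwise deletion: each correlation uses all cases complete on that pair
corr = items.corr(method="pearson", min_periods=30)
print(corr)

# Number of complete cases contributing to each pairwise correlation
pair_n = items.notna().astype(int).T @ items.notna().astype(int)
print(pair_n)
```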

To assess model fit, the following indices and thresholds were prespecified as indicative of good model fit: Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) values greater than 0.95, standardized root mean square residual (SRMR) values less than 0.08, and root mean square error of approximation (RMSEA) values less than or equal to 0.06 (including the upper limit of the 90% confidence interval) [26, 27]. Each factor-analytic model was over-identified and sufficiently powered to detect not-close model fit [28].
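A small helper like the sketch below captures these prespecified cutoffs; the dictionary keys are hypothetical and would be populated from the CFA output.

```python
def meets_fit_criteria(fit: dict) -> bool:
    """Check prespecified good-fit criteria: CFI/TLI > .95, SRMR < .08,
    RMSEA <= .06 (including the upper limit of its 90% CI)."""
    return (
        fit["cfi"] > 0.95
        and fit["tli"] > 0.95
        and fit["srmr"] < 0.08
        and fit["rmsea"] <= 0.06
        and fit["rmsea_ci_upper"] <= 0.06
    )

# Hypothetical fit statistics for one of the second-order CFA models
example_fit = {"cfi": 0.97, "tli": 0.96, "srmr": 0.05,
               "rmsea": 0.05, "rmsea_ci_upper": 0.058}
print(meets_fit_criteria(example_fit))  # True
```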

Ethics approval

We submitted our study proposal (study #: 23-0958) to our university’s Office of Human Research Ethics, which approved the study and determined it to be exempt from further review.
