Cash Transfers and After-School Programs: A Randomized Controlled Trial for Young Men at Risk of Violence Exposure in Wilmington, Delaware

We conducted a three-arm RCT and pre-registered our research design with the Open Science Framework (https://osf.io/wxtsb/). We randomized participants into the following three groups:

1. Cash transfer paired with after-school programming (conditional cash transfer). This group began programming in the fall of 2021 and received six months of after-school curriculum accompanied by $150 a week, conditional on attending enough sessions of the after-school program to submit all of the documents required for the cash transfer (generally 2 to 4 sessions). The programming and cash transfer ended in May of 2022. There were 59 programming plus cash transfer participants; 55 excluding dropouts.

2. Cash transfer alone (unconditional cash transfer). This group received $150 weekly for six months, beginning in the fall of 2021 and ending in May 2022. There were 56 participants in the cash transfer only group.

3. Control group. This group was waitlisted for programming after the completion of the study. Control group programming began in the summer of 2022 and ran until November 2022. There were 57 control group participants; 56 excluding dropouts.

The after-school programming consisted of tutoring, conflict resolution training, financial coaching, recreational and arts activities, and soft skills training. The Delaware Department of Health and Social Services (DHSS) provided transportation to and from the program venue, as well as food and tutoring between 3:30 pm and 4:30 pm, before all other planned activities. Programming took place in an office park and retail center in the riverfront area of Wilmington, which participants considered a safe, neutral location.

Study Participants

Young men eligible for the study were between the ages of 14 and 17, lived in low-income families (those eligible for Medicaid), and resided in three Wilmington ZIP codes that DHSS identified as having high rates of violent crime: 19801, 19802, and 19805.

Using Medicaid enrollment data, DHSS identified close to 2,000 eligible young men in the spring of 2021 to participate in the study. During the summer and fall of 2021, we worked with DHSS to send an introductory flyer, e-mail, and text messages with information about the program and study to all eligible youth.

We implemented different IRB-approved methods to increase enrollment into the study:

Cold calls to eligible families conducted by the Urban team to explain the study and invite them to enroll.

A dedicated Facebook page about the study with flyer-style posts and information about how to enroll that was continuously updated.

A raffle for $250 for enrolled participants who successfully invited other eligible participants to enroll.

Partnerships with local nonprofit organizations focused on youth development to invite eligible clients to enroll in the study.

However, even with these enhanced recruitment methods, participation did not reach our sample size goal of 225. Due to COVID-19 safety protocols in place at the time of recruitment, in-person recruitment activities were not allowed. These restrictions likely had a significant impact on our final sample size, since face-to-face connection is key to gaining trust among potential participants. Overall, 172 young men enrolled in the study; 167 excluding dropouts.

Randomization

Randomization occurred in one batch in October 2021. We used blocked (stratified) randomization to ensure balance across groups in terms of race, ethnicity, and neighborhood. Pairs of siblings were kept in the same treatment or control group. Individuals randomized into the cash transfer plus programming and cash transfer alone groups had a 2021 start date, and those randomized into the control group were offered programming starting in 2022, after the RCT was completed. All groups, including the waitlisted control group, took all surveys during the same time ranges. Within the first few weeks of enrollment, one youth dropped out of the control group because he was incarcerated, and four dropped out of the cash transfer plus programming group for unreported reasons.
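The blocked randomization described above, with sibling pairs kept together, can be sketched as follows. This is an illustrative sketch only; the field names (`race`, `ethnicity`, `zip`, `sibling_group`) are hypothetical stand-ins, not the study's actual data layout.

```python
import random
from collections import defaultdict

def block_randomize(participants, arms=("cash+program", "cash_only", "control"), seed=2021):
    """Assign participants to arms, shuffling within strata defined by
    race, ethnicity, and ZIP code so the blocking variables stay balanced.
    Each participant is a dict with 'id', 'race', 'ethnicity', 'zip', and
    an optional 'sibling_group' key (hypothetical field names)."""
    rng = random.Random(seed)
    # Treat each sibling group as a single unit so siblings share an arm.
    units = defaultdict(list)
    for p in participants:
        units[p.get("sibling_group") or p["id"]].append(p)
    # Stratify units by the blocking variables of their first member.
    strata = defaultdict(list)
    for members in units.values():
        first = members[0]
        strata[(first["race"], first["ethnicity"], first["zip"])].append(members)
    assignment = {}
    for stratum_units in strata.values():
        rng.shuffle(stratum_units)
        # Deal shuffled units to arms in round-robin order within the stratum.
        for i, members in enumerate(stratum_units):
            for p in members:
                assignment[p["id"]] = arms[i % len(arms)]
    return assignment
```

Round-robin dealing within each stratum keeps arm sizes as equal as the stratum allows, which is one common way to implement blocked randomization.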

Data Collection

We collected data via surveys and through administrative data from the Delaware Department of Health and Social Services on program participation and cash transfer pickup rates, and from the Delaware Department of Education on school attendance and disciplinary actions.

Surveys

Participants enrolled in this study were administered a total of six surveys: a baseline survey after enrollment but before programming began (or during the first week of programming), four consecutive monthly surveys, and a final exit survey after the completion of the program and cash transfer.

The baseline and the final exit surveys consisted of a maximum of 107 questions (inclusive of skip patterns). The baseline survey assessed demographics and self-reported school attendance, employment status, saving and spending patterns, financial stress, perceived health status, and criminal justice involvement. The measure of food insecurity was selected from the National Survey of Children’s Health [10]. Five items were used to assess housing instability consistent with the CDC definition, including affordability, risk of eviction, and frequent moves [11]. Validated scales were used to assess social support and psychological distress, the latter of which was incorporated into the physical and mental health composite [12, 13]. Fifteen items with strong psychometric properties were used to measure violent and non-violent delinquency behaviors [14]. Three items were used to assess self-esteem, and two items were used to assess future orientation in the domains of fatalism and belief in the future [14, 15]. Self-reported lifetime use and past-30-day frequency of substance use were assessed with measures consistent with those from the Youth Risk Behavior Surveillance System [1].

The monthly survey consisted of 36 questions and was repeated for four consecutive months. Topics included income, employment status, purchasing patterns, family responsibilities and financial contributions to household expenses, food and financial security, delinquency behaviors, criminal justice involvement, substance use, self-esteem, and perceived health status for the previous 30 days. For youth in the cash transfer plus programming group, the monthly survey assessed engagement in programming, including reasons for missing program sessions.

The final exit survey repeated assessments of food insecurity, housing instability, saving and spending patterns, financial stress, social support, psychological distress, self-reported school attendance, employment status, financial stress, perceived health status, substance use, delinquency behaviors, criminal justice involvement, and future orientation. For youth in the cash transfer plus programming group, the exit survey assessed engagement in and perceived usefulness of the program.

Upon survey completion, participants were sent an e-gift card to either their cell phone or email address for time spent participating in the study. Participants received a $20 e-gift card for the baseline survey and each completed monthly survey, and a $40 e-gift card for completing the exit survey. To increase exit survey response rates, the team increased the value of the gift card ($20 to $40), used a third-party outreach worker to visit participants at their homes to assist with completion of the exit survey, and offered a special event with food at the location of the program sessions where participants were invited to complete the survey. These increased outreach methods led to 126 participants completing the exit survey.

Administrative Data

We collected administrative data from the Delaware Department of Health and Social Services on program participation and cash transfer pickup, as well as data from the Delaware Department of Education on school attendance and disciplinary actions. We also used Medicaid enrollment data to draw our sample.

Outcomes of Interest

The primary outcomes of interest for this study include those related to physical and mental health, health behaviors, and school attendance and disciplinary actions (Table 1). The secondary outcomes of interest include criminal history/involvement with the justice system, financial health, and social supports. Despite being a goal of the intervention, criminal history/involvement with the justice system is not a primary outcome in our analysis due to a lack of administrative data and the low frequency of such engagement which makes estimation challenging. Some criminal justice related measures are included in our primary measures of physical and mental health when they relate to injury and health behaviors, such as questions about fighting and carrying weapons.

Table 1 Outcomes of interest

To account for multiple outcomes and the probability of a type I error (a “false positive”), we combine individual measures into composite indices, as shown below. This reduces concerns about false positives for individual variables, similar to the methods used by Kling, Liebman, and Katz [16] and Karlan and Valdivia [17] (to see results for each individual measure, see the online appendix).
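The index construction in Kling, Liebman, and Katz standardizes each component against the control group's mean and standard deviation and then averages the resulting z-scores for each participant. A minimal sketch of that pattern, with a hypothetical data layout (the study's actual component lists are in Table 1 and the online appendix):

```python
from statistics import mean, pstdev

def kling_liebman_katz_index(outcomes, control_ids):
    """Combine several outcome measures into one composite index:
    standardize each component using the control group's mean and
    standard deviation, then average the z-scores per participant.
    `outcomes` maps participant id -> list of component values
    (hypothetical layout for illustration)."""
    n_components = len(next(iter(outcomes.values())))
    # Control-group mean and SD for each component.
    stats = []
    for j in range(n_components):
        control_vals = [outcomes[i][j] for i in control_ids]
        stats.append((mean(control_vals), pstdev(control_vals)))
    # Average the standardized components for each participant.
    index = {}
    for pid, vals in outcomes.items():
        z_scores = [(v - m) / s for v, (m, s) in zip(vals, stats)]
        index[pid] = mean(z_scores)
    return index
```

Because every component is expressed in control-group standard deviations, a treatment effect on the index reads directly as an effect size, and averaging attenuates the multiple-comparisons problem the paragraph above describes.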

Analysis Methods

Our primary method for estimating the impact of the unconditional and conditional cash transfer plus programming on our outcomes of interest is an “intent to treat” model which tests the effect of being offered treatment on outcomes, whether or not the individuals participated in programming or received the cash transfer. This is estimated using the following linear regression model:

$$Y_i = \alpha + \beta_1 \mathit{treated}_i + \gamma X_i + \varepsilon_i$$

where \(Y_i\) is the outcome of interest measured using the exit survey data and administrative data, \(\alpha\) is an intercept, \(\mathit{treated}_i\) is equal to 1 for the group being studied and zero otherwise, \(X_i\) is a set of control variables, and \(\varepsilon_i\) is the error term. For control variables, we first include the education level of the participant's mother, an indicator for whether he was in foster care, his race and ethnicity, ZIP code, and age. We include these prognostic variables as covariates to increase the precision of the effect estimate. We select these control variables based on theory rather than on baseline t-tests of differences, since choosing covariates based on significance tests for baseline differences can lead to omission of important covariates and inclusion of irrelevant ones (de Boer et al. 2015). Second, we include each of our primary and secondary outcomes calculated using the baseline survey, set to zero if the participant did not respond to the baseline survey questions, along with dummy variables identifying participants who did not respond to the baseline survey. We initially planned to include baseline survey responses in a fixed-effects model to account for heterogeneity between groups and any issues that arose with balance. However, lower than expected response rates for the baseline survey reduced the sample size available for a full fixed-effects model. Including baseline responses where available and dummies where they are not allows us to keep the full sample of exit survey respondents while accounting for observable differences at baseline.

We use a regression model rather than a t-test of sample means both to include control variables and in order to estimate heteroskedastic robust standard errors which account for the likelihood that the variance of the estimated treatment effect is not constant across participants.
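As an illustration of the estimator, a stripped-down version with a single treatment dummy and no covariates can be written by hand; the paper's actual model adds the controls listed above. With a binary regressor, the OLS coefficient is the difference in group means, and the HC0 heteroskedasticity-robust standard error has a closed form:

```python
from statistics import mean

def itt_estimate(y, treated):
    """Intent-to-treat effect as the coefficient on the treatment dummy in
    a simple OLS of the outcome on treatment, with a heteroskedasticity-
    robust (HC0) standard error. `treated` holds 1 for participants offered
    the intervention and 0 for controls, regardless of actual take-up."""
    tbar, ybar = mean(treated), mean(y)
    xt = [t - tbar for t in treated]          # demeaned regressor
    sxx = sum(x * x for x in xt)
    beta = sum(x * (yi - ybar) for x, yi in zip(xt, y)) / sxx
    alpha = ybar - beta * tbar
    resid = [yi - alpha - beta * t for yi, t in zip(y, treated)]
    # HC0 robust variance of the slope: sum((x_i * e_i)^2) / (sum x_i^2)^2
    se = (sum((x * e) ** 2 for x, e in zip(xt, resid)) / sxx ** 2) ** 0.5
    return beta, se
```

In practice one would fit the full model with a regression library rather than by hand; the sketch just makes explicit how the robust variance weights each observation by its own squared residual instead of assuming a constant error variance.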

We estimate the above model in three ways. First, we estimate the impact with treatment defined as the cash transfer, where only the cash transfer group and the control group are included to estimate an average treatment effect for the cash transfer component alone. Second, we estimate the model with treatment defined as cash transfer plus programming and with the cash transfer only group excluded. Third, we estimate the model with treatment referring to any type of cash transfer (both conditional and unconditional) and equal to one for both the cash transfer only and the cash transfer plus programming group; this allows us to increase the sample size within this intervention component to examine it from a different angle.

We also estimate the “treatment on the treated” effect, or the impact of actually participating in programming. This method allows us to detect effects that may have been diluted by non-participation in the prior model. However, participants who choose to participate in programming may systematically differ in unobservable ways from those who choose not to, which may bias the results. We correct for this potential bias by estimating the complier average causal effect, which uses an instrumental variables approach [18]. In this approach, randomization into the treatment group is used as an instrument for the actual treatment. For the cash-only group and for analysis of the combined groups, we define the treatment as receiving the cash transfer. For the programming group, we define the treatment as attending at least one-third of sessions, but we also ran robustness checks defining treatment as the number of sessions attended, attendance at a single session, attendance at half of sessions, and attendance at two-thirds of sessions.
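With random assignment as the instrument and a binary take-up indicator, and without covariates, the complier average causal effect reduces to the Wald estimator: the ITT effect on the outcome divided by the effect of assignment on take-up. A minimal sketch under those simplifying assumptions (the paper's estimation additionally includes the controls described above):

```python
from statistics import mean

def cace_wald(y, assigned, participated):
    """Complier average causal effect via the Wald (instrumental variables)
    estimator. `assigned` is the randomized offer (the instrument);
    `participated` is 1 if the participant actually took up treatment,
    e.g. attended at least one-third of sessions."""
    def assignment_effect(values):
        offered = [v for v, a in zip(values, assigned) if a == 1]
        control = [v for v, a in zip(values, assigned) if a == 0]
        return mean(offered) - mean(control)
    itt_outcome = assignment_effect(y)            # reduced form
    take_up = assignment_effect(participated)     # first stage
    return itt_outcome / take_up
```

Dividing by the first stage rescales the ITT effect from "everyone offered" to "those induced to participate by the offer," which is why the CACE exceeds the ITT estimate whenever take-up is below 100%.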

Our initial research plan included the use of fixed-effects models that would incorporate data from both the baseline and exit surveys. Fixed-effects models would have allowed us to remove any time-invariant unobserved heterogeneity for each individual that may be related to their outcomes. This would account for any baseline differences between groups that existed at the beginning of the programming and cash transfer period. Analysis using this method, however, would have relied on a much smaller sample size. While 72% of participants took the exit survey, only 55% took both the baseline and the exit survey. Among the control group, only 42% of participants took both surveys (24 participants). Similarly, we do not use monthly surveys in our data analysis, since response rates for the monthly survey were very low and only 21 participants in the control group took the baseline survey, the exit survey, and at least one monthly survey. Using just the exit survey, we have a 67% response rate for the control group, a 61% response rate for the programming plus cash transfer group, and an 89% response rate in the cash transfer only group.

A reverse power analysis revealed that we would not have the power to detect effects smaller than 0.29 to 0.51 standard deviations in our key variables. Given that the sample was small and take-up rates were not 100%, we expect our estimates to be attenuated and to represent a lower bound for the effect of the intervention on our outcomes of interest.
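The paper does not report the exact formula behind the reverse power analysis; a standard two-arm minimum detectable effect (MDE) calculation, in standard-deviation units and assuming a two-sided test, looks like the following. The default significance level and power are conventional assumptions, not values taken from the study.

```python
from statistics import NormalDist

def minimum_detectable_effect(n_treat, n_control, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard-deviation units for a
    two-arm comparison, using the standard approximation
    MDE = (z_{1-alpha/2} + z_{power}) * sqrt(1/n_t + 1/n_c)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * (1 / n_treat + 1 / n_control) ** 0.5
```

Plugging in arm sizes of the magnitude reported above (roughly 40 to 60 exit-survey respondents per arm) yields MDEs of several tenths of a standard deviation, consistent with the 0.29 to 0.51 range the paragraph cites.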

Group Equivalence at Baseline

To test for potential differences across groups even after randomization, we analyzed differences in demographic characteristics and outcome measures at baseline, presented in Table 2. We find that the groups appear balanced across key demographic characteristics included in the administrative data, but there were statistically significant differences between groups in many of our key outcome measures. These differences may exist by chance, or they may have been produced by differential nonresponse across groups. As described above, we include our primary and secondary outcomes calculated using the baseline survey, where available, to account for observable differences between groups.

Table 2 Demographic differences at baseline
