How Might Indices of Happiness Inform Early Intervention Research and Decision Making?

Participants

The participants in this study were part of a larger study focused on developing a very early intervention for infants and toddlers aged 6–36 months. Four children and their caregivers participated in the current study. A child qualified for the study if they met the following criteria: (1) considered “at-risk” for autism (based on age, APSI and MCHAT-R scores, and/or sibling status), (2) between 6 and 36 months of age at the start of the study, (3) English was the primary language spoken in the home, and (4) had caregiver consent to participate in the project. Additional criteria for caregivers included (1) access to a video conferencing device and (2) access to a stable internet connection. The definition of “at-risk” was determined based on age-specific criteria. Each child worked with their respective caregiver. The children’s and their caregivers’ demographic data are presented in Tables 1 and 2, respectively.

Table 1 Child demographic information

Table 2 Caregiver demographic information

Procedures

Sessions

All sessions occurred during the COVID-19 pandemic, and the researchers facilitated the sessions via telehealth. The caregiver and child participated from their homes, using the child’s toys, activities, and snacks. The coach conducted sessions from a university-based lab. Participants and their coach connected via Zoom® software, and all sessions were video recorded using the record feature. Appointments were scheduled for 1 h, twice per week, for 15 weeks. Each appointment was structured into three components: a 5-min introduction with the coach and caregiver, a 5-min data probe, and caregiver coaching.

Interventionists

One trainer and a coach participated in this study. The trainer was a doctorate level Board Certified Behavior Analyst (BCBA-D) with 10 years of experience implementing parent training for children with ASD and 6 years of experience training others using telehealth technologies. The coach was a BCBA with a master’s degree and 4 years of experience implementing parent training for children with ASD.

Experimental Design

Researchers selected a nonconcurrent multiple baseline across participants design. The lead researcher randomly assigned the caregiver–child dyads within the multiple baseline design to either three, four, or five baseline data points.

Baseline

The coach provided caregivers with a copy of the caregiver fidelity rubric (see Appendix A) prior to the baseline phase. The coach then instructed the caregiver to “show us how you play with your child.” During baseline, the coach observed but did not provide any instruction or feedback regarding the rubric or expected behaviors. Two to three baseline sessions were conducted per appointment, with baseline lasting no less than two appointments.

Training

Following the completion of the baseline phase, the coach taught the caregivers how to implement each task listed on the caregiver fidelity rubric using teach-model-coach-review (e.g., Roberts et al., 2014). Each appointment started with a brief introduction during which the coach and caregiver discussed any issues relevant to the appointment. The coach then provided the caregiver with a copy of the caregiver fidelity rubric.

This introduction was followed by the 5-min data collection probe session. During the probe, the coach observed the caregiver engaging with their child without providing any feedback or direction and collected data on their fidelity of implementation using the caregiver fidelity rubric (Appendix A). After the probe session, the remainder of the session included teaching, modeling, and coaching the caregiver on how to implement the play sessions. The coach provided feedback based on the caregiver’s current level of responding.

Measures and Data Analysis

The primary data used to guide the evaluation of the intervention were the caregiver fidelity data and child social engagement. The IOH data were extracted for the purpose of the present analysis and, at the time, were not used as a primary indicator for intervention evaluation.

The researchers identified and operationalized IOH for both the caregiver and the child participant. The researchers identified six main IOH behaviors for the caregivers. The IOHs for caregivers included vocalized statements of praise, clapping, smiling, dancing, laughing/giggling, and elevated vocal pitch (see Table 3 for operational definitions).

Table 3 IOH operational definitions

The child participants’ IOH were individually identified. During the intake process, researchers asked caregivers two questions: (1) What kinds of things does your child enjoy doing? and (2) How do you know when your child is happy? The researchers also conducted an indirect reinforcer assessment interview (see Table 4 for the questions). Using the responses, researchers developed operational definitions of the child IOHs and asked the caregiver to confirm that the definitions were accurate. Researchers then validated the IOH through observation of the child with known preferred items or activities (Green & Reid, 1996). For Nate, happiness was defined as any instance of smiling or laughing (see Table 3 for operational definitions for IOH). For Kyle, happiness was defined as smiling, laughing, and dancing. For Matt, happiness was defined as smiling, laughing, and elevated vocal pitch. Finally, for Kris, happiness was defined as smiling, laughing, and elevated vocal pitch. Caregiver and child IOH data were measured using 10-s partial interval recording within the 5-min sessions.

Table 4 Indirect reinforcer assessment interview
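The 10-s partial interval measure described above can be sketched as follows. This is an illustrative reconstruction, not the study’s scoring software; the function name and event timestamps are hypothetical.

```python
def partial_interval_percentage(event_times, session_seconds=300, interval_seconds=10):
    """Score partial interval recording: an interval is marked if at least
    one target behavior (e.g., an IOH) occurred at any point within it."""
    n_intervals = session_seconds // interval_seconds  # 30 intervals per 5-min session
    scored = [False] * n_intervals
    for t in event_times:  # seconds from session start at which a behavior was observed
        idx = int(t // interval_seconds)
        if 0 <= idx < n_intervals:
            scored[idx] = True
    return 100.0 * sum(scored) / n_intervals

# Example: smiles observed at 3 s, 12 s, and 14 s mark 2 of 30 intervals.
```

Because partial interval recording scores an interval for any occurrence, regardless of duration or count, it tends to overestimate the true duration of behavior, which is an accepted trade-off for brief probe sessions.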

In addition to indices of happiness, researchers collected data on caregiver fidelity of implementing the play sessions using the caregiver fidelity rubric (see Appendix A). Adapted from the Sunny Starts DANCE program (Ala'i-Rosales et al., 2013), the rubric lists three primary skill categories: pairing, play, and following the child’s lead. Researchers at REDACTED FOR REVIEW expanded the categories to include 11 specific tasks as part of the coaching program shown in Appendix A. Researchers collected data on the occurrence of opportunities (trials) for the caregiver to complete tasks during a 5-min session. Researchers scored trials as either completed, not completed, or not applicable. The mastery criterion was 100% fidelity.
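One plausible way to convert the trial scores above into a session fidelity percentage is sketched below. Excluding not-applicable trials from the denominator is an assumption here; the text does not state how such trials entered the calculation.

```python
def fidelity_percentage(trial_scores):
    """Compute caregiver fidelity for a 5-min session from trial scores of
    'completed', 'not completed', or 'not applicable'. Assumes (not stated
    in the source) that not-applicable trials are excluded."""
    applicable = [s for s in trial_scores if s != "not applicable"]
    if not applicable:
        return None  # no scorable opportunities in this session
    completed = sum(s == "completed" for s in applicable)
    return 100.0 * completed / len(applicable)

# A session with 9 completed, 1 not completed, and 1 not applicable
# trial scores 9/10 = 90% fidelity; mastery requires 100%.
```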

Researchers also collected data on child social engagement using partial interval recording. Researchers defined social engagement as the child engaging with the caregiver in play, reported as the percentage of intervals containing reaching to the caregiver, orienting the body toward the caregiver, moving toward the caregiver, gazing toward the caregiver, or accepting stimuli from the caregiver. Other forms of play, such as parallel play, in which the child was engaged in play but not engaging with the caregiver, were excluded.

Interobserver Agreement (IOA)

Raters collected IOA data for each dependent variable for a minimum of 33% of sessions within each phase for each participant (e.g., 33% of baseline sessions, 33% of intervention sessions). Raters were trained by a lead researcher until they reached 100% reliability for at least one session. IOA was calculated for the occurrence or nonoccurrence of child IOH using interval-by-interval agreement. The resulting IOA for child IOH averaged 99% (range, 97–100%) for Nate, 99.25% (range, 97–100%) for Kyle, 100% for Matt, and 100% for Kris. The resulting IOA for caregiver IOH averaged 91% (range, 83–100%) for Nate’s mother, 99.25% (range, 97–100%) for Kyle’s father, 99% (range, 97–100%) for Matt’s mother, and 99% (range, 97–100%) for Kris’ mother. The same method was used to calculate IOA for child social engagement. The resulting IOA for child social engagement averaged 94% (range, 86–100%) for Nate, 97% (range, 93–100%) for Kyle, 93% (range, 86–100%) for Matt, and 97% (range, 93–100%) for Kris.
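The interval-by-interval agreement method above divides the number of intervals on which both raters agree (occurrence or nonoccurrence) by the total number of intervals. A minimal sketch, with hypothetical rater data:

```python
def interval_by_interval_ioa(rater_a, rater_b):
    """Percent agreement across intervals: each element is True if the
    rater scored the behavior as occurring in that interval."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agreements / len(rater_a)

# Two raters scoring a 30-interval (5-min) session who disagree on a
# single interval yield 29/30 agreement, about 96.7%.
```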

To calculate IOA for caregiver fidelity of the session, the lead author used exact agreement on the fidelity checklist. If both raters indicated that the caregiver completed, or both indicated that the caregiver missed, a given step, the lead author scored the step as an agreement. The lead author then divided the number of agreements by the total number of fidelity steps and multiplied by 100 to obtain a percentage. The resulting IOA for caregiver fidelity of implementation averaged 96% (range, 92–100%) for Nate’s mother, 97.5% (range, 92–100%) for Kyle’s father, 95% (range, 88–100%) for Matt’s mother, and 98% (range, 96–100%) for Kris’ mother.

Treatment Integrity

Researchers collected treatment integrity data for the coach’s adherence to the coaching fidelity rubric. The rubric shown in Appendix B consisted of 24 tasks subdivided into six categories: preparation, teaching, modeling, pre-session coaching, coaching during a session, and review. Treatment integrity data were collected for at least 30% of sessions. Treatment integrity averaged 97% (range, 90–100%) for the coach across sessions.
