An Interdisciplinary Approach to Treating Severe Behavior in a Juvenile Justice Facility: Teaching Behavioral Self-Management via Telehealth

Participants

Four adolescents residing in a secure residential juvenile justice facility for males participated in the current study. Each participant was adjudicated for illegal sexual behavior and, subsequently, court-ordered to receive treatment via the Accountability Based Sex Offender Prevention Program (ABSOPP) at the facility. For a description of the services provided within ABSOPP, please see Brogan et al. (2018). Specifically, "treatment as usual" for adolescents within the facility included trauma-focused cognitive behavioral therapy provided by mental health specialists, educational services provided by the public school district, and recreational services provided by a university-affiliated program (Brogan et al., 2018; Luna et al., 2022). As an additive component of ABSOPP, Licensed Behavior Analysts (LBAs), who were also doctoral students, and students from a local Applied Behavior Analysis (ABA) master's program (hereafter, ABA therapists) provided trauma-informed behavior-analytic services to referred residents. Primary clinical mental health therapists or facility staff members referred participants for treatment due to repeated occurrences of challenging behavior within the facility. After referral, ABA therapists conducted an intake interview with the referred participant's primary clinical mental health therapist to better understand behavioral concerns and trauma history. Then, ABA therapists met with the referred participant to conduct an additional intake. Table 1 provides participant characteristics.

Table 1 Participant characteristics

After intake, ABA therapists compiled a list of possible treatment options that was later reviewed with participants for their input. Three participants selected this specific intervention (described below) as their first choice. Participant (P) 1 initially selected a different intervention but experienced significant barriers with it. Subsequently, P1 agreed to work on this protocol, as recommended by their primary clinical mental health therapist, to facilitate progress on the original intervention; anecdotally, completing this protocol did facilitate that progress. ABA therapists periodically communicated with primary clinical mental health therapists via phone call, email, or video call to describe the focused intervention and to discuss participant progress or concerns. The primary clinical mental health therapists also provided guidance on variables that might affect treatment, emotional triggers, and aspects of participants' ongoing treatment that could be reinforced through the current intervention.

Setting

Due to the COVID-19 pandemic, facility administrators allowed only essential personnel to enter the juvenile facility. Thus, residents in the juvenile facility received remote therapy services via a secure video conferencing system, Zoom™. Facility staff maintained a therapy schedule for each dorm. A few minutes prior to the start of an appointment, staff accompanied participants in the transition from the ongoing activity within the dorm to a separate windowed room within the building. Rooms were equipped with a table, chair, computer with camera, and sometimes an additional table or cabinet. Participants attended appointments alone, with staff walking past to check in through the windows about every 10 min. The primary ABA therapist for the participant was remotely present during every appointment. Other ABA therapists or LBA supervisors remotely attended at least one appointment a week to record interobserver agreement and treatment integrity data, run generalization sessions, or observe for supervision purposes (LBAs only). Typically, ABA remote therapy appointments were 45 min in duration, one to two times per week. Each participant's involvement in ABA appointments was voluntary. Specifically, participants could opt out of appointments at any time. Prior to each individual appointment and intervention session, participants provided verbal assent to participate. ABA therapists served as the instructors and data collectors for the current study. Typically, ABA therapists conducted one 10-trial session per 45-min appointment. They collected data electronically on an Excel® template. Each week, ABA therapists prompted facility staff to record daily data on each participant's behavior outside of therapy appointments. All conversations with staff members took place via telephone without video.

Response Measurement and Reliability

We defined participants' challenging behavior as: (a) loud, offensive, profane, or disrespectful comments or questions that may or may not be directed toward another individual or (b) physical actions such as throwing, ripping, breaking, punching, or forcefully moving objects in a manner other than the object's intended purpose (e.g., pushing a chair 2 m, throwing a pencil). ABA therapists also helped participants identify alternative behaviors to replace their challenging behaviors. ABA therapists assisted participants in selecting specific alternative behaviors they could emit during therapy sessions, as well as at any time and location throughout the facility. The alternative behaviors included: (a) repeating a verbal rule to self (e.g., "I will not throw the chair."), (b) engaging in physical activity (e.g., taking a short walk), (c) squeezing the loose ends of a shirt or jacket, (d) taking three to five inward–outward breaths, and (e) completing a five senses activity (i.e., stating five things they could see, four things they could touch, three things they could hear, two things they could smell, and one thing they could taste). These alternative behaviors were adapted from general coping strategy resources published by the American Psychological Association and the Mayo Clinic.

During simulation sessions, ABA therapists recorded data on correct and incorrect responding. We scored a correct response when the participant either engaged in a trained alternative behavior or described the alternative behavior they would engage in. We also scored a correct response when the participant emitted a novel alternative behavior that was not one of the five selected for teaching but could be an appropriate coping strategy (e.g., counting backwards from 100). More specifically, the alternative behavior had to be a self-managed response and incompatible with the challenging behavior in the moment. We scored an incorrect response when the participant either engaged in challenging behavior or stated that they would engage in a challenging behavior (specific or general). No participants displayed challenging behavior during simulation sessions. We scored a non-response when the participant either stated that they did not know what to do or failed to respond to a second presentation of the discriminative stimulus (SD) within 15 s.

Outside of ABA sessions, staff members collected data on (a) the number of days each week on which participants displayed challenging behavior (i.e., whether challenging behavior did or did not occur each day) and (b) the severity of challenging behavior on a 5-point Likert scale (1 = not at all severe and 5 = severe danger to self or others). We included (b) because we knew that (a) lacked sensitivity for detecting small or transitional decreases in challenging behavior. If the participant did not display challenging behavior during the week, we coded a "0" for severity. ABA therapists neither informed staff members of participants' progression in training nor provided staff members with instructions for intervening with the participants. Due to the COVID-19 protocols put in place by the facility and limited interaction with all of the possible staff that could be on shift, researchers were unable to directly train staff to collect data on challenging behavior. Anecdotally, staff confirmed that they observed the participant every day that week or checked a records sheet maintained by staff across shifts that described any behavior incidents. Researchers did not collect interobserver agreement data for these staff reports. We converted the daily occurrence data into a percentage of days with challenging behavior per week. We collected these supplemental measures through the maintenance phase to evaluate the extent to which participants' challenging behavior decreased outside of simulated sessions.
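The weekly conversion described above is a simple proportion. The following sketch illustrates it; the function and variable names are hypothetical and not from the original study.

```python
def percent_days_with_behavior(daily_occurrences):
    """Convert one week of daily yes/no staff records (True = challenging
    behavior occurred that day) into a percentage of days with behavior."""
    if not daily_occurrences:
        raise ValueError("need at least one day of data")
    return 100.0 * sum(daily_occurrences) / len(daily_occurrences)

# Example: challenging behavior observed on 2 of 7 days
week = [True, False, False, True, False, False, False]
print(round(percent_days_with_behavior(week), 1))  # 28.6
```

A week with no occurrences yields 0%, which corresponds to the "0" severity code described above.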

Interobserver Agreement

A second independent observer collected data during 41.4% of sessions across all four participants (range, 22.2% to 62.5%). We calculated agreement using trial-by-trial exact agreement. Each session contained 10 trials. For each trial, experimenters recorded a 1 for an agreement (i.e., both observers scored a response as correct, incorrect, or as a non-response) and a 0 for a disagreement (i.e., observers scored a given trial differently). We summed agreements, divided by the number of opportunities, and multiplied by 100% (# agreements / total opportunities × 100%).
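The trial-by-trial exact agreement calculation amounts to the following sketch; function and data names are illustrative, not from the study.

```python
def exact_agreement(primary, secondary):
    """Percentage of trials on which two observers recorded the same
    code (e.g., 'correct', 'incorrect', 'non-response')."""
    if len(primary) != len(secondary) or not primary:
        raise ValueError("observers must score the same nonzero number of trials")
    agreements = sum(1 for a, b in zip(primary, secondary) if a == b)
    return 100.0 * agreements / len(primary)

# Hypothetical 10-trial session: observers disagree on one trial
obs1 = ["correct"] * 9 + ["incorrect"]
obs2 = ["correct"] * 8 + ["non-response", "incorrect"]
print(exact_agreement(obs1, obs2))  # 90.0
```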

During baseline, we obtained secondary observations during 100%, 60%, 83.3%, and 85.7% of sessions for P1, P2, P3, and P4, respectively. During post-teaching with feedback, we obtained secondary observations during 66.7%, 100%, 66.7%, and 36.4% of sessions for P1, P2, P3, and P4, respectively. During generalization, we obtained secondary observations during 0%, 100%, 50%, and 42.9% of sessions for P1, P2, P3, and P4, respectively. During maintenance, we obtained secondary observations during 0%, 10%, and 27.3% of sessions for P2, P3, and P4, respectively. P1 did not participate in maintenance sessions. Baseline phase mean agreement scores were 86.7% (range, 80% to 90%), 90% (range, 70% to 100%), 83.8% (range, 70% to 100%), and 83.3% (range, 60% to 100%) for P1, P2, P3, and P4, respectively. Post-teaching with feedback phase mean agreement scores were 100%, 100%, 100%, and 97.5% (range, 90% to 100%) for P1, P2, P3, and P4, respectively. Generalization phase mean agreement scores were 85% (range, 70% to 100%), 90%, and 100% for P2, P3, and P4, respectively. Maintenance phase mean agreement scores were 100% for both P3 and P4.

Treatment Integrity

We assessed treatment integrity for 44.2% of sessions (range, 25% to 68.2%) across all four participants. The first and second authors created a checklist for each phase that consisted of the implementer's target behaviors (see Appendix). We collected some additional treatment integrity components in the current investigation but omitted them from the Appendix to better clarify the independent variable for readers. The omitted components addressed therapists ensuring the participant's orientation, preparing materials, and following facility safety procedures in the event dangerous behavior occurred. During each session in which we assessed treatment integrity, an observer scored each applicable checklist item as either correct or incorrect. At the end of each session, we summed the number of correct and incorrect checklist items, divided the total number of correct items by the total number of opportunities, and multiplied by 100% (# correct responses / total opportunities × 100%).

During baseline, we calculated a treatment integrity score for 0%, 100%, 83.3%, and 85.7% of sessions for P1, P2, P3, and P4, respectively. During BST, we calculated a treatment integrity score for 100% of sessions for all four participants. During post-teaching with feedback, we calculated a treatment integrity score for 33.3%, 100%, 66.7%, and 27.3% of sessions for P1, P2, P3, and P4, respectively. During generalization, we calculated a treatment integrity score for 100%, 100%, 0%, and 0% of sessions for P1, P2, P3, and P4, respectively. During maintenance, we calculated a treatment integrity score for 30%, 0%, and 0% for P2, P3, and P4, respectively. Treatment integrity scores for ABA therapists’ implementation of procedures during baseline phase were 99.2% (range, 95.8% to 100%), 100%, and 100% for P2, P3, and P4, respectively. Treatment integrity scores during BST sessions were 97.5% (range, 95% to 100%), 92.9% (range, 85.7% to 100%), 91.4% (range, 82.8% to 100%), and 98.3% (range, 96.6% to 100%) for P1, P2, P3, and P4, respectively. Treatment integrity scores during post-teaching with feedback phase were 96.6% (range, 95% to 100%), 96.3% (range, 92.3% to 100%), 97.6% (range, 95.1% to 100%), and 88.3% (range, 73.3% to 97.2%) for P1, P2, P3, and P4, respectively. Treatment integrity scores during generalization phase were 100% for both P1 and P2. The treatment integrity score during maintenance phase for P2 was 98.6% (range, 95.8% to 100%).

Procedures

Pre-assessment

Prior to baseline, ABA therapists completed a pre-assessment with each participant to help them identify the evocative situations and precursors (both covert and overt) that usually preceded challenging behavior. The first author created a two-part template for the pre-assessment. ABA therapists conducted pre-assessments remotely by sharing their screen so that the participant could view the template. There were 10 blank slots at the top of the template with the prompt: "General situations that make you angry." The ABA therapist read the prompt aloud and typed the participant's response in the blanks. If the participant stopped responding for 15 s, the ABA therapist provided additional verbal prompts, such as asking the individual to describe a recent incident that upset them. If the participant verbally stated that there were no other situations but had not filled all 10 blanks, the ABA therapist stopped providing prompts. Later, the ABA therapist contacted facility staff or clinical mental health therapists and prompted them to supply the remaining evocative situations.

The bottom of the template consisted of a three-column narrative ABC analysis. The purpose of this portion of the assessment was to identify precursors to challenging behavior. The first, second, and third columns were labeled “What happens before (i.e., How are you feeling? What are you thinking? What is your body doing?),” “Challenging behavior (i.e., yelling, throwing, etc.),” and “What happens after (i.e., Feeling relaxed, less angry? What is your body doing?),” respectively. ABA therapists prompted the participant to fill out this section by stating: “I want you to think about events that happen internally (e.g., emotion, thought) and physically (e.g., body temperature rising, fists clench, stomach churning) both before and after the challenging behaviors.” The goal was to complete at least three rows in order to have three precursors to use during intervention. Importantly, the precursor only needed to occur before some instances of challenging behavior. If the participant did not respond within 15 s, ABA therapists provided verbal prompts such as asking them to think about a recent incident and describe what was happening with their body. ABA therapists ended the pre-assessment when the participant (a) identified at least three precursors and (b) verbally stated they did not want to add anything else.

Because P4 struggled to identify specific precursors (both covert and overt) during the second part of the pre-assessment, the second author developed a visual prompt consisting of common events that might happen when getting angry. Examples included heart beating fast, balling up fists, face getting hot, and breathing fast, among others. The ABA therapist presented the visual prompt via screen share and then asked P4, "Do any of these happen to you?" The ABA therapist highlighted the ones that P4 selected. With this prompt, P4 identified six precursors; they ultimately selected the three that happened most frequently. ABA therapists identified 10 unique scenarios for each participant. Examples of these scenarios include peers shoving, threatening, or excluding the participant; staff members taking items away from the participant; and staff members unexpectedly changing the participant's schedule. Please refer to Table 1 (sixth column) for a list of the three precursors identified from the pre-assessment for each participant.

Session Overview

Each Zoom™ session consisted of 10 trials and lasted 5 to 10 min. Materials for all sessions included the client protocol with scripts, a data sheet, the list of evocative scenarios, and the target precursor. ABA therapists introduced each session by providing a brief description of procedures. During each trial, ABA therapists presented one of the evocative situations identified from the pre-assessment (e.g., "During free time your peers are playing a card game. When you ask them to play, they rudely say no and tell you to go away."), then delivered the SD: "Imagine [targeted precursor] starts happening. Tell me what you would do next." ABA therapists randomly selected a different evocative scenario for each trial, such that no scenario repeated within a session. ABA therapists collected data on correct, incorrect, or no response, as well as the alternative behavior selected, if applicable. Contingent on a non-response, ABA therapists re-presented the scenario and SD. We set the acquisition criterion at 90% or higher correct responding for three consecutive sessions across two therapists.
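The trial sequencing above can be sketched as follows, under the assumption that each of the 10 evocative scenarios is presented exactly once per session in random order. All names and scenario strings are hypothetical, not taken from the study materials.

```python
import random

def build_session(scenarios, precursor, seed=None):
    """Return one session's 10 trials as (scenario, SD) pairs: every
    scenario appears exactly once, in random order, paired with the SD
    for the targeted precursor."""
    rng = random.Random(seed)
    order = rng.sample(scenarios, k=len(scenarios))  # shuffle without repeats
    sd = f"Imagine {precursor} starts happening. Tell me what you would do next."
    return [(scenario, sd) for scenario in order]

# Hypothetical usage with 10 placeholder scenarios
scenarios = [f"scenario {i}" for i in range(1, 11)]
trials = build_session(scenarios, "your heart beating fast", seed=1)
assert len(trials) == 10
assert len({s for s, _ in trials}) == 10  # all scenarios distinct
```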

Baseline

During the baseline phase, ABA therapists conducted a 10-trial probe session for all three precursors identified in the pre-assessment. ABA therapists conducted probe sessions in the order in which the respective participant identified precursors. Notably, we did not determine which precursor was the most reliable predictor of each participant’s challenging behavior. Thus, after conducting a single probe session with the first two listed precursors from the assessment, ABA therapists arbitrarily selected the last precursor as the targeted precursor for each participant during the remaining baseline and teaching sessions. ABA therapists followed procedures listed in the session overview section above and responded to all correct and incorrect responses neutrally (e.g., saying “Ok.” “I see.”). ABA therapists provided general praise for participation. ABA therapists moved to teaching sessions once baseline responding was low and stable with no increasing trends.

Teaching

During this phase, ABA therapists conducted two separate BST sessions with participants. Initially, ABA therapists (a) provided an overall description and rationale for the BST sessions and (b) explained that the goal was for the participant to learn alternative behaviors during simulations and then use those behaviors to respond to their precursors of challenging behaviors both within and outside of sessions.

The purpose of the first BST session was to teach the participant the five different alternative behaviors. ABA therapists completed BST by: (a) listing and describing five selected alternative behaviors and asking the participant to describe them in their own words, (b) modeling the alternative behaviors, (c) asking the participant to practice each alternative behavior, and (d) providing positive or corrective feedback on the participant’s practice response. If the participant’s practice response did not topographically match the model’s demonstration, ABA therapists re-presented the model and asked the participant to practice again. ABA therapists solicited and answered participant questions about alternative behaviors. Data for participant responding during BST sessions are not presented; however, ABA therapists used specific criteria to determine when to introduce the second BST session or the post-teaching with feedback phase. Participants met criteria to move to the second BST session once they correctly practiced each of the five alternative behaviors at least once.

The purpose of the second BST session was to teach the participant when to use the alternative behaviors (i.e., in response to the targeted precursor). During this session, ABA therapists:

1. Explained the target precursor selected for teaching.

2. Informed the participant that they would (a) present a situation that would make them angry, (b) state the targeted precursor, and (c) prompt the participant to practice engaging in one of the five alternative behaviors.

3. Modeled how to respond to an evocative situation and precursor with an alternative behavior.

4. Asked the participant to practice responding to the precursor by using the same trial setup described in the session overview section above.

5. Provided positive or corrective feedback on the practice response. Contingent on an incorrect response, ABA therapists asked participants to practice again.

6. Solicited and answered participant questions about identifying precursors and engaging in alternative behaviors.

If the participant verbally stated the alternative behavior (e.g., "I would take five inward–outward breaths."), ABA therapists asked the participant if they would like to practice physically doing the behavior. If the participant agreed, ABA therapists re-presented the scenario and SD and allowed them to practice the behavior. ABA therapists and participants also verbally reviewed how to consider the appropriateness of an alternative behavior based on the current context (e.g., when unable to do a physical activity in the classroom, provide a verbal rule to self instead). They also helped participants identify how some behaviors may be appropriate (e.g., leaving to ask a staff member for help) but less effective for defusing challenging behavior in the moment (e.g., because a staff member may not be readily available). Participants met criteria to move to post-teaching sessions once they engaged in three consecutive correct responses. ABA therapists ensured that participants used at least two different alternative behaviors across rehearsals. If participants used the same strategy for two rehearsals in a row, ABA therapists reminded participants to use a different alternative behavior during the next rehearsal.

Post-teaching

Post-teaching sessions began with a review prior to the first trial of the session, in which the ABA therapist asked the participant to list the previously learned alternative behaviors and to describe when to start engaging in one (i.e., when a precursor is present). ABA therapists then initiated a trial as listed in the session overview section above. Correct and incorrect responses received differential consequences from the ABA therapist. Specifically, ABA therapists responded to a correct response with behavior-specific positive feedback (e.g., "Awesome job using an alternative behavior."). Conversely, following incorrect responses, ABA therapists implemented error correction by providing a verbal description of what would likely happen next in that scenario due to failing to engage in an appropriate alternative behavior (e.g., "Since you decided to [challenging behavior stated by the participant], you would likely get your book taken away for the day."). The ABA therapist then asked the participant to describe an appropriate alternative behavior, modeled the correct alternative behavior the participant provided, and re-presented the evocative situation and SD. If the participant could not identify an appropriate alternative behavior, ABA therapists provided a verbal reminder of the behaviors reviewed during BST. ABA therapists continued error-correction procedures until the participant engaged in a correct independent response. As described above, if the participant verbally stated the alternative behavior, ABA therapists asked if they would like to physically practice it.

There was one modification for P4 during the post-teaching sessions. Although P4 typically stated they would engage in behaviors more appropriate than challenging behaviors, such as filing a grievance with facility staff about the event, we wanted P4 to engage in an alternative behavior that would reduce the challenging behavior in the moment. In order to make this more salient for P4, ABA therapists added a rule stating: “I would like you to first respond to the signal with the alternative behavior. For example, if your heart starts beating fast, you want to engage in an alternative behavior so your heart slows down before you do anything else. After this, you can go talk to staff or file a grievance if you feel it is necessary.” ABA therapists continued to present this rule at the start of each session throughout the remainder of the post-teaching phase. After participants met the acquisition criteria for the target precursor, they advanced to generalization sessions.

Generalization

To assess generalization of skills to the precursors probed in baseline, ABA therapists repeated baseline procedures with the two precursors that they had not targeted during teaching. The generalization criterion consisted of the participant engaging in a correct response during 90% or more of trials for non-targeted precursors during generalization probe sessions. If the participant responded correctly during less than 90% of trials in a generalization session, ABA therapists planned to introduce teaching for that specific precursor, which would have consisted of running baseline and teaching as described above until the participant met the 90% acquisition criterion during post-teaching sessions. No participant required this additional teaching. P4 participated in extra generalization sessions.

Maintenance

One month after participants met the initial acquisition and generalization criteria, ABA therapists assessed maintenance of those effects across both targeted and non-targeted precursors using baseline procedures. If the participant responded with an appropriate alternative behavior during less than 80% of trials for a given precursor, ABA therapists conducted a booster teaching session. Booster sessions consisted of asking the participant to identify and demonstrate all five alternative behaviors, after which ABA therapists provided feedback. If the participant was not able to identify any of the alternative behaviors, ABA therapists completed BST 1 procedures for that specific behavior. Next, ABA therapists completed BST 2 procedures, with only one practice response required, before running a 10-trial session with the precursor for which responding fell below criterion. ABA therapists used the procedures from the post-teaching phase to complete this session (i.e., ABA therapists provided differential consequences for correct and incorrect responding). ABA therapists conducted 10-trial sessions until the participant responded with an alternative behavior during at least 90% of trials. ABA therapists ran maintenance sessions at least once a month until the participant responded with 80% or higher accuracy for each precursor, or continued running sessions monthly if there was time and the participant agreed to the practice session. ABA therapists also added a component of graphical feedback for P2 prior to sessions to increase motivation for correct responding.

Due to P1’s impending release from the juvenile facility, they did not participate in all of the aforementioned phases. To maximize training time, ABA therapists omitted the precursor probes during baseline, weekly data collection from staff members, and maintenance checks. Additionally, staff report data were not collected during the maintenance phase for P4.

Social Validity Questionnaire

After ABA therapists completed the generalization probes, they invited each participant to complete a social validity questionnaire. If available, a secondary ABA therapist administered the questionnaire. ABA therapists delivered the questionnaire remotely by screen sharing a Word™ document. Procedures included providing the scale anchors (i.e., 1 = not at all true, 3 = somewhat true, 5 = very true), reading each statement aloud, and then recording the participant's verbal response. Table 3 provides both the questions and the average rating for each question.

Experimental Design and Analyses

We evaluated the effects of teaching appropriate alternative behaviors in response to a targeted precursor for decreasing challenging behavior using a four-tiered nonconcurrent multiple baseline (NMBL) across participants design (Carr, 2005; Coon & Rapp, 2018; Watson & Workman, 1981). We embedded generalization probes within the NMBL design to evaluate changes following non-targeted precursors and included maintenance assessments. We also used a three-tiered NMBL across participants design to evaluate change in participants’ behavior outside of programmed sessions. For both NMBL graphs, we evaluated the effects of training using visual analysis. Specifically, we made decisions about phase changes based on data depicted in Fig. 1.

Fig. 1

Percentage of trials with alternative behaviors across phases. Note. Open data points represent probe sessions with precursors not used during teaching. Filled data points denote each participant’s targeted precursor. P = participant

In addition, we evaluated changes in the percentage of days with challenging behavior and staff ratings of severity in the dorms across phases (depicted in Fig. 2) separately for P2, P3, and P4 using a Mann–Whitney Wilcoxon test (Mann & Whitney, 1947; Wilcoxon, 1945) in RStudio. The purpose of this test was to compare data collected across phases and determine whether any differences were statistically significant. The Mann–Whitney Wilcoxon test compares the ranks of observations from two sets of measurements to determine whether values in one set tend to be higher than values in the other. We repeated the test for each phase comparison (i.e., Baseline to Intervention, Baseline to Maintenance, and Intervention to Maintenance) separately for each participant, and repeated this process for both occurrence and severity data. For these analyses, we combined data collected during the post-teaching with feedback and generalization phases and labeled them Intervention. These tests evaluated whether occurrence and severity of participants' challenging behavior were significantly higher in one phase as compared to another.
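The rank-based comparison above can be illustrated with a minimal pure-Python sketch of the U statistic, which counts, over all cross-phase pairs of observations, how often a value from one phase exceeds a value from the other (ties count half). In practice one would use R's wilcox.test or scipy.stats.mannwhitneyu, which also return a p-value; the data values here are hypothetical, not from the study.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a: the number of pairwise
    'wins' of values in sample_a over values in sample_b, with ties
    counted as half a win."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical weekly severity ratings, baseline vs. intervention
baseline = [4, 3, 5, 4]
intervention = [1, 0, 2, 1]
print(mann_whitney_u(baseline, intervention))  # 16.0: every baseline value wins
```

With 4 observations per phase, the maximum possible U is 4 × 4 = 16; a U at or near that maximum indicates that one phase's values consistently rank above the other's.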

Fig. 2

Staff report of challenging behavior. Note. Percentage of days with any challenging behavior (primary y-axis) across weeks for P2 (upper panel), P3 (middle panel), and P4 (lower panel) and severity of behavior (secondary y-axis). P = participant. Missing occurrence data points represent weeks ABA therapists were unable to collect data. Weeks with occurrence data points at 0% and missing severity bars represent absence of challenging behavior during the week. No staff report data recorded for P4 during the maintenance phase. Intervention combines data collected during post-teaching with feedback and generalization phases
