The Use of Instructive Feedback to Promote Emergent Tact and Intraverbal Control: A Replication

Participants and Setting

Two children with ASD, diagnosed by medical professionals not affiliated with the study, served as participants. At the time of the study, both were receiving behavior-analytic services at a university-based autism center. Participants were recruited based on treatment goals related to listener and speaker responses by feature, function, and class. The experimenters collaborated with the children’s Board Certified Behavior Analysts to confirm the appropriateness of the goal and to identify teaching targets. Parents provided consent for research conducted during service delivery, which was approved by the institution’s review board for human subjects, and the current evaluation was approved by the autism center’s executive director. Research sessions were conducted in each participant’s designated therapy room (3.3 m × 2.4 m) at the center. Each therapy room included a table and two chairs along with instructional materials and toys.

Miguel was a 5-year-old Hispanic male who had been receiving applied behavior-analytic intervention for 12 months, not including a 3.5-month interruption due to the novel coronavirus (COVID-19) closure at the center, when this study began. He attended half-day sessions two times per week. In addition to his in-clinic intervention services, Miguel attended a public school four days per week. His family spoke Spanish and English at home. All intervention services were conducted in English. He obtained standard scores of 79 and 85 on the Peabody Picture Vocabulary Test-Fourth Edition (PPVT-4; Dunn & Dunn, 2007) and the Expressive Vocabulary Test-Second Edition (EVT-2; Williams, 2007), respectively. Miguel’s responding on the VB-MAPP was in the Level Two range in the mand, echoic, tact, and intraverbal domains. He could emit spontaneous mands for items, tact at least 200 nouns or verbs, answer 12 Wh-questions, and echo a variety of sounds and words (a score of 100 out of 100 on the Early Echoic Skills Assessment [EESA]; Esch, 2008). He demonstrated bidirectional naming prior to the study. That is, he emitted tacts following listener discrimination training and emitted correct listener selection responses following tact training. Novel targets were taught as tacts and probed as listener discriminations and vice versa, and Miguel emitted correct responses in the untrained modality during probes that did not include differential reinforcement.

Clare was a 4-year-old Eastern European American female who had been receiving applied behavior-analytic intervention for 16 months at the center, not including a 3.5-month interruption due to the COVID-19 closure, when this study began. Due to COVID-19 capacity limits, Clare attended half-day sessions three times a week, but she transitioned to full-day sessions five days a week halfway through the study as restrictions were eased. In the home, Clare’s family spoke both their native European language and English. All of Clare’s intervention services were conducted in English. She obtained standard scores of 147 and 106 on the PPVT-4 and the EVT-2, respectively. Clare’s responding on the VB-MAPP was in the Level Two range in the mand, echoic, tact, and intraverbal domains. She could emit spontaneous mands for items, tact at least 200 nouns or verbs, answer at least 25 different Wh-questions, and echo a variety of sounds and words (a score of 95 out of 100 on the EESA). She demonstrated bidirectional naming using the procedures described above for Miguel.

Materials and Target Selection

Materials included a 30 cm × 46 cm cardboard divider, printed data sheets, writing utensils, a video camera and tripod, participants’ preferred tangibles, and 5 cm × 9 cm stimulus cards. Each set of stimulus cards consisted of three laminated, colored images of community helpers (Miguel; see Table 1) or animals (Clare; see Table 2) on a white background. Images were found via an internet search engine. Targets included stimuli that corresponded with goals of the participants’ clinical programming and were modeled after those selected by Frampton and Shillingsburg (2020). We selected visual stimuli to which the participant could respond as both a speaker and a listener (e.g., saying “otter” in response to the SD and the antecedent verbal stimulus “What is it?” and selecting the otter from an array in response to the conditional stimulus “Touch otter.”).

Table 1 Targets for Sets 1–3 for Miguel

Table 2 Targets for Sets 1–3 for Clare

The IF statements were features of each stimulus. We defined features as relations relative to the target picture (e.g., what the target animal ate, where it lived; Cooper et al., 2020). For each stimulus, we identified features that could not be observed in the picture (e.g., we did not include fur color because it could be observed, whereas the picture of the dog did not include kibble; Frampton & Shillingsburg, 2020). One feature was selected per stimulus based on the participant’s responding during probes (described below). Each stimulus in a set had a feature that used a different carrier phrase (e.g., “It eats _.” “It lives in _.” “Its babies are _.”; Tables 1 and 2), and the carrier phrases were repeated across sets (e.g., three total “It eats _.” targets). To arrange stimuli in sets, we used a logical analysis (Cariveau et al., 2020; Wolery et al., 2014). Stimuli were arranged so that target names and IF statements included a similar number of syllables in each set (see Tables 1 and 2). We confirmed that participants could echo the features by conducting echoic probes with each vocal stimulus (Frampton & Shillingsburg, 2020). Visual images selected for each set were arranged similarly across sets (e.g., one animal in each set was facing forward, one to the left, and one to the right; community helpers were holding items, etc.).

Response Measurement and Interobserver Agreement

The main dependent variable was the frequency of correct independent responses emitted during listener, tact, and intraverbal (fill-ins and Wh-questions) probes (Frampton & Shillingsburg, 2020). Across operants, a correct independent response was defined as the participant emitting a specific target response within 5 s of the antecedent verbal stimulus (see Table 3 for specific operational definitions). Correct responses could include repeating any portion of the antecedent verbal stimulus. An incorrect response was defined as the participant engaging in any response other than the target response or not engaging in a response within 5 s. Correct responses were summed to obtain a total frequency and divided by the total number of opportunities to obtain a percentage. Sets were considered mastered if the participant emitted correct independent responses on at least 55% of trials within a probe session (i.e., at least 5/9 correct) across at least three of the following operants: listener-by-feature, tact-by-feature, intraverbal Wh-questions, and reverse intraverbals (fill-in intraverbals were excluded from the mastery criterion because they were not included in Frampton & Shillingsburg, 2020). The criterion for mastery was based on Frampton and Shillingsburg (2020) and was designed to account for emergent responses tested under extinction conditions. We continued to collect data on correct independent responses following mastery of each set to assess responding across time.
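Expressed as an equation, the percentage of correct independent responses for each operant was

\[ \text{percent correct} = \frac{\text{number of correct independent responses}}{\text{number of response opportunities}} \times 100. \]

For example, the minimum of 5 correct responses across the 9 probe trials for a set corresponds to (5/9) × 100 ≈ 55.6%, which meets the 55% criterion.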

Table 3 Operational definitions of correct and incorrect responses

In addition to the frequency of correct independent responses, we collected data on several other responses. Although the primary targets included in intervention were previously mastered, therapists collected data on participants’ responding to mastered targets during intervention sessions. Independent correct and incorrect responses were defined similarly to the listener-by-feature operant (Table 3); however, the antecedent verbal stimulus was the name of the stimulus (e.g., “Show me otter” rather than “Which lives in rivers?”). Prompted correct responses were defined as the participant imitating the therapist’s model of the correct response within 5 s. Prompted incorrect responses were defined as the participant failing to imitate the therapist’s model of the correct response within 5 s, either by selecting an incorrect stimulus or by not responding. We also collected data on whether the participant echoed the IF statement during intervention trials. An echoic response was defined as a vocalization that had point-to-point correspondence with the antecedent verbal stimulus (Skinner, 1957) and could include all or some of the words in the IF statement. We recorded the occurrence and nonoccurrence of echoic responses on each intervention trial.

A trained research assistant collected data on the participants’ responding from video for 34% of Miguel’s sessions and 40% of Clare’s sessions; data were collected throughout all phases of the study. An agreement was scored if both observers recorded the same response on a trial (e.g., both scored an independent correct response). A disagreement was scored if the observers recorded different responses on a trial. We calculated interobserver agreement on a trial-by-trial basis by dividing the total number of agreements by the sum of agreements and disagreements and multiplying by 100 to obtain a percentage. Mean agreement for Miguel’s intervention sessions was 100% for independent correct responses, prompted correct responses, and echoic responses. Mean agreement for Clare’s intervention sessions was 98% (range, 78–100%) for independent correct responses, 91% (range, 0–100%; the lower bound of the range reflects one session in which a single prompted response occurred and the data collectors disagreed on whether it was correct) for prompted correct responses, and 97% (range, 89–100%) for echoic responses. Mean agreement for Miguel’s and Clare’s probe sessions was 98% (range, 78–100%) and 97% (range, 67–100%), respectively.
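In equation form, trial-by-trial interobserver agreement was calculated as

\[ \text{IOA} = \frac{\text{agreements}}{\text{agreements} + \text{disagreements}} \times 100. \]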

Independent Variable and Procedural Integrity

The independent variable in the current study was the inclusion of IF within mastered listener-by-name trials. The IF statement included a feature of the target stimulus and did not include the name of the target stimulus (e.g., “It lives in rivers”; see Tables 1 and 2). Four female graduate-student therapists implemented the procedures. The therapists were in their mid-twenties (M = 25 years; range, 24–26); all identified as White, and three identified as Hispanic. They had 1.5 to 6 years (M = 3.5) of experience implementing behavior-analytic interventions with children with ASD.

A trained observer collected data on the therapists’ implementation of all components of the procedure with a checklist (supplementary information) across probe, pretest, and intervention sessions for 35% and 37% of Miguel’s and Clare’s sessions, respectively. Therapists were trained to implement the procedure with integrity using written protocols, video models, and in-person role-play practice opportunities with feedback (i.e., behavioral skills training). We calculated treatment integrity by dividing the total number of components implemented correctly by the therapist by the total number of components per session and multiplying by 100 to obtain a percentage. Mean treatment integrity for intervention sessions was 89% (range, 70–100%) for Miguel and 95% (range, 78–100%) for Clare. Mean treatment integrity for probe sessions was 95% (range, 70–100%) for Miguel and 95% (range, 71–100%) for Clare. Two commission errors occurred in the delivery of IF during Clare’s sessions, wherein the therapist said the name of the stimulus (e.g., “Dog eats kibble” instead of “It eats kibble.”). None of the treatment integrity errors involved reinforcer delivery during probes.

We collected reliability data on procedural integrity for 34% and 33% of Miguel’s and Clare’s sessions, respectively, across probe, pretest, and intervention. An agreement was scored if both observers recorded the same score for a component in the session. A disagreement was scored if the observers recorded different scores for a component in the session. Agreement was calculated by dividing the total number of agreements by the sum of agreements and disagreements and multiplying by 100. Mean agreement on procedural integrity was 96% (range, 75–100%) for both Miguel’s and Clare’s sessions.

Design

To evaluate whether instructive feedback led to emergent intraverbal responses, we used a concurrent multiple baseline design across sets. Baseline assessments were conducted with three sets of stimuli, and therapists measured the participants’ responding across operants. Then, therapists implemented one intervention series with Set 1. Each intervention series consisted of three sessions (i.e., a total of nine exposures to each IF statement). Following one intervention series, therapists conducted probes to assess emergence across operants and sets. If emergence (i.e., at least 55% correct independent responses on at least three operants, excluding fill-in intraverbals) was not observed, the therapist conducted another intervention series with Set 1 before conducting more probes. Sets 2 and 3 remained in baseline conditions while Set 1 was in intervention. Once emergence was observed with Set 1, intervention began with Set 2. This process continued until all sets were exposed to intervention sessions (see supplementary information). Intervention with a later set was discontinued once twice the number of intervention series required for Set 1 had been conducted with Set 2 or Set 3 and there was no increasing trend across any operant.
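As an illustrative sketch only (the function and variable names below are ours and not part of the published procedure), the emergence criterion that governed advancement between sets can be summarized in Python:

def emergence_met(percent_correct_by_operant, criterion=55, min_operants=3):
    # percent_correct_by_operant maps each probed operant (listener-by-feature,
    # tact-by-feature, intraverbal Wh-questions, reverse intraverbals; fill-in
    # intraverbals are excluded) to the percentage of correct independent
    # responses in the probe session.
    operants_at_criterion = sum(
        1 for percent in percent_correct_by_operant.values() if percent >= criterion
    )
    # Emergence required at least 55% correct on at least three operants.
    return operants_at_criterion >= min_operants

# Example probe with 5/9, 6/9, 4/9, and 5/9 correct independent responses:
probe = {
    "listener_by_feature": 55.6,
    "tact_by_feature": 66.7,
    "intraverbal_wh_questions": 44.4,
    "reverse_intraverbal": 55.6,
}
print(emergence_met(probe))  # True: three operants meet the 55% criterion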

Procedure

We replicated the procedures of Frampton and Shillingsburg (2020) with one deviation: We provided 20-s access to preferred items rather than tokens because of the participants’ existing behavior-intervention plans. Preferred items were identified using a brief, daily, multiple-stimulus-without-replacement preference assessment (Carr et al., 2000). If the participant vocally selected an alternative item, therapists provided that item. Each session included three presentations of each target stimulus (i.e., nine trials) as well as one or two warm-up trials (described below). Probe and pretest sessions also included trials of interspersed tasks, resulting in a range of 12 to 14 trials per session.

Choice Trial

Before each session, we conducted a choice trial to identify a tangible item to deliver according to the reinforcement schedule. Therapists presented an array of three to five preferred items, pointed to each item while providing a tact, and instructed the participant to “Pick one.” Once the participant selected one of the items (i.e., via a vocal mand, point, reach, or touch), the therapist said, “You can play with [item] after you do some work,” and removed all preferred items from the table to initiate warm-up trials.

Warm-up Trials

Each session began with one or two warm-up trials. Warm-up trials included high-probability tasks (e.g., motor imitation, echoics). General praise followed correct responses. If the participant engaged in an incorrect response, the therapist used the error-correction procedure (described below). Regardless of how the participant responded during the warm-up trials, the therapist moved on to the target task.

Interspersed-Task Trials

We interspersed trials of unrelated, high-probability tasks (e.g., motor imitation, listener discrimination with and without pictures, echoics, and intraverbals) approximately every three trials in pretest and probe sessions (described below). Therapists delivered general praise and 20-s access to a tangible item for independent and prompted correct responses on interspersed trials. If the participant engaged in an incorrect response, the therapist used the error-correction procedure.

Error Correction

Error correction followed incorrect responses emitted during warm-up, interspersed-task, and mastered-listener trials (see the flowchart in the supplementary information); no error correction followed incorrect responses on pretest or probe trials. Following an incorrect response, the therapist re-presented the SD and immediately provided a model of the correct response. The therapist then re-presented the SD without a response prompt and gave the participant 5 s to respond independently. Following a correct response, the therapist presented a distractor task (i.e., a high-probability response). If the participant engaged in an incorrect response during the distractor task, the therapist prompted the correct response. After the distractor task, the therapist again re-presented the SD. If the participant responded correctly, the therapist provided either general praise (warm-up trials) or general praise and access to a tangible item (interspersed-task and mastered-listener trials). If the participant engaged in an incorrect response, the therapist repeated the error-correction sequence until the participant engaged in an independent correct response to the SD.

Pretests

We evaluated prerequisite skills with all stimuli in each set (Frampton & Shillingsburg, 2020; Shillingsburg et al., 2018); skills included identity matching, echoics, listener-by-name discriminations, and tact-by-name responses. Sessions included three trials of each target stimulus, one or two warm-up trials, and three interspersed-task trials. Skills were tested in the following order: identity matching, echoics, listener-by-name, and tact-by-name. One skill was assessed with each set (e.g., identity matching with Sets 1, 2, and 3) before moving on to the next skill. The therapist presented antecedent stimuli according to the descriptions in Table 3. The participant had 5 s to respond following each SD. The therapist did not provide any response prompts and provided a neutral statement following correct and incorrect responses (see Table 3 for operational definitions). To advance to probes, correct responses needed to occur on at least 89% of trials within a session across all skills with all sets (data available upon request).

Baseline and Emergence Probes

Baseline and probe sessions evaluated responding across listener-by-feature, tact-by-feature, name-feature intraverbal (fill-in statements and Wh-questions), and reverse intraverbal (Wh-questions) operants. Several trial-order versions of each probe type were created so that stimulus-presentation orders were semi-randomized across probe presentations (i.e., no stimulus occurred on more than two consecutive trials). Based on the procedures in Frampton and Shillingsburg (2020), skills were tested in a fixed order: listener-by-feature, tact-by-feature, intraverbal (fill-in statements and Wh-questions), and reverse intraverbals. One probe type was assessed with each set (e.g., listener-by-feature with Sets 1, 2, and 3) before moving on to the next type. No responses were prompted, and the therapist provided a neutral statement (e.g., “Okay,” “Alright”) after each response regardless of whether it was correct or incorrect.

Instructive Feedback Intervention

The intervention sessions included mastered listener-by-name discriminations as the primary targets and feature tacts as the secondary targets (flowchart in supplementary information). These sessions included warm-up trials but no interspersed tasks.

Primary Targets

The procedure was identical to the listener-by-name pretest trials (Table 3) except that correct responses were reinforced: independent and prompted correct responses were followed by general praise and 20-s access to a preferred item. If the participant responded incorrectly, the therapist used the error-correction procedure.

Secondary Targets

Once the preferred item was delivered, the therapist presented the SD at the participant’s eye level while pointing to the stimulus. If the participant did not look at the SD within 5 s of its presentation, the therapist said, “Look.” If another 5 s elapsed without the participant attending to the picture, the therapist placed the visual stimulus in front of the preferred item until the participant looked at the picture. Once the participant looked at the picture, the therapist delivered the IF statement, which included the target feature of the stimulus (see Tables 1 and 2). The IF statement did not repeat the name of the stimulus (e.g., “It eats shrubs” instead of “Goat eats shrubs”). After delivering the IF statement, the therapist waited 1 s, removed the picture and the other pictures in the array, collected data, and engaged with the participant for the remainder of the access time with the preferred item.

Procedural Modifications for Clare

Based on the participant’s responding during probe conditions, we modified the procedures to try to evoke correct responses in the presence of antecedent stimuli. These modifications occurred during probes only; the intervention sessions remained unchanged.

Removing Stimuli Following Incorrect Responses

After four intervention series with Set 1 (i.e., 12 sessions), we noticed Clare was engaging in unintelligible vocalizations or saying “okay” with a short latency on most probe trials across operants and sets. We hypothesized that the function of her short-latency vocalizations was to remove the trial stimuli and end probe sessions (i.e., putative escape-maintained behavior). Therefore, we modified how therapists responded to incorrect responses during probes so that non-target vocalizations no longer resulted in immediate presentation of the neutral statement and termination of the trial. Specifically, following unintelligible vocalizations or “okay,” the therapist waited the full 5-s response interval before providing a neutral statement and moving on to the next trial.

After four intervention series with Set 2 (i.e., 12 sessions), Clare’s responses during probes changed to repetitive, unrelated vocalizations (e.g., “Mommies, and daddies, and babies.”) with short latencies. We hypothesized that the change described above shifted responding from unintelligible vocalizations to the intelligible but repetitive vocalizations. Therefore, we modified the therapist’s responding so that every non-target vocalization resulted in the full 5-s response interval for all remaining probe sessions.

Interspersed-Task Ratio

After six intervention series with Set 3 (i.e., 18 sessions), Clare stopped emitting vocal responses on most probe trials. We became concerned that she was no longer responding due to the lean reinforcement schedule in place during probes relative to most of her intervention sessions (i.e., a variable-ratio 3 schedule during probes compared to a fixed-ratio 1 schedule during intervention and other acquisition programs). Additionally, her responding suggested that removal of the instructional stimuli and shorter sessions may have been more effective reinforcers than tangible items in the moment. Therefore, we increased the number of interspersed tasks, which resulted in access to more preferred tangibles within the session.

Differential Reinforcement

When correct responding did not reach criterion for Sets 2 and 3 after 11 (i.e., 33 sessions) and 10 (i.e., 30 sessions) intervention series, respectively, we moved from extinction conditions to differential reinforcement (Mitteer et al., 2020). We hypothesized that, even if the responses had come under appropriate sources of control, Clare might not emit the target responses during probes because those responses did not contact differential reinforcement and because the interspersed trials were highly discriminable. Therefore, with this modification, the therapist provided praise and 20-s access to a tangible item if Clare emitted a correct response during a probe trial.

Extended Response Interval

Clare was not emitting vocal responses reliably after we added differential reinforcement; therefore, we made a final modification and increased the response interval (Gorgan & Kodak, 2019). We hypothesized that, if the target responses had been acquired under the relevant sources of control, an extended response interval would increase the probability of vocalizations because responding would end the probe session more quickly. In addition, an extended response interval had increased correct independent responses in other programs within Clare’s comprehensive applied behavior-analytic services. With this modification, Clare had 10 s to respond.
