A Preliminary Study of the Eye-Gaze Patterns and Reading Comprehension Skill of Students on the Autism Spectrum

Participants

Following ethics approval and informed parental consent, participants were selected based on their scores on the Progressive Achievement Tests in Reading (PAT-R), an Australian-based test of reading comprehension and word knowledge (see https://www.australiaeducation.info/Tests/K12-Standardized-Tests/reading.html). Two students with a clinical diagnosis of autism, made by a psychologist, a paediatrician, or both, were recruited. Typically developing students of similar age, grade level, gender, word reading, and language skills were included as a comparison group. Word reading and language skills were measured by the Total Reading Composite Score on the Woodcock Reading Mastery Tests—Third Edition (WRMT-III; Woodcock, 2011) pre-assessment. Descriptive statistics for participants are presented in Table 1.

Table 1 Participant demographics and WRMT-III pre-testing scores

Procedures

Assessment

The first five minutes of each assessment session were spent familiarising participants with the eye-tracking equipment, followed by the 90-min WRMT-III assessment, which was conducted to exclude any participants with a severe reading deficit. Four participants met the inclusion criteria.

Eye-Tracking

During calibration and reading tasks, stimuli were presented by a laptop running Tobii Studio and viewed by participants on the connected monitor. Binocular eye movements were recorded remotely, meaning no contact was made between the participant and the eye-tracker. Each participant sat in a chair with adjustable seat height to ensure the top of the monitor was aligned with the top of the participant’s head. The chair was positioned approximately 65 cm from the front of the monitor. Before starting the experimental task, participants undertook a five-point calibration procedure, in which they followed a dot with their gaze as it moved across the screen and stopped at five locations.

Reading tasks were completed in a single session of 50 min. Sessions were video-recorded so that recordings could be re-watched if researchers failed to code answers in vivo. Participants were instructed to read the entire text aloud from start to finish. The Reading Phase followed calibration immediately via a button press from the researcher and began the moment the text stimuli were presented on the monitor. This phase ended when the participant finished reading the last word of the passage, at which point the researcher, via button press, switched to a screen containing a black background and a white fixation cross to prevent re-reading during the Reading Phase.

To begin the Question-Answering Phase, the researcher switched back to the text screen. Participants were instructed to answer a series of questions posed by the researcher, referring back to the text if needed. Participants were encouraged with a verbal prompt to guess the answer if they indicated they were unsure or had not responded within 1 min. The researcher ended the task via a final button press once all questions had been answered.

Measures

All testing sessions occurred in a small room containing two chairs and a table designed to minimise distractions.

Woodcock Reading Mastery Tests—Third Edition (WRMT-III)

Form A of the WRMT-III was used to test Basic Skills, Reading Comprehension, Listening Comprehension, and Oral Reading Fluency, yielding a Total Reading composite score.

Eye-Tracking Measures

Eye-movement data were collected using the Tobii X2-30, a remote eye-tracker that uses the pupil-centre/corneal-reflection technique (Tobii Technology AB, 2014). The X2-30 is suited to young children who may move their heads during testing, as it maintains tracking within a 20″ × 14″ (width × height) area. The X2-30 was interfaced with a Dell laptop running Tobii Studio v3.4.6, the software used to control the presentation of text stimuli to participants on a 24″ monitor with 1920 × 1080 resolution. The X2-30 was attached to the bottom of the viewing monitor via a magnetic bracket that held it in place. The front of the eye-tracker was angled at 45° so as to point directly at participants’ eyes. A Logitech HD Pro Webcam C920 was also attached to the top of the viewing monitor to record participants’ faces during eye-tracking tasks.

Texts

Four texts were included: two from the Grades 3–4 level of the Wechsler Individual Achievement Test—Second Edition (WIAT-II; Wechsler, 2005) and two created by the research team. The created texts were proofread by the researchers, who evaluated each for its suitability for analysis. All text was presented in black monospace Century Gothic font (size 12) on a white background with 1.5 line spacing, ensuring adequate space between lines so that ambiguous fixations (slightly above or below a line of text) would not be wrongly attributed to a different line. Images were added to the created texts so that they resembled the WIAT-II texts.

Text difficulty was graded with the Flesch-Kincaid readability formula, allowing the created texts to be standardised to represent the reading ability of a 9- to 10-year-old child with 3 to 4 years of Australian schooling. Grade-level readability was calculated as (0.39 × average number of words per sentence) + (11.8 × average number of syllables per word) − 15.59. Word counts ranged between 78 and 156 (M = 122, SD = 34.1). Text difficulty statistics are presented in Table 2.
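For illustration, the grade-level calculation above can be expressed in a few lines of code. The sketch below is a minimal Python example and not the tool used in the study; in particular, the syllable counter is a rough vowel-group heuristic introduced here for demonstration, so its counts may differ slightly from those underlying the reported statistics.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels (heuristic only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Grade level = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# A created text would be retained if its computed grade level fell within
# the target Grade 3-4 band; the sentence below is a made-up sample input.
sample = "Poppy the puppy ran to the gate. She barked at the postman every morning."
print(round(flesch_kincaid_grade(sample), 1))
```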

Table 2 Text difficulty measures

Questions

The existing questions for the WIAT-II texts were used (Crickets = 6 questions, Good Neighbours = 5 questions), while questions were written for the created texts (Poppy the Puppy = 9 questions, Camping Weekend = 7 questions) to assess students’ comprehension. Although the number of comprehension questions varied across texts, the total possible marks per text ranged from 10 to 12 (Crickets = 12, Good Neighbours = 10, Poppy the Puppy = 12, Camping Weekend = 12 marks). The comprehension questions were based on the taxonomy developed by Day and Park (2005), which can be used to develop comprehension questions that help young students better understand what they read, rather than a hierarchical model such as Bloom’s Taxonomy (Bloom, 1956), in which lower-order thinking skills must be acquired before higher-order critical thinking skills. There were 26 questions in total across all texts, of three types: Literal (10), which refers to an understanding of the straightforward meaning of the text, such as dates, facts, vocabulary, and locations; Reorganise (8), in which students must draw on a literal understanding of the text and combine information from various parts of it for additional meaning; and Inference (8), in which answers are based on material that is in the text but not explicitly stated, combining a literal understanding of the text with the student’s own knowledge and intuitions (Day & Park, 2005). Although describing foundational comprehension skills as “lower-level” has led educators to devalue this knowledge (Munzenmaier & Rubin, 2013), the purpose of the current study was to explore how students obtain this knowledge through the study of their eye-gaze patterns.

Question formats included forced-choice (i.e. alternatives, true/false), open-answer, and closed-answer (e.g. fill-in-the-blank cloze exercises). Before a question and its satisfactory answers were assigned to a story, a consensus on suitability was required from the research team. Answers provided by participants were assigned a mark of 0, 1, or 2, with higher marks reflecting a fuller understanding of the text. For certain forced-choice questions, a maximum mark of 1 was assigned for a correct answer. Examples of questions and answer criteria are provided in Table 3.

Table 3 Question and answer examples

Data Analyses

Tobii Studio was set to filter out all fixations shorter than 80 ms, as this was calculated to be an insufficient time for text processing, and longer than 800 ms, as this was assumed to indicate “mind wandering,” that is, when the reader continues to maintain eye-gaze on a fixed location without processing visual information (Smallwood et al., 2011). Participant eye-gaze was recorded for both phases, with the Question-Answering Phase also divided into scenes, a feature of Tobii Studio that allows a recording to be broken into subsets. The number of scenes equalled the number of comprehension questions; for example, if a participant answered five questions, five scenes were created for that passage. A scene began at the end of a question-utterance by the researcher and finished at the end of the reply-utterance by the participant.
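The filtering and scene-segmentation steps can be sketched as follows. This is an illustrative Python example only, assuming fixation events have been exported from Tobii Studio as simple (onset, duration, position) records; the record fields and function names are hypothetical, while the 80 ms and 800 ms thresholds come from the procedure described above.

```python
from dataclasses import dataclass

MIN_FIXATION_MS = 80    # shorter fixations treated as insufficient for text processing
MAX_FIXATION_MS = 800   # longer fixations treated as "mind wandering"

@dataclass
class Fixation:
    onset_ms: int      # onset time relative to recording start (hypothetical field)
    duration_ms: int   # length of the fixation
    x: float           # horizontal gaze position (pixels)
    y: float           # vertical gaze position (pixels)

def filter_fixations(fixations: list[Fixation]) -> list[Fixation]:
    """Keep only fixations whose duration lies within the 80-800 ms window."""
    return [f for f in fixations if MIN_FIXATION_MS <= f.duration_ms <= MAX_FIXATION_MS]

def fixations_in_scene(fixations: list[Fixation], start_ms: int, end_ms: int) -> list[Fixation]:
    """Return fixations whose onset falls within one Question-Answering scene."""
    return [f for f in fixations if start_ms <= f.onset_ms < end_ms]

# Example with two made-up fixations: the 60 ms fixation is discarded.
raw = [Fixation(0, 60, 120.0, 300.0), Fixation(100, 250, 180.0, 300.0)]
print(len(filter_fixations(raw)))  # 1
```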

Dependent variables were mean fixation duration and comprehension accuracy. Fixations were defined as the maintenance of eye-gaze for at least 80 ms at a single location; hence, mean fixation duration was calculated as the average length of fixations across texts for a single participant. Comprehension accuracy scores (marks awarded/total possible marks) were summed and converted into an overall percentage for each participant.
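A minimal sketch of how these two dependent variables could be computed from the filtered data is given below; the function names and the example mark values are illustrative, not the study’s analysis code, although the per-text mark totals match those reported for the texts above.

```python
def mean_fixation_duration(durations_ms: list[float]) -> float:
    """Average fixation length (ms) across all texts for one participant."""
    return sum(durations_ms) / len(durations_ms)

def comprehension_accuracy(marks_awarded: list[int], total_possible: int) -> float:
    """Marks awarded divided by total possible marks, expressed as a percentage."""
    return 100.0 * sum(marks_awarded) / total_possible

# Hypothetical example: a participant scoring 9 of the 12 possible marks on
# "Crickets" and 8 of the 10 on "Good Neighbours" would have an accuracy of
# 100 * (9 + 8) / (12 + 10) across those two texts.
print(round(comprehension_accuracy([9, 8], 22), 1))  # 77.3
```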
