During text reading, we generate three to four fast eye movements (saccades) per second that move words into the center of the visual field for high-acuity processing. Almost all word processing happens during fixations, whose average durations range from about 150 to 300 ms (Kliegl et al., 2004; Rayner, 1998). The number of fixations in a typical text is comparable to the number of words; however, some words do not receive a fixation (word skipping), while others are targeted several times. An immediate second fixation on the currently fixated word is termed a refixation. The eye's scanpath is even more complicated, because some saccades shift the gaze against the reading direction, back to previously fixated words or regions of text. Such regressions represent about 5% to 10% of the saccades in a typical text. Observed sequences of fixations during reading are thus characterized by sequential dependencies. Experimentally, these sequential dependencies are investigated with sophisticated designs such as the boundary paradigm (Risse and Seelig, 2019; Starr and Rayner, 2001). In mathematical modeling, eye-movement control can be explained by dynamical theories that make specific assumptions about these sequential dependencies (Engbert et al., 2002; Reichle et al., 1998).
In this tutorial article, we provide an introduction to dynamical modeling of eye movements in reading as an example of the dynamical systems approach to cognition (Beer, 2000; Engbert, 2021). Potential readers might have varying degrees of interest in dynamical models, eye-movement control, or one of the following topics: reading models, likelihood computation, Bayesian inference, Markov chain Monte Carlo techniques, or individual differences. What is specific to dynamical models is that they can predict a future data point from a current data point (the currently observed state) via simulation. This property can be exploited for statistical inference when time-ordered data are available and the likelihood function can be computed sequentially.
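For time-ordered observations $x_1, x_2, \ldots, x_T$ (here: a sequence of fixations), the chain rule of probability yields a sequential form of the likelihood. The following is a generic sketch in standard notation, not the specific SWIFT likelihood derived in Section 4:

$$
L(\theta \mid x_{1:T}) \;=\; p(x_1 \mid \theta) \prod_{t=2}^{T} p\!\left(x_t \mid x_{1:t-1}, \theta\right),
$$

so that a model able to predict the next observation from the current state can evaluate each factor in turn.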
The dynamical model to be investigated in detail in this article is the SWIFT model (Engbert et al., 2002; Engbert et al., 2005) of eye-movement control in reading. The model was developed with the aim of providing a theoretically coherent framework for all types of eye movements during reading. Since the model is based on temporally evolving word activations, it is conceptually similar to the dynamic field theory of movement preparation (Amari, 1977; Erlhagen and Schöner, 2002). The first version of SWIFT was published in 2002; it assumed a discrete space of words and, as a first approximation, neglected the realistic physical layout with respect to word length. A major modification of the model was published in 2005, in which we included, among other aspects, the full complexity of word-length effects and saccadic errors (Engbert et al., 2005). It is important to note that a number of other models have been developed in parallel over the last 20 years (e.g., Reichle et al., 1998; Reilly and Radach, 2006; Snell et al., 2018), which are all good candidates for implementing Bayesian parameter inference as proposed in this tutorial.
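To convey the flavor of activation-based eye-movement control, the following R snippet is a deliberately crude toy, not the SWIFT model (all rates and the triangular activation function are ad-hoc choices for illustration; the actual equations are derived in Section 2): each word accumulates processing while fixated (plus some parafoveal preview of the next word), its activation first rises and then falls as processing completes, and the next saccade target is drawn with probability proportional to the current activations.

```r
set.seed(1)
n_words <- 10
proc <- rep(0, n_words)                  # accumulated processing per word
## toy activation: rises while proc < 1, falls back to 0 as proc -> 2
activation <- function(p) pmin(p, pmax(2 - p, 0))

k <- 1                                   # currently fixated word
scanpath <- k
for (fix in 1:25) {
  proc[k] <- proc[k] + 1.0               # foveal processing during a fixation
  if (k < n_words)
    proc[k + 1] <- proc[k + 1] + 0.4     # weaker parafoveal preview
  a <- activation(proc) + 1e-6           # tiny floor so probabilities exist
  k <- sample.int(n_words, 1, prob = a)  # stochastic target selection
  scanpath <- c(scanpath, k)
}
print(scanpath)  # mostly left-to-right, with refixations and regressions
```

Even this caricature produces a qualitative mix of forward saccades, refixations, and occasional regressions as described above; the simplified SWIFT model in Section 2 replaces these ad-hoc rules by ordinary differential equations.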
The next major step in the development of the SWIFT model was related to statistical inference. First, we proposed an approximate likelihood function for the model (Seelig et al., 2020). Next, we applied a likelihood-based Bayesian framework to a challenging data set and demonstrated that likelihood-based inference makes it possible to estimate model parameters for individual readers (Rabe et al., 2021). Thus, for the first time in eye-movement research in reading, we were able to study interindividual differences with an advanced computational model. This result has recently been discussed in the broader context of dynamical cognitive modeling (Engbert et al., 2022).
Because of the remarkable advances in the field of eye-movement modeling over the last 25 years, current models are highly elaborate and therefore difficult to teach, in particular with respect to statistical inference (Schütt et al., 2017). For teaching and training purposes in dynamical modeling, it thus seems useful to start with a simplified version of the SWIFT model that introduces researchers to the key concepts of both dynamical models and their statistical inference.
This tutorial article is organized as follows. In Section 2, we derive and explain the simplified version of the SWIFT model, whose key concepts are word activations that evolve in time according to a set of ordinary differential equations, and stochastic target selection for saccade preparation. In Section 3, we discuss examples of simulated eye trajectories and refer the reader to an R-based computer implementation for running their own numerical experiments. The likelihood function of the simplified model is discussed in Section 4; it can be decomposed into temporal and spatial components (a generic sketch is given below). We report numerical simulations that explore profile likelihoods as an analysis of parameter identifiability. In Section 5, we introduce Bayesian inference and carry out Monte Carlo simulations to estimate posterior densities over the model's parameter space. In Section 6, using individual parameter estimates for a larger text corpus, we run posterior predictive checks to reproduce interindividual differences in reading behavior. We close with a short summary and discussion section. All computations are available as an implementation in the R Language for Statistical Computing (R Core Team, 2018; see the Data Availability Statement below).
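The temporal and spatial decomposition mentioned for Section 4 can be sketched generically as follows (schematic notation, not the exact formulation given in Section 4): writing $T_i$ for the duration of fixation $i$, $x_{i+1}$ for the position of the next fixation, and $\mathcal{H}_i$ for the fixation history up to fixation $i$,

$$
L(\theta) \;=\; \prod_{i=1}^{n} \underbrace{p\!\left(T_i \mid \mathcal{H}_i, \theta\right)}_{\text{temporal}} \; \underbrace{p\!\left(x_{i+1} \mid \mathcal{H}_i, T_i, \theta\right)}_{\text{spatial}},
$$

so that each fixation contributes one factor for when the eyes move on and one factor for where they move next.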