Understanding words in context: A naturalistic EEG study of children’s lexical processing

A central part of language comprehension is recognizing words, retrieving their meanings, and integrating them into the discourse you are hearing. When we hear a word out of the blue, with no prior context, we often identify it bottom-up, using only the sound of the word itself. More typically, however, word comprehension occurs in the context of a sentence or a conversation that can guide us to the correct meaning before the word is fully produced. For example, we can infer that the statement “He made a peanut butter and jelly…” is likely to end with “sandwich.” Adults rapidly use top-down cues to disambiguate and predict upcoming words (e.g. Altmann and Mirković, 2009, Federmeier, 2007, Kuperberg and Jaeger, 2016, McRae et al., 2005, McRae and Matsuki, 2013), but we know very little about how this ability develops. The bottom-up process of mapping speech sounds to words has been extensively studied in infants and children. It develops early and becomes more rapid and efficient during childhood (Fernald et al., 2001, Sekerina and Brooks, 2007, Swingley et al., 1999). In contrast, there are few studies on children’s use of top-down constraints during lexical processing and the findings are mixed (Henderson et al., 2013, Khanna and Boland, 2010). Some studies have found that children make less use of top-down information than adults, both in the resolution of lexical ambiguity (Rabagliati et al., 2013) and syntactic ambiguity (Kidd and Bavin, 2005, Snedeker and Trueswell, 2004, Snedeker et al., 2009). This pattern of findings suggests that there might be a developmental shift from bottom-up to top-down processing as children acquire knowledge about the world and become more adept at coordinating these information streams (Snedeker & Huang, 2015; Yacovone et al., 2021b).

The present study uses EEG to explore the comprehension of spoken words in adults and children between the ages of five and ten, in a rich and ecologically valid context. We look at the N400 response, a negative-going deflection in the event-related potential (ERP) that typically peaks around 400 ms after word onset. The N400 response in adults is sensitive to a wide range of factors that affect the processing difficulty of a word in context and has therefore been argued to reflect the ease of lexical processing (e.g. Kutas and Federmeier, 2011, Van Petten, 1993, Van Petten and Luka, 2006).

Our task (the Storytime Paradigm) is an adaptation of a method that has been used to study language comprehension in adults (Alday et al., 2017, Brennan et al., 2016, Brennan and Hale, 2019, Brennan et al., 2019, Zhang et al., 2021) and, more recently, to study syntactic processing in children (Brennan et al., 2019, Brennan and Hale, 2019).1 Participants listen to a story while we collect continuous EEG. We then model the event-related response to each content word in the story to draw inferences about the role of top-down constraints during lexical comprehension. In the remainder of this Introduction we 1) describe what we know about lexical access in adults; 2) discuss the functional characterization of the N400 in adults; 3) review the prior work on lexical access in children, noting the absence of strong evidence for the use of top-down lexical prediction; 4) discuss the limited characterization of the N400 in young children; and finally 5) describe our method for looking at lexical access during naturalistic listening and introduce three possible hypotheses about children’s use of top-down context.

To understand speech, we must construct a series of representations (phonological, syntactic, and semantic) that link sounds to conversational intentions. This process is incremental: as we hear each speech sound, we update our hypotheses about the word that is being spoken, activate the meanings of those candidate words, and begin integrating these potential meanings into the sentence (e.g. Allopenna et al., 1998, Lee et al., 2012, MacDonald et al., 1994, Yee and Sedivy, 2006). This flow of information from lower to higher levels of representation is called bottom-up processing. Critically, information also flows in the other direction: as adults are listening to a sentence, their knowledge of the speaker’s intentions and likely sentence meanings shapes their hypotheses about upcoming words and sounds. This flow of information is often referred to as top-down processing (e.g. Kutas and Federmeier, 2011, Nieuwland and Van Berkum, 2006, Van Berkum et al., 2005, Van Berkum et al., 1999).

In the present study, we focus on one step in language comprehension: retrieving the meaning of a spoken word, here referred to as lexical access. As a word unfolds, we incrementally map the sounds we are hearing onto stored lexical representations. The degree to which ease of lexical access is determined by bottom-up vs. top-down processing depends on the amount of noise in the perceptual signal, the context in which a word is found, and the degree to which that context constrains the meaning of the word.

When comprehenders encounter a word in isolation, frequency plays a critical role in lexical access: more frequent words are identified more quickly and on the basis of less perceptual information (for review see Brysbaert et al., 2018) which affects naming times (e.g. Jescheniak and Levelt, 1994, Oldfield and Wingfield, 1965), lexical decision times (e.g. Gimenes & New, 2016), reading times (Gerhand & Barry, 1998), referent identification (Erker & Guy, 2012), and length of fixations on a word during reading (Kretzschmar et al., 2015, Staub, 2015). In EEG studies of isolated words, more frequent words have smaller N400 responses suggesting that they are more easily processed (Dambacher et al., 2006, Halgren and Smith, 1987, Rugg, 1990, Van Petten, 1993). Because frequency effects on single word comprehension are strong and ubiquitous, frequency was built into the bones of many early lexical processing models. For example, in many spreading activation models, each word has a set threshold which determines the level of activation required before it will be recognized (Marslen-Wilson, 1990, McClelland and Rumelhart, 1981, Morton, 1969). These thresholds are an inverse function of frequency, such that more common words have lower thresholds and thus require less perceptual input (and less time) for recognition (Solomon & Postman, 1952).

Early research on lexical processing in adults found that speed of access is not only affected by properties of the word itself but can also be influenced by the words that precede it. The classic demonstration of this is semantic priming: we are faster and more accurate in identifying a word (e.g. cat) when it is preceded by a related word (e.g. dog) than when it is preceded by an unrelated word (e.g. chair) (Meyer and Schvaneveldt, 1971, Moss et al., 1995, Swinney, 1979). Latent Semantic Analysis (LSA) has been used to quantify lexical associations across words by tracking the contexts in which a word occurs and calculating the degree to which two (or more) words share the same contexts (Landauer & Dumais, 1997). This measure predicts the ease of lexical access as indexed by priming effects in lexical decision tasks (De Wit and Kinoshita, 2014, Günther et al., 2016, Vigliocco et al., 2009, but see Hutchison et al., 2008). LSA also predicts the magnitude of the N400 (Van Petten, 2014). Perhaps the strongest evidence that semantic priming plays a role in processing words in sentences comes from studies of cross-modal priming using homophones. Both meanings of a homophone are usually active immediately after the word is spoken, even when syntactic constraints rule out the irrelevant meaning. However, if one of the words in the prior context has a strong lexical association with the correct meaning, then the incorrect meaning is sometimes suppressed (Seidenberg et al., 1982). For example, hearing “the farmer bought the straw” does not prime “soda”.
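The core of an LSA-style association measure is the cosine similarity between words' context-count vectors. The toy sketch below illustrates the idea; the words, contexts, and counts are invented for illustration, and real LSA additionally applies singular value decomposition to a large corpus-derived matrix before comparing vectors:

```python
import numpy as np

# Hypothetical word-by-context co-occurrence counts (one vector per word,
# one entry per context). Invented numbers, chosen so that "dog" and "cat"
# share contexts while "chair" mostly does not.
counts = {
    "dog":   np.array([2.0, 1.0, 9.0]),
    "cat":   np.array([1.0, 2.0, 8.0]),
    "chair": np.array([0.0, 7.0, 1.0]),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 = highly similar context profiles."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_related = cosine(counts["dog"], counts["cat"])      # high overlap
sim_unrelated = cosine(counts["dog"], counts["chair"])  # low overlap
```

On this kind of measure, related pairs like dog–cat receive much higher similarity scores than unrelated pairs like dog–chair, mirroring the priming contrast described above.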

When words are presented in the context of a sentence, passage, or conversation, lexical access can be constrained by higher-level representations of the unfolding discourse (e.g. McRae et al., 2005, McRae and Matsuki, 2013). The central role of top-down processing is supported by research demonstrating that a constraining sentence or discourse facilitates the identification and integration of words that are congruent with that discourse. Predictable words are identified faster (Craig et al., 1993), are more resistant to noise (Kalikow et al., 1977), show faster first-pass reading times (Kliegl et al., 2004, Rayner et al., 2011), and are more likely to be skipped over in reading (Rayner et al., 2004, Rayner et al., 2011). Conversely, words that are inconsistent with a preceding discourse are read more slowly (Van Berkum et al., 2005). Much of the evidence for top-down lexical processing has come from work looking at the N400 ERP response (Nieuwland and Van Berkum, 2006, Van Berkum et al., 1999, see Kutas & Federmeier, 2011 for review).

Historically there have been three ways of thinking about the N400. An early theory conceptualized the N400 response as an index of bottom-up lexical access (Holcomb & Neville, 1990). This reflected data primarily from words in isolation which showed effects of bottom-up constraints such as word frequency (Rugg, 1990, Smith and Halgren, 1987) and lateral activation due to semantic priming (Holcomb & Neville, 1990). Subsequent research on the N400 response, however, has shown that when words are presented within a sentence, the N400 response reflects top-down word predictability rather than bottom-up lexical properties (Dambacher et al., 2006, Van Petten and Kutas, 1990).

These top-down effects on the N400 gave rise to a second theory: that the response reflected later integrative stages of lexical processing (Brown and Hagoort, 1993, Burnsky et al., 2023, Kretzschmar et al., 2015, Staub, 2015). This hypothesis is primarily supported by recent data suggesting that, in sentence contexts, initial fixations to written words remain sensitive to bottom-up lexical constraints but that the N400 response is only sensitive to predictability, suggesting that the N400 reflects a later stage of processing than the fixations (Burnsky et al., 2023, Kretzschmar et al., 2015). While the early bottom-up lexical access theory of the N400 response failed to account for the top-down effects of context, the purely integrative account fails to explain why the N400 response can, under certain contexts, reflect bottom-up constraints (Nour Eddine et al., 2023).

A third theory of the N400 seeks to account for both of these effects, by positing that the response reflects lexical access, but that lexical access is not a feed-forward process but instead is one that is often guided by top-down constraints from the outset (Kutas and Federmeier, 2000, Lau et al., 2009, for review see Kutas & Federmeier, 2011). On this theory, all words produce an N400 response, which reflects the process of accessing the word or concept. When the prior context allows us to predict semantic features or words, the N400 is smaller (Kuperberg et al., 2020) and the word is easier to retrieve. In the current study, we conceptualize the N400 response in accordance with this third theory. However, in the Discussion we will consider how the current results may be interpreted under different functional characterizations of the N400.

This top-down predictive theory readily captures the findings from a wide range of N400 studies (see Kutas & Federmeier, 2011, for review). N400s are strongly and linearly correlated with cloze probability, or the likelihood that a respondent can guess a correct word given the prior context (Kutas and Federmeier, 2011, Wlotko and Federmeier, 2012). The relationship between the N400 response and word predictability persists even when constraints from semantic association are controlled for, indicating that the cloze effects cannot be explained by lateral priming alone (Rabs et al., 2022).

In adults, the N400 is sensitive not only to sentence context, but also to the context created by the broader discourse in which the sentence occurs. For example, Van Berkum et al. (1999) had participants read sentences like [1], both in isolation and in contexts which made one of the final words more coherent than the other.

[1] Jane told the brother that he was exceptionally quick/slow.

In isolation, both completions produced similar responses. But in a richer discourse, the N400 was smaller for the continuation that was consistent with the story (e.g. “quick” in a story where the brother completes a task in a shorter time than had been predicted). In fact, Nieuwland and Van Berkum (2006) showed that a supportive context can even eliminate N400 effects to verb-object animacy violations. In a story where a peanut was a sentient character, capable of emotion, the sentence in [2a] elicited a smaller N400 than the sentence in [2b].

[2a] The peanut was in love.

[2b] The peanut was salted.

When words are embedded in a context, predictability effects on the N400 response are highly robust. In contrast, frequency effects are either limited to words that appear very early in a sentence or are entirely absent (Payne et al., 2015, Van Petten and Kutas, 1990). This pattern suggests that when words are presented without relevant context (as in word lists or at the beginning of a sentence) then lexical access, as indexed by the N400, primarily reflects bottom-up activation with little help from top-down cues, resulting in frequency effects. However, when words are presented in a richer context, top-down constraints are prioritized and word meanings are preactivated, minimizing the role of bottom-up cues (Degno et al., 2019, Kretzschmar et al., 2015, Kutas and Federmeier, 2011, Payne et al., 2015, Van Petten and Kutas, 1990).

Like adults, children access words incrementally: as the first sounds of a word are spoken, they begin activating possible lexical items and integrating those candidate words into their interpretation of the sentence (Fernald et al., 2001, Huang and Snedeker, 2011). For example, 5-year-olds, like adults, will shift their gaze to a picture of a candle upon hearing the first phonemes in “candy” but will favor the correct target after the second syllable. This ability to use bottom-up information in an incremental fashion emerges and improves in the second year of life (Fernald et al., 2001, Swingley, 2009, Swingley and Aslin, 2000). There is one way in which incremental processing is different in children than in adults: children consider competing lexical items for a longer time, suggesting that they struggle to inhibit lexical representations once they have been activated (Huang and Snedeker, 2011, McMurray et al., 2010, Sekerina and Brooks, 2007).

When words are presented in isolation, lexical access in children is influenced by many of the same variables as in adults, suggesting similar bottom-up processes (Cirrin, 1984, Plaut and Booth, 2000, Schröter and Schroeder, 2017, Smith et al., 2006, Walley, 1988). For example, six- to nine-year-old children, like adults, are slower in auditory lexical decision tasks if the words are lower in frequency or are acquired later (Cirrin, 1984). Similarly, the eye-tracking patterns of both adults (Kretzschmar et al., 2015, Staub, 2015) and children (Joseph et al., 2013, Tiffin-Richards and Schroeder, 2020) show robust effects of frequency, with more frequent words showing shorter first fixations and being skipped over more often. There is also ample evidence for semantic priming in children, providing prima facie support for models with lateral activation (see e.g. Friedrich and Friederici, 2006, Radeau, 1983, Rämä et al., 2013). For example, six- and seven-year-olds are faster to identify a word after hearing a different word from the same category (e.g. cat after dog) than they are to identify the same word after hearing an unrelated word (e.g. cat after chair) (Radeau, 1983). Semantic priming has been found in children as young as 18 months of age, both in looking time paradigms (Arias-Trejo & Plunkett, 2009) and in EEG studies (Friedrich and Friederici, 2006, Rämä et al., 2013).

In contrast, there are few studies that directly explore whether children use top-down constraints during lexical access. While a number of looking time studies (see e.g. Borovsky et al., 2012, Mani and Huettig, 2012, Nation et al., 2003) demonstrate that children can use context to shift their gaze to an upcoming referent (e.g. looking toward a cake after hearing “eat”), these studies do not distinguish between lexical prediction (pre-activating the word “cake”) and referential prediction (looking around for something edible). We know of only one study in children that directly and unambiguously assesses top-down lexical processing. Khanna and Boland (2010) had participants listen to sentences that ended with a homophone (e.g. tag) and then read aloud a target word (e.g. grab) that was related to one meaning of the homophone but not the other. Adults and children over 12 were sensitive to context: hearing tag facilitated reading grab when the preceding sentence picked out the semantically related interpretation (e.g. At recess the children played tag) but not when the context picked out the other interpretation (e.g. Jerry was bothered by the shirt’s tag). Critically, however, younger children (7–9 years) showed facilitation in both contexts, indicating that they had accessed both meanings of the word, failing to use the sentence context to constrain initial lexical processing. In contrast, when the context was reduced to a single word, even the 7–9 year olds activated only the correct meaning of the homophone (i.e., faster reading times for grab after hearing “laser tag” but not after hearing “shirt tag”).

This pattern of results suggests that, under ordinary circumstances, children younger than ten might have difficulty using the broader discourse context to guide lexical access. This could occur for two very different reasons. First, young children could have a language processing architecture that is broadly similar to that of adults, with top-down connections between levels that would, theoretically, allow for contextually guided lexical access; however, they could lack the linguistic skill to construct and use top-down representations quickly enough to guide lexical access. When the context is simplified (word pairs presented without additional context), constructing the top-down representation is easier and context effects emerge. Second, young children could have a processing system that is architecturally distinct from that of adults and older children. Specifically, the bottom-up connections and lateral connections might mature early (allowing for bottom-up processing and priming respectively) while the top-down pathways might take longer to mature. Such a developmental trajectory is not inconceivable. There is ample evidence that there are changes in the brain (synaptic pruning and myelination) throughout this developmental period and that these changes occur earlier in sensory areas and bottom-up pathways (see e.g. Gogtay et al., 2004, Huttenlocher and Dabholkar, 1997).

There are, however, good reasons to be cautious when drawing strong conclusions about top-down processing from this single data point. The task that Khanna and Boland (2010) used required simultaneous listening and reading. Seven- to nine-year-old children are just beginning to read and are likely to have more difficulty coordinating these tasks than older children and adults. Perhaps these demands interfered with the children’s ability, or motivation, to listen attentively to the sentence context. Studies using slower-paced, simpler paradigms have found that young children can make use of sentential context to determine the meaning of novel words (Goodman et al., 1998), to interpret noisy input (Cole and Perfetti, 1980, Newman, 2004), and to choose the relevant sense of an ambiguous word (Rabagliati et al., 2013).

In the most relevant of these studies, Rabagliati et al. (2013) asked four-year-olds to select the picture that matched the final word in a short vignette. Critical items contained words that primed the wrong meaning of the homophone given the sentence context (e.g. a story with a princess and a dragon that takes place at night, not knight). They found that children have some ability to override these lexical associations (they select the right meaning 39% of the time), but they make a surprisingly large number of errors (compared to adults), suggesting that sentential context is often overridden by mere association.

Further evidence for developmental differences in the ability to use top-down cues comes from studies that look at the effect of context on reading time in the absence of ambiguity. Reading times for words in isolated sentences vary depending on whether the words in context are plausible, implausible, or anomalous. Adults generally slow down at both anomalous and implausible words (Joseph et al., 2008, Rayner et al., 2004). This pattern, particularly the difference between implausible and plausible words, is consistent with the proposal that adults use top-down cues during lexical processing. Children between the ages of 7 and 12 also show slowdowns early in processing for anomalous words, but critically, they do not show early differences between plausible and implausible words (Joseph et al., 2008). A similar pattern appears when children read short, coherent discourses. Tiffin-Richards and Schroeder (2020) tracked the eye movements of adults and children (mean age 8.5) as they read short stories. They found that while adults show early and late facilitation of gaze based on contextual constraint, children show only late facilitation effects, suggesting that adult readers use context more efficiently to pre-activate lexical candidates.

In short, the evidence to date is consistent with the hypothesis that children under 12 readily make use of bottom-up cues and semantic association to constrain upcoming lexical items but are far less adept at using top-down cues from context. This same pattern appears in other linguistic processes (see Snedeker, 2013 for review). For example, while four- to six-year-olds will readily use bottom-up information such as intonation or lexical information to infer and disambiguate syntactic structure, they often fail to use higher level cues such as the plausibility of an event or the referential context in which the sentence is used (Kidd and Bavin, 2005, Kidd et al., 2011, Snedeker and Trueswell, 2004, Snedeker et al., 2009, Trueswell et al., 1999; Yacovone et al., 2021b).

Our understanding of lexical access in adults has been shaped and informed by research looking at the N400 ERP response. Although children show an N400-like response that seems to index lexical retrieval, we know far less about the variables that influence this effect, and thus, far less about lexical access in children. In infancy, N400 responses are characterized by a widely distributed negativity that is often delayed and prolonged relative to adults (Junge et al., 2021). By the age of 5, children’s N400 responses are much more adult-like, but remain somewhat larger and a bit more distributed, delayed, and prolonged (Atchley et al., 2006, Hahne et al., 2004, Holcomb et al., 1992). While there is some debate regarding the topography of the N400 response across development, most studies of school-aged children show semantic congruency effects which are maximal over centro-parietal electrode sites, similar to those of adults (Holcomb et al., 1992, Juottonen et al., 1996, but see Atchley et al., 2006 showing more anteriorly distributed N400s in children). Between ages 5 and 18 the observed N400 response gradually becomes more adultlike in magnitude, timing, and distribution (Holcomb et al., 1992).

The vast majority of N400 studies in children have focused on comparing responses to overt semantic disruptions: either semantically anomalous words in sentences, or a word that mismatches a prior word or picture (e.g. hearing “dog” after seeing or hearing “cup”). In the picture mismatch paradigm, N400-like responses appear as early as 12 months (Friedrich and Friederici, 2010, Lindau et al., 2017), becoming faster and more reliable over the second year of life (Friedrich and Friederici, 2004, Friedrich and Friederici, 2005, Mills et al., 2005). In the violation paradigm, N400-like responses have been observed in 19-month-olds (Friedrich & Friederici, 2005) as well as in toddlers and preschoolers (Silva-Pereyra et al., 2005).

These responses to violations and mismatches could reflect the use of top-down constraints in lexical prediction, however, they could also reflect passive lateral priming by related words, or even a reactive response to inconsistency. For example, a large N400 response to “He spread butter on his dog” relative to “He spread butter on his toast” could reflect prediction of the word toast given the context of the sentence, priming of the word toast from the word butter, or a response to the wild implausibility of buttering a dog.

There are two other lines of research on children’s N400 responses, but they do not fully resolve these questions. First, by 14 months of age, the N400 in children is affected by semantic priming such that primed words have smaller N400s than unprimed words (Friedrich and Friederici, 2005, Friedrich and Friederici, 2010, Sirri and Rämä, 2015, von Koss Torkildsen et al., 2007). This finding is compatible with two mechanisms: top-down prediction and lateral priming. In adults, the priming effects are modulated by the proportion of semantically related pairs in the study, favoring the top-down predictive account (de Groot, 1984). However, no parallel studies have been done in young children.

Second, Benau and colleagues explored how 10-year-old children and adults reacted to words that were congruent, mildly incongruent, or strongly incongruent in the context of the sentences in which they were presented (Benau et al., 2011). In adults, the magnitude of the response was modulated by the degree of semantic incongruity (congruent < mildly incongruent < strongly incongruent). Ten-year-olds showed a smaller N400 response for congruent words, but there was no difference in magnitude between the response for mildly incongruent and strongly incongruent words. It is unclear how this finding bears on our theory of lexical access in children or our theory of the N400 response. On the one hand, it could be interpreted as favoring a predictive account of children’s N400 responses (only when the prediction is met does the response decrease). On this interpretation, however, the adult response would either be entirely reactive, or would consist of two processes (a predictive lowering of the N400 for expected/congruent words and a reactive response to the strongly incongruent words). Alternatively, it could be interpreted as favoring a reactive account of incongruity, with children treating improbable and impossible events similarly. Such an interpretation would be consistent with research showing that children often view the unlikely as impossible (Shtulman & Carey, 2007). Critically, both interpretations rest on a null finding, in a population that typically produces noisier data. Furthermore, we have no data on how children under 10 would react to different levels of incongruity.

In sum, while children show ERP responses consistent with the N400, we do not yet know what constraints on lexical access are being captured by this response. To the best of our knowledge there is no work looking at whether the N400 in children is sensitive to word frequency or predictability per se. One aim of the current study is to determine if there is functional continuity in the N400 response in children and in adults.

In the current study we ask what sources of information influence lexical access, as indexed by the N400 response, by investigating the degree to which the N400 in adults and children reflects the frequency, semantic association, and top-down predictability of an uttered word. In a departure from prior work, we investigate lexical access in the rich, naturalistic context of a children’s story using single-trial ERP recordings.

Most EEG studies of language have used a traditional trial-based design, with many items in each condition, which are then averaged together. In these studies participants listen to, or read, a series of unrelated words or sentences, one after the next. To reduce the noise in the EEG signal, studies with adult participants often use 30–50 or more items per condition, resulting in experiments that can last for one to two hours (Luck, 2005). Adults typically comply with these demands. Children, however, are either less capable or less willing to sit still and attend to these long streams of disconnected stimuli. Researchers have adapted to the challenge of trial-based EEG testing in children by including distractor tasks (e.g. watching a silent video), reducing the number of items in each condition, tolerating more noise, and using designs with fewer conditions (typically 2) (Atchley et al., 2006, Benau et al., 2011, Pijnacker et al., 2017). As a result, many EEG studies with children are underpowered with minimal designs that leave many questions unanswered.

These designs also have an additional limitation. While trial-based EEG studies provide a time-sensitive index of lexical processing, they assess comprehension in one particular context (a list of disconnected sentences or words) which has little ecological relevance. Ultimately, we want to understand how children interpret words and sentences in real world contexts such as stories, conversations, or lessons.

The present study uses recent innovations in EEG methods to create a task that taps into children's intrinsic motivations and produces a much denser data set, resulting in approximately 25 times as many observations in a session as a trial-based study of the same length. The first critical difference is that we are using a single-trial ERP design. In this procedure, participants read or listen to sentences as ERP responses to every word are recorded (Brennan et al., 2016, Payne et al., 2015). Each word is coded for factors of interest (e.g. frequency or predictability) as they vary across the stimuli. These factors are then evaluated as continuous predictors of N400 amplitude. This allows for a second, critical difference in the method. Since single-trial ERPs are not restricted to a trial structure, these responses can be recorded in response to naturally occurring, meaningful language such as a story or dialog.
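In practice, this analysis amounts to regressing per-word ERP amplitudes on continuous word-level predictors. The sketch below shows the general shape of such an analysis using simulated data; the effect sizes, noise level, and predictor names are hypothetical and are not results from the present study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial data: one row per content word in the story.
# In a real analysis these predictors would come from corpus counts and
# a predictability measure, z-scored before model fitting.
n_words = 300
log_freq = rng.normal(0, 1, n_words)
predictability = rng.normal(0, 1, n_words)

# Hypothetical ground truth: more predictable and more frequent words
# yield smaller (less negative) N400 amplitudes, in microvolts, plus
# substantial single-trial EEG noise.
amplitude = (-2.0 + 0.3 * log_freq + 0.8 * predictability
             + rng.normal(0, 2.0, n_words))

# Design matrix with an intercept; ordinary least squares fit.
X = np.column_stack([np.ones(n_words), log_freq, predictability])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
intercept, b_freq, b_pred = beta
```

With enough words, the fitted coefficients recover the simulated effects despite the trial-level noise, which is what makes dense single-trial designs viable where averaged trial-based designs would require many repetitions per condition.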

Single-trial ERP studies with adults have replicated some of the data patterns found in traditional trial-based designs. For example, single trial designs have found that the N400 response is modulated both by low-level features such as word frequency and by higher order processes associated with lexical prediction (Payne et al., 2015). Computational models applied to single-trial ERP data have found that the N400 reflects measures of lexical predictability given the preceding context (e.g. Frank et al., 2015). Several studies have specifically argued that lexical access in adults takes into consideration top-down constraints from the hierarchical syntactic structure (Brennan et al., 2016, Fossum and Levy, 2012; but see Frank and Bod, 2011, Frank et al., 2015). While many of these trial-based designs use isolated sentences presented visually word-by-word, this method has also been applied to auditorily presented stories (Brennan et al., 2016).

In the current study, we adapt this natural story-listening paradigm for use with children. Participants listen to a story as ERPs time-locked to the onset of every word are recorded, allowing us to gather data from hundreds of trials in a task that is short and fun, making it suitable for both adults and children. Critically, this method allows us to study children's language comprehension with greater ecological validity than other temporally sensitive methods. By using natural narratives, we can explore how children use the rich cues provided in real discourse to guide comprehension in real time, using a task which is both familiar and relevant to their lives.

The present study explores whether the cues that five- to ten-year-old children use to anticipate and access upcoming words in naturalistic language input differ from those used by adults: looking first at the role of bottom-up features, specifically word frequency, then at lateral semantic activation, and finally top-down predictions based on context. While there is ample evidence that adults use this type of prediction during sentence comprehension, we know of no evidence that children of these ages do so.

We chose to focus on children of this age for three reasons. First, we suspected that five-year-old children would be able to listen to a story for at least 20 min without becoming restless. Second, in talking with parents, we confirmed that this is the age at which many of us begin reading chapter books to our children. Third, as we noted above, previous behavioral research has found that children across this age range are less apt to use top-down context to guide lexical access in sentence contexts: 7–9-year-olds activate contextually inappropriate meanings of homophones in sentence contexts (Khanna & Boland, 2010); 7–10-year-olds do not slow down when reading unpredictable words (Joseph et al., 2008, Tiffin-Richards and Schroeder, 2020); and 10-year-olds show reduced context sensitivity in EEG measures (Benau et al., 2011).

We will evaluate children’s ability to use contextual information during online sentence comprehension by seeing whether increasing use of context predicts the size of the N400 above and beyond models based on bottom-up activation alone. One possibility is that children’s lexical access is primarily driven by bottom-up constraints such as word frequency or semantic association, consistent with prior work showing a general difficulty using top-down constraints during online comprehension. On the other hand, it is possible that given a rich discourse and a natural task, both children and adults may be able to recruit top-down constraints to inform their comprehension. While this study was not designed to look for differences within the children, we also conducted exploratory analyses to see whether children at the younger end of the age range (∼5–7) differed from children at the older end (∼7–10).
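The "above and beyond" logic corresponds to a nested model comparison: fit a baseline model with only bottom-up predictors, add a context-based predictability term, and test whether the added term reliably reduces residual error. A minimal sketch with simulated data follows; all variable names, effect sizes, and the noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated word-level data: does adding a context-based predictability
# predictor improve the fit over bottom-up predictors alone?
n = 400
log_freq = rng.normal(0, 1, n)
semantic_assoc = rng.normal(0, 1, n)
predictability = rng.normal(0, 1, n)
n400 = (0.4 * log_freq + 0.3 * semantic_assoc + 0.7 * predictability
        + rng.normal(0, 2.0, n))

def rss(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

ones = np.ones(n)
X_bottom_up = np.column_stack([ones, log_freq, semantic_assoc])
X_full = np.column_stack([ones, log_freq, semantic_assoc, predictability])

rss0, rss1 = rss(X_bottom_up, n400), rss(X_full, n400)
# F statistic for one added predictor: large values indicate that
# predictability explains variance beyond the bottom-up baseline.
f_stat = (rss0 - rss1) / (rss1 / (n - X_full.shape[1]))
```

If children's N400 amplitudes behave like this simulation, the full model will fit reliably better than the bottom-up baseline; if their lexical access is driven by bottom-up constraints alone, adding predictability should yield no reliable improvement.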
