Neuro-cognitive models of single-trial EEG measures describe latent effects of spatial attention during perceptual decision making

In daily life, we frequently encounter environments in which we must make fast decisions based on ambiguous, low-coherence sensory information ([2], [61], [62]). Perceptual tasks are useful for studying decision making because they allow precise control over the quantity and quality of sensory information, as well as manipulation of latent effects on response time and accuracy, such as visual attention ([27]). One of the most important factors influencing this decision process is top-down spatial attention. In everyday life, top-down prioritized factors such as knowledge, expectation, and current goals can control visual attention ([13], [58]). In spatial prioritization tasks, used to explore the effects of top-down visual attention, a cue (e.g. an arrow) is often presented on some experimental trials to instruct participants to attend covertly, without saccadic eye movements or head movements, to a location in the periphery of the visual field ([54], [60]). These experiments translate directly to everyday experiences. For instance, when driving, expectations about the location of traffic lights help drivers more quickly execute the choice to stop or accelerate.

Visual perceptual decision making immediately after a visual stimulus is thought to involve two classes of processes: decision processes, comprising an accumulation of evidence toward decision choices and/or urgency signals, and non-decision time (NDT) processes, thought to contain at least visual encoding time (VET) and motor execution time (MET) ([59], [62], [58], [8], [16], [51]), although non-decision time could contain any process that involves neither evidence accumulation nor urgency. Top-down spatial cues are known to improve behavioral performance and modulate neural mechanisms ([92], [54], [64]). But while spatial prioritization is well studied, traditionally proposed models could not simultaneously identify separate effects of spatial prioritization on VET, evidence accumulation, and other non-decision time components such as MET. In this study, we find evidence that spatial prioritization affects VET and other non-decision time components.

Two-alternative forced-choice tasks are described well by sequential sampling models. These models assume individuals accumulate information until sufficient evidence is reached for one of two choices, typically conceptualized as hitting an upper or lower boundary ([62]). Sequential sampling models often contain parameters with cognitive interpretations. These parameters can be compared across experimental conditions to understand cognitive effects, as well as compared directly to neural measures. For instance, researchers revealed that manipulating the difficulty of the stimuli specifically increases decision time by decreasing the drift rate, a parameter that tracks the average rate of evidence accumulation within a trial ([56], [28], [62]).
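As a concrete illustration of the boundary-crossing account above, the sketch below simulates a simple two-boundary diffusion process with an additive non-decision time; this is a generic textbook simulation, not the model fit in this study, and all parameter names and values are hypothetical.

```python
import numpy as np

def simulate_ddm(n_trials, drift, boundary, ndt, dt=0.001, noise_sd=1.0, seed=0):
    """Simulate a drift-diffusion process with symmetric boundaries.

    Evidence starts at 0 and drifts with mean rate `drift` until it crosses
    +boundary (upper choice, coded 1) or -boundary (lower choice, coded 0);
    the non-decision time `ndt` is added to each first-passage time.
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Euler step of the diffusion: deterministic drift plus Gaussian noise
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = ndt + t
        choices[i] = 1 if x >= boundary else 0
    return rts, choices

# Hypothetical parameters: positive drift favors the upper (correct) boundary
rts, choices = simulate_ddm(n_trials=500, drift=1.5, boundary=1.0, ndt=0.35)
```

With a positive drift rate, most simulated choices hit the upper boundary, and every response time exceeds the non-decision time, mirroring the RT = NDT + decision-time decomposition described above.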

Event-Related Potentials (ERPs) are averages of EEG across experimental trials, time-locked to specific events such as the onset of a stimulus or the execution of a response (e.g. a button press). Although early ERP responses from the brainstem can appear within a few milliseconds of auditory stimulus onset, ERP responses from the primary visual cortex take approximately 40–60 ms ([45]). Visual evidence received by the retina must pass through the LGN to reach the primary visual cortex, where the information is preprocessed, decoded, and prepared before further cognitive use ([31]). Part of this process is target selection ([44]), while another component is figure-ground segregation. The time course of figure-ground segregation is thought to depend on visual elements such as distractors within the stimuli and low stimulus coherence ([41], [51]). We define the time between stimulus onset and the beginning of the accumulation process as Visual Encoding Time (VET). VET is assumed to occur before evidence can be accumulated during decision making, although some research suggests that evidence accumulation and motor planning can occur in parallel rather than sequentially ([68], [15], [84]).
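The averaging operation that defines an ERP can be sketched directly. The example below builds synthetic stimulus-locked epochs containing a negative deflection near 200 ms buried in trial-to-trial noise (all channel counts, amplitudes, and noise levels are hypothetical) and recovers the deflection by averaging across trials.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                        # sampling rate (Hz); hypothetical
times = np.arange(500) / fs      # 0-499 ms post-stimulus

# Synthetic evoked response: a negative deflection peaking at 200 ms
template = -3.0 * np.exp(-0.5 * ((times - 0.2) / 0.02) ** 2)

# 100 stimulus-locked trials: the same template plus large single-trial noise
epochs = template + rng.normal(scale=5.0, size=(100, times.size))

# The ERP is simply the across-trial average, time-locked to stimulus onset
erp = epochs.mean(axis=0)
peak_latency_ms = times[np.argmin(erp)] * 1000
```

Averaging attenuates the noise by roughly the square root of the trial count, so the deflection that is invisible on single trials becomes the dominant feature of the ERP, with its minimum near 200 ms.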

VET is expected to finish between 150 ms and 225 ms after stimulus onset ([74], [82], [51], [34]). For instance, when monkeys were trained to report the coherent direction of motion in a random dot motion task with a saccadic eye movement, groups of neurons in the lateral intraparietal cortex (LIP) were found to represent evidence accumulation for a saccadic choice ([63]). This neural evidence accumulation typically begins around 200 ms after the onset of random dot motion stimuli ([63], [37], [69]). Furthermore, participants in a go/no-go task required about 150 ms to process visual stimuli after stimulus onset ([74]).

Loughnane et al. [44] identified two pairs of N200 ERP latencies: the peak times of negative deflections occurring at temporo-occipital electrodes around 200 ms after changes in bi-hemispheric visual stimuli. The authors showed that N200 latencies predict the onset of sensory evidence accumulation during a random dot motion task. Research has shown that N200 waveforms at posterior electrodes between 200 and 350 ms after stimulus onset reflect early visual information ([17]). Recently, Nunez et al. [51] presented initial evidence of a one-to-one (1 ms to 1 ms) relationship between N200 latencies and VET. This evidence was derived from relationships between observed N200 latencies and estimated non-decision times related to figure-ground segregation before the accumulation process during a perceptual decision making task.
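A single-trial N200 peak latency, as described above, amounts to locating the most negative deflection within a window around 200 ms. The sketch below shows one minimal version of this on a synthetic trial; the window bounds, dip latency, and noise level are hypothetical choices for illustration, not the values used in the cited studies.

```python
import numpy as np

def n200_latency(epoch, times, window=(0.125, 0.275)):
    """Return the latency (s) of the most negative sample within a window.

    `epoch` is one trial's EEG at a posterior channel; the hypothetical
    window loosely brackets the expected timing of the N200 deflection.
    """
    idx = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    return times[idx[np.argmin(epoch[idx])]]

# One synthetic trial: an N200-like dip at 190 ms plus additive noise
fs = 1000
times = np.arange(500) / fs
rng = np.random.default_rng(2)
trial = (-4.0 * np.exp(-0.5 * ((times - 0.19) / 0.015) ** 2)
         + rng.normal(scale=0.5, size=times.size))
latency = n200_latency(trial, times)
```

Restricting the search to a window is what makes the measure usable on noisy single trials: outside the window, noise excursions can easily exceed the deflection of interest.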

Over the last few years, researchers have begun to use neuro-cognitive modeling to examine the underlying cognition and neural correlates of the decision making process simultaneously. These models allow consideration of underlying connections between cognitive model parameters and brain dynamics. Neuro-cognitive joint modeling is thought to be the most powerful technique for linking the electrophysiological dynamics of the brain across experimental trials to cognition and associated cognitive model parameters ([57]). This research has resulted in models that join single-trial electroencephalography (EEG) measures and individuals' behavioral performance to make inferences about underlying states of the brain and behavior.

Three main model-fitting techniques have been used to relate unobserved brain and cognitive latent parameters: Directed, Integrative, and Covariance approaches ([80], [77], [78]). In the Directed approach, neural parameters are assumed to constrain and predict cognitive latent parameters directly, and not vice versa ([52], [89], [19]). This approach is easy to understand and straightforward to implement, and it is the one we employed in this paper, in addition to direct simulations of the possible data-generating models. The method allows us to "regress" one or more cognitive latent variables against single-trial EEG measures. We used this model-fitting procedure within a hierarchical Bayesian framework to estimate latent variables at the individual and group levels. Integrative approaches constrain and predict both cognitive and neural EEG parameters simultaneously ([55], [53], [25]); in this approach, neural and behavioral data are described by a set of shared parameters. Covariance approaches assume that the cognitive and neural parameters are constrained by a shared statistical distribution, such as a multivariate normal distribution ([79], [80], [55]). In general, the Directed approach assumes a one-way relationship between the brain and behavior, while both the Integrative and Covariance approaches assume a two-way relationship. Properly accounting for single-trial EEG data in Integrative and Covariance models is difficult ([25]), which is why we chose to use Directed models in this work, in conjunction with simulations.
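The core "regression" idea of the Directed approach can be sketched without the full hierarchical Bayesian machinery: generate trials whose non-decision time depends linearly on a single-trial EEG measure, then recover the linking parameters by least squares. All quantities below (latency distribution, intercept, slope, noise) are hypothetical values chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 2000

# Hypothetical single-trial N200 latencies (s), centered near 200 ms
n200 = rng.normal(loc=0.2, scale=0.02, size=n_trials)

# Directed assumption: NDT = intercept + slope * N200 + noise.
# A slope near 1 would correspond to a one-to-one latency/NDT relationship.
true_intercept, true_slope = 0.15, 1.0
ndt = true_intercept + true_slope * n200 + rng.normal(scale=0.01, size=n_trials)

# Least-squares recovery of the linking parameters (design matrix [1, n200])
X = np.column_stack([np.ones(n_trials), n200])
intercept_hat, slope_hat = np.linalg.lstsq(X, ndt, rcond=None)[0]
```

In the actual Directed models, the regression is embedded in a hierarchical Bayesian framework so that the linking parameters are estimated jointly with the decision-model parameters at the individual and group levels, but the one-way brain-to-behavior direction of the relationship is the same.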

Using Directed single-trial EEG analysis, it has been shown that P200 measures after visual noise and N200 measures after visual stimuli (i.e. positive and negative deflections around 200 ms) could delineate single-trial visual attention effects on evidence accumulation and non-decision times ([52]). Nunez et al. [51], [52], [53] proposed neuro-cognitive hierarchical models of decision making to separate measures of the non-decision process by fitting models to both brain electrophysiology (EEG) and human response times and choices. These model parameters can then be related to visual attention as enforced by experimental paradigms.

In our previous work, we showed that non-decision time is affected by top-down spatial attention in a face-car perceptual decision-making task ([26]). In the current study, we hypothesized that single-trial N200 peak latencies would reveal the effects of spatial attention on the non-decision process across experimental conditions and participants. We then sought to determine whether spatial prioritization influences VET, other non-VET non-decision times, or both during perceptual decision making. Using a public dataset from a face-car perceptual decision-making task, we used singular value decomposition (SVD) to extract single-trial N200 latencies for all conditions (two levels of spatial prioritization and two levels of visual coherence) and then applied neuro-cognitive modeling to find associations between the Drift–Diffusion Model (DDM) parameters and the N200 latency on each experimental trial. We constructed hierarchical models to identify which components of NDT are most influenced by top-down spatial cues and then conducted model comparisons informed by a simulation study. We found evidence that spatial prioritization affects non-decision time processes in addition to VET, while not affecting the decision process itself.
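The SVD step mentioned above can be illustrated schematically: an SVD of the trial-averaged ERP in the N200 window yields spatial channel weights for the dominant component, and projecting each single trial onto those weights gives a cleaned waveform from which a single-trial latency can be read off. The sketch below uses synthetic data with an invented topography and latency jitter; it is a simplified stand-in for the actual preprocessing pipeline, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_chan, n_samp = 200, 8, 151      # hypothetical: 125-275 ms at 1000 Hz
times = 0.125 + np.arange(n_samp) / 1000.0

# Synthetic data: a fixed spatial pattern times an N200-like negative waveform,
# with per-trial latency jitter, plus channel noise
topo = rng.normal(size=n_chan)
topo /= np.linalg.norm(topo)
true_lat = rng.normal(0.2, 0.01, size=n_trials)
epochs = np.empty((n_trials, n_chan, n_samp))
for i in range(n_trials):
    wave = -np.exp(-0.5 * ((times - true_lat[i]) / 0.02) ** 2)
    epochs[i] = np.outer(topo, wave) + rng.normal(scale=0.1, size=(n_chan, n_samp))

# SVD of the trial-averaged ERP gives spatial weights (first left singular vector)
erp = epochs.mean(axis=0)                   # channels x time
u, s, vt = np.linalg.svd(erp, full_matrices=False)
weights = u[:, 0]

# Project every trial onto the weights; the sign of an SVD component is
# arbitrary, so flip if needed to make the deflection negative
components = np.einsum('c,tcs->ts', weights, epochs)
if components.mean() > 0:
    components = -components
latencies = times[np.argmin(components, axis=1)]
```

The spatial projection boosts the signal-to-noise ratio of each trial enough that the minimum of the projected waveform tracks the trial's true latency, which is what makes single-trial N200 latencies usable as regressors in the neuro-cognitive models.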
