Patients with migraine were recruited through the Neurology Outpatient Clinic of the Chinese People’s Liberation Army (PLA) General Hospital. Two neurologists specializing in headache disorders managed the entire enrollment process. All patients with migraine met the following inclusion criteria: (1) a confirmed diagnosis of migraine without aura, as specified by the International Classification of Headache Disorders, 3rd edition; (2) being in the interictal phase, defined as the period between migraine attacks (> 48 h after a migraine episode and > 48 h before the next episode) [17]; and (3) a migraine history of at least one year. Exclusion criteria were the use of preventive migraine medication within the three months prior to the study and a diagnosis of other types of headache. Importantly, participants were not restricted to newly diagnosed or entirely medication-naïve individuals. Healthy participants without chronic pain or a family history of migraine were also recruited through advertisements posted on the hospital bulletin board. All participants were native Chinese speakers, were between 18 and 65 years old, were right-handed, and had normal or corrected-to-normal vision and hearing. Participants with a diagnosis of brain injury, psychiatric or neurodegenerative disease, or any chronic condition requiring daily medication were excluded. Written informed consent was obtained from all participants prior to the experimental procedures. The study was approved by the Ethics Committee of the Chinese PLA General Hospital (2023-460) in accordance with the ethical principles of the Declaration of Helsinki.
A total of 70 patients with migraine and 40 healthy participants were initially recruited; a larger patient sample was planned because previous research indicated a high dropout rate [18, 19]. Follow-up assessments were conducted after the experiment to retrospectively identify participants in the interictal phase. Specifically, two patients and two healthy controls did not complete all tasks and withdrew from the experiment, and one patient was retrospectively identified as preictal and excluded from the analysis. Nine patients and two healthy participants were excluded due to poor task comprehension, and six patients and two healthy participants were excluded due to a poor signal-to-noise ratio of their EEG data. Consequently, 52 patients with migraine and 34 healthy participants were included in the final analyses. Patients (11 males) and healthy participants (11 males; χ²(1) = 1.354, p = 0.245) were matched in terms of sex ratio.
Clinical questionnaires
The clinical characteristics of patients with migraine were documented (see Table 1). Patients with migraine completed the Allodynia Symptom Checklist (ASC), Headache Impact Test-6 (HIT-6), and Migraine Disability Assessment (MIDAS) to evaluate the severity of their headaches and the impact on daily life. Given that mood states can influence attention, all participants were required to complete the Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder-7 (GAD-7), and Perceived Stress Scale-14 (PSS-14) to assess levels of depression, anxiety, and stress. The Pittsburgh Sleep Quality Index (PSQI) was administered to compare sleep quality between patients with migraine and healthy controls.
Table 1 Demographics and clinical characteristics

ANTI-Vea task
A modified ANTI-Vea task [16] was employed in this study. The task consisted of two trial types: Type-1 trials, which assessed the phasic alerting, orienting, executive control, and executive vigilance networks, and Type-2 trials, which assessed the arousal vigilance network (see Fig. 1). In total, the task included 384 trials, divided into two blocks. All trials were presented in a pseudo-randomized order across participants.
Fig. 1 The ANTI-Vea task. The task comprises two trial types: 320 Type-1 trials and 64 Type-2 trials. (A) Type-1 trials, measuring the phasic alerting, orienting, executive control, and executive vigilance networks by contrasting different combinations of warning stimuli, cue stimuli, and targets; (B) warning stimulus condition, evaluating phasic alerting (trials with vs. without a warning stimulus); (C) cue stimulus condition, evaluating orienting (trials with valid vs. invalid cues); (D) target stimulus condition presented in 80% of Type-1 trials, requiring a ‘d’ or ‘j’ key press for a left- or right-pointing middle arrow, respectively, evaluating executive control (congruent vs. incongruent targets); (E) target stimulus condition presented in 20% of Type-1 trials, requiring participants to ignore the arrow orientation and press the space bar, evaluating executive vigilance; (F) Type-2 trials, requiring participants to press any key as quickly as possible, measuring arousal vigilance. PA, phasic alerting; EC, executive control; EV, executive vigilance; AV, arousal vigilance
Type-1 trials
Type-1 trials consisted of 320 trials and began with a white fixation cross on a black screen for 400–1600 ms. Subsequently, a 50 ms warning tone (838 Hz) was presented in half of the trials to induce phasic alerting, while the fixation cross remained visible. After 350 ms, a visual cue (no cue, double cue, upper cue, or lower cue) appeared for 50 ms, followed by a 50 ms fixation cross. The target stimulus, a row of five arrows, then appeared either above or below the fixation cross (50% each) for 200 ms. In 80% of the Type-1 trials, participants were instructed to determine the orientation of the middle arrow as quickly and accurately as possible by pressing either the ‘d’ key (left) or the ‘j’ key (right) with their index fingers while ignoring the flanking arrows. In the remaining 20% of the Type-1 trials, the middle arrow was presented off-center, and participants were asked to disregard its orientation and promptly press the space bar. The inter-trial interval was 2000–3000 ms. All combinations of warning stimuli, cue stimuli, and target stimuli are presented in Fig. S1, and the trial timeline is summarized in the sketch below. Participants’ RT and accuracy (ACC) were recorded.
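For clarity, the Type-1 trial timeline can be summarized as a timing table. The following is a minimal sketch based solely on the durations described above; the dictionary and its field names are illustrative and do not come from the original task code.

```python
# Illustrative timing parameters for a Type-1 trial (ms); ranges are sampled anew on each trial.
TYPE1_TIMING = {
    "fixation": (400, 1600),            # initial white fixation cross on a black screen
    "warning_tone": 50,                 # 838 Hz tone, presented in 50% of trials
    "tone_to_cue_interval": 350,        # delay between the warning tone and the visual cue
    "visual_cue": 50,                   # no cue, double cue, upper cue, or lower cue
    "post_cue_fixation": 50,            # fixation cross between cue offset and target onset
    "target": 200,                      # row of five arrows, above or below fixation (50% each)
    "inter_trial_interval": (2000, 3000),
}
```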
Type-2 trials
Type-2 trials consisted of 64 trials and started with a white fixation cross in the center of a black screen, lasting for 800–2000 ms, followed by a 5-s countdown task. Participants were required to press any button as quickly as possible to stop the countdown, and the RT for each trial was recorded. The inter-trial interval was 1000–2000 ms.
Behavioral variables
General performance on Type-1 trials was evaluated using the inverse efficiency score (IES), calculated as RT/ACC, which reflects overall energy consumption during the task [20]. Phasic alerting was assessed by comparing trials with and without warning tones, as well as by contrasting double-cue trials with no-cue trials. The orienting function was evaluated by comparing trials with valid cues, which correctly indicated the upcoming target location, to those with invalid cues. Executive control was estimated by contrasting trials in which the middle arrow pointed in the opposite direction (i.e., incongruent) vs. the same direction (i.e., congruent) as the flanking arrows. RT and ACC were calculated to estimate the effectiveness of each attentional function (see Table 2).
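To make the behavioral indices concrete, the sketch below shows how the IES and the RT-based network effects could be computed from trial-level data. It assumes a pandas DataFrame with hypothetical columns (rt, correct, and one condition column per contrast); the column names and contrasts are illustrative, not the original analysis code.

```python
import pandas as pd

def inverse_efficiency_score(df: pd.DataFrame) -> float:
    """IES = mean RT of correct responses divided by accuracy (proportion correct)."""
    acc = df["correct"].mean()
    return df.loc[df["correct"] == 1, "rt"].mean() / acc

def network_effect(df: pd.DataFrame, col: str, baseline: str, comparison: str) -> float:
    """RT cost: mean correct-trial RT in the comparison condition minus the baseline condition."""
    correct = df[df["correct"] == 1]
    return (correct.loc[correct[col] == comparison, "rt"].mean()
            - correct.loc[correct[col] == baseline, "rt"].mean())

# Illustrative contrasts (hypothetical condition labels):
# phasic alerting:   network_effect(df, "warning", baseline="tone", comparison="no_tone")
# orienting:         network_effect(df, "cue", baseline="valid", comparison="invalid")
# executive control: network_effect(df, "congruency", baseline="congruent", comparison="incongruent")
```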
Table 2 Features and main results for each attentional system

Executive vigilance decrement was assessed by analyzing the hit rate (executive vigilance-hit) and correct rejection rate (executive vigilance-CR) over time: the first 15 executive vigilance trials (time 1) vs. the last 15 executive vigilance trials (time 2) [16]. By contrast, arousal vigilance decrement was measured using the mean RT (arousal vigilance-RT) and intra-individual RT variability (arousal vigilance-IIRTV; calculated as the standard deviation of RT divided by the mean RT [21]) across relevant trials over time (the first 15 vs. the last 15 arousal vigilance trials), as well as under two arousal vigilance states (the fastest 15 arousal vigilance trials, i.e., the high arousal vigilance state, vs. the slowest 15 arousal vigilance trials, i.e., the low arousal vigilance state) [16, 22].
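A minimal sketch of the vigilance measures defined above, assuming trial-ordered numpy arrays (responses coded 1/0 and RTs in ms); all function and variable names are illustrative.

```python
import numpy as np

def hit_rate(hits: np.ndarray) -> float:
    """Proportion of detected executive vigilance targets (1 = hit, 0 = miss)."""
    return float(hits.mean())

def iirtv(rts: np.ndarray) -> float:
    """Intra-individual RT variability: standard deviation of RT divided by mean RT [21]."""
    return float(rts.std(ddof=1) / rts.mean())

def time_blocks(values: np.ndarray, n: int = 15):
    """Vigilance decrement: the first n trials (time 1) vs. the last n trials (time 2)."""
    return values[:n], values[-n:]

def arousal_states(rts: np.ndarray, n: int = 15):
    """Arousal vigilance states: the n fastest (high state) vs. the n slowest (low state) trials."""
    ordered = np.sort(rts)
    return ordered[:n], ordered[-n:]
```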
Data acquisition and analyses
After completing the clinical questionnaires, participants underwent EEG preparation and then completed the ANTI-Vea task with continuous EEG recording. Both the migraine and control groups were examined by the same examiner in the same room, with fully standardized instructions and procedures to ensure consistency in data collection. To minimize bias, group labels were blinded during data preprocessing and feature extraction, before conducting group analyses.
EEG recording and preprocessing
EEG data were recorded using 64 Ag-AgCl scalp electrodes placed according to the International 10–20 System (Compumedics Neuroscan; sampling rate: 1000 Hz; online reference: average). All electrode impedances were kept below 10 kΩ.
EEG signals were preprocessed using the open-source toolbox EEGLAB [23], running in the MATLAB environment (MathWorks, USA). Continuous EEG data were filtered with a 0.1–60 Hz band-pass filter, and a 49–51 Hz notch filter was applied to remove 50 Hz powerline interference. EEG epochs were extracted using a time window of 3000 ms (1000 ms before and 2000 ms after the onset of the events of interest) and baseline-corrected using the prestimulus interval. Specifically, the events of interest were the warning tones for Type-1 trials and the countdown onsets for Type-2 trials. All epochs were detrended to remove polynomial trends, and those contaminated with eyeblinks, movements, or other artifacts were corrected using an independent component analysis algorithm [23].
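Although preprocessing was carried out in EEGLAB, the core filtering and epoching steps can be illustrated with a small numpy/scipy sketch (detrending and ICA-based artifact correction are omitted). The filter design, function names, and array layout are assumptions made for illustration, not the original pipeline.

```python
import numpy as np
from scipy import signal

FS = 1000  # Hz, sampling rate

def filter_continuous(raw: np.ndarray) -> np.ndarray:
    """Band-pass 0.1-60 Hz and notch-filter 50 Hz powerline noise (raw: channels x samples)."""
    b, a = signal.butter(4, [0.1, 60], btype="bandpass", fs=FS)
    bandpassed = signal.filtfilt(b, a, raw, axis=-1)
    b_notch, a_notch = signal.iirnotch(50, Q=30, fs=FS)
    return signal.filtfilt(b_notch, a_notch, bandpassed, axis=-1)

def extract_epochs(data: np.ndarray, onsets: np.ndarray) -> np.ndarray:
    """Cut -1000 to +2000 ms epochs around event onsets and subtract the prestimulus mean."""
    pre, post = FS, 2 * FS
    epochs = np.stack([data[:, o - pre:o + post] for o in onsets])
    baseline = epochs[:, :, :pre].mean(axis=-1, keepdims=True)
    return epochs - baseline
```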
Frequency domain analyses
Prestimulus data (-1000 to 0 ms) were extracted and transformed to the frequency domain using a periodogram estimate at the single-trial level to obtain the power spectral density (PSD) at each frequency point within 1–60 Hz. Spectra at each electrode were averaged across all trials at the single-subject level and then averaged across subjects within each group. The spectrum was divided into the following bands: δ (1–3 Hz), θ (4–7 Hz), α (8–13 Hz), β (14–30 Hz), and γ (31–60 Hz) [24]. To investigate prestimulus attentional processes and movement preparation [25,26,27,28], the PSD for each frequency band was calculated as the average within the respective band and compared between the two groups in regions of interest: δ- and θ-bands at the frontal cortex (Fz electrode); α-, β-, and γ-bands at the sensorimotor cortex (Cz electrode) and parietal cortex (Pz electrode); and α- and γ-bands at the occipital cortex (Oz electrode).
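The single-trial prestimulus PSD estimation and band averaging can be sketched as follows (periodogram per trial for one electrode, then averaging within the bands defined above); names and array shapes are illustrative.

```python
import numpy as np
from scipy.signal import periodogram

FS = 1000
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13), "beta": (14, 30), "gamma": (31, 60)}

def band_psd(prestim: np.ndarray) -> dict:
    """prestim: trials x samples (-1000 to 0 ms) for one electrode.
    Returns the trial-averaged PSD, averaged within each frequency band."""
    freqs, psd = periodogram(prestim, fs=FS, axis=-1)   # psd: trials x frequencies
    mean_psd = psd.mean(axis=0)
    return {band: float(mean_psd[(freqs >= lo) & (freqs <= hi)].mean())
            for band, (lo, hi) in BANDS.items()}
```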
Time domain analyses
Preprocessed EEG signals were further filtered with a 30 Hz low-pass filter and averaged across trials in the time domain, yielding event-related potential (ERP) waveforms for each condition. In line with the behavioral measurements, general ERP waveforms were obtained by averaging all Type-1 trials, and the 0–800 ms ERP was extracted from Cz, where the significant inter-group differences were most pronounced. ERP waveforms associated with phasic alerting, orienting, and executive control were generated by averaging Type-1 trials with and without warning tones, with valid and invalid cues, and with congruent and incongruent targets, respectively. In addition, ERP waveforms associated with executive vigilance were obtained by averaging the first and the last 15 executive vigilance trials. ERP waveforms associated with arousal vigilance were obtained by averaging the first and the last 15 arousal vigilance trials as well as the fastest and the slowest 15 arousal vigilance trials. Following previous work and the representative regions for specific components [16, 29–31], the features extracted for each attentional system are presented in Table 2.
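A minimal sketch of the ERP extraction (30 Hz low-pass followed by trial averaging); the filter design and names are illustrative.

```python
import numpy as np
from scipy import signal

FS = 1000

def erp(epochs: np.ndarray) -> np.ndarray:
    """epochs: trials x samples for one electrode and condition.
    Low-pass at 30 Hz, then average across trials to obtain the ERP waveform."""
    b, a = signal.butter(4, 30, btype="low", fs=FS)
    return signal.filtfilt(b, a, epochs, axis=-1).mean(axis=0)

# e.g., erp(congruent_trials) vs. erp(incongruent_trials) for the executive control contrast
```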
Time-frequency domain analyses
Time-frequency domain analyses were conducted on all Type-1 trials. Each Type-1 trial was transformed to the time-frequency domain using a short-time Fourier transform with a 400 ms window to calculate the PSD for each time-frequency point at each electrode. The PSD for each time-frequency point was then baseline-corrected by subtracting the averaged PSD across the baseline time window (-800 to -200 ms) at the corresponding frequency point. Event-related synchronization/desynchronization (ERS/ERD) related to motor preparation and execution (i.e., γ-ERS [500 to 800 ms, corresponding to the first 300 ms after target onset] and γ-ERD [850 to 1150 ms, corresponding to 350–650 ms after target onset] at the Cz electrode [32]) and visual processing (i.e., α-band activity [0 to 2000 ms] at the primary visual area [33]) were compared between patients and controls.
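A minimal sketch of the STFT-based time-frequency decomposition and baseline correction described above (400 ms window; epochs assumed to start 1000 ms before the warning tone); parameter choices such as the window overlap are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

FS = 1000

def tf_power(epoch: np.ndarray):
    """Single-trial PSD at each time-frequency point via a short-time Fourier transform."""
    freqs, times, z = stft(epoch, fs=FS, nperseg=400, noverlap=350)
    return freqs, times, np.abs(z) ** 2          # power: frequencies x time bins

def baseline_correct(power: np.ndarray, times: np.ndarray, epoch_start: float = -1.0) -> np.ndarray:
    """Subtract the mean PSD within the -800 to -200 ms baseline at each frequency."""
    t = times + epoch_start                      # time relative to the warning tone (s)
    mask = (t >= -0.8) & (t <= -0.2)
    return power - power[:, mask].mean(axis=1, keepdims=True)
```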
Statistical analysis

Basic analysis
Considering the unequal sample sizes of the two groups and the non-normal distribution of the data, all dependent variables were analyzed using non-parametric methods. Group differences in participant demographics and migraine characteristics were compared using the Mann-Whitney U test (e.g., age) or the chi-squared (χ²) test (e.g., sex ratio). Age and educational level were included as covariates in the following analyses of behavioral and EEG data. A generalized linear model (GLM) was used to estimate group differences, and a generalized estimating equation (GEE) was used to assess the main effects and interaction of two independent variables (e.g., group × time for executive vigilance). Pairwise comparisons were performed when an interaction was significant. The regression coefficient (B) was reported to reflect effect size. p < 0.05 was considered statistically significant. Bonferroni correction was applied for multiple comparisons when necessary.
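As an illustration of the GEE analysis, the sketch below fits a group × time model with age and education as covariates using statsmodels. The formula, covariance structure, outcome variable, and file name are assumptions for demonstration and do not reproduce the exact model specification used in the study.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time block,
# with columns 'hit_rate', 'group', 'time', 'age', 'education', and 'subject'.
df = pd.read_csv("executive_vigilance_long.csv")

model = smf.gee(
    "hit_rate ~ group * time + age + education",   # group x time interaction plus covariates
    groups="subject",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())   # coefficients (B), standard errors, and p-values
```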
Spearman rank correlations were estimated, separately for each attentional function, between the features assessing that function. Additionally, classification and regression models were established to test the potential diagnostic and monitoring value of the attentional dysfunctions revealed by the present study.
Classification modeling to distinguish patients from healthy controls
For binary classification, we developed a model based on extreme gradient boosting (XGB [34, 35]), which performs well on small and imbalanced datasets. Grid search with leave-one-out cross-validation (LOOCV) was used to obtain the optimal parameters of the XGB classifier. Specifically, in each iteration, one participant’s data were reserved as the validation set, while the remaining participants’ data constituted the training set. A classification model was trained to predict the outcome for the validation data, and this process was repeated until every participant had served as the validation data. The model parameters that yielded the highest mean F1 score and mean classification accuracy were identified, and the corresponding precision and recall values were reported.
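The sketch below illustrates a grid search with LOOCV using xgboost and scikit-learn. The feature files, the parameter grid, and the choice to compute F1/accuracy on the aggregated leave-one-out predictions are simplifying assumptions, not the study’s exact settings.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from xgboost import XGBClassifier

# X: participants x features (behavioral/EEG indices); y: 1 = migraine, 0 = healthy control
X, y = np.load("features.npy"), np.load("labels.npy")      # hypothetical files

loo, best = LeaveOneOut(), None
for depth, lr in product([2, 3, 4], [0.05, 0.1, 0.3]):      # small illustrative grid
    clf = XGBClassifier(max_depth=depth, learning_rate=lr, n_estimators=100,
                        eval_metric="logloss")
    pred = cross_val_predict(clf, X, y, cv=loo)             # one held-out participant per fold
    f1 = f1_score(y, pred)
    if best is None or f1 > best["f1"]:
        best = {"f1": f1, "accuracy": accuracy_score(y, pred),
                "precision": precision_score(y, pred), "recall": recall_score(y, pred),
                "max_depth": depth, "learning_rate": lr}

print(best)
```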
Regression modeling to predict clinical characteristics of patients
XGB regression was performed to predict clinical characteristics (e.g., headache duration in the past three months). The individual prediction values for each clinical characteristic were calculated using LOOCV. To assess the performance of the regression model, the Spearman rank correlation was calculated between the real and predicted values for each clinical characteristic.
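Analogously, a sketch of the regression step: leave-one-out predictions from an XGB regressor are compared with the observed clinical values via Spearman correlation. The feature and outcome files, as well as the hyperparameters, are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from xgboost import XGBRegressor

# X: patients x features; y: a clinical characteristic, e.g. headache duration in the past three months
X, y = np.load("features_patients.npy"), np.load("headache_duration.npy")   # hypothetical files

reg = XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1)
y_pred = cross_val_predict(reg, X, y, cv=LeaveOneOut())      # individual predictions via LOOCV

rho, p = spearmanr(y, y_pred)   # model performance: correlation between real and predicted values
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```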