A Brain-Controlled Vehicle System Based on Steady State Visual Evoked Potentials

This section describes the materials and methods adopted for the brain-controlled vehicle (BCV) system. The experimental vehicle is described in the “The Experimental Vehicle” section. The “Strategy of the Human-Vehicle Cooperative Driving System” section then introduces the architecture of the human-vehicle cooperative driving system, which combines the BCV mode with intelligent driving assistance technology. How the EEG signals are acquired and processed is detailed in the “SSVEP-Based BCI” section. The “Laser Ranging Obstacle Detection” and “Communication System” sections describe the obstacle detection system on the BCV and the communication system between the computer processing terminal and the BCV, respectively. Finally, the experiments are described in the “Experiments of Brain-Controlled Vehicle” section.

The Experimental Vehicle

The experimental vehicle looks the same as a normal production car and is equipped with an electronic brake switch. The laser ranging sensor is located at the front of the vehicle to collect distance data of the obstacle in front of the vehicle. The computer processing terminal receives the EEG signals from the BCI and the laser ranging data, and generates the final vehicle control commands after data processing. The communication module of the experimental vehicle is modified so that the vehicle can receive and execute the vehicle control commands sent by the computer processing terminal. The schematic diagram of the experimental vehicle is shown in Fig. 1.

Strategy of the Human-Vehicle Cooperative Driving System

The system structure of the human-vehicle cooperative driving system combining the BCV mode with intelligent driving assistance technology is illustrated in Fig. 2. The system can be described in five parts: SSVEP-based BCI, obstacle detection system, computer processing terminal, communication system and the intelligent vehicle. (1) The SSVEP-based BCI consists of the SSVEP visual stimulus sources presented on a computer screen, EEG signal acquisition unit and processing unit. (2) The obstacle detection system includes the laser ranging sensor and the ranging data processing unit. (3) The computer processing terminal integrates the EEG signal processing unit, the ranging data processing unit and the command transmission determination unit. (4) The communication system consists of the serial port, the signal converter and high-speed controller area network (CAN) bus. (5) The intelligent vehicle with electronic brake switch is modified in the communication module.

The SSVEP-based BCI recognises the driver’s intention by analysing EEG signals. The obstacle detection system sends a braking signal if an obstacle in front of the vehicle is detected to be closer than a threshold. The communication system establishes a communication channel between the computer processing terminal and the experimental vehicle. Both the BCI and the obstacle detection system act as vehicle control command generation terminals. Control commands generated by the BCI and the obstacle detection system are not sent to the experimental vehicle until they are judged by the command transmission determination unit, which forwards only valid commands. Moving commands are invalid if the obstacle detection system detects that an obstacle in front of the vehicle is too close.
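As an illustration of this gating logic, a minimal sketch of the command transmission determination unit is given below. The command names are assumed symbols, not the actual protocol values, and the real unit may implement additional checks; braking commands generated by the obstacle detection system are forwarded through the same function.

```python
MOVE, BRAKE = "move", "brake"   # assumed symbolic names for the two control commands

def determine_command(candidate, obstacle_too_close, last_sent_command):
    """Return the command to transmit to the vehicle, or None if it is invalid."""
    # Moving commands are invalid while the obstacle detection system reports
    # an obstacle closer than the safety threshold in front of the vehicle.
    if candidate == MOVE and obstacle_too_close:
        return None
    # Repeated moving or braking commands are invalid: the vehicle is already
    # in the requested state, so nothing needs to be transmitted.
    if candidate == last_sent_command:
        return None
    return candidate
```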

If a moving signal is sent to the experimental vehicle, the electronic brake switch loosens, and the vehicle moves straight at a constant speed of 1.38 m/s. The electronic brake switch clamps to stop the vehicle if the experimental vehicle receives a braking signal. The electronic brake switch returns its status, namely, loose or clamped, to the computer terminal in real time.

In the following sections, we detail the three main parts of the human-vehicle cooperative driving system: (1) the SSVEP-based BCI; (2) the laser ranging obstacle detection system and (3) the communication system.

SSVEP-Based BCI

BCI provides a direct communication channel between the human brain and the computer system. EEG signals commonly used in BCIs include SSVEP, P300 potentials and motor imagery [3, 13, 17]. In this paper, we use SSVEP signals in the BCI. When a subject’s eyes are focused on a visual stimulus source flickering continuously at a constant frequency, an SSVEP signal containing the same frequency or a multiple of the frequency of the visual stimulus source can be measured in the subject’s EEG signals, with the highest amplitude over the occipital lobe (visual cortex) [18,19,20]. The conditions for evoking SSVEPs are simple, and SSVEP signals are stable and well suited to real-time control. Moreover, subjects do not need training before an SSVEP experiment [21, 22].

Nakanishi et al. proposed an SSVEP detection method based on task-related component analysis (TRCA), achieving an accuracy of 89.83% with trial lengths of 1.2–1.5 s [23]. Kumar and Reddy proposed a subject-specific target detection framework, sum of squared correlations (SSCOR), to improve SSVEP performance; SSCOR outperformed TRCA in detection accuracy and information transfer rate [24]. However, both methods require individual training data to be acquired prior to online operation. Waytowich et al. used a compact convolutional neural network (CNN) to decode signals from a 12-class SSVEP dataset without user-specific calibration, requiring only raw EEG signals for automatic feature extraction; the mean accuracy across subjects was approximately 80% with a 4-s trial length in an offline experiment [18]. Podmore et al. applied a deep convolutional neural network (DCNN), PodNet, and achieved 86% and 77% inter-subject classification accuracy for data capture periods of 6 s and 2 s, respectively [25]. These two studies reported lower accuracy with longer trial times and did not carry out online experiments. Ravi et al. proposed a CNN-based classification method to enhance the detection accuracy of SSVEP in the presence of competing stimuli; the offline classification accuracy was 75.3% and the online simulation accuracy was 71.3% with a stimulus time of 6 s, which do not meet the requirements of vehicle control [26]. None of the above studies involved outdoor experiments.

In our work, the SSVEP-based BCI consists of the SSVEP visual stimulus sources presented on a computer screen, the EEG signal acquisition unit and the processing unit. We use two flickering frequencies of 8 Hz and 10 Hz as SSVEP visual stimulus sources and a non-invasive BCI to obtain the SSVEP EEG signals. According to the results of the offline test with different analysis time lengths, we choose 3 s as the analysis time length of the SSVEP signals. We use the canonical correlation analysis (CCA) method to classify SSVEPs and the overlap time windows voting (OTWV) method to improve the classification accuracy; the approach is training-free and is used to control a vehicle outdoors. The driver’s intentions (moving or braking) are extracted by analysing the frequency features of the SSVEP signals, and the BCI sends a moving command or a braking command to the vehicle according to the classification results. Since repeated moving or braking commands are invalid for controlling the vehicle, there is no need for the driver to continuously focus on the stimulus if the vehicle stays in the desired state.

To better introduce the SSVEP-based BCI, we split it into several parts: SSVEP signals, EEG signal pre-processing, EEG signal acquisition unit, CCA method, offline test with different analysis time lengths and OTWV method.

SSVEP Signals

The SSVEP signals are oscillatory potentials elicited in the EEG in response to periodic light stimulation. SSVEPs occur in the visual cortex when a visual stimulus is applied to a human. A typical SSVEP response contains spectral peaks at frequencies that are directly related to the stimulation frequency. Stimuli of different flickering frequencies evoke SSVEPs of different amplitudes. In general, the strongest, moderately strong and weak SSVEPs are observed for stimuli in the low-frequency (1–12 Hz), medium-frequency (12–30 Hz) and high-frequency (30–60 Hz) ranges, respectively [27, 28]. In this paper, to obtain the strongest SSVEPs, the visual stimuli are in the low-frequency range.

Because the first and second harmonics of the stimulus frequencies are used for classification in the CCA method, the first harmonic frequency of one stimulus should differ from the second harmonic frequency of the other stimulus [29]. Therefore, the SSVEP visual stimulus sources consist of two rectangular blocks with constant flickering frequencies of 10 Hz and 8 Hz, respectively. Both flashing rectangular blocks are 5 cm × 5 cm in size and are displayed on a laptop screen. The EEG signals evoked by the 10-Hz stimulus control the vehicle to move straight, and those evoked by the 8-Hz stimulus control it to brake. Figure 3 shows the corresponding FFT frequency spectra of SSVEPs collected from Oz of a single subject for 3 s in response to (a) 8 Hz and (b) 10 Hz stimulation.
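As an illustration of how spectra such as those in Fig. 3 can be obtained, the following sketch computes the amplitude spectrum of a 3-s, 256-Hz EEG segment with NumPy. The synthetic test signal is only a placeholder standing in for real data from the Oz channel.

```python
import numpy as np

FS, SEGMENT_S = 256, 3   # EEG sampling rate (Hz) and analysis window length (s)

def amplitude_spectrum(segment):
    """Return (frequencies, amplitudes) for a 1-D EEG segment of length FS * SEGMENT_S."""
    segment = segment - segment.mean()                        # remove the DC offset
    amps = np.abs(np.fft.rfft(segment)) / len(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / FS)
    return freqs, amps

# Synthetic 10-Hz SSVEP-like test signal (fundamental + second harmonic + noise)
t = np.arange(FS * SEGMENT_S) / FS
oz = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)
freqs, amps = amplitude_spectrum(oz)
print("spectral peak near %.1f Hz" % freqs[np.argmax(amps[1:]) + 1])   # expected: about 10 Hz
```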

Fig. 1 Schematic diagram of the experimental vehicle

Fig. 2 The human-vehicle cooperative driving system combining BCV mode with intelligent driving assistance technology

Fig. 3 FFT frequency spectra of SSVEPs in response to (a) 8 Hz and (b) 10 Hz stimulation

Fig. 4 Location distribution of SSVEP signal acquisition electrodes

Fig. 5 Comparison between the raw EEG signals and the denoised EEG signals: (a) raw EEG signals; (b) denoised EEG signals

Fig. 6 Data processing diagram (a) with the OTWV method and (b) without the OTWV method

Fig. 7 UTM-30LX 2D laser ranging sensor located at the front of the vehicle

Fig. 8 Virtual environment of the simulated BCV driving

Fig. 9 Architecture of the simulated BCV driving experiment

Fig. 10 Schematic diagram of the outdoor experimental environment

Fig. 11 Experimental environment in the experimental vehicle

EEG Signal Acquisition Unit

The EEG signal acquisition equipment used is non-invasive. Compared with implanting a chip into the brain to enable intention control, non-invasive equipment does not cause harm to the human body. The g.USBamp device from g.tec medical engineering GmbH (Austria) was used as the bio-signal amplifier, which allows 16-channel bio-signal acquisition. The sampling frequency of the EEG signals was 256 Hz per channel. In SSVEP-based BCIs, channels over the occipital and parietal (visual cortex) areas are usually selected to record the SSVEPs. Subjects were asked to wear a special cap with fixed electrodes, and the SSVEP signals were collected from six electrodes over the occipital and parieto-occipital areas (three O and three PO positions) according to the international 10/20 system, as shown in Fig. 4. The channels at the centre of the visual cortex have higher amplitudes and therefore provide better features. The outdoor BCV experiment requires a strong and reliable response to the visual stimulation, and these six electrodes were selected to achieve high and stable classification accuracy [18, 26, 30].

The ground electrode of g.USBamp was a frontal electrode positioned on the forehead, while the reference electrode was placed on the right earlobe [31]. The amplifier was directly connected to a PC by a USB cable, through which the amplified and digitised EEG signals were sent to the PC for further processing.

EEG Signal Pre-processing

Pre-processing is an important step to remove noise from the collected EEG signals and prepare them for further analysis. Several filters are used, depending on the aims of the study. A 50-Hz notch filter was used to remove the power line interference, and a Butterworth band-pass filter was used to extract EEG signals with frequencies between 5 and 60 Hz and suppress out-of-band noise. A comparison between the raw EEG signals and the EEG signals filtered with the Butterworth band-pass filter is shown in Fig. 5. In this figure, the EEG signals were collected from the six occipital and parieto-occipital electrodes described above. The EEG signals were segmented into 3-s windows for noise removal.
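A minimal sketch of this pre-processing step is shown below using SciPy; the filter order and the notch quality factor are assumptions, since they are not specified above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 256  # EEG sampling rate (Hz)

# 50-Hz notch filter for the power line interference (quality factor is an assumption)
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=FS)
# Butterworth band-pass filter, 5-60 Hz (the filter order, here 4, is an assumption)
b_band, a_band = butter(N=4, Wn=[5.0, 60.0], btype="bandpass", fs=FS)

def preprocess(segment):
    """Denoise one 3-s EEG segment of shape (channels, samples)."""
    segment = filtfilt(b_notch, a_notch, segment, axis=-1)   # remove 50-Hz interference
    segment = filtfilt(b_band, a_band, segment, axis=-1)     # keep the 5-60 Hz band
    return segment

# Example: six channels, 3 s of data (random placeholder for the recorded signals)
raw = np.random.randn(6, 3 * FS)
denoised = preprocess(raw)
```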

Canonical Correlation Analysis Method

The most prominent features of SSVEPs are in the frequency domain, so SSVEPs can be classified according to their frequency components. The CCA method is used to classify SSVEPs by comparing the correlation between the collected SSVEP signals and reference signals at each stimulus frequency. CCA is a statistical method that has traditionally and widely been used to analyse relationships between two sets of variables in various fields [27, 32, 33]. The objective of CCA is to quantify the degree of correlation between two data sets by finding linear transformations of each set whose correlation coefficient is maximal; the higher this correlation coefficient, the more closely the two data sets are related.

The principle of the CCA method is described as follows. Given two data sets \(X\) and \(Y\), CCA attempts to find a pair of weight vectors \(W_{X}\) and \(W_{Y}\) that maximise the correlation between the linear combinations \(x\) and \(y\), which are calculated as:

$$x = X^{T} W_{X}$$

(1)

$$y = Y^{T} W_{Y}$$

(2)

Here, \(x\) and \(y\) are known as canonical variates, which are uncorrelated within each data set and have zero mean and unit variance, and \(W_{X}\) and \(W_{Y}\) are the canonical coefficient vectors. \(\rho\) is the correlation coefficient of \(x\) and \(y\) and can be calculated as follows:

$$\begin{aligned} \rho &= \max (corr(x,y)) \\ &= \max \left( \frac{Cov(x,y)}{\sqrt{Var(x)\,Var(y)}} \right) \\ &= \max \left( \frac{E[x^{T} y]}{\sqrt{E[x^{T} x]\,E[y^{T} y]}} \right) \\ &= \max \left( \frac{E[W_{X}^{T} X Y^{T} W_{Y}]}{\sqrt{E[W_{X}^{T} X X^{T} W_{X}]\,E[W_{Y}^{T} Y Y^{T} W_{Y}]}} \right) \end{aligned}$$

(3)

In Eq. (3), \(Var\), \(Cov\) and \(E\) represent the variance, the covariance and the expectation, respectively. The cross-correlation matrix of \(X\) and \(Y\) is described as:

$$C_{XY} = E[XY^{T}]$$

(4)

The autocorrelation matrices of \(X\) and \(Y\) are \(C_{XX}\) and \(C_{YY}\), respectively, which are computed as:

$$C_{XX} = E[XX^{T}]$$

(5)

$$C_{YY} = E[YY^{T}]$$

(6)

The collected SSVEP signals and the reference signals of a stimulus frequency are represented by the two data sets \(X\) and \(Y\), respectively, and are used to calculate the CCA correlation coefficients. The CCA method can detect harmonic frequencies [29]. In this paper, SSVEP signals containing the same and twice (first and second harmonic) the frequency of the corresponding visual stimulus are analysed. The reference signals \(Y\) for a visual stimulus with frequency \(f\) are defined as follows:

$$Y = \left\{ \begin{array}{c} \sin (2\pi ft) \\ \cos (2\pi ft) \\ \sin (4\pi ft) \\ \cos (4\pi ft) \end{array} \right\}$$

(7)

The flickering frequency of the visual stimulus whose reference signals are most correlated with the collected SSVEP signals is taken as the classification result.
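The following sketch illustrates this classification rule with NumPy, building the reference signals of Eq. (7) for both stimulus frequencies and selecting the one with the larger maximal canonical correlation. The function and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

FS, SEGMENT_S = 256, 3
STIM_FREQS = {"move": 10.0, "brake": 8.0}   # 10 Hz -> moving, 8 Hz -> braking

def reference_signals(f, n_samples, fs=FS):
    """Reference set Y with the first and second harmonics of stimulus frequency f (cf. Eq. (7))."""
    t = np.arange(n_samples) / fs
    return np.vstack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                      np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])

def max_canonical_corr(X, Y):
    """Largest canonical correlation between data sets X and Y (rows = variables, columns = samples)."""
    Xc = X.T - X.T.mean(axis=0)          # samples x variables, zero mean per variable
    Yc = Y.T - Y.T.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)             # orthonormal bases of the two column spaces
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify_ssvep(eeg_segment):
    """Return the command whose stimulus frequency is most correlated with the EEG segment."""
    n = eeg_segment.shape[1]
    rhos = {cmd: max_canonical_corr(eeg_segment, reference_signals(f, n))
            for cmd, f in STIM_FREQS.items()}
    return max(rhos, key=rhos.get)

# Example: classify one pre-processed 3-s, 6-channel segment (random placeholder data)
segment = np.random.randn(6, FS * SEGMENT_S)
print(classify_ssvep(segment))
```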

Offline Test with Different Analysis Time Lengths

To analyse the relationship between the time length of the signals and the classification accuracy, an offline test with different analysis time lengths was carried out; this test also verified the accuracy of the SSVEP classification. Five healthy subjects (three males and two females) participated in the test. All subjects participated on a voluntary basis, and letters of consent were obtained from all participants.

In the offline test, each subject was asked to focus alternately on the two SSVEP visual stimulus sources of 8 Hz and 10 Hz. The SSVEP signals of each subject were collected to analyse the relationship between the time length of the signals and the classification accuracy. SSVEP signals of four durations (1 s, 2 s, 3 s and 4 s) were analysed using the CCA method. The offline test was repeated 10 times, and the results, including the correlation coefficient and the classification accuracy, are presented in the “Offline Test Result of Different Analysis Time Lengths” section. Based on these results, 3 s was chosen as the analysis time for the SSVEP signals in the BCV experiments to balance high classification accuracy and fast response time.
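A sketch of how such an offline evaluation could be scripted is shown below; it re-uses the classify_ssvep() function from the previous sketch, and the trial data and labels are placeholders rather than the recorded data.

```python
def accuracy_vs_length(trials, labels, lengths_s=(1, 2, 3, 4), fs=256):
    """trials: list of (channels x samples) arrays; labels: 'move' or 'brake' for each trial."""
    results = {}
    for length in lengths_s:
        n = int(length * fs)
        # Classify only the first `length` seconds of each trial and count correct decisions
        correct = sum(classify_ssvep(trial[:, :n]) == label
                      for trial, label in zip(trials, labels))
        results[length] = correct / len(trials)
    return results
```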

Overlap Time Windows Voting Method

The OTWV method was used to improve the classification accuracy of the SSVEP signals and the stability of the command output. The principle of the OTWV method is shown in Fig. 6. SSVEP signals are analysed using a 3-s data window with a 1-s offset, and one classification result (one vote) is obtained from each 3-s data window. The classification result that receives at least two of the three votes is taken as the final classification result. The OTWV method can improve the classification accuracy of the SSVEP signals without increasing the analysis time length, which can be proved as follows.

Let \(p\) be the classification accuracy for a single 3-s SSVEP time window and \(p'\) the classification accuracy obtained with the OTWV method. For simplicity, we assume that the classification results of the three time windows are independent. There are two possible cases: (1) the signals in the three 3-s time windows are all of the same class, and (2) the signals in one of the 3-s time windows are of a different class from those in the other two windows. The first case holds in most trials because the class of the signals in continuous time is usually the same. In the first case, \(p'\) can be calculated as:

$$p' = C_{3}^{2} p^{2} (1 - p) + p^{3}$$

(8)

where \(C_{3}^{2}\) denotes the number of combinations of 2 elements chosen from 3. The right-hand side of the equation is the probability that exactly one of the three 3-s time windows is misclassified plus the probability that all three time windows are classified correctly. For the OTWV method to improve the accuracy, \(p'\) must be greater than \(p\), which amounts to solving the following inequality:

$$C_{3}^{2} p^{2} (1 - p) + p^{3} > p$$

(9)

This inequality can be simplified to:

$$p(2p - 1)(p - 1) < 0$$

(10)

In the second case, \(p'\) can be calculated as:

$$p' = p^{3} + p^{2} (1 - p) + C_{2}^{1} p(1 - p)^{2}$$

(11)

In a similar way to the first case, the inequality \(p' > p\) simplifies to:

$$p(2p - 1)(p - 1) > 0$$

(12)

In the second case, if the classification accuracy \(p\) is larger than 0.5 and smaller than 1, this inequality does not hold, so the OTWV method can reduce the classification accuracy of the SSVEP signals. However, the first case holds in most trials of the real experiment. For example, with \(p = 0.9\), Eq. (8) gives \(p' = 0.972\) in the first case, whereas Eq. (11) gives \(p' = 0.828\) in the second case. In the “Analysis of Overlap Time Windows Voting Method” section, we compare the classification accuracy of the SSVEP signals with and without the OTWV method; the comparison shows that the OTWV method does improve the SSVEP classification accuracy in practice.

Theoretically, with the OTWV method a classification result is generated every 1 s during continuous signal processing, ignoring data processing and transmission time, whereas without the OTWV method it takes at least 3 s to generate a classification result. The OTWV method therefore increases the rate at which classification results are generated. At the same time, the OTWV method avoids frequent changes of the output control command and improves the stability of vehicle control in online experiments.
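A minimal sketch of the OTWV decision rule is given below; it re-uses classify_ssvep() from the CCA sketch above, and the buffer handling details (a 5-s buffer covering three overlapping 3-s windows) are assumptions.

```python
from collections import Counter

FS = 256
WINDOW_S, STEP_S = 3, 1                    # 3-s data windows with a 1-s offset

def otwv_classify(eeg_buffer):
    """Majority vote over three overlapping 3-s windows of a 5-s buffer (channels x samples)."""
    votes = []
    for start_s in range(0, WINDOW_S, STEP_S):           # windows starting at 0 s, 1 s and 2 s
        start = start_s * FS
        window = eeg_buffer[:, start:start + WINDOW_S * FS]
        votes.append(classify_ssvep(window))              # one vote per 3-s window
    winner, count = Counter(votes).most_common(1)[0]
    return winner           # with two classes and three votes, the winner always has >= 2 votes
```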

Laser Ranging Obstacle Detection

Considering the potential dangers of driving a real motor vehicle outdoors, an obstacle detection module is essential on the BCV. Obstacle detection techniques can detect obstacles appearing around the vehicle and alert the driver to possible collisions [34, 35]. Therefore, the laser ranging obstacle detection system is integrated into the vehicle to improve the safety of the BCV and realise the human-vehicle cooperative driving system. The obstacle detection system includes the laser ranging sensor and the ranging data processing unit. The laser ranging sensor is located at the front of the vehicle to collect distance data of the obstacle in front of the vehicle and transfers the measured distance information to the ranging data processing unit. If an obstacle is detected too close in front of the vehicle, the obstacle detection system sends a braking signal to stop the vehicle and avoid a collision, keeping the vehicle in a safe state.

Millimetre wave radars, visual sensors and LiDAR sensors are all important sensors in the field of intelligent driving. Millimetre wave radars adapt well to weather conditions; however, traffic scenario elements such as roads, buildings, vegetation, vehicles and pedestrians introduce noise interference, which can degrade or even defeat the radar’s detection and measurement accuracy [36]. Visual sensors are used to detect roads, lane markings, obstacles and objects; however, they are easily influenced by light changes, and their detection accuracy is greatly reduced under complex shadows or bad weather conditions [37]. Therefore, visual sensors are usually combined with laser scanners to obtain high-accuracy information. LiDAR sensors are widely used to detect objects and obstacles with good range resolution and high accuracy. Generally, LiDAR sensors can be divided into 2D and 3D LiDAR sensors. 3D LiDAR sensors obtain much richer information about the surroundings; however, the data from a 3D LiDAR sensor are large and complicated and take longer to process than those from a 2D LiDAR sensor, and a 3D LiDAR sensor is more expensive [38]. In this paper, considering the low cost and simple data processing, the UTM-30LX, produced by HOKUYO, is used as the 2D laser ranging sensor of the vehicle. The UTM-30LX is a compact, lightweight 2D LiDAR sensor with a 270° field of view and a range of up to 30 m. With enhanced internal filtering and an ingress protection rating, this LiDAR device is less susceptible to ambient outdoor light [39]. The LiDAR sensor is mounted horizontally on the bonnet of the car. The effective measurement range of the laser ranging obstacle detection system is set to 3 m over a 90° sector in front of the vehicle, as illustrated in Fig. 7.

The collision avoidance behaviour is a full stop. It is activated if the distance between the vehicle and the obstacle is detected to be less than the effective measurement distance, in which case the obstacle detection system triggers a braking signal to stop the vehicle. The collision avoidance behaviour ensures the safety of the driver and the vehicle during driving, thus improving the performance of the BCV.
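A minimal sketch of this distance check is shown below. The scan is assumed to be already parsed into per-point angles and ranges; the sensor interface itself and the variable names are assumptions.

```python
import numpy as np

RANGE_LIMIT_M = 3.0    # effective measurement range of the obstacle detection system
SECTOR_DEG = 90.0      # detection sector centred on the vehicle's heading

def obstacle_too_close(angles_deg, ranges_m):
    """Return True if any scan point in the 90-degree front sector is closer than 3 m.

    angles_deg, ranges_m: 1-D arrays describing one 2-D laser scan (0 degrees = straight ahead).
    """
    angles_deg = np.asarray(angles_deg, dtype=float)
    ranges_m = np.asarray(ranges_m, dtype=float)
    in_sector = np.abs(angles_deg) <= SECTOR_DEG / 2.0
    valid = ranges_m > 0.0                              # discard invalid or dropped returns
    return bool(np.any(ranges_m[in_sector & valid] < RANGE_LIMIT_M))

# The ranging data processing unit runs this check on every scan; when it returns True,
# a braking command is handed to the command transmission determination unit.
```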

Communication System

Control commands generated by the BCI and the obstacle detection system are judged by the command transmission determination unit, and the communication system then sends the valid control command to the electronic brake switch to perform the BCV control. The communication system supports the communication between the computer processing terminal and the experimental vehicle. Controlling a real vehicle with a BCI combined with obstacle detection technologies requires fast signal transmission; in this paper, high-speed CAN communication is selected to transmit the vehicle control signals [40]. The communication system consists of three parts: the serial port, the signal converter and the high-speed CAN bus. The serial port is the first part of the communication system; through it, the BCI sends control commands to the signal converter. The serial port baud rate is 115,200 bps with an 8-bit data format, no parity bit and one stop bit. The signal converter is a signal conversion interface between the serial port and the high-speed CAN bus. Through the signal converter and the high-speed CAN bus, control commands are sent to the controlled component of the experimental vehicle, namely the electronic brake switch. Meanwhile, the electronic brake switch returns the status information of the vehicle to the computer terminal in real time via the communication system.

Table 2 shows the definition of the control protocol in terms of SSVEP frequencies, vehicle control commands and hexadecimal commands. The control protocol is defined according to the vehicle internal protocol. These defined hexadecimal commands are only used to control vehicle movement and braking.
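The following pyserial sketch illustrates the serial side of this link. The port name and the hexadecimal command bytes are placeholders, since the actual values are defined by the vehicle's internal protocol (Table 2).

```python
import serial  # pyserial

PORT = "COM3"                      # placeholder port name for the serial-to-CAN converter
MOVE_CMD = bytes.fromhex("01")     # placeholder; real value follows the vehicle protocol (Table 2)
BRAKE_CMD = bytes.fromhex("02")    # placeholder; real value follows the vehicle protocol (Table 2)

def send_command(cmd_bytes, port=PORT):
    """Send one hexadecimal control command over the 115200-bps, 8-N-1 serial link."""
    with serial.Serial(port, baudrate=115200, bytesize=serial.EIGHTBITS,
                       parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                       timeout=0.1) as link:
        link.write(cmd_bytes)
        return link.read(8)        # status bytes returned by the electronic brake switch, if any
```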

Experiments of Brain-Controlled Vehicle

In this study, we conducted two kinds of experiments on five subjects: the simulated BCV experiment, to verify the feasibility of the SSVEP-based BCI system, and the real vehicle controlling experiment, to verify the new controlling mode in the outdoor environment.

Subjects

Five healthy subjects aged between 21 and 27 participated in the experiment on a voluntary basis, and letters of consent were obtained from all of them. Some subjects had taken part in other earlier BCI experiments. However, none of them had experience in controlling a real vehicle via the BCI prior to the experiment. In addition, none of the subjects had a history of brain or neurological disease.

Experiment Design and Procedures

Two experiments were performed: (1) the simulated BCV experiment and (2) the real vehicle controlling experiment. In the simulated vehicle controlling experiment, we verified the feasibility of the SSVEP-based BCI system for controlling a simulated vehicle in the virtual driving environment. In the real vehicle controlling experiment, we implemented human-vehicle cooperative driving by combining the BCV system with obstacle detection and verified the new controlling mode outdoors. The simulation experiment and the real vehicle driving experiment were performed on different days. Before the experiments, we gave instructions to the subjects so that they could operate correctly during the experiments.

Simulated Brain-Controlled Vehicle Experiment

The experiment was carried out in a virtual driving platform with the simulated vehicle based on open graphics library (OpenGL), as illustrated in Fig. 8. The simulated environment was run on the Windows 7 operating system.

The architecture of the simulated BCV driving experiment included two main parts, SSVEP-based BCI and virtual driving platform with simulated vehicle, as shown in Fig. 9. The SSVEP-based BCI module consisted of the SSVEP visual stimuli presented on a computer screen, SSVEP signal acquisition unit and SSVEP signal processing unit. The BCI sent generated control commands to the simulated vehicle via socket communication. After receiving a control command, the simulated vehicle performed the corresponding action.
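A minimal sketch of this socket link is shown below; the address, port and message format are assumptions rather than the actual implementation.

```python
import socket

SIM_ADDR = ("127.0.0.1", 9000)     # placeholder address and port of the virtual driving platform

def send_to_simulator(command):
    """Send a 'move' or 'brake' command string to the simulated vehicle."""
    with socket.create_connection(SIM_ADDR, timeout=1.0) as sock:
        sock.sendall(command.encode("ascii"))
```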

The BCI analysed 3-s segments of the SSVEP signals and sent the generated control commands to the simulated vehicle every 3 s. If the subject focused on the visual stimulus of 10 Hz or 8 Hz, the SSVEP signals were recognised as a moving command or a braking command, respectively. The simulated vehicle moved straight or braked after a moving command or a braking command was sent to it.

Subjects were asked to wear the EEG signal acquisition equipment and sit in front of the computer screen. The experiment was repeated four times for each subject. Each time, subjects were required to successively send ten commands, including five moving commands and five braking commands; moving and braking commands were sent alternately to control the simulated vehicle. The response time was measured from the moment the driver started focusing on the stimulus to the moment the corresponding command was generated (the simulated vehicle started or stopped). The results of the simulation experiment, including the average response time and the accuracy, are shown in the “Result of Simulated Vehicle Controlling Experiment” section.

Real Vehicle Controlling Experiment

The simulation experiment familiarised the subjects with using the BCI and prepared them for controlling the experimental vehicle. In the real vehicle driving experiment, subjects controlled the real vehicle outdoors via the BCI combined with the laser obstacle detection. To realise human-vehicle cooperative driving, the outdoor environment was built on an empty site; the schematic diagram of the outdoor experimental environment is presented in Fig. 10. The vehicle is an automatic car of size 4.856 m × 1.926 m × 1.900 m, with a seven-speed automatic transmission, electronic brake force distribution, an antilock braking system, a brake assist system, etc. Two flags were set on the roadside, with a distance of 20 m between them. In addition, an obstacle of about 70 cm × 50 cm × 150 cm (length × width × height) was placed at the end of the experimental road; the distance between the obstacle and the second flag was 5 m. The obstacle detection range of the laser ranging obstacle detection system was set to 3 m.

Each subject was required to complete the experiment five times. Each time, the subject controlled the experimental vehicle to move from the start position and, when the vehicle arrived at a flag position, controlled the vehicle to stop. If the obstacle was detected too close in front of the vehicle, the obstacle detection system sent a braking signal to stop the vehicle, which ended the experimental run. The response time was measured from the moment the driver started to focus on the stimulus until the corresponding command was generated (a beep was heard when the command was sent). Subjects were asked to wear the EEG signal acquisition equipment, as shown in Fig. 11. The BCI analysed the SSVEP signals and generated a hexadecimal vehicle control command every 3 s. Control commands generated by the BCI and the obstacle detection system were sent to the experimental vehicle only after they were judged by the command transmission determination unit. Moving commands from the BCI were invalid if the obstacle detection system detected an obstacle too close in front of the vehicle. Repeated commands to move or brake were also invalid, so the driver only had to focus on the stimulus when the vehicle state needed to change.
