Decoding Visual Motions from EEG Using Attention-Based RNN
The main objective of this paper is to use deep neural networks to decode the electroencephalography (EEG) signals evoked when individuals perceive four types of motion stimuli (contraction, expansion, rotation, and translation). Methods for single-trial and multi-trial EEG classification are both investigated. Attention mechanisms and a variant of recurrent neural networks (RNNs) are incorporated as the decoding model: the attention mechanisms emphasize task-related responses and suppress redundant EEG information, while the RNN learns feature representations for classification from the processed EEG data. To improve the generalization of the decoding model, a novel online data augmentation method that randomly averages EEG sequences to generate artificial signals is proposed for single-trial EEG. On our dataset, this augmentation improves the accuracy of our RNN-based model and two convolutional-neural-network benchmark models by 5.60%, 3.92%, and 3.02%, respectively; with augmentation, the attention-based RNN reaches a mean accuracy of 67.18% for single-trial EEG decoding.

For multi-trial EEG classification, the amount of training data decreases linearly after averaging, which can lead to poor generalization. To address this deficiency, we devised three schemes that randomly combine data for network training. The results indicate that these strategies effectively prevent overfitting and improve the classification rate by up to 19.20% compared with fixed averaging of EEG trials; the best of the three strategies achieves an accuracy of 82.92% for multi-trial EEG classification. The decoding performance of the proposed methods indicates their application potential in brain–computer interface (BCI) systems based on visual motion perception.
Main Authors: Dongxu Yang, Yadong Liu, Zongtan Zhou, Yang Yu, Xinbin Liang
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Applied Sciences
Subjects: electroencephalography; attention mechanisms; recurrent neural networks; data augmentation; brain–computer interface; visual motion perception
Online Access: https://www.mdpi.com/2076-3417/10/16/5662
Citation: Applied Sciences, vol. 10, no. 16, art. 5662 (2020-08-01). ISSN: 2076-3417. DOI: 10.3390/app10165662.
Affiliations: all five authors are with the College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China.
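The online augmentation described in the abstract generates artificial signals by averaging randomly chosen single-trial EEG sequences of the same class. The paper does not give implementation details, so the following is only a minimal sketch under assumed conventions: trials stored as a `(n_trials, n_channels, n_samples)` array, and a hypothetical helper `augment_by_random_averaging` that averages `k` same-class trials per artificial example.

```python
import numpy as np

def augment_by_random_averaging(trials, labels, n_new, k=2, rng=None):
    """Sketch of random-averaging EEG augmentation (assumed details).

    trials : (n_trials, n_channels, n_samples) array of single-trial EEG
    labels : (n_trials,) array of class ids
    n_new  : number of artificial trials to generate
    k      : how many same-class trials to average per artificial trial
    """
    rng = np.random.default_rng(rng)
    classes = np.unique(labels)
    new_x, new_y = [], []
    for _ in range(n_new):
        c = rng.choice(classes)                  # pick a stimulus class
        idx = np.flatnonzero(labels == c)        # real trials of that class
        pick = rng.choice(idx, size=k, replace=False)
        new_x.append(trials[pick].mean(axis=0))  # average k real trials
        new_y.append(c)
    return np.stack(new_x), np.asarray(new_y)
```

Averaging same-class trials attenuates trial-specific noise while preserving the class-locked response, which is why the artificial signals remain valid training examples. Performing this online (per epoch) means the network rarely sees the exact same averaged signal twice.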
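For multi-trial classification, the abstract notes that fixed averaging shrinks the training set linearly (n trials per class yield only n/k averaged examples). The paper's three combination strategies are not spelled out here, so the sketch below shows just one plausible scheme under assumed details: drawing a fresh random k-subset of same-class trials for every training example, which exposes the network to up to C(n, k) distinct averages instead of n/k.

```python
import numpy as np

def random_multitrial_batches(trials, labels, k, batch_size, n_batches, rng=None):
    """Yield batches of k-trial averages with fresh random groupings.

    A hedged sketch of random trial combination for multi-trial EEG
    training, not the paper's exact algorithm. Shapes as in the
    augmentation sketch: trials is (n_trials, n_channels, n_samples).
    """
    rng = np.random.default_rng(rng)
    classes = np.unique(labels)
    by_class = {c: np.flatnonzero(labels == c) for c in classes}
    for _ in range(n_batches):
        xs, ys = [], []
        for _ in range(batch_size):
            c = rng.choice(classes)
            # re-sample the k-trial group every time, unlike fixed averaging
            pick = rng.choice(by_class[c], size=k, replace=False)
            xs.append(trials[pick].mean(axis=0))
            ys.append(c)
        yield np.stack(xs), np.asarray(ys)
```

Because the groupings are re-sampled each batch, the effective training set no longer shrinks with k, which matches the abstract's observation that random combination prevents the overfitting seen with fixed averaging.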