World's fastest brain-computer interface: Combining EEG2Code with deep learning.


Bibliographic Details
Main Authors: Sebastian Nagel, Martin Spüler
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2019-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0221909
Description
Summary: We present a novel approach based on deep learning for decoding sensory information from non-invasively recorded Electroencephalograms (EEG). It can either be used in a passive Brain-Computer Interface (BCI) to predict properties of a visual stimulus the person is viewing, or it can be used to actively control a BCI application. Both scenarios were tested, whereby an average information transfer rate (ITR) of 701 bit/min was achieved for the passive BCI approach, with the best subject achieving an online ITR of 1237 bit/min. Further, the method allowed the discrimination of 500,000 different visual stimuli based on only 2 seconds of EEG data, with an accuracy of up to 100%. When using the method for an asynchronous self-paced BCI for spelling, an average utility rate of 175 bit/min was achieved, which corresponds to an average of 35 error-free letters per minute. As the presented method extracts more than three times as much information as the previously fastest approach, we suggest that EEG signals carry more information than generally assumed. Finally, we observed a ceiling effect such that the information content in the EEG exceeds that required for BCI control, and we therefore discuss whether BCI research has reached a point where the performance of non-invasive visual BCI control cannot be substantially improved anymore.
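The summary's spelling figures are internally consistent under the standard Wolpaw ITR formula: 35 error-free selections per minute from an alphabet of 32 characters yields exactly 175 bit/min. The sketch below illustrates that arithmetic; note that the 32-character alphabet is an assumption for illustration (the abstract does not state the speller's alphabet size), and the paper's "utility rate" metric is not identical to the Wolpaw ITR.

```python
import math

def wolpaw_itr_bits(n_targets: int, accuracy: float) -> float:
    """Bits per selection under the standard Wolpaw ITR formula.

    ITR = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    if accuracy >= 1.0:          # error-free: full log2(N) bits per selection
        return math.log2(n_targets)
    if accuracy <= 1.0 / n_targets:  # at or below chance: no information
        return 0.0
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

# Hypothetical 32-character alphabet, 35 error-free selections per minute:
bits_per_min = 35 * wolpaw_itr_bits(32, 1.0)
print(bits_per_min)  # 175.0
```

At perfect accuracy the formula reduces to log2(N) bits per selection, so 35 x 5 = 175 bit/min, matching the "35 error-free letters per minute" correspondence stated in the summary.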
ISSN: 1932-6203