An Intrinsically Explainable Method to Decode P300 Waveforms from EEG Signal Plots Based on Convolutional Neural Networks



Bibliographic Details
Published in: Brain Sciences
Main Authors: Brian Ezequiel Ail, Rodrigo Ramele, Juliana Gambini, Juan Miguel Santos
Format: Article
Language: English
Published: MDPI AG, 2024-08-01
Online Access: https://www.mdpi.com/2076-3425/14/8/836
Description
Summary: This work proposes an intrinsically explainable, straightforward method to decode P300 waveforms from electroencephalography (EEG) signals, overcoming the black-box nature of deep learning techniques. The proposed method allows convolutional neural networks to decode information from images, a domain where they have achieved astonishing performance. By plotting the EEG signal as an image, the waveform can be both visually interpreted by physicians and technicians and detected by the network, offering a straightforward way to explain the decision. The identified pattern is used to implement a P300-based speller device, which can serve as an alternative communication channel for persons affected by amyotrophic lateral sclerosis (ALS). The method is validated through a brain–computer interface simulation on a public dataset recorded from ALS patients. Letter identification rates from the speller show that the method can identify the P300 signature across the set of 8 patients. The proposed approach achieves performance comparable to other state-of-the-art proposals while providing clinically relevant explainability (XAI).
ISSN:2076-3425
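
The core idea described in the summary, rendering a 1-D EEG epoch as a plot image so that a convolutional network can operate on it, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the rasterization resolution, the synthetic P300-like epoch, and the averaging kernel (a stand-in for a learned CNN filter) are all illustrative assumptions.

```python
import numpy as np

def rasterize_signal(signal, height=64, width=128):
    """Render a 1-D signal as a binary plot image (rows = amplitude, cols = time)."""
    # Resample to the target width by simple index selection (illustrative choice).
    idx = np.linspace(0, len(signal) - 1, width).astype(int)
    s = signal[idx]
    # Normalize amplitude into pixel rows; row 0 is the top of the image.
    s = (s - s.min()) / (np.ptp(s) + 1e-12)
    rows = ((height - 1) * (1.0 - s)).astype(int)
    img = np.zeros((height, width), dtype=np.float32)
    img[rows, np.arange(width)] = 1.0  # one "ink" pixel per time column
    return img

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic epoch: background oscillation plus a P300-like positive
# deflection around 300 ms (hypothetical data, not from the dataset).
t = np.linspace(0.0, 0.8, 200)  # 800 ms epoch
epoch = 0.1 * np.sin(2 * np.pi * 10 * t) + 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)

img = rasterize_signal(epoch, height=64, width=128)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0  # stand-in for a learned filter
features = conv2d_valid(img, kernel)
print(img.shape, features.shape)  # (64, 128) (62, 126)
```

Because each time column carries exactly one "ink" pixel, the resulting image is visually interpretable as an ordinary signal plot, which is what makes the network's input directly inspectable by clinicians; in the actual method the kernel weights would be learned by the CNN rather than fixed.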