A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding
The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between...
Main Authors: | Daniel D. E. Wong, Søren A. Fuglsang, Jens Hjortkjær, Enea Ceolini, Malcolm Slaney, Alain de Cheveigné |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2018-08-01 |
Series: | Frontiers in Neuroscience |
Subjects: | temporal response function, speech decoding, electroencephalography, selective auditory attention, attention decoding |
Online Access: | https://www.frontiersin.org/article/10.3389/fnins.2018.00531/full |
id | doaj-dd2069146be6417a9a4739faab3525d5 |
Frontiers in Neuroscience, ISSN 1662-453X, vol. 12, published 2018-08-01 by Frontiers Media S.A. DOI: 10.3389/fnins.2018.00531

Authors and affiliations:
Daniel D. E. Wong (Laboratoire des Systèmes Perceptifs, CNRS, UMR 8248, Paris, France; Département d'Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France)
Søren A. Fuglsang (Department of Electrical Engineering, Danmarks Tekniske Universitet, Kongens Lyngby, Denmark)
Jens Hjortkjær (Department of Electrical Engineering, Danmarks Tekniske Universitet, Kongens Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark)
Enea Ceolini (Institute of Neuroinformatics, University of Zürich, Zurich, Switzerland)
Malcolm Slaney (AI Machine Perception, Google, Mountain View, CA, United States)
Alain de Cheveigné (Laboratoire des Systèmes Perceptifs, CNRS, UMR 8248, Paris, France; Département d'Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France; Ear Institute, University College London, London, United Kingdom)

Abstract: The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated.
A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.

Keywords: temporal response function, speech decoding, electroencephalography, selective auditory attention, attention decoding
Online access: https://www.frontiersin.org/article/10.3389/fnins.2018.00531/full
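As a rough illustration of the approach the abstract describes, the sketch below fits a regularized forward model (lagged envelope predicting EEG) and backward model (lagged EEG reconstructing the envelope), then classifies attention by correlating the reconstruction with each candidate stream. Everything here is an assumption for illustration: the synthetic data, sampling rate, lag range, and the plain ridge regularizer are not the authors' implementation, and the paper compares several estimation methods beyond ordinary ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_ch = 2000, 8
lags = np.arange(16)  # ~0-250 ms of time lags at an assumed 64 Hz

env_att = rng.standard_normal(n)   # attended speech envelope (toy)
env_un = rng.standard_normal(n)    # unattended speech envelope (toy)
# Toy EEG: each channel tracks the attended envelope plus noise.
eeg = np.outer(env_att, rng.standard_normal(n_ch)) \
      + rng.standard_normal((n, n_ch))

def lagged(x, lags):
    """Stack time-lagged copies of x column-wise (samples x features)."""
    if x.ndim == 1:
        x = x[:, None]
    cols = []
    for lag in lags:
        shifted = np.roll(x, lag, axis=0)
        shifted[:lag] = 0.0  # zero the samples wrapped around by roll
        cols.append(shifted)
    return np.hstack(cols)

def ridge_fit(X, y, lam):
    """Ridge-regularized least squares: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Forward model: predict multi-channel EEG from the lagged envelope.
w_fwd = ridge_fit(lagged(env_att, lags), eeg, lam=1e2)   # (16, 8)

# Backward model: reconstruct the envelope from lagged EEG.
Xb = lagged(eeg, lags)
w_bwd = ridge_fit(Xb, env_att, lam=1e2)                  # (128,)
recon = Xb @ w_bwd

# Classify attention: pick the stream whose envelope correlates
# better with the reconstruction.
r_att = np.corrcoef(recon, env_att)[0, 1]
r_un = np.corrcoef(recon, env_un)[0, 1]
print("attended" if r_att > r_un else "unattended")
```

In practice the decoder would be trained and evaluated on separate trials (e.g., via cross-validation), and the regularization strength tuned on held-out data; this sketch omits those steps for brevity.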
collection | DOAJ |
issn | 1662-453X |