Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information

In multi-modal emotion aware frameworks, it is essential to estimate the emotional features and then fuse them to different degrees, following either a feature-level or a decision-level strategy. While features from several modalities may enhance classification performance, they may also exhibit high dimensionality and make the learning process complex for the most commonly used machine learning algorithms. To overcome the issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed, as they demonstrate strong capabilities for feature learning and dimensionality reduction. This paper proposes a novel multi-modal emotion aware system that fuses speech with EEG modalities. Firstly, a mixed feature set of speaker-dependent and speaker-independent characteristics is estimated from the speech signal. Further, EEG is utilized as an inner channel complementing speech for more reliable recognition, by extracting multiple features in the time, frequency, and time-frequency domains. For classifying unimodal data of either speech or EEG, a hybrid fuzzy c-means-genetic algorithm-neural network model is proposed, whose fitness function finds the optimal number of fuzzy clusters that minimizes the classification error. To fuse speech with EEG information, a separate classifier is used for each modality, and the output is computed by integrating their posterior probabilities. Results show the superiority of the proposed model: the overall average accuracy rates are 98.06%, 97.28%, and 98.53% for EEG, speech, and multi-modal recognition, respectively. The proposed model is also applied to two public databases for speech and EEG, namely SAVEE and MAHNOB, achieving accuracies of 98.21% and 98.26%, respectively.
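The abstract mentions EEG features drawn from the time, frequency, and time-frequency domains. As a minimal sketch of what such per-channel features could look like (the statistics, frequency bands, sampling rate, and use of Welch's method below are illustrative assumptions, not the paper's actual feature set):

```python
import numpy as np
from scipy.signal import welch

# Assumed frequency bands (Hz); the paper's exact bands are not specified here.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def eeg_features(signal: np.ndarray, fs: float) -> dict:
    """Toy time- and frequency-domain features for one EEG channel."""
    feats = {
        # Time-domain descriptors.
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "rms": float(np.sqrt(np.mean(signal ** 2))),
    }
    # Frequency-domain descriptors from a Welch power spectral density estimate.
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 256))
    df = f[1] - f[0]
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        feats[f"power_{name}"] = float(np.sum(pxx[mask]) * df)  # approximate band power
    return feats

if __name__ == "__main__":
    fs = 128.0  # assumed sampling rate
    t = np.arange(0, 4, 1 / fs)
    # Synthetic channel: 10 Hz alpha rhythm plus noise (illustrative only).
    x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(eeg_features(x, fs))
```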

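The unimodal classifier is a hybrid fuzzy c-means-genetic algorithm-neural network model whose fitness function searches for the fuzzy cluster number that minimizes classification error. The sketch below illustrates only that cluster-number selection idea, with a minimal fuzzy c-means and an exhaustive search standing in for the genetic algorithm; the majority-label scoring rule and the toy data are assumptions made for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means; returns (centroids, membership matrix U of shape (n_samples, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted centroid update
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))                        # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

def cluster_classification_error(X, y, c):
    """Harden memberships, label each cluster with its majority emotion, and return the error rate."""
    _, U = fuzzy_c_means(X, c)
    hard = U.argmax(axis=1)
    preds = np.zeros_like(y)
    for k in range(c):
        members = y[hard == k]
        if members.size:
            preds[hard == k] = np.bincount(members).argmax()
    return float(np.mean(preds != y))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy 2-D "features" for three emotion classes (illustrative only).
    X = np.vstack([rng.normal(loc=3.0 * k, scale=0.5, size=(40, 2)) for k in range(3)])
    y = np.repeat(np.arange(3), 40)
    # Exhaustive search over candidate cluster counts stands in for the GA.
    errors = {c: cluster_classification_error(X, y, c) for c in range(2, 7)}
    best_c = min(errors, key=errors.get)
    print("classification error per cluster count:", errors)
    print("selected fuzzy cluster number:", best_c)
```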

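Fusion is performed at the decision level: a separate classifier is trained per modality, and the outputs are integrated through their posterior probabilities. A minimal sketch of one such integration rule, assuming a simple weighted average of the two posterior matrices (the paper's exact combination rule may differ):

```python
import numpy as np

def fuse_posteriors(p_speech: np.ndarray, p_eeg: np.ndarray, w_speech: float = 0.5) -> np.ndarray:
    """Combine per-class posterior probabilities from two modalities.

    p_speech, p_eeg: arrays of shape (n_samples, n_classes) whose rows sum to 1.
    w_speech: weight given to the speech modality (an assumption, not from the paper).
    """
    fused = w_speech * p_speech + (1.0 - w_speech) * p_eeg
    # Renormalize defensively so each row remains a probability distribution.
    return fused / fused.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    # Toy posteriors for 2 samples over 3 emotion classes (illustrative only).
    p_speech = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
    p_eeg = np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]])
    fused = fuse_posteriors(p_speech, p_eeg)
    print("fused posteriors:\n", fused)
    print("predicted classes:", fused.argmax(axis=1))
```

Equal weights are used here purely for illustration; in practice the per-modality weight would typically be tuned on validation data.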
Bibliographic Details
Main Authors: Rania M. Ghoniem, Abeer D. Algarni, Khaled Shaalan
Format: Article
Language: English
Published: MDPI AG 2019-07-01
Series: Information
Subjects: multi-modal emotion aware systems, speech processing, EEG signal processing, hybrid classification models
Online Access: https://www.mdpi.com/2078-2489/10/7/239
id doaj-51f2e48c1a8640c29b5552351c1acfb9
record_format Article
spelling doaj-51f2e48c1a8640c29b5552351c1acfb9
2020-11-24T21:44:13Z
eng
MDPI AG
Information
2078-2489
2019-07-01
Volume 10, Issue 7, Article 239
10.3390/info10070239
info10070239
Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
Rania M. Ghoniem (Department of Computer, Mansoura University, Mansoura 35516, Egypt)
Abeer D. Algarni (Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 84428, Saudi Arabia)
Khaled Shaalan (Faculty of Engineering & IT, The British University in Dubai, Dubai 345015, United Arab Emirates)
https://www.mdpi.com/2078-2489/10/7/239
multi-modal emotion aware systems
speech processing
EEG signal processing
hybrid classification models
collection DOAJ
language English
format Article
sources DOAJ
author Rania M. Ghoniem
Abeer D. Algarni
Khaled Shaalan
spellingShingle Rania M. Ghoniem
Abeer D. Algarni
Khaled Shaalan
Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
Information
multi-modal emotion aware systems
speech processing
EEG signal processing
hybrid classification models
author_facet Rania M. Ghoniem
Abeer D. Algarni
Khaled Shaalan
author_sort Rania M. Ghoniem
title Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
title_short Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
title_full Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
title_fullStr Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
title_full_unstemmed Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
title_sort multi-modal emotion aware system based on fusion of speech and brain information
publisher MDPI AG
series Information
issn 2078-2489
publishDate 2019-07-01
description In multi-modal emotion aware frameworks, it is essential to estimate the emotional features and then fuse them to different degrees, following either a feature-level or a decision-level strategy. While features from several modalities may enhance classification performance, they may also exhibit high dimensionality and make the learning process complex for the most commonly used machine learning algorithms. To overcome the issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed, as they demonstrate strong capabilities for feature learning and dimensionality reduction. This paper proposes a novel multi-modal emotion aware system that fuses speech with EEG modalities. Firstly, a mixed feature set of speaker-dependent and speaker-independent characteristics is estimated from the speech signal. Further, EEG is utilized as an inner channel complementing speech for more reliable recognition, by extracting multiple features in the time, frequency, and time-frequency domains. For classifying unimodal data of either speech or EEG, a hybrid fuzzy c-means-genetic algorithm-neural network model is proposed, whose fitness function finds the optimal number of fuzzy clusters that minimizes the classification error. To fuse speech with EEG information, a separate classifier is used for each modality, and the output is computed by integrating their posterior probabilities. Results show the superiority of the proposed model: the overall average accuracy rates are 98.06%, 97.28%, and 98.53% for EEG, speech, and multi-modal recognition, respectively. The proposed model is also applied to two public databases for speech and EEG, namely SAVEE and MAHNOB, achieving accuracies of 98.21% and 98.26%, respectively.
topic multi-modal emotion aware systems
speech processing
EEG signal processing
hybrid classification models
url https://www.mdpi.com/2078-2489/10/7/239
work_keys_str_mv AT raniamghoniem multimodalemotionawaresystembasedonfusionofspeechandbraininformation
AT abeerdalgarni multimodalemotionawaresystembasedonfusionofspeechandbraininformation
AT khaledshaalan multimodalemotionawaresystembasedonfusionofspeechandbraininformation
_version_ 1725911410652741632