Automatic Emotion Recognition Using Temporal Multimodal Deep Learning
Main Authors:
- Bahareh Nakisa (https://orcid.org/0000-0003-2211-2997), School of Information Technology, Deakin University, Geelong, VIC, Australia
- Mohammad Naim Rastgoo, School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD, Australia
- Andry Rakotonirainy (https://orcid.org/0000-0002-2144-4909), Centre for Accident Research and Road Safety-Queensland, Queensland University of Technology, Brisbane, QLD, Australia
- Frederic Maire (https://orcid.org/0000-0002-6212-7651), School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD, Australia
- Vinod Chandran (https://orcid.org/0000-0003-3185-0852), School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD, Australia

Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access, vol. 8, pp. 225463-225474
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3027026
Subjects: Emotion recognition; electroencephalography; blood volume pulse; convolutional neural network; long short-term memory; temporal multimodal fusion
Online Access: https://ieeexplore.ieee.org/document/9206543/
Description: Emotion recognition using miniaturised wearable physiological sensors has emerged as a revolutionary technology in various applications. However, detecting emotions using the fusion of multiple physiological signals remains a complex and challenging task. When fusing physiological signals, it is essential to consider the ability of different fusion approaches to capture the emotional information contained within and across modalities. Moreover, since physiological signals consist of time-series data, it becomes imperative to consider their temporal structures in the fusion process. In this study, we propose a temporal multimodal fusion approach with a deep learning model to capture the non-linear emotional correlation within and across electroencephalography (EEG) and blood volume pulse (BVP) signals and to improve the performance of emotion classification. The performance of the proposed model is evaluated using two different fusion approaches: early fusion and late fusion. Specifically, we use a convolutional neural network (ConvNet) long short-term memory (LSTM) model to fuse the EEG and BVP signals to jointly learn and explore the highly correlated representation of emotions across modalities, after learning each modality with a single deep network. The performance of the temporal multimodal deep learning model is validated on our dataset collected from smart wearable sensors and is also compared with the results of recent studies. The experimental results show that the temporal multimodal deep learning models, based on early and late fusion approaches, successfully classified human emotions into one of four quadrants of dimensional emotions with accuracies of 71.61% and 70.17%, respectively.
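The paper's implementation is not reproduced in this record, but the architecture the abstract describes (per-modality ConvNet-LSTM branches whose learned representations are fused for four-quadrant classification) can be sketched. Below is a minimal late-fusion sketch in PyTorch; the channel counts (14 EEG, 1 BVP), window length (128 samples), and all layer sizes are assumed for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a ConvNet-LSTM late-fusion
# classifier for windowed EEG and BVP time series. All shapes and layer
# sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """1-D ConvNet feature extractor followed by an LSTM over time."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)

    def forward(self, x):          # x: (batch, channels, time)
        h = self.conv(x)           # (batch, 64, time / 4)
        h = h.transpose(1, 2)      # LSTM expects (batch, time, features)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]             # last hidden state: (batch, hidden)

class LateFusionNet(nn.Module):
    """Each modality is learned by its own ConvNet-LSTM branch; the
    per-modality representations are concatenated and classified into
    the four quadrants of the dimensional (valence-arousal) model."""
    def __init__(self, eeg_channels: int = 14, bvp_channels: int = 1,
                 hidden: int = 64, n_classes: int = 4):
        super().__init__()
        self.eeg = ModalityBranch(eeg_channels, hidden)
        self.bvp = ModalityBranch(bvp_channels, hidden)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg, bvp):
        fused = torch.cat([self.eeg(eeg), self.bvp(bvp)], dim=1)
        return self.head(fused)

# Dummy windows: 14 EEG channels and 1 BVP channel, 128 samples each
# (assumed values), batch of 8.
model = LateFusionNet()
eeg = torch.randn(8, 14, 128)
bvp = torch.randn(8, 1, 128)
logits = model(eeg, bvp)           # (8, 4) quadrant scores
```

An early-fusion variant would instead stack the EEG and BVP channels into a single input tensor and pass it through one shared branch, so cross-modal correlations are learned from the raw signals onward. The 71.61% and 70.17% accuracies reported in the abstract refer to the authors' models trained on their wearable-sensor dataset, not to this sketch.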