Robust Multimodal Emotion Recognition from Conversation with Transformer-Based Crossmodality Fusion

Bibliographic Details
Published in: Sensors
Main Authors: Baijun Xie, Mariia Sidulova, Chung Hyuk Park
Format: Article
Language: English
Published: MDPI AG 2021-07-01
Online Access: https://www.mdpi.com/1424-8220/21/14/4913
Description
Summary: Decades of scientific research have been conducted on developing and evaluating methods for automated emotion recognition. As technology grows at an exponential pace, a wide range of emerging applications require recognition of the user's emotional state. This paper investigates a robust approach for multimodal emotion recognition in conversation. Three separate models for the audio, video, and text modalities are constructed and fine-tuned on the MELD dataset, and a transformer-based crossmodality fusion with the EmbraceNet architecture is employed to estimate the emotion. The proposed multimodal network architecture achieves up to 65% accuracy, significantly surpassing every unimodal model. We apply multiple evaluation techniques to show that our model is robust and can even outperform state-of-the-art models on the MELD dataset.
ISSN: 1424-8220
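
The abstract names the building blocks of the approach (per-modality models for audio, video, and text fine-tuned on MELD, a transformer-based crossmodality fusion, and the EmbraceNet architecture) without showing how they connect. The PyTorch sketch below is an illustration under stated assumptions only: the feature dimensions, layer counts, and the simplified EmbraceNet-style embracement step are placeholders chosen for the example, not the authors' published implementation.

# Illustrative sketch only: a simplified crossmodal fusion in the spirit of the
# paper's description (transformer encoder over per-modality embeddings followed
# by an EmbraceNet-style stochastic combination). All dimensions and the upstream
# audio/video/text encoders are assumptions, not the authors' code.
import torch
import torch.nn as nn

class CrossmodalFusion(nn.Module):
    def __init__(self, audio_dim=128, video_dim=512, text_dim=768,
                 embed_dim=256, num_classes=7):
        super().__init__()
        # Docking layers: project each modality into a shared embedding space.
        self.dock_audio = nn.Linear(audio_dim, embed_dim)
        self.dock_video = nn.Linear(video_dim, embed_dim)
        self.dock_text = nn.Linear(text_dim, embed_dim)
        # Transformer encoder attends across the three modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, audio, video, text):
        # Stack the docked modalities as a 3-token sequence: (batch, 3, embed_dim).
        tokens = torch.stack([self.dock_audio(audio),
                              self.dock_video(video),
                              self.dock_text(text)], dim=1)
        fused = self.encoder(tokens)
        if self.training:
            # EmbraceNet-style embracement: for each embedding dimension,
            # randomly pick one modality's value (equal selection probabilities).
            batch, n_mod, dim = fused.shape
            choice = torch.randint(n_mod, (batch, dim), device=fused.device)
            mask = nn.functional.one_hot(choice, n_mod).permute(0, 2, 1).float()
            embraced = (fused * mask).sum(dim=1)
        else:
            # At inference, average the modality tokens instead of sampling.
            embraced = fused.mean(dim=1)
        return self.classifier(embraced)

# Example usage with random features standing in for per-modality encoder outputs.
model = CrossmodalFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 7]) -- 7 MELD emotion classes

Treating each docked modality as a single token lets the transformer attend across modalities before the embracement step assigns each embedding dimension to one modality at random, which mirrors EmbraceNet's idea of combining modalities dimension by dimension rather than by simple concatenation.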