Sentiment Analysis of Social Media via Multimodal Feature Fusion

In recent years, with the growing popularity of social media, users increasingly express their feelings and opinions through combinations of pictures and text, making multimodal text-image data the fastest-growing content type. Much of the information users post on social media carries clear sentiment, and multimodal sentiment analysis has become an important research field. Previous studies have primarily extracted text and image features separately and then combined them for sentiment classification, often ignoring the interaction between text and images. This paper therefore proposes a new multimodal sentiment analysis model. The model first eliminates noise in the textual data and extracts the most important image features. Then, in an attention-based feature-fusion stage, the text and image modalities symmetrically learn internal features from each other, and the fused features are applied to sentiment classification tasks.


Bibliographic Details
Main Authors: Kang Zhang, Yushui Geng, Jing Zhao, Jianxin Liu, Wenxiao Li
Format: Article
Language: English
Published: MDPI AG, 2020-12-01
Series: Symmetry
Subjects: multimodal sentiment analysis, feature fusion, deep learning, attention mechanism
Online Access: https://www.mdpi.com/2073-8994/12/12/2010
DOI: 10.3390/sym12122010
ISSN: 2073-8994
Author Affiliations: Kang Zhang, Yushui Geng, Jing Zhao, Jianxin Liu, Wenxiao Li: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
The experimental results on two common multimodal sentiment datasets demonstrate the effectiveness of the proposed model.
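The abstract describes a symmetric, attention-based fusion in which each modality attends to the other. The authors' implementation is not given here; the general idea can be sketched with plain NumPy, where all function names, feature dimensions, and the mean-pooled concatenation are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    """Scaled dot-product attention: each `query` vector attends
    over all `context` vectors and returns an attended summary."""
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)    # (n_query, n_context)
    return softmax(scores, axis=-1) @ context  # (n_query, d)

def symmetric_fusion(text_feats, image_feats):
    """Symmetric fusion: text attends to images AND images attend
    to text; the fused vector concatenates both attended averages."""
    text_ctx = cross_attention(text_feats, image_feats)   # text queries images
    image_ctx = cross_attention(image_feats, text_feats)  # images query text
    return np.concatenate([text_ctx.mean(axis=0), image_ctx.mean(axis=0)])

rng = np.random.default_rng(0)
text_feats = rng.standard_normal((5, 64))   # e.g. 5 word/token vectors
image_feats = rng.standard_normal((7, 64))  # e.g. 7 image-region vectors
fused = symmetric_fusion(text_feats, image_feats)
print(fused.shape)  # (128,) — ready for a downstream sentiment classifier
```

The symmetry lies in applying the same attention operation in both directions, so neither modality is treated purely as context for the other.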
Collection: DOAJ