CMBF: Cross-Modal-Based Fusion Recommendation Algorithm

A recommendation system is often used to recommend items that may interest users. One of the main challenges is that the scarcity of actual user–item interaction data restricts the performance of recommendation systems. To address this problem, multi-modal technologies have been used to expand the available information. However, existing multi-modal recommendation algorithms extract features from each modality separately and simply concatenate the features of different modalities to predict the recommendation results. This fusion method cannot fully mine the relevance of multi-modal features and loses the relationships between modalities, which degrades the prediction results. In this paper, we propose a Cross-Modal-Based Fusion Recommendation Algorithm (CMBF) that captures both single-modal and cross-modal features. Our algorithm uses a novel cross-modal fusion method to fuse the multi-modal features fully and learn the cross information between different modalities. We evaluate our algorithm on two datasets, MovieLens and Amazon. Experiments show that our method achieves the best performance compared with other recommendation algorithms. We also design an ablation study to confirm that our cross-modal fusion method improves the prediction results.
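CMBF's exact architecture is specified in the paper itself; purely as an illustration of the cross-modal attention idea the abstract describes (each modality's features attending to the other modality's, then fused with the single-modal features), here is a minimal NumPy sketch. The token counts, dimensions, and fusion-by-concatenation below are illustrative assumptions, not the authors' design:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product attention: one modality's features (queries)
    attend to another modality's features (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
visual = rng.normal(size=(4, 8))   # hypothetical: 4 visual tokens, dim 8
textual = rng.normal(size=(6, 8))  # hypothetical: 6 text tokens, dim 8

# cross-modal features: each modality attends to the other
vis_cross = cross_attention(visual, textual)
txt_cross = cross_attention(textual, visual)

# fuse single-modal and cross-modal features (concatenation, for illustration)
vis_fused = np.concatenate([visual, vis_cross], axis=-1)
txt_fused = np.concatenate([textual, txt_cross], axis=-1)
print(vis_fused.shape, txt_fused.shape)  # (4, 16) (6, 16)
```

The point of the cross-attention step is that the fused representation depends on *interactions* between modalities, not just on each modality in isolation, which is the limitation of plain feature concatenation that the abstract criticizes.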

Bibliographic Details
Main Authors: Xi Chen, Yangsiyi Lu, Yuehai Wang, Jianyi Yang
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/21/16/5275
Record ID: doaj-ff03e4349e6a4aa8af02ad5d9849bcd5
ISSN: 1424-8220
DOI: 10.3390/s21165275
Author Affiliation: College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310063, China (all four authors)
Keywords: recommendation systems; multi-modal algorithm; cross-modal fusion; attention mechanism