Video Captioning Based on Channel Soft Attention and Semantic Reconstructor
Video captioning is a popular task that automatically generates a natural-language sentence to describe video content. Previous video captioning works mainly use the encoder–decoder framework and exploit techniques such as attention mechanisms to improve the quality of the generated sentences....
Main Authors: | Zhou Lei, Yiyong Huang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-02-01 |
Series: | Future Internet |
Subjects: | video captioning; channel soft attention; semantic reconstructor; recurrent convolution networks |
Online Access: | https://www.mdpi.com/1999-5903/13/2/55 |
id |
doaj-89bfa7894d3943d8ac3192fd02d5c046 |
record_format |
Article |
spelling |
doaj-89bfa7894d3943d8ac3192fd02d5c046 | 2021-02-24T00:05:14Z | eng | MDPI AG | Future Internet | 1999-5903 | 2021-02-01 | Vol. 13, Iss. 2, Art. 55 | 10.3390/fi13020055 | Video Captioning Based on Channel Soft Attention and Semantic Reconstructor | Zhou Lei; Yiyong Huang (School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China) | Video captioning is a popular task that automatically generates a natural-language sentence to describe video content. Previous video captioning works mainly use the encoder–decoder framework and exploit techniques such as attention mechanisms to improve the quality of the generated sentences. Moreover, most attention mechanisms focus on global features and spatial features; however, global features are usually fully connected features. Recurrent convolution networks (RCNs) receive three-dimensional features as input at each time step, but the temporal structure of each channel, which provides temporal relation information for that channel, has been ignored. In this paper, a video captioning model based on channel soft attention and a semantic reconstructor is proposed, which considers the global information of each channel. In a video feature-map sequence, the same channel at every time step is generated by the same convolutional kernel. We selectively collect the features generated by each convolutional kernel and then input the weighted sum over channels to the RCN at each time step to encode the video representation. Furthermore, a semantic reconstructor is proposed to rebuild semantic vectors during training, ensuring the integrity of semantic information by taking advantage of both the forward (semantic-to-sentence) and backward (sentence-to-semantic) flows. Experimental results on the popular MSVD and MSR-VTT datasets demonstrate the effectiveness and feasibility of our model. | https://www.mdpi.com/1999-5903/13/2/55 | video captioning; channel soft attention; semantic reconstructor; recurrent convolution networks |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Zhou Lei; Yiyong Huang |
spellingShingle |
Zhou Lei; Yiyong Huang; Video Captioning Based on Channel Soft Attention and Semantic Reconstructor; Future Internet; video captioning; channel soft attention; semantic reconstructor; recurrent convolution networks |
author_facet |
Zhou Lei; Yiyong Huang |
author_sort |
Zhou Lei |
title |
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor |
title_short |
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor |
title_full |
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor |
title_fullStr |
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor |
title_full_unstemmed |
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor |
title_sort |
video captioning based on channel soft attention and semantic reconstructor |
publisher |
MDPI AG |
series |
Future Internet |
issn |
1999-5903 |
publishDate |
2021-02-01 |
description |
Video captioning is a popular task that automatically generates a natural-language sentence to describe video content. Previous video captioning works mainly use the encoder–decoder framework and exploit techniques such as attention mechanisms to improve the quality of the generated sentences. Moreover, most attention mechanisms focus on global features and spatial features; however, global features are usually fully connected features. Recurrent convolution networks (RCNs) receive three-dimensional features as input at each time step, but the temporal structure of each channel, which provides temporal relation information for that channel, has been ignored. In this paper, a video captioning model based on channel soft attention and a semantic reconstructor is proposed, which considers the global information of each channel. In a video feature-map sequence, the same channel at every time step is generated by the same convolutional kernel. We selectively collect the features generated by each convolutional kernel and then input the weighted sum over channels to the RCN at each time step to encode the video representation. Furthermore, a semantic reconstructor is proposed to rebuild semantic vectors during training, ensuring the integrity of semantic information by taking advantage of both the forward (semantic-to-sentence) and backward (sentence-to-semantic) flows. Experimental results on the popular MSVD and MSR-VTT datasets demonstrate the effectiveness and feasibility of our model. |
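The channel soft attention described in the abstract can be illustrated with a small sketch. The following NumPy code is a minimal illustration under stated assumptions, not the authors' implementation: the choice of flattened spatial maps as per-channel descriptors, the projection matrices `W_a` and `U_a`, and the scoring vector `v` are all hypothetical names introduced for the example. Per time step, each channel is scored against the current recurrent hidden state, the scores are normalized with a softmax over channels, and the feature maps are re-weighted before being passed on to the RCN.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def channel_soft_attention(features, hidden, W_a, U_a, v):
    """Re-weight each channel of a feature-map sequence by its relevance
    to the current hidden state (illustrative sketch, not the paper's code).

    features: (T, C, H, W) feature maps, one per time step
    hidden:   (D,) current recurrent hidden state
    W_a:      (H*W, K) projection of per-channel descriptors (assumed)
    U_a:      (D, K) projection of the hidden state (assumed)
    v:        (K,) scoring vector (assumed)
    """
    T, C, H, W = features.shape
    # Each channel's flattened spatial map serves as its descriptor.
    desc = features.reshape(T, C, H * W)            # (T, C, H*W)
    # Additive attention score for every channel at every time step.
    e = np.tanh(desc @ W_a + hidden @ U_a) @ v      # (T, C)
    # Soft attention: weights over the C channels at each time step.
    alpha = softmax(e)                              # rows sum to 1
    # Re-weighted feature maps; this is what would feed the RCN encoder.
    weighted = features * alpha[:, :, None, None]
    return weighted, alpha
```

Because the same channel index is produced by the same convolutional kernel at every time step, the softmax over channels effectively decides, per step, how much each kernel's response contributes to the encoded video representation.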
topic |
video captioning; channel soft attention; semantic reconstructor; recurrent convolution networks |
url |
https://www.mdpi.com/1999-5903/13/2/55 |
work_keys_str_mv |
AT zhoulei videocaptioningbasedonchannelsoftattentionandsemanticreconstructor AT yiyonghuang videocaptioningbasedonchannelsoftattentionandsemanticreconstructor |
_version_ |
1724253479321468928 |