Attention Embedded Spatio-Temporal Network for Video Salient Object Detection

Bibliographic Details
Main Authors: Lili Huang, Pengxiang Yan, Guanbin Li, Qing Wang, Liang Lin
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8896915/
Description
Summary: The main challenge in video salient object detection is modeling object motion and dramatic changes in appearance contrast. In this work, we propose an attention embedded spatio-temporal network (ASTN) to adaptively exploit the diverse factors that influence dynamic saliency prediction within a unified framework. To compensate for object movement, we introduce a flow-guided spatial learning (FGSL) module that directly captures effective motion information, in the form of attention, based on optical flow. However, optical flow represents the motion of all moving objects, including non-salient objects displaced by large camera motion and subtle background changes. Using the flow-guided attention map alone therefore lets all moving objects, rather than only the salient ones, influence the spatial saliency, resulting in unstable and temporally inconsistent saliency maps. To further enhance temporal coherence, we develop an attentive bidirectional gated recurrent unit (AB-GRU) module that adaptively exploits sequential feature evolution. With this AB-GRU, we further refine the spatio-temporal feature representation by incorporating an accommodative attention mechanism. Experimental results demonstrate that our model achieves superior performance on video salient object detection. Moreover, an experiment extending the method to unsupervised video object segmentation further demonstrates its generalization ability and stability.
ISSN: 2169-3536
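
The abstract describes two components: a flow-guided spatial learning (FGSL) module that turns optical flow into a spatial attention map, and an attentive bidirectional GRU (AB-GRU) that refines per-frame features over time. Below is a minimal PyTorch sketch of how such modules could be wired together. All module names, channel sizes, and the residual-modulation detail are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only: module structure and hyperparameters are assumed,
# not taken from the paper's official code.
import torch
import torch.nn as nn


class FlowGuidedSpatialLearning(nn.Module):
    """Hypothetical FGSL module: derives a spatial attention map from
    optical flow and uses it to modulate appearance features."""

    def __init__(self):
        super().__init__()
        # Optical flow has 2 channels (dx, dy); squeeze it to one attention map.
        self.flow_to_attn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, appearance_feat, flow):
        # appearance_feat: (B, C, H, W); flow: (B, 2, H, W)
        attn = self.flow_to_attn(flow)  # (B, 1, H, W)
        # Residual modulation keeps static salient regions from being zeroed out.
        return appearance_feat * (1.0 + attn)


class AttentiveBiGRU(nn.Module):
    """Hypothetical AB-GRU module: a bidirectional GRU over per-frame
    feature vectors, with a learned gate re-weighting each time step."""

    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn_gate = nn.Sequential(
            nn.Linear(2 * hidden, 2 * hidden),
            nn.Sigmoid(),
        )

    def forward(self, seq_feat):
        # seq_feat: (B, T, feat_dim) pooled per-frame features
        h, _ = self.gru(seq_feat)  # (B, T, 2 * hidden)
        return h * self.attn_gate(h)  # "accommodative" attention re-weighting


if __name__ == "__main__":
    frame_feat = torch.randn(2, 256, 32, 32)  # backbone features for 2 frames
    flow = torch.randn(2, 2, 32, 32)          # matching optical flow fields
    refined = FlowGuidedSpatialLearning()(frame_feat, flow)

    clip_feat = torch.randn(2, 5, 256)        # (batch, frames, features)
    temporal = AttentiveBiGRU()(clip_feat)
    print(refined.shape, temporal.shape)      # (2, 256, 32, 32) (2, 5, 256)
```

The residual form `feat * (1 + attn)` reflects the abstract's caveat that flow-guided attention alone can suppress static salient regions; the gated bidirectional GRU then smooths per-frame predictions in both temporal directions, which is one plausible reading of how the AB-GRU enforces temporal coherence.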