Co-saliency Detection Using Saliency Propagation and Low-rank Constraint

Bibliographic Details
Main Authors: TSENG, SHIN-HUNG, 曾心虹
Other Authors: LEOU, JIN-JANG
Format: Others
Language:en_US
Published: 2018
Online Access:http://ndltd.ncl.edu.tw/handle/y389yy
Description
Summary: Master's thesis === National Chung Cheng University === Graduate Institute of Computer Science and Information Engineering === Academic year 106 === In the field of computer vision, salient object detection has a broad range of applications, such as object recognition, object segmentation, image retrieval, and image compression. Most existing salient object detection approaches detect salient objects within a single image. In recent years, several studies have instead detected similar salient objects across a set of similar images, a task that has attracted considerable attention and led to several successful applications. Because it highlights the similar salient foreground objects shared by similar images, it is useful in applications such as common visual pattern discovery, co-recognition, and object co-segmentation. In this study, co-saliency detection using saliency propagation and a low-rank constraint is proposed. First, a smoothing procedure and superpixel segmentation are applied to each image in the set of similar images. Second, two existing state-of-the-art saliency detection approaches are performed at three segmentation scales, and a multiscale saliency map fusion approach based on a low-rank constraint is employed to generate a more reliable initial saliency map for co-saliency detection. Furthermore, a coherence map is computed for each image to account for the coherence among all similar images. Then, two co-saliency propagation approaches are performed to estimate co-saliency maps for each image; both take into consideration the saliency cue within a single image and the coherence cue among all similar images. Third, the estimated co-saliency maps are fused into a single co-saliency map for each image. Finally, each fused co-saliency map is refined by the foci region(s) of visual attention to obtain the final co-saliency map. Experimental results show that the proposed approach outperforms three comparison approaches.
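The multiscale fusion step described above can be illustrated with a minimal sketch: stack the saliency maps obtained at different scales as columns of a matrix and keep only its rank-1 (low-rank) component, so that structure shared across scales survives while scale-specific noise is suppressed. This is only an assumed, simplified reading of a "low-rank constraint"; the thesis's actual formulation (and the two saliency detectors it fuses) may differ, and `fuse_multiscale_saliency` is a hypothetical helper, not code from the thesis.

```python
import numpy as np

def fuse_multiscale_saliency(saliency_maps):
    """Fuse same-sized saliency maps via a rank-1 (low-rank) approximation.

    Hypothetical sketch: each H x W map becomes one column of a matrix M;
    the leading SVD component of M captures the saliency pattern shared
    across scales, and its per-pixel average is taken as the fused map.
    """
    h, w = saliency_maps[0].shape
    # M has one column per scale, one row per pixel: shape (H*W, K).
    M = np.stack([m.ravel() for m in saliency_maps], axis=1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Best rank-1 reconstruction of M (shared, low-rank component).
    M1 = s[0] * np.outer(U[:, 0], Vt[0])
    fused = np.abs(M1.mean(axis=1)).reshape(h, w)
    # Normalize to [0, 1] for use as an initial saliency map.
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

# Usage: three noisy observations of the same Gaussian "salient blob"
# stand in for saliency maps computed at three segmentation scales.
yy, xx = np.mgrid[0:32, 0:32]
blob = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 40.0)
noise = np.random.default_rng(0)
maps = [np.clip(blob + 0.05 * noise.standard_normal(blob.shape), 0.0, 1.0)
        for _ in range(3)]
fused = fuse_multiscale_saliency(maps)
```

Using the rank-1 component rather than a plain average reflects the idea behind the low-rank constraint: a reliable saliency pattern should be consistent across scales, so it lies in a low-dimensional subspace of the stacked maps.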