Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses


Bibliographic Details
Main Authors: Chiu, Yen-Chia, 邱彥嘉
Other Authors: Wang, Tsai-Pei
Format: Others
Language: zh-TW
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/sr3b2p
Description
Summary: Master's === National Chiao Tung University === Institute of Multimedia Engineering === 105 === This thesis discusses the topic of automatic segmentation and summarization of videos taken with Google Glasses. Using the information from both the video images and the additional sensor data recorded concurrently, we devise methods that automatically divide the video into coherent segments and estimate the importance of the extracted segments. This information then enables the automatic generation of a video summary. The features used include color, image detail, motion, and speech. We then train multi-layer perceptrons for the two tasks (segmentation and importance estimation) based on expert annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance estimation results with those given by multiple users and demonstrate the effectiveness of our approach.
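
The following is a minimal sketch, not the thesis's actual pipeline, of how the two multi-layer perceptrons described in the abstract could be set up: one classifying segment boundaries and one estimating segment importance. All feature names, dimensions, and labels below are placeholder assumptions.

```python
# Sketch of the two MLP tasks described in the abstract (assumptions only):
# per-frame features -> boundary classification and importance regression.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-frame feature vectors combining color, image detail,
# motion, and speech-activity descriptors (random placeholders here).
n_frames, n_features = 5000, 32
X = rng.normal(size=(n_frames, n_features))

# Placeholder "expert annotations": boundary flags and importance scores.
y_boundary = rng.integers(0, 2, size=n_frames)
y_importance = rng.random(n_frames)

Xb_tr, Xb_te, yb_tr, yb_te = train_test_split(
    X, y_boundary, test_size=0.2, random_state=0)
Xi_tr, Xi_te, yi_tr, yi_te = train_test_split(
    X, y_importance, test_size=0.2, random_state=0)

# MLP for segmentation: does a frame start a new segment?
seg_mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                        random_state=0)
seg_mlp.fit(Xb_tr, yb_tr)
print("segmentation accuracy:", seg_mlp.score(Xb_te, yb_te))

# MLP for importance estimation: regress a score per frame/segment.
imp_mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300,
                       random_state=0)
imp_mlp.fit(Xi_tr, yi_tr)
print("importance R^2:", imp_mlp.score(Xi_te, yi_te))

# A summary could then be formed by greedily keeping the highest-scoring
# segments until a target duration is reached.
```

The greedy selection step at the end is likewise an assumption about how the importance scores would be turned into a summary; the thesis itself only states that the segment and importance estimates enable automatic summary generation.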