Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses


Bibliographic Details
Main Authors: Chiu, Yen-Chia, 邱彥嘉
Other Authors: Wang, Tsai-Pei
Format: Others
Language: zh-TW
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/sr3b2p
id ndltd-TW-105NCTU5641002
record_format oai_dc
spelling ndltd-TW-105NCTU5641002 2019-05-15T23:09:04Z http://ndltd.ncl.edu.tw/handle/sr3b2p Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses 基於Google Glass之影片自動分段與精簡方法 Chiu, Yen-Chia 邱彥嘉 Master's National Chiao Tung University Institute of Multimedia Engineering 105 This thesis discusses automatic segmentation and summarization of videos taken with Google Glasses. Using information from both the video images and the additional sensor data recorded concurrently, we devise methods that automatically divide a video into coherent segments and estimate the importance of the extracted segments. This information then enables automatic generation of a video summary. The features used include color, image detail, motion, and speech. We train multi-layer perceptrons for the two tasks (segmentation and importance estimation) using expert annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance-estimation results against those given by multiple users, and we demonstrate the effectiveness of our approach. Wang, Tsai-Pei 王才沛 2016 degree thesis ; thesis 89 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === National Chiao Tung University === Institute of Multimedia Engineering === 105 === This thesis discusses automatic segmentation and summarization of videos taken with Google Glasses. Using information from both the video images and the additional sensor data recorded concurrently, we devise methods that automatically divide a video into coherent segments and estimate the importance of the extracted segments. This information then enables automatic generation of a video summary. The features used include color, image detail, motion, and speech. We train multi-layer perceptrons for the two tasks (segmentation and importance estimation) using expert annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance-estimation results against those given by multiple users, and we demonstrate the effectiveness of our approach.
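The description above outlines a pipeline: per-frame features (color, image detail, motion, speech) are extracted, and a classifier decides where segment boundaries fall. The thesis trains a multi-layer perceptron for that decision; the sketch below is only an illustration of the pipeline shape, with a plain Euclidean-distance threshold standing in for the trained classifier. The feature values and the threshold are made-up assumptions, not data from the thesis.

```python
# Illustrative sketch of feature-based video segmentation: a boundary is
# declared wherever consecutive frames' feature vectors differ strongly.
# A simple distance threshold stands in for the thesis's trained MLP.
import math

def segment_boundaries(features, threshold=0.5):
    """Return the frame indices where a new segment starts."""
    boundaries = []
    for i in range(1, len(features)):
        prev, cur = features[i - 1], features[i]
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, cur)))
        if dist > threshold:
            boundaries.append(i)
    return boundaries

# Hypothetical 4-dimensional feature vectors for six frames
# (color, image detail, motion, speech) -- values are invented.
frames = [
    (0.2, 0.1, 0.0, 0.0),
    (0.2, 0.1, 0.1, 0.0),
    (0.9, 0.8, 0.7, 1.0),   # abrupt change -> segment boundary
    (0.9, 0.8, 0.6, 1.0),
    (0.1, 0.2, 0.0, 0.0),   # abrupt change -> segment boundary
    (0.1, 0.2, 0.1, 0.0),
]
print(segment_boundaries(frames))  # -> [2, 4]
```

In the thesis's actual method, an MLP trained on expert annotations replaces the fixed threshold, and a second MLP scores each resulting segment's importance for summary selection.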
author2 Wang, Tsai-Pei
author_facet Wang, Tsai-Pei
Chiu, Yen-Chia
邱彥嘉
author Chiu, Yen-Chia
邱彥嘉
spellingShingle Chiu, Yen-Chia
邱彥嘉
Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
author_sort Chiu, Yen-Chia
title Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
title_short Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
title_full Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
title_fullStr Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
title_full_unstemmed Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
title_sort automatic methods for segmenting and summarizing videos taken with google glasses
publishDate 2016
url http://ndltd.ncl.edu.tw/handle/sr3b2p
work_keys_str_mv AT chiuyenchia automaticmethodsforsegmentingandsummarizingvideostakenwithgoogleglasses
AT qiūyànjiā automaticmethodsforsegmentingandsummarizingvideostakenwithgoogleglasses
AT chiuyenchia jīyúgoogleglasszhīyǐngpiànzìdòngfēnduànyǔjīngjiǎnfāngfǎ
AT qiūyànjiā jīyúgoogleglasszhīyǐngpiànzìdòngfēnduànyǔjīngjiǎnfāngfǎ
_version_ 1719141065798189056