Video Transcoding Algorithm through Visual Attention Model Analysis for H.264/AVC

Master's thesis === National Sun Yat-sen University === Department of Electrical Engineering === Academic year 96 (2007) === The proposed transcoding system consists of spatial-resolution and temporal-resolution reduction methods driven by visual attention model analysis. In the spatial domain, the visual attention model locates the visually attended region of each frame; the bitrate can then be reduced by encoding only this attention region, which conveys the same visual content as the full original frame. In the temporal domain, a frame-skipping algorithm reduces the temporal resolution to meet the channel's target bitrate: the visual attention model measures each frame's complexity to decide whether it should be skipped, so that significant frames are preserved and jerky motion is avoided. Combined with a motion vector composition algorithm, the transcoding process is sped up with only slight quality degradation.
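
To make the spatial-domain step concrete, here is a minimal sketch of attention-region extraction. The abstract does not specify the attention model itself, so the contrast-based saliency map, the `thresh` parameter, and the function name `attention_region` below are illustrative assumptions; only the overall flow (saliency map, salient mask, cropped region) follows the description above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def attention_region(frame, thresh=0.6):
    """Crop a frame to its visual attention region.

    frame: 2-D grayscale array with values in [0, 1].
    The centre-surround contrast map below is only a stand-in for
    the thesis's visual attention model (hypothetical choice).
    """
    # Saliency proxy: how much each pixel deviates from its local mean.
    saliency = np.abs(frame - uniform_filter(frame, size=31))
    # Pixels within thresh of the saliency peak form the attention mask.
    mask = saliency >= thresh * saliency.max()
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                       # flat frame: keep everything
        return frame
    # Bounding box of salient pixels = the attention region to encode.
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```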

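The temporal-domain decision can be sketched the same way: rank frames by attention-model complexity and keep the most significant ones, capping consecutive skips so motion never turns jerky. The `keep_ratio` knob (e.g. derived as target bitrate over source bitrate) and the `max_gap` cap are assumptions, not the thesis's exact criterion.

```python
def select_frames(complexity, keep_ratio, max_gap=3):
    """Choose which frame indices to keep when reducing frame rate.

    complexity: per-frame scores from the attention model (higher =
    more significant).  keep_ratio: fraction of frames the channel's
    target bitrate allows.  max_gap caps consecutive skips to avoid
    jerky motion.  All three knobs are illustrative assumptions.
    """
    if not complexity:
        return []
    n_keep = min(len(complexity), max(1, round(keep_ratio * len(complexity))))
    # Threshold chosen so roughly n_keep frames rank above it.
    thresh = sorted(complexity, reverse=True)[n_keep - 1]
    kept, gap = [], 0
    for i, score in enumerate(complexity):
        if score >= thresh or gap >= max_gap:
            kept.append(i)
            gap = 0
        else:
            gap += 1
    return kept
```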

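Finally, the speed-up mentioned at the end of the abstract comes from composing incoming motion vectors instead of re-running motion estimation: when a frame's reference has been skipped, the block's vector is chained across the skipped frames back to the last kept frame. Dominant-vector tracking, shown below, is one common way to do this composition; the abstract does not say which variant the thesis uses, so treat the details as assumptions.

```python
def compose_mv(mv_fields, bx, by, block=16):
    """Re-anchor one block's motion vector across skipped frames.

    mv_fields: one MV field per hop back to the last kept frame;
    mv_fields[i][r][c] = (dx, dy) of block (r, c) into the previous
    frame.  Tracks the block centre and accumulates the vector along
    the chain (dominant-vector shortcut, an assumed variant).
    """
    x = bx * block + block // 2      # block centre, in pixels
    y = by * block + block // 2
    total_dx = total_dy = 0
    for field in mv_fields:          # current frame first, then back
        r = min(max(y // block, 0), len(field) - 1)
        c = min(max(x // block, 0), len(field[0]) - 1)
        dx, dy = field[r][c]
        total_dx += dx
        total_dy += dy
        x += dx                      # follow the motion into the
        y += dy                      # previous (possibly skipped) frame
    return total_dx, total_dy
```
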
Bibliographic Details
Main Author: Shih-meng Chen (陳世孟)
Other Authors: Chia-Hung Yeh (葉家宏)
Original Title: 應用於H.264國際視訊編碼標準基於視覺注目性分析之視訊轉換編碼演算法 (A Video Transcoding Algorithm Based on Visual Attention Analysis for the H.264 International Video Coding Standard)
Format: Thesis, 99 pages
Language: English (en_US)
Published: 2008
Online Access: http://ndltd.ncl.edu.tw/handle/16035153361160593462