Parallel Implementation for In-Loop Filter of HEVC on GPU

Master's thesis === National Cheng Kung University === Department of Electrical Engineering === academic year 102 === This thesis proposes a parallel program architecture, running on both the GPU and the CPU, that reduces the execution time of the HEVC in-loop filter. The in-loop filter consists of the de-blocking filter and the sample adaptive offset (SAO). In the de-blocking filter, we exploit edge-level data parallelism to filter block edges concurrently, which allows the quadtree decomposition algorithm and z-scan order processing to be skipped. We divide SAO into statistics calculation, parameter decision, and sample compensation. In the statistics calculation, we use atomic addition and parallel reduction to resolve the problem of many threads accumulating into the same memory locations. For the parameter decision, we estimate the bitrate with an information-estimation function instead of running context-adaptive binary arithmetic coding. Finally, the sample compensation stage corrects samples in parallel based on sample-level data parallelism. Experimental results show that the proposed architecture achieves a 5.0× speedup for the de-blocking filter and a 10.5× speedup for SAO.
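
The SAO statistics step described above is essentially a histogram problem: many threads must add sample differences and counts into a small set of shared bins. The following is a minimal CUDA sketch of that idea, not code from the thesis: each thread classifies one reconstructed sample into a horizontal edge-offset category and accumulates (original − reconstructed) differences and counts with atomic additions, first into per-block shared-memory bins and then into global bins. The kernel name saoEdgeStatsKernel, the bin layout, and the restriction to one edge-offset class are illustrative assumptions; the thesis also combines atomics with a tree-style parallel reduction, which is omitted here.

```cuda
// Minimal sketch, assuming 8-bit samples and a 16x16 thread block; only the
// horizontal (EO class 0) classifier is shown, and the HEVC remapping of the
// raw classifier value to offset categories is omitted.
#include <cuda_runtime.h>
#include <stdint.h>

#define NUM_EO_CATEGORIES 5   // raw edge-offset classifier values 0..4 (2 = flat area)

__global__ void saoEdgeStatsKernel(const uint8_t* rec,   // reconstructed samples
                                   const uint8_t* org,   // original samples
                                   int width, int height, int stride,
                                   int* diffSum,          // global bins: sum of (org - rec) per category
                                   int* count)            // global bins: sample count per category
{
    // Per-block bins in shared memory reduce contention on the global bins.
    __shared__ int sDiff[NUM_EO_CATEGORIES];
    __shared__ int sCount[NUM_EO_CATEGORIES];
    if (threadIdx.y == 0 && threadIdx.x < NUM_EO_CATEGORIES) {
        sDiff[threadIdx.x]  = 0;
        sCount[threadIdx.x] = 0;
    }
    __syncthreads();

    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x > 0 && x < width - 1 && y < height) {
        int c = rec[y * stride + x];
        int a = rec[y * stride + x - 1];
        int b = rec[y * stride + x + 1];
        // Edge-offset classifier: 2 + sign(c - a) + sign(c - b), in 0..4.
        int cat = 2 + ((c > a) - (c < a)) + ((c > b) - (c < b));
        atomicAdd(&sDiff[cat], org[y * stride + x] - c);
        atomicAdd(&sCount[cat], 1);
    }
    __syncthreads();

    // One thread per category flushes the block-local bins to the global bins.
    if (threadIdx.y == 0 && threadIdx.x < NUM_EO_CATEGORIES) {
        atomicAdd(&diffSum[threadIdx.x], sDiff[threadIdx.x]);
        atomicAdd(&count[threadIdx.x],   sCount[threadIdx.x]);
    }
}
```

A picture-wide launch such as `dim3 block(16, 16); dim3 grid((width + 15) / 16, (height + 15) / 16);` would accumulate one set of bins for the whole picture; per-CTB statistics, as SAO parameter decision actually needs, would index the bins by CTB instead.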

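The sample-compensation stage maps naturally to one GPU thread per sample. Below is a minimal CUDA sketch of that mapping, assuming SAO band offset with 8-bit samples; the kernel name saoBandCompensateKernel, the bandOffsets array, and the in-place filtering are illustrative assumptions rather than the thesis implementation, which also handles edge-offset classes and per-CTB parameters.

```cuda
// Minimal sketch: one thread per sample adds the offset selected by the
// sample's band and clips the result back to the 8-bit range.
#include <cuda_runtime.h>
#include <stdint.h>

__global__ void saoBandCompensateKernel(uint8_t* rec,              // reconstructed picture, filtered in place
                                        const int8_t* bandOffsets, // 32 offsets, one per band of 8 sample values
                                        int width, int height, int stride)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx  = y * stride + x;
    int band = rec[idx] >> 3;                  // 256 sample values / 32 bands = 8 values per band
    int v    = rec[idx] + bandOffsets[band];   // apply the offset chosen for this band
    rec[idx] = (uint8_t)min(max(v, 0), 255);   // clip to the 8-bit range
}
```

Launching one 16×16 thread block per 16×16 tile of the picture gives the sample-level data parallelism the abstract refers to, with no dependence between samples during compensation.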

Bibliographic Details
Chinese Title: 高效能視訊編碼標準之環內濾波器在圖形處理器上平行實現
Main Authors: Yi-Shian Shie (謝毅賢)
Other Authors: Chih-Hung Kuo (郭致宏)
Format: Thesis, 109 pages
Language: zh-TW (Traditional Chinese)
Published: 2014
Online Access: http://ndltd.ncl.edu.tw/handle/84174104920642528922