Dependency Aware GPGPU Kernel Scheduling

Bibliographic Details
Main Authors: Wei-Cheng Liao, 廖偉丞
Other Authors: Yuan-Shin Hwang
Format: Others
Language: zh-TW
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/5xsqa9
id ndltd-TW-106NTUS5392003
record_format oai_dc
spelling ndltd-TW-106NTUS53920032019-05-15T23:46:36Z http://ndltd.ncl.edu.tw/handle/5xsqa9 Dependency Aware GPGPU Kernel Scheduling 改善相依性程式之排程法 Wei-Cheng Liao 廖偉丞 Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, 106   Modern GPUs are widely used in areas such as image processing, deep learning, and artificial intelligence, and related research continues to appear. Deep learning and AI programs typically do not run as a single kernel; instead, they divide their work among several sub-kernels with data dependencies between them. We categorize these as dependent kernels.   In current GPU programming, dependent kernels are executed sequentially because of their data dependencies, which reduces GPU parallelism. In addition, the GPU usually writes processed data back to memory even though a dependent kernel will read that data again, causing unnecessary memory accesses.   We propose a method that relaxes the rule that dependent kernels must execute strictly in sequence: the kernel scheduling policy in a GPU simulator is modified, and the shared data is kept in cache memory with an appropriate write-back policy. Yuan-Shin Hwang 黃元欣 2017 學位論文 ; thesis 41 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's thesis === National Taiwan University of Science and Technology === Department of Computer Science and Information Engineering === 106 ===   Modern GPUs are widely used in areas such as image processing, deep learning, and artificial intelligence, and related research continues to appear. Deep learning and AI programs typically do not run as a single kernel; instead, they divide their work among several sub-kernels with data dependencies between them. We categorize these as dependent kernels.   In current GPU programming, dependent kernels are executed sequentially because of their data dependencies, which reduces GPU parallelism. In addition, the GPU usually writes processed data back to memory even though a dependent kernel will read that data again, causing unnecessary memory accesses.   We propose a method that relaxes the rule that dependent kernels must execute strictly in sequence: the kernel scheduling policy in a GPU simulator is modified, and the shared data is kept in cache memory with an appropriate write-back policy.
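The sequential dependent-kernel pattern described in the abstract can be illustrated with a minimal CUDA sketch (an illustration only, not code from the thesis; the producer/consumer kernel names are hypothetical). Because the second kernel reads the buffer the first one writes, the two launches on the same stream are serialized, and the intermediate buffer is written back to device memory in between, which is the behavior the proposed scheduling and write-back changes target.

```cuda
#include <cuda_runtime.h>

// Hypothetical producer kernel: fills buf with values.
__global__ void producer(float *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = (float)i;
}

// Hypothetical consumer kernel: reads producer's output (data dependency).
__global__ void consumer(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *buf, *out;
    cudaMalloc(&buf, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    dim3 block(256), grid((n + 255) / 256);
    // The two kernels are dependent: consumer reads buf written by producer.
    // On the same stream they therefore execute strictly in sequence, and
    // buf travels through device memory between the launches.
    producer<<<grid, block>>>(buf, n);
    consumer<<<grid, block>>>(buf, out, n);
    cudaDeviceSynchronize();

    cudaFree(buf);
    cudaFree(out);
    return 0;
}
```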
author2 Yuan-Shin Hwang
author_facet Yuan-Shin Hwang
Wei-Cheng Liao
廖偉丞
author Wei-Cheng Liao
廖偉丞
spellingShingle Wei-Cheng Liao
廖偉丞
Dependency Aware GPGPU Kernel Scheduling
author_sort Wei-Cheng Liao
title Dependency Aware GPGPU Kernel Scheduling
title_short Dependency Aware GPGPU Kernel Scheduling
title_full Dependency Aware GPGPU Kernel Scheduling
title_fullStr Dependency Aware GPGPU Kernel Scheduling
title_full_unstemmed Dependency Aware GPGPU Kernel Scheduling
title_sort dependency aware gpgpu kernel scheduling
publishDate 2017
url http://ndltd.ncl.edu.tw/handle/5xsqa9
work_keys_str_mv AT weichengliao dependencyawaregpgpukernelscheduling
AT liàowěichéng dependencyawaregpgpukernelscheduling
AT weichengliao gǎishànxiāngyīxìngchéngshìzhīpáichéngfǎ
AT liàowěichéng gǎishànxiāngyīxìngchéngshìzhīpáichéngfǎ
_version_ 1719154153158082560