GPU Virtualization Support in Cloud System
Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 101
Main Authors: | Chih-Yuan Yeh 葉智淵 |
---|---|
Other Authors: | 劉邦鋒 |
Format: | Others |
Language: | en_US |
Published: | 2013 |
Online Access: | http://ndltd.ncl.edu.tw/handle/70577441737788874297 |
id | ndltd-TW-101NTU05392004 |
---|---|
record_format | oai_dc |
spelling | ndltd-TW-101NTU053920042016-03-23T04:13:52Z http://ndltd.ncl.edu.tw/handle/70577441737788874297 GPU Virtualization Support in Cloud System 雲端作業系統的圖形處理器虛擬化支援系統 (GPU Virtualization Support System for a Cloud Operating System) Chih-Yuan Yeh 葉智淵 Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 101 === 劉邦鋒 2013 degree thesis (學位論文) ; thesis 23 en_US |
collection | NDLTD |
language | en_US |
format | Others |
sources | NDLTD |
description | Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 101 === Nowadays a graphics processing unit (GPU) delivers much better performance than a CPU does. As a result, GPUs are becoming increasingly important in high-performance computing (HPC) because of their tremendous computing power. At the same time, cloud computing is becoming increasingly popular, and the HPC community will expect cloud computing companies to provide virtual GPU services, just like the virtual CPUs, virtual disks, and virtualized networks they have been providing. This business opportunity makes GPU computing more economical, because users can rent GPUs to fit their computing needs for relatively little money rather than buying them.
The current practice in virtual GPU rental services is to bind a GPU to a virtual machine statically. The problem with this approach is that virtual machines cannot share a GPU, which violates the "multi-tenancy" principle of cloud computing. The goal of this thesis is to design a cloud computing system that combines the CUDA programs from every virtual machine and executes them concurrently. This supports GPU "time-sharing", which is crucial to providing an economical computing service.
This thesis describes a method that uses NVIDIA Fermi-architecture GPUs for GPU virtualization. The key idea in our design is that a Fermi-architecture GPU supports concurrent kernel execution, allowing up to 16 CUDA kernels to run at the same time. We virtualize the GPU by collecting GPU programs from different virtual machines into a single virtual machine, where the programs are run by the GPU. Our approach reduces the compilation and execution time of the combined programs, and also reduces the average waiting time of each CUDA program from the different virtual machines.
We conduct experiments to evaluate the efficiency of our GPU virtualization. Preliminary results are satisfactory.
|
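The multiplexing design summarized in the abstract can be sketched at a high level. The sketch below is purely illustrative host-side scheduling logic, not code from the thesis (all class and method names are hypothetical): CUDA launch requests from several guest VMs are funneled into a single dispatcher that admits at most 16 at a time, mirroring Fermi's 16-concurrent-kernel limit. A real implementation would issue each admitted kernel into its own CUDA stream; here we only record the dispatch.

```python
import threading
import queue

MAX_CONCURRENT_KERNELS = 16  # Fermi supports up to 16 concurrently executing kernels


class GpuMultiplexer:
    """Hypothetical host-side dispatcher that gathers kernel launch
    requests from guest VMs and runs at most 16 of them at once."""

    def __init__(self):
        self.requests = queue.Queue()                       # pending (vm_id, kernel) pairs
        self.slots = threading.Semaphore(MAX_CONCURRENT_KERNELS)
        self.completed = []                                 # dispatched kernels, for inspection
        self.lock = threading.Lock()

    def submit(self, vm_id, kernel_name):
        """Called by a guest VM's front end to enqueue a kernel launch."""
        self.requests.put((vm_id, kernel_name))

    def run_all(self):
        """Back end: dispatch every queued kernel, never exceeding the
        concurrency limit enforced by the semaphore."""
        workers = []
        while not self.requests.empty():
            vm_id, kernel_name = self.requests.get()
            self.slots.acquire()                            # wait for a free kernel slot
            t = threading.Thread(target=self._launch, args=(vm_id, kernel_name))
            t.start()
            workers.append(t)
        for t in workers:
            t.join()

    def _launch(self, vm_id, kernel_name):
        try:
            # A real implementation would launch the kernel into its own
            # CUDA stream here; this sketch only records the dispatch.
            with self.lock:
                self.completed.append((vm_id, kernel_name))
        finally:
            self.slots.release()                            # free the slot for the next kernel


# Four guest VMs each submit five kernels to the shared dispatcher.
mux = GpuMultiplexer()
for vm in range(4):
    for k in range(5):
        mux.submit(vm, f"kernel_{k}")
mux.run_all()
print(len(mux.completed))  # → 20
```

The semaphore plays the role of the hardware limit: when all 16 slots are busy, further launches queue up, which is exactly the "time-sharing" behavior the thesis aims for across tenants.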
author2 | 劉邦鋒 |
author_facet | 劉邦鋒 Chih-Yuan Yeh 葉智淵 |
author | Chih-Yuan Yeh 葉智淵 |
spellingShingle | Chih-Yuan Yeh 葉智淵 GPU Virtualization Support in Cloud System |
author_sort | Chih-Yuan Yeh |
title | GPU Virtualization Support in Cloud System |
title_short | GPU Virtualization Support in Cloud System |
title_full | GPU Virtualization Support in Cloud System |
title_fullStr | GPU Virtualization Support in Cloud System |
title_full_unstemmed | GPU Virtualization Support in Cloud System |
title_sort | gpu virtualization support in cloud system |
publishDate | 2013 |
url | http://ndltd.ncl.edu.tw/handle/70577441737788874297 |
work_keys_str_mv | AT chihyuanyeh gpuvirtualizationsupportincloudsystem AT yèzhìyuān gpuvirtualizationsupportincloudsystem AT chihyuanyeh yúnduānzuòyèxìtǒngdetúxíngchùlǐqìxūnǐhuàzhīyuánxìtǒng AT yèzhìyuān yúnduānzuòyèxìtǒngdetúxíngchùlǐqìxūnǐhuàzhīyuánxìtǒng |
_version_ | 1718211114892787712 |