A model of dynamic compilation for heterogeneous compute platforms
Main Author: | Kerr, Andrew |
---|---|
Published: | Georgia Institute of Technology, 2013 |
Subjects: | Dynamic compilation; GPU computing; CUDA; OpenCL; SIMD; Vector; Multicore; Parallel computing; Parallel computers; Parallel programs (Computer programs); Heterogeneous computing; Parallel processing (Electronic computers); High performance computing |
Online Access: | http://hdl.handle.net/1853/47719 |
id |
ndltd-GATECH-oai-smartech.gatech.edu-1853-47719 |
record_format |
oai_dc |
spelling |
ndltd-GATECH-oai-smartech.gatech.edu-1853-47719 Kerr, Andrew. A model of dynamic compilation for heterogeneous compute platforms. Georgia Institute of Technology. Dissertation, 2012-12-10 (deposited 2013-06-15). http://hdl.handle.net/1853/47719 |
collection |
NDLTD |
sources |
NDLTD |
topic |
Dynamic compilation; GPU computing; CUDA; OpenCL; SIMD; Vector; Multicore; Parallel computing; Parallel computers; Parallel programs (Computer programs); Heterogeneous computing; Parallel processing (Electronic computers); High performance computing |
spellingShingle |
Dynamic compilation; GPU computing; CUDA; OpenCL; SIMD; Vector; Multicore; Parallel computing; Parallel computers; Parallel programs (Computer programs); Heterogeneous computing; Parallel processing (Electronic computers); High performance computing. Kerr, Andrew. A model of dynamic compilation for heterogeneous compute platforms |
description |
Trends in computer engineering place renewed emphasis on increasing parallelism and heterogeneity.
The rise of parallelism adds another dimension to the challenge of portability, as
different processors support different notions of parallelism, whether vector parallelism executing
in a few threads on multicore CPUs or large-scale thread hierarchies on GPUs. Software therefore
faces obstacles to portability and efficient execution that go beyond differences in instruction
sets: the underlying execution models of radically different architectures may not be compatible.
Dynamic compilation applied to data-parallel heterogeneous architectures presents an abstraction
layer decoupling program representations from optimized binaries, thus enabling portability without
encumbering performance. This dissertation proposes several techniques that extend dynamic
compilation to data-parallel execution models. These contributions include:
- characterization of data-parallel workloads
- machine-independent application metrics
- framework for performance modeling and prediction
- execution model translation for vector processors
- region-based compilation and scheduling
We evaluate these claims through the development of a novel dynamic compilation framework,
GPU Ocelot, with which we execute real-world GPU computing workloads. GPU Ocelot enables
these workloads to run efficiently on multicore CPUs, GPUs, and a functional simulator. We
show that data-parallel workloads exhibit performance scaling, take advantage of vector
instruction set extensions, and effectively exploit data locality via scheduling that
attempts to maximize control locality. |
author |
Kerr, Andrew |
author_facet |
Kerr, Andrew |
author_sort |
Kerr, Andrew |
title |
A model of dynamic compilation for heterogeneous compute platforms |
title_short |
A model of dynamic compilation for heterogeneous compute platforms |
title_full |
A model of dynamic compilation for heterogeneous compute platforms |
title_fullStr |
A model of dynamic compilation for heterogeneous compute platforms |
title_full_unstemmed |
A model of dynamic compilation for heterogeneous compute platforms |
title_sort |
model of dynamic compilation for heterogeneous compute platforms |
publisher |
Georgia Institute of Technology |
publishDate |
2013 |
url |
http://hdl.handle.net/1853/47719 |
work_keys_str_mv |
AT kerrandrew amodelofdynamiccompilationforheterogeneouscomputeplatforms AT kerrandrew modelofdynamiccompilationforheterogeneouscomputeplatforms |
_version_ |
1716596684968951808 |