A unified schedule policy of distributed machine learning framework for CPU-GPU cluster
With the widespread deployment of GPU hardware, more and more distributed machine learning applications use CPU-GPU hybrid cluster resources to improve algorithm efficiency. However, existing distributed machine learning scheduling frameworks consider task scheduling either only on CPU resources or only on GPU resources; even when they account for the differences between CPU and GPU resources, they struggle to improve the resource utilization of the whole system. In other words, the key challenge in using CPU-GPU clusters for distributed machine learning jobs is how to schedule the tasks within a job efficiently. The paper proposes a CPU-GPU hybrid cluster scheduling framework. First, according to the different characteristics of CPU and GPU computing power, the data is divided into fragments of different sizes to match the CPU and GPU computing resources. Second, the paper introduces the task scheduling method for the CPU-GPU hybrid setting. Finally, the proposed method is evaluated: for K-Means, the CPU-GPU hybrid computing framework improves performance by about 1.5 times, and performance improves further as the number of GPUs increases.
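As a reading aid, the data-partitioning step described above (sizing fragments to each device's computing power) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `partition_by_throughput`, the device labels, and the profiled rates are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation): split a dataset into
# fragments whose sizes are proportional to each device's estimated throughput,
# so CPU and GPU workers finish their shards in roughly the same time.
import numpy as np

def partition_by_throughput(data, device_throughputs):
    """Return per-device data fragments sized by relative compute power.

    device_throughputs: dict mapping a device name to its (estimated)
    samples-per-second rate, e.g. obtained from a short profiling run.
    """
    total = sum(device_throughputs.values())
    n = len(data)
    fragments, start = {}, 0
    for i, (dev, rate) in enumerate(device_throughputs.items()):
        # The last device takes the remainder so every sample is assigned exactly once.
        size = n - start if i == len(device_throughputs) - 1 else int(n * rate / total)
        fragments[dev] = data[start:start + size]
        start += size
    return fragments

if __name__ == "__main__":
    X = np.random.rand(100_000, 16)
    # Hypothetical profiled rates: one GPU roughly 6x faster than one CPU worker.
    shards = partition_by_throughput(X, {"cpu:0": 1_000, "gpu:0": 6_000})
    print({dev: frag.shape[0] for dev, frag in shards.items()})
```

Pre-sizing fragments this way aims to let CPU and GPU workers finish their shards at roughly the same time. A companion sketch of one possible task-dispatch scheme appears after the record fields at the end of this page.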
| Format: | Article |
|---|---|
| Language: | zho |
| Published: | The Northwestern Polytechnical University, 2021-06-01 |
| Series: | Xibei Gongye Daxue Xuebao |
| Subjects: | cpu-gpu tasks; unified scheduler; clustering algorithm; distribution |
| Online Access: | https://www.jnwpu.org/articles/jnwpu/full_html/2021/03/jnwpu2021393p529/jnwpu2021393p529.html |
id |
doaj-0f11b0cc4a2f47c2ba34ab7e1afd91e3 |
record_format |
Article |
doi |
10.1051/jnwpu/20213930529 |
volume_issue_pages |
Vol. 39, No. 3 (2021), pp. 529-538 |
author_affiliation |
School of Computer Science, Northwestern Polytechnical University |
record_timestamp |
2021-08-10T11:25:10Z |
collection |
DOAJ |
language |
zho |
format |
Article |
sources |
DOAJ |
title |
A unified schedule policy of distributed machine learning framework for CPU-GPU cluster |
publisher |
The Northwestern Polytechnical University |
series |
Xibei Gongye Daxue Xuebao |
issn |
1000-2758, 2609-7125 |
publishDate |
2021-06-01 |
description |
With the widespread deployment of GPU hardware, more and more distributed machine learning applications use CPU-GPU hybrid cluster resources to improve algorithm efficiency. However, existing distributed machine learning scheduling frameworks consider task scheduling either only on CPU resources or only on GPU resources; even when they account for the differences between CPU and GPU resources, they struggle to improve the resource utilization of the whole system. In other words, the key challenge in using CPU-GPU clusters for distributed machine learning jobs is how to schedule the tasks within a job efficiently. The paper proposes a CPU-GPU hybrid cluster scheduling framework. First, according to the different characteristics of CPU and GPU computing power, the data is divided into fragments of different sizes to match the CPU and GPU computing resources. Second, the paper introduces the task scheduling method for the CPU-GPU hybrid setting. Finally, the proposed method is evaluated: for K-Means, the CPU-GPU hybrid computing framework improves performance by about 1.5 times, and performance improves further as the number of GPUs increases. |
topic |
cpu-gpu tasks; unified scheduler; clustering algorithm; distribution |
url |
https://www.jnwpu.org/articles/jnwpu/full_html/2021/03/jnwpu2021393p529/jnwpu2021393p529.html |
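To complement the description field above, here is a minimal, hypothetical sketch of one way CPU and GPU workers could share a queue of data fragments during the K-Means assignment step. It illustrates a generic pull-based dispatch scheme, not the scheduling method of the paper; the GPU path is only simulated with NumPy (a real deployment might hand those fragments to CuPy or custom CUDA kernels), and the worker names and fragment count are placeholders.

```python
# Illustrative sketch, not the framework from the paper: CPU and GPU workers
# pull data fragments from a shared queue, so faster devices naturally take
# more tasks. All workers compute with NumPy here; "gpu:0" is just a label.
import queue
import threading
import numpy as np

def kmeans_assign(fragment, centroids):
    # Nearest-centroid assignment for one data fragment (the K-Means "E step").
    dists = np.linalg.norm(fragment[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def worker(name, tasks, results):
    # Pull fragments until the queue is empty; record which worker handled each one.
    while True:
        try:
            idx, fragment, centroids = tasks.get_nowait()
        except queue.Empty:
            return
        results[idx] = (name, kmeans_assign(fragment, centroids))
        tasks.task_done()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random((20_000, 8))
    centroids = rng.random((10, 8))
    tasks, results = queue.Queue(), {}
    for i, frag in enumerate(np.array_split(data, 16)):
        tasks.put((i, frag, centroids))
    workers = ("cpu:0", "cpu:1", "gpu:0")
    threads = [threading.Thread(target=worker, args=(n, tasks, results)) for n in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Count how many fragments each worker processed.
    print({name: sum(1 for v in results.values() if v[0] == name) for name in workers})
```

A pull-based queue lets faster devices take more fragments without an explicit split ratio; the pre-sized fragments sketched earlier instead fix each device's share up front, which is closer to the partitioning idea described in the abstract.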