An Adaptive Synchronous Parallel Strategy for Distributed Machine Learning

Bibliographic Details
Main Authors: Jilin Zhang, Hangdi Tu, Yongjian Ren, Jian Wan, Li Zhou, Mingwei Li, Jue Wang
Format: Article
Language: English
Published: IEEE, 2018-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8327827/
Description
Summary: In recent years, distributed systems have been widely used to train machine learning (ML) models. However, because of performance differences among the computational nodes in a distributed cluster and delays in network transmission, the accuracy and convergence rate of ML models can suffer. It is therefore necessary to design a strategy that optimizes communication dynamically, so as to improve cluster utilization, accelerate training, and strengthen the accuracy of the trained model. In this paper, we propose an adaptive synchronous parallel strategy for distributed ML. Through a performance monitoring model, the synchronization strategy between each computational node and the parameter server is adjusted adaptively according to the node's measured performance, thereby ensuring higher accuracy. Furthermore, our strategy prevents the ML model from being affected by unrelated tasks running in the same cluster. Experiments show that our strategy improves cluster utilization, preserves the accuracy and convergence speed of the model, increases the training speed, and scales well.
ISSN: 2169-3536
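
The record above does not include the authors' code. As a rough illustration of the general idea in the summary, the following Python sketch models a parameter server that loosens or tightens each worker's staleness bound based on monitored step times. Everything here is an assumption for illustration: the class and method names (AdaptiveSyncServer, report_step_time, may_proceed) and the adaptation rule (scaling a base bound by the ratio of cluster-average to per-worker step time) are hypothetical and not taken from the paper.

```python
import statistics

class AdaptiveSyncServer:
    """Toy parameter server in which each worker's staleness bound is
    tuned from monitored step times (a stand-in for the paper's
    performance monitoring model)."""

    def __init__(self, num_workers, base_staleness=2):
        self.num_workers = num_workers
        self.base_staleness = base_staleness
        self.clock = [0] * num_workers            # logical step count per worker
        self.step_times = [[] for _ in range(num_workers)]
        self.staleness = [base_staleness] * num_workers

    def report_step_time(self, worker, seconds):
        # Performance monitor: record how long the worker's last step took.
        self.step_times[worker].append(seconds)
        self._adapt(worker)

    def _adapt(self, worker):
        # Fast workers get a looser staleness bound so stragglers do not
        # stall them; slow workers are held close to synchronous updates.
        means = [statistics.mean(t) for t in self.step_times if t]
        if not self.step_times[worker]:
            return
        cluster_mean = statistics.mean(means)
        worker_mean = statistics.mean(self.step_times[worker])
        self.staleness[worker] = max(
            0, round(self.base_staleness * cluster_mean / worker_mean)
        )

    def may_proceed(self, worker):
        # A worker may push its next update only while it is within its
        # own adaptive bound of the slowest worker's logical clock.
        return self.clock[worker] - min(self.clock) <= self.staleness[worker]

    def push_update(self, worker):
        self.clock[worker] += 1


# Simulate three workers, one of which is a straggler.
server = AdaptiveSyncServer(num_workers=3)
for _ in range(5):
    for w in range(3):
        server.report_step_time(w, 2.0 if w == 2 else 1.0)
        if server.may_proceed(w):
            server.push_update(w)
print("clocks:", server.clock, "staleness bounds:", server.staleness)
```

In this toy run the straggler (worker 2) ends up with a tighter bound than the fast workers, so the fast workers may run a few logical steps ahead without waiting; the paper's actual monitoring model and per-node adjustment policy are more involved than this sketch.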