Summary: | In recent years, distributed systems have been widely used to train machine learning (ML) models. However, because of performance differences among the computational nodes in a distributed cluster and delays in network transmission, the accuracy and convergence rate of ML models suffer. It is therefore necessary to design a strategy that dynamically optimizes communication in order to improve cluster utilization, accelerate training, and improve the accuracy of the trained model. In this paper, we propose an adaptive synchronous parallel strategy for distributed ML. Through a performance monitoring model, the strategy adaptively adjusts how each computational node synchronizes with the parameter server, taking the node's overall performance into account and thereby ensuring higher accuracy. Furthermore, our strategy prevents the ML model from being affected by unrelated tasks running in the same cluster. Experiments show that our strategy improves cluster performance, preserves the accuracy and convergence speed of the model, increases training speed, and scales well.
|
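As a rough illustration of the idea summarized above, the sketch below (our own simplification, not the paper's algorithm; the class AdaptiveSyncController, the staleness bounds, and the monitoring window size are all assumptions) shows how a parameter server could derive a per-worker synchronization bound from monitored iteration times, so that slower or externally perturbed nodes do not stall the rest of the cluster.

```python
from dataclasses import dataclass, field


@dataclass
class WorkerStats:
    """Rolling record of a worker's recent iteration times."""
    recent_times: list = field(default_factory=list)

    def record(self, seconds: float, window: int = 20) -> None:
        self.recent_times.append(seconds)
        del self.recent_times[:-window]  # keep only the last `window` samples

    def mean_time(self) -> float:
        return sum(self.recent_times) / len(self.recent_times) if self.recent_times else 0.0


class AdaptiveSyncController:
    """Assign each worker a staleness bound based on its monitored speed.

    Workers whose iteration time is close to the cluster's fastest run
    near-synchronously (small bound); stragglers (e.g. nodes perturbed by
    unrelated jobs) get a looser bound so they do not hold back the others.
    The bound range and scaling rule here are illustrative assumptions.
    """

    def __init__(self, min_bound: int = 1, max_bound: int = 8):
        self.min_bound = min_bound
        self.max_bound = max_bound
        self.stats: dict[str, WorkerStats] = {}

    def report(self, worker_id: str, iteration_seconds: float) -> None:
        """Record one monitored iteration time for a worker."""
        self.stats.setdefault(worker_id, WorkerStats()).record(iteration_seconds)

    def staleness_bound(self, worker_id: str) -> int:
        """Scale the bound with how much slower this worker is than the fastest."""
        means = {w: s.mean_time() for w, s in self.stats.items() if s.recent_times}
        if worker_id not in means:
            return self.min_bound
        fastest = min(means.values())
        slowdown = means[worker_id] / fastest if fastest > 0 else 1.0
        bound = self.min_bound + int(round(slowdown - 1.0))
        return max(self.min_bound, min(self.max_bound, bound))


if __name__ == "__main__":
    controller = AdaptiveSyncController()
    # Simulated monitoring reports: worker "w2" is roughly 3x slower.
    for _ in range(10):
        controller.report("w0", 0.10)
        controller.report("w1", 0.11)
        controller.report("w2", 0.31)
    for worker in ("w0", "w1", "w2"):
        print(worker, "staleness bound =", controller.staleness_bound(worker))
```

In this sketch, fast workers keep a bound of 1 (effectively synchronous), while the slow worker is allowed a few extra local steps before it must synchronize with the parameter server; the actual adjustment rule used by the paper's strategy may differ.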