Damped Newton Stochastic Gradient Descent Method for Neural Networks Training

First-order methods such as stochastic gradient descent (SGD) have become popular optimization methods for training deep neural networks (DNNs) with good generalization; however, they require long training times. Second-order methods, which can shorten training, are rarely used because of the high cost of computing the second-order information they need. Many works therefore approximate the Hessian matrix to reduce this cost, but the approximate Hessian can deviate substantially from the true one. In this paper, we exploit the convexity of the loss with respect to part of the parameters and propose the damped Newton stochastic gradient descent (DN-SGD) and stochastic gradient descent damped Newton (SGD-DN) methods for training DNNs on regression problems with mean square error (MSE) and classification problems with cross-entropy loss (CEL). In contrast to other second-order methods, which estimate the Hessian matrix over all parameters, our methods compute exact second-order information for only a small part of the parameters, which greatly reduces the computational cost and makes the learning process converge faster and more accurately than SGD and Adagrad. Several numerical experiments on real datasets verify the effectiveness of our methods for both regression and classification problems.
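The abstract describes the method only at a high level. The following is a minimal sketch of one way such an update could look, assuming (since the abstract does not specify them) a one-hidden-layer regression network with MSE loss, that the "small part of the parameters" receiving the exact damped Newton step is the output-layer weight vector (the loss is convex in it when the hidden layer is held fixed), and illustrative hyperparameters such as the damping factor lam; none of this code or its settings comes from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sum of inputs plus noise.
X = rng.normal(size=(256, 10))
y = X.sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(256, 1))

# One hidden tanh layer plus a linear output layer.
W1 = rng.normal(scale=0.1, size=(10, 32))   # updated with plain SGD
w2 = rng.normal(scale=0.1, size=(32, 1))    # updated with a damped Newton step

lr, lam, batch = 0.05, 1e-2, 32             # step size, damping, batch size (illustrative)

for epoch in range(20):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = order[start:start + batch]
        xb, yb = X[b], y[b]

        h = np.tanh(xb @ W1)                # hidden activations
        err = h @ w2 - yb                   # MSE residual

        # Plain SGD step on the hidden-layer weights.
        grad_h = (err @ w2.T) * (1.0 - h ** 2)
        W1 -= lr * xb.T @ grad_h / len(b)

        # Damped Newton step on the output-layer weights: for MSE the loss is
        # convex in w2, its Hessian block is (h^T h)/n (positive semi-definite),
        # and adding lam*I keeps the linear solve well conditioned.
        g2 = h.T @ err / len(b)
        H2 = h.T @ h / len(b) + lam * np.eye(h.shape[1])
        w2 -= np.linalg.solve(H2, g2)

mse = float(np.mean((np.tanh(X @ W1) @ w2 - y) ** 2))
print("final training MSE:", mse)
```

Swapping the order of the two parameter updates inside the mini-batch loop gives the other ordering that the names DN-SGD and SGD-DN appear to refer to, although the abstract does not spell that distinction out.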


Bibliographic Details
Main Authors: Jingcheng Zhou, Wei Wei, Ruizhi Zhang, Zhiming Zheng
Format: Article
Language: English
Published: MDPI AG, 2021-06-01
Series: Mathematics
Subjects: stochastic gradient descent; damped Newton; convexity
Online Access: https://www.mdpi.com/2227-7390/9/13/1533
Author Affiliation: School of Mathematical Sciences, Beihang University, Beijing 100191, China (all four authors)
ISSN: 2227-7390
Citation: Mathematics 2021, 9(13), 1533
DOI: 10.3390/math9131533