Training Deep Neural Networks Using Conjugate Gradient-like Methods

The goal of this article is to train deep neural networks that accelerate useful adaptive learning rate optimization algorithms such as AdaGrad, RMSProp, Adam, and AMSGrad. To reach this goal, we devise an iterative algorithm combining the existing adaptive learning rate optimization algorithms with...
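The truncated abstract describes combining adaptive learning rate optimizers (AdaGrad, RMSProp, Adam, AMSGrad) with conjugate gradient-like search directions. As a rough illustration only, the sketch below shows one way such a combination can look: a conjugate gradient-like direction d_n = -g_n + gamma * d_{n-1} passed through Adam-style moment estimates. This is an assumption-laden reading of the abstract, not the authors' published update rule; the function name `cg_like_adam_step` and the fixed coefficient `gamma` are hypothetical.

```python
import numpy as np

def cg_like_adam_step(params, grad, state, lr=1e-3, beta1=0.9,
                      beta2=0.999, gamma=0.9, eps=1e-8):
    """One step of a hypothetical Adam / conjugate-gradient hybrid.

    Illustrative sketch only -- not the update rule from the article.
    """
    # Conjugate gradient-like direction: negative gradient plus a scaled
    # copy of the previous direction. Here gamma stands in for the
    # conjugacy coefficient, which classical CG methods compute per
    # iteration (e.g., Fletcher-Reeves) rather than fixing as a constant.
    d = -grad + gamma * state.get("d", np.zeros_like(grad))

    # Adam-style biased first/second moment estimates of the direction.
    state["m"] = beta1 * state.get("m", np.zeros_like(grad)) + (1 - beta1) * d
    state["v"] = beta2 * state.get("v", np.zeros_like(grad)) + (1 - beta2) * d ** 2
    state["t"] = state.get("t", 0) + 1

    # Bias correction, as in Adam.
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])

    state["d"] = d
    # "+" because d already contains -grad, so m_hat points downhill.
    return params + lr * m_hat / (np.sqrt(v_hat) + eps), state

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w, state = np.array([3.0, -2.0]), {}
for _ in range(500):
    w, state = cg_like_adam_step(w, 2.0 * w, state, lr=0.1)
print(w)  # ends close to the minimizer at the origin
```

One design note: with gamma = 0 the direction reduces to -grad and the step is exactly Adam, so the conjugate gradient-like term acts as an extra momentum-style correction on top of the adaptive scaling.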

Bibliographic Details
Main Authors: Hideaki Iiduka, Yu Kobayashi
Format: Article
Language: English
Published: MDPI AG, 2020-11-01
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/9/11/1809