Adadb: Adaptive Diff-Batch Optimization Technique for Gradient Descent

Gradient descent is the workhorse of deep neural networks, but it suffers from slow convergence. A well-known way to overcome slow convergence is momentum, which effectively increases the learning factor of gradient descent. Recently, many approaches have been proposed t...
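The momentum update the abstract refers to is the standard one; as a minimal illustrative sketch (generic SGD with momentum, not the paper's Adadb rule; the quadratic test objective and parameter names below are assumptions):

```python
# Generic gradient descent with momentum (illustrative only; not the Adadb
# update from the paper). The test objective and hyperparameters are
# assumptions chosen to make the example runnable.
import numpy as np

def sgd_momentum(grad_fn, w0, lr=0.1, beta=0.9, steps=100):
    """Minimize a function given its gradient, using momentum."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)          # velocity: accumulated past gradients
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + g          # momentum smooths and accelerates updates,
        w = w - lr * v            # acting like a larger effective step size
    return w

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_star = sgd_momentum(lambda w: 2 * w, w0=[5.0, -3.0])
print(w_star)                     # close to the minimum at the origin
```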


Bibliographic Details
Main Authors: Muhammad U. S. Khan, Muhammad Jawad, Samee U. Khan
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/9481902/