Learning rate for the back propagation algorithm based on modified secant equation


Bibliographic Details
Published in: المجلة العراقية للعلوم الاحصائية (Iraqi Journal of Statistical Sciences)
Main Authors: Dr. Khalil K. Abbo, Marwa S. Jaborry
Format: Article
Language: Arabic
Published: College of Computer Science and Mathematics, University of Mosul, 2014-12-01
Online Access:https://stats.mosuljournals.com/article_89207_3211002f29b4d040d47861348a0d28ec.pdf
Description
Summary: The classical back propagation method (CBP) is the simplest algorithm for training feed-forward neural networks. It minimizes the error function E along the steepest descent direction with a fixed learning rate; because the learning rate does not change between iterations, the CBP algorithm converges slowly. In this paper we suggest a new formula for computing the learning rate, derived from a modified secant equation, to accelerate the convergence of the CBP algorithm. Simulation results are presented and compared with other training algorithms.
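The abstract does not give the paper's exact modified secant formula, but the general idea can be sketched with the classical secant-equation step size (the Barzilai-Borwein rule), which sets the learning rate from the most recent change in the parameters and in the gradient instead of keeping it fixed. A minimal sketch on a simple quadratic error function, with all variable names and the quadratic test problem being illustrative assumptions:

```python
import numpy as np

# Illustrative quadratic error E(w) = 0.5 * w^T A w - b^T w;
# its gradient is A w - b. A is chosen symmetric positive definite.
def grad_E(w, A, b):
    return A @ w - b

def secant_step_descent(A, b, w0, iters=50):
    """Steepest descent with a secant-equation (Barzilai-Borwein) learning rate.

    This is a stand-in for the paper's modified secant formula, which the
    abstract does not specify.
    """
    w_prev = w0.copy()
    g_prev = grad_E(w_prev, A, b)
    w = w_prev - 1e-3 * g_prev            # first step with a small fixed rate
    for _ in range(iters):
        g = grad_E(w, A, b)
        s = w - w_prev                    # change in parameters
        y = g - g_prev                    # change in gradient
        # Classical secant-equation step: eta = (s.s) / (s.y),
        # with a fallback to the fixed rate if the denominator is tiny.
        sy = s @ y
        eta = (s @ s) / sy if sy > 1e-12 else 1e-3
        w_prev, g_prev = w, g
        w = w - eta * g
    return w

A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
w_star = np.linalg.solve(A, b)            # exact minimizer for reference
w = secant_step_descent(A, b, np.zeros(2))
```

Because the step size adapts to the local curvature implied by the secant pair (s, y), this kind of rule typically converges far faster than steepest descent with a fixed rate, which is the motivation the abstract states.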
ISSN: 1680-855X, 2664-2956