On the convergence speed of artificial neural networks in the solving of linear systems
Author(s)
Jafarian, A.
Document Type
Research Paper (Text)
Language
English
Abstract
Artificial neural networks offer advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper examines how different learning methods affect the convergence speed of neural networks. To this end, we first introduce a perceptron model based on artificial neural networks for solving a non-singular system of linear equations. Next, two well-known learning techniques, the steepest descent and quasi-Newton methods, are employed to adjust the connection weights of the neural net. The main aim of this study is to compare the ability and efficiency of these techniques with respect to the convergence speed of the proposed neural net. Finally, we illustrate the results with numerical examples and computer simulations.
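The training setup described in the abstract can be sketched as minimizing the cost function E(x) = ½‖Ax − b‖² over the adjustable weights x. Below is a minimal illustration of the two learning techniques the paper compares, not the paper's own implementation: a hand-rolled steepest descent with exact line search, and a quasi-Newton run via SciPy's BFGS. The matrix A, vector b, and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_descent(A, b, tol=1e-10, max_iter=10000):
    """Minimize E(x) = 0.5 * ||Ax - b||^2 by steepest descent.
    The entries of x play the role of the network's connection weights."""
    x = np.zeros(len(b))
    AtA = A.T @ A
    for k in range(max_iter):
        g = A.T @ (A @ x - b)              # gradient of the cost E
        if np.linalg.norm(g) < tol:
            return x, k
        alpha = (g @ g) / (g @ (AtA @ g))  # exact step size for a quadratic cost
        x -= alpha * g
    return x, max_iter

# Illustrative non-singular system (assumed example, not from the paper)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, iters = steepest_descent(A, b)

# Quasi-Newton (BFGS) on the same cost, for comparison of convergence behavior
res = minimize(lambda v: 0.5 * np.sum((A @ v - b) ** 2),
               np.zeros(2),
               jac=lambda v: A.T @ (A @ v - b),
               method='BFGS')
```

For well-conditioned systems both methods recover the solution of Ax = b; quasi-Newton methods typically need fewer iterations on ill-conditioned problems because they approximate curvature information that plain steepest descent ignores.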
Keywords
System of linear equations
Quasi-Newton method
Steepest descent method
Cost function
Learning algorithm
Issue
1
Publication Date
2015-01-01 (1393-10-11 Solar Hijri)
Publisher
Science and Research Branch, Islamic Azad University, Tehran, Iran
Website: ijim.srbiau.ac.ir
Address: Science and Research Branch, Shohada Hesarak Blvd, Daneshgah Square, Sattari Highway, Tehran, Iran
Email: ijim@srbiau.ac.ir
Tel: +98(44)32352053, +98(914)3897371
Fax: +98(44)32722660
Author Affiliation
Department of Mathematics, Urmia Branch, Islamic Azad University, Urmia, Iran.
ISSN
2008-5621, 2008-563X




