Novel Conjugate Gradient Method in Optimization and Training Neural Networks

Authors

  • Basim A. Hassan, Department of Mathematics, College of Computer Sciences and Mathematics, University of Mosul, Mosul, Iraq (ORCID: https://orcid.org/0000-0003-3510-9818)
  • Alaa Luqman Ibrahim, Department of Mathematics, College of Science, University of Zakho, Zakho, Iraq (ORCID: https://orcid.org/0000-0001-8862-9441)
  • Bayda Ghanim Fathib, Department of Mathematics, College of Science, University of Zakho, Zakho, Iraq

DOI:

https://doi.org/10.24996/ijs.2026.67.4.36

Keywords:

Optimization, Gradient Descent, Artificial Neural Networks, Convergence Properties

Abstract

This work presents novel conjugate gradient methods for solving unconstrained optimization problems and for training neural networks. The methods employ advanced derivative-based techniques to improve optimization performance, and the proposed search direction satisfies the sufficient descent condition under any line search. Moreover, we prove that, under specific assumptions, the proposed method is globally convergent. Numerical experiments demonstrate that the proposed approach is more efficient and robust than traditional conjugate gradient methods, both for training neural networks and for solving unconstrained optimization problems.
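
This page does not give the paper's new update formula, so the sketch below is only a minimal, hypothetical illustration of the general nonlinear conjugate gradient scheme the abstract builds on: it uses the classical Polak-Ribière+ beta and an Armijo backtracking line search as stand-ins for the authors' novel update and line-search conditions. The function name `conjugate_gradient`, the Rosenbrock test problem, and all parameter values are assumptions for illustration, not the authors' method.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic nonlinear CG sketch; the beta rule is a stand-in, not the paper's."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search: shrink alpha until the
        # sufficient-decrease condition holds.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d) and alpha > 1e-12:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere+ update (assumed here in place of the novel formula);
        # truncation at zero plus the restart below keeps d a descent direction.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # safeguard: restart on non-descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Hypothetical usage: minimize the 2-D Rosenbrock test function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
    200 * (x[1] - x[0] ** 2),
])
print(conjugate_gradient(f, grad, np.array([-1.2, 1.0])))  # approaches [1, 1]
```

The safeguard restart is a common design choice in nonlinear CG implementations: whenever the updated direction fails to be a descent direction, falling back to the negative gradient preserves the descent property that convergence analyses typically require.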

Published

2026-04-30

Issue

Vol. 67 No. 4 (2026)

Section

Mathematics

How to Cite

[1]
B. A. Hassan, A. L. Ibrahim, and B. G. Fathib, “Novel Conjugate Gradient Method in Optimization and Training Neural Networks”, Iraqi Journal of Science, vol. 67, no. 4, pp. 2356–2371, Apr. 2026, doi: 10.24996/ijs.2026.67.4.36.
