Novel Conjugate Gradient Method in Optimization and Training Neural Networks
DOI: https://doi.org/10.24996/ijs.2026.67.4.36

Keywords: Optimization, Gradient Descent, Artificial Neural Networks, Convergence Properties

Abstract
This work presents a novel conjugate gradient method for solving unconstrained optimization problems and for training neural networks. The method employs derivative-based techniques to improve optimization performance, and the proposed search direction satisfies the sufficient descent condition independently of the line search used. Moreover, we prove that, under specific assumptions, the proposed method converges globally. Numerical experiments demonstrate that the proposed approach outperforms traditional conjugate gradient methods in efficiency and robustness, both for training neural networks and for solving unconstrained optimization problems.
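The paper's specific update formula is not reproduced in this abstract, but the family of methods it builds on follows a common pattern: a search direction combining the negative gradient with the previous direction, a line search for the step size, and a safeguard preserving descent. As a purely illustrative sketch (not the authors' method), the loop below uses the classical Polak-Ribière+ update with a backtracking Armijo line search and a steepest-descent restart as the descent safeguard; the Rosenbrock function serves as a standard test problem.

```python
import numpy as np

def rosenbrock(x):
    # Classical unconstrained optimization test problem; minimum at (1, 1).
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=5000):
    """Nonlinear conjugate gradient with the Polak-Ribiere+ update.

    Illustrative only: the novel method in the paper uses a different
    (unspecified here) conjugacy parameter.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along d.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PR+ conjugacy parameter; clipping at zero triggers an
        # automatic restart with steepest descent.
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:
            # Safeguard: fall back to steepest descent so d is
            # always a descent direction.
            d = -g_new
        x, g = x_new, g_new
    return x

x_star = nonlinear_cg(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))
```

The restart safeguard is what gives such schemes descent with any line search; the abstract's claim of a sufficient descent property independent of the line search plays the analogous role for the proposed method.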
License
Copyright (c) 2026 Iraqi Journal of Science

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.