A Descent Modification of Conjugate Gradient Method for Optimization Models

In this paper, we suggest a descent modification of the conjugate gradient method which converges globally provided that the exact minimization condition is satisfied. Preliminary numerical experiments on some benchmark problems show that the method is efficient and promising.


Introduction
The conjugate gradient algorithms are among the most efficient methods for unconstrained optimization because of their simplicity, their convergence properties, and their ability to solve large-scale problems. Consider the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is smooth and its gradient $g(x) = \nabla f(x)$ is available. The conjugate gradient (CG) methods are among the most preferred methods for solving problem (1). Starting with an initial point $x_0 \in \mathbb{R}^n$, a CG method generates a sequence of iterates $\{x_k\}$ through the following scheme [1]:

$$x_{k+1} = x_k + \alpha_k d_k, \qquad k = 0, 1, 2, \ldots, \qquad (2)$$

where $\alpha_k > 0$ is the step size computed along the search direction $d_k$. The first search direction is usually the negative of the gradient, which is the steepest descent direction, i.e., $d_0 = -g_0$ with $g_k = g(x_k)$, while subsequent directions are recursively defined as follows:
$$d_k = -g_k + \beta_k d_{k-1}, \qquad k \ge 1, \qquad (3)$$

in which $\beta_k$ is known as the CG update parameter. The scheme reduces to the linear CG method when the step size $\alpha_k$ satisfies the exact minimization condition and the objective in (1) is a strictly convex quadratic; for general non-quadratic objective functions, the performance of the various CG methods differs [2,3]. Among the efficient conjugate gradient coefficients are those of Hestenes-Stiefel (HS) [4], Polak-Ribière-Polyak (PRP) [5,6], and Liu-Storey (LS) [7], defined as

$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \qquad \beta_k^{LS} = -\frac{g_k^T y_{k-1}}{d_{k-1}^T g_{k-1}},$$

where $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ denotes the Euclidean norm.
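For concreteness, the three coefficients are easy to state in code. The following is a minimal NumPy sketch and not taken from the paper; the argument names g, g_prev, and d_prev are ours.

```python
import numpy as np

# Classical CG update parameters. All three share the numerator
# g_k^T y_{k-1}, with y_{k-1} = g_k - g_{k-1}; only the denominators differ.

def beta_hs(g, g_prev, d_prev):
    """Hestenes-Stiefel: g_k^T y_{k-1} / (d_{k-1}^T y_{k-1})."""
    y = g - g_prev
    return float(g @ y) / float(d_prev @ y)

def beta_prp(g, g_prev, d_prev):
    """Polak-Ribiere-Polyak: g_k^T y_{k-1} / ||g_{k-1}||^2."""
    y = g - g_prev
    return float(g @ y) / float(g_prev @ g_prev)

def beta_ls(g, g_prev, d_prev):
    """Liu-Storey: -g_k^T y_{k-1} / (d_{k-1}^T g_{k-1})."""
    y = g - g_prev
    return -float(g @ y) / float(d_prev @ g_prev)
```

Under the exact minimization condition, $g_k^T d_{k-1} = 0$, so all three denominators coincide with $\|g_{k-1}\|^2$; this is the observation behind the remark below that HS, PRP, and LS agree under exact line search.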
The methods of HS, PRP, and LS have the same numerator and can perform a restart when the algorithm takes a very small step: in that case $g_k \approx g_{k-1}$, implying $\beta_k \approx 0$, so the search direction is reset close to the steepest descent direction, and this produces effective numerical results. Their global convergence has been studied by numerous researchers.

For the convergence of various CG methods, it is usually required that $\alpha_k$ satisfies the exact minimization condition

$$f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k). \qquad (4)$$

Other researchers instead require $\alpha_k$ to satisfy an inexact line search such as the standard Wolfe conditions

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (5)$$

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (6)$$

with $0 < \delta < \sigma < 1$. The convergence results of some unconstrained optimization methods are obtained under the strong Wolfe line search, namely (5) together with

$$|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k. \qquad (7)$$

One interesting feature of these methods is that, if $\alpha_k$ is the exact minimizer, then $\beta_k^{HS} = \beta_k^{PRP} = \beta_k^{LS}$. However, Powell [8] provided a counter-example showing that there exist nonconvex functions on which the PRP method does not converge, and the example applies to the HS method as well. Despite the effective computational results of these methods, their convergence is yet to be established under some inexact line searches, and modifications of the CG coefficient have received growing interest around the globe.

Recently, Rivaie et al. [9] constructed a new denominator while retaining the numerator of the HS, PRP, and LS methods:

$$\beta_k^{RMIL} = \frac{g_k^T (g_k - g_{k-1})}{\|d_{k-1}\|^2}. \qquad (8)$$

The global convergence of this method was established under the exact line search. The RMIL method was extended by Rivaie et al. [1] as follows:

$$\beta_k^{RMIL*} = \frac{g_k^T (g_k - g_{k-1} - d_{k-1})}{\|d_{k-1}\|^2}. \qquad (9)$$

It is clear that $\beta_k^{RMIL*} = \beta_k^{RMIL} - g_k^T d_{k-1} / \|d_{k-1}\|^2$. The RMIL* and PRP methods have similar features and thus possess the restart property, and RMIL* coincides with RMIL whenever the exact minimization condition is satisfied, since $g_k^T d_{k-1} = 0$ in that case. The global convergence of RMIL and RMIL* has been studied under exact line searches. Convergence analyses of recent modifications of the CG method can be found in the literature [10,11,12], and applications of the CG method to real-life problems are reported in [13,14].
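The two coefficients and the relation between them can be checked numerically. The sketch below is ours, not the paper's; the random test vectors are arbitrary.

```python
import numpy as np

# RMIL (8) and RMIL* (9). RMIL* differs from RMIL only by the extra
# -g_k^T d_{k-1} term in the numerator, which vanishes under exact
# minimization, where g_k^T d_{k-1} = 0.

def beta_rmil(g, g_prev, d_prev):
    """RMIL (8): g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2."""
    return float(g @ (g - g_prev)) / float(d_prev @ d_prev)

def beta_rmil_star(g, g_prev, d_prev):
    """RMIL* (9): g_k^T (g_k - g_{k-1} - d_{k-1}) / ||d_{k-1}||^2."""
    return float(g @ (g - g_prev - d_prev)) / float(d_prev @ d_prev)

# Numerical check of beta^{RMIL*} = beta^{RMIL} - g_k^T d_{k-1} / ||d_{k-1}||^2.
rng = np.random.default_rng(0)
g, g_prev, d_prev = rng.normal(size=(3, 5))
lhs = beta_rmil_star(g, g_prev, d_prev)
rhs = beta_rmil(g, g_prev, d_prev) - float(g @ d_prev) / float(d_prev @ d_prev)
assert abs(lhs - rhs) < 1e-12
```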

New formula and its properties
In an attempt to enhance the performance of the CG methods while retaining their nice convergence properties, various researchers have proposed many variants of the CG coefficient and established their global convergence proofs. However, some of the recent modifications of the CG methods are too complicated, and thus their proofs are difficult to establish. Motivated by the descent properties and numerical efficiency of the RMIL and RMIL* methods, we propose a simple variant of RMIL and RMIL* as follows:

(10)

where MIMS denotes the researchers' names Mamat, Ibrahim Mohammed Sulaiman. This method inherits some nice properties of the RMIL and RMIL* methods with better numerical performance. Another interesting feature of the proposed MIMS method is its ability to reduce to the standard RMIL* method provided that the exact minimization condition is satisfied. The algorithm of the proposed MIMS method is described as follows.

Algorithm 2.1 (MIMS method)
Step 1. Given an initial point $x_0 \in \mathbb{R}^n$ and a tolerance $\varepsilon > 0$, set $d_0 = -g_0$ and $k := 0$.
Step 2. If $\|g_k\| \le \varepsilon$, stop.
Step 3. Compute the step size $\alpha_k$ by the exact minimization rule (4) and set $x_{k+1} = x_k + \alpha_k d_k$ as in (2).
Step 4. Compute $\beta_{k+1}^{MIMS}$ by (10) and the new search direction $d_{k+1}$ by (3).
Step 5. Set $k := k + 1$ and go to Step 2.

Global convergence properties
Throughout the analysis, the following standard assumptions are made.

Assumption A. The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.

Assumption B. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$.

The following lemma, which builds on these assumptions, is useful in the convergence analysis of the CG methods.

Lemma 1 [15]. Suppose that Assumptions A and B hold true. Consider any CG method of the form (2) and (3) whose direction $d_k$ satisfies

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty, \qquad (16)$$

and whose step size $\alpha_k$ is obtained using the exact minimization rule (4). Then

$$\liminf_{k \to \infty} \|g_k\| = 0. \qquad (17)$$

Theorem. Suppose that Assumptions A and B hold true and that $\{x_k\}$ is generated by Algorithm 2.1 with $\alpha_k$ obtained by (4). Then (17) holds.

Proof: The proof follows from [9]. By contradiction, we suppose that (17) is not true, so that there exists a constant $c > 0$ such that $\|g_k\| \ge c$ for all $k$. From (3) and (10), together with Assumptions A and B, the norm of the search direction is bounded above; that is, there exists a constant $M > 0$ such that

$$\|d_k\| \le M \quad \text{for all } k. \qquad (21)$$

Then, from (21), we have $\sum_{k \ge 1} 1/\|d_k\|^2 \ge \sum_{k \ge 1} 1/M^2 = \infty$, so that (16) holds and Lemma 1 gives $\liminf_{k \to \infty} \|g_k\| = 0$, which contradicts the assumption that $\|g_k\| \ge c$ for all $k$, and thus, the proof is completed. ■
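As an illustration of the structure of Algorithm 2.1, the following is a minimal Python sketch under stated assumptions. Since the MIMS coefficient (10) is specific to this paper, the update parameter is passed in as a function, with RMIL* (9) used as a stand-in; the golden-section routine approximating (4), the bracket [0, 1], and the names exact_line_search, cg, and beta_fn are ours, not the paper's.

```python
import numpy as np

def exact_line_search(f, x, d, hi=1.0, iters=60):
    """Golden-section search approximating the exact minimization (4)
    along x + alpha * d. The bracket [0, hi] is an assumption of this sketch."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = 0.0, hi
    for _ in range(iters):
        c, e = b - phi * (b - a), a + phi * (b - a)
        if f(x + c * d) < f(x + e * d):
            b = e  # minimizer lies in [a, e]
        else:
            a = c  # minimizer lies in [c, b]
    return (a + b) / 2.0

def beta_rmil_star(g, g_prev, d_prev):
    """RMIL* (9), used here as a stand-in for the MIMS coefficient (10)."""
    return float(g @ (g - g_prev - d_prev)) / float(d_prev @ d_prev)

def cg(f, grad, x0, beta_fn=beta_rmil_star, tol=1e-6, max_iter=500):
    """Generic CG loop with the structure of Algorithm 2.1."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:        # Step 2: stopping test
            break
        alpha = exact_line_search(f, x, d)  # Step 3: step size by (4)
        x = x + alpha * d                   #         update (2)
        g_prev, g = g, grad(x)
        beta = beta_fn(g, g_prev, d)        # Step 4: coefficient, cf. (10)
        d = -g + beta * d                   #         direction (3)
    return x                                # Step 5 is the loop itself

# Usage on a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
```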

Conclusions
In this paper, we proposed a modification of the RMIL method that guarantees the descent condition and converges globally, provided that the exact minimization condition is satisfied. The MIMS method also inherits the restart mechanism of the RMIL* method with better numerical performance. The proposed method was compared with the RMIL and RMIL* conjugate gradient methods under the exact line search, and the numerical results illustrate the efficiency of the MIMS method. For further work, researchers interested in the area of conjugate gradient methods can test this coefficient using inexact line searches.