A New Class of Conjugate Gradient Methods for Removing Impulse Noise from Images

The choice of the conjugate coefficient is the very foundation of the variety of conjugate gradient methods. This paper proposes a new class of coefficients for conjugate gradient (CG) methods for impulse noise removal, based on the quadratic model. The proposed methods ensure descent independently of the accuracy of the line search and are globally convergent under some conditions. Numerical experiments on impulse noise removal in images are also presented.


Introduction
Optimization algorithms play an important role in the noise removal of images. Images are often corrupted by impulse noise, and the goal of noise removal is to suppress the noise while preserving image details. The median filter is one of the most popular nonlinear filters for removing impulse noise, due to its computational efficiency and good denoising power [1]. Recently, a two-phase method was proposed in [2]. The first phase is the detection of the noise pixels, by using the adaptive median filter (AMF) [3] for salt-and-pepper noise; for random-valued noise, detection is accomplished by using the adaptive center-weighted median filter (ACWMF) [4]. Let $X$ be the original image with $M \times N$ pixels, let $A = \{1,\dots,M\} \times \{1,\dots,N\}$ be the index set of $X$, and let $N \subset A$ denote the set of indices of the noise pixels that are detected in the first phase.
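As an illustration of the detection phase, the following sketch flags likely salt-and-pepper pixels by comparing each pixel with its local median. It is a simplified stand-in for the AMF of [3]: the real AMF grows its window adaptively, while the fixed 3x3 window and the threshold value used here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def detect_salt_pepper(y, window=3, threshold=20):
    """Flag pixels that deviate strongly from their local median.

    A simplified stand-in for the adaptive median filter (AMF): the real
    AMF grows its window adaptively, but the core idea -- compare each
    pixel to a local median -- is the same.
    """
    M, N = y.shape
    r = window // 2
    padded = np.pad(y.astype(float), r, mode="edge")
    noisy = np.zeros((M, N), dtype=bool)
    for i in range(M):
        for j in range(N):
            med = np.median(padded[i:i + window, j:j + window])
            noisy[i, j] = abs(float(y[i, j]) - med) > threshold
    return noisy

# Tiny demo: a flat image with a single salt (impulse) pixel.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                 # impulse noise
mask = detect_salt_pepper(img)
print(mask[2, 2], mask[0, 0])   # the impulse is flagged, clean pixels are not
```

The set of flagged indices plays the role of $N$ in the second (restoration) phase.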

ISSN: 0067-2904
Then, the second phase is the recovery of the noise pixels by minimizing the following functional:

$$F_\beta(u) = \sum_{(i,j)\in N} \Big[ \,|u_{i,j} - y_{i,j}| + \frac{\beta}{2}\big(2S^{(1)}_{i,j} + S^{(2)}_{i,j}\big) \Big], \qquad (1)$$

where $\beta$ is the regularization parameter, $y$ is the observed noisy image, and

$$S^{(1)}_{i,j} = \sum_{(m,n)\in V_{i,j}\setminus N} \varphi_\alpha(u_{i,j} - y_{m,n}), \qquad S^{(2)}_{i,j} = \sum_{(m,n)\in V_{i,j}\cap N} \varphi_\alpha(u_{i,j} - u_{m,n}), \qquad (2)$$

with $V_{i,j}$ the set of the four closest neighbours of the pixel at position $(i,j)$ and $\varphi_\alpha$ an edge-preserving potential function. Here $c$ denotes the number of elements of $N$. In fact, the non-smooth data-fitting term can be omitted, since only the noisy pixels are restored in the minimization. The following smooth functional is then obtained, see [5]:

$$f_\alpha(u) = \sum_{(i,j)\in N} \Big[ 2\sum_{(m,n)\in V_{i,j}\setminus N} \varphi_\alpha(u_{i,j} - y_{m,n}) + \sum_{(m,n)\in V_{i,j}\cap N} \varphi_\alpha(u_{i,j} - u_{m,n}) \Big]. \qquad (3)$$
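A minimal sketch of evaluating a functional with this structure, assuming the potential $\varphi_\alpha(t) = \sqrt{t^2 + \alpha}$ and the neighbour weighting shown above (both common choices in this literature, not necessarily the exact ones used in [5]); the value of `ALPHA` is an arbitrary illustration:

```python
import numpy as np

ALPHA = 1e-2  # smoothing parameter of the potential (an assumed value)

def phi(t):
    """Edge-preserving potential phi_alpha(t) = sqrt(t^2 + alpha)."""
    return np.sqrt(t * t + ALPHA)

def smooth_functional(u, y, noisy):
    """Evaluate a functional with the structure of (3): only pixels in the
    flagged set N contribute; each noisy pixel is coupled to its four
    neighbours, with clean neighbours weighted twice."""
    M, N = y.shape
    total = 0.0
    for i in range(M):
        for j in range(N):
            if not noisy[i, j]:
                continue
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                m, n = i + di, j + dj
                if not (0 <= m < M and 0 <= n < N):
                    continue
                if noisy[m, n]:
                    total += phi(u[i, j] - u[m, n])        # noisy-noisy edge
                else:
                    total += 2.0 * phi(u[i, j] - y[m, n])  # noisy-clean edge
    return total

# Demo on a 2x2 image with a single noisy pixel at (0, 0).
y = np.zeros((2, 2))
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True
u = y.copy()
u[0, 0] = 1.0
val = smooth_functional(u, y, mask)   # 4 * sqrt(1 + ALPHA)
```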
Due to the simplicity of their iterations and their very low memory requirements, nonlinear conjugate gradient methods are well suited to solving the unconstrained optimization problem $\min_u f(u)$, where $f$ is smooth and its gradient $g$ is available. The line search method usually takes the following iterative form:

$$u_{k+1} = u_k + \alpha_k d_k, \qquad (4)$$

where $\alpha_k > 0$ is a step length and $d_k$ is a search direction; different choices of $d_k$ and $\alpha_k$ determine different line search methods, see [6], [7]. The step length can be either exact or inexact. In the case of an exact step size with a quadratic model,

$$\alpha_k = -\frac{g_k^T d_k}{d_k^T Q d_k}. \qquad (5)$$

For an inexact $\alpha_k$, the weak Wolfe-Powell line search conditions seek an $\alpha_k$ satisfying

$$f(u_k + \alpha_k d_k) \le f(u_k) + \delta\,\alpha_k\, g_k^T d_k, \qquad (6)$$
$$g(u_k + \alpha_k d_k)^T d_k \ge \sigma\, g_k^T d_k, \qquad (7)$$

with $0 < \delta < \sigma < 1$, or, in the case of the strong Wolfe-Powell line search conditions, $\alpha_k$ satisfies inequality (6) and

$$|g(u_k + \alpha_k d_k)^T d_k| \le -\sigma\, g_k^T d_k; \qquad (8)$$

for example, see [8]. The first search direction is usually the negative of the gradient, which is the steepest descent direction, i.e. $d_0 = -g_0$, while subsequent directions are recursively defined as

$$d_k = -g_k + \beta_k d_{k-1}. \qquad (9)$$

Classical choices of the update parameter $\beta_k$ include Fletcher-Reeves (FR) [9], Dai-Yuan (DY) [10] and conjugate descent (CD) by Fletcher [11], specified respectively as

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \qquad (10)$$

where $y_{k-1} = g_k - g_{k-1}$. Other conjugate gradient methods have also been suggested in the literature [12]-[14], and a number of them are either modifications or hybridizations of the previously mentioned methods.
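The three classical coefficients can be computed directly from their formulas; a small sketch (the vectors below are arbitrary illustrations, not iterates of an actual run):

```python
import numpy as np

def beta_fr(g_new, g_old):
    """Fletcher-Reeves: ||g_k||^2 / ||g_{k-1}||^2."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_dy(g_new, g_old, d_old):
    """Dai-Yuan: ||g_k||^2 / (d_{k-1}^T y_{k-1}), with y_{k-1} = g_k - g_{k-1}."""
    return (g_new @ g_new) / (d_old @ (g_new - g_old))

def beta_cd(g_new, g_old, d_old):
    """Conjugate descent (Fletcher): -||g_k||^2 / (d_{k-1}^T g_{k-1})."""
    return -(g_new @ g_new) / (d_old @ g_old)

g0 = np.array([3.0, 4.0])   # ||g0||^2 = 25
g1 = np.array([0.0, 5.0])   # ||g1||^2 = 25
d0 = -g0                    # first direction: steepest descent
print(beta_fr(g1, g0))      # 25 / 25 = 1.0
```

Under exact line searches on a quadratic the three formulas coincide; on the arbitrary vectors above they differ.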
The global convergence properties are the most well-studied properties of conjugate gradient methods. The FR method, developed from the method of Hestenes and Stiefel, is globally convergent under exact and strong Wolfe line searches [15]. The CD method generates descent directions under the strong Wolfe line search [16] and fulfils the sufficient descent condition under the strong Wolfe line search. For good references to studies that have described recent CG methods with important results, see [17], [18]. This paper is organized as follows. In Section 2, we present a new conjugate gradient formula and describe its descent property. In Section 3, the global convergence properties of the proposed algorithms for impulse noise removal are analyzed under common assumptions. Numerical test results are reported in Section 4, and some conclusions are summarized in Section 5.

The new formula and the algorithm
It is known that all conjugate direction algorithms generate conjugate directions, at least theoretically, and hence the key element in the derivation of the new algorithms is Perry's conjugacy condition

$$d_k^T y_{k-1} = -t\, g_k^T s_{k-1}, \qquad t > 0. \qquad (11)$$

Also, in the derivation of all conjugate direction algorithms it is assumed that the objective function is a quadratic model. Therefore, we begin with

$$q(u) = \tfrac{1}{2}\, u^T Q u + b^T u, \qquad (12)$$

where $Q$ is the Hessian of the objective function. Differentiating (12) along $s_k = u_{k+1} - u_k$, we obtain

$$y_k = Q s_k. \qquad (13)$$

Putting (5) and (13) in (12), and combining the result with (11), (9) and (13), the new conjugate coefficients are obtained. We call the resulting conjugate gradient methods BKY, BKS and BKG. Based on the previous information, our algorithm framework is explained as follows.

New algorithms (BKY, BKS and BKG algorithms)
Step 0: Given an initial point $u_0$ and $\varepsilon > 0$, set $d_0 = -g_0$ and $k = 0$.
Step 1: If $\|g_k\| \le \varepsilon$, then stop.
Step 2: Compute a step length $\alpha_k$ satisfying the strong Wolfe conditions (6) and (8).
Step 3: Set $u_{k+1} = u_k + \alpha_k d_k$, compute $\beta_k$ by the new formula (BKY, BKS or BKG), and set $d_{k+1} = -g_{k+1} + \beta_k d_k$.
Step 4: Set $k = k + 1$ and go to Step 1.
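The steps above can be sketched as a generic framework into which any coefficient formula is plugged. Since the BKY/BKS/BKG formulas are not reproduced here, the Fletcher-Reeves coefficient is used below as a stand-in, and a backtracking Armijo search with a descent-restart safeguard replaces the Wolfe line search for brevity; these substitutions are assumptions of the sketch, not the paper's method.

```python
import numpy as np

def cg_minimize(f, grad, u0, beta_fn, eps=1e-6, max_iter=1000):
    """Generic CG framework following Steps 0-4 above.

    beta_fn(g_new, g_old, d_old) supplies the conjugate coefficient; the
    paper's BKY/BKS/BKG formulas would be plugged in here. A backtracking
    Armijo search stands in for the Wolfe line search, with a restart
    safeguard so every direction is a descent direction.
    """
    u = np.asarray(u0, dtype=float)
    g = grad(u)
    d = -g                                   # Step 0: steepest-descent start
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:         # Step 1: stopping test
            break
        alpha, c1 = 1.0, 1e-4                # Step 2: backtracking line search
        while f(u + alpha * d) > f(u) + c1 * alpha * (g @ d):
            alpha *= 0.5
        u = u + alpha * d                    # Step 3: update the iterate
        g_new = grad(u)
        d_new = -g_new + beta_fn(g_new, g, d) * d
        if g_new @ d_new >= 0:               # safeguard: restart if not descent
            d_new = -g_new
        g, d = g_new, d_new                  # Step 4: k <- k + 1
    return u

# Fletcher-Reeves coefficient as a stand-in for the new formulas.
fr = lambda g_new, g_old, d_old: (g_new @ g_new) / (g_old @ g_old)

# Minimize the convex quadratic f(u) = u1^2 + 2*u2^2 (minimizer: the origin).
f = lambda u: u[0] ** 2 + 2.0 * u[1] ** 2
grad = lambda u: np.array([2.0 * u[0], 4.0 * u[1]])
u_star = cg_minimize(f, grad, [3.0, -2.0], fr)
```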

Convergence Analysis
Any effective and robust method must satisfy the descent condition and the convergence criteria. To study the convergence of the proposed CG method, the following standard assumptions on the objective function are needed:
(i) the level set $L = \{u : f(u) \le f(u_0)\}$ is bounded;
(ii) in some neighbourhood of $L$, $f$ is continuously differentiable and its gradient $g$ is Lipschitz continuous.

Theorem 1:
If the assumptions hold and $\alpha_k$ satisfies the Wolfe conditions, then the search directions generated by the proposed CG algorithm are descent directions for all $k$.
Proof: Multiplying (9) by $g_k^T$, we have

$$g_k^T d_k = -\|g_k\|^2 + \beta_k\, g_k^T d_{k-1}.$$

So that the descent proof will be easier, we first simplify the new $\beta_k$ by using (13) and (15) with some algebraic operations.

Applying the inequality $u^T v \le \tfrac{1}{2}(\|u\|^2 + \|v\|^2)$ to the resulting expression and using (25), we obtain $g_k^T d_k < 0$, so the search direction satisfies the descent condition. In a similar way, the descent property of the BKS and BKG methods is proven. In order to ensure the global convergence of our algorithms, we need to find $\alpha_k$ satisfying (6) and (7). The following lemma is often used to prove the global convergence of conjugate gradient algorithms.
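The descent condition $g_k^T d_k < 0$ can also be checked numerically. The sketch below runs two CG steps with exact line searches on a small strictly convex quadratic, using the FR coefficient for illustration (the paper's BKY-type coefficients are not reproduced here); the matrix and starting point are arbitrary illustrations.

```python
import numpy as np

# Check g_k^T d_k < 0 along a CG run on f(u) = 1/2 u^T Q u.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda u: Q @ u

u = np.array([2.0, -1.0])
g = grad(u)
d = -g
inner_products = []
for _ in range(2):                      # CG solves a 2-D quadratic in 2 steps
    inner_products.append(g @ d)
    alpha = -(g @ d) / (d @ Q @ d)      # exact step length on a quadratic
    u = u + alpha * d
    g_new = grad(u)
    beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new

# Both recorded inner products are negative: each direction is a descent
# direction, as Theorem 1 asserts for the proposed methods.
```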

Lemma:
Suppose that the assumptions hold, and consider any conjugate gradient method of the form (4) and (9), where $d_k$ is a descent direction and $\alpha_k$ is obtained by the strong Wolfe line search (6) and (8). Then the Zoutendijk condition

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty$$

holds.

Theorem 2:
Consider any conjugate gradient method of the form (4) and (9), where $\alpha_k$ is obtained by the Wolfe line search, and suppose that the assumptions hold. Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

This relation implies $\liminf_{k \to \infty} \|g_k\| = 0$, which completes the proof. By a similar argument, the BKS and BKG methods can be treated.

Numerical results
In this section, we present some numerical results to demonstrate the performance of the new methods for salt-and-pepper impulse noise removal. In our experiments, we compare the proposed BKY, BKS and BKG algorithms with the classical FR method, applied to impulse noise denoising; some related papers are [12], [18], [20], [21] and [22]. To assess the restoration performance quantitatively, we use the PSNR (peak signal-to-noise ratio, see [23]), defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN}\sum_{i,j} \big(u^r_{i,j} - u^*_{i,j}\big)^2},$$

where $u^r_{i,j}$ and $u^*_{i,j}$ denote the pixel values of the restored image and the original image, respectively.
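A direct implementation of the PSNR definition (peak value 255 for 8-bit images); the small test arrays are illustrative:

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """PSNR = 10 log10( peak^2 / MSE ), where MSE is the mean squared
    error between restored and original pixel values."""
    mse = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a.copy()
b[0, 0] = 110.0                  # one pixel off by 10: MSE = 100/16 = 6.25
print(round(psnr(b, a), 2))      # -> 40.17
```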

The comparisons of the algorithms are given in the following table, with these details: number of iterations (NI), number of function evaluations (NOF), and PSNR (peak signal-to-noise ratio).
Table 1: Performance of the FR, BKY, BKS and BKG algorithms.

Conclusions
The directions generated by the new algorithms satisfy both the descent condition and Perry's condition, independently of the line search. Under standard Wolfe line search conditions, we proved the global convergence of the algorithms. The computational evidence showed that the performance of our algorithms is better than that of the FR conjugate gradient algorithm, so the numerical performance of the proposed methods is satisfactory.
Here $V_{i,j}$ denotes the set of the four closest neighbours of the pixel at position $(i,j)$, and $u$ is the vector of length $c$ whose entries are ordered lexicographically.

For (4), $u_{k+1} = u_k + \alpha_k d_k$, $u_k$ is the current iterate point, $\alpha_k > 0$ is a step length and $d_k$ is a search direction. Different choices of $d_k$ and $\alpha_k$ will determine different line search methods; see [6], [7]. The step length $\alpha_k$ is very important for the global convergence of conjugate gradient methods; it can be either exact or inexact. In the case of an exact step size with a quadratic model, $\alpha_k = -g_k^T d_k / (d_k^T Q d_k)$. For an inexact $\alpha_k$, a number of line search techniques can be used, for instance the weak Wolfe-Powell line search conditions (6) and (7).
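A quick numerical check of the exact step length on a small quadratic (the matrix and starting point are illustrative). At the new point the gradient is orthogonal to the search direction, confirming that the step exactly minimizes along $d_k$.

```python
import numpy as np

# Exact step length on the quadratic model f(u) = 1/2 u^T Q u - b^T u:
# minimizing f(u_k + alpha d_k) over alpha gives
#     alpha_k = -(g_k^T d_k) / (d_k^T Q d_k).
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.zeros(2)
u = np.array([1.0, 1.0])
g = Q @ u - b                        # gradient at the current iterate
d = -g                               # steepest-descent direction
alpha = -(g @ d) / (d @ Q @ d)       # exact step: 20/72 here

g_next = Q @ (u + alpha * d) - b     # gradient after the step
check = g_next @ d                   # should vanish: exact line search
```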

Here $u^r_{i,j}$ and $u^*_{i,j}$ denote the pixel values of the restored image and the original image, respectively. The stopping criterion of both methods is: