Minimizing the Total Completion Time and Total Earliness Time Functions for a Machine Scheduling Problem Using Local Search Methods

In this paper, we investigate the use of two types of local search methods (LSMs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO), to solve the single-machine problems 1//(∑C_i, ∑E_i) and 1//∑(C_i + E_i). The results of the two LSMs are compared with the Branch and Bound (BAB) method and with good heuristic methods. This work shows the good performance of SA and PSO compared with the exact and heuristic methods in terms of best solutions and CPU time.


Introduction
Scheduling, generally speaking, means assigning machines to jobs in order to complete all jobs under the imposed constraints. The problem is to find the optimal processing order of the jobs on each machine so as to minimize a given objective function. There are two general constraints in classical scheduling theory [1]: each job is processed by at most one machine at a time, and each machine processes at most one job at a time. A schedule is feasible if it satisfies these two general constraints as well as the various requirements relating to the specific problem type. The problem type is specified by the machine environment, the job characteristics, and an optimality criterion. For simultaneous (multicriteria) objective functions, there are two approaches: the first is to minimize the sum of the objectives, while the second typically generates all efficient schedules (the set of Pareto optimal solutions) and selects the one that yields the best composite objective function value of the criteria [2].

Several studies on local search methods have shown that they lead to significantly better results than those obtained from traditional heuristics if they are implemented carefully [3]. Local search heuristics are built upon observations of processes in the physical and biological sciences [4]. In 2012, Abdul-Razaq et al. [5] suggested a new development of flow shop scheduling methods to minimize makespan problems; they also applied two local search methods, namely the GA and PSO algorithms, to flow shop problems. In 2015, Ali [6] used some types of LSMs (GA, PSO, and the Bees Algorithm (BA)) to solve some types of combinatorial optimization problems, such as multicriteria machine scheduling problems (MSP). Section two introduces two important local search methods (SA and PSO). In section three, the mathematical formulations of the 1//(∑C_i, ∑E_i) and 1//∑(C_i + E_i) problems are discussed. The practical and comparative results are introduced in section four. Lastly, in section five, the most important conclusions and some recommendations are presented.

Local Search Methods
Local search methods (LSMs) form a very general class of heuristic methods for treating discrete optimization problems (DOPs). Such a problem is given by a finite set S of feasible solutions and an objective function f: S → R. The goal is to find a solution with minimal objective value, i.e., we look for a solution s* ∈ S with f(s*) ≤ f(s) for all s ∈ S. Generally speaking, LSMs move iteratively through the solution set S of a DOP. Based on the current solution, and possibly on the previously visited solutions, a new solution is chosen. The basic structure of a local search algorithm is as follows:
Choose an initial solution;
REPEAT
Choose a solution from the neighborhood of the current solution and move to this solution;
UNTIL stopping criteria are met.
Evolutionary Algorithms (EAs) and Local Search Methods have been shown to be successful for a wide range of optimization problems [1].
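A minimal Python sketch of this generic loop is given below; the neighborhood move (a random swap of two positions in a job sequence) and the stopping criterion (a fixed iteration budget) are illustrative assumptions, not choices prescribed by the paper.

```python
import random

def local_search(initial, objective, max_iters=1000):
    """Generic local search skeleton: repeatedly move to a neighbor
    of the current solution until the stopping criterion is met."""
    current = list(initial)
    current_val = objective(current)
    best, best_val = list(current), current_val
    for _ in range(max_iters):
        # Neighborhood move (assumption): swap two random positions.
        i, j = random.sample(range(len(current)), 2)
        neighbor = list(current)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        val = objective(neighbor)
        # Accept the neighbor if it does not worsen the objective.
        if val <= current_val:
            current, current_val = neighbor, val
            if current_val < best_val:
                best, best_val = list(current), current_val
    return best, best_val
```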

Simulated Annealing (SA)
Simulated annealing is an algorithmic method that is able to escape from local minima. It is a randomized local search method for two reasons: first, a neighbor is randomly selected from the neighborhood of a solution; second, in addition to better-cost neighbors, which are always accepted if they are selected, worse-cost neighbors are also accepted, although with a probability that is gradually decreased in the course of the algorithm's execution. This randomized nature enables asymptotic convergence to optimal solutions under certain mild conditions. Nevertheless, the energy landscape, which is determined by the objective function and the neighborhood structure, may admit many and/or "deep" local minima. Therefore, avoiding local minima is a crucial part of the performance of the algorithm [7, 8].
The SA algorithm starts by generating a random initial solution s; then a neighbor s' is generated and the difference ∆ = F(s') − F(s) in the objective function is calculated. If ∆ < 0, the neighbor s' is accepted as the new solution for the next iteration, since it has a better function value. If the objective function value does not decrease (i.e., ∆ ≥ 0), the generated neighbor may also be accepted, with probability exp(−∆/T), where T is a control parameter called the temperature. The temperature is reduced by a cooling technique in every iteration. As a stopping criterion, one may use, e.g., a given number of iterations, a time limit, or a given number of iterations without an improvement of the best objective function value. In the first two cases, one must adjust the cooling scheme in such a way that SA stops with a small temperature [3]. Let B be an integer such that B ∈ [2, 5], let N*(s) be the neighborhood of s, and let p(∆, t_k) be the acceptance probability, which depends on the exponential function.
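A minimal Python sketch of this acceptance scheme follows. The geometric cooling rule T ← α·T, the small positive final temperature, and the iteration counts are illustrative assumptions (the paper's own settings appear in the practical-results section).

```python
import math
import random

def simulated_annealing(initial, objective, T=10000.0, alpha=0.95,
                        T_final=1e-3, iters_per_temp=50):
    """SA sketch: always accept improving neighbors; accept worsening
    neighbors with probability exp(-delta / T)."""
    current = list(initial)
    current_val = objective(current)
    best, best_val = list(current), current_val
    while T > T_final:
        for _ in range(iters_per_temp):
            # Neighbor (assumption): swap two random positions in the sequence.
            i, j = random.sample(range(len(current)), 2)
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            val = objective(neighbor)
            delta = val - current_val
            if delta < 0 or random.random() < math.exp(-delta / T):
                current, current_val = neighbor, val
                if current_val < best_val:
                    best, best_val = list(current), current_val
        T *= alpha  # geometric cooling (assumption)
    return best, best_val
```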

Particle Swarm Optimization (PSO)
PSO has found applications in many areas. In general, all the application areas in which other evolutionary techniques perform well are good application areas for PSO [9]. PSO was originally developed by two specialists, the social psychologist J. Kennedy and the electrical engineer R. Eberhart, in 1995 [10]. It emerged from earlier experiments with algorithms that modeled the "flocking behavior" seen in many swarms of birds. It is an optimization algorithm that falls under the umbrella of evolutionary algorithms, which covers many other algorithms as well. PSO is a very simple concept that can be implemented without complex data structures. No costly or complex mathematical functions are used, and it does not require a great amount of memory [11]. Its fast convergence, small number of control parameters, very simple computations, good performance, and lack of derivative computations make it an attractive option for solving such problems. The PSO algorithm depends on the following two relations:

v_id = w·v_id + c_1·r_1·(p_id − x_id) + c_2·r_2·(p_gd − x_id)   (1.a)
x_id = x_id + v_id   (1.b)

where w is the inertia weight for convergence, c_1 and c_2 are positive constants, r_1 and r_2 are random functions in the range [0, 1], X_i = (x_i1, x_i2, …, x_id) represents the i-th particle, P_i = (p_i1, p_i2, …, p_id) represents the best previous position (pbest; the position giving the best fitness value) of the i-th particle, the symbol g represents the index of the best particle among all the particles in the population, and V_i = (v_i1, v_i2, …, v_id) represents the rate of the position change (velocity) for particle i [9]. The PSO algorithm is as follows.

Algorithm (2): Particle Swarm Optimization (PSO) algorithm
Step(1): Initialize a population of particles with random positions and velocities in the d-dimensional problem space.
Step(2): For each particle, evaluate the desired optimization fitness function in d variables.
Step(3): Compare each particle's fitness evaluation with its pbest. If the current value is better than pbest, then set pbest equal to the current value and p_i equal to the current location x_i.
Step(4): Identify the particle in the neighborhood with the best success so far, and assign its index to the variable g.
Step(5): Change the velocity and position of the particle according to equations (1.a) and (1.b).
Step(6): Loop to step (2) until a criterion is met.
The main parameters that affect the good performance of PSO are as follows [10]:
1. The number of particles in a given swarm affects the run-time of the algorithm; thus there is a trade-off between the number of particles and the speed of the algorithm.
2. The maximum velocity parameter (v_max) limits the maximum jump of a particle in the swarm.
A minimal sketch of the full algorithm is given below.
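The following Python sketch applies equations (1.a) and (1.b) to continuous position vectors and decodes each position into a job permutation by sorting (random-keys decoding). The decoding scheme and the parameter values w = 0.7, c_1 = c_2 = 2.0 are assumptions of this sketch; the paper does not specify its permutation encoding here.

```python
import random

def pso(objective, n, n_particles=20, w=0.7, c1=2.0, c2=2.0, iters=200):
    """PSO sketch: velocity/position updates per equations (1.a)/(1.b);
    a position vector is decoded to a job permutation by sorting
    (random-keys decoding, an assumption of this sketch)."""
    def decode(x):
        # Jobs ordered by ascending position component.
        return sorted(range(n), key=lambda j: x[j])

    X = [[random.random() for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [list(x) for x in X]                      # pbest positions
    P_val = [objective(decode(x)) for x in X]     # pbest values
    g = min(range(n_particles), key=lambda i: P_val[i])  # gbest index

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                # Equation (1.a): velocity update.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (P[g][d] - X[i][d]))
                # Equation (1.b): position update.
                X[i][d] += V[i][d]
            val = objective(decode(X[i])))if False else objective(decode(X[i]))
            if val < P_val[i]:
                P[i], P_val[i] = list(X[i]), val
                if val < P_val[g]:
                    g = i
    return decode(P[g]), P_val[g]
```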

Total Completion Time and Total Earliness Time (∑C_i, ∑E_i)
The problem can be described as follows: a set of n jobs N = {1, 2, …, n} is to be scheduled on a single machine, and we seek a schedule σ ∈ S (where S is the set of all feasible schedules) that minimizes the multicriteria objective (∑C_i, ∑E_i), where C_i denotes the completion time of job i and E_i = max{d_i − C_i, 0} its earliness with respect to the due date d_i. The (∑C_i, ∑E_i) problem, denoted by P, can be written as [12]:

(P)   min_{σ∈S} ( ∑_{i=1}^{n} C_i(σ), ∑_{i=1}^{n} E_i(σ) )
      subject to: E_i(σ) = max{d_i − C_i(σ), 0}, i = 1, …, n.

For the P-problem, we can take the sum of the two objectives to obtain the P_1-problem, which is formulated as follows:

(P_1) min_{σ∈S} ∑_{i=1}^{n} ( C_i(σ) + E_i(σ) ).
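To make the two objectives concrete, the following sketch evaluates ∑C_i and ∑E_i for a given job sequence on a single machine; the processing times p and due dates d in the example are hypothetical.

```python
def total_completion_and_earliness(sequence, p, d):
    """Return (sum of completion times, sum of earliness) for a job
    sequence on a single machine, with E_i = max(d_i - C_i, 0)."""
    t = 0
    total_C, total_E = 0, 0
    for job in sequence:
        t += p[job]               # completion time C_i of this job
        total_C += t
        total_E += max(d[job] - t, 0)
    return total_C, total_E

# Example (hypothetical data): 3 jobs with processing times and due dates.
p = {0: 2, 1: 4, 2: 3}
d = {0: 5, 1: 6, 2: 10}
print(total_completion_and_earliness([0, 2, 1], p, d))  # -> (16, 8)
```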

Practical Results of Implementing the Local Search Methods (LSMs)
In this section, we apply the two proposed LSMs, SA and PSO. The following parameters are used:
1. For SA: cooling rate 0.95, initial temperature 10000, final temperature 0, R a uniform random number, and several hundred generations.
2. For PSO, which is used here for the first time for multicriteria MSP, our experience suggests the following parameters: number of particles (N_par = 20, 30), maximum velocity (V_max = number of jobs n), minimum velocity (V_min = 1), inertia weight (w), first acceleration parameter (c_1), second acceleration parameter (c_2 = c_1), and random r_1 and r_2 to maintain the diversity of the population.
We apply SA and PSO for some chosen n with the following initial solutions:
1. A random initial solution for SA and PSO.
2. Initial solutions for SA and PSO obtained from the two heuristics, namely SPT-MST-SCSE for the P-problem and DR-SCSE for the P_1-problem [12].
We use the notation LSM(ob, i) to specify the type of LSM (SA or PSO) that is used, where ob denotes the objective function (F or F_1) of the problem (P or P_1, respectively), and i = 1, 2 denotes the kind of initial solution (random, or chosen from the heuristic method). For example, PSO(F_1, 2) means that the LSM is PSO, the problem is P_1, and the initial solution is obtained from DR-SCSE, while SA(F, 1) means that the LSM is SA, the problem is P, and the initial solution is random.
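As an illustration of seeding an LSM with a heuristic initial solution, the sketch below feeds a plain SPT (shortest processing time) ordering into the simulated_annealing sketch given earlier. SPT is only a simple stand-in here; the composite SPT-MST-SCSE and DR-SCSE rules are specified in [12], not in this sketch.

```python
# Seeding a local search with a heuristic initial solution (sketch,
# reusing total_completion_and_earliness and simulated_annealing above).
p = {0: 2, 1: 4, 2: 3}   # hypothetical processing times
d = {0: 5, 1: 6, 2: 10}  # hypothetical due dates

# SPT ordering: jobs by ascending processing time -- a simple stand-in
# for the paper's composite heuristics (SPT-MST-SCSE, DR-SCSE).
spt_order = sorted(p, key=lambda j: p[j])

def F1(seq):
    """P_1-problem objective: total completion time plus total earliness."""
    total_C, total_E = total_completion_and_earliness(seq, p, d)
    return total_C + total_E

best_seq, best_val = simulated_annealing(spt_order, F1)
```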

Comparison Results of the P-problem
Tables-(1 and 2) show the comparison results of applying SA(F, 1) versus SA(F, 2) and PSO(F, 1) versus PSO(F, 2) for the P-problem for chosen n. From Tables-(1 and 2), we notice the good performance of LSM(F, 2) for the P-problem; therefore, it will be used in the subsequent comparison tables. For simplicity, we use LSM(ob) instead of LSM(ob, 2). Table-3 shows the comparison results of the LSM methods (SA(F) and PSO(F)) against the CEM(F) method for solving the P-problem, n = 4:10. Table-6 shows the comparison results of applying SA(F_1, 1) versus SA(F_1, 2) and PSO(F_1, 1) versus PSO(F_1, 2) for the P_1-problem for chosen n; for simplicity, we use LSM(F_1) instead of LSM(F_1, 2) when comparing with other methods. In Table-7, we show a comparison between the optimal results of CEM(F_1) and the results of the local search methods SA(F_1) and PSO(F_1), n = 4:10, for the P_1-problem. Table-8 shows the results of BAB(F_1) for the P_1-problem compared with the results of the SA(F_1) and PSO(F_1) methods, n = 11:15. Table-9 describes the efficient solutions for the P_1-problem for n = 30:70, 100, 300, 700, 1000, using DR-SCSE(F_1) compared with the SA(F_1) and PSO(F_1) methods.