Exact and Local Search Methods for Solving the Travelling Salesman Problem with a Practical Application

This paper investigates exact and local search methods for solving the traveling salesman problem (TSP). The Branch and Bound technique (BABT) is proposed as an exact method, with two models. In addition, the classical Genetic Algorithm (GA) and Simulated Annealing (SA) are discussed and applied as local search methods. To improve the performance of the GA, we propose two improvements: the first is called the Improved GA (IGA) and the second the Hybrid GA (HGA). The IGA gives better results than the GA and SA, while the HGA is the best of all the local search methods, running within a reasonable time for 5 ≤ n ≤ 2000, where n is the number of visited cities. An effective method for reducing the size of the TSP matrix in the presence of successive rules is also proposed. Finally, the problem of the total travel cost over the Iraqi cities is discussed and solved by the exact methods, in addition to the local search methods, to obtain the optimal solution.

x_ij = 0 or 1, where n is the number of cities [6].

3. Some Heuristic Methods to Solve TSP
This section discusses two heuristic methods [7]: the Greedy method and the Improved Minimum Distance method.
The Greedy method (GRM) starts by sorting the edges by length and always adds the shortest remaining available edge to the tour. The shortest edge is available if it has not yet been added to the tour and if adding it would create neither a vertex of degree three nor a cycle with fewer than n edges. This heuristic can be implemented to run in O(n² log n) time.
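A minimal sketch of this greedy edge-selection idea, assuming a symmetric distance matrix; the union-find structure used for cycle detection is an implementation detail not specified in the text:

```python
import itertools

def greedy_tour(dist):
    """Greedy edge-selection heuristic (GRM): repeatedly take the shortest
    remaining edge that creates no vertex of degree three and no cycle with
    fewer than n edges."""
    n = len(dist)
    degree = [0] * n
    parent = list(range(n))           # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: dist[e[0]][e[1]])
    chosen = []
    for u, v in edges:
        if degree[u] == 2 or degree[v] == 2:
            continue                  # would create a degree-3 vertex
        ru, rv = find(u), find(v)
        if ru == rv and len(chosen) < n - 1:
            continue                  # would close a cycle before visiting all cities
        parent[ru] = rv
        chosen.append((u, v))
        degree[u] += 1
        degree[v] += 1
        if len(chosen) == n:          # the n-th edge closes the full tour
            break
    return chosen
```

On a complete graph the loop always finds a closing edge, since after n-1 accepted edges the chosen edges form a Hamiltonian path whose two endpoints still have degree one.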
The Minimizing Distance Method (MDM) is an efficient method for finding a good solution, but it has a weak point. This weak point is addressed by the Improved Minimum Distance Method (IMDM) suggested in [7]. The IMDM achieves good results with high efficiency for solving the TSP.

4. Branch and Bound Technique for Solving TSP

4.1 General Review of BABT
The branch and bound technique (BABT) is widely used for the TSP: it constructs a state space tree to find the optimal solution among all feasible solutions by evaluating the objective function. Branch and bound was initially studied by Dantzig, who also provided a more detailed description of its application to the TSP. The BABT enumerates the feasible solutions of the problem by trying a practical solution and storing its value as an upper bound for finding the optimal solution [8]. The general algorithm of the BABT is as follows:

Branch and Bound Technique (BABT) Algorithm [9]
Step 1: Choose a starting point.
Step 2: Choose one of the routes from that point.
Step 3: Add the distance of the chosen route between the current point and an unvisited point, then choose a new destination without revisiting any point.
Step 4: Keep doing this until every point has been visited.
Step 5: Add up the distances of each subgroup (route).
Step 6: Compare the resulting routes and pick the smallest one.
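The steps above can be sketched as a recursive branch and bound. The lower bound used here (cost of the partial route plus each unsequenced city's cheapest outgoing edge) is an illustrative choice, not the paper's IMDM-based bound:

```python
import math

def tsp_branch_and_bound(dist):
    """Branch-and-bound sketch: extend partial routes city by city and prune
    any subtree whose lower bound already exceeds the best tour found."""
    n = len(dist)
    cheapest_out = [min(dist[i][j] for j in range(n) if j != i)
                    for i in range(n)]
    best = {"cost": math.inf, "route": None}

    def branch(route, cost, visited):
        if len(route) == n:                      # close the tour
            total = cost + dist[route[-1]][route[0]]
            if total < best["cost"]:
                best["cost"], best["route"] = total, route[:]
            return
        # Lower bound: cost so far + cheapest exit from each unsequenced city.
        bound = cost + sum(cheapest_out[c] for c in range(n)
                           if c not in visited)
        if bound >= best["cost"]:
            return                               # prune this subtree
        for city in range(n):
            if city not in visited:
                route.append(city)
                visited.add(city)
                branch(route, cost + dist[route[-2]][city], visited)
                visited.remove(city)
                route.pop()

    branch([0], 0, {0})                          # fix city 0 as the start
    return best["cost"], best["route"]
```

The pruning test is the essence of branch and bound: a valid lower bound never overestimates the cost of completing a partial route, so discarded subtrees cannot contain the optimum.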

4.2 Using BABT for Solving TSP with Two Models
This section discusses the application of BABT to solve the TSP. It is well known that BABT is one of the most important exact methods for combinatorial optimization problems (COP). The method can work with different upper and lower bounds to obtain very good results within a good time. A particular choice of upper bound (UB) and lower bound (LB), written UB-LB, is called a model for BABT, with the notation BABT: UB-LB; we apply the IMDM or GRM methods for finding the UB.
The LB consists of two main parts, LB = (cost of the sequenced nodes) + (cost of the unsequenced nodes). The sequenced nodes form the partial route up to the current node, while the unsequenced nodes form the subsequence obtained from all the cities after eliminating the sequenced nodes; their cost is obtained by applying the IMDM method. We now discuss two techniques for BAB. The first technique is the classical BABT, denoted BABT1, whose algorithm is as follows:

BABT1 Algorithm
Step 1: Input n, D=[d_ij], i, j=1,...,n.
Step 2: Compute the initial UB by GRM or IMDM; set k=1.
Step3: For each node in the search tree compute the LB= cost of sequencing nodes + cost of unsequenced nodes; where the cost of the unsequenced nodes is obtained by GRM, BABM or IMDM, k=k+1.
Step 4: Branch each node with LB ≤ UB for level k.
Step 5: If k  n then go to step 3.
Step 6: If the last level (k=n-2) of BAB algorithm, the optimal solution is obtained.
The second technique is similar to BABT1 but with a modification: an improved LB is found by branching from the least-cost node, continuing until the root node is reached, and calculating the LB there; the initial UB is then updated by this new LB, after which the same steps of BABT1 are applied. The BABT2 algorithm is as follows:

BABT2 Algorithm
Steps 1 and 2: As in BABT1.
Step 3: Compute New_LB = (cost of the sequenced nodes) + (cost of the unsequenced nodes), where the cost of the unsequenced nodes is obtained by IMDM. If New_LB ≤ UB1, branch from this node and set UB1 = New_LB; repeat until reaching the root node and set UB = UB1. If New_LB > UB1 for all nodes, then UB = UB1.
Step 4: For each node in the search tree compute the LB, obtained by IMDM; set k = k+1.
Step 5: Branch each node with LB ≤ UB at level k.
Step 6: If k < n, go to Step 4.
Step 7: At the last level (k = n-2) of the BAB algorithm, the optimal solution is obtained.
In the practical examples we choose different values of n with 5 ≤ n ≤ 2000 and integer costs (distances) d_ij ∈ [1,30]. From Table-1 we notice that BABT1 is the best of the methods, so it is taken as the standard with which the other methods are compared for n > 12. Table-2 shows the comparison between BABT1: IMDM-IMDM on one side and BABT2: GRM-IMDM and IMDM on the other side for n = 13,…,20,25, where the standard method is BABT1.

Local Search Methods for Solving TSP
Metaheuristic (local search) algorithms are algorithms inspired by nature and by biological behavior. They produce high-quality solutions by applying a robust iterative generation process that explores and exploits the search space efficiently and effectively. Metaheuristics are currently a very active and promising research area; they can find near-optimal solutions in a reasonable time for many different COPs. Metaheuristic algorithms such as genetic algorithms (GA), particle swarm optimization (PSO), tabu search (TS), simulated annealing (SA), and ant colony optimization (ACO) are widely used for solving the TSP [10].

Simulated Annealing
Simulated Annealing (SA) is a trajectory-based optimization technique. It is basically an iterative improvement strategy with a criterion that sometimes accepts higher-cost configurations. The first attempts to apply SA to COPs were made in the 1980s; overviews of SA, its theoretical development, and its application domains are available in the literature. SA was inspired by the physical annealing process of solids, in which a solid is first heated and then cooled down slowly to reach a lower energy state. The Metropolis acceptance criterion, which models how thermodynamic systems move from one state to another, is used to determine whether the current solution is accepted or rejected [10].
In the original Metropolis scheme, the initial state of a thermodynamic system is chosen at some energy (cost C) and temperature t. Holding t constant, the initial configuration of the system is perturbed to produce a new configuration, and the change in energy ΔC is calculated. The new configuration is accepted unconditionally if ΔC is negative, whereas if ΔC is positive it is accepted with a probability given by the Boltzmann factor

P(accept) = exp(−ΔC / t) … (1)

in order to avoid being trapped in local optima. This process is repeated until good sampling statistics are reached for the current temperature; then the temperature is decreased and the process is repeated until a frozen state (free-energy state) is reached at t = 0 [10]. The SA algorithm is as follows:

Simulated Annealing Algorithm [10]
Step 1: Input: Temperature, FinalTemperature, cooling rate, ch;
Step 2: ch' = ch; Cost = Evaluate(ch');
Step 3: while (Temperature > FinalTemperature) do
  ch1 = Mutate(ch'); NewCost = Evaluate(ch1); ΔCost = NewCost − Cost;
  if (ΔCost ≤ 0) OR (exp(−ΔCost/Temperature) > Rand) then Cost = NewCost; ch' = ch1; end
  Temperature = cooling rate × Temperature
end
Step 4: Output: the best ch'.
Here the cooling rate is 0.95, the initial Temperature is 10000, the FinalTemperature is 0, Rand is a uniform random number, and the number of generations is 5000 iterations.
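A sketch of the SA algorithm above applied to the TSP. Swap mutation is assumed for Mutate, and a small positive final temperature stands in for 0 (geometric cooling never reaches exactly 0); the random starting tour and the default parameters here are illustrative:

```python
import math
import random

def simulated_annealing_tsp(dist, temperature=10000.0, final_temperature=1e-3,
                            cooling_rate=0.95, seed=0):
    """SA sketch for the TSP: a chromosome is a permutation of cities,
    Mutate swaps two random positions, and the Metropolis criterion of
    Eq. (1) decides whether to accept each move."""
    rng = random.Random(seed)
    n = len(dist)

    def evaluate(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    tour = list(range(n))
    rng.shuffle(tour)
    cost = evaluate(tour)
    best_tour, best_cost = tour[:], cost

    while temperature > final_temperature:
        i, j = rng.sample(range(n), 2)           # Mutate: swap two cities
        candidate = tour[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        new_cost = evaluate(candidate)
        delta = new_cost - cost                  # ΔC in Eq. (1)
        # Accept improvements always; accept worse moves with
        # probability exp(-ΔC / t).
        if delta <= 0 or math.exp(-delta / temperature) > rng.random():
            tour, cost = candidate, new_cost
            if cost < best_cost:
                best_tour, best_cost = tour[:], cost
        temperature *= cooling_rate
    return best_cost, best_tour
```

Tracking the best tour seen (rather than only the final one) is a common safeguard, since the walk may drift away from a good solution at moderate temperatures.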

Genetic Algorithm
Genetic algorithms (classical genetic algorithms, GA) are a derivative-free stochastic approach based on the biological evolutionary processes proposed by Holland. In nature, the most suitable individuals are likely to survive and mate; therefore, the next generation should be healthier and fitter than the previous one. A great deal of work on and applications of GAs are presented in a frequently cited book by Goldberg. GAs work with a population of chromosomes represented by codes over some underlying parameter set [4], where P is the number of chromosomes and Pc is the probability of crossover.

Genetic Algorithm [4]
Step 1: Create an initial population of P chromosomes and evaluate the fitness of each one.
Step 2: Choose Pc*P parents from the current population via the chosen selection scheme.
Step 3: Select two parents to create offspring using crossover operator.
Step 4: Apply mutation operators for minor changes in the results.
Step 5: Repeat Steps 3 and 4 until all chosen parents have been selected and mated; the remaining (1-Pc)*P chromosomes are initialized randomly.
Step 6: Replace old population with new one, with elitism for best chromosome.
Step 7: Evaluate the fitness of each chromosome in the new population.
Step 8: Terminate if the number of generations meets some upper bound; otherwise go to Step2.
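Steps 1-8 can be sketched as follows. Tournament selection and order crossover are assumptions chosen for illustration, since the paper does not fix particular selection or crossover operators at this point:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a slice from p1, fill the remaining
    positions with the missing cities in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = iter(c for c in p2 if c not in child[a:b + 1])
    for i in range(n):
        if child[i] is None:
            child[i] = next(fill)
    return child

def genetic_algorithm_tsp(dist, pop_size=30, pc=0.8, pm=0.005,
                          generations=200, seed=0):
    """Classical GA sketch following Steps 1-8 above; pop_size, Pc, and Pm
    mirror the parameter values reported later in the paper."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]  # Step 1
    for _ in range(generations):
        pop.sort(key=cost)
        new_pop = [pop[0][:]]                    # Step 6: elitism
        while len(new_pop) < pop_size:
            if rng.random() < pc:                # Steps 2-3: select and mate
                p1 = min(rng.sample(pop, 3), key=cost)
                p2 = min(rng.sample(pop, 3), key=cost)
                child = order_crossover(p1, p2, rng)
            else:                                # Step 5: re-initialize the rest
                child = rng.sample(range(n), n)
            if rng.random() < pm:                # Step 4: swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop                            # Steps 6-8: replace, loop
    return min(pop, key=cost)
```

Elitism (copying the best chromosome unchanged) guarantees the best fitness never degrades from one generation to the next.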

Improving Genetic Algorithm
In this section we attempt to improve the GA. We suggest choosing F chromosomes from the population of the GA, where F = P/4; these F chromosomes are improved using three techniques, which can be considered combinations of the crossover and mutation operators.

GA Operators
These techniques are implemented on the original chromosome as follows:
1- Simple Inversion Crossover [11]: Simple inversion selects two points along the length of the chromosome, which is cut at these points, and the substring between them is reversed.

2- Swap Mutation [12]: In pairwise swap mutation, the genes at two randomly chosen positions are swapped.

3- Displacement Mutation [12]: Displacement mutation pulls the first selected gene out of the string and reinserts it into a different place, sliding the substring down to form a new string.
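The three techniques can be sketched as pure functions; the cut/position indices are passed in as arguments here, whereas in practice they would be chosen at random:

```python
def simple_inversion(tour, i, j):
    """Simple inversion crossover: reverse the substring between the two
    cut points i and j (inclusive)."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def swap_mutation(tour, i, j):
    """Pairwise swap mutation: exchange the genes at positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def displacement_mutation(tour, i, j):
    """Displacement mutation: pull the gene at position i out and reinsert
    it at position j, sliding the intervening genes over."""
    t = tour[:]
    gene = t.pop(i)
    t.insert(j, gene)
    return t
```

For example, on the chromosome [1, 2, 3, 4, 5] with cut points 1 and 3, simple inversion yields [1, 4, 3, 2, 5].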
Applying the three techniques to one chromosome yields three new chromosomes; the fourth is the original chromosome. This procedure is called the mixing crossover-mutation algorithm (MCMA), which acts as follows:
Step 1: Create an initial population of P chromosomes and evaluate the fitness of each one.
Step 2: Choose the F = P/4 best chromosomes from the current population.
Step 3: Apply the three techniques to one of the F chromosomes and keep the fittest of the four resulting chromosomes.
Step 4: Repeat Step 3 until all F chromosomes have been processed.
Step 5: Replace old population with new one.
Step 6: Evaluate the fitness of each chromosome in the new population.
Step 7: Terminate if the number of generations meets some upper bound; otherwise go to Step2.

Results of Applying IGA
Before describing the results of applying the IGA, we must state the main GA and IGA parameters. These are: population size (pop_size) = 30, probability of crossover (Pc) = 0.8, probability of mutation (Pm) = 0.005, and number of generations (NG) = 2000 for n = 5,…,9, NG = 4000 for n = 10,…,14, and NG = 5000 for n ≥ 15.

Remark (1):
For the initial population of the GA, the SA, and the other improved algorithms, the GRM is suggested for obtaining one of the population chromosomes. An entry of [0,0] means no improvement over the initial solution. Table-5 shows the comparison between BABT1: IMDM-IMDM (or CEM, since they are identical) on one side and the IGA and SA on the other side for n = 5,…,20,25, where the standard method is BABT1. Table-6 shows the comparison between BABT2: GRM-IMDM on one side and the IGA and SA on the other side for n = 30,…,80, where the standard method is BABT2. Table-7 shows the comparison between the IMDM on one side and the IGA and SA on the other side for n = 90,100,…,500, where the standard method is the IMDM.

Hybrid Genetic Algorithm
From the classical GA and the IGA we see the good performance of the IGA, but the IGA still falls short of the results of methods such as BABT or IMDM, especially for large n. This suggests hybridizing the IGA with SA to increase the performance of the IGA. In this section we employ SA at an important position inside the IGA: we use SA to improve each chromosome output by the MCMA procedure. The suggested algorithm is called the Hybrid GA (HGA), and its main steps are as follows:

Hybrid Genetic Algorithm (HGA)
Step 1: Create an initial population of P chromosomes and evaluate the fitness of each one.
Step 2: Choose F=(P/4) best chromosomes from the current population.
Step 4: Repeat Steps 3 until all the F chromosomes are finished.
Step 5: For each chromosome of the population Call SA (chromosome).
Step 6: Replace old population with a new one.
Step 7: Evaluate the fitness of each chromosome in the new population.
Step 8: Terminate if the number of generations meets some upper bound; otherwise, go to Step2.
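A compact sketch of the HGA loop: the F = P/4 best chromosomes are expanded with the three MCMA operators, then every chromosome is refined by a short SA pass (Step 5). The population size, generation count, and SA schedule here are all illustrative, not the paper's settings:

```python
import math
import random

def hga_sketch(dist, pop_size=8, generations=30, seed=0):
    """Hybrid GA sketch: MCMA variation of the F best chromosomes followed
    by a simulated-annealing refinement of every chromosome."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def sa(t, temp=100.0, cooling=0.9, final=1e-2):
        """Short SA pass using swap moves and the Metropolis criterion."""
        c = cost(t)
        while temp > final:
            i, j = rng.sample(range(n), 2)
            s = t[:]
            s[i], s[j] = s[j], s[i]
            d = cost(s) - c
            if d <= 0 or math.exp(-d / temp) > rng.random():
                t, c = s, c + d
            temp *= cooling
        return t

    def mcma(t):
        """Original chromosome plus its inversion, swap, and displacement variants."""
        i, j = sorted(rng.sample(range(n), 2))
        inv = t[:i] + t[i:j + 1][::-1] + t[j + 1:]
        swp = t[:]
        swp[i], swp[j] = swp[j], swp[i]
        dsp = t[:]
        g = dsp.pop(i)
        dsp.insert(j, g)
        return [t, inv, swp, dsp]

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        f_best = pop[:pop_size // 4]             # F = P/4 best chromosomes
        children = [c for t in f_best for c in mcma(t)]
        # Step 5: refine every chromosome with SA, then keep the best P.
        pop = sorted((sa(t) for t in children + pop), key=cost)[:pop_size]
    return min(pop, key=cost)
```

Because SA is applied after variation, weak MCMA offspring can still be rescued, which is the motivation for the hybrid.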

Reduce Matrix using Successive Rules
The successive rules (SR) play a very important role in solving combinatorial optimization problems (COP), especially the TSP; these rules may be mandatory. The SRs help to reduce the number of cities, which means reducing the size of the problem, and this in turn reduces the computation time required to solve it. If the size of the TSP matrix is (n×n) and the number of SRs is m, then the size of the matrix after the reduction is (n-m)×(n-m). In order to use the SRs, we suggest the following matrix reduction algorithm:

Matrix Reduction Algorithm (MRA)
Step 1: Input n, D=[d_ij], i, j=1,...,n.
Step 2: Set k = 1.
Step 3: Take the SR c_i → c_j; set k = k+1.
Step 4: Reduce the matrix D by removing the two rows and two columns of c_i and c_j, replacing them with a single merged city, to obtain the reduced matrix.
Step 7: Print the reduced matrix with dimension (n-m)×(n-m).
Step 8: Stop.

For example, after applying the SR B→C (merging B and C into a single city H, with d(B,C) = 2 added to the cost of entering H via B), the reduced cost matrix is:

        A     D     E     F     G     H=CC_i
A       −     1     7     9     8     6+2=8
D      10     −     1    10     5     2+2=4
E       7     8     −     1     5    10+2=12
F       1     8     1     −     7    10+2=12
G       3     4     1     4     −     5+2=7
H       2    10     8     4     2     −

where C_i = A, D, E, F, and G according to i. The optimal route for the above TSP table is A D H G E F A, with C = 10. Now suppose we have the SRs (B→C and E→F) for the same example; we treat the two cities E and F as a single city, say I, and obtain a further reduced matrix.
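A sketch of one reduction step of the MRA for a rule c_i → c_j, merging the two cities into a single super-city. The fixed cost d[i][j] is added to the edges entering the merged city, mirroring the "+2" terms in the example; adding it to the outgoing edges instead would give the same tour costs:

```python
def reduce_matrix(dist, i, j):
    """Apply one successive rule c_i -> c_j: replace cities i and j with a
    merged city that is entered via c_i (paying d[i][j] extra) and left
    via c_j.  Returns the (n-1)x(n-1) matrix and the surviving labels."""
    n = len(dist)
    keep = [k for k in range(n) if k not in (i, j)]
    m = len(keep)
    INF = float("inf")
    reduced = [[INF] * (m + 1) for _ in range(m + 1)]
    for a, ka in enumerate(keep):
        for b, kb in enumerate(keep):
            if a != b:
                reduced[a][b] = dist[ka][kb]
        # Entering the merged city: travel to c_i, then the fixed c_i -> c_j leg.
        reduced[a][m] = dist[ka][i] + dist[i][j]
        # Leaving the merged city: depart from c_j.
        reduced[m][a] = dist[j][ka]
    return reduced, keep + [(i, j)]
```

Any tour of the reduced matrix expands to a tour of the original matrix that uses the mandatory edge c_i → c_j, with identical total cost.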

Remark (2):
When applying the MRA, both symmetric and asymmetric matrices are transformed into an asymmetric matrix.

Optimal Solution for Iraqi's Cities Problem using Some Methods and LCM
In this section we exploit the TSP to evaluate the minimum total cost (distance or time) for the Iraqi cities. Several methods are investigated to solve this problem: the Branch and Bound Technique (BABT), HGA, IGA, GA, and SA.

Iraqi Cities Problem (ICP) Definition
The Iraqi cities problem is an asymmetric TSP. Iraq consists of 18 governorates, and the traveling cost between each two governorate centers is known. We wish to find the minimum total travel cost over these cities, starting from the capital city, Baghdad, and returning to it without repeating the path between any two cities. The symbol of each city is given in the following table: