
Journal of Theoretical and Applied Computer Science
ISSN 2299-2634
Vol. 7, No. 1, 2013, pp. 46-55
http://www.jtacs.org
Choice of best possible metaheuristic algorithm for the
travelling salesman problem with limited computational
time: quality, uncertainty and speed
Marek Antosiewicz, Grzegorz Koloch, Bogumił Kamiński
Warsaw School of Economics
[email protected]
Abstract:
We compare six metaheuristic optimization algorithms applied to solving the travelling
salesman problem. We focus on three classical approaches: genetic algorithms, simulated
annealing and tabu search, and compare them with three recently developed ones: quantum
annealing, particle swarm optimization and harmony search. On top of that we compare all
results with those obtained with a greedy 2-opt interchange algorithm. We are interested in
short-term performance of the algorithms and use three criteria to evaluate them: solution
quality, standard deviation of results and time needed to reach the optimum. Following the
results from simulation experiments we conclude that simulated annealing and tabu search
outperform newly developed approaches in short simulation runs with respect to all three
criteria. Simulated annealing finds best solutions, yet tabu search has lower variance of results and converges faster.
Keywords: metaheuristic algorithms, travelling salesman problem
1. Introduction
Defined in the 1930s, the travelling salesman problem (TSP) is one of the most significant problems in combinatorial optimization [11]. It is important both as a separate problem
and as a part of more complex optimization problems [2]. Also, since it is strongly NP-complete [4], in practical applications for large-scale problem instances it is only possible to
use heuristic optimization algorithms which give approximate solutions.
Some of the most successful heuristic algorithms that have been used to solve the TSP
and its variants include genetic algorithms [7], simulated annealing [10] and tabu search [9].
Recently, however, new optimization algorithms such as swarm optimization [14], harmony
search [1] and quantum annealing [6] are becoming increasingly popular.
Earlier works [8, 12, 3] considered comparisons of classical deterministic or metaheuristic algorithms. The aim of our work is to extend this strand of research by considering comparisons of the effectiveness of the latest optimization metaheuristics: swarm optimization,
harmony search and quantum annealing with traditional ones. All compared algorithms are
non-deterministic; therefore, in order to assess their performance meaningfully, we evaluate
them using three criteria: (i) mean value of the objective function of solutions found by the
algorithms, (ii) its standard deviation and (iii) run-time of the algorithms.
Traditionally, metaheuristic algorithms have been compared according to large sample
or asymptotic quality of solutions and mostly the best obtained solution or the mean quality
of solutions obtained in multiple runs is reported [13, 5]. We are, however, interested in a
business scenario, where the decision-maker is forced to constantly repeat simulations of
small and medium sized test cases. Such a situation naturally arises in the case of transportation problems, in which vehicles must visit no more than a few hundred locations and new
locations might be added at any moment resulting from incoming new orders. Therefore we
compare algorithms on medium-size problems in limited running time taking into account
mean solution quality, variance of obtained results and convergence speed.
The conclusions from the study carried out in this paper are also intended to facilitate the
choice of an algorithm for more complex transportation problems, such as the Vehicle Routing Problem with Loading Constraints or Vehicle Routing problem with Time Windows.
For such problems effective exact algorithms do not exist, therefore one is left with the choice
of a metaheuristic. Due to the similarity of the TSP and VRP, a good algorithm for the TSP
is likely to perform well in more complex transportation problems. However, we decided to
use the TSP as a benchmark because the test cases used in this paper
can be solved to optimality by existing exact algorithms, such as the Concorde TSP solver
(http://www.tsp.gatech.edu/concorde/index.html). This allows us to objectively assess the
quality of the solutions obtained by different metaheuristic algorithms.
Summing up the discussion: metaheuristic optimization is intrinsically stochastic, and in many cases decision makers cannot afford to run the solver long enough to enjoy asymptotic solution quality. Therefore, an interesting practical question that we explore is the following: how much uncertainty is involved in competing approaches when only a limited time can be devoted to each run? Such a question leads to a multi-criteria comparison which should include the following factors:
1) expected fitness of the solution,
2) the uncertainty of the solutions,
3) algorithm convergence speed.
In this study we formulate appropriate informative assessment criteria and apply them to
the TSP problem.
The remainder of the paper is organized as follows. In Section 2 we describe the formulation of the TSP which we used for the test cases and specify the implementation of the employed algorithms. Next, in Section 3, we outline the proposed testing methodology and report the results
of simulation experiments.
2. Optimization methods
In this section we formally define the TSP and then we describe optimization methods
compared in the paper.
Consider an undirected complete weighted graph $G$. Assume that it has $n$ vertices labeled by integers from 1 to $n$ and that the distance between vertices $i$ and $j$ is denoted by $d_{i,j}$, for $i, j \in \{1, \dots, n\}$. The optimal solution of the TSP is defined as the shortest Hamiltonian cycle in $G$.
This problem can be formulated as a binary programming task:
$$\min \sum_{t=1}^{n-1} \sum_{i=1}^{n} \sum_{j=1}^{n} d_{i,j}\, x_{i,t}\, x_{j,t+1} + \sum_{i=1}^{n} \sum_{j=1}^{n} d_{i,j}\, x_{i,n}\, x_{j,1}$$

subject to:

$$\forall i \in \{1, \dots, n\}: \sum_{t=1}^{n} x_{i,t} = 1$$

$$\forall t \in \{1, \dots, n\}: \sum_{i=1}^{n} x_{i,t} = 1$$

$$\forall i, t \in \{1, \dots, n\}: x_{i,t} \in \{0, 1\} \qquad (1)$$
Decision variable $x_{i,t}$ takes value 1 if vertex $i$ is visited in step $t$ and is 0 otherwise. The
objective function measures the distance between vertices visited along the path, and its last
term ensures that the path forms a cycle. The first constraint ensures that each vertex is visited
exactly once and the second constraint means that in each step exactly one vertex is visited.
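The objective above is simply the length of the closed tour encoded by the decision variables. As a minimal Python sketch (the distance matrix below is hypothetical, not from the paper), the tour length used by all algorithms in this study can be computed as:

```python
def tour_length(path, dist):
    """Total length of the Hamiltonian cycle visiting `path` in order,
    including the closing edge back to the first vertex."""
    n = len(path)
    return sum(dist[path[t]][path[(t + 1) % n]] for t in range(n))

# Tiny illustrative instance (hypothetical symmetric distances).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tour_length([0, 1, 3, 2], dist))  # 0->1->3->2->0 = 2+4+3+9 = 18
```

The modulo index implements the cycle-closing term of the objective function.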
Now let us move to the description of six metaheuristic optimization methods compared
in the paper: genetic algorithms, simulated annealing, tabu search, swarm optimization,
harmony search and quantum annealing.
2.1. Genetic algorithm
Genetic algorithms are stochastic search methods that imitate the process of Darwinian
natural selection. In each iteration the algorithm processes not one solution but a whole
population of solutions, which is called a generation. The main building blocks of a genetic
algorithm are the following three operators: (1) selection operator, (2) crossover operator
and (3) mutation operator.
In each iteration the population of solutions is transformed into the next generation population using the above three operators in the following way. Firstly, solutions are put in
pairs called parents through the use of the selection operator. The selection operator which
we used is a tournament selection. Tournament selection generates two random permutations of the current population, compares objective values of corresponding solutions and
picks the better one to the set of parent solutions. Next, consecutive parents from the set of
parent solutions are matched two-by-two using the crossover operator. Each pair of parents
is transformed into two other solutions called offspring using the crossover operator. The
crossover operator we utilized is the 2-point permutation crossover (PMX), which works in
the following way. First, two crossing points in the parents (i.e. paths) are chosen at random.
To produce an offspring (say, the first one), vertices from the first parent between the two
crossing points are transferred into the offspring solution. The remaining vertices are placed
in the offspring according to the order in which they appear in the second parent solution,
see Figure 1 below. The second offspring is generated in an analogous way.
After the population of offspring solutions has been generated, each of them is altered with a small probability $p_m$, called the mutation probability, using the mutation operator. The mutation operator we used is a standard swap operator, which randomly chooses two vertices in
the solution and switches their positions.
Parent solutions:
0 1 2 | 3 4 5 6 | 7 8 9
2 5 0 | 7 3 8 4 | 9 6 1

Offspring solutions:
2 0 7 | 3 4 5 6 | 8 9 1
0 1 2 | 7 3 8 4 | 5 6 9

Figure 1. Illustration of the crossover operator; crossing points are denoted by vertical lines. The numbers are the numbers of nodes visited along the path
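The crossover step described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the paper's C# code; it copies the segment between the crossing points from the first parent and fills the remaining positions in the order of the second parent:

```python
def crossover(parent1, parent2, cut1, cut2):
    """Two-point permutation crossover as described above: the segment
    parent1[cut1:cut2] is copied into the offspring; the remaining
    vertices are placed in the order they appear in parent2."""
    n = len(parent1)
    child = [None] * n
    child[cut1:cut2] = parent1[cut1:cut2]
    segment = set(parent1[cut1:cut2])
    # Vertices still missing, in the order they occur in the second parent.
    fill = [v for v in parent2 if v not in segment]
    positions = [i for i in range(n) if child[i] is None]
    for pos, v in zip(positions, fill):
        child[pos] = v
    return child

p1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [2, 5, 0, 7, 3, 8, 4, 9, 6, 1]
print(crossover(p1, p2, 3, 7))  # [2, 0, 7, 3, 4, 5, 6, 8, 9, 1]
```

With the crossing points of Figure 1 this reproduces the first offspring; swapping the parents gives the second.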
2.2. Simulated annealing
Simulated annealing is a metaheuristic algorithm which processes only a single solution.
In order to use this approach we must first define the neighborhood of a solution. We defined the neighborhood of a current solution as all the solutions that can be reached from the
current solution by exactly one application of the swap operator (as described in Section 2.1).
Simulated annealing works as follows. In each iteration a solution from the neighborhood of the current solution is randomly chosen. If the new solution is better than the current one, the current solution is replaced by the new one. If not, it is replaced by the new one
with a certain probability which is a function of the number of iterations which passed since
the beginning of the optimization and of the difference in objective function values between
the two considered solutions. Probability of accepting a solution which is worse than the
current one decreases as the algorithm proceeds. In the first stages of optimization simulated
annealing behaves more like a random search procedure, then its bias toward accepting
only better solutions increases and in the final stages of optimization it operates according to
the greedy search principle.
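The loop described above can be sketched as follows. The paper does not specify the cooling schedule, so the geometric schedule and the parameter names (`t0`, `alpha`) below are our illustrative assumptions:

```python
import math
import random

def simulated_annealing(dist, iters=20000, t0=100.0, alpha=0.9995, seed=0):
    """Sketch of simulated annealing with a swap neighbourhood: a worse
    neighbour is accepted with probability exp(-delta / T), where T
    decreases geometrically (assumed schedule, not from the paper)."""
    rng = random.Random(seed)
    n = len(dist)
    path = list(range(n))
    rng.shuffle(path)                        # random initial solution
    def length(p):
        return sum(dist[p[i]][p[(i + 1) % n]] for i in range(n))
    cur = length(path)
    best, best_path = cur, path[:]
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)       # random swap neighbour
        path[i], path[j] = path[j], path[i]
        new = length(path)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best:
                best, best_path = cur, path[:]
        else:
            path[i], path[j] = path[j], path[i]  # reject: undo the swap
        temp *= alpha                        # cooling: bias toward greediness
    return best, best_path
```

At high temperature nearly every move is accepted (random search); as `temp` shrinks the exponential acceptance probability vanishes and the loop degenerates into greedy search, as described above.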
2.3. Tabu search
The Tabu search algorithm is very similar to simulated annealing. Both metaheuristics
process only one solution in each iteration and explore the neighborhood of the current solution (which is defined in the same way as in case of simulated annealing). However, in each
iteration of the tabu search algorithm the entire neighborhood of the current solution is explored and the best solution in the neighborhood is chosen as the new solution. In order to
prohibit the algorithm from cycling around local optima, a tabu list is created. The tabu list
stores the last operations that have been performed on solutions or the last solutions
visited during the search. While exploring the neighborhood of the current solution, it is
forbidden to accept as the next solution an element from the tabu list.
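A minimal sketch of this scheme, assuming a tabu list of recently performed swap moves (the list length `tenure` is illustrative; the paper does not state its value):

```python
import random
from collections import deque

def tabu_search(dist, iters=200, tenure=3, seed=0):
    """Sketch of tabu search: each iteration scans the whole swap
    neighbourhood, takes the best non-tabu move, and records the performed
    swap on a fixed-length tabu list."""
    rng = random.Random(seed)
    n = len(dist)
    path = list(range(n))
    rng.shuffle(path)
    def length(p):
        return sum(dist[p[i]][p[(i + 1) % n]] for i in range(n))
    tabu = deque(maxlen=tenure)              # forbidden recent moves
    best, best_path = length(path), path[:]
    for _ in range(iters):
        best_move, best_len = None, float("inf")
        for i in range(n - 1):               # full neighbourhood scan
            for j in range(i + 1, n):
                if (i, j) in tabu:
                    continue
                path[i], path[j] = path[j], path[i]
                l = length(path)
                path[i], path[j] = path[j], path[i]
                if l < best_len:
                    best_move, best_len = (i, j), l
        if best_move is None:
            break                            # every move is tabu
        i, j = best_move
        path[i], path[j] = path[j], path[i]  # accept best neighbour
        tabu.append(best_move)               # forbid reversing it soon
        if best_len < best:
            best, best_path = best_len, path[:]
    return best, best_path
```

Note that, unlike simulated annealing, the best neighbour is accepted even when it is worse than the current solution; the tabu list is what prevents cycling back.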
2.4. Swarm optimization
In each iteration of the swarm optimization a population of solutions called particles is
processed. Each particle possesses an attribute called velocity. Each particle knows its current solution and remembers the best solution it has found during the search as well as the
globally best solution found during the search by all the particles. The velocity of each particle is a list of standard swap operations. During every iteration of the optimization the velocity $v_i$ of each particle $i$ is updated according to the following rule:

$$v_i^t := v_i^{t-1} + r_1 \left(x_{best,i}^{t-1} - x_i^{t-1}\right) + r_2 \left(x_{best}^{t-1} - x_i^{t-1}\right), \qquad (2)$$

where $x_i^{t-1}$ denotes the solution encountered by the particle in iteration $t-1$; $x_{best,i}^{t-1}$ is the best solution the particle has encountered up to iteration $t-1$ and $x_{best}^{t-1}$ is the best solution encountered by all the particles up to iteration $t-1$; $x_{best,i}^{t-1} - x_i^{t-1}$ and $x_{best}^{t-1} - x_i^{t-1}$ denote the lists of swap operations which must be exerted on solution $x_i^{t-1}$ in order to arrive at solutions $x_{best,i}^{t-1}$ and $x_{best}^{t-1}$, respectively; $r_1$ and $r_2$ are random numbers uniformly drawn from the interval $[0,1]$ and they denote the probability of adding each of the swap operations to the velocity $v_i^t$. After $v_i^t$ has been calculated, all the swap operators in $v_i^t$ are exerted on the solution $x_i^{t-1}$ in order to derive $x_i^t$ and proceed with the search.
2.5. Harmony search
In each iteration of the harmony search algorithm a population of solutions is processed in order to produce a new solution. If the new solution is better than the worst solution in the current population it is accepted and the worst solution is removed from the
population. If it is worse than the worst solution in the current population, then it is rejected.
In order to use harmony search we must arbitrarily choose one vertex which is the starting location in the TSP path. The next solution is constructed from the current population in the following way. The first vertex of the path is set by definition as the starting location of the TSP path. For the remaining positions in the path, the vertex to be visited in the $k$-th position is, with probability $p$, randomly drawn from all the vertices on the $k$-th positions of all the solutions (paths) in the current population (if the randomly selected vertex has already been visited in the path, the vertex is randomly selected from the available vertices). With probability $1 - p$ it is randomly chosen from all the vertices which have not yet been used in the constructed path.
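The construction step above can be sketched as follows; `p_memory` is our illustrative name for the acceptance probability left unnamed in the text:

```python
import random

def construct_solution(population, n, start, p_memory, rng):
    """Build one new harmony-search path: position k is taken, with
    probability p_memory, from the k-th positions of the population's
    paths (falling back to a random unused vertex if it is already used),
    otherwise drawn uniformly from the unused vertices."""
    path = [start]
    used = {start}
    for k in range(1, n):
        # Vertices occupying position k across the current population.
        candidates = [p[k] for p in population]
        if rng.random() < p_memory:
            v = rng.choice(candidates)
            if v in used:  # already visited: fall back to an unused vertex
                v = rng.choice([u for u in range(n) if u not in used])
        else:
            v = rng.choice([u for u in range(n) if u not in used])
        path.append(v)
        used.add(v)
    return path
```

The fallback branch guarantees the result is always a valid permutation starting at the chosen location.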
2.6. Quantum annealing
Quantum annealing is a search metaheuristic which is very similar to simulated annealing described in Section 2.2. There are three main differences. Firstly, although this algorithm also randomly chooses a new solution from the neighborhood of the current solution,
the new solution is accepted if it is better than the current one or, if it is worse, it is accepted
with a probability which is a function of the difference in objective function values only (the
temperature is not used during the quantum annealing). Secondly, the objective function
value of a solution is calculated using the following formula:
$$f(s) = f_{dist}(s) + g(t) \cdot f_{avg}(s), \qquad (3)$$

where $f_{dist}(s)$ is the total distance of the path and $f_{avg}(s)$ is the average total distance of the paths of all solutions in the neighborhood of the current solution; $g(t)$ is a decreasing function of the iteration $t$.
Thirdly, the neighborhood used in this algorithm is defined by the use of the 2-opt operator, which randomly chooses two edges in the solution, replaces them with two new edges and inverts the order of the vertices between the chosen edges, see Figure 2.
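Formula (3) can be sketched as below (symbol names follow our reading of the text; `g_t` is the value of the decreasing function $g(t)$ at the current iteration). The nested loop over the full 2-opt neighbourhood is what makes each fitness evaluation, and hence each quantum annealing iteration, expensive:

```python
def qa_fitness(path, dist, g_t):
    """Fitness (3): total distance of `path` plus g(t) times the average
    total distance over its full 2-opt neighbourhood (each neighbour is
    obtained by reversing one segment of the path)."""
    n = len(path)
    def length(p):
        return sum(dist[p[i]][p[(i + 1) % n]] for i in range(n))
    total, count = 0.0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            # 2-opt move: invert the segment between positions i and j.
            neighbour = path[:i] + path[i:j + 1][::-1] + path[j + 1:]
            total += length(neighbour)
            count += 1
    return length(path) + g_t * (total / count)
```

With `g_t = 0` the fitness reduces to the plain tour length, which is the limit quantum annealing approaches as the iterations proceed.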
Figure 2. 2-opt interchange procedure: initial solution (left) and final solution (right). Dashed lines
represent the rest of the graph
2.7. Greedy 2-opt
This algorithm is initiated with a random solution and proceeds in the following way. In each iteration the algorithm applies the 2-opt operator to all possible pairs of positions on the path, starting from the pair (1,2), then (1,3) and so on. When it finds a 2-opt move that results in a better solution, the move is accepted and the procedure is repeated until no improvement can be found. Therefore, in each iteration the algorithm checks at most n(n-1)/2 possible moves.
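The loop just described can be sketched directly (an illustrative Python version, not the paper's C# code):

```python
def greedy_two_opt(path, dist):
    """Greedy 2-opt: scan all position pairs (i, j), accept the first
    improving segment reversal, restart the scan; stop when no 2-opt
    move improves the tour."""
    n = len(path)
    def length(p):
        return sum(dist[p[i]][p[(i + 1) % n]] for i in range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                # 2-opt move: reverse the segment between i and j.
                candidate = path[:i] + path[i:j + 1][::-1] + path[j + 1:]
                if length(candidate) < length(path):
                    path = candidate
                    improved = True
                    break
            if improved:
                break  # restart the scan from the improved tour
    return path
```

Because only strictly improving moves are accepted, the procedure terminates in a 2-opt local optimum; its quality depends entirely on the random starting tour, which explains the high run-to-run variance reported for this heuristic later in the paper.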
In the next section we show results of performance comparison of presented algorithms.
3. Simulation results and setup
In this section we first describe the simulation methodology and then we present results
of comparison of employed algorithms on sample test cases.
3.1. Simulation methodology
The algorithms are tested on 8 test sets: two comprising 20 vertices, two comprising 50 vertices, two comprising 80 vertices, and two test sets taken from the TSPLIB webpage1: att48 and eil76. The first six test case instances are randomly generated as sets of points on a two-dimensional plane with the Euclidean distance metric. Each coordinate of each vertex is a random number between 0 and 100. A sample test set is shown in Figure 3.
Figure 3. A sample test case (left) and its solution (right)
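The random instance generation described above can be sketched as follows (the seeding is illustrative; the paper does not describe its random number generation):

```python
import math
import random

def random_instance(n, seed=0):
    """Generate a test case as described above: n points with coordinates
    uniform in [0, 100], and the full Euclidean distance matrix."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n)]
    dist = [[math.hypot(a[0] - b[0], a[1] - b[1]) for b in pts] for a in pts]
    return pts, dist
```

The resulting matrix is symmetric with a zero diagonal, as required by the TSP formulation of Section 2.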
In order to get a comparison of the search methods used, each algorithm is initiated with
a random solution or with a random population of solutions. To calculate a sample standard
deviation of objective function values, each of the seven algorithms was tested on each test
case instance 10 times, which resulted in a total of 560 simulation runs. Each simulation run
was limited to a computing time of 100 seconds.
1 http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/
All algorithms were implemented in C# on the Microsoft Visual C# Express platform and
simulations were run on Intel Pentium Core Duo 3GHz, Windows 7 (64bit).
3.2. Algorithm comparison results
We compare the results of the simulation using three criteria: (1) mean value and (2)
standard deviation of the objective function of solutions found by the algorithms after 100 s.
and (3) run-time of the algorithms. Run-time is measured as the mean time needed to find the best solution obtained within the 100 s. limit.
The measurements of the mean value of the objective function criterion are presented in Table 1. They are reported for all eight test cases. Additionally, in the last column we report the objective function values of the optimal solutions for the considered test cases, found using the Concorde TSP Solver.
Table 1. Mean value of compared algorithms at stopping time (100 s.). Test column denotes the number of vertices in the test case (for all graph sizes two tests were generated). GA denotes genetic algorithm, HS - harmony search, PSO - particle swarm optimization, QA - quantum annealing, SA - simulated annealing, TS - tabu search, 2-OPT - greedy 2-opt heuristic, and OPT - optimal solution.

Test     GA     HS     PSO    QA     SA    TS    2-OPT   OPT
20 (a)   510    524    544    480    408   430   524     397
20 (b)   535    403    556    494    367   436   553     367
50 (a)   1613   1109   1790   1041   586   703   996     560
50 (b)   1576   1116   1746   1008   695   700   1011    571
80 (a)   2693   2446   2931   2154   802   779   1325    709
80 (b)   2812   2541   2098   2273   779   903   1280    687
att48    398    524    883    485    342   385   570     333
eil76    785    1284   1783   1170   582   642   887     538
The first conclusion is that simulated annealing uniformly outperforms all the other metaheuristics used in the experiment over all test cases. Furthermore, despite the relatively
small sample test cases, it also produces better results than the greedy 2-opt heuristic. The
greedy heuristic is also outperformed by tabu search, and for the smallest test case, it is additionally outperformed by harmony search and quantum annealing. Is has to be recalled
that all the algorithms were stopped after the 100 s. of runtime. This is because runtime
(along with financial cost) constitutes the most important operational constraint when solving practical optimization tasks.
It should be noted that simulated annealing (along with quantum annealing and tabu
search) explores in each iteration exactly one point (i.e. one solution) of the search space.
It is instructive to investigate the number of solutions effectively processed during the
100 s. of optimization. As can be seen from Table 2 on the next page, simulated annealing
processed the largest number of solutions, ranging from 12.5 million for the smallest test cases to 3.5 million for the largest test cases. The numbers of solutions processed by the algorithms for the test cases taken from TSPLIB are similar to those for the 50- and 80-vertex test cases, respectively, and are therefore not shown in Table 2.
Table 2. Number of explored possible TSP paths during 100 s. of simulation. Values given in thousands.

Graph size   GA     HS    PSO    QA     SA      TS
20           3000   525   4750   8200   12500   8000
50           650    81    1800   3500   6000    7000
80           275    40    1000   2400   3500    4500
The general conclusion is that population based algorithms tend to produce, within the
limited time span, worse results than the procedures which proceed with a single solution in
each iteration. What comes as a surprise is that quantum annealing, a procedure which is
very similar to simulated annealing and tabu search, obtains inferior solutions.
Summarizing, with respect to mean solution value criterion the “new” procedures –
harmony search, particle swarm optimization and quantum annealing do not outperform the
classical ones. The greedy algorithm did not produce better results than the best metaheuristics, therefore the use of metaheuristic algorithms is justified.
The measurements of standard deviation of objective function criterion are presented in
Table 3. In this comparison one can notice that the most stable solutions are obtained using
tabu search and genetic algorithms. In particular, it is worth reporting that simulated annealing, which had the best mean performance, can have significant problems with convergence to the optimal solution. The standard deviation of the greedy 2-opt algorithm, which results from the fact that it is initiated with a random solution, is one of the highest among all explored algorithms.
Table 3. Standard deviation of compared algorithms at stopping time (100 s.)

Graph size   GA    HS    PSO   QA    SA    TS   2-OPT
20 (a)       10    29    34    46    31    20   55
20 (b)       12    44    25    64    0     41   68
50 (a)       21    93    69    105   13    23   153
50 (b)       30    54    51    48    239   14   75
80 (a)       58    110   86    94    31    41   167
80 (b)       54    163   68    113   32    34   84
att48        14    36    39    41    5     4    77
eil76        252   68    57    64    9     12   65
Table 4. Mean time (in seconds) needed to reach the best solution found by compared algorithms within 100 s., by problem size; the minimum-maximum range is given in parentheses below each mean.

Test    GA           HS           PSO          QA            SA           TS           2-OPT
20      55.9         23.8         54.2         12.6          21.6         0.01         0.01
        (19-99.3)    (3.5-82.8)   (5.3-93.6)   (1.2-50.1)    (17-25.4)    (0.01-0.1)   (0.01-0.1)
50      48.7         89.5         41.4         94.0          65.0         49.0         49.0
        (0.7-97.7)   (67.0-99.3)  (1.3-100.0)  (82.9-99.9)   (55.0-73.2)  (1.1-92.7)   (1.1-92.7)
80      55.8         93.4         33.2         98.2          98.7         70.7         70.7
        (5.0-98.4)   (79.2-99.3)  (2.8-94.5)   (95.2-100.0)  (96.3-100)   (11.3-99.5)  (11-99.5)
att48   55.0         92           55           90            25           39           0.2
        (7.9-89.9)   (57.9-99.4)  (18.3-89.8)  (59.9-99.8)   (21.8-26.7)  (0.5-96.8)   (0.1-0.3)
eil76   75           93           53           97            37           69           1.3
        (4.1-99.0)   (84.2-98.4)  (1.9-97.7)   (90.8-99.9)   (34.3-45.0)  (20.9-99.8)  (0.8-1.8)
The measurements of mean, minimum and maximum time needed to find the best solution found by the algorithms are presented in Table 4. For almost all test cases the metaheuristic algorithm which was the quickest to find the best solution was tabu search. The
difference in speed is especially visible for smaller test cases. Particle swarm optimization is
the second best algorithm when it comes to speed, however we need to keep in mind that
this algorithm produced the worst results, especially for the largest test cases. Simulated
annealing is also a good algorithm with regard to speed. The max value for the medium test
case, which is equal to 73.2, suggests that the algorithm almost always reaches a good solution. For the largest test case we can deduce that all algorithms would find better solutions if
we increased run time to over 100 seconds. The maximum time values for the population
based algorithms for the smallest test case suggest that they can sometimes have problems
with reaching a good solution even for not very complicated instances. The nature of the
greedy algorithm results in it being the quickest algorithm, however one needs to keep in
mind that this comes at the cost of worse solution quality.
The results for all three comparison criteria are summarized in Table 5. The mean value criterion is calculated as an average over all test cases of the result relative to the exact optimal solution. The standard deviation criterion is an average over all test cases of the standard deviation relative to the mean standard deviation over all algorithms in the test case. The solution time criterion is calculated in the same way as the standard deviation criterion. For all criteria, the lower the number, the better the evaluation of the algorithm.
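The mean value normalisation described above can be sketched (and cross-checked against Table 1) as follows; reassuringly, applied to the simulated annealing column it reproduces the corresponding summary entry:

```python
def mean_value_criterion(means, optima):
    """Mean value criterion: average over the test cases of the ratio
    between an algorithm's mean tour length and the exact optimum."""
    return sum(m / o for m, o in zip(means, optima)) / len(means)

sa = [408, 367, 586, 695, 802, 779, 342, 582]    # Table 1, SA column
opt = [397, 367, 560, 571, 709, 687, 333, 538]   # Table 1, OPT column
print(round(mean_value_criterion(sa, opt), 2))   # 1.08, matching Table 5
```

A value of 1.0 would mean the algorithm hits the optimum on every test case, so simulated annealing is on average about 8% above the optimum.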
Table 5. Summary comparison of algorithms using the three criteria

Criterion    GA     HS     PSO    QA     SA     TS     2-OPT
Mean val.    2.37   2.15   2.97   2.02   1.08   1.21   1.68
Std. dev.    0.82   1.21   0.92   1.27   0.69   0.44   1.66
Sol. time    1.25   1.34   1.09   1.26   0.95   0.67   0.44
From the summary results we conclude that simulated annealing, tabu search and genetic algorithms have proven to be non-dominated. Tabu search is slightly worse at finding optimal solutions, but its results are more stable. It is also the algorithm which can find a
good solution in the smallest amount of time. Genetic algorithms produce relatively stable results but have worse performance with respect to solution quality and solution time.
4. Conclusions
In our study we have compared the performance of classical metaheuristic algorithms
with newer ones and with a greedy 2-OPT algorithm. One of the aims of this comparison is to identify algorithms which can be of greatest use in solving richer transportation problems. We concentrate on a scenario in which we can only devote a limited time to each simulation run. Due to the probabilistic nature of the assessed algorithms and due to a constraint on available resources which can be binding in practice, we opt for a multi-criteria comparison which involves three criteria: mean quality, dispersion of quality and time needed to reach the optimum. We find that recently developed algorithms produced far worse results than classical
metaheuristics. Simulated annealing, tabu search and genetic algorithms are non-dominated
decisions. Also, in such a simulation setup, the greedy algorithm does not outperform the
best metaheuristics.
The difference in solution quality is especially visible for the larger test cases. The best
quality solutions were generated using simulated annealing. The performance of tabu search
was also satisfactory and importantly – guaranteed stable “run to run” optimization results.
Both of these are algorithms which process a single solution in each iteration, in contrast to the other algorithms, which process a population of solutions in each iteration. These are also the algorithms which, during the designated time span, traverse the largest number of trial solutions. Quantum annealing, which is similar to the two best algorithms, produces poor quality results mainly because it effectively processes a very small number of solutions, due to the fact that calculating its fitness values requires a lot of computational time. With regard to solution time, among the metaheuristic algorithms tabu search is the best one, with swarm optimization and simulated annealing in second place. They are outperformed by the greedy 2-OPT algorithm, but this is its only advantage, as it is found to generate low quality solutions.
In this work we have focused on the identification of non-dominated optimization algorithms. The choice of a single method from the Pareto-efficient ones depends on the decision maker's preferences and can be made using one of the standard solution selection procedures.
References
[1] Geem Z.W., Kim J.H., Loganathan G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation, No. 2/76, 2001, pp. 60-68
[2] Gutin G., Punnen A.P. The Traveling Salesman Problem and Its Variations. Kluwer Academic Publishers, 2002
[3] Hahsler M., Hornik K. TSP - Infrastructure for the Traveling Salesperson Problem. Journal of Statistical Software, No. 2/23, 2007
[4] Johnson D.S., Papadimitriou C.H. Computational Complexity. In: E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, D.B. Shmoys (eds), The Travelling Salesman Problem, Wiley, New York, 2002
[5] Johnson D.S., McGeoch L.A. Experimental Analysis of Heuristics for the STSP. In: G. Gutin, A.P. Punnen (eds), The Travelling Salesman Problem and its Variations, Kluwer Academic Publishers, 2002
[6] Kadowaki T. Study of Optimization Problems by Quantum Annealing. PhD Thesis, Department of Physics, Tokyo Institute of Technology, 1998
[7] Kaur B., Mittal U. Optimization of TSP using Genetic Algorithm. Advances in Computational Sciences and Technology, No. 2/3, 2010, pp. 119-125
[8] Kim B.I., Shim J.I., Zhang M. Comparison of TSP Algorithms. Project for Models in Facilities Planning and Materials Handling, 1998
[9] Misevičius A. Using Iterated Tabu Search for the Travelling Salesman Problem. Informacinės Technologijos ir Valdymas, No. 32/3, 2004, pp. 29-40
[10] Ram D.J., Sreenivas T.H., Subramaniam K.G. Parallel Simulated Annealing Algorithms. Journal of Parallel and Distributed Computing, No. 37, 1996, pp. 207-212
[11] Schrijver A. On the history of combinatorial optimization (till 1960). In: K. Aardal, G.L. Nemhauser, R. Weismantel (eds), Handbook of Discrete Optimization, Elsevier, Amsterdam, 2005
[12] Sze S.N., Tiong W.K. A Comparison between Heuristic and Meta-Heuristic Methods for Solving the Multiple Traveling Salesman Problem. World Academy of Science, Engineering and Technology, No. 25, 2007, pp. 300-303
[13] Tan K.C., Lee L.H., Zhu Q.L., Ou K. Heuristic methods for vehicle routing problem with time windows. Artificial Intelligence in Engineering, No. 15, 2001, pp. 281-295
[14] Zhang C., Sun J., Wang Y., Yang Q. An Improved Discrete Particle Swarm Optimization Algorithm for TSP. Proceedings of Web Intelligence/IAT Workshops 2007, 2007, pp. 35-38