Motivation, Basic Concepts, Basic Methods, Travelling Salesman Problem, Algorithms

What is Combinatorial Optimization?
• Combinatorial Optimization deals with problems where we have to search for and
find an optimal combination of objects from a very large, but finite, number of
possibilities.
• The set containing the finite number of possibilities is called the solution space.
• Each solution (i.e., combination of objects) in the solution space has a cost or
benefit associated with it.
• Finding an optimal combination of objects means finding a combination of objects
that minimizes the cost or maximizes the benefit function.
• Within the solution space, there may be many solutions that represent very good
solutions (local optimum solutions), which have very low cost or very high benefit,
and one best solution (i.e., global optimum solution), which has the lowest cost or
the highest benefit.
Combinatorial Optimization Example
• You have n = 10,000 objects.
• Each object has a different value:
– Object 1: $9.47
– Object 2: $5.32
– …
– Object n: $7.44
• A combination is defined as a subset of 100 objects.
• Each object has to be used at least once, and not more than x times.
• The solution space consists of all possible combinations of 100 objects.
• Can you find a combination of 100 objects whose total value is closest to $1,245,678.90?
CO Example: Travelling Salesman
Problem
• The Travelling Salesman Problem (TSP) describes a salesperson who must travel a route to visit cities.
• The distance between each pair of cities is known.
• The salesperson's problem is to:
1. Visit each city exactly once (i.e., visit each city at least once and no more than once).
2. Return to the starting point (city). The starting city (starting point of the route) can be any of the cities.
3. Find a route that represents the minimum distance.
Combinatorial Optimization Algorithms
• Global optimum (exact) algorithms:
– Exhaustive search
– Held and Karp (1962): dynamic programming
– Branch-and-bound (and, later, branch-and-cut)
• The Concorde implementation holds the current record (finding the best route for 85,900 cities).
– However, even for the state-of-the-art Concorde implementation, these "exact" algorithms take a long time to compute.
• In April 2006, Concorde's solution of the 85,900-city instance took over 136 CPU-years.
• Evolutionary algorithms (heuristic and approximate):
– Hill climbing (and variants, such as random restarts)
– Simulated annealing
– Genetic algorithms
– Artificial neural networks
– Ant Colony Optimization
– Particle Swarm Optimization
• Exhaustive search suffers from a serious problem: as the number of variables increases, the number of combinations to be examined explodes.
• For example, consider the travelling salesperson problem with a total number of cities to visit equal to 23.
• Then there are (23 − 1)!/2 = 22!/2 ≈ 5.6 × 10^20 different possible solutions (tours).
• How large a number is 5.6 × 10^20?
• Suppose we have a computer capable of evaluating a feasible solution in one ns (10^−9 s).
• If we had only 23 cities to visit, then it would take approximately 178 centuries to run through the possible tours.
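The arithmetic behind these figures can be checked with a few lines of Python (a quick sketch; the 178-centuries figure assumes one tour evaluated per nanosecond):

```python
from math import factorial

n = 23
# Number of distinct tours for a symmetric TSP: fix the start city, so
# (n-1)! orderings remain, and divide by 2 because each cycle can be
# traversed in either direction.
tours = factorial(n - 1) // 2            # 22!/2
seconds = tours * 1e-9                   # at one tour per nanosecond
years = seconds / (365.25 * 24 * 3600)
print(f"{tours:.3e} tours, ~{years / 100:.0f} centuries")
```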
• In contrast, evolutionary algorithms do not suffer from this problem of taking "forever" to solve a problem.
• These algorithms are named "evolutionary" because:
– They mimic natural processes that govern how nature evolves.
– They are iterative algorithms that evolve incrementally (hopefully improving on their solution) through each iteration.
• These algorithms do not attempt to examine the entire solution space.
• Even so, they have been shown to provide good solutions in a fraction of the time required by exact algorithms, which, for large problems, take an infeasibly large amount of time.
• The annealing of solids is a phenomenon found in nature.
• The term "annealing" refers to the process in which a solid that has been brought into the liquid phase by increasing its temperature is brought back to the solid phase by slowly reducing the temperature, in such a way that all the particles are allowed to arrange themselves in the strongest possible crystallized state. Such a crystallized state represents the global minimum of the solid's energy function.
• The cooling process has to be slow enough to guarantee that the particles will have time to rearrange themselves and find the best position at the current temperature, for all temperature settings in the cooling process.
• At a certain temperature, once the particles have reached their best position, the substance is said to have reached thermal equilibrium. Once thermal equilibrium is reached, the temperature is lowered once again, and the process continues.
• At completion, the atoms assume a nearly globally minimum energy state.
• During the annealing process, a particle's range of motion is governed by the Boltzmann probability function:

P(ΔE) = exp(−ΔE / (k_B · T))

• The probability of a particle's range of motion is proportional to the temperature T.
• At higher temperatures, the particles have a greater range of motion than at lower temperatures.
• The higher temperature at the beginning of the annealing process allows the particles to move over a greater range than at lower temperatures.
• As the temperature decreases, the particles' range of motion decreases as well.
• The probability of a particle's range of motion is inversely related to the change in energy ΔE from one temperature state to another: as the change in energy becomes positive and larger, the probability of movement gets smaller.
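The behavior of this probability can be sketched in a few lines of Python (the ΔE and temperature values below are arbitrary illustrations, with k_B normalized to 1):

```python
from math import exp

def boltzmann_p(delta_e, temperature, k_b=1.0):
    """Boltzmann probability exp(-dE / (k_B * T))."""
    return exp(-delta_e / (k_b * temperature))

# The same energy increase is far more probable at high temperature
# than at low temperature:
for T in (10.0, 1.0, 0.1):
    print(f"T = {T:5.1f}   P(dE=1) = {boltzmann_p(1.0, T):.4f}")
```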
Dependence on Temperature and
Cooling Schedule
• The annealing process consists of first raising the temperature of a solid to a point
where its atoms can freely (i.e., randomly) move and then to lower the
temperature, forcing the atoms to rearrange themselves into a lower energy state
(i.e., a crystallization process).
• The cooling schedule is vital in this process. If the solid is cooled too quickly, or if
the initial temperature of the system is too low, it is not able to become a crystal
and instead the solid arrives at an amorphous state with higher energy.
• In this case, the system reaches a local minimum (a higher energy state) instead of
the global minimum (i.e., the minimal energy state).
• For example, if you let metal cool rapidly, its atoms aren’t given a chance to settle
into a tight lattice and are frozen in a random configuration, resulting in brittle
metal.
• If we decrease the temperature very slowly, the atoms are given enough time to
settle into a strong crystal.
Application to Combinatorial
Optimization Problems
• The idea of the annealing process may be applied to combinatorial
optimization problems.
[1] Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953). "Equation of State Calculations by Fast Computing Machines". The Journal of Chemical Physics 21 (6): 1087. Bibcode:1953JChPh..21.1087M. doi:10.1063/1.1699114.
[2] Kirkpatrick, S.; Gelatt Jr, C. D.; Vecchi, M. P. (1983). "Optimization by Simulated Annealing". Science 220 (4598): 671–680. Bibcode:1983Sci...220..671K. doi:10.1126/science.220.4598.671. JSTOR 1690046. PMID 17813860.
Simulated Annealing Algorithm:
Basic Structure
• The basic simulated annealing algorithm can be described as an iterative and evolutionary procedure composed of two loops: an outer loop and a nested inner loop.
• The inner loop simulates the goal of attaining thermal equilibrium at a given temperature. This is called the "Thermal Equilibrium Loop".
• The outer loop performs the cooling process, in which the temperature is decreased from its initial value towards zero until a certain termination criterion is achieved and the search is stopped. This is called the "Cooling Loop".
• The algorithm starts by initializing several parameters:
– The initial temperature is set to a very high value (to mimic the temperature setting of the initial natural annealing process). This is needed to allow the algorithm to search a wide breadth of solutions initially.
– The initial solution is created. Usually, the initial solution is chosen randomly.
– The number of times to go through the inner loop and the outer loop is set.
• Each time the Thermal Equilibrium Loop (i.e., the inner loop) is called, it is run with a constant temperature, and the goal of the inner loop is to find "a best" solution for the given temperature to attain thermal equilibrium.
• In each iteration through the Thermal Equilibrium Loop, the algorithm performs the following:
– A small random perturbation of the currently held, best-so-far solution is made to create a new candidate solution. Since the algorithm does not know which direction to search, it picks a random direction by randomly perturbing the current solution.
– The "goodness" of a solution is quantified by a cost function. For example, the cost function for the TSP is the sum of the distances between each successive city in the solution's list of cities to visit.
– We can think of the solution space by imagining a cost surface in hyperspace. Each point on the surface represents the cost of a candidate solution, and the algorithm wants to go to the minimum point of that cost surface.
– A small random perturbation is made to the current solution because it is believed that good solutions are generally close to each other. But this is not guaranteed to be true all of the time, because it depends on the problem and the distribution of solutions in the solution space.
• In each iteration through the Thermal Equilibrium Loop, continued:
– Sometimes, the random perturbation results in a better solution:
• If the cost of the new candidate solution is lower than the cost of the previous solution, i.e., the random perturbation results in a better solution, then the new solution is kept and replaces the previous solution.
– Sometimes, the random perturbation results in a worse solution, in which case the algorithm makes a decision to keep or discard this worse solution.
• The decision outcome depends on an evaluation of a probability function, which depends on the temperature and the change of energy of the current loop.
• The higher the temperature, the more likely the algorithm is to keep a worse solution.
• Keeping a worse solution is done to allow the algorithm to explore the solution space and to keep it from being trapped in a local minimum.
• The lower the temperature, the less likely the algorithm is to keep a worse solution.
• Discarding a worse solution allows the algorithm to exploit a local optimum, which might be the global optimum.
• In each iteration through the Thermal Equilibrium Loop, continued:
– The decision outcome depends on an evaluation of an estimation [1] of Boltzmann's probability function:

P(ΔE) = exp(−ΔE / (k_B · T))

– For a real annealing process, ΔE is the change in energy of the atoms between the previous temperature state and the current temperature state, where the energy is given by the potential and kinetic energy of the atoms in the substance at the given temperature of the state.
– For simulated annealing, this ΔE may be estimated by the change in the cost function, corresponding to the difference between the cost of the previously found best solution at its temperature state and the cost of the new candidate solution at the current temperature state:

ΔE = |cost(candidate) − cost(best)|

– The Boltzmann constant k_B may be estimated by the average cost change, ΔE_avg, taken over all iterations of the inner loop. Therefore:

P(ΔE) = exp(−ΔE / (ΔE_avg · T))
• In each iteration through the Thermal Equilibrium Loop, continued:
– The decision outcome depends on:

P(ΔE) = exp(−ΔE / (ΔE_avg · T)),  with 0 < P(ΔE) ≤ 1

– As can be seen, this probability is proportional to the temperature T and inversely related to the normalized change in cost ΔE/ΔE_avg of the current solution.
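The acceptance rule described above can be sketched in a few lines (Python is used here for illustration; the deck's MATLAB version appears in the later listings). ΔE_avg normalizes the cost change, as in the estimate above, and the `rng` parameter is a hypothetical hook for supplying the random draw:

```python
import random
from math import exp

def accept_candidate(cost_new, cost_best, delta_e_avg, temperature,
                     rng=random.random):
    """Metropolis-style acceptance: always keep a better solution;
    keep a worse one with probability exp(-dE / (dE_avg * T))."""
    if cost_new <= cost_best:
        return True
    delta_e = cost_new - cost_best
    p = exp(-delta_e / (delta_e_avg * temperature))
    return rng() < p

# A worse candidate is accepted readily at high temperature,
# and almost never at low temperature:
print(accept_candidate(105.0, 100.0, delta_e_avg=5.0, temperature=50.0,
                       rng=lambda: 0.5))
```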
• In each iteration through the Thermal Equilibrium Loop, continued:
– Thus, in one iteration of the inner loop, the algorithm will either find and keep a better solution, keep a worse solution with probability P, or make no change and keep the previously found best solution.
• The algorithm continues running the inner loop and the above procedure for a number of times.
• After running the inner loop many times, where in each loop it takes on a new better solution, or takes on a worse solution, or keeps the previously found best solution, the algorithm may be viewed as taking a random walk in the solution space, looking for a stable sub-optimal solution for the given temperature of the inner loop.
• After having found a stable sub-optimal solution for the given temperature of the inner loop, the process is said to have reached thermal equilibrium.
• At this point the inner loop completes, and the algorithm goes to the outer loop.
• The outer loop (i.e., the Cooling Loop) performs the following:
– The current best solution is recorded as the optimal solution.
– The temperature is decreased, according to some schedule.
• The initial temperature is set to a very high value (to mimic the temperature setting of the initial natural annealing process). This is needed to allow the algorithm to search a wide breadth of solutions initially.
• The final temperature should be set to some low value to prevent the algorithm from accepting worse solutions at the late stages of the process.
– The number of outer loops is decremented.
– If the number of outer loops hasn't reached zero, then the inner loop is called once again; otherwise the algorithm terminates.
[Flowchart: basic simulated annealing for the TSP. Initialization: set the number of cooling loops (nCL = numCoolingLoops) and the initial temperature (iTemp), get the initial route, compute the distance of the initial route, and set the number of equilibrium loops (nEL = numEquilibriumLoops). Equilibrium Loop: the temperature is held constant while the system reaches equilibrium, i.e., until the "best" route is found for the given temperature; each pass perturbs the route (perturbRoute), computes the distance of the route, keeps a better route, and keeps a worse route only if its probability of acceptance exceeds a generated random number; nEL is decremented until it reaches 0. Cooling Loop: the temperature is reduced, nCL is decremented, and the algorithm is Done when nCL reaches 0.]
Next Step
• Application of simulated annealing to the Travelling Salesman Problem.
• Choosing a temperature schedule.
• Initially, obtain a random list of cities (initial route).
– Can be done by permuting a given list.
• Perturbing a city list.
– There are many ways to do this, but a simple way is shown.
• Computing the cost function.
– I.e., computing the distance of a route.
– Two different methods are presented.
• Basic simulated annealing flow chart.
• MATLAB code for the flow chart.
• Results for the 29-city data set.
Choosing a Temperature Schedule
• There are many different possible temperature reduction methods that can be
used in SA, aka, cooling strategies or cooling schedules (i.e., temperature cooling
equation(s) that implement how the temperature is to be changed at each iteration
of the outer loop).
They may be categorized as non-adaptive and adaptive:
• Non-adaptive temperature reduction schedule
– The temperature reduction strategy is fixed at the initialization stage and is not changed
at any time during the simulated annealing iterative process.
– I.e., the temperature is decreased at each iteration of the outer loop in accordance with
the initial strategy, which is determined and fixed at initialization time.
• Adaptive cooling strategy
– The cooling strategy is set at the initialization stage of the simulated annealing process
and then, thereafter, the schedule strategy may be changed.
– The cooling strategy is influenced by the performance of the SA process.
– The control parameters of the temperature reduction equations or the equations
themselves that govern how the temperature is to be changed may change, depending
on the performance of the SA process.
• Let
– T represent the temperature,
– k be the "time" step (the algorithm's iteration number),
– T₀ and α be user-selectable constants (T₀ is usually set to one; α is usually close to, but less than, 1).
• Exponential: T_{k+1} = α · T_k, or T_k = T₀ · α^k
• Linear: T_{k+1} = T_k − α, or T_k = T₀ − α · k
• Logarithmic: T_k = T₀ / log(1 + k), or T_k = T₀ / (1 + α · log(1 + k))
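A quick sketch of how these schedules decay (the T₀, α, and step values below are arbitrary illustrations, not recommendations):

```python
import math

T0, alpha = 1.0, 0.9

def exponential(k):
    """T_k = T0 * alpha^k."""
    return T0 * alpha ** k

def linear(k, step=0.01):
    """T_k = T0 - step*k, clamped at zero."""
    return max(T0 - step * k, 0.0)

def logarithmic(k):
    """T_k = T0 / log(1 + k), valid for k >= 1."""
    return T0 / math.log(1 + k)

# All three decrease monotonically, at very different rates:
for k in (2, 10, 100):
    print(k, exponential(k), linear(k), logarithmic(k))
```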
• The temperature for the next iteration is computed by multiplying the current temperature, found by any fixed schedule, by an adaptive factor, which is based on the difference between the cost of the current solution and the cost of the best solution achieved so far.
• The change in the new temperature is proportional to the difference in cost between the current and best solutions:
– If the current solution has a larger cost than the best solution, then the new temperature change will be greater.
– If the current solution has a better cost than that of the best solution, the temperature change is smaller.
• Other adaptive schemes are possible.
• First, we need to establish the initial temperature of the cooling strategy, which should be very high, and the final temperature, which should be very low.
• Solve for the initial and final temperatures from Boltzmann's probability function:

P(ΔE) = exp(−ΔE / (ΔE_avg · T))

• Set the probability P_start, that a worse design could be accepted at the beginning of the optimization process, to a high number.
• Set the probability P_end, that a worse design could be accepted at the end of the optimization process, to a low number.
• Then, if we assume ΔE / ΔE_avg ≈ 1 (which is clearly true at the start of the optimization), then:

T_start = −1 / ln(P_start)
T_end = −1 / ln(P_end)
Choosing How to Decrease T
• Select the total number of outer loop iterations (i.e., cooling cycles), N.
• For each Cooling Loop (outer loop) the temperature can be decreased as follows:

T_{k+1} = f · T_k

where T_{k+1} is the temperature for the next cycle, T_k is the current temperature, and f is a fixed fractional reduction factor.
• Requiring the temperature to reach T_end after N − 1 reductions gives T_end = f^{N−1} · T_start. Accordingly,

f = (T_end / T_start)^{1/(N−1)}
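These formulas can be checked directly with the parameter values used later in the MATLAB listing (pStart = 0.6, pEnd = 0.001, 1100 cooling loops); a Python sketch:

```python
from math import log

p_start, p_end = 0.6, 0.001
num_cooling_loops = 1100

t_start = -1.0 / log(p_start)    # initial temperature
t_end = -1.0 / log(p_end)        # final temperature
frac = (t_end / t_start) ** (1.0 / (num_cooling_loops - 1))

# Applying the fractional reduction N-1 times lands on the final temperature:
t = t_start
for _ in range(num_cooling_loops - 1):
    t *= frac
print(t_start, t_end, t)
```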
Generate Initial City Route List
• Requirements
– Write MATLAB code to generate a list of n cities, where:
• Cities are chosen according to a uniform random distribution,
• No city appears more than once,
• All cities are included in the list.
• Options
– Write a custom routine to generate a list of cities, or
– Use MATLAB's library routine randperm.
Generate Initial City Route
(Using MATLAB Library randperm)
• P = randperm(N) returns a vector containing a random permutation of the integers 1:N.
• For example, randperm(6) might be [2 4 5 6 1 3].
• Therefore, to get an initial city route of 10 cities:

cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);

Example Output:
City route before = 1 9 7 5 4 10 8 6 3 2
City route after = 1 6 2 3 5 9 4 8 10 7
Generate Initial City Route
(Using Custom Function)
• The city list is stored in a vector of n components, where n is the number of cities.
• One potential way to permute the list:
– Starting from the first component in the list, and continuing on to the end of the list, for every component i of the list, choose a component j randomly.
– Swap the positions of these two components in the list.
– If i == j, no swap will be done, but this is not a problem since this process is random.

for i=1:numCities
    j = randi(numCities);
    temp = cityRoute(i);
    cityRoute(i) = cityRoute(j);
    cityRoute(j) = temp;
end

• This will create the same random permutation each time it is run. A different seed is required to create a different permutation.
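The same swap-based shuffle can be sketched in Python (note this mirrors the deck's simple method, which is not the unbiased Fisher–Yates shuffle; `permute_route` and its `rng` parameter are illustrative names):

```python
import random

def permute_route(route, rng=random):
    """Swap every position i with a uniformly random position j,
    as in the custom MATLAB routine above."""
    route = list(route)
    n = len(route)
    for i in range(n):
        j = rng.randrange(n)          # j may equal i: no swap, which is fine
        route[i], route[j] = route[j], route[i]
    return route

# Seeding the generator makes the permutation reproducible:
city_route = permute_route(range(1, 11), random.Random(42))
print(city_route)
```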
Generate Initial City Route
(Using Custom Function)

cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
numCities = size(cityRoute',1);
for i=1:numCities
    j = randi(numCities);
    temp = cityRoute(i);
    cityRoute(i) = cityRoute(j);
    cityRoute(j) = temp;
end
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);

Example Output:
City route before = 4 2 6 7 10 3 8 1 5 9
City route after = 7 3 4 1 10 2 6 9 5 8
Perturbing the City Route List
• For each iteration of the Equilibrium Loop, the SA algorithm needs to perturb the city route list (cityRoute).
• There are many ways in which cityRoute may be perturbed.
• One potential way:
– Choose two indices of the list at random, using a uniform distribution.
– Swap the positions of these two cities in the list.
– For example: if indices 1 and 3 were chosen at random, then city 3 would swap positions with city 4.

Before:
Index: 1 2 3 4 5
City:  3 2 4 1 5

After:
Index: 1 2 3 4 5
City:  4 2 3 1 5
Perturbing the City Route List

cityRoute = randperm(5);
randIndex1 = randi(5);
alreadyChosen = true;
while alreadyChosen == true
    randIndex2 = randi(5);
    if randIndex2 ~= randIndex1
        alreadyChosen = false;
    end
end
fprintf('Random index 1 = %d\n', randIndex1);
fprintf('Random index 2 = %d\n', randIndex2);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
temp = cityRoute(randIndex1);
cityRoute(randIndex1) = cityRoute(randIndex2);
cityRoute(randIndex2) = temp;
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);

Example Output:
Random index 1 = 4
Random index 2 = 5
City route before = 2 3 5 1 4
City route after = 2 3 5 4 1
• A city-distance data set may be represented in many ways.
• Examples:
1. A two-dimensional array, where each element d(i,j) denotes the distance between city i and city j:

d(1,1) d(1,2) d(1,3) d(1,4) d(1,5)
d(2,1) d(2,2) d(2,3) d(2,4) d(2,5)
d(3,1) d(3,2) d(3,3) d(3,4) d(3,5)
d(4,1) d(4,2) d(4,3) d(4,4) d(4,5)
d(5,1) d(5,2) d(5,3) d(5,4) d(5,5)

2. Each entry in a list represents the Euclidean coordinates of the city. So, you need to compute the distances.
• Assume the data set is a two-dimensional array, where each element d(i,j) denotes the distance between city i and city j.
• Let c represent the city route list, i.e., c is a list of city indices that represents the route to be taken by the travelling salesperson.
• The cost function is the round-trip distance D (n is the number of cities):

D = Σ_{i=1}^{n−1} d(c(i), c(i+1)) + d(c(n), c(1))
Computing Route Cost
Example
• The cost function is the round-trip distance D:

d(1,1) d(1,2) d(1,3) d(1,4) d(1,5)
d(2,1) d(2,2) d(2,3) d(2,4) d(2,5)
d(3,1) d(3,2) d(3,3) d(3,4) d(3,5)
d(4,1) d(4,2) d(4,3) d(4,4) d(4,5)
d(5,1) d(5,2) d(5,3) d(5,4) d(5,5)

Index: 1 2 3 4 5
City:  3 2 4 1 5

• For this route, D = d(3,2) + d(2,4) + d(4,1) + d(1,5) + d(5,3).
Computing Route Cost
Example
• Now assume the data is stored in a one-dimensional vector (i.e., in a file), and the distances are symmetric. The matrix is stored row by row, so file entry (i−1)·5 + j holds d(i,j), with d(i,i) = 0.00 and d(i,j) = d(j,i):

File:
d(1,1) = 0.00    d(1,2) = d(2,1)  d(1,3) = d(3,1)  d(1,4) = d(4,1)  d(1,5) = d(5,1)
d(2,1) = d(1,2)  d(2,2) = 0.00    d(2,3) = d(3,2)  d(2,4) = d(4,2)  d(2,5) = d(5,2)
d(3,1) = d(1,3)  d(3,2) = d(2,3)  d(3,3) = 0.00    d(3,4) = d(4,3)  d(3,5) = d(5,3)
d(4,1) = d(1,4)  d(4,2) = d(2,4)  d(4,3) = d(3,4)  d(4,4) = 0.00    d(4,5) = d(5,4)
d(5,1) = d(1,5)  d(5,2) = d(2,5)  d(5,3) = d(3,5)  d(5,4) = d(4,5)  d(5,5) = 0.00
Computing Route Cost
Example

d(1,1) = 0.00    d(1,2) = d(2,1)  d(1,3) = d(3,1)  d(1,4) = d(4,1)  d(1,5) = d(5,1)
d(2,1) = d(1,2)  d(2,2) = 0.00    d(2,3) = d(3,2)  d(2,4) = d(4,2)  d(2,5) = d(5,2)
d(3,1) = d(1,3)  d(3,2) = d(2,3)  d(3,3) = 0.00    d(3,4) = d(4,3)  d(3,5) = d(5,3)
d(4,1) = d(1,4)  d(4,2) = d(2,4)  d(4,3) = d(3,4)  d(4,4) = 0.00    d(4,5) = d(5,4)
d(5,1) = d(1,5)  d(5,2) = d(2,5)  d(5,3) = d(3,5)  d(5,4) = d(4,5)  d(5,5) = 0.00

Index: 1 2 3 4 5
City:  3 2 4 1 5
Computing Route Cost
Example
• The file contents, rows 1–25 (the 5×5 matrix stored row by row):

Rows  1–5:  00.00 12.00 13.00 14.00 15.00
Rows  6–10: 12.00 00.00 23.00 24.00 25.00
Rows 11–15: 13.00 23.00 00.00 34.00 35.00
Rows 16–20: 14.00 24.00 34.00 00.00 45.00
Rows 21–25: 15.00 25.00 35.00 45.00 00.00

Index: 1 2 3 4 5
City:  3 2 4 1 5
Computing Route Cost
MATLAB Code with Example Distances

cityRoute = [3 2 4 1 5];
Distances = load('5x5Symmetric.txt');
D = 0; n = 5;
for i=1:n-1
    D = D + Distances((cityRoute(i)-1)*n + cityRoute(i+1));
end
D = D + Distances((cityRoute(n)-1)*n + cityRoute(1));

Output:
D = 111

Distances (5x5Symmetric.txt):
00.00 12.00 13.00 14.00 15.00
12.00 00.00 23.00 24.00 25.00
13.00 23.00 00.00 34.00 35.00
14.00 24.00 34.00 00.00 45.00
15.00 25.00 35.00 45.00 00.00

Index:     1 2 3 4 5
cityRoute: 3 2 4 1 5
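The same computation can be cross-checked in Python with the distance values shown above, flattened row by row to mirror the MATLAB indexing (cityRoute(i)-1)*n + cityRoute(i+1):

```python
# 5x5 symmetric distance matrix, flattened row by row as in the file
distances = [
     0, 12, 13, 14, 15,
    12,  0, 23, 24, 25,
    13, 23,  0, 34, 35,
    14, 24, 34,  0, 45,
    15, 25, 35, 45,  0,
]
city_route = [3, 2, 4, 1, 5]   # 1-based city indices, as in MATLAB
n = len(city_route)

D = 0
for i in range(n - 1):
    # (i-1)*n + j in 1-based MATLAB becomes (i-1)*n + (j-1) in 0-based Python
    D += distances[(city_route[i] - 1) * n + (city_route[i + 1] - 1)]
D += distances[(city_route[n - 1] - 1) * n + (city_route[0] - 1)]
print(D)  # 111
```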
Computing Route Cost
Euclidean Distance Format (TSPLIB)
• The TSPLIB is a standard format for representing cities for the travelling salesperson problem (TSP).
• Each entry in the file denotes the Euclidean coordinates of the city.
• To determine the distance of a TSP route, the Euclidean metric is used. For example, the distance between consecutive cities i and i+1 is:

sqrt( (cC(i,1) − cC(i+1,1))² + (cC(i,2) − cC(i+1,2))² )

Example: a 10×2 cityCoords (cC) array that holds the (x, y) coordinates for each city to be visited:

City | x-coordinate | y-coordinate
1  | 20900.0000 | 17066.6667
2  | 21300.0000 | 13016.6667
3  | 21600.0000 | 14150.0000
4  | 21600.0000 | 14966.6667
5  | 21600.0000 | 16500.0000
6  | 22183.3333 | 13133.3333
7  | 22583.3333 | 14300.0000
8  | 22683.3333 | 12716.6667
9  | 23616.6667 | 15866.6667
10 | 23700.0000 | 15933.3333
Computing Route Cost
MATLAB Code with Example

EUC_2D_29.txt:
20833.3333 17100.0000
20900.0000 17066.6667
21300.0000 13016.6667
21600.0000 14150.0000
21600.0000 14966.6667
21600.0000 16500.0000
22183.3333 13133.3333
22583.3333 14300.0000
22683.3333 12716.6667
23616.6667 15866.6667
23700.0000 15933.3333
23883.3333 14533.3333
24166.6667 13250.0000
25149.1667 12365.8333
26133.3333 14500.0000
26150.0000 10550.0000
26283.3333 12766.6667
26433.3333 13433.3333
26550.0000 13850.0000
26733.3333 11683.3333
27026.1111 13051.9444
27096.1111 13415.8333
27153.6111 13203.3333
27166.6667  9833.3333
27233.3333 10450.0000
27233.3333 11783.3333
27266.6667 10383.3333
27433.3333 12400.0000
27462.5000 12992.2222

cC = load('EUC_2D_29.txt');
D = 0; n = 29;
for i=1:n-1
    D = D + sqrt((cC(i,1) - cC(i+1,1))^2 + (cC(i,2) - cC(i+1,2))^2);
end
D = D + sqrt((cC(n,1) - cC(1,1))^2 + (cC(n,2) - cC(1,2))^2);
D

MATLAB output:
D = 5.2284e+04
• Usually in TSP problems, the city route is stored (represented) in city-index format, rather than coordinate format.
• For example, cityRoute (cR): 9 18 16 7 12 4 10 15 24 27 17 11 25 6 20 22 14 21 28 29 13 5 1 23 19 3 2 26 8
• An entry in the route denotes the index of a city.
• The index references the coordinate file (EUC_2D_29.txt).
• To determine the distance of the route stored in the index format, we must use the index in the route array to extract the coordinates from the input file.
• For example, let the file array be called cC and the route array be called cR. The distance between consecutive route entries is then:

sqrt( (cC(cR(i),1) − cC(cR(i+1),1))² + (cC(cR(i),2) − cC(cR(i+1),2))² )
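A Python sketch of this index-format distance computation (the coordinate list below is a tiny made-up example, not the 29-city file; `route_distance` is an illustrative name):

```python
from math import sqrt

def route_distance(coords, route):
    """Round-trip Euclidean distance of a route given in 1-based city-index
    format, where coords[k-1] = (x, y) of city k."""
    n = len(route)
    total = 0.0
    for i in range(n):
        x1, y1 = coords[route[i] - 1]
        x2, y2 = coords[route[(i + 1) % n] - 1]   # wraps back to the start
        total += sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    return total

# Four cities on a unit square; visiting them in order has distance 4.0:
coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(route_distance(coords, [1, 2, 3, 4]))  # 4.0
```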
[Flowchart (shown again ahead of the code listing): simulated annealing for the TSP. Initialization: set nCL = numCoolingLoops and the initial temperature (iTemp), get the initial route, compute the distance of the initial route, and set nEL = numEquilibriumLoops. Equilibrium Loop: the temperature is held constant while the system reaches equilibrium, i.e., until the "best" route is found for the given temperature; each pass perturbs the route (perturbRoute), computes the distance of the route, and, if the route is worse, keeps it only when its probability of acceptance exceeds a generated random number; nEL is decremented until it reaches 0. Cooling Loop: the temperature is reduced, nCL is decremented, and the algorithm is Done when nCL reaches 0.]
clc; clear; close all;
cC = load('EUC_2D_29.txt');
numCities = size(cC,1);
x=cC(1:numCities, 1);
y=cC(1:numCities, 2);
x(numCities+1)=cC(1,1);
y(numCities+1)=cC(1,2);
figure
hold on
plot(x',y','.k','MarkerSize',14)
labels = cellstr( num2str([1:numCities]') ); %' # labels
text(x(1:numCities)', y(1:numCities)', labels, ...
'VerticalAlignment','bottom', ...
'HorizontalAlignment','center');
ylabel('Y Coordinate', 'fontsize', 18, 'fontname', 'Arial');
xlabel('X Coordinate', 'fontsize', 18, 'fontname', 'Arial');
title('City Coordinates', 'fontsize', 20, 'fontname', 'Arial');
numCoolingLoops = 1100;
numEquilbriumLoops = 100;
pStart = 0.6;    % Probability of accepting worse solution at the start
pEnd = 0.001;    % Probability of accepting worse solution at the end
tStart = -1.0/log(pStart);   % Initial temperature
tEnd = -1.0/log(pEnd);       % Final temperature
frac = (tEnd/tStart)^(1.0/(numCoolingLoops-1.0)); % Fractional temp reduction
cityRoute_i = randperm(numCities);   % Get initial route
cityRoute_b = cityRoute_i;   % Best route
cityRoute_j = cityRoute_i;   % Current route
cityRoute_o = cityRoute_i;   % Optimal route
% Initial distances
D_j = computeEUCDistance(numCities, cC, cityRoute_i);
D_o = D_j; D_b = D_j; D(1) = D_o;
numAcceptedSolutions = 1.0;
tCurrent = tStart;   % Current temperature = initial temperature
DeltaE_avg = 0.0;    % DeltaE average
for i=1:numCoolingLoops
    disp(['Cycle: ',num2str(i),' starting temp: ',num2str(tCurrent)])
    for j=1:numEquilbriumLoops
        cityRoute_j = perturbRoute(numCities, cityRoute_b);
        D_j = computeEUCDistance(numCities, cC, cityRoute_j);
        DeltaE = abs(D_j-D_b);
        if (D_j > D_b) % if objective is worse, then:
            if (i==1 && j==1) DeltaE_avg = DeltaE; end
            p = exp(-DeltaE/(DeltaE_avg * tCurrent));
            if (p > rand()) accept = true; else accept = false; end
        else accept = true; % objective function is better
        end
        if (accept==true)
            cityRoute_b = cityRoute_j;
            D_b = D_j;
            numAcceptedSolutions = numAcceptedSolutions + 1.0;
            DeltaE_avg = (DeltaE_avg * (numAcceptedSolutions-1.0) + ...
                DeltaE) / numAcceptedSolutions;
        end
    end % j=1:numEquilbriumLoops
    tCurrent = frac * tCurrent;  % Lower temp for next cooling cycle
    cityRoute_o = cityRoute_b;   % Record the best route
    D(i+1) = D_b;                % Record each route distance
    D_o = D_b;
end
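For readers without MATLAB, the loop structure above can be sketched as a self-contained Python program. This is an illustrative port, not the deck's code: perturbRoute and computeEUCDistance are replaced by the inline helpers below, the route uses 0-based indices, and the demo coordinates are randomly generated rather than the 29-city file:

```python
import random
from math import exp, log, sqrt

def route_distance(coords, route):
    """Round-trip Euclidean distance of a route (list of 0-based indices)."""
    total, n = 0.0, len(route)
    for i in range(n):
        (x1, y1), (x2, y2) = coords[route[i]], coords[route[(i + 1) % n]]
        total += sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    return total

def perturb(route, rng):
    """Swap two distinct random positions, as in perturbRoute."""
    i, j = rng.sample(range(len(route)), 2)
    new_route = list(route)
    new_route[i], new_route[j] = new_route[j], new_route[i]
    return new_route

def simulated_annealing(coords, num_cooling_loops=1100,
                        num_equilibrium_loops=100,
                        p_start=0.6, p_end=0.001, seed=0):
    rng = random.Random(seed)
    t = -1.0 / log(p_start)                  # initial temperature
    t_end = -1.0 / log(p_end)                # final temperature
    frac = (t_end / t) ** (1.0 / (num_cooling_loops - 1.0))
    route_b = list(range(len(coords)))
    rng.shuffle(route_b)                     # initial (random) route
    d_b = route_distance(coords, route_b)
    num_accepted, delta_e_avg = 1.0, 0.0
    for i in range(num_cooling_loops):
        for j in range(num_equilibrium_loops):
            route_j = perturb(route_b, rng)
            d_j = route_distance(coords, route_j)
            delta_e = abs(d_j - d_b)
            if d_j > d_b:                    # worse: maybe accept
                if i == 0 and j == 0:
                    delta_e_avg = delta_e
                accept = exp(-delta_e / (delta_e_avg * t)) > rng.random()
            else:                            # better: always accept
                accept = True
            if accept:
                route_b, d_b = route_j, d_j
                num_accepted += 1.0
                delta_e_avg = (delta_e_avg * (num_accepted - 1.0)
                               + delta_e) / num_accepted
        t *= frac                            # cool for the next cycle
    return route_b, d_b

if __name__ == "__main__":
    rng = random.Random(1)
    coords = [(rng.random(), rng.random()) for _ in range(15)]
    route, dist = simulated_annealing(coords, num_cooling_loops=200,
                                      num_equilibrium_loops=30)
    print(dist)
```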
Best Route Distance Found: 29,702.3 m
[1] J. D. Hedengren, "Optimization Techniques in Engineering," 5 April 2015. [Online]. Available: http://apmonitor.com/me575/index.php/Main/HomePage. [Accessed 27 April 2015].
[2] A. R. Parkinson, R. J. Balling and J. D. Hedengren, "Optimization Methods for Engineering Design: Applications and Theory," Brigham Young University, 2013.