AMS 530
Principles of Parallel Computing
Project 3
(Instructor: Prof. Deng)
Submitted by:
Name: Nitin C Pillai
E-mail Id: [email protected]
Solar Id: 104605101
Project Description
A traveling salesman must travel to 20 cities (at least once each) of a
country whose borders form a rectangle of length (east-west) 3000 miles
and width (south-north) 1500 miles. First, generate 20 such cities
(randomly, uniformly) and then start traveling from the most northeast city
(defined as the city closest to the northeast corner of the country) to the
remaining 19 cities. Find a path that has the shortest total distance. You
may use a brute force method to do it, but you are encouraged to use one of
the stochastic optimization methods, such as simulated annealing. Please
do the following:
(1) Compute the CPU time necessary to find such a shortest path by one processor.
(2) Use P processors to compute the same path, or a different path with the same
distance (accurate to machine precision if the new distance is different), for
a. P = 2
b. P = 4
c. P = 8
(3) Plot the speedup curve and discuss your timing results.
A brief description of the Traveling Salesman Problem:
The traveling salesman problem consists of finding the shortest (or a nearly shortest) path connecting a
number of locations (perhaps hundreds), such as cities visited by a traveling salesman on his sales route. The
Traveling Salesman Problem is typical of a large class of "hard" optimization problems that have intrigued
mathematicians and computer scientists for years.
Here I used the Simulated Annealing algorithm to solve this problem. The algorithm is so named
because it can be seen as a simulation of the physical process of annealing, in which a hot material cools
slowly, allowing its constituent atoms to assume arrangements that would not be possible with rapid cooling. I
then wrote a program to implement this algorithm.
A good explanation of the simulated annealing algorithm is given below:
The impetus behind the SA algorithm is its analogy to the statistical mechanics of annealing solids ... When a
physical system is at a high temperature, the atoms in the system are in a highly disordered state, and
consequently the associated ensemble energy of the system is also high. Lowering the temperature of the system
results in the atoms of the system acquiring a more orderly state, thus reducing the energy of the system.
For example, to grow a crystal, which is highly ordered, the system needs to be heated to a temperature
which allows many atomic rearrangements. Then the system must be carefully cooled, ensuring a thermal
equilibrium is reached at each temperature stage, until the system is `frozen' into a good crystal. ...
This process of cooling is known as annealing. If the cooling process is performed too quickly, extensive
irregularities can be locked into the crystal's structure, with the resulting energy level far greater than in a perfect
crystal.
The Metropolis algorithm ..., developed to simulate the behavior of atoms in thermal equilibrium at a
particular temperature, begins with an initial configuration with energy E0. Each subsequent iteration involves
performing a small random perturbation to the current configuration i (with associated energy Ei); then the
energy, Ej, of the resulting configuration j is calculated. If the change in energy dE = Ej - Ei <= 0, the
configuration j is accepted with probability 1 and becomes the current configuration of the next iteration. If dE
> 0, then configuration j is accepted with probability exp(-dE/kT), with the aim of obtaining a Boltzmann
distribution. If configuration j is rejected, the current configuration i remains unchanged. After a large number of
iterations the accepted configurations approximate a Boltzmann distribution at a particular temperature, T.
From the above energy distribution we see there are local minima and a global minimum. The system is
bounded by its temperature (i.e., it can be anywhere under this temperature, and it has a certain probability, given
by the exponential acceptance rule above, of getting above the energy related to this temperature). The process of
cooling is as follows. As the system moves around under the present temperature (and sometimes above), the
system will eventually reach a point of equilibrium. This means that eventually, on average, the system will have
equal probabilities of increasing in energy or decreasing in energy at the next time interval. Now we start when
the system is melted (i.e., the system could be in any configuration it wanted to be in) and we cool the system by
a decremental percentage, i.e.,
Next Temperature = Present Temperature x Decremental Percentage
We now wait until the system has reached equilibrium before cooling again by the same decrement. This
process is repeated until we feel the system is no longer changing, i.e., it has reached a stable state. What we
expect is that the system will spend a larger percentage of its time in the area of the curve where the
global minimum lies. If the system is cooled slowly enough, it is unlikely to get trapped by local minima, and if it is,
there is a probability of getting up and over the potential hill surrounding the local minimum. On the other hand, if
the system is cooled too rapidly (quenched), there is a much higher chance that it will get forced into the bottom
of a local minimum and will not have a high enough probability of getting out. This process of slowly cooling
the system to achieve crystallization is known as annealing.
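The geometric cooling rule above fixes in advance how many equilibrium stages the annealing passes through. A short C sketch (the function name `cooling_steps` is introduced here for illustration; with the initial temperature 10000, decremental factor 0.8, and freezing point 0.1 used later in this report, it counts the outer-loop iterations):

```c
/* Count the cooling steps T -> alpha*T needed to go from t0 down to
   (or below) the freezing temperature t_freeze. */
static int cooling_steps(double t0, double alpha, double t_freeze)
{
    int n = 0;
    double t = t0;
    while (t > t_freeze) {
        t *= alpha;   /* Next Temperature = Present Temperature * alpha */
        n++;
    }
    return n;
}
```

For t0 = 10000, alpha = 0.8, t_freeze = 0.1 this gives 52 stages, since 0.8^52 is the first power of 0.8 below 10^-5.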
How do we simulate annealing for the traveling salesman?
The best way to help the salesman is to find the shortest route he/she can take. This can be
done by simulated annealing, an artificial replication of what happens in nature as described above. We can
perform simulated annealing on the traveling salesman problem as follows. The state of the system at time t
in this case is the order in which our salesman visits his cities. This may be given as a permutation of the N cities,
State(t) = (city0(t), city1(t), city2(t), ..., cityN(t)), where obviously cityi != cityj at time t if i != j.
The associated energy of the system will be:
Energy = Distance of total trip
Now in our system we can perturb (mix up) the order in which we visit the cities. This may be done by:
a) Taking the order of visit and swapping the positions of two cities in the list at random.
b) Taking two cities (randomly) in the list and reversing the path of the visit in between.
Now in this problem we can relate temperature T directly with distance; this means that in this problem the
associated Boltzmann constant k is equal to 1, as energy is directly related to temperature.
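The two perturbation moves (a) and (b) can be sketched in C on a tour stored as an array of city indices (an illustration only; `swap_move` and `reverse_move` are names introduced here, and position 0, the fixed starting city, is assumed never to be passed in):

```c
/* Move (a): swap the cities at positions i and j of the tour. */
static void swap_move(int tour[], int i, int j)
{
    int tmp = tour[i];
    tour[i] = tour[j];
    tour[j] = tmp;
}

/* Move (b): reverse the section of the tour between positions i and j
   (inclusive), i.e. the classic 2-opt style segment reversal. */
static void reverse_move(int tour[], int i, int j)
{
    while (i < j) {
        int tmp = tour[i];
        tour[i] = tour[j];
        tour[j] = tmp;
        i++;
        j--;
    }
}
```

Either move changes only a few edges of the tour, so the change in trip length can be evaluated cheaply at each annealing step.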
Algorithm Description:
PSEUDO-CODE for solving the Traveling Salesman Problem using Simulated Annealing

TEMP = INITIAL_TEMP                (INITIAL_TEMP = 10000)
CITY = initial configuration of cities
CITY(0) = most northeast city      (sort out the most northeast city and put it as the 1st element of the city array)
Initialize mindistance to a large number (used for calculating the minimum)

While (TEMP > 0.1) and (FLAG == 0)          ----- OUTER LOOP
{
    mindistance = calcshortestpath()
    Inside the calcshortestpath function the following operations are done:
    {
        for count = 0 to 100                ----- INNER LOOP
        {
            RANDOMIZE()          (swap the positions of cities in the city array at random,
                                  so that each time a new path is calculated)
            Distancecalc = CALC_PATH()  (calculate the path through this configuration of cities)
            If (Distancecalc < mindistance)
            {
                mindistance = Distancecalc      ----- accept the good move
                SavedCity = City                (save the configuration with the least distance)
            }
            Else
            {
                r = RANDOM(0,1)
                DeltaCost = Distancecalc - mindistance
                y = exp(-DeltaCost / TEMP)
                if (r < y)
                {
                    mindistance = Distancecalc  ----- accept the bad move
                    SavedCity = City
                }
                else
                    reject the bad move (don't change mindistance)
            }
        } // end of inner for loop
    } // end of calcshortestpath function

    Mindistance1Proc = mindistance  (the distance returned by the calcshortestpath function)

    for (j = 0; j < P; j++)         --- for all the processors do this
    {
        if (myrank == j and myrank != 0)
        {
            MPI_Send(&min_dist1proc,1,MPI_DOUBLE,0,j*10,MPI_COMM_WORLD);
            MPI_Send(&j,1,MPI_INT,0,j*10,MPI_COMM_WORLD);
        }
    } (each processor sends the minimum distance it calculated to the rank 0 processor
       so that rank 0 can determine which processor calculated the minimum distance)

    if (myrank == 0)
    {
        minimumbyprocs[0] = min_dist1proc;  (store the minimum distance calculated by rank 0 itself)
        for (j = 1; j < P; j++)
        {
            MPI_Recv(&minimumbyprocs[j],1,MPI_DOUBLE,j,j*10,MPI_COMM_WORLD,&status);
            MPI_Recv(&ranking,1,MPI_INT,j,j*10,MPI_COMM_WORLD,&status);
        } (rank 0 gets the minimum distances calculated by each processor and their ranks)

        SORT THE DISTANCES AND FIND OUT WHICH PROCESSOR CALCULATED THE MINIMUM DISTANCE.
        IF RANK 0 FINDS THAT THIS SHORTEST DISTANCE IS SMALLER THAN THE SHORTEST DISTANCE
        CALCULATED WHEN THE PROGRAM WAS RUN ON JUST 1 PROCESSOR, SET FLAG TO 1.

        for (j = 1; j < P; j++)
            MPI_Send(&min_rank,1,MPI_INT,j,j*20,MPI_COMM_WORLD);
        (inform all other processors which processor calculated the minimum distance)
    }

    for (j = 1; j < P; j++)
        if (myrank == j)
            MPI_Recv(&rankfrom0,1,MPI_INT,0,j*20,MPI_COMM_WORLD,&status);
    (all the other processors now know which processor calculated the minimum distance)

    if (myrank == rankfrom0)   (this processor calculated the minimum distance, so it sends the
                                configuration of cities that obtained it to all the other processors)
        for (k = 0; k < P; k++)   // broadcasting the best city configuration
            if (myrank != k)
                for (i = 0; i < 20; i++)
                {
                    MPI_Send(&saved_city[i].x,1,MPI_INT,k,k*30,MPI_COMM_WORLD);
                    MPI_Send(&saved_city[i].y,1,MPI_INT,k,k*30,MPI_COMM_WORLD);
                }

    All the other processors receive this best configuration of cities and store it in their
    respective city arrays, so that the next iteration improves upon this best configuration:
    for (k = 0; k < P; k++)
        if (myrank == k and myrank != rankfrom0)
            for (i = 0; i < 20; i++)
            {
                MPI_Recv(&city[i].x,1,MPI_INT,rankfrom0,k*30,MPI_COMM_WORLD,&status);
                MPI_Recv(&city[i].y,1,MPI_INT,rankfrom0,k*30,MPI_COMM_WORLD,&status);
            }

    TEMP = TEMP * ALPHA   (TEMP = TEMP * 0.8)
} ----- end of OUTER LOOP (this loop continues execution while TEMP > 0.1)

The Simulated Annealing algorithm parameters:
Initial temperature = 10000
Freezing temperature = 0.1
Cooling schedule parameter "alpha" = 0.8
Inner loop count = 100
C code for solving the Travelling Salesman Problem
using Simulated Annealing
/*C Code for solving the Travelling Salesman Problem for finding the shortest path through 20 cities
using Simulated Annealing*/
//Declaration of Header Files
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <math.h>
#define MAX 20
//declaring the function prototypes
double calc_shortestpath();
double calc_path();
void randomize();
void calc_parallel();
//defining the city
struct city_structure
{
int x,y;
};
struct city_structure city[MAX],saved_city[MAX];
//definition of global variables
double t=10000; //temperature must be a double, or the geometric cooling t=0.8*t truncates to zero
double alpha=0.8;
double min_distance=100000000000;
int p,my_rank,rankfrom0;
double smallest; //holds a distance, so it must be a double like the values compared against it
int flag=0;
MPI_Status status;
//START of Main Function
int main(int argc, char *argv[])
{
int i,j,tag,temp_x,temp_y;
double temp,distance[MAX];
double minimum;
double starttime=0, endtime=0;
//variables used for calculating execution time
double time_taken=0,timeelapsed=0,totaltime=0;
//variables used for calculating execution time
//MPI code
MPI_Init(&argc, &argv); //Initiate MPI
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); //Set COMM and my_rank
MPI_Comm_size(MPI_COMM_WORLD, &p); //find COMM's size
//Assigning the locations of the cities
for(i=0;i<MAX;i++)
{
city[i].x=rand()%3000;
city[i].y=rand()%1500;
}
//Doing this to find the north east city
temp=3354;
for(i=0;i<20;i++)
{
distance[i]= sqrt(pow(city[i].x -3000,2)+ pow(city[i].y - 1500,2));
if(distance[i]<temp)
{
temp=distance[i];
tag=i;
}
}
//1st city swapped with most northeast city
temp_x=city[0].x;
temp_y=city[0].y;
city[0].x=city[tag].x;
city[0].y=city[tag].y;
city[tag].x= temp_x;
city[tag].y= temp_y;
starttime = MPI_Wtime(); //starting timer
calc_parallel();// This is the crux of the project
endtime = MPI_Wtime();//stopping the timer
time_taken=endtime-starttime;//computing time taken
//Calculating the time taken by each processor and then sending it to processor with Rank 0
for(j=1;j<p;j++) //start at 1: rank 0 adds its own time directly and must not send to itself
{
if(my_rank==j)
MPI_Send(&time_taken,1,MPI_DOUBLE,0,j*3,MPI_COMM_WORLD);
}
if(my_rank==0)
{
totaltime=time_taken;
for(j=1;j<p;j++)
{
MPI_Recv(&timeelapsed,1,MPI_DOUBLE,j,j*3,MPI_COMM_WORLD,&status);
totaltime+=timeelapsed;
}
minimum=smallest;
printf("The Travelling Salesman problem for calculating the shortest path between 20 cities using Simulated Annealing\n");
printf("\n\nminimum distance calculated is %lf miles",minimum);
printf("\n\nYou get the shortest path by traversing through the following cities back to the first city\n");
for(i=0;i<20;i++)
printf(" (%d,%d)\n-",city[i].x,city[i].y);
printf(" (%d,%d)\n-",city[0].x,city[0].y);
printf("\n\nTime taken for execution of the program is %f seconds\n",totaltime/p);
}
MPI_Finalize();
return 0;
}
//Function for calculating the shortest path by each processor
double calc_shortestpath()
{
int count,i;
double dist=0,distance_calc,minimum,delta,c,a;
for(count=0;count<100;count++)
{
randomize();
distance_calc=calc_path();
if(distance_calc<min_distance)
{
min_distance=distance_calc;
for(i=0;i<20;i++)
{
saved_city[i].x=city[i].x;
saved_city[i].y=city[i].y;
}
}
else
{ //Simulated Annealing being used to decide whether to accept the longer path
delta=distance_calc-min_distance;
c=(double)rand()/((double)RAND_MAX+1.0); //uniform random number in [0,1); the original rand()%1 was always 0 and rand()%5 risked division by zero
a=delta/t;
if(c<exp(-a))
{ //braces needed: the saved configuration must only be overwritten when the bad move is accepted
min_distance=distance_calc;
for(i=0;i<20;i++)
{
saved_city[i].x=city[i].x;
saved_city[i].y=city[i].y;
}
}
}
}
return(min_distance);
}
//Function for calculating the path for 1 iteration
double calc_path()
{
int i;
double dist=0;
for(i=1;i<19;i++)
dist+=sqrt(pow(city[i].x -city[i+1].x,2)+ pow(city[i].y - city[i+1].y,2));
dist+=sqrt(pow(city[0].x -city[1].x,2)+ pow(city[0].y - city[1].y,2));
dist+=sqrt(pow(city[0].x -city[19].x,2)+ pow(city[0].y - city[19].y,2));
return(dist);
}
//Function for changing the cities selected for calculating the path
void randomize()
{
int i,number,temp_x,temp_y;
static int seeded=0;
if(!seeded){ srand(my_rank*1000); seeded=1; } //seed each process differently, once, so the processors explore different paths
for(i=1;i<20;i++)
{
number=rand()%19+1;
temp_x=city[i].x;
temp_y=city[i].y;
city[i].x=city[number].x;
city[i].y=city[number].y;
city[number].x= temp_x;
city[number].y= temp_y;
}
}
/*This is a function which each processor will execute for finding the shortest path,then send it to rank 0
processor for determining which processor calculated the shortest path and then broadcast this information
to all other processors so that it can improve upon this path using the Simulated Annealing Technique
*/
void calc_parallel()
{
int i,j,k,count,ranking,min_rank;
double min_dist1proc=100000000,minimumbyprocs[8];
while ((t>0.1)&&(flag==0))
{
min_dist1proc=calc_shortestpath();
//Send smallest distance calculated by each processor to rank 0 processor
for(j=0;j<p;j++)
if((my_rank==j)&&(my_rank!=0))
{
MPI_Send(&min_dist1proc,1,MPI_DOUBLE,0,j*10,MPI_COMM_WORLD);
MPI_Send(&j,1,MPI_INT,0,j*10,MPI_COMM_WORLD);
}
/*Rank 0 processor determines which processor calculated the shortest distance and then
sends this information to other processors*/
if(my_rank==0)
{
minimumbyprocs[0]=min_dist1proc;
for(j=1;j<p;j++)
{
MPI_Recv(&minimumbyprocs[j],1,MPI_DOUBLE,j,j*10,MPI_COMM_WORLD,&status);
MPI_Recv(&ranking,1,MPI_INT,j,j*10,MPI_COMM_WORLD,&status);
}
smallest=1000000;
for(count=0;count<p;count++)
{
if(minimumbyprocs[count]<smallest)
{
smallest=minimumbyprocs[count];
min_rank=count;
}
}
for(j=1;j<p;j++)
{
MPI_Send(&min_rank,1,MPI_INT,j,j*20,MPI_COMM_WORLD);
}
rankfrom0=min_rank;
}
/*Each processor checks the rank sent by rank 0 processor to check whether it itself is the one
which calculated the shortest distance*/
for(j=1;j<p;j++)
{
if(my_rank==j)
{
MPI_Recv(&rankfrom0,1,MPI_INT,0,j*20,MPI_COMM_WORLD,&status);
}
}
for(j=0;j<p;j++)
{
if(my_rank==rankfrom0)
for(k=0;k<p;k++)
{
if(my_rank!=k)
for(i=0;i<20;i++)
{
//Sending the best path to all processors
MPI_Send(&saved_city[i].x,1,MPI_INT,k,k*30,MPI_COMM_WORLD);
MPI_Send(&saved_city[i].y,1,MPI_INT,k,k*30,MPI_COMM_WORLD);
}
}
}
/*All processors receive the best path computed during 1 iteration by 1 processor and then
improves upon this path*/
if(p>1)
for(k=0;k<p;k++)
if((my_rank==k)&&(my_rank!=rankfrom0))
{
for(i=0;i<20;i++)
{
MPI_Recv(&city[i].x,1,MPI_INT,rankfrom0,k*30,MPI_COMM_WORLD,&status);
MPI_Recv(&city[i].y,1,MPI_INT,rankfrom0,k*30,MPI_COMM_WORLD,&status);
}
}
t=0.8*t;
}
}
Output Generated when this program was executed for p=1 processor
The Travelling Salesman problem for calculating the shortest path between 20 cities using Simulated Annealing
minimum distance calculated is 13984.000000 miles
You get the shortest path by traversing through the following cities back to the first city
(2383,886)
- (1793,835)
- (540,426)
- (172,736)
- (2690,559)
- (2649,421)
- (1862,1123)
- (67,1135)
- (1393,1456)
- (362,1027)
- (782,530)
- (2386,492)
- (1211,368)
- (929,1302)
- (2069,1167)
- (567,429)
- (777,415)
- (2022,558)
- (1763,1426)
- (2011,1042)
- (2383,886)
Time taken for execution of the program is 0.0976 seconds
I am also giving the C code for Brute Force.
C code for solving the Travelling Salesman Problem
using Brute Force
/*C Code for solving the Travelling Salesman Problem for finding the shortest path through 20 cities
using Brute Force*/
//Declaration of Header Files
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <math.h>
#define MAX 20
//declaring the function prototypes
double calc_shortestpath();
double calc_path();
void randomize();
void calc_parallel();
//defining the city
struct city_structure
{
int x,y;
};
struct city_structure city[MAX],saved_city[MAX];
//definition of global variables
double t=10000; //temperature must be a double, or the geometric cooling t=0.8*t truncates to zero
double alpha=0.8;
double min_distance=100000000000;
int p,my_rank,rankfrom0;
double smallest; //holds a distance, so it must be a double like the values compared against it
int flag=0;
MPI_Status status;
//START of Main Function
int main(int argc, char *argv[])
{
int i,j,tag,temp_x,temp_y;
double temp,distance[MAX];
double minimum;
double starttime=0, endtime=0;
//variables used for calculating execution time
double time_taken=0,timeelapsed=0,totaltime=0;
//variables used for calculating execution time
//MPI code
MPI_Init(&argc, &argv); //Initiate MPI
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); //Set COMM and my_rank
MPI_Comm_size(MPI_COMM_WORLD, &p); //find COMM's size
//Assigning the locations of the cities
for(i=0;i<MAX;i++)
{
city[i].x=rand()%3000;
city[i].y=rand()%1500;
}
//Doing this to find the north east city
temp=3354;
for(i=0;i<20;i++)
{
distance[i]= sqrt(pow(city[i].x -3000,2)+ pow(city[i].y - 1500,2));
if(distance[i]<temp)
{
temp=distance[i];
tag=i;
}
}
//1st city swapped with most northeast city
temp_x=city[0].x;
temp_y=city[0].y;
city[0].x=city[tag].x;
city[0].y=city[tag].y;
city[tag].x= temp_x;
city[tag].y= temp_y;
starttime = MPI_Wtime(); //starting timer
calc_parallel();// This is the crux of the project
endtime = MPI_Wtime();//stopping the timer
time_taken=endtime-starttime;//computing time taken
//Calculating the time taken by each processor and then sending it to processor with Rank 0
for(j=1;j<p;j++) //start at 1: rank 0 adds its own time directly and must not send to itself
{
if(my_rank==j)
MPI_Send(&time_taken,1,MPI_DOUBLE,0,j*3,MPI_COMM_WORLD);
}
if(my_rank==0)
{
totaltime=time_taken;
for(j=1;j<p;j++)
{
MPI_Recv(&timeelapsed,1,MPI_DOUBLE,j,j*3,MPI_COMM_WORLD,&status);
totaltime+=timeelapsed;
}
minimum=smallest;
printf("The Travelling Salesman problem for calculating the shortest path between 20 cities using Brute Force\n");
printf("\n\nminimum distance calculated is %lf miles",minimum);
printf("\n\nYou get the shortest path by traversing through the following cities back to the first city\n");
for(i=0;i<20;i++)
printf(" (%d,%d)\n-",city[i].x,city[i].y);
printf(" (%d,%d)\n-",city[0].x,city[0].y);
printf("\n\nTime taken for execution of the program is %f seconds\n",totaltime/p);
}
MPI_Finalize();
return 0;
}
//Function for calculating the shortest path by each processor
double calc_shortestpath()
{
int count,i;
double dist=0,distance_calc,minimum,delta,c,a;
for(count=0;count<100;count++)
{
randomize();
distance_calc=calc_path();
if(distance_calc<min_distance)
{
min_distance=distance_calc;
for(i=0;i<20;i++)
{
saved_city[i].x=city[i].x;
saved_city[i].y=city[i].y;
}
}
}
return(min_distance);
}
//Function for calculating the path for 1 iteration
double calc_path()
{
int i;
double dist=0;
for(i=1;i<19;i++)
dist+=sqrt(pow(city[i].x -city[i+1].x,2)+ pow(city[i].y - city[i+1].y,2));
dist+=sqrt(pow(city[0].x -city[1].x,2)+ pow(city[0].y - city[1].y,2));
dist+=sqrt(pow(city[0].x -city[19].x,2)+ pow(city[0].y - city[19].y,2));
return(dist);
}
//Function for changing the cities selected for calculating the path
void randomize()
{
int i,number,temp_x,temp_y;
static int seeded=0;
if(!seeded){ srand(my_rank*1000); seeded=1; } //seed once per process; re-seeding on every call would repeat the same swap sequence
for(i=1;i<20;i++)
{
number=rand()%19+1;
temp_x=city[i].x;
temp_y=city[i].y;
city[i].x=city[number].x;
city[i].y=city[number].y;
city[number].x= temp_x;
city[number].y= temp_y;
}
}
/*This is a function which each processor will execute for finding the shortest path,then send it to rank 0
processor for determining which processor calculated the shortest path and then broadcast this information
to all other processors so that it can improve upon this path using the Simulated Annealing Technique
*/
void calc_parallel()
{
int i,j,k,count,ranking,min_rank;
double min_dist1proc=100000000,minimumbyprocs[8];
while (t>0.1)
{
min_dist1proc=calc_shortestpath();
//Send smallest distance calculated by each processor to rank 0 processor
for(j=0;j<p;j++)
if((my_rank==j)&&(my_rank!=0))
{
MPI_Send(&min_dist1proc,1,MPI_DOUBLE,0,j*10,MPI_COMM_WORLD);
MPI_Send(&j,1,MPI_INT,0,j*10,MPI_COMM_WORLD);
}
/*Rank 0 processor determines which processor calculated the shortest distance and then
sends this information to other processors*/
if(my_rank==0)
{
minimumbyprocs[0]=min_dist1proc;
for(j=1;j<p;j++)
{
MPI_Recv(&minimumbyprocs[j],1,MPI_DOUBLE,j,j*10,MPI_COMM_WORLD,&status);
MPI_Recv(&ranking,1,MPI_INT,j,j*10,MPI_COMM_WORLD,&status);
}
smallest=1000000;
for(count=0;count<p;count++)
{
if(minimumbyprocs[count]<smallest)
{
smallest=minimumbyprocs[count];
min_rank=count;
}
}
rankfrom0=min_rank;
}
t=0.8*t; //cool on every processor, not just rank 0, so all the loops terminate
}
}
Output Generated when this program was executed for p=1 processor
The Travelling Salesman problem for calculating the shortest path between 20 cities using Brute Force
minimum distance calculated is 18187.000000 miles
You get the shortest path by traversing through the following cities back to the first city
(2383,886)
- (172,736)
- (362,1027)
- (2649,421)
- (2011,1042)
- (1862,1123)
- (777,415)
- (929,1302)
- (2069,1167)
- (2690,559)
- (540,426)
- (1763,1426)
- (782,530)
- (1793,835)
- (67,1135)
- (567,429)
- (2022,558)
- (1211,368)
- (1393,1456)
- (2386,492)
- (2383,886)
Time taken for execution of the program is 0.183574 seconds
Results and Analysis
Timing Results for Simulated Annealing Algorithm (N = 20 cities):

Number of Processors | 1      | 2      | 4      | 8
Time in seconds      | 0.0976 | 0.1649 | 0.0793 | 0.0642

[Figure: bar chart of time in seconds vs. number of processors]
Speedup Curves for Simulated Annealing Algorithm (N = 20 cities):

Number of Processors    | 1 | 2    | 4    | 8
Speedup S = T(1)/T(P)   | 1 | 0.59 | 1.23 | 1.52

[Figure: speedup vs. number of processors, computed from the timing table above]

Shortest Distance found by the Simulated Annealing algorithm is 13984 miles.
Timing Results for Brute Force Algorithm (N = 20 cities):

Number of Processors | 1      | 2     | 4      | 8
Time in seconds      | 0.1836 | 0.179 | 0.1649 | 0.152

[Figure: bar chart of time in seconds vs. number of processors]
Speedup Curves for Brute Force Algorithm (N = 20 cities):

Number of Processors    | 1 | 2    | 4    | 8
Speedup S = T(1)/T(P)   | 1 | 1.03 | 1.11 | 1.21

[Figure: speedup vs. number of processors, computed from the timing table above]

Shortest Distance found by the Brute Force algorithm is 18187 miles.
Analysis:
The timing diagrams and speedup curves for the Simulated Annealing and Brute Force algorithms for different
numbers of processors (P = 1, 2, 4, 8) are plotted in the graphs above.
Speedup is defined by S = T1 / T(N), where S is the speedup, T1 is the time the fastest serial program takes to
run, and T(N) is the time it takes to run the same problem with N processors. This is a "pure" definition of
speedup. However, for the sake of programming simplicity, T(1), that is, the parallel algorithm run on one
processor, is often used in place of T1.
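Applied to the measured timings reported above, this definition is simple arithmetic; a one-line helper (the name `speedup` is introduced here for illustration) makes the computation explicit:

```c
/* Speedup of a P-processor run relative to the 1-processor run. */
static double speedup(double t_one_proc, double t_p_procs)
{
    return t_one_proc / t_p_procs;
}
```

For the simulated-annealing timings, speedup(0.0976, 0.0642) is about 1.52 for P = 8, while speedup(0.0976, 0.1649) is about 0.59 for P = 2, i.e. a slowdown.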
From the graphs it can be seen that:
When the number of cities N = 20, the best timings are obtained for P = 8.
The main method employed here for finding an optimum solution for the traveling salesman problem is
SIMULATED ANNEALING, as it calculates a shorter distance than BRUTE FORCE. The
methodology used for evaluating the speedup is to take the distance found in the 1-processor case as a reference
and measure how quickly the multi-processor cases reach it. This method involves a lot of message passing, so it
is useful mostly for problems with a large amount of computation per processor. Our algorithm is based on
calculating the distances between the cities and adding them up to get the total path length, which takes very
little time for 20 cities. So the computation time is very small compared to the time spent in message passing.
Moreover, the results reveal that since every processor runs the whole annealing process independently, it may
happen that 2 or more processors cannot generate the minimum distance in less time than the 1-processor case,
because the results for more processors depend only on the random paths each processor happens to try. This is
observed for 2 processors, where the same shortest path (13,984 miles) as in the 1-processor case was found, but
in more time. When 4 or more processors are used, however, more random path combinations are explored, and
the same shortest path can be discovered in relatively less time. In general, the probability of finding the
minimum-distance path in less time is higher when more than 1 processor is used, because each processor
randomly explores different paths between the cities.