The University of Toledo Digital Repository
Theses and Dissertations
2014

Optimizing cloudlet scheduling and wireless sensor localization using computational intelligence techniques

Hussein S. Al-Olimat
University of Toledo
Recommended Citation: Al-Olimat, Hussein S., "Optimizing cloudlet scheduling and wireless sensor localization using computational intelligence techniques" (2014). Theses and Dissertations. 1739. http://utdr.utoledo.edu/theses-dissertations/1739
A Thesis
entitled
Optimizing Cloudlet Scheduling and Wireless Sensor Localization using
Computational Intelligence Techniques
by
Hussein S. Al-Olimat
Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Master of Science Degree in Engineering
Dr. Mansoor Alam, Committee Chair
Dr. Robert Green, Committee Member
Dr. Vijay Devabhaktuni, Committee Member
Dr. Weiqing Sun, Committee Member
Dr. Patricia R. Komuniecki, Dean
College of Graduate Studies
The University of Toledo
August 2014
Copyright 2014, Hussein S. Al-Olimat
This document is copyrighted material. Under copyright law, no parts of this
document may be reproduced without the expressed permission of the author.
An Abstract of
Optimizing Cloudlet Scheduling and Wireless Sensor Localization using
Computational Intelligence Techniques
by
Hussein S. Al-Olimat
Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Master of Science Degree in Engineering
The University of Toledo
August 2014
Optimization algorithms are complex procedures that consider many elements when optimizing a specific problem. Cloud computing (CCom) and wireless sensor networks (WSNs) are full of optimization problems that need to be solved. One of the main problems in using clouds is the underutilization of reserved resources, which causes longer makespans and higher usage costs. The optimization of sensor nodes' power consumption in WSNs is likewise critical, because sensor nodes are small in size and have constrained resources in terms of power/energy, connectivity, and computational power.

This thesis addresses how CCom systems and WSNs can take advantage of computational intelligence techniques, using single- or multi-objective particle swarm optimization (SOPSO or MOPSO), with the overall aim of concurrently minimizing makespans, localization time, and energy consumption during localization, while maximizing the number of fully localized nodes. The cloudlet scheduling method is implemented inside CloudSim, advancing the work of the broker; it maximizes resource utilization and minimizes makespans, demonstrating improvements of 58% in some cases. Additionally, the localization method optimizes power consumption during a Trilateration-based localization (TBL) procedure through the adjustment of sensor nodes' output power levels. Finally, a parameter study of the applied PSO variants for WSN localization is performed, leading to results up to 32% better than the baseline in the evaluated objectives.
In the name of Allah, the Compassionate, the Merciful

"Verily my prayer and my rites and my living and my dying are all for Allah, Lord of the worlds"

The Holy Quran 6:162
Acknowledgments
Prophet Muhammad (PBUH) said: "Whoever does not thank people (for their favor) has not thanked Allah (properly)." [Musnad Ahmad, Sunan At-Tirmidhi]

I would like to first thank my brother Dr. Khalid Al-Olimat and his wife Dr. Feng Jao. Words cannot express my feelings, nor my thanks for everything.

I also would like to thank my advisor Dr. Mansoor Alam and Dr. Vijay Devabhaktuni for giving me the opportunity to be part of the University of Toledo. A big thanks to my co-advisor Dr. Robert Green for his guidance as a mentor and as the head of our research group, and for making many 'PhD Comics' strips apply to my graduate career. I deeply appreciate the financial support during my master's degree from the EECS department in the form of TA and RA assistantships, which allowed me to learn a lot and to be part of a great research group.

Finally, I would like to thank everyone at the University of Toledo for allowing me to be part of the family for almost one and a half years during my graduate studies; my research team members for their collaboration, insightful feedback, and the fun we had together; my professors for their encouragement and the knowledge I learned from them; and all the good friends I met in Toledo for the good times we had together.
Contents

Abstract
Acknowledgments
Contents
List of Tables
List of Figures
List of Abbreviations
List of Symbols

1 Introduction
1.1 Objectives and Contributions
1.2 Thesis Structure

2 Background
2.1 Cloud Computing (CCom)
2.1.1 Makespan, Utilization, and Scheduling
2.1.2 Cloud Simulator
2.1.2.1 CloudSim Network Topology
2.1.2.2 Cloudlet Scheduling
2.1.2.3 Time-shared VM Scheduling and Oversubscription
2.2 Wireless Sensor Networks (WSNs)
2.2.1 WSN Localization
2.2.2 Power Consumption in WSNs
2.3 Particle Swarm Optimization (PSO)
2.3.1 Continuous PSO mathematical model
2.3.2 Binary PSO mathematical model
2.3.3 Multi-objective PSO (MOPSO)
2.3.4 The inertia weight (ω)
2.3.4.1 Adjusting ω using a Simulated Annealing (SA) method

3 Cloudlet Scheduling with PSO
3.1 Introduction
3.2 PSO and Cloud Scheduling
3.3 Implemented Method
3.3.1 Problem Formulation
3.3.2 PSO Algorithm
3.3.3 Randomly adjusted inertia weight (ω)
3.4 Simulations and Results
3.4.1 Setting the computing power in MIPS
3.4.2 Results of scheduling without oversubscription
3.4.3 Results of scheduling with oversubscription
3.5 Conclusion

4 Swarm Optimized WSN Localization
4.1 Nodes Output Power Levels
4.1.1 Discrete Levels
4.1.2 Continuous Levels
4.2 WSNs Localization Problem Formulation
4.3 Proposed Approach
4.3.1 PSO Formulation
4.3.1.1 Objective Functions
4.3.1.2 Binary PSO Representation
4.3.1.3 Continuous PSO Representation
4.3.1.4 MOPSO
4.4 Simulations and Results
4.4.1 Implementation
4.4.2 Baseline Results
4.4.3 BSOPSO Results
4.4.4 Simulations of MOP
4.4.4.1 BMOPSO Results
4.4.4.2 CMOPSO Results
4.4.5 Parameter Study
4.5 Conclusions

5 Conclusions and Future Work
5.1 The Methods Used and The Goals Achieved
5.2 Future Work

References
List of Tables

3.1 Execution-time table
3.2 A workload consisting of four cloudlets
3.3 List of the available cloud resources
3.4 Cloud resources assignment table (BSOPSO positions matrix)
3.5 Some of the processors used in data centers
3.6 Results of running the BSOPSO scheduling method 100 times
3.7 Results of the simple CloudSim brokering method
3.8 Makespans using Time-shared scheduling with or without oversubscription
3.9 Makespans' averages, minimums, and maximums while using Time-shared scheduling with oversubscription and the BSOPSO
4.1 TBL process using a single power level (Step 1 Data)
4.2 TBL process using a single power level (Step 2 Data)
4.3 TBL process using multiple power levels (Step 1 Data)
4.4 TBL process using multiple power levels (Step 2 Data)
4.5 Nodes output power levels assignment table (Binary PSO positions matrix)
4.6 Nodes output transmission range assignment table (CMOPSO positions matrix)
4.7 Baseline results using multiple discrete output power levels
4.8 BSOPSO parameters' values
4.9 BMOPSO parameters' values
4.10 CMOPSO parameters' values
4.11 Averages and standard deviations of the CMOPSO
4.12 Averages and standard deviations of the CMOPSO for the second network topology
List of Figures

2-1 Cloud Resource Utilization
2-2 CloudSim Network Topology Design
3-1 Simulation makespans of four groups of cloudlets
3-2 Fitness value vs. iteration number
4-1 Simulating the first localization method using a single output power level (the data of the steps are listed in Tables 4.1 and 4.2)
4-2 Simulating the second localization method using multiple output power levels (the data of the steps are listed in Tables 4.3 and 4.4)
4-3 Simulation flow chart
4-4 Simulation panels showing three different simulation scenarios using the (a) minimum, (b) medium, and (c) maximum output power levels; lines between nodes represent a 1-hop connection
4-5 Results from 50 trials of BSOPSO with an objective of maximizing localizability
4-6 Number of transmitted messages when using multiple discrete output power levels over 50 trials of BSOPSO with an objective of maximizing localizability
4-7 Power consumption, localization time, and number of localized nodes of a solution set containing 115 solutions while using the BMOPSO method over 50 trials
4-8 Power consumption, average transmission ranges used by all nodes, localization time, and number of nodes localized of a solution set containing 57 solutions while using the CMOPSO method over 50 trials
4-9 Results of CMOPSO vs. BMOPSO
4-10 Experiments varying the number of particles and iterations
4-11 Experiments varying the maximum transmission ranges
4-12 Experiments varying the mutation percentage
4-13 Experiments varying the mutation value
4-14 Experiments varying the inertia weight (ω)
List of Abbreviations

ACO . . . . . Ant Colony Optimization
AIW . . . . . Adaptive Inertia Weight
BMOPSO . . . . . Binary Multi-objective Particle Swarm Optimization
BRS . . . . . Best Resource Selection
BSOPSO . . . . . Binary Single-objective Particle Swarm Optimization
CCom . . . . . Cloud Computing
CI . . . . . Computational Intelligence
CMOPSO . . . . . Continuous Multi-objective Particle Swarm Optimization
FCFS . . . . . First-Come-First-Serve
FIW . . . . . Fixed Inertia Weight
GA . . . . . Genetic Algorithm
GPS . . . . . Global Positioning System
IaaS . . . . . Infrastructure as a Service
IDLE . . . . . Idle Mode
jMetal . . . . . Metaheuristic Algorithms in Java
LDIW . . . . . Linearly Decreasing Inertia Weight
MBL . . . . . Multilateration-Based Localization
MI . . . . . Millions of Instructions
MIPS . . . . . Millions of Instructions Per Second
MOP . . . . . Multi-objective Problem
MOPSO . . . . . Multi-objective Particle Swarm Optimization
NIST . . . . . National Institute of Standards and Technology
NP . . . . . Non-deterministic Polynomial time
NSGA-II . . . . . Non-dominated Sorting Genetic Algorithm-II
OFF . . . . . Voltage Regulator Off
OMOPSO . . . . . Optimal Multi-objective Particle Swarm Optimization
PaaS . . . . . Platform as a Service
PBM . . . . . Population-based Metaheuristic
PD . . . . . Power Down mode
PSO . . . . . Particle Swarm Optimization
RF . . . . . Radio Frequency
RIW . . . . . Random Inertia Weight
RSS . . . . . Received Signal Strength
RSSI . . . . . Received Signal Strength Indicator
SA . . . . . Simulated Annealing
SaaS . . . . . Software as a Service
SJF . . . . . Shortest Job First
SMPSO . . . . . Speed-constrained Multi-objective Particle Swarm Optimization
SOPSO . . . . . Single-Objective Particle Swarm Optimization
TBL . . . . . Trilateration-based Localization
TS . . . . . Tabu Search
VM . . . . . Virtual Machine
WSN . . . . . Wireless Sensor Network
List of Symbols

ω . . . . . Inertia weight
c1 . . . . . Cognitive factor; indicates the self-confidence of the particle
c2 . . . . . Social factor; allows the particle to be affected by the behavior of the neighboring particles, usually the global best
r1 . . . . . Random number
r2 . . . . . Random number
minValue . . . . . The minimum transmission range in meters
maxValue . . . . . The maximum transmission range in meters
Ran . . . . . Random number between 0 and maxValue
p . . . . . Swarm particle
p̂i . . . . . Particle's personal best solution
ĝ . . . . . Global best solution among all particles
vi . . . . . Particle p's velocity in the ith component
pi . . . . . Particle p's position in the ith component
s(pi) . . . . . The sigmoid function value of particle i
R1 . . . . . Cloud resource number one, i.e., virtual machine number one
R2 . . . . . Cloud resource number two, i.e., virtual machine number two
R3 . . . . . Cloud resource number three, i.e., virtual machine number three
δ . . . . . Half the difference between the maximum and minimum transmission ranges
itr . . . . . Current iteration
ω^i_itr . . . . . The inertia weight used for particle i in iteration number itr
ωmax . . . . . A predefined maximum possible value of the inertia weight
ωmin . . . . . A predefined minimum possible value of the inertia weight
itrmax . . . . . The predefined maximum number of iterations
cTemp_itr . . . . . The annealing temperature value in the current iteration itr
pFitness^itr_avg . . . . . The average of all recorded fitnesses of particle p throughout all iterations up to the current iteration itr
pFitness_best . . . . . The best recorded fitness of particle p
k . . . . . Any fixed number
pFitness_(itr−k) . . . . . Particle's fitness in iteration itr − k
pFitness_itr . . . . . Particle's fitness in the current iteration itr
ρ^itr_i . . . . . Annealing probability of particle i in iteration itr
ran . . . . . Any binary random number
EXC_VM(j) . . . . . The execution time of running the set of cloudlets j on a VM
Cx . . . . . Cloudlet number x
R . . . . . Maximum transmission range a message can reach
Po . . . . . The sender transmit/output power in dBm
P̂o . . . . . The sender transmit/output power in mW
Fm . . . . . The fade margin in dB
Pr . . . . . The receiver sensitivity in dBm
f . . . . . The signal frequency in MHz
n . . . . . The path-loss exponent
Chapter 1
Introduction
The advancement of today's technologies has allowed the world to be connected through networks. Both wired and wireless networks are heavily used in our everyday computing, allowing information sharing and remote computing across continents. However, these technological advances and their increasing adoption are introducing many new challenges and opening new areas of research. In this thesis, two of the most important areas of research in today's systems are studied: Cloud computing (CCom) and Wireless Sensor Networks (WSNs).
CCom is one of the most interesting recently developed models of network computing, providing services over networks on demand. Real-world records of data centers' resource utilization show that the overall percentage of utilization ranges from 5% to 20% [1, 2]. In terms of elasticity, according to a study by Armbrust et al. [3], the pay-per-use concept is considered cost-effective if the cost of three years of server-hours is less than 1.7 times the cost of buying the server. With that in mind, clouds are widely adopted in present-day computing, but real issues regarding performance, reliability, and security still exist in such complex systems and need to be addressed. One of the main issues concerns the pay-per-use model, which may be undermined by a single user's underutilization of reserved resources: clouds, too, suffer from underutilization, but per user's reserved resources rather than per data center as a whole. Therefore, maximizing system utilization while simultaneously minimizing makespans is of great interest to cloud users wishing to reduce usage costs through decreased usage time.
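Armbrust et al.'s rule of thumb can be made concrete with a small check comparing three years of rented server-hours against the 1.7× purchase threshold; the class name, prices, and usage figures below are all invented purely for illustration:

```java
// Sketch of the elasticity rule of thumb from Armbrust et al. [3]: renting is
// considered cost-effective when three years of server-hours cost less than
// 1.7 times the purchase price of the server. All prices are hypothetical.
public class PayPerUseCheck {

    static boolean rentingIsCostEffective(double hourlyRate,
                                          double hoursUsedOverThreeYears,
                                          double serverPurchaseCost) {
        return hourlyRate * hoursUsedOverThreeYears < 1.7 * serverPurchaseCost;
    }

    public static void main(String[] args) {
        double serverCost = 3000.0;  // hypothetical purchase price
        // Light, bursty use: 4 h/day for 3 years at $0.10/h -> $438, under $5100.
        System.out.println(rentingIsCostEffective(0.10, 4 * 365 * 3, serverCost));  // true
        // Flat-out use: 24 h/day for 3 years at $0.25/h -> $6570, over the threshold.
        System.out.println(rentingIsCostEffective(0.25, 24 * 365 * 3, serverCost)); // false
    }
}
```

The check illustrates why elasticity favors bursty workloads: a server rented only when needed easily stays under the break-even threshold, while a fully loaded one may not.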
WSNs also face great challenges, both at the network level and at the level of the individual sensor nodes that form the network. One of the most important issues to be addressed at the sensor node level is power consumption, which is critical in WSNs because sensor nodes have small and limited power supplies. Consequently, WSN applications, such as localization procedures, need to be modified, or the behavior of sensor nodes needs to be adjusted, in order to reduce excessive power consumption.
Computational intelligence (CI) methodologies and approaches are among the techniques used to solve hard problems intelligently through the use of heuristic methods. George Polya described heuristics as "the art of good guessing": such algorithms solve complex problems faster while sacrificing some degree of accuracy, completeness, optimality, or precision. Nature-inspired CI techniques simulate the behavior of nature when solving problems; examples include the swarm intelligence algorithms, a set of biologically inspired algorithms, and probabilistic metaheuristics such as simulated annealing (SA), a global minimization strategy.
This thesis effectively applies CI approaches to address the aforementioned issues in CCom and WSNs. The two problems were optimized by implementing hybrid versions of CI algorithms inside the cloud simulator (CloudSim), advancing the work of the already implemented simple broker to maximize resource utilization and minimize makespans. Additionally, different versions of CI algorithms are designed as global optimizers in WSNs, to concurrently minimize the required localization time and maximize the number of fully localized nodes while reducing the power consumption of the nodes. The optimization in this case is achieved through the adjustment of the sensor nodes' output power levels, leading to different transmission ranges a message can actually reach.
1.1 Objectives and Contributions
The main focus of this thesis is the use of CI techniques to enhance system performance on multi-objective, multi-level optimization problems in CCom and WSNs. Generally, optimization problems are categorized into two types according to the type of the problem's variables: combinatorial and continuous optimization problems, for discrete and continuous variables, respectively. In this thesis, both types of optimization problems are solved, and results are provided and discussed.
The main contributions of the thesis are:
• The design and implementation of single- and multi-objective Particle Swarm Optimization (SOPSO and MOPSO) methods.
• The combination of multiple optimization algorithms into hybrid versions, such as using the SA method and the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) to enhance the performance of PSO and the quality of its solutions.
• The application of two optimization methods: to cloud scheduling, in order to decrease the execution time of tasks (aka cloudlets) on the cloud; and to the WSN localization problem, in order to find optimal solutions while optimizing the power consumption of the whole network.
• A detailed fine-tuning analysis of PSO parameters through a detailed parameter study, to enhance the performance of PSO and the quality of its solutions.
• The thesis source code, provided under the terms of the GNU General Public License (GPL); the library containing all code is called Cloud and Localization Computational Intelligence Techniques (clocacits) and is available online at: http://figshare.com/articles/clocacits/1050128
1.2 Thesis Structure
The aforementioned contributions are detailed in the next four chapters, which give a brief background, explain the implementation of all methods used, evaluate the results, and outline future work based on the thesis contributions.

Background information about the systems used in this research, the technologies, and the previous work on which this study builds is presented in Chapter 2. Chapters 3 and 4 present the detailed formulations of the optimization problems, the implemented methods, and complete analyses of the performance of the methods and the quality of the solutions found. Finally, Chapter 5 concludes the thesis.
Chapter 2
Background
2.1 Cloud Computing (CCom)
CCom is a term used to describe on-demand, elastic, and scalable services offered over a network. The two main types of clouds are private and public clouds, served over private (internal) and public networks, respectively. Hybrid clouds are also available; they combine the two previous kinds, with public clouds used to increase and supplement the capabilities of on-premise private clouds. Cloud companies offer cloud resources such as network, server, storage, or applications to users as services, with payment per resource unit used. Additionally, Internet companies provide convenient public cloud services over the Internet on a pay-per-use basis.
IT professionals and researchers are trying to optimize the work of the clouds,
including trying to reduce the overall cost and data center footprint, in addition to
improving the amount of computational power available via the cloud. According to
NIST [4], one of the essential characteristics of the clouds is to give the consumer
the illusion that network capabilities and computing power are unlimited, and can be
requested at any time and in any quantity.
When considering the cloud, the widely used saying "time is money" applies. When a user requests cloud resources, the cloud should be able to serve the request as soon as possible and in a cost-effective manner. In order to satisfy the "unlimited" and "elastic" characteristics of the cloud, consumer requests are often handled First-Come-First-Serve (FCFS), where the customer request should be the driving factor for workload scheduling [5]. Thus, network resources in CCom should be provided to users while satisfying the aforementioned characteristics of the cloud.
2.1.1 Makespan, Utilization, and Scheduling
It is important to consider both makespan and resource utilization when optimizing scheduling in a system such as a cloud. Makespan is the total time the network resources take to finish executing all tasks or jobs, and utilization is a measure of how well the overall capacity of the cloud is used. In order to increase the utilization of the resources at a given time, an effective scheduling algorithm should be used to distribute the tasks among the reserved resources. Makespan and utilization have an inverse relationship: increasing utilization decreases the makespan. In other words, to achieve higher utilization, tasks should be distributed efficiently among all the reserved cloud resources.
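This relationship can be sketched with a toy model: given an assignment of cloudlets (lengths in MI) to resources (speeds in MIPS), the makespan is the latest finish time, and a resource's utilization is its busy time as a fraction of the makespan. The class name, speeds, lengths, and assignments below are hypothetical illustrations, not values from the thesis experiments:

```java
import java.util.Arrays;

// Toy model of the makespan/utilization relationship described above.
public class MakespanDemo {

    // busy[r] = sum of the lengths (MI) of tasks assigned to resource r,
    // divided by that resource's speed (MIPS) -> busy time in seconds.
    static double[] finishTimes(double[] taskMi, int[] assignment, double[] mips) {
        double[] busy = new double[mips.length];
        for (int t = 0; t < taskMi.length; t++) {
            busy[assignment[t]] += taskMi[t] / mips[assignment[t]];
        }
        return busy;
    }

    // Makespan: the time at which the last resource finishes.
    static double makespan(double[] finish) {
        return Arrays.stream(finish).max().orElse(0.0);
    }

    // Utilization of resource r: its busy time as a fraction of the makespan.
    static double utilization(double[] finish, int r) {
        return finish[r] / makespan(finish);
    }

    public static void main(String[] args) {
        double[] mips = {1000, 1000};          // two identical VMs
        double[] taskMi = {4000, 2000, 2000};  // three cloudlets, lengths in MI
        int[] unbalanced = {0, 0, 0};          // everything on VM 0; VM 1 idle
        int[] balanced = {0, 1, 1};            // spread the load evenly
        System.out.println(makespan(finishTimes(taskMi, unbalanced, mips))); // 8.0
        System.out.println(makespan(finishTimes(taskMi, balanced, mips)));   // 4.0
    }
}
```

Spreading the load halves the makespan while raising every resource's utilization to 100%, which is exactly the inverse relationship stated above.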
A more efficient distribution of tasks decreases the makespan and, since we pay per second on the cloud, minimizes the cost of reserving resources through the decrease in utilization time. Fig. 2-1 shows three different resources, possibly with different computing power and different hosts. Consider a makespan of 500 seconds: during the execution of the tasks on the three resources, and according to the current brokering policy, resources R1, R2, and R3 are utilized during the 500 seconds for around 80%, 66%, and 100%, respectively.

Figure 2-1: Cloud Resource Utilization

In order to decrease the makespan of all tasks on the three resources to, for example, 450 seconds, rescheduling may be effective: some of the tasks that were assigned to R3 may be reassigned to either R1 or R2. In this way, the makespan will decrease, leading to an increase in the utilization of the resources over the new duration. This kind of solution is very cost-effective and highly applicable to pay-per-use cloud services.
2.1.2 Cloud Simulator
CCom simulation tools, like any other simulation tools, allow users to test modeled services in a controlled environment with different workloads and scenarios before deploying them on real clouds [6]. CloudSim is one of the most popular tools for simulating CCom systems, among others including iCanCloud, SIMCAN, CloudMIG Xpress, RealCloudSim, and PACSim. Simulation models developed using CloudSim require little effort, time, and programming knowledge. The tool is applicable and flexible for testing heterogeneous and federated cloud environments, and offers simulations of large-scale networks as well. It also contains service brokers in addition to provisioning and allocation policies, and a virtualization engine is available to aid in creating virtualized services on data centers [7].
2.1.2.1 CloudSim Network Topology
The CloudSim tool [7] starts by creating a network of nodes, as illustrated in Fig. 2-2. A basic setup consists of data centers, hosts, and cloud brokers. Data centers (the resource providers in CloudSim) are created first, with specifications that define the operating system, memory, bandwidth, storage, etc. One or more hosts are then created on each data center with the proper specification of RAM, storage, bandwidth, and processing elements, and the selection of the scheduling algorithm that schedules virtual machines (VMs) inside the host. Processing elements are known as cores or CPUs, where each processing element is given a defined processing power measured in millions of instructions per second (MIPS). Hosts are managed by data centers, where each data center may manage a single host or numerous hosts. The cloud broker, "an entity that creates and maintains relationships with multiple cloud service providers" [8], is also created to distribute work among the available data centers or cloud services, making the cloud broker the middleware subsystem in charge of the relationship between users and cloud service providers.
After creating all of the network nodes of CloudSim, VMs are created to run on the specified hosts. The characteristics of each VM are defined by parameters such as processing power in MIPS, RAM in megabytes, bandwidth, etc. As illustrated in Fig. 2-2, a scheduling algorithm must also be chosen to schedule cloudlets inside the VM, so scheduling is done at both the host and VM levels. The last step in this process is the generation of tasks (aka cloudlets), either by initializing them through code or from existing workload traces. Cloudlets are defined by specifications that give the task length in millions of instructions (MI), the needed number of processing elements, and a utilization model that states the cloudlet's execution rate by defining the currently requested MIPS from the processing elements.
When generating cloudlets from workload traces, the workload format should
be checked to make sure that it follows the standard workload format described
in [9]. Cloudlet lengths should also be converted to MI instead of the standard's
execution time in seconds; this is achieved by multiplying the execution time
by the execution rate, where the default execution rate in CloudSim is 1 MIPS. For
example, a cloudlet with an execution time of 10 seconds is converted to 10 MI.
Finally, after creating the network nodes, VMs, and cloudlets, the lists of available
VMs and cloudlets are submitted to the cloud broker, which assigns each cloudlet to run
on a specific VM based on the brokering policy.
2.1.2.2 Cloudlet Scheduling
As mentioned before, scheduling in CloudSim is done at the node level. The authors of the tool have already implemented two different scheduling algorithms at two
different levels of the software: the VM level and the host level. The provisioning
scenarios use space-shared and time-shared policies for VMs and task units [7]. Using
the space-shared policy allows only one VM or cloudlet to execute at a given moment
in time. In contrast, using the time-shared policy allows multiple cloudlets to
multi-task within a VM, and allows VMs to multi-task and run simultaneously
within a host, when time-shared is used at the two aforementioned levels.
VMs are allocated to hosts on a FCFS basis, as specified by the CloudSim authors
[7], even though the Shortest Job First (SJF) policy was found to be faster than
FCFS [10]. SJF, however, would disadvantage cloud users with larger cloudlets and
would not preserve the illusion of unlimited resources for them; therefore, FCFS is the
more suitable policy for the cloud. Similarly, cloudlets are executed on the corresponding
VM on a FCFS basis after being allocated to the VMs by the broker.
The simple implemented brokering will iterate through all cloudlets then assign
them to the available VMs one by one. For example, if we have two VMs and three
cloudlets, the broker will assign the first VM to run the first cloudlet, the second VM
to run the second cloudlet, and will then start again with the first VM by assigning
it to run the third cloudlet.

Figure 2-2: CloudSim Network Topology Design (data centers contain hosts, which schedule VMs; the VMs in turn schedule cloudlets)
2.1.2.3 Time-shared VM Scheduling and Oversubscription
Hosts are oversubscribed when the VMs inside them get a smaller amount of resources than
the requested amount, i.e., less bandwidth, storage, or CPU power [11]. The oversubscription technique normally tries to accommodate as many VMs as possible and to
make sure that all VMs requested by users will be instantiated. Since hosts
and networks have fixed amounts of resources and fixed computing power, the oversubscription method can have a large negative impact on the overall performance.
For example, when oversubscribing the CPU power, VMs will have a smaller CPU share,
which in return results in long execution times and makespans. Additionally, the
network link will be fully utilized when oversubscribing the network, and disk
operations will decrease the throughput when oversubscribing the disks [12].
However, CloudSim can only oversubscribe the CPU power and does
not allow disk, network, or memory oversubscription. The method is implemented
on top of the time-shared VM scheduling algorithm and simply scales down the amount
of MIPS requested by the VMs at the host level, to allow instantiating all VMs on the
available hosts.
2.2 Wireless Sensor Networks (WSNs)
WSNs consist of many sensing devices distributed within a given area.
Sensors in the network carry out different tasks such as recording weather conditions,
sensing motion, or recording sounds, in addition to many other tasks [13]. In WSNs,
sensors cooperate with each other to form a fully connected network that allows
information sharing between the network nodes. The collected information can also
be sent to a command and control center for processing and decision making. Such
networks have many civilian and military applications, and finding the actual location
of a single sensor is important in any type of WSN. For example, localization can be
used in military applications for infiltration detection and target tracking, or for
environmental monitoring and forest fire control.
2.2.1 WSN Localization
Each sensor node consists of multiple heterogeneous components such as a power
supply, CPU, memory, and a transceiver. Since the location of sensors is needed in
most WSNs, sensors can be equipped with an additional component such as a
Global Positioning System (GPS) receiver. However, sensor nodes are deployed in
significantly large quantities; therefore, it is not cost-effective
to equip all nodes with a GPS device. As a result, a scheme other than GPS is needed
to locate the sensors.
Localization in WSNs is a challenging task due to the inherent characteristics of
the network such as the constrained resources of sensor nodes which lead to limited
computational and communication capabilities, and power limitations due to the use
of batteries to power the nodes. Further, the information regarding an entire network,
including its topology and technology, is often not available at a single node. Due to
this lack of information, there is a requirement for the localization protocol of a given
network to be distributed.
A wireless network can be formed by spreading sensors over a terrain manually
or from an airplane, depending on the terrain conditions. Additionally, nodes in
WSNs may be positioned permanently or dynamically in a field [14]. In permanent
localization scenarios, knowing the location of a sensor is not a problem throughout
the lifetime of the network, but in dynamic networks, localizing nodes can be time-
and power-consuming and, in some scenarios, may lack accuracy.
To avoid high costs, two different types of nodes are used within a typical WSN:
anchor and blind nodes. Anchor nodes are aware of their positions, either through
the use of the aforementioned devices, which allow a node to obtain its position
in the global coordinate system (using GPS), or by deploying the nodes at known
positions in the local coordinate system. Blind nodes then rely on the anchor nodes
to estimate their own positions in the respective coordinate system [14]. Trilateration-based and
multilateration-based localization (TBL and MBL) techniques are among the best-known
and most widely used localization methods. TBL and MBL allow blind nodes to
localize themselves based on the estimated distances between the blind nodes and
the neighboring anchor nodes [15].
2.2.2 Power Consumption in WSNs
Wireless sensor nodes often use solar cells to extend the battery life in order to
allow nodes to run for longer times. Other methods of extending battery life are the
intelligent slowing of power consumption through a reduction in listening time [16],
increasing the sleep time [17], or modifying sampling rates [18]. Another method of
accomplishing power reduction is the use of multiple transmission ranges as is seen
in the well-known CC2420 ZigBee RF transceivers [19, 20]. The CC2420 allows nodes to
transmit messages using eight discrete output power levels, as discussed in section
28 of the transceiver data sheet [21]. The Atmel AT86RF230
transceiver, like the CC2420, allows varying the output power, but between
16 discrete levels rather than 8, ranging from −17.2 dBm to 3 dBm [22].
Having more output power levels allows finer-grained control of the transmission
range than the CC2420 can provide.
Previous studies took advantage of such functionality and tried to optimize
power consumption by varying the output power, as in [23], where the output power
of nodes was varied based on the distance between the communicating nodes after
sharing the information using the Request to Send/Clear to Send mechanism. Additionally, in [24] a localization protocol was proposed to optimize power consumption
after clustering nodes based on the used power levels. Most simulations and studies
in the literature either changed the power level sequentially while
observing the effect of using different power levels, or assigned power levels to nodes
randomly.
Table 6.1 of the CC2420 data sheet lists the five different modes in which the
transceiver consumes different amounts of power: Voltage Regulator Off (OFF), Power Down (PD), Idle (IDLE), Receive, and Transmit. Depending on the localization protocol, all of the
modes, including the transmit mode, are affected by the steps and behavior of
the localization procedure. For example, a longer localization time means more power
consumption due to the power consumed while in the idle mode.
2.3 Particle Swarm Optimization (PSO)
PSO is a population-based search algorithm inspired by bird flocking and fish
schooling, where each particle learns from its neighbors and itself during the time it
travels in space. The original single objective PSO (SOPSO) was first introduced by
Kennedy and Eberhart in 1995 and operates over a continuous space [25]. Later, in
1997 a discrete binary version of the algorithm was presented also by Kennedy and
Eberhart to operate on discrete binary variables [26]. PSO was extended by Moore
et al., as the first recorded attempt, to handle a multi-objective problem (MOP) [27].
Later in 2002 MOPSO was introduced by [28] as an effective algorithm to solve MOPs.
All versions of the PSO algorithm start by creating a number of particles that form
a swarm, which travels through the problem space searching for an optimum solution. An
objective function must be defined to evaluate every solution found by each particle
throughout the search. A particle in this method is a position
in a D-dimensional space, where each element can take a continuous value between
fixed upper and lower bounds. Additionally, each particle has a D-dimensional
velocity, where each element can also take a bounded continuous value. Alternatively,
the elements of the positions matrix of the binary PSO can take the binary value
0 or 1, while the value of each element of the velocity matrix lies in the range [0, 1].
2.3.1 Continuous PSO mathematical model
The individuals in PSO are a group of particles that move through a search space
with a given velocity. The mathematical model was designed to mimic the flocking
behavior of swarms, as listed in (2.1) to (2.5), where ω is the inertia constant, c1
and c2 represent the cognitive and social constants that are usually ∼ 2, and r1 and
r2 are random numbers. minValue and maxValue are the predefined minimum
and maximum possible continuous values, respectively, and Ran is a random number
between 0 and maxValue.
Equations (2.1) to (2.3) are used to update the velocity of the ith component of
particle p, while (2.4) and (2.5) are used to update the position of that same component. At each iteration, the velocity and position of each particle are stochastically
updated by combining the particle's current solution, the particle's personal best solution (p̂i), and the global best solution (ĝ) over all particles. However, choosing the
global best in MOPSO is quite different and more complex, as discussed in §2.3.3.
$$v_i = \omega v_i + c_1 r_1 \,(\hat{p}_i - p_i) + c_2 r_2 \,(\hat{g} - p_i) \tag{2.1}$$

$$\delta = \frac{maxValue - minValue}{2} \tag{2.2}$$

$$v_i = \begin{cases} minValue, & v_i < \delta \\ \delta, & v_i \ge \delta \end{cases} \tag{2.3}$$

$$p_i = Ran + v_i \tag{2.4}$$

$$p_i = \begin{cases} minValue, & p_i < minValue \\ maxValue, & p_i > maxValue \\ p_i, & minValue \le p_i \le maxValue \end{cases} \tag{2.5}$$
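A minimal sketch of one continuous-PSO update step for a single particle follows: the velocity is computed as in (2.1) and the resulting position is clamped to the bounds as in (2.5). For clarity it uses the standard position-plus-velocity step in place of the randomized update of (2.3)-(2.4), and all constants, bounds, and names are illustrative assumptions:

```python
import random

# Illustrative constants; the thesis's values may differ.
OMEGA, C1, C2 = 0.7, 2.0, 2.0
MIN_VALUE, MAX_VALUE = 0.0, 10.0

def update_particle(pos, vel, pbest, gbest):
    """One continuous-PSO step: velocity per (2.1), clamping per (2.5)."""
    new_pos, new_vel = [], []
    for i in range(len(pos)):
        r1, r2 = random.random(), random.random()
        v = (OMEGA * vel[i]
             + C1 * r1 * (pbest[i] - pos[i])
             + C2 * r2 * (gbest[i] - pos[i]))   # (2.1)
        p = pos[i] + v                           # standard position update
        p = min(max(p, MIN_VALUE), MAX_VALUE)    # clamp as in (2.5)
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel
```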
2.3.2 Binary PSO mathematical model
The continuous PSO equations above were modified in [26] to produce a binary output
of 0 or 1 instead of a continuous value. The minValue and maxValue boundaries are
changed to 0 and 1, respectively, and replacing equations (2.2)-(2.3) with (2.6)
keeps the velocities in the range [0, 1]. The positions matrix, however,
is updated using different equations than in the continuous PSO, shown in (2.7)-(2.8), where
s(pi) is the sigmoid function value for particle i, e is Euler's number, and ran is
a random binary value. The sigmoid function is used in the equation to scale the value
into the range [0, 1].
$$v_i = \begin{cases} minValue, & v_i < minValue \\ maxValue, & v_i > maxValue \\ v_i, & \text{otherwise} \end{cases} \tag{2.6}$$

$$s(p_i) = \frac{1}{1 + e^{-p_i}} \tag{2.7}$$

$$p_i = \begin{cases} minValue, & s(p_i) \le ran \\ maxValue, & \text{otherwise} \end{cases} \tag{2.8}$$
All of the previous continuous and binary PSO equations indicate that the velocities of
a particle's neighbors and the current velocity of the particle itself contribute to deciding the next
position of the particle. Particles behave this way in order to be part of the
swarm and to keep up with the other particles during the search for a solution,
so each particle adapts to the velocity of the swarm as a whole by learning from itself
and its neighboring particles. Also, to improve the performance of SOPSO, the
inertia weight (ω) in (2.1) can be modified dynamically (instead of using a constant value)
with mechanisms such as SA, to increase the probability of finding a near-optimal
solution in fewer iterations and with less computational time; see the further discussion
in §2.3.4.
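The binary update rules (2.6)-(2.8) can be sketched as below. Note that, as written in the thesis, the sigmoid is applied to the position value and compared against the random draw `ran`; the function names and defaults are illustrative assumptions:

```python
import math

# Sketch of the binary-PSO update rules (2.6)-(2.8), with minValue = 0
# and maxValue = 1 as stated above.
def clamp_velocity(v, vmin=0.0, vmax=1.0):
    # (2.6): keep each velocity element inside [0, 1]
    return min(max(v, vmin), vmax)

def sigmoid(p):
    # (2.7): scale a value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-p))

def update_bit(p, ran):
    # (2.8): the position element becomes 0 or 1
    return 0 if sigmoid(p) <= ran else 1
```

The canonical binary PSO of Kennedy and Eberhart applies the sigmoid to the velocity rather than the position; the sketch follows the thesis's formulation as printed.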
2.3.3 Multi-objective PSO (MOPSO)
MOPs are known to have many contradictory objectives, where enhancing the
result of one objective has a negative impact on the other objectives involved.
MOPSO attempts to effectively find a solution or a set of solutions that balance
all the involved objectives, as thoroughly discussed in [29–37]. The
main differences between the SOPSO and MOPSO algorithms are:
• MOPSO does not have a single global best solution, the ĝ of the SOPSO in (2.1),
that all particles learn from when they update their velocities in each iteration.
Instead, MOPSO will have an archive of particles called leaders, where each
leader is a potential solution of the problem. So instead of having only one
global best solution the MOPSO will keep track of different solutions and use
them randomly to lead other particles to update their velocities in each iteration
using (2.1);
• Dominance comparators are also implemented inside the MOPSO to help in
finding a set of optimal solutions instead of only one [38];
• To avoid overfilling the leaders archive, a crowding distance based on non-dominated sorting methods should be used to decide which particles remain in the archive [39, 40]; and
• A mutation operator is applied to a portion of the swarm to improve the exploration and search ability and to avoid premature convergence [35, 36, 38].
2.3.4 The inertia weight (ω)
ω in (2.1) is one of the most important adjustable parameters in PSO, besides
the acceleration coefficients and random variables. The value of ω can affect the overall
performance of the algorithm in finding a potentially optimal solution in less computing
time. Many techniques are used to choose or modify the value of ω at runtime, such
as the fixed ω (FIW) with constant values, and the linearly decreasing ω (LDIW), which
changes the value of ω linearly per iteration [41]. There are also the adaptive ω (AIW)
and random ω (RIW) suggested by [42–44], where ω starts with a large value such as 0.9
and decreases toward 0.1.
2.3.4.1 Adjusting ω using a Simulated Annealing (SA) method
The RIW method proposed by [44] shows an advantage over the other methods, mainly in adjusting the balance between a particle's local and global search
abilities. To increase the probability of finding a near-optimal solution in fewer iterations and less computing time, RIW uses LDIW together with an SA mechanism. LDIW in
(2.9) was suggested by Eberhart et al. [41] to lower the negative impact of using the
FIW method; however, LDIW still has disadvantages, mainly caused by low local
search ability at the beginning of the PSO iterations.
$$\omega^{i}_{itr} = \frac{(\omega_{max} - \omega_{min})\,(itr_{max} - itr)}{itr_{max}} \tag{2.9}$$

where ω^i_itr is the inertia weight of particle i in iteration number itr, ω_max is a
predefined maximum possible value of the inertia weight, ω_min is a predefined minimum
possible value of ω, and itr_max is the predefined number of iterations.
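Taken literally, (2.9) can be computed as follows. Note that the commonly cited LDIW schedule adds ω_min as an offset so that ω decays from ω_max down to ω_min, whereas (2.9) as printed decays from (ω_max − ω_min) down to 0; the sketch follows the printed form, and the function name is an illustrative assumption:

```python
# The LDIW schedule of (2.9), computed exactly as printed above.
def ldiw(itr, itr_max, omega_max=0.9, omega_min=0.1):
    return (omega_max - omega_min) * (itr_max - itr) / itr_max

print(ldiw(0, 1000))     # close to 0.8 at the first iteration
print(ldiw(1000, 1000))  # 0.0 at the final iteration
```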
In LDIW, even if a particle starts near the global optimum, it will keep moving fast,
which can sometimes push it away from that point. Similarly, if a particle does not
find a near-optimal solution and is stuck in one part of the space, the linear decrease
of ω reduces the global search ability, which decreases the chance of finding a better
solution. This, in turn, makes the early iterations more effective in finding the nearest
optimal solution than the later ones. To overcome these problems with LDIW, an SA
method is used alongside LDIW.
The main idea behind RIW, as discussed before, is to overcome the negative
influences of LDIW on both the local and global search abilities. To achieve this, RIW
learns from the historical velocities and fitness of the particle: inertia weights are first
selected randomly and later adjusted adaptively according to the best solution found.
To learn from the historical velocities and fitness values, the annealing method of
Yue-lin et al. [44] uses the cooling temperature function shown in (2.10). To
increase the probability of changing the particle's speed, the average fitness value
of each particle, along with the best fitness recorded by the particle, is used in the
equation.
$$cTemp_{itr} = \frac{pFitness^{itr}_{avg}}{pFitness_{best}} - 1 \tag{2.10}$$

In this equation, cTemp_itr is the annealing temperature in the current iteration
itr, pFitness^itr_avg is the average of all fitness values recorded up to the
current iteration, and pFitness_best is the best recorded fitness of the particle.
According to the aforementioned annealing temperature in (2.10), ω is adjusted
according to (2.13) using the annealing probability ρ of the proposed method, which
is calculated according to (2.11) and (2.12). Here, pFitness^itr is the particle's current
fitness in iteration itr; pFitness^(itr−k) is the particle's fitness in iteration itr − k,
where k is a fixed number; e is Euler's number; cTemp_itr is the cooling temperature
from (2.10); ω^i_itr is the inertia weight ω of particle i in iteration number itr; and
ran is a random binary number.
$$\eta = \frac{pFitness^{itr-k} - pFitness^{itr}}{cTemp_{itr}} \tag{2.11}$$

$$\rho^{itr}_{i} = \begin{cases} 1, & pFitness^{itr-k} \le pFitness^{itr} \\ e^{-\eta}, & \text{otherwise} \end{cases} \tag{2.12}$$

$$\omega^{i}_{itr} = \begin{cases} \dfrac{1 + ran}{2}, & \rho \ge ran \\ \dfrac{0 + ran}{2}, & \text{otherwise} \end{cases} \tag{2.13}$$
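The chain (2.10)-(2.13) can be sketched for a single particle as below. This is a literal reading for a minimization problem; the function name and the fitness-history representation are assumptions, and the sketch assumes cTemp is nonzero whenever the e^(−η) branch is taken:

```python
import math
import random

# Sketch of the RIW-SA inertia adjustment from (2.10)-(2.13).
# fitness_history: this particle's recorded fitness values so far
# (minimization assumed); k: the look-back interval of Algorithm 2.
def riw_sa_inertia(fitness_history, k, ran=None):
    if ran is None:
        ran = random.randint(0, 1)              # random binary number
    f_now = fitness_history[-1]
    f_prev = fitness_history[-1 - k]
    f_avg = sum(fitness_history) / len(fitness_history)
    f_best = min(fitness_history)
    c_temp = f_avg / f_best - 1                 # (2.10)
    if f_prev <= f_now:
        rho = 1.0                               # (2.12), first case
    else:
        eta = (f_prev - f_now) / c_temp         # (2.11)
        rho = math.exp(-eta)                    # (2.12), second case
    return (1 + ran) / 2 if rho >= ran else ran / 2   # (2.13)
```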
Chapter 3
Cloudlet Scheduling with PSO
3.1 Introduction
In order to reduce the costs of using on-demand cloud resources, CCom systems
always attempt to maximize the utilization of the available resources. To maximize
utilization, smart and adaptive scheduling algorithms may be used [45].
Designing smart scheduling algorithms is important for overcoming the problems and
constraints that arise when delivering cloud services over the network. One of the more
interesting problems arises when many users request many cloud resources at the same
time; such problems can be solved by properly scheduling cloudlets on the available
resources. Additionally, oversubscription has a large negative impact on the performance
of VMs, so a smarter scheduling method that maximizes utilization is very
helpful for decreasing the makespan. Such a method is especially important under
oversubscription: the CPU power of the VMs on each host is already degraded, and
if the reserved VMs are not used effectively, the overall performance and makespans
will be far worse than when the cloud's resources are not oversubscribed.
Execution sequences can be found after satisfying a set of objectives, like minimum
execution time or minimal cost, or overcoming a set of constraints, such as bandwidth
limitation and location of network resources. Therefore, in order to minimize the cost
of using cloud resources, the time of executing all assigned tasks to various compute
resources should be minimized.
Accordingly, this chapter presents a solution for improving the makespan and
the utilization of cloud resources through a hybrid variant of a popular
population-based metaheuristic (PBM) algorithm, the PSO. The method uses a proposed SA algorithm to enhance the performance of the binary PSO for scheduling.
As a result, the hybrid metaheuristic method was able to enhance the performance
of the simulated cloud in the CloudSim tool by assigning cloudlets to VMs according
to their MIPS values, providing a form of load balancing that decreases the makespan
of the whole workload. The test cases in this chapter were designed using a real-world
workload, HPC2N [46], and different combinations of cloud resources.
3.2 PSO and Cloud Scheduling
Workload scheduling is known to be an NP-complete problem; therefore, metaheuristics have been used to solve such problems, as in [47–50]. The idea behind using
metaheuristics is to increase performance and decrease the computational time needed
to get the job done. In our case, metaheuristics are a robust way of
finding the right combinations of resources and tasks to minimize computational
expenses, cut costs, and provide better services. In short, PSO is used to solve the
problem shown in Fig. 2-1.
Abraham et al. [51] discussed the features of GAs, SA, and Tabu Search (TS)
as three basic heuristics for grid scheduling. Nevertheless, according to a study by
Rana et al. [52] comparing optimization algorithms for resource scheduling in the
clouds, the fastest-growing and most-used algorithms are ant colony optimization (ACO),
PSO, and genetic algorithms (GAs), because they are easier to implement and because
of their performance in finding a solution; as a result, other algorithms have not been
able to grow and compete with them.
At the same time, according to other studies in [48, 53], PSO was found to be better
than GA in most cases and better than TS in some cases. According to [54],
PSO was also found to be faster and simpler than GA in terms of execution and
implementation. Furthermore, PSO is one of the commonly used workflow scheduling
algorithms in the cloud [55], where it is mainly used to increase cloud resource
utilization, which in return reduces costs and makespans. Zhang et al. [56]
implemented hierarchical swarms in CloudSim for load balancing and scheduling
cost reduction; their method was successful, and PSO showed very good results in
comparison to the best resource selection (BRS) algorithm.
The performance of PSO can be improved with the help of other algorithms or by
using an improved version of PSO [57]. For example, a hybrid version of PSO and TS
was used in CloudSim to maximize utilization and reduce energy consumption;
the method was very successful, reducing energy consumption by 67.5% [58].
Lower average task execution times were also achieved by a mixed PSO and SA
scheduler than by normal PSO, GA, ACO, or SA alone [59].
For all of the reasons mentioned above, and because the PSO implementation
is very simple in comparison with other methods, PSO was chosen to schedule tasks
inside CloudSim. A combination of PSO and SA is used to allow particles to explore
more of the problem space and avoid getting stuck in local optima. This chapter proposes
a method that extends the capabilities of the broker: a smarter method than the simple
Round-Robin-like iterative resource allocation already implemented inside CloudSim.
Table 3.1: Execution-time table

        C1    C2    C3    C4
VM1     4     6     8     10
VM2     2.6   4     5.3   6.6
VM3     1.6   2.4   3.2   4

Table 3.2: A workload consisting of four cloudlets

Cloudlets   MI
C1          200
C2          300
C3          400
C4          500
sum         1400

Table 3.3: List of the available cloud resources

Resources   MIPS
VM1         50
VM2         75
VM3         125
sum         250

3.3 Implemented Method

The detailed description of mapping the problem to PSO, and of how the SA algorithm was used to enhance the performance of PSO through ω adjustment, is presented next.
3.3.1 Problem Formulation
The cloud broker assigns the cloudlets to the available cloud resources, i.e., the
VMs. The main objective of the proposed solution is to achieve the highest possible
utilization of the group of VMs with a minimum makespan. However, the costs of
using the resources and the positions of the VMs across the system are not considered
as parameters in this solution. Additionally, task migration between resources is not
allowed, which means that every cloudlet runs on exactly one VM until it finishes executing.
As an example, Table 3.1 shows how many seconds each resource from Table 3.3
takes to run each cloudlet from Table 3.2, obtained by dividing the cloudlet length by the
processing power of the resource. Accordingly, C1 takes 4 seconds to run on VM1,
but 2.6 and 1.6 seconds to run on VM2 and VM3, respectively.
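The execution-time table can be recomputed from Tables 3.2 and 3.3 in a few lines; the variable names below are illustrative:

```python
# Recomputing Table 3.1: execution time (seconds) equals cloudlet length
# (MI) divided by VM processing power (MIPS).
cloudlets = {"C1": 200, "C2": 300, "C3": 400, "C4": 500}   # lengths in MI
vms = {"VM1": 50, "VM2": 75, "VM3": 125}                   # power in MIPS

exec_time = {vm: {c: mi / mips for c, mi in cloudlets.items()}
             for vm, mips in vms.items()}

print(exec_time["VM1"]["C1"])  # 4.0 seconds, matching Table 3.1
```

Values such as 200/75 come out as 2.67 s, which Table 3.1 shows truncated to 2.6.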
The objective function of the algorithm used in this study is to find the shortest
time for running the 4 tasks on the available resources, thereby achieving a minimum
makespan; thus, the best combination of cloudlet-to-VM pairs must be found. In order
to meet the objective of the algorithm and find an optimum solution, (3.1) is used to
calculate the fitness value of each PSO particle. The fitness function in (3.1) calculates
the execution times of all possible cloudlet combinations on every cloud resource and
then returns the highest execution time as the fitness value of the particle.

$$Fitness = \max\,[\, EXC_{VM_1}(j_1),\; \dots,\; EXC_{VM_n}(j_m) \,] \tag{3.1}$$

Here, EXC_{VM_1}(j_1) is the execution time of running the set of cloudlets j_1 on
VM_1. j_1 is an ordinary set, e.g., j_1 = [C_1, C_2, ..., C_x], where x is the number
of cloudlets. Also, n and m are the number of VMs and the number of possible sets of
cloudlets, respectively.
3.3.2 PSO Algorithm
PSO initializes a swarm of particles, where each particle has a velocity and a
positions vector; the complete pseudo code is shown in Algorithm 1. Both
vectors look like Table 3.4: a 3 × 4 matrix filled with binary values in the case of the
positions vector, and with values in the range [0, 1] for the velocity vector. Furthermore,
since no task migration is allowed, if we want to run C1 on VM2, the second element
in the first column of the positions matrix will have the value 1, and the rest of the
column values will be 0. The problem shown in Table 3.1 can be solved after calculating its fitness value using (3.1) as follows:

Fitness = Max [ 6, 5.3, (1.6 + 4) ] = 6
Algorithm 1: BSOPSO pseudo code

 1: procedure BSOPSO(nodesList)
 2:     calculate execution times          ▷ as in Table 3.1
 3:     initialize the swarm
 4:     set global best
 5:     for i ← 0 → number of iterations do
 6:         for j ← 0 → number of particles do
 7:             calculate inertia value    ▷ using the SA method
 8:             calculate new velocities
 9:             calculate new positions
10:             calculate fitness value
11:             evaluate solution
12:             update particle memory
13:             update global best
14:         end for
15:     end for
16: end procedure
Table 3.4: Cloud resources assignment table (BSOPSO positions matrix)

        C1   C2   C3   C4
VM1     0    1    0    0
VM2     0    0    1    0
VM3     1    0    0    1
The fitness value of 6 seconds is the minimum makespan of running all tasks on the
available resources, and the corresponding assignment is the best combination of cloudlets
to resources that can be found. Table 3.4 is the positions matrix of this optimal solution.
The table clearly defines the relationship between VMs and cloudlets: C2 runs on VM1,
C3 runs on VM2, and C1 and C4 run on VM3. The cloudlets on each VM are
executed on a FCFS basis, as discussed before. On the other hand, if the computing
power of the resources or the lengths of the cloudlets were different, a different solution
would be found, in which different VMs would run different cloudlets.
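The makespan computation behind this example can be sketched as follows. The assignment dictionary encodes Table 3.4 (names are illustrative, and the 5.33 s load on VM2 appears rounded to 5.3 in the text above):

```python
# Evaluating the fitness (3.1) of the assignment in Table 3.4: each VM's
# load is the sum of execution times of its assigned cloudlets, and the
# fitness is the maximum load, i.e., the makespan.
cloudlets = {"C1": 200, "C2": 300, "C3": 400, "C4": 500}   # lengths in MI
vms = {"VM1": 50, "VM2": 75, "VM3": 125}                   # power in MIPS
assignment = {"C1": "VM3", "C2": "VM1", "C3": "VM2", "C4": "VM3"}

loads = {vm: 0.0 for vm in vms}
for c, vm in assignment.items():
    loads[vm] += cloudlets[c] / vms[vm]

fitness = max(loads.values())
print(fitness)  # 6.0 seconds, the makespan of this assignment
```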
3.3.3 Randomly adjusted inertia weight (ω)
To increase the probability of finding a near-optimal solution in fewer iterations
and less computing time, Algorithm 1 uses the RIW method to calculate ω in (2.1), as
discussed in §2.3.4.1. RIW was chosen because it showed an advantage over
the other methods, mainly in adjusting the balance between a particle's local and
global search abilities. Algorithm 2 shows the pseudo code of the RIW algorithm
implemented inside CloudSim as part of the PSO algorithm, which updates the inertia
weight (ω) in (2.1) randomly, as discussed previously.
Algorithm 2: RIW-SA method pseudo code

 1: procedure Calculate ω(nodesList)
 2:     define the k value in (2.11) and (2.12)
 3:     define ωmax and ωmin in (2.9)
 4:     if itr is a multiple of k then
 5:         calculate ρ according to (2.12)
 6:         calculate ω^i_itr according to (2.13)
 7:     end if
 8:     if itr is not a multiple of k then
 9:         calculate ω^i_itr according to (2.9)
10:     end if
11: end procedure
3.4 Simulations and Results
The proposed scheduling method extends the functionality of the cloud broker
inside CloudSim. In PSO, 100 particles were created, and each particle's fitness was
evaluated over 1,000 iterations. Additionally, based on a study by Eberhart and
Shi [60], which compared inertia weights and constriction factors in PSO, the acceleration
coefficients c1 and c2 in (2.1) were set to 1.49445, as these values
showed a better influence on the performance of PSO. Finally, k, ωmax, and ωmin in
Algorithm 2 were set to 5, 0.9, and 0.1, respectively.
Table 3.5: Some of the processors used in data centers.

Processor name               MIPS      Cores
Intel Core i7 2600K          128,300   4
Pentium 4 Extreme Edition    9,726     1
AMD FX-8150                  108,890   8
AMD Phenom II X6 1100T       78,440    6

3.4.1 Setting the computing power in MIPS
A list was compiled of the most commonly used processors in data centers and
made available inside CloudSim for use. This enhancement will allow simulation
developers to easily build simulation scenarios using the processor’s brand name (e.g.
Intel core i7 2600k), instead of the raw values of MIPS and number of processing
elements (PEs) when defining the processing power of VMs and Hosts.
Table 3.5 shows some of the processors with their MIPS processing power values
and number of PEs (A.K.A. cores). The values were retrieved from [61], where the
Dhrystone benchmark program was used to calculate the cumulative processing power
of all cores in MIPS.
3.4.2 Results of scheduling without oversubscription
The simulation was set as follows: (1) one data center was created with the default
characteristics defined by CloudSim authors as in example 6 provided with the source
code, (2) two different hosts were created on the data center with: 2 GB ram, 1 TB
storage, 10 GB/s bandwidth. Time-shared scheduling was chosen to schedule VMs
on the two hosts. The first host has an Intel Core 2 Extreme X6800 processor that
comes with 2 cores (PEs) and gives a cumulative processing power of 27079 MIPS, as
taken from [61]. The second host has an Intel Core i7 Extreme Edition 3960X with
6 cores that gives a cumulative processing power of 177730 MIPS. (3) cloud broker
28
that implements PSO was used, (4) 5 VMs were created, where each has an Intel
Pentium 4 Extreme Edition processor with: 10 GB image size, 0.5 GB memory, 1
GB/s bandwidth and 1 processing element that gives 9726 MIPS processing power.
Xen VM monitor [62] was also used for all of them, in addition to, using the timeshared scheduling to schedule cloudlets inside the VMs, (5) cloudlets were generated
from a standard formatted workload of a high performance computing center called
HPC2N in Sweden [46]. Each row in the workload represents a cloudlet where we
get the id of the cloudlet from the first column, the length of the cloudlet from the
fourth column (the runtime value multiplied by the rating which is defined as 1 MI
in CloudSim), and finally the number of the requested processing elements from the
eighth column.
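The row-to-cloudlet mapping just described can be sketched as follows. This is an illustrative parser, not the thesis' actual code; the sample line is hypothetical and only mimics the whitespace-separated Standard Workload Format layout:

```java
public class WorkloadRowParser {
    static final int RATING = 1; // MI per second of runtime, as defined in CloudSim

    // Maps one workload row to {cloudlet id, length in MI, requested PEs},
    // using the 1st, 4th, and 8th columns as described above.
    public static long[] parse(String row) {
        String[] f = row.trim().split("\\s+");
        long id     = Long.parseLong(f[0]);          // 1st column: cloudlet id
        long length = Long.parseLong(f[3]) * RATING; // 4th column: runtime -> MI
        long pes    = Long.parseLong(f[7]);          // 8th column: requested PEs
        return new long[] { id, length, pes };
    }

    public static void main(String[] args) {
        // hypothetical row: id=1, runtime=4782 s, 2 requested PEs
        long[] c = parse("1 0 26 4782 1 -1 -1 2 -1 -1");
        System.out.println(c[0] + " " + c[1] + " " + c[2]); // 1 4782 2
    }
}
```

With the rating fixed at 1 MI, the cloudlet length in MI equals the recorded runtime in seconds.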
Fig. 3-1 shows the time elapsed from submission to completion when executing
each group of cloudlets on the available cloud resources four times: the first time
using the simple brokering and the remaining three using the implemented PSO
method. The simulation was run for four different groups of cloudlets; the 1st,
2nd, 3rd and 4th groups consisted of 20, 50, 100 and 200 cloudlets, respectively.
The figure clearly shows that the makespan was minimized when using PSO
in most of the cases. The optimization values ranged from a 46% to 51% improvement
for the 20 cloudlets, 17% to 26% for the 50 cloudlets, 14% to 20% for
the 100 cloudlets, and from an 11% reduction to a 4% increase of makespan for
the 200 cloudlets.
Fig. 3-2 shows the convergence of the global fitness value (the
value recorded by the VMs) during the 1,000 iterations of the PSO algorithm. This
shows how decreasing the size of the search space and increasing the number of
iterations increase the probability of finding more optimal solutions. Notably,
an improvement to the solution for the 20-cloudlet problem was found during the
last 100 iterations, with no improvements for approximately the prior 400 iterations.
[Bar chart: execution time in seconds (0–1600) vs. number of cloudlets (20, 50, 100, 200), comparing Simple Brokering with the First, Second, and Third PSO tries.]
Figure 3-1: Simulation makespans of four groups of cloudlets
[Line chart: fitness value in seconds (30–110) vs. iteration number (0–1,000) for the 20-, 50-, and 100-cloudlet workloads.]
Figure 3-2: Fitness value vs. Iteration number
Such findings demonstrate the advantage of using RIW to update ω, which updates
the velocity and position of particles in a way that increases the probability of finding
a near-optimal solution even in the later iterations.
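The RIW update mentioned here is commonly given in the Eberhart–Shi form ω = 0.5 + rand()/2, sampled anew every iteration so particles keep exploring late in the search. A minimal sketch (the class name is mine, and the assumption that this exact form is used matches the common definition rather than a formula quoted in this section; the thesis defines its RIW in Chapter 2):

```java
import java.util.Random;

public class RandomInertiaWeight {
    // Random inertia weight (Eberhart & Shi): a fresh uniform value
    // in [0.5, 1.0) every time it is queried, once per iteration.
    public static double nextOmega(Random rng) {
        return 0.5 + rng.nextDouble() / 2.0;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 1000; i++) {
            double w = nextOmega(rng);
            if (w < 0.5 || w >= 1.0) throw new AssertionError("out of range: " + w);
        }
        System.out.println("all omega values in [0.5, 1.0)");
    }
}
```

Because ω never settles to a small fixed value, the velocity term retains enough magnitude for occasional late improvements, consistent with the behavior observed in Fig. 3-2.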
Furthermore, the Mann-Whitney test was performed on two different sets, where
every value is the makespan of executing a workload of 100 cloudlets. The first
numeric set was generated using the regular CloudSim brokering while the other
set was generated using the metaheuristic method. The two distributions were
found to be significantly different at a significance level of p ≤ 0.05. The regular
CloudSim scheduling method was found to have constant fitness values for the same
Table 3.6: Results of running the BSOPSO scheduling method for 100 times

Function       20 Clts   50 Clts   100 Clts   200 Clts
AVG             359.15    737.91    1136.88    1670.50
STDEV            22.67     94.58     130.05     163.21
AVG - STDEV     336.48    643.33    1006.83    1507.29
AVG + STDEV     381.82    832.49    1266.93    1833.71
Table 3.7: Results of the simple CloudSim brokering method

Function       20 Clts   50 Clts   100 Clts   200 Clts
AVG             634.64    902.75    1179.37    1521.06
set of cloudlets, in contrast to the metaheuristic method, where fitness values were
found to be adaptable and variable along the distribution.
The final test was prepared by running the simulation 100 times using the
PSO method; the average makespans in seconds and standard deviations were then
calculated for the same 4 groups of cloudlets, and the results are listed in Table 3.6.
The simple brokering of CloudSim was also tested and the output is listed in Table 3.7;
standard deviations in this case will always be zero because the simple
broker always gives the same execution sequences and makespans for the same set
of cloudlets and cloud resources.
Table 3.6 clearly shows that the larger the search space, the less reliable the result,
since the result data is widely spread around the mean. For example, the difference
between the best and worst solution found by PSO for the 200-cloudlet group was
approximately 844 seconds, meaning the worst solution took approximately 34% longer
than the best PSO solution. For smaller populations, like the
first three cloudlet groups, the average makespan improvement ranged approximately
from 43% to 4%.
3.4.3 Results of scheduling with oversubscription
The network nodes in the simulation scenario were carefully created to allow
oversubscription to happen. The Cloud simulation was implemented as follows: (1)
one data center was created; (2) two different hosts were created in the data center
with 4 GB RAM, 1 TB storage, and 10 GB/s bandwidth, and the time-shared scheduling
with oversubscription algorithm was chosen to schedule VMs on hosts. The first host
has a cumulative 27079 MIPS and 2 cores (PEs), while the second host has 49161
MIPS and 4 PEs; (3) a cloud broker that implements PSO was used; (4) 10 VMs
were created, where each requests 12,000 MIPS from 1 PE, 512 MB RAM, and 1 GB/s
bandwidth, and uses time-shared scheduling to schedule cloudlets inside the VMs;
(5) cloudlets were generated from the previously mentioned HPC2N workload;
(6) the simple VM allocation policy was used to assign VMs to run on specific hosts;
this method simply searches for the least utilized host and assigns the next VM to
run on it.
Using time-shared scheduling in this case results in creating only 6 of the
10 VMs that the user requested; the remaining 4 VMs fail due to a shortage in the
hosts' processing power (MIPS). To ensure that a shortage of CPU capacity was the
only cause of failing to instantiate VMs, the hosts' RAM was upgraded to 4 GB. In
this particular example CloudSim will create VM3 and VM5 on the first host and
VM1, VM2, VM4 and VM6 on the second host. The other VMs will fail due to the
shortage in the CPU capacity of the hosts. Each successfully instantiated VM gets
the MIPS amount it requested from the cumulative MIPS of the host it was assigned
to, so in this case Host1 can accommodate 2 VMs (24,000/27,079) and Host2 can
host 4 VMs (48,000/49,161).
Using the oversubscription method will allow instantiating the whole 10 VMs. In
this particular example VM number 3, 5, 7 and 9 will run on host1 and VM number
Table 3.8: Makespans using Time-shared with or without oversubscription

                        20 Clts   50 Clts   100 Clts   200 Clts
no-oversubscription     1161.47   1969.28    2881.96    4125.37
oversubscription        2059.21   3491.33    5109.32    7313.64
1, 2, 4, 6, 8 and 10 on host2. Running 2 more VMs on host1 and 2 more on host2
will certainly decrease the performance of the other VMs on the same host, since the
oversubscription method divides the processing power of the host among all its VMs.
So VMs running on host1 will have 27079/4 = 6769.75 MIPS each instead of the
12,000 requested by the user when initializing the VMs; in the same way, VMs on
host2 will have 49161/6 = 8193.5 MIPS each instead of 12,000.
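The degraded per-VM shares can be reproduced directly (an illustrative sketch, not CloudSim's internal code):

```java
public class OversubscribedShare {
    // Under oversubscription, the host's cumulative MIPS is divided
    // equally among all VMs placed on it.
    public static double mipsPerVm(double hostMips, int vmCount) {
        return hostMips / vmCount;
    }

    public static void main(String[] args) {
        System.out.println(mipsPerVm(27079, 4)); // 6769.75 MIPS per VM on host1
        System.out.println(mipsPerVm(49161, 6)); // 8193.5 MIPS per VM on host2
    }
}
```

Each VM therefore runs at roughly 56–68% of its requested 12,000 MIPS, which is the direct cause of the longer makespans in Table 3.8.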
As discussed before, the oversubscription method has a negative impact on
the overall performance of the VMs, and since CloudSim normally assigns cloudlets
iteratively to VMs, the assignment process in this case has a huge impact on the
makespan of the workload. An intelligent algorithm such as PSO should be used
to minimize the makespan and decrease the impact of using oversubscription.
Table 3.8 shows the makespan times with and without the oversubscription method
for four workloads of different lengths. The table clearly shows that using the
oversubscription method always causes a decrease in performance, by up to 77%.
Using PSO was very successful in decreasing the impact of the oversubscription
method. Table 3.9 shows the average, minimum, and maximum makespans
achieved when using the BSOPSO scheduling method for the same workloads as
above. The worst solution found by PSO during the 100 trials was still better than
the solution found by the normal oversubscription method. The reduction in the
fitness values ranged from 42% to 58%, and in some cases the minimum solution
found by PSO was only 3% to 7% longer than the time-shared method.
Table 3.9: Makespans' averages, minimums, and maximums while using
Time-shared scheduling with oversubscription and the BSOPSO

Function   20 Clts   50 Clts   100 Clts   200 Clts
AVG        1577.52   2521.74    3531.47    4943.49
Min        1205.46   2050.47    3092.59    4370.92
Max        1932.77   2946.85    4145.89    5707.53

3.5 Conclusion
In this chapter, PSO was used to assign VMs to run different cloudlets as part of
the broker in the CloudSim tool. The results clearly show that PSO was able to
minimize the makespan of the workload, and it excels at optimizing the simulated
scheduling results of CloudSim in a way that maximizes utilization and minimizes
the costs of using on-demand cloud services. At the same time, PSO, like
any other metaheuristic method, gives no guarantee of finding the optimal solution.
Consequently, the chance of finding an optimal solution diminishes as the search
space expands. However, using SA gave particles the ability to find better solutions
during late stages of the search, which in turn enhanced the exploration abilities of
the PSO particles.
PSO was also able to decrease the negative impact of oversubscribing the CPU
power of hosts on the overall performance, improving the result by up to 58% over
the normal CloudSim solution. Additionally, for small groups of cloudlets, the
makespan was reduced by about 51% relative to the original makespan obtained with
the simple brokering method. The optimization tends to decrease as the search space
expands, but PSO was still able to minimize the makespan by up to 11% for
a workload consisting of 200 cloudlets.
Chapter 4
Swarm Optimized WSN Localization
Trilateration-based and multilateration-based localization (TBL & MBL) techniques are among the best known and most used methods for localization. In this
chapter, the various performance aspects of the TBL algorithm are examined through
the application of single and multi-objective variants of PSO. Three versions of PSO
were implemented in this study¹ to allow nodes to vary the transmission power
level when broadcasting messages during localization. Trade-offs between multiple
objectives — the number of transmitted messages, number of localized nodes, power
consumption and the time needed to localize as many nodes as possible — are studied.
The key novelty of these methods is the optimization of the power consumption
of the whole network without the need to cluster or build any small sensor islands
such as in [64] or in [65]. This study takes advantage of the functionalities of today’s
WSN nodes to enhance the performance of the whole network; the ZigBee transceiver
technology in wireless nodes makes this possible by allowing the use of multiple
transmission power levels. The different variants of PSO were used to programmatically
change the transmission level after the evaluation of the designed fitness functions.
¹ This chapter is based on work published in [63].
However, for the sake of demonstrating the applicability of the methods, ranging
and location estimations were both assumed to be correctly calculated with minimal
errors, which means that this study does not discuss localization accuracy
or signal noise. Instead, the methods of this study aim to allow WSNs to reduce the
overall power consumption of the localization process without affecting the
localization time or localizability (i.e. the number of localized nodes). The metaheuristic
methods implemented here allow one to find optimal and balanced solutions
in terms of energy consumption by minimizing the number of messages sent and the
average transmission ranges used by all nodes while trying not to negatively affect
localizability.
In this chapter, the results of the three implemented versions of PSO, along
with the performance of each method in optimizing the WSN's operation, are
discussed and examined. Additionally, this chapter provides a complete parameter
study formulated to enhance the performance of the MOPSO.
4.1 Nodes Output Power Levels
As discussed in Chapter 2, wireless transceivers allow the use of multiple output
levels. In order to observe the effect of having multiple output power levels, three
different PSO versions were designed. The first two are binary PSO, single- and
multi-objective, designed to vary the output power between three discrete levels
only. The third is a continuous multi-objective version implemented to show the
extreme case of infinite output power levels. The continuous version shows the
maximum optimization that can be achieved with more power levels available, even
though this is not possible with modern transceivers, as mentioned before. The
following subsections detail the calculation of power consumption while nodes are
in transmit mode, using either discrete or continuous output power levels.
4.1.1 Discrete Levels
In this particular method transceivers were assumed to have three different power
levels, and the transmission range can then be calculated from the corresponding
output power level. The allowable ranges were calculated based on work done by
[20], as in (4.1) and (4.2), where R is the range in meters, Po is the sender transmit
power in dBm, Fm is the fade margin in dB, Pr is the receiver sensitivity in dBm,
f is the signal frequency in MHz, and n is the path-loss exponent.
R = 10^x    (4.1)

x = (Po − Fm − Pr + 30·n − 32.44 − 10·n·log10(f)) / (10·n)    (4.2)
Three power levels were chosen to represent the minimum, medium, and maximum
possible output power levels. The first power level transmits with −3 dBm (≈ 0.5 mW)
transmission power, the second (medium-range) power level with 1 dBm (≈ 1.26 mW),
and the third (maximum) power level with 5 dBm (≈ 3.16 mW). The rest of the
variables in the previous equations are taken from [20], where Fm = 8 dBm,
Pr = −98 dBm, f = 2405 MHz and n = 2.5.
The three power levels used allowed transmission to maximum ranges of 132.22,
91.47, and 63.28 meters for the maximum, medium and minimum power levels,
respectively. Transmission ranges are assumed to be perfect circles where the sender
node is the center of the circle and the calculated distance is the radius. Also,
knowing the transmission distance (R) allows the determination of which nodes
can communicate directly through a 1-hop connection.
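The three maximum ranges can be verified by evaluating (4.1) and (4.2) with the parameter values above (the class is an illustrative sketch, not the thesis' tool):

```java
public class ZigbeeRange {
    // Parameter values from [20], as quoted above.
    static final double FM = 8, PR = -98, F = 2405, N = 2.5;

    // Equations (4.1)-(4.2): maximum range in meters for a transmit power in dBm.
    public static double range(double poDbm) {
        double x = (poDbm - FM - PR + 30 * N - 32.44
                    - 10 * N * Math.log10(F)) / (10 * N);
        return Math.pow(10, x);
    }

    public static void main(String[] args) {
        // Maximum (5 dBm), medium (1 dBm), and minimum (-3 dBm) power levels.
        System.out.printf("%.2f %.2f %.2f%n",
                range(5), range(1), range(-3)); // ~132.22, ~91.47, ~63.28 m
    }
}
```

Each extra 4 dBm of transmit power stretches the reach by a constant factor of 10^(4/25) ≈ 1.45, which is why the three ranges are roughly geometrically spaced.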
4.1.2 Continuous Levels
In this method it is assumed that the maximum range a ZigBee transceiver can
reach is 132 meters and the minimum is 60 meters. As in the discrete method before,
the energy consumption is calculated based on [20], but here the transmission range
varies continuously, as in (4.3) and (4.4), rather than among the discrete power
levels. Finally, (4.4) is used to convert the power consumption Po from dBm to mW.
Po = (10·n·log10(R)) + (10·n·log10(f)) − 30·n + Fm + Pr + 32.44    (4.3)

P̂o = 10^(Po/10)    (4.4)
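Conversely, (4.3) and (4.4) can be evaluated to recover the transmit power needed for a given range; the sketch below (illustrative class, same parameter values as §4.1.1) reproduces the ≈3.14 mW per-message cost of the 132-meter maximum range used in the examples later in this chapter:

```java
public class TransmitPower {
    // Parameter values from [20], as in the discrete-levels section.
    static final double FM = 8, PR = -98, F = 2405, N = 2.5;

    // Equation (4.3): transmit power in dBm needed to reach range R meters.
    public static double powerDbm(double r) {
        return 10 * N * Math.log10(r) + 10 * N * Math.log10(F)
               - 30 * N + FM + PR + 32.44;
    }

    // Equation (4.4): convert dBm to mW.
    public static double dbmToMw(double poDbm) {
        return Math.pow(10, poDbm / 10);
    }

    public static void main(String[] args) {
        double po = powerDbm(132); // ~4.98 dBm for the 132 m maximum range
        System.out.printf("%.2f dBm, %.2f mW%n", po, dbmToMw(po)); // ~3.1 mW
    }
}
```

Note that (4.3) is simply (4.1)–(4.2) solved for Po, so the two sketches round-trip each other.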
4.2 WSN Localization Problem Formulation
A typical WSN consists of N sensor nodes scattered among a field of M × M
meters. Each node has a transmission range of R and may or may not be equipped
with various sensors such as temperature or humidity sensors, or radios such as GPS.
Each node also holds a state of being localized, i.e. aware of its own position in
the global or local positioning system, or unlocalized, i.e. not aware of its own
position in space. Each node in the WSN can eventually be localized with the help
of three already-localized neighbor nodes that it can communicate with over
1-hop connections (thoroughly discussed and proved in [66]). Two nodes are said to
have a 1-hop connection if the distance between them is less than or equal to the
transmission range, R. The localization procedure is the step that precedes actual
network transmissions which, in the long run, will help in data forwarding and routing
procedures between nodes in the network [67].
The received signal strength (RSS) is widely used in localization protocols for
ranging and position estimation; accordingly, a study by Chen and Terzis [49]
calibrated the raw RSS indicator (RSSI) values from transceivers in order to allow
better distance estimations when using the RSSI values of received messages. The
study took advantage of the multiple discrete output power levels of the transceivers
to send messages between nodes, which put the applicability of using multiple power
levels and the RSSI to the test. On the other hand, according to a study by [16],
increasing the output power does not necessarily increase the distance a message
can reach. However, a recent study conducted by [68] showed stability in the RSSI
values and reliable range measurements while using external antennas on Z1 motes.
In this study, the TBL method is used to allow the unlocalized nodes (known as
blind nodes) to localize themselves based on the difference in distance between them
and the already-localized neighbor nodes [15]. As an example of this process, Fig.
4-1 shows two steps of a simple localization process. The process starts by flooding
information from the already-localized nodes, in this case a, d and f, where all nodes
transmit using a 132-meter transmission range, which consumes 3.14 mW per message
sent, as calculated using (4.3) and (4.4). Step 1 of this localization process ends
after flooding from the three nodes, where each of them broadcasts one message that
includes its global position within the X–Y plane. The distance between all nodes
can then be calculated from the RSS of the received messages. In this process, as
a relaxation of the problem, the events of broadcasting by senders and receiving
by receivers were assumed to cost one unit of time, ignoring, for example, the
possibility of collisions or retransmissions. Instead, each unit of time represents
a step in the localization procedure, and the units of time can later be mapped to
actual localization scenarios using different protocols.
The transmit-mode power consumption of the three nodes after Step-1 is 3.14 ×
3 = 9.42 mW; the complete details are listed in Table 4.1. By the end of Step-1, the
[Diagram: six nodes a–f shown at Step-1 and Step-2; localized and non-localized nodes are marked.]
Figure 4-1: Simulating the first localization method using a single output power level. The data of the steps are listed in Tables 4.1 and 4.2.
broadcast messages are able to reach the three non-localized nodes, but only two of
them were able to localize themselves after receiving three messages. Nodes c and
e are now considered localized after estimating their positions and, like all localized
nodes, will contribute in Step-2 of the localization process to help the other blind
nodes. Meanwhile, node b will keep waiting to receive two more messages to be able
to localize itself.
In Step-2 of this localization process, nodes c and e broadcast, but none of their
messages reach node b, which terminates the localization process since no new blind
nodes were able to localize themselves. Nodes a, d and f have now finished participating
in the localization process and will not broadcast again. The process terminates
after two time steps where, in this case, the nodes consumed 9.42 mW in Step-1
and 6.28 mW in Step-2 (3.14 × 2 nodes = 6.28 mW), leading to a cumulative power
consumption of 15.7 mW.
Table 4.1: TBL process using single power level (Step 1 Data)

Nodes                     a      d      f
Messages                  1      1      1
Unit of Time              1st    1st    1st
Transmission Range (m)    132    132    132
Power Consumption (mW)    3.14   3.14   3.14

Non-Localized Nodes       b      c      e
Received Messages         1      3      3
Localized?                No     Yes    Yes
Table 4.2: TBL process using single power level (Step 2 Data)

Nodes                     c      e
Messages                  1      1
Unit of Time              2nd    2nd
Transmission Range (m)    132    132
Power Consumption (mW)    3.14   3.14

Non-Localized Nodes       b
Received Messages         1
Localized?                No
It is clear that node d is very close to both c and e, which are the only blind
nodes it can reach using the maximum transmission range, meaning that these nodes
can minimize their power consumption by using different, smaller output power
levels. The second method, shown in Fig. 4-2, allows each node to broadcast using a
different power range, letting each node vary the reach of its messages.
As listed in Tables 4.3 and 4.4, over the two steps of the localization, nodes a, d, f and
e transmit using 132, 63.2, 91 and 83.4 meter transmission ranges, respectively. Here,
the transmission ranges of each node were chosen randomly from the aforementioned
discrete output levels for the sake of demonstrating the multi-power-level method.
Later, PSO will choose the different power levels, leading to different
[Diagram: six nodes a–f shown at Step-1 and Step-2; localized and non-localized nodes are marked.]
Figure 4-2: Simulating the second localization method using multiple output power levels. The data of the steps are listed in Tables 4.3 and 4.4.
transmission ranges.
The localization process, in this case, was not able to finish after two units of time
as in the previous example. Instead, it needed one extra unit of time to allow
the node newly localized in Step-2 (c) to broadcast. Assuming that node c broadcasts
using the maximum power range, the overall power consumption would be: Step-1
power consumption (3.14 + 0.5 + 1.24 mW), Step-2 power consumption (1 mW), and
Step-3 power consumption (3.14 mW), which together equal 9.02 mW. Thus, the main
distinction between the outcomes of the two methods is the localization time and the
power consumption. The second method was able to reduce the power consumption
by 6.68 mW while extending the localization time by one additional unit
of time, or an extra step.
From the two examples it can be clearly seen that there is a trade-off between
consuming more power while minimizing localization time and consuming less power
Table 4.3: TBL process using multiple power levels (Step 1 Data)

Nodes                     a      d      f
Messages                  1      1      1
Unit of Time              1st    1st    1st
Transmission Range (m)    132    63.2   91
Power Consumption (mW)    3.14   0.5    1.24

Non-Localized Nodes       b      c      e
Received Messages         1      2      3
Localized?                No     No     Yes
Table 4.4: TBL process using multiple power levels (Step 2 Data)

Nodes                     e
Messages                  1
Unit of Time              2nd
Transmission Range (m)    83.4
Power Consumption (mW)    1

Non-Localized Nodes       b      c
Received Messages         1      3
Localized?                No     Yes
but maximizing the localization time. This demonstrates the need to find better
solutions by balancing the competing goals of maximizing the number of localized
nodes (i.e. increasing localizability), minimizing the overall power consumption (i.e.
minimizing the output power levels and number of broadcast messages), and
minimizing the localization time (i.e. minimizing localization steps). Consequently,
the three PSO methods were designed to solve such a problem and intelligently find
better solutions.
4.3 Proposed Approach
The method proposed in this study involves the use of single- and multi-objective
PSO to choose the appropriate discrete or continuous output power level for each
wireless sensor node. PSO was used in order to optimize various single objectives
or combinations of objectives, including localization time, messages sent during
localization, and power consumed. In order to make the methods as protocol-independent
as possible and allow them to be used with any localization protocol, the methods
only optimize the transmit mode of transceivers in order to minimize the
average output power levels used by all nodes.
In the proposed implementation the power consumption is calculated from the
modified transmission range of each node. The transmission ranges are adjusted
by the SOPSO and MOPSO algorithms in order to achieve better solutions of the
localization problem. In other words, PSO optimizes the selection of the R value,
and the remaining variables are taken from [20], as explained in the discrete output
power level section (§4.1.1). Accordingly, the remainder of this section will examine
the use of the previously discussed three discrete power ranges and the continuous
transmission ranges in the designed PSO variants.
4.3.1 PSO Formulation
In order to implement PSO, multiple objective functions as well as a problem-specific
representation are defined in the following sections. Three variants of PSO
are presented: a binary single-objective PSO (BSOPSO), a binary multi-objective
PSO (BMOPSO), and a continuous multi-objective PSO (CMOPSO).
4.3.1.1 Objective Functions
As previously mentioned, the SOPSO and MOPSO algorithms are used in order to
intelligently adjust the various discrete power ranges or the continuous transmission
ranges of each sensor node. Accordingly, a representation consisting of N dimensions
is used in order to represent each sensor that is deployed in the field. Furthermore,
the objective functions for messages sent, time required for localization, power consumption, and number of nodes localized (A.K.A. localizability) are calculated as
follows:
• Messages sent: Depending on the localization procedure and communication
mechanism between nodes, the number of messages sent back and forth between
nodes will vary. However, in this study we assume that each already localized
node will broadcast once in order to help other non-localized nodes achieve
localization. Thus, the number of messages sent depends on the number of
localized nodes.
• Time required: In the proposed method, one unit of time is equivalent to
one step in which sender nodes broadcast their locations and receivers receive
the information. The step ends by running the location estimation using TBL
method for each blind node that receives at least three messages from three
different localized nodes. The localization procedure terminates when no new
blind node is able to localize itself by the end of a step.
• Power consumption: The variance in this objective mainly comes from the
use of discrete and continuous transmission ranges, leading to various levels of
power consumption. The power consumption is measured based on the power
level or the transmission range each node uses to broadcast its message. Accordingly, the power consumption is the sum of each node’s power consumption
as chosen by PSO.
Table 4.5: Nodes output power levels assignment table (Binary PSO positions matrix)

         Min Range   Mid Range   Max Range
node1    0           1           0
node2    1           0           0
node3    0           0           1
• Localizability: Choosing which power range or transmission range each node
uses to transmit messages plays a significant role in the number of localized
nodes. Through this objective, the proposed method maximizes the number
of nodes capable of localizing using the least amount of consumed power,
i.e. the least average transmission range across all nodes.
4.3.1.2 Binary PSO Representation
In all versions of PSO, each particle represents a potential solution of the
localization problem. Similar to the representation of the method in Chapter 3, the
variant of binary PSO (the single- and multi-objective implementations) used in this
study creates a random positions matrix where each element takes the value 0 or 1.
Each row of the matrix represents a single node and consists of three columns
corresponding to the min, mid, and max power ranges. An example of the matrix is
shown in Table 4.5. In this particular example, nodes 1, 2, and 3 are assigned the
mid, min, and max power ranges, respectively. The velocity matrix of each particle
looks like the positions matrix but with continuous values ranging between 0 and 1.
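This representation can be sketched as follows, assuming, as Table 4.5 suggests, that each row is one-hot (exactly one power level selected per node); that one-hot constraint is my reading of the table rather than something the text states explicitly:

```java
import java.util.Random;

public class BinaryPositions {
    // Each row is one node; columns 0..2 are the min, mid, and max power
    // ranges, and exactly one column per row is set to 1.
    public static int[][] randomPositions(int nodes, Random rng) {
        int[][] pos = new int[nodes][3];
        for (int i = 0; i < nodes; i++) {
            pos[i][rng.nextInt(3)] = 1; // pick one power level per node
        }
        return pos;
    }

    public static void main(String[] args) {
        int[][] p = randomPositions(240, new Random(1));
        for (int[] row : p) {
            if (row[0] + row[1] + row[2] != 1)
                throw new AssertionError("row is not one-hot");
        }
        System.out.println("240 nodes, one power level each");
    }
}
```

One particle is one such matrix, so a swarm holds as many candidate power-level assignments for the whole network as it has particles.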
4.3.1.3 Continuous PSO Representation
The variant of continuous MOPSO used in this study creates a random positions
matrix where each element takes a value between 60 and 132 meters,
Table 4.6: Nodes output transmission range assignment table (CMOPSO positions matrix)

         Transmission Range
node1    83.4
node2    63.2
node3    91.0
instead of a binary value; each row of the matrix represents a single node's
transmission range. An example of the matrix is shown in Table 4.6, where
nodes 1, 2, and 3 are assigned 83.4, 63.2 and 91.0 meters as their transmission
ranges. The velocity matrix of each particle likewise looks like the positions
matrix, but with values between δ, calculated by (2.2), and the chosen minimum
transmission range value, which is 60.
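A corresponding sketch of the continuous initialization (illustrative code, not the thesis' implementation; the velocity bounds involving δ from (2.2) are omitted):

```java
import java.util.Random;

public class ContinuousPositions {
    static final double MIN_RANGE = 60.0, MAX_RANGE = 132.0;

    // One position entry per node: a transmission range drawn uniformly
    // from [60, 132] meters, as in Table 4.6.
    public static double[] randomPositions(int nodes, Random rng) {
        double[] pos = new double[nodes];
        for (int i = 0; i < nodes; i++) {
            pos[i] = MIN_RANGE + rng.nextDouble() * (MAX_RANGE - MIN_RANGE);
        }
        return pos;
    }

    public static void main(String[] args) {
        for (double r : randomPositions(240, new Random(7))) {
            if (r < MIN_RANGE || r > MAX_RANGE) throw new AssertionError();
        }
        System.out.println("all ranges within [60, 132] m");
    }
}
```

Unlike the binary variant, each dimension here is the range itself, so the swarm explores the "infinite power levels" extreme described in §4.1.2.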
4.3.1.4 MOPSO
As discussed previously, the problem presented in this study is a MOP consisting
of the number of transmitted messages, the overall energy consumption, the
localization time, and localizability. To accommodate these objectives, a PSO method
capable of handling MOPs was implemented in order to find the trade-offs between
the conflicting objectives and obtain a set of optimal solutions, instead of a
single-objective solution whose main target is to maximize the number of localized
nodes without over-consuming power or taking more time to localize the nodes.
Algorithm 3 shows the pseudo-code of the two implemented MOPSO versions.
The algorithm starts by initializing the swarm, in Line 2, where the positions and
velocities matrices are initialized and the fitness values are calculated for all
particles. In Line 3, the leaders archive is initialized from the swarm particles, but a
series of checks must take place before adding a solution to the archive. A modified
version of the dominance and equality comparators implemented in SMPSO [39] and
OMOPSO [40] in JMetal [69] (an open-source Java-based metaheuristic library) is
used to make sure that no dominated solution is added to the archive. To calculate
the crowding distances of leaders (Line 4), a non-dominated sorting algorithm is
implemented based on the NSGA-II proposed by [70] to decide which particles
must remain in the archive. JMetal also has a complete implementation of this
algorithm, which was easily integrated with the MOPSO method here.
1:  procedure MOPSO(nodesList)
2:      initialize swarm;
3:      initialize leaders archive;
4:      measure crowding distances;
5:      for i ← 0 → numberIterations do
6:          for j ← 0 → numberParticles do
7:              procedure calculateNewVelocities
8:                  choose random leader as global best;
9:                  update velocities;
10:             end procedure
11:             calculate new positions;
12:             run MOPSO mutation;
13:             evaluate the solution;
14:             update particle memory;
15:             update leaders archive;
16:             measure crowding distances;
17:         end for
18:     end for
19: end procedure

Algorithm 3: MOPSO Algorithm
The swarm travels through the problem space to find optimal solutions between
Line 5 and Line 18. In Line 8, two leaders are chosen from the archive randomly and
the crowding distance comparator compares them; the one with the higher crowding
distance is chosen to assure diversity. Lines 9 and 11 update the velocities
and positions of each particle using (2.1) and (2.4), respectively.
In Line 12, a boundary mutation is implemented to mutate a portion of the swarm
population. The flooding procedure and the localization process starts in Line 13 to
measure the fitness of each particle's solution. In Line 14, the memory of the particle
is updated to set the best fitness of the particle to the current fitness if the
comparators consider it a better position. Line 15 adds new particles to the leaders
archive in case quality solutions are found. Finally, Line 16 does the same
job as Line 4. The initialization values of all parameters are described in the
Parameter Study section.
4.4 Simulations and Results
This section details the implementation, simulation, and results of the proposed
methodology.
4.4.1 Implementation
Java SE 7 was used to develop the tool used in this study. To allow examination
of the same localization scenario using different power levels or methods, the
network topology was saved by the tool in a file (topology file). The file contains
the X and Y coordinates of each node in addition to the type of the node, i.e.
whether it is a normal or an anchor node. For the purposes of this study, the term
“trial” refers to a single application of single- or multi-objective PSO resulting in
an optimal solution.
Fig. 4-3 shows the flow chart of the simulation procedure. Note that in Step-2,
the implemented Java code reads the positions of each node from a saved topology
file, which stores each node's type (anchor or blind node) along with the X and Y
coordinates of the anchor nodes. For this study, a random WSN topology file
containing 240 nodes, 40 of which are anchor nodes, scattered over a field of
1000×1000 meters, was used. In Step-3, one of the three proposed PSO versions is used. Step-4
and Step-5 are part of the fitness function, where each particle's solution is examined
by flooding the network and using the TBL localization method.
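Step-2 can be sketched as follows. The thesis does not specify the topology file's exact format, so a simple one-node-per-line CSV layout is assumed here; the class, enum, and field names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

public class TopologyReader {
    public enum NodeType { ANCHOR, BLIND }

    // One WSN node with its type and X/Y coordinates, as stored in the file.
    public static class Node {
        public final NodeType type;
        public final double x, y;
        public Node(NodeType type, double x, double y) {
            this.type = type; this.x = x; this.y = y;
        }
    }

    // Parses one node per line, e.g. "ANCHOR,120.5,830.0" (assumed layout).
    public static List<Node> parse(List<String> lines) {
        List<Node> nodes = new ArrayList<>();
        for (String line : lines) {
            String[] f = line.trim().split(",");
            nodes.add(new Node(NodeType.valueOf(f[0]),
                    Double.parseDouble(f[1]), Double.parseDouble(f[2])));
        }
        return nodes;
    }
}
```

Persisting the topology this way is what lets every PSO variant and power level be evaluated against the identical 240-node scenario.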
Figure 4-3: Simulation Flow Chart (Step-1: Start; Step-2: Read Nodes Information; Step-3: Run PSO; Step-4: Start Flooding; Step-5: Localize Nodes; Step-6: Simulation Results)
4.4.2 Baseline Results
Initially, the WSN topology was examined three times using statically chosen
power ranges. The networks created using these initial configurations are shown in
Fig. 4-4, where a line between two nodes represents a 1-hop connection based on
the chosen communication range.
As listed in Table 4.7, the first run used only the minimum power range for all
240 nodes (Fig. 4-4-a), which allowed each node to transmit up to
63.28 meters. The localization procedure consumed 20.55 mW, the cost of
transmitting 41 messages from the 40 anchors and the single node they localized,
over 480 units of time. The second run used only medium-range transmission
(Fig. 4-4-b), which allowed each node to transmit up to 91.47 meters.
After flooding, 96 nodes were localized with the help of the 40 anchor nodes and the
previously localized nodes, over 1,200 units of time and through 136 messages,
consuming 171.21 mW. In the last run (Fig. 4-4-c), all 200 nodes were localized,
with every node using the maximum power range, over 960 units of time. The localization procedure
Figure 4-4: Simulation panels showing three different simulation scenarios
using the (a) minimum, (b) medium, and (c) maximum output
power levels. Lines between nodes represent 1-hop connections
between nodes.
consumed 758.95 mW of power when using the maximum power range, where each
node was able to transmit up to 132.22 meters.
These baseline scenarios clearly show that using only one kind of power
range either greatly reduces the number of localized nodes or consumes excess
energy to localize more of them. The solution found using the maximum allowed
transmission range is the best in terms of localization time and
number of localized nodes. Yet, for power consumption, the solution using the maximum
Table 4.7: Baseline results using multiple discrete output power levels.

                             Run1       Run2      Run3
Power Ranges                 Minimum    Medium    Maximum
Transmission Range (m)       63.28      91.47     132.22
Time (units)                 480        1,200     960
Energy Consumption (mW)      20.55      171.21    758.95
Localized Nodes              41         136       240
transmission range is very poor. Note that in this baseline scenario
and in the non-continuous versions of PSO the maximum allowed transmission range
is 132.22 meters, while in the continuous version the maximum is 132 meters.
4.4.3 BSOPSO Results
Driven by the baseline solutions, BSOPSO was used to explore potential methods of improving the solutions to the presented problem. Accordingly, BSOPSO was
applied using each of the objective functions previously discussed. The values of the BSOPSO
parameters are listed in Table 4.8; the same version of the RIW method as
in Chapter 3 was implemented, using a version of the SA method, to keep PSO
from getting stuck in local optima and to improve its exploration ability.
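As a hedged illustration of the randomized inertia weight idea: the exact RIW variant of [44] is not restated in this chapter, so the sketch below uses the widely cited form ω = 0.5 + U(0,1)/2, which keeps ω in [0.5, 1.0) and injects randomness so the swarm is less likely to stall in a local optimum.

```java
import java.util.Random;

public class RandomInertiaWeight {
    // Draws a fresh inertia weight in [0.5, 1.0) for each velocity update,
    // a common RIW form (assumed here; [44] may differ in detail).
    public static double next(Random rng) {
        return 0.5 + rng.nextDouble() / 2.0;
    }
}
```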
Fig. 4-5 shows 50 trials of simulating the designed WSN topology, where the
objective function was implemented to maximize localizability only. In this scenario,
the number of non-localized nodes ranged from 0 to 10, meaning the
worst solution PSO found was 5% worse in terms of the number of localized
nodes. The power consumption was also reduced alongside the increase in
the number of localized nodes. Yet, since the main focus of these trials was to maximize
the number of localized nodes, the total time required for localization was worse, by
50% to 225%.
Fig. 4-6 suggests that a tremendous decrease in the number of lower power ranges
Table 4.8: BSOPSO parameter values.

Parameter             Value
# Particles           100
# Iterations          200
Min Tran Range        64
Max Tran Range        132
C1 and C2             1.49445
Inertia Weight (ω)    RIW proposed by [44]
Figure 4-5: Results from 50 Trials of BSOPSO with an objective of maximizing localizability.
used will aid in decreasing the localization time, representing a clear trade-off between
these two objectives. The number of messages equals the number of nodes
transmitting at one of the three power levels, since each localized node transmits
only once. An obvious conclusion is that whenever larger power ranges are
used, the overall power consumption increases. Additionally, increasing the power
ranges may increase the number of localized nodes, and increasing localizability
in turn increases power consumption, because the newly localized nodes
also broadcast as participants in the localization protocol.
Figure 4-6: Number of transmitted messages when using multiple discrete
output power levels over 50 Trials of BSOPSO with an objective
of maximizing Localizability.
4.4.4 Simulations of MOP

4.4.4.1 BMOPSO Results
BMOPSO was used to overcome the low quality of the solutions found by BSOPSO
when minimizing the time needed for localization as well as the power consumption while
maximizing the number of localized nodes. The values of the BMOPSO parameters
are listed in Table 4.9; a binary mutation method was implemented, and a FIW
was chosen based on the experiments discussed in the parameter study, Section 4.4.5.
The method was able to find a balance among all competing objectives and produced
solutions that outperform the baseline and BSOPSO methods, as detailed in Fig.
4-7, which shows the results of 50 trials. In some cases it outperformed the two
previous methods on all levels, i.e., localizing all nodes in the shortest possible time
with power consumption lower than any solution previously found.
During the 50 trials, 115 different, yet optimal, solutions were found. Of these,
28 outperformed the baseline in terms of power consumption while maintaining the
same time and number of localized nodes. The total power consumption ranged from
Table 4.9: BMOPSO parameter values.

Parameter             Value
# Particles           100
# Iterations          200
Min Tran Range        64
Max Tran Range        132
Mutation Percentage   15%
Mutation Value        Min Tran Range
C1 and C2             1.49445
Inertia Weight (ω)    0.1
Figure 4-7: Power consumption, localization time, and number of localized
nodes of a solution set containing 115 solutions while using the
BMOPSO method over 50 trials.
4% to 21% lower than the baseline measurements. In the best case, the BMOPSO
method improved the power consumption by 29%, but was only capable of localizing
145 nodes — a clear trade-off.
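The dominance comparison underlying the BMOPSO comparators can be sketched as follows. This is an illustrative Pareto-dominance test, not the thesis code: a solution dominates another when it is no worse in every objective and strictly better in at least one, with time and power minimized and the number of localized nodes maximized. The `Solution` class is an assumed name.

```java
public class Dominance {
    public static class Solution {
        public final double time, power; // minimized objectives
        public final int localized;      // maximized objective
        public Solution(double time, double power, int localized) {
            this.time = time; this.power = power; this.localized = localized;
        }
    }

    // True iff a Pareto-dominates b: no worse everywhere, better somewhere.
    public static boolean dominates(Solution a, Solution b) {
        boolean noWorse = a.time <= b.time && a.power <= b.power
                && a.localized >= b.localized;
        boolean strictlyBetter = a.time < b.time || a.power < b.power
                || a.localized > b.localized;
        return noWorse && strictlyBetter;
    }
}
```

Mutually non-dominating solutions, such as a cheaper solution that localizes fewer nodes, are exactly the trade-offs that populate the leaders archive.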
4.4.4.2 CMOPSO Results
The main point of implementing this method was to enhance the quality of the
solutions found by BMOPSO by using a continuous transmission range instead of
three discrete levels. The values of the parameters used in the method are listed in Table
Table 4.10: CMOPSO parameter values.

Parameter             Value
# Particles           50
# Iterations          50
Min Tran Range        64
Max Tran Range        132
Mutation Percentage   20%
Mutation Value        Min Tran Range
C1 and C2             1.49445
Inertia Weight (ω)    0.1
Table 4.11: Averages and standard deviations of the CMOPSO.

              Time       Power Consumption    Localized Nodes
AVG           905.26     541.70               234.65
STDEV         100.70     32.03                10.52
AVG+STDEV     1005.96    573.73               245.17
AVG-STDEV     804.56     509.67               224.13
4.10; the values were chosen after the parameter study explained in §4.4.5. The method
was able to find a balance among all competing objectives and produced solutions that
outperform the other implementations, as detailed in Table 4.11 and in Fig.
4-8, which shows the results of 50 trials of CMOPSO.
During the 50 trials, 57 different, yet optimal, solutions were found. Of these,
53% outperformed the baseline in terms of power consumption while maintaining
the same time and number of localized nodes (i.e., localizing all nodes in 960 units
of time). The total power consumption ranged from 24% to 32% lower than the
baseline solutions. Fig. 4-9 shows how CMOPSO outperforms the BMOPSO method:
the average localization time was decreased by up to 15.94% and the
average power consumption by up to 12.29%, which means the average
transmission range used by all nodes decreased. Finally, the average number of localized nodes
Figure 4-8: Power consumption, average transmission ranges used by all
nodes, localization time, and number of nodes localized of a
solution set containing 57 solutions while using the CMOPSO
method over 50 trials.
was worse by up to 0.72%.
An important observation is that BMOPSO tends to find more diverse solutions,
some of which are of low quality, while the continuous version was able to find a set
of higher-quality, more balanced results, at the cost of a slightly lower average number
of localized nodes. The CMOPSO method optimized the three other objectives while
making only one objective worse, by as little as 0.72%, which is reasonable in MOPs.
The best solutions found by the two methods were those that localized all the nodes
in less time and with less power consumption. The best binary-method solution consumed
600.97 mW to localize all the nodes in the shortest time, while the continuous method
reduced that by almost 14%: its best solution localized all the nodes in the same
time while consuming only 517.73 mW. This clearly shows the advantage of using
continuous transmission ranges instead of discrete ones.
After finishing all the simulations and the parameter study and collecting all the results, the performance of the method was tested by creating and evaluating a new
network topology. The results showed performance consistent with the previous results.
Figure 4-9: Results of CMOPSO vs. BMOPSO
Table 4.12: Averages and standard deviations of the CMOPSO for the second network topology.

              Time       Power Consumption    Localized Nodes
AVG           818.57     500.56               219.24
STDEV         117.75     36.23                10.97
AVG+STDEV     936.32     536.79               230.21
AVG-STDEV     700.82     464.32               208.27
The best solutions were between 34% and 41% of the whole solution pool.
The best solution found was around 30% better than the baseline solution in terms
of power consumption while maintaining the same number of localized nodes and
units of time. Table 4.12 shows the average results of running the simulation twice,
where each time the method was executed 50 times. Note that the maximum
number of localizable nodes found by the baseline method is 228 nodes, in addition
to the 40 anchor nodes, when using the maximum transmission power. The power
consumption and localization time of the baseline method were 721 mW and 960
units of time, respectively.
4.4.5 Parameter Study
Parameter studies, in general, are concerned with tuning the values of the elements
involved in finding a solution. PSO is no exception: as mentioned
previously, PSO has many constant parameters that affect its global and local
navigation abilities, which in turn affect the speed and direction of each particle. This
study is based on the implemented CMOPSO method.
To achieve faster convergence and performance stability, a series of experiments
was run while varying the values of six key parameters. In each experiment, the
value of only one parameter was changed while the others were held fixed.
Each time a value changed, the algorithm was executed 50 times and the average
results were calculated. The values from each experiment were then collected to
measure the behavior of the algorithm, with solution quality generally measured by
comparing the average values of the four main objectives across all experiments. The
results show a clear trade-off between the competing objectives: improving one
objective may make another worse, so the value of each parameter was carefully
chosen based on the quality and stability of the solutions found by PSO. Below is a
list of the parameters with the results of each experiment:
1. Number of particles and PSO iterations: As shown in Fig. 4-10, seven different combinations of the number of particles and iterations were used, with
values varied from 5 to 200. Normally, more improvement would be expected
when increasing these two parameters and worse solutions when decreasing them.
After comparing the average values of the solutions, it was found that using a
swarm larger than 100 particles and running PSO for more than 100 iterations
does not improve the quality of the solutions, while using smaller swarms and
fewer iterations negatively affected
Figure 4-10: Experiments Varying the Number of Particles and Iterations (run samples in the form (#iterations, #particles))
the quality of the solutions.
The two parameters were set to 50 iterations and 50 particles because this
combination achieved better stability than the others, with a standard deviation
measured as up to 60% better than, for example, the (5,5) experiment. Additionally,
the number of localized nodes was given priority when measuring the quality of a
solution, so the combination with the greatest average number of localized nodes
was chosen, provided the other objectives were not degraded. For example, (50,50)
was chosen over (20,20) because (50,50) localized more nodes in the same units of
time, with only slightly more power, a reasonable result.
2. Minimum and maximum transmission ranges: Transmission ranges are
the most important parameter in this study, as the focus was to minimize the
average transmission range used by all nodes while trying not to affect the
quality of the results. ZigBee was designed to transmit at no less than
-3 dBm, which corresponds to a shortest transmission range of 62.68 meters;
thus, 64 meters was chosen as the lowest transmission range. For the maximum
transmission range, it was found that using greater transmission ranges was
Figure 4-11: Experiments Varying The Maximum Transmission Ranges
able to maximize the number of localized nodes; when all nodes are localizable,
the algorithm localizes all of them a majority of the time. On the
other hand, using smaller transmission ranges consumes less power but gives
poor results; in some cases the algorithm was not able to localize any nodes, as
shown in Fig. 4-11.
Using large or small transmission ranges plays a significant role in finding
quality results. Generally, PSO picks the transmission range of each node from
a continuous interval whose upper and lower bounds are the maximum and
minimum ranges. The upper bound was varied from 64 to 132 meters while
the lower bound was kept at 64 meters. Using maximum power ranges of 119-132
meters increased the average number of localized nodes by 412% to 418%, i.e.,
from 0 to a maximum of 233.9 of the 240 nodes. The time and energy consumption
obviously increased, because more nodes were localized. Values of 132 and 125
meters were found to be the best, and 132 was chosen as the maximum
transmission range; priority was given to the number of localized nodes, which
slightly increased the power consumption but decreased the localization time
and increased the number of localized nodes.
3. Mutation operator: The boundary mutation method, as discussed before, is
used to avoid premature convergence by improving the search ability in the
problem space. This method was adjusted by changing the mutation
percentage of the swarm in addition to the mutation value. The mutation
percentage was first varied from 0 to 30% over seven experiments, as shown in
Fig. 4-12; then the mutation value was swapped between the chosen minimum
and maximum transmission ranges, as shown in Fig. 4-13.
From Fig. 4-12, it can clearly be seen that 20% and 30% are the best choices
in terms of the average number of localized nodes. The 20% mark was chosen
over 30% because the 20% mutation percentage was found to be more stable,
with a standard deviation 12% to 22% lower than that of the 30% mutation for
energy consumption and time, respectively. The 20% mutation also gave slightly
better solutions (by around 0.10%), although for the number of localized nodes
the standard deviation of the 30% mutation, 8.82, was around 6% lower than
the 9.38 recorded for the 20% mutation.
Fig. 4-13 shows that when the minimum transmission range was used as the
mutation value, the power consumption was lower without significantly affecting
the number of localized nodes: the localization time increased, but the power
consumption was reduced by around 13%. The minimum transmission range was
chosen as the mutation value to prevent the MOPSO from using larger
transmission ranges, minimizing the power consumption as much as possible.
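A hedged sketch of the boundary mutation described above: a fixed percentage of the swarm is selected at random and pushed to the chosen boundary, here the minimum transmission range of 64 meters. Whether the thesis mutates a whole particle or one dimension per particle is not stated, so this sketch mutates one randomly chosen dimension; all names are illustrative.

```java
import java.util.Random;

public class BoundaryMutation {
    // swarm[i][d] holds the transmission range of node d in particle i.
    // Roughly `percentage` of the particles are mutated: one randomly
    // chosen dimension is reset to mutationValue (a range boundary).
    public static void mutate(double[][] swarm, double percentage,
                              double mutationValue, Random rng) {
        for (double[] particle : swarm) {
            if (rng.nextDouble() < percentage) {
                particle[rng.nextInt(particle.length)] = mutationValue;
            }
        }
    }
}
```

Choosing the lower boundary as the mutation value biases the search toward cheaper configurations, matching the goal of minimizing power consumption.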
4. PSO local weight (C1) and global weight (C2): The two parameters are
used in (2.1) when updating the velocity matrix to optimize the behavior of particles.
Figure 4-12: Experiments Varying The Mutation Percentage
Figure 4-13: Experiments Varying The Mutation Value
Based on a study by Eberhart and Shi [60] that compared inertia weights
and constriction factors in PSO, the values of the acceleration coefficients C1
and C2 in (2.1) were set to 1.49445. This value was found to have a better
influence on the performance of PSO and was thus used.
5. Inertia weight (ω): ω is one of the most important adjustable parameters
in PSO besides the acceleration coefficients and random variables. Its value can
impact the overall ability of the algorithm to find a potentially optimal
solution in less computing time. In the proposed method a FIW value of ω is
used, as stated in (2.1): a fixed constant defined before running
Figure 4-14: Experiments Varying The Inertia Weight (ω)
the algorithm. The value of ω was varied from 0.1 to 1, as shown in Fig. 4-14.
These values are widely used in the literature by many proposed inertia weight
methods [42–44, 60, 71].
After collecting the data, a value of 0.1 was chosen as the fixed value of ω
instead of the other relatively good choice, 0.5. The value of 0.1 was chosen
because it showed more stable results (up to 21.5% lower standard deviation
than the 0.5 level). Additionally, a value of 0.1 localized 0.7% more nodes than
0.5, but increased the power and time by 2.3% and 0.75%, respectively.
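Putting the chosen parameter values together, the per-dimension velocity and position update of (2.1) and (2.4) can be sketched as below, with ω = 0.1, C1 = C2 = 1.49445, and positions (per-node transmission ranges) clamped to [64, 132] meters. The method and variable names are illustrative, not the thesis code.

```java
import java.util.Random;

public class VelocityUpdate {
    // Values chosen in the parameter study (Tables 4.10 and §4.4.5).
    static final double OMEGA = 0.1, C1 = 1.49445, C2 = 1.49445;
    static final double MIN_RANGE = 64.0, MAX_RANGE = 132.0;

    // One PSO step: x is the particle's position (per-node ranges), v its
    // velocity, pbest its personal best, gbest the selected leader.
    public static void step(double[] x, double[] v, double[] pbest,
                            double[] gbest, Random rng) {
        for (int d = 0; d < x.length; d++) {
            double r1 = rng.nextDouble(), r2 = rng.nextDouble();
            v[d] = OMEGA * v[d] + C1 * r1 * (pbest[d] - x[d])
                                + C2 * r2 * (gbest[d] - x[d]);
            // Clamp the new range into the feasible interval [64, 132] m.
            x[d] = Math.max(MIN_RANGE, Math.min(MAX_RANGE, x[d] + v[d]));
        }
    }
}
```

The small fixed ω damps each particle's momentum quickly, consistent with the stability observed for 0.1 in Fig. 4-14.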
4.5 Conclusions
This chapter has presented single-objective and multi-objective PSO-based solutions for the power consumption of WSNs during the TBL procedure. The overall performance of the TBL algorithm was evaluated and improved through the simultaneous
optimization of various objective functions. The results clearly show that using
SOPSO and MOPSO to optimize the TBL algorithm in terms of power consumption is effective, providing improvements of up to 32% on the Transmit mode of the
transceivers alone. The study also showed that using a single global output power is
less stable in localizing nodes than using multiple levels, and that using the maximum
possible output level is not a cost-effective route to localization stability; PSO was
found to solve the problem without negatively affecting the TBL work, in terms of
localizability in particular. Moreover, this study can be mapped to real testbeds using
techniques such as component-based localization, node clustering, and RTS/CTS
methods, among many others, as suggested by [24, 72, 73].
Chapter 5
Conclusions and Future Work
CCom is becoming, year after year, a more promising technology for providing
services over networks in the form of infrastructure as a service (IaaS), platform
as a service (PaaS), and software as a service (SaaS). However, CCom still faces
difficulties that affect the quality of the services provided. This thesis
optimizes the cloudlet scheduling procedure, assuring shorter makespans,
which reduces costs by increasing the utilization of the reserved cloud resources.
Additionally, this thesis presents a method that optimizes the power consumption
of WSN nodes during localization procedures by taking advantage of modern
transceiver functionality (i.e., multiple output power levels).
5.1 The Methods Used and The Goals Achieved
The major contributions of this thesis were achieved by:
• Implementing a discrete SOPSO method that aims at minimizing the makespans of the
cloudlets submitted inside the cloud simulator, CloudSim.
• Implementing a SA method by which the cloudlet-PSO method was able to
optimize its performance and achieve better solutions. The SA method enhanced the global exploration abilities of particles by randomly adjusting
the inertia weight (ω) inside the velocity equation of PSO.
• Implementing discrete and continuous single- and multi-objective PSO methods to minimize the power consumption of sensor nodes without affecting the
other competing objectives, such as localization time and number of localized
nodes. The method adjusts the output level of sensor nodes dynamically, using
PSO as a global optimizer.
• Implementing dominance comparators and mutation operators as part of the
BMOPSO and CMOPSO, to ensure the diversity of particles and the balance of
solutions found by the implemented methods.
• Conducting a parameter study of all the adjustable constant parameters of the CMOPSO,
to achieve faster convergence, better quality, and more stable performance
of the method.
5.2 Future Work
CCom and WSNs are two large and important research fields; the work presented
in this thesis contributed to them in two main areas: cloudlet scheduling
and WSN localization. However, the work conducted in this thesis can be further
extended in many directions. Following are some suggested paths along
which the cloud method can be extended:
• The cloudlet scheduling method showed a great opportunity for optimizing single-user makespans. However, a more complex and realistic virtualized cloud environment could be created to test the method with multiple users and real cloudlets.
The experimental testbed could be created using a CCom platform such as OpenStack.
• Using a multi-objective method to optimize makespans while taking into consideration factors like cost, bandwidth, and the locations of service providers. This
could be achieved using carefully designed workloads inside a simulator or on a real
testbed.
In terms of WSN localization, the optimization of the localization method
was also very successful in reducing power consumption while achieving balanced
solutions. However, the following could be done to extend the functionality
of the implemented methods and verify the results achieved:
• The work can be further extended by implementing different localization protocols using, for example, Contiki-OS, either in a virtualized environment using
Cooja or on real sensor nodes such as the Z1 mote. The results of such more
realistic simulations and implementations would allow verification of the performance
and quality of the solutions achieved by this thesis.
• Different CI methods can be implemented, and their performance can then
be tested and compared with the performance and solution quality of the
PSO-based variants.
• The power consumption can be further optimized by allowing the localization
protocol to choose which nodes play the role of anchor nodes. This
might be achieved by analyzing factors such as node position, number of
neighbor nodes, and battery level.
References
[1] L. Siegele, "Let It Rise: A Special Report on Corporate IT," 2008.
[2] K. Rangan, A. Cooke, and M. Dhruv, “The Cloud Wars: $100+ billion at stake,”
Merrill Lynch, Tech. Rep., 2008.
[3] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski,
G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud
computing,” Commun. ACM, vol. 53, no. 4, pp. 50–58, Apr. 2010. [Online].
Available: http://doi.acm.org/10.1145/1721654.1721672
[4] P. Mell and T. Grance, "The NIST Definition of Cloud Computing: Recommendations of the National Institute of Standards and Technology," NIST Special
Publication 800-145, pp. 1–3, 2011. [Online]. Available: http://goo.gl/C4OT1
[5] O. Morariu, C. Morariu, and T. Borangiu, “A genetic algorithm for workload
scheduling in cloud based e-learning,” Proceedings of the 2nd International
Workshop on Cloud Computing Platforms - CloudCP ’12, pp. 1–6, 2012.
[Online]. Available: http://goo.gl/7E4SI
[6] A. Quiroz, H. Kim, M. Parashar, N. Gnanasambandam, and N. Sharma,
“Towards autonomic workload provisioning for enterprise Grids and clouds,”
2009 10th IEEE/ACM International Conference on Grid Computing, pp. 50–57,
Oct. 2009. [Online]. Available: http://goo.gl/1uzwC
[7] R. N. Calheiros, R. Ranjan, A. Beloglazov, and C. A. F. De Rose, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and
evaluation of resource provisioning algorithms," Softw. Pract. Exper., Wiley Online Library, vol. 41, pp. 23–50, 2010. [Online].
Available: http://goo.gl/Y886a
[8] N. H. Centers, “Getting Familiar with Cloud Terminology Cloud Dictionary,”
Tech. Rep. 1, 2013. [Online]. Available: http://goo.gl/ODFvc
[9] S. J. Chapin, W. Cirne, D. G. Feitelson, J. P. Jones, S. T. Leutenegger,
U. Schwiegelshohn, W. Smith, and D. Talby, "Benchmarks and standards
for the evaluation of parallel job schedulers," in Proceedings of the Job
Scheduling Strategies for Parallel Processing, ser. IPPS/SPDP '99/JSSPP '99.
London, UK: Springer-Verlag, 1999, pp. 67–90. [Online]. Available:
http://goo.gl/CtnDb
[10] Y. Cao, C. Ro, and J. Yin, “Comparison of Job Scheduling Policies in Cloud
Computing,” Future Information Communication Technology and Applications,
vol. 235, pp. 81–87, 2013. [Online]. Available: http://goo.gl/Ed6CR
[11] D. Williams, H. Jamjoom, Y.-H. Liu, and H. Weatherspoon, “Overdriver:
Handling memory overload in an oversubscribed cloud,” SIGPLAN Not., vol. 46,
no. 7, pp. 205–216, Mar. 2011. [Online]. Available: http://goo.gl/vGvvY
[12] S. A. Baset, L. Wang, and C. Tang, “Towards an understanding of
oversubscription in cloud," in Proceedings of the 2nd USENIX Conference on
Hot Topics in Management of Internet, Cloud, and Enterprise Networks and
Services, ser. Hot-ICE’12. Berkeley, CA, USA: USENIX Association, 2012, pp.
7–7. [Online]. Available: http://goo.gl/CDILG
[13] C. Sivaranjani, A. Surendar, and T. C. Sakthevel, "Energy efficient deployment
of mobile node in wireless sensor networks," International Journal of
Communication and Computer Technologies, vol. 01, no. 20, pp. 75–78, 2013.
[Online]. Available: http://goo.gl/7D1RW
[14] I. Amundson and X. D. Koutsoukos, "A survey on localization for mobile
wireless sensor networks," in Proceedings of the 2nd International Conference
on Mobile Entity Localization and Tracking in GPS-less Environments, ser.
MELT'09. Berlin, Heidelberg: Springer-Verlag, 2009, pp. 235–254. [Online].
Available: http://goo.gl/Cx9F4
[15] Francisco Sant, "Localization in wireless sensor networks," ACM
Journal, vol. V, no. November, pp. 1–19, 2008. [Online]. Available:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4379664
[16] M. Mallinson, P. Drane, and S. Hussain, “Discrete Radio Power Level
Consumption Model in Wireless Sensor Networks,” 2007 IEEE Internatonal
Conference on Mobile Adhoc and Sensor Systems, pp. 1–6, Oct. 2007. [Online].
Available: http://goo.gl/YU8yf
[17] J. Robles, S. Tromer, M. Quiroga, and R. Lehnert, “A low-power scheme for localization in wireless sensor networks,” International Federation for Information
Processing, pp. 259–262, 2010. [Online]. Available: http://goo.gl/H7FRd
[18] M. Bhuiyan, G. Wang, J. Cao, and J. Wu, “Energy and Bandwidth-Efficient
Wireless Sensor Networks for Monitoring High-Frequency Events,” Proc. of
IEEE SECON, 2013. [Online]. Available: http://goo.gl/i8Rws
[19] W. C. Everywhere, “Wirelessly Connecting Everywhere,” Wireless Connectivity,
no. 2Q, pp. 1–72, 2013. [Online]. Available: http://goo.gl/tmD3o
[20] S. Farahani, ZigBee wireless networks and transceivers. Elsevier, 2011. [Online].
Available: http://goo.gl/3pWmQ
[21] Texas Instruments Inc., "CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF Transceiver,"
Texas Instruments Inc., Dallas, Texas, USA, Tech. Rep., 2014.
[Online]. Available: http://www.ti.com/lit/ds/symlink/cc2420.pdf
[22] Atmel Corporation, "AVR2001: AT86RF230 Software Programmer's
Guide," Tech. Rep., 2007. [Online]. Available:
http://www.atmel.com/Images/doc8087.pdf
[23] M.-H. Meng, “Power Adaptive Localization Algorithm for Wireless Sensor
Networks Using Particle Filter,” IEEE Transactions on Vehicular Technology,
vol. 58, no. 5, pp. 2498–2508, 2009. [Online]. Available: http://goo.gl/khuNA
[24] K.-B. Chang, Y.-B. Kong, and G.-T. Park, "Clustering algorithm in wireless
sensor networks using transmit power control and soft computing," in Intelligent
Control and Automation, ser. Lecture Notes in Control and Information
Sciences, D.-S. Huang, K. Li, and G. Irwin, Eds. Springer Berlin Heidelberg,
2006, vol. 344, pp. 171–175. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-37256-1_23
[25] J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceedings of
ICNN’95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948,
1995. [Online]. Available: http://goo.gl/Srnox
[26] J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle
swarm algorithm,” IEEE International Conference on
Systems, Man, and Cybernetics, pp. 4104–4108, 1997. [Online]. Available:
http://goo.gl/rwSjO
[27] J. Moore and R. Chapman, “Application of particle swarm to multi-objective optimization,” Department of Computer Science and Software Engineering, Auburn University, pp. 1–4, 1999. [Online]. Available:
http://goo.gl/NPkun
[28] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimization method
in multiobjective problems,” Proceedings of the 2002 ACM symposium on
Applied computing - SAC ’02, vol. 1, p. 603, 2002. [Online]. Available:
http://goo.gl/XqAUZ
[29] C. A. Coello Coello and M. Reyes-Sierra, “Multi-Objective Particle Swarm
Optimizers: A Survey of the State-of-the-Art,” International Journal of
Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006. [Online].
Available: http://goo.gl/Y0G0u
[30] Y. Wang and Y. Yang, “Particle swarm optimization with preference order
ranking for multi-objective optimization,” Information Sciences, vol. 179, no. 12,
pp. 1944–1959, May 2009. [Online]. Available: http://goo.gl/nj34N
[31] H. S. Urade and R. Patel, “Dynamic Particle Swarm Optimization to Solve
Multi-objective Optimization Problem,” Procedia Technology, vol. 6, pp.
283–290, Jan. 2012. [Online]. Available: http://goo.gl/u2OOY
[32] S.-J. Tsai, T.-Y. Sun, C.-C. Liu, S.-T. Hsieh, W.-C. Wu, and S.-Y. Chiu,
“An improved multi-objective particle swarm optimizer for multi-objective
problems,” Expert Systems with Applications, vol. 37, no. 8, pp. 5872–5886,
Aug. 2010. [Online]. Available: http://goo.gl/xIIXd
[33] D. Y. Sha and H. Hung Lin, “A particle swarm optimization for multi-objective
flowshop scheduling,” The International Journal of Advanced Manufacturing
Technology, vol. 45, no. 7-8, pp. 749–758, Feb. 2009. [Online]. Available:
http://goo.gl/OTAZf
[34] B. Alatas and E. Akin, “Multi-objective rule mining using a chaotic particle
swarm optimization algorithm,” Knowledge-Based Systems, vol. 22, no. 6, pp.
455–460, Aug. 2009. [Online]. Available: http://goo.gl/WU8P0
[35] S. Pang, H. Zou, W. Yang, and Z. Wang, “An Adaptive Mutated
Multi-objective Particle Swarm Optimization with an Entropy-based Density
Assessment Scheme,” Information & Computational Science, vol. 4, pp.
1065–1074, 2013. [Online]. Available: http://goo.gl/IVsV1
[36] L. Wang, W. Ye, X. Fu, and M. Menhas, “A modified multi-objective binary
particle swarm optimization algorithm,” Advances in Swarm Intelligence, pp.
41–48, 2011. [Online]. Available: http://goo.gl/wyHyt
[37] R. Eberhart, “Multiobjective optimization using dynamic neighborhood particle
swarm optimization,” Proceedings of the 2002 Congress on Evolutionary
Computation. CEC’02 (Cat. No.02TH8600), vol. 2, pp. 1677–1681, 2002.
[Online]. Available: http://goo.gl/Iuh8L
[38] C. A. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives
with particle swarm optimization,” Trans. Evol. Comp, vol. 8, no. 3, pp.
256–279, Jun. 2004. [Online]. Available: http://goo.gl/Hhd2J
[39] A. J. Nebro, J. Durillo, J. Garcia-Nieto, C. Coello Coello, F. Luna, and
E. Alba, “SMPSO: A new PSO-based metaheuristic for multi-objective
optimization,” 2009 IEEE Symposium on Computational Intelligence in
Multi-Criteria Decision-Making, no. 2, pp. 66–73, Mar. 2009. [Online]. Available:
http://goo.gl/LbGty
[40] M. Sierra and C. Coello, “Improving PSO-Based multi-objective optimization
using crowding, mutation and ε-dominance,” Evolutionary Multi-Criterion
Optimization, 2005. [Online]. Available: http://goo.gl/cEaYBE
[41] Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,”
Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), pp. 1945–1950, 1999. [Online]. Available:
http://goo.gl/h3oH2
[42] K. Deep and Madhuri, “Application of globally adaptive inertia weight pso
to lennard-jones problem,” in Proceedings of the International Conference on
Soft Computing for Problem Solving (SocProS 2011) December 20-22, 2011,
ser. Advances in Intelligent and Soft Computing, K. Deep, A. Nagar, M. Pant,
and J. C. Bansal, Eds. Springer India, 2012, vol. 130, pp. 31–38. [Online].
Available: http://goo.gl/fviMF
[43] R. Ojha and M. Das, “An Adaptive Approach for Modifying Inertia
Weight using Particle Swarm Optimisation,” IJCSI International Journal of
Computer Science Issues, vol. 9, no. 5, pp. 105–112, 2012. [Online]. Available:
http://goo.gl/IAEnB
[44] L. Chong-min, G. Yue-lin, and D. Yu-hong, “A New Particle Swarm
Optimization Algorithm with Random Inertia Weight and Evolution Strategy,”
Journal of Communication and Computer, vol. 5, no. 11, pp. 42–48, 2008.
[Online]. Available: http://goo.gl/QPtsH
[45] S. Tayal, “Tasks scheduling optimization for the cloud computing systems,”
International Journal of Advanced Engineering Sciences and Technologies, vol. 5, no. 2, pp. 111–115, 2011. [Online]. Available:
http://goo.gl/szKF6
[46] U. University, “The HPC2N Seth log,” 2006. [Online]. Available: http://goo.gl/wrxAK
[47] E. Mokoto, “Scheduling to minimize the makespan on identical parallel machines: an LP-based algorithm,” Investigación Operativa, pp. 97–107, 1999.
[Online]. Available: http://goo.gl/qnlSi
[48] P. Pongchairerks, “Particle swarm optimization algorithm applied to scheduling
problems,” ScienceAsia, vol. 35, pp. 89–94, 2009. [Online]. Available:
http://goo.gl/Hl5OT
[49] W.-n. Chen and J. Zhang, “An Ant Colony Optimization Approach to a
Grid Workflow Scheduling Problem With Various QoS Requirements,” IEEE
Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1,
pp. 29–43, 2009. [Online]. Available: http://goo.gl/f0P8xR
[50] S. Pandey, L. Wu, S. Guru, and R. Buyya, “A Particle Swarm
Optimization (PSO)-based Heuristic for Scheduling Workflow Applications
in Cloud Computing Environments,” in Advanced Information Networking and
Applications (AINA), 2010 24th IEEE International Conference on, 2010, pp.
400 – 407. [Online]. Available: http://goo.gl/uMztxi
[51] A. Abraham, R. Buyya, and B. Nath, “Nature’s Heuristics for Scheduling Jobs
on Computational Grids,” IEEE International Conf on Advanced Computing
and Communications, pp. 45–52, 2000. [Online]. Available: http://goo.gl/n0SQu
[52] M. Rana, S. K. KS, and N. Jaisankar, “Comparison of Probabilistic Optimization
Algorithms for Resource Scheduling in Cloud Computing Environment,”
International Journal of Engineering and Technology (IJET), vol. 5, no. 2, pp.
1419–1427, Apr-May 2013. [Online]. Available: http://goo.gl/B8I1w
[53] L. Zhang, Y. Chen, and R. Sun, “A task scheduling algorithm based on pso for
grid computing,” International Journal of Computational Intelligence Research.,
vol. 4, no. 1, pp. 37–43, 2008. [Online]. Available: http://goo.gl/05eD3
[54] S. Mirzayi and V. Rafe, “A survey on heuristic task scheduling on distributed
systems,” AWERProcedia Information Technology & Computer Science, vol. 1,
pp. 1498–1501, 2013. [Online]. Available: http://goo.gl/Jak6i
[55] A. Bardsiri and S. Hashemi, “A Review of Workflow Scheduling in Cloud
Computing Environment,” International Journal of Computer Science and
Management Research, vol. 1, no. 3, pp. 348–351, 2012.
[56] H. Zhang, P. Li, Z. Zhou, and X. Yu, “A PSO-Based Hierarchical Resource
Scheduling Strategy on Cloud Computing,” Trustworthy Computing and
Services, pp. 325–332, 2013. [Online]. Available: http://goo.gl/HV8yo
[57] Y. Wang, J. Wang, C. Wang, and X. Song, “Research on resource scheduling of
cloud based on improved particle swarm optimization algorithm,” in Proceedings
of the 6th International Conference on Advances in Brain Inspired Cognitive
Systems, ser. BICS’13. Berlin, Heidelberg: Springer-Verlag, 2013, pp. 118–125.
[Online]. Available: http://dx.doi.org/10.1007/978-3-642-38786-9_14
[58] Z. Wang, K. Shuang, L. Yang, and F. Yang, “Energy-aware and revenue-enhancing Combinatorial Scheduling in Virtualized of Cloud Datacenter,”
Journal of Convergence Information Technology, vol. 7, no. 1, pp. 62–70, Jan.
2012. [Online]. Available: http://goo.gl/Lq3LV
[59] S. Zhan and H. Huo, “Improved PSO-based Task Scheduling Algorithm in Cloud
Computing,” Journal of Information & Computational Science, vol. 13, pp. 3821–
3829, 2012.
[60] R. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in
particle swarm optimization,” Proceedings of the 2000 Congress on Evolutionary
Computation. CEC00 (Cat. No.00TH8512), vol. 1, no. 7, pp. 84–88, 2000.
[Online]. Available: http://goo.gl/YJbz23
[61] Wikipedia, The Free Encyclopedia, “Instructions per second,” 2013. [Online]. Available:
http://goo.gl/YYuN
[62] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer,
I. Pratt, and A. Warfield, “Xen and the art of virtualization,” SIGOPS
Oper. Syst. Rev., vol. 37, no. 5, pp. 164–177, Oct. 2003. [Online]. Available:
http://goo.gl/Yc9p3
[63] H. S. Al-Olimat, R. C. Green, M. Alam, V. Devabhaktuni, and W. Cheng,
“Particle Swarm Optimized Power Consumption of Trilateration,” International
Journal in Foundations of Computer Science & Technology (IJFCST), 2014.
[64] H. Ali, W. Shahzad, and F. A. Khan, Wireless Sensor Networks and Energy Efficiency, N. Zaman, K. Ragab, and A. B. Abdullah, Eds. IGI Global, Jan. 2012. [Online]. Available: http://goo.gl/ERT98k
[65] W. Cheng, N. Zhang, and M. Song, “Time-bounded essential localization for
wireless sensor networks,” IEEE Transactions on Networking, pp. 1–14, 2010.
[Online]. Available: http://goo.gl/P8BnS
[66] J. Aspnes, T. Eren, D. K. Goldenberg, A. S. Morse, W. Whiteley, Y. R. Yang, B. D. O. Anderson, and P. N. Belhumeur, “A theory of network localization,” IEEE Transactions on Mobile Computing, vol. 5, no. 12, pp. 1663–1678, Dec. 2006. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1717436
[67] Z. Hu, D. Gu, Z. Song, and H. Li, “Localization in wireless sensor networks
using a mobile anchor node,” in Advanced Intelligent Mechatronics, 2008.
AIM 2008. IEEE/ASME International Conference on, July 2008, pp. 602–607.
[Online]. Available: http://goo.gl/4xN6T
[68] M.-P. Uwase, N. Long, J. Tiberghien, K. Steenhaut, and J.-M. Dricot,
“Poster abstract: Outdoors range measurements with Zolertia Z1 motes
and Contiki,” in Real-World Wireless Sensor Networks, ser. Lecture Notes in
and contiki,” in Real-World Wireless Sensor Networks, ser. Lecture Notes in
Electrical Engineering, K. Langendoen, W. Hu, F. Ferrari, M. Zimmerling, and
L. Mottola, Eds. Springer International Publishing, 2014, vol. 281, pp. 79–83.
[Online]. Available: http://goo.gl/XaVJzm
[69] J. J. Durillo and A. J. Nebro, “jMetal: A Java framework for multi-objective
optimization,” Advances in Engineering Software, vol. 42, no. 10, pp. 760–771,
Oct. 2011. [Online]. Available: http://goo.gl/tZ5Rw
[70] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary
Computation, vol. 6, no. 2, pp. 182–197, 2002.
[71] S. Sivanandam and P. Visalakshi, “Multiprocessor scheduling using hybrid
particle swarm optimization with dynamically varying inertia,” International
Journal of Computer Science & Applications, vol. 4, no. 3, pp. 95–106, 2007.
[Online]. Available: http://goo.gl/52UXL
[72] D. K. Goldenberg, P. Bihler, Y. R. Yang, M. Cao, J. Fang, A. S.
Morse, and B. D. O. Anderson, “Localization in sparse networks using
sweeps,” Proceedings of the 12th annual international conference on Mobile
computing and networking - MobiCom ’06, p. 110, 2006. [Online]. Available:
http://portal.acm.org/citation.cfm?doid=1161089.1161103
[73] X. Wang, J. Luo, Y. Liu, S. Li, and D. Dong, “Component-Based
Localization in Sparse Wireless Networks,” IEEE/ACM Transactions on
Networking, vol. 19, no. 2, pp. 540–548, Apr. 2011. [Online]. Available:
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5586657