International Journal of Artificial Intelligence and Applications for Smart Devices
Vol.3, No.2 (2015), pp.11-22
http://dx.doi.org/10.14257/ijaiasd.2015.3.2.02
Statistical Analysis of Velocity Update Rules in Particle Swarm
Optimization
Er. Harman Preet Singh1 and Er. Avneet Kaur2
1Student, Electronics Department, Guru Nanak Dev University, Amritsar, India
2Student, Computer Science Department, Guru Nanak Dev University Regional Campus, Jalandhar, India
[email protected], [email protected]
Abstract
The Particle Swarm Optimization (PSO) algorithm evolved from the behavior of the members of a swarm: it is based on their movements within the swarm, which carry all the members toward a better position. A great deal of work has been done with PSO in different fields, such as solving unimodal and multimodal problems and solving one-dimensional and multi-dimensional problems. Efficient problem-solving capabilities have been obtained through topological communication, the initial distribution of particles and parameter adjustments. The idea of PSO emerged from the swarming behaviors of bird flocks, fish schools, bee swarms and even human groups. It is an optimization tool that can be applied to a wide range of function optimization problems. PSO's advantage is its fast convergence, but this comes at a cost: the algorithm can get stuck in local optima and converge prematurely. To keep the algorithm from converging prematurely, its parameters are varied from those of the classical PSO algorithm in order to obtain better performance and results. This paper presents a statistical analysis of the various velocity update rules in the Particle Swarm Optimization algorithm.
Keywords: Particle Swarm Optimization, premature convergence, velocity update
1. Introduction
In PSO, particles fly through the problem space; at each step of the algorithm, a particle's velocity is changed (accelerated) toward its pbest and lbest positions. Sometimes the particles cannot retain the fast-converging feature of the algorithm and converge prematurely; this process is called stagnation or premature convergence. With premature convergence, useful information may not be utilized to the fullest. So, for the exchange of information between particles, better particle locations need to be found. It is necessary to improve both the precision of convergence and the speed of convergence, i.e., to maintain the fast-converging feature of PSO.
1.1. Classical PSO Algorithm
In the classical PSO algorithm, the particles fly through the search space in order to find the best position, i.e., the best solution. During each iteration, a particle updates its velocity and position using the following equations:
v(t+1) = w * v(t) + c1 * r1 * (p(t) − x(t)) + c2 * r2 * (g(t) − x(t))
x(t+1) = x(t) + v(t+1)
v(t+1) refers to the velocity at time t+1. The new velocity depends on the following three terms:
w * v(t): the factor w is a constant (for example 0.73) called the inertia weight, and v(t) is the current velocity at time t.
c1 * r1 * (p(t) − x(t)): c1 is a constant called the cognitive weight, r1 is a random variable in the range (0, 1), p(t) is the particle's best position found so far, and x(t) is the particle's current position.
c2 * r2 * (g(t) − x(t)): c2 is a constant called the global (social) weight, r2 is a random variable in the range (0, 1), and g(t) is the best known position found by any particle in the swarm so far.
Once the new velocity v(t+1) has been determined, it is used to compute the new particle position x(t+1) [16].
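As a concrete illustration, here is a minimal NumPy sketch of one classical update step for a single particle. The parameter values (w = 0.73, c1 = c2 = 1.5) and the function name are illustrative assumptions, not values prescribed by this paper.

import numpy as np

rng = np.random.default_rng(0)

def update_particle(x, v, p, g, w=0.73, c1=1.5, c2=1.5):
    # x: current position x(t); v: current velocity v(t)
    # p: particle's best position p(t); g: swarm's best position g(t)
    r1, r2 = rng.random(), rng.random()  # r1, r2 ~ U(0, 1)
    v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # velocity update
    x_new = x + v_new                                      # position update
    return x_new, v_new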
2. Literature Survey
1. Mohammad Reza Bonyadi, Zbigniew Michalewicz and Xiaodong Li, "An analysis of the velocity updating rule of the particle swarm optimization algorithm": The particle swarm optimization algorithm includes three vectors associated with each particle: the inertia, personal influence and social influence vectors. Previously, velocity updates were performed using random diagonal matrices. A new method updates the particles' velocities with randomly generated rotation matrices instead of random diagonal matrices, in order to overcome the following problems with the diagonal matrices:
i. Uncontrollable changes in the length and direction of the vectors, leading to delayed convergence.
ii. Weaker alternation of direction for vectors lying close to the coordinate axes.
iii. Limited particle movement, leading to premature convergence.
The rotation matrices used were Euclidean matrices.
2. Heng-Chou Chen and Oscal T.-C. Chen, "Particle Swarm Optimization Incorporating a Preferential Velocity-Updating Mechanism and Its Applications in IIR Filter Design": This paper presents a PSO with a preferential velocity-updating mechanism to improve evolutionary performance. The PSO is directed by imposing a preference on the different parts of the velocity-updating rule for all particles. Particles lying far away from the global best position are given more spontaneity to search, while those close to the global best position are given better exploitation capability to search toward the direction of the global best position.
3. Andries Engelbrecht, "Particle Swarm Optimization: Velocity Initialization": Velocity initialization is an important concept in PSO; if the velocities of the particles in the swarm are initialized effectively, search effort is not wasted on searching in wrong directions. During the search for the best solution, particles may leave the intended boundaries of the search space, which wastes search effort and has a negative impact on the performance of the PSO algorithm. Initialization can be done in three ways: initialize to zero, initialize to random values within the domain, and initialize to small random values.
4. Daniel N. Wilke, Schalk Kok and Albert A. Groenwold, "Comparison of linear and classical velocity update rules in particle swarm optimization": Diversity is an important aspect of the particle swarm optimization algorithm. To investigate diversity, two different formulations of PSO were implemented, called the linear PSO formulation and the classical PSO formulation. These two formulations differ only in the velocity update rule. With the linear update rule, the search trajectories in n-dimensional space collapse into line searches, while with classical velocity updates this drawback does not occur and the search trajectories are retained.
5. Sabine Helwig, Frank Neumann and Rolf Wanka, "Particle Swarm Optimization with Velocity Adaptation": To deal with boundary constraints, different bound-handling methods have been proposed, and the success of PSO algorithms depends upon the bound-handling method used. An alternative approach to coping with bounded search spaces has been proposed, in which velocity adaptation leads to better results for the PSO algorithm.
3. Statistical Analysis of Velocity Update Rules in PSO Algorithm
Researchers have been working to overcome the limitations of the PSO algorithm by introducing different variations of the classical PSO algorithm. Many researchers have worked on overcoming premature convergence [6, 7, 8]. [1] proposed the use of non-linear inertia weights and perturbation factors in the PSO algorithm, [2] proposed adaptive tuning of the parameters of the PSO algorithm, [3] proposed a hybridization of PSO with chaos and cloud models, and [4] proposed a dimension selection method that could check the influence of randomness in the PSO algorithm. A mechanism called "divergence-convergence" was devised to improve the diversity of the population and bring the algorithm out of prematurity [5]. The work done by various researchers in the field of velocity updates in the PSO algorithm is presented below.
1. Nearest Neighbour Velocity Matching and Craziness
This algorithm was satisfactorily simulated and is based on two properties: nearest-neighbor velocity matching and craziness. In the proposed algorithm, a population of particles was initialized randomly at positions on a grid, each with two velocities, namely X and Y. At each iteration, a loop over the particles determined each particle's nearest neighbor and assigned that neighbor's X and Y velocities to the particle. In some iterations the flock could settle on a unanimous, unchanging direction, so a variable called "craziness" was introduced: at each iteration, random changes were added to the chosen X and Y velocities, which introduced enough variation into the system.
Another variation proposed a mechanism in which each particle knows the best position any member of the flock has found so far. This was done by assigning the array index of the particle with the best value, gbest, to all members of the flock. Each member's vx[] and vy[] were adjusted using the following rules:
if presentx[] > pbestx[gbest] then vx[] = vx[] − rand() * g_increment
if presentx[] < pbestx[gbest] then vx[] = vx[] + rand() * g_increment
if presenty[] > pbesty[gbest] then vy[] = vy[] − rand() * g_increment
if presenty[] < pbesty[gbest] then vy[] = vy[] + rand() * g_increment
where g_increment is a system parameter.
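A minimal Python sketch of these adjustment rules follows, assuming NumPy arrays for the position and velocity lists and an illustrative g_increment = 0.1; none of these names or values come from the original simulation.

import numpy as np

rng = np.random.default_rng(1)
g_increment = 0.1  # system parameter (illustrative value)

def adjust_toward_gbest(presentx, presenty, vx, vy, pbestx, pbesty, gbest):
    # Nudge each particle's X and Y velocities toward the best position
    # found by the gbest particle, scaled by a random amount.
    for i in range(len(vx)):
        if presentx[i] > pbestx[gbest]:
            vx[i] -= rng.random() * g_increment
        elif presentx[i] < pbestx[gbest]:
            vx[i] += rng.random() * g_increment
        if presenty[i] > pbesty[gbest]:
            vy[i] -= rng.random() * g_increment
        elif presenty[i] < pbesty[gbest]:
            vy[i] += rng.random() * g_increment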
2. Acceleration by Distance
In this algorithm, velocity adjustments were based on inequality tests:
if present > bestx, make the velocity smaller;
if present < bestx, make the velocity larger.
A revision of this algorithm improved its performance: the velocities were adjusted according to the per-dimension differences from the best locations,
vx[][] = vx[][] + rand() * p_increment * (pbestx[][] − presentx[][])
where vx and presentx are now matrices of particles by dimensions.
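A short sketch of this per-dimension rule, assuming NumPy matrices of shape (particles, dimensions) and an illustrative p_increment; these names and values are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(2)
p_increment = 0.5  # illustrative value

def accelerate_by_distance(vx, pbestx, presentx):
    # Each velocity component moves toward the corresponding component
    # of the particle's best position, scaled by a fresh random number.
    return vx + rng.random(vx.shape) * p_increment * (pbestx - presentx)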
3. Current Simplified Version
The algorithm was revised further. The p_increment and g_increment factors were removed, and the stochastic factor was multiplied by 2 to give it a mean of 1, so that the particles would "overfly" the target about half the time. The velocities are now adjusted by the following formula:
vx[][] = vx[][] + 2 * rand() * (pbestx[][] − presentx[][]) + 2 * rand() * (pbestx[][gbest] − presentx[][])
Another version considered two types of particles: explorers and settlers. The explorers used inequality tests that let them overrun the target by a large distance, while the settlers would explore the regions that had been found to be good. The adjustment was done as follows:
vx[][] = 2 * rand() * (pbestx[][] − presentx[][]) + 2 * rand() * (pbestx[][gbest] − presentx[][])
This version, though simplified, was quite ineffective at finding global optima.
4. Convergence-Divergence Mechanism
The "convergence-divergence" mechanism is divided into two stages: a convergence stage and a divergence stage. In the convergence stage, classical PSO is run for several generations, which leads the population to a convergence state with low precision. In the divergence stage, the particles are forced away from their current positions and dispersed through the search space. The final stage must be a convergence stage in order to obtain an optimal solution. The two stages alternate in turn before the final convergence stage. When the population enters the divergence stage, it first jumps out of premature convergence, and then better solutions may be found as the particles search and exchange information. So, when the population enters the convergence stage again, it has a higher chance of finding better solutions.
The iteration process in the convergence stage is the same as in classical PSO, while in the divergence stage the velocity update formula is negated:
vi(t+1) = −(vi(t) + c1 r1 (pi − xi(t)) + c2 r2 (pg − xi(t)))
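A minimal sketch of the two stages, assuming that the divergence stage simply negates the classical stochastic update as reconstructed above; the parameter values and stage flag are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def cd_velocity(v, x, p, pg, c1=2.0, c2=2.0, diverge=False):
    r1, r2 = rng.random(), rng.random()
    v_new = v + c1 * r1 * (p - x) + c2 * r2 * (pg - x)
    # In the divergence stage the whole expression is negated, forcing
    # particles away from their current attractors.
    return -v_new if diverge else v_new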
4. Velocity Initialization
Velocity initialization is an important concept in PSO: if the velocities of the particles in the swarm are initialized effectively, search effort is not wasted on searching in wrong directions. During the search for the best solution, particles may leave the intended boundaries of the search space, which wastes search effort and has a negative impact on the performance of the PSO algorithm. It is therefore necessary to re-initialize a particle's velocity whenever it leaves the boundaries of the search space, to counter the unintended randomness in the swarm and the increase in the number of roaming particles. Initialization can be done in three ways: initialize to zero, initialize to random values within the domain, and initialize to small random values [9]. Different velocity initialization strategies have different impacts on the performance of the PSO algorithm. The velocity of every particle in the swarm is updated using the following equation:
vi(t+1) = w * vi(t) + c1 r1(t)(yi(t) − xi(t)) + c2 r2(t)(ŷ(t) − xi(t))
where w is the inertia weight [10], c1 and c2 are the acceleration coefficients, r1(t), r2(t) ~ U(0, 1)^nx where nx is the dimension of the search space, xi(t) is the current position of the ith particle, yi(t) is the particle's pbest position, and ŷ(t) is the gbest position.
Particle positions are updated using the following equation:
xi(t+1) = xi(t) + vi(t+1)
During every iteration of the algorithm, the velocity and position of each particle are updated according to the velocity and position update rules. When, during this process, a particle leaves the boundary positions, it needs to be re-initialized. This re-initialization of particles can be done in three ways, in order to save the search effort spent on finding solutions.
1. Initialize Velocities to Zero
Velocities are set to zero for all particles in the swarm (i = 1, 2, ..., ns), where ns is the total number of particles. This initialization limits the initial exploration ability of the swarm and the region initially covered by the particles. The momentum of every particle is initially zero, i.e., the particles do not move. A momentum value greater than zero leads to larger step sizes, which further increases the randomness of the particles. The positions of the particles themselves are distributed uniformly throughout the search space.
2. Initialize Velocities to Random Values within the Domain
Velocities are initialized within the domain set for the optimization problem, i.e., vi(0) ~ U(−xmin, xmax)^nx. Because of the larger step sizes, random initialization of the velocities helps improve the exploration ability of the swarm. These larger step sizes also increase the initial diversity of the whole swarm, but they may result in particles violating the boundaries of the search space. The initial velocities of the particles are initialized in the same domain [xmin, xmax] as the initial positions. As a result, the best positions of these roaming particles may also violate the boundary constraint.
3. Initialize Velocities to Small Random Values
In response to the problem of particles leaving the boundaries due to large step sizes, the velocities of the particles can be initialized to small values. It was supposed that this kind of initialization would not suffer from boundary violations and would still contribute to the diversity of the swarm, but it was observed that particles still leave the boundaries. Another problem is how the small values are to be chosen, which depends on the characterization of the optimization problem.
Each of these initialization strategies suffers from some problems. It was found to be a better idea to initialize the velocities to small values within the domain of the optimization problem. Random initialization means the initial velocities are sampled from a uniform distribution over the domain of the optimization problem; initialization to small random values means the initial velocities are sampled from a uniform distribution in the range [−0.1, 0.1]; and zero initialization means the initial velocities are set to zero.
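The three strategies can be sketched in a few lines of NumPy; the swarm size, dimensionality and domain bounds below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
n_particles, n_dims = 30, 10
x_min, x_max = -5.0, 5.0

v_zero   = np.zeros((n_particles, n_dims))                   # 1: zero velocities
v_domain = rng.uniform(x_min, x_max, (n_particles, n_dims))  # 2: within the domain
v_small  = rng.uniform(-0.1, 0.1, (n_particles, n_dims))     # 3: small random values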
5. Linear and Classical Velocity Update Rules in PSO
Diversity is an important aspect of the particle swarm optimization algorithm. To investigate diversity, two different formulations of PSO have been implemented, called the linear PSO formulation and the classical PSO formulation. These two formulations differ only in the velocity update rule. With the linear update rule [12], the search trajectories in n-dimensional space collapse into line searches, while with classical velocity updates [13, 14] this drawback does not occur and the search trajectories are retained. In comparisons, the classical formulation and implementation of PSO works better, as it avoids premature convergence [11]. In order to obtain meaningful results, all the heuristics of the PSO algorithm are disabled, so that the velocity updates are isolated completely from these heuristics.
1. Linear PSO Velocity Update Rule
The stochastic vector v_i^k is given by
v_i^k = c1 r_{i1}^k (p_i^k − x_i^k) + c2 r_{i2}^k (p_g^k − x_i^k)
where r_{i1}^k and r_{i2}^k are two uniform random real scalars between 0 and 1. They are updated at every iteration k, for each particle i in the swarm. Hence r_{i1}^k and r_{i2}^k define the magnitudes of the cognitive and social vectors c1(p_i^k − x_i^k) and c2(p_g^k − x_i^k).
The linear PSO is given by the following pseudocode fragment:
for I = 1 to number of particles do
  R1 = uniform random number [0,1]
  R2 = uniform random number [0,1]
  for J = 1 to number of dimensions do
    V[I][J] = W*V[I][J]
             + C1*R1*(P[I][J]-X[I][J])
             + C2*R2*(G[I][J]-X[I][J])
    X[I][J] = X[I][J]+V[I][J]
  end do
end do [11]
2. Classical Velocity Update Rule
Each component of (p_i^k − x_i^k) and (p_g^k − x_i^k) is scaled independently; in this case the directions of the social and cognitive vectors are not preserved. In order to scale each component independently, the scalar random numbers r_{i1}^k and r_{i2}^k in the stochastic vector are replaced by two random vectors r_{i1}^k and r_{i2}^k as follows:
v_i^k = c1 r_{i1}^k ∘ (p_i^k − x_i^k) + c2 r_{i2}^k ∘ (p_g^k − x_i^k)
where the operator ∘ indicates element-by-element multiplication. Each component of the two random vectors r_{im}^k, m = 1, 2, is an independent uniform random scalar between 0 and 1.
Equivalently, two random diagonal matrices [21], denoted U_{i1}^k and U_{i2}^k, are used. The stochastic vector is then given by:
v_i^k = c1 U_{i1}^k (p_i^k − x_i^k) + c2 U_{i2}^k (p_g^k − x_i^k)
where each diagonal component of U_{i1}^k and U_{i2}^k is an independent uniform random number between 0 and 1.
The stochastic contribution v_i^k is no longer a vector representation of a bounded plane P_i^k, since the non-zero components of (p_i^k − x_i^k) and (p_g^k − x_i^k) are independently scaled.
The classical PSO is given by the following pseudocode fragment:
for I = 1 to number of particles do
  for J = 1 to number of dimensions do
    R1 = uniform random number [0,1]
    R2 = uniform random number [0,1]
    V[I][J] = W*V[I][J]
             + C1*R1*(P[I][J]-X[I][J])
             + C2*R2*(G[I][J]-X[I][J])
    X[I][J] = X[I][J]+V[I][J]
  end do
end do
The only difference is that the random numbers are updated inside the for-loop that runs over the search dimensions (1, ..., n) [11].
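The contrast can be sketched as follows: the linear rule draws one (r1, r2) pair per particle, so the cognitive and social directions are preserved, while the classical rule draws a fresh pair per dimension. The parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

def linear_velocity(v, x, p, g, w=0.73, c1=1.5, c2=1.5):
    r1, r2 = rng.random(), rng.random()   # scalars: vector directions preserved
    return w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)

def classical_velocity(v, x, p, g, w=0.73, c1=1.5, c2=1.5):
    r1, r2 = rng.random(v.shape), rng.random(v.shape)  # per-component scaling
    return w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)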
6. Heuristics in PSO Algorithm
1. Non-zero Initial Velocities and Velocity Re-Initialization
Initialization with non-zero initial velocities introduces diversity into the PSO, ensuring that the stochastic contributions are translated in the search space, which can increase both magnitude and directional diversity. However, non-zero initial velocities can also delay the search. This heuristic only helps during the initial iterations, since these contributions damp out over time.
2. Maximum Velocity Restriction
Maximum velocity restriction [14] stabilizes the PSO. There are two methods for implementing the maximum velocity restriction (see the sketch after this list):
1. A restriction can be placed on each component of the vector v_i^{k+1}.
2. The length of v_i^{k+1} can be restricted.
Restricting each component eases the implementation, but the direction of the velocity vector is not preserved. This stabilizes the algorithm by limiting the maximum component value in each dimension, i.e., it limits magnitude diversity, and it introduces directional diversity because the direction of the velocity vector is not preserved.
Restricting the velocity magnitude does not alter the velocity direction between consecutive iterations, so it only stabilizes the algorithm, i.e., it only limits magnitude diversity, while directional diversity is not introduced.
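A minimal sketch of the two methods, with an illustrative v_max = 4.0:

import numpy as np

def clamp_components(v, v_max=4.0):
    # Method 1: clamp each component; the vector's direction is NOT preserved.
    return np.clip(v, -v_max, v_max)

def clamp_magnitude(v, v_max=4.0):
    # Method 2: rescale the vector's length; the direction IS preserved.
    norm = np.linalg.norm(v)
    return v if norm <= v_max else v * (v_max / norm)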
3. Minimum Velocity Restriction
Velocity can also be restricted at the lower end of the scale. This mechanism prevents the absolute convergence of the whole swarm by restricting the components or the magnitude of the velocity.
4. Craziness
Craziness can be implemented by randomly relocating "crazy" particles in the search space. It is usually done with a low probability Pcr. Craziness increases magnitude and directional diversity, and it prevents the swarm from collapsing to line searches. At high probability Pcr, craziness results in an ineffective search.
5. Increasing Social Awareness
Social awareness can be increased by increasing the number of vectors used in the construction of the stochastic contribution v_i^k. More information about the cost function is then used in each particle's position update rule, and directional diversity is increased.
6. Inertia Weight
Magnitude and directional diversity increase for higher values of w. The PSO search becomes ineffective for unstably high values, and the diversities damp out after some iterations of the algorithm.
7. Velocity Update Rules using Rotational Matrices
Previously, velocity updates were performed using random diagonal matrices. A new method updates the particles' velocities with randomly generated rotation matrices instead of random diagonal matrices, in order to overcome the following problems with the diagonal matrices:
1. Uncontrollable changes in the length and direction of the vectors, leading to delayed convergence.
2. Weaker alternation of direction for vectors lying close to the coordinate axes.
3. Limited particle movement, leading to premature convergence.
The rotation matrices used were Euclidean matrices.
7.1. Role of Random Matrices in the Velocity Updating Rule
Several researchers have made efforts to determine the role of rotation matrices in updating the velocities. Some problems caused by multiplying the personal and social influence vectors by the two random matrices were identified, and a new approach based on Euclidean rotation matrices with random directions was proposed for updating the velocities of the particles in order to address these problems. The parameters of the normal distribution used in the Euclidean rotation matrices were investigated, and experiments were done with an adaptive approach that identifies the value of the parameter of the normal distribution during an iteration. This adaptive approach is independent of the algorithm and of the number of dimensions of the problem at hand.
8. Strategies of Velocity Update
Different ways and strategies are used for updating the velocities of the particles in the swarm.
1. One method is to set a domain (Vmin, Vmax) for the velocities. If a particle's velocity becomes greater than the maximum velocity Vmax, its velocity is reset to Vmax, and if its velocity becomes less than Vmin, it is reset to Vmin. Thus the velocities are re-initialized.
2. Another method is used in the constriction coefficient PSO (CoPSO), in which a new form of the velocity updating rule is used:
v_i^{t+1} = χ (v_i^t + c1 r1^t (p_i^t − x_i^t) + c2 r2^t (g^t − x_i^t))
The parameter χ is called the constriction factor. Tuning the values of χ, c1 and c2 prevents the swarm from exploding and can lead to a better balance between exploration and exploitation within the search space. It was proved that the velocity does not grow to infinity if the parameters χ, c1 and c2 satisfy the following equation:
χ = 2 / |2 − c − √(c² − 4c)|
where c = c1 + c2 > 4.
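A small sketch of this formula; c1 = c2 = 2.05 is a commonly used setting that satisfies c = c1 + c2 > 4, assumed here for illustration.

import math

def constriction_factor(c1=2.05, c2=2.05):
    c = c1 + c2
    assert c > 4, "constriction requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - c - math.sqrt(c * c - 4.0 * c))

print(constriction_factor())  # approximately 0.7298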
3. In some situations, the global best position gets stuck and does not move; this is called stagnation. To overcome this situation, the Guaranteed Convergence PSO (GCPSO) was introduced, which uses a local search around the global best particle and sets the velocity vector accordingly.
4. Another approach updates the velocities using rotation matrices. The velocity is updated using Euclidean matrices with the formula:
v_i^{t+1} = ω v_i^t + φ1 r1^t Rotm^t(σ1) (p_i^t − x_i^t) + φ2 r2^t Rotm^t(σ2) (g^t − x_i^t)
where Rotm^t(σ1) and Rotm^t(σ2) are two rotation matrices replacing the random diagonal matrices that were used in earlier PSO variants, and r1^t and r2^t are two real random numbers uniformly distributed in [0, 1], which are used to control the magnitude of movement.
A rotation matrix is a d × d orthogonal matrix whose determinant is 1. The rotation matrix Rot_{a,b}(α) = [r_{i,j}] for the plane spanned by axes a and b is defined by:
r_{a,a} = r_{b,b} = cos(α)
r_{a,b} = −sin(α)
r_{b,a} = sin(α)
r_{j,j} = 1 for j ≠ a, j ≠ b
r_{i,j} = 0 elsewhere
The Euclidean rotation matrix preserves the lengths of the vectors during rotation and only changes their direction. The random components r1^t and r2^t are responsible for changing the lengths of the vectors [15].
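A minimal sketch of constructing Rot_{a,b}(α) from the definition above: the matrix is the identity everywhere except the plane of axes a and b, where a 2-D rotation acts, so vector lengths are preserved.

import numpy as np

def rotation_matrix(d, a, b, alpha):
    # d x d rotation by angle alpha in the plane spanned by axes a and b.
    R = np.eye(d)
    R[a, a] = R[b, b] = np.cos(alpha)
    R[a, b] = -np.sin(alpha)
    R[b, a] = np.sin(alpha)
    return R

v = np.arange(1.0, 6.0)
Rv = rotation_matrix(5, 0, 2, 0.3) @ v
assert np.isclose(np.linalg.norm(v), np.linalg.norm(Rv))  # length unchanged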
9. Focussing on Different Parts of Velocity Update Rule
The focus is placed on different parts of the velocity-updating rule for all particles. Particles far away from the global best position are given more spontaneity to search than those lying close to the global best position, and better exploitation capability is provided to particles close to the global best position. The algorithm is initiated by randomly assigning positions to the particles. Let the total number of particles in the swarm be l. The velocity is updated at each iteration according to the following rule:
v_i(t + Δt) = w v_i(t) + c1 rand() [x_i^best(t) − x_i(t)] + c2 rand() [x^best(t) − x_i(t)]
where X(t) = {x_1(t), x_2(t), ..., x_l(t)} is the swarm at instant t, and x_i(t) denotes the ith particle in the search space.
The position is updated as:
x_i(t + Δt) = x_i(t) + v_i(t + Δt) · Δt
where Δt is the duration between iterations, w represents an inertia coefficient, c1 and c2 are acceleration constants, rand() is a function generating random numbers with uniform distribution in [0, 1], x_i^best(t) represents the best-ever position of particle i (i.e., the local best) at instant t, and x^best(t) denotes the global best position in the swarm at instant t.
10. Velocity-Updating Mechanism
To improve the particles' velocities so that the particles reach optimal positions, the number of particles in a swarm can be increased, but this demands extra computation in each iteration, resulting in a longer time to obtain results. If particle x_i(t) lies close to its local best x_i^best(t), the second term in the velocity-updating rule approaches zero, and for particles x_i(t) lying close to the global best x^best(t), the third term approaches zero. So the particles close to their local best x_i^best(t) or to the global best x^best(t) can evolve more easily than those far away from their best positions. The preferential velocity-updating rule is as follows:
v_i(t + Δt) = w v_i(t) + c1 rand() [x_i^best(t) − x_i(t)] · PF_i(t) + c2 rand() [x^best(t) − x_i(t)] · [1 − PF_i(t)]
Calculating the Preference Function
Step 1: Calculate the distance of every particle i in the swarm from x^best(t):
d_i = ‖x^best(t) − x_i(t)‖₂²
Step 2: Sort {d_i, i = 1, 2, ..., l} in ascending order to obtain the index vector of the results:
k = argsort_ascending{d_1, d_2, ..., d_l}, where k = [k_1, k_2, ..., k_l]
Step 3: Calculate PF_i(t) for every particle i in the swarm:
PF_i(t) = k_i / l
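A sketch of the three steps, assuming positions are stored as a NumPy matrix of shape (l, dimensions) and that the squared Euclidean norm reconstructed in Step 1 is what was intended:

import numpy as np

def preference_function(x, x_best):
    # Step 1: squared distance of every particle from the global best.
    d = np.linalg.norm(x - x_best, axis=1) ** 2
    # Step 2: rank the distances in ascending order (nearest particle = rank 1).
    ranks = np.empty(len(d))
    ranks[np.argsort(d)] = np.arange(1, len(d) + 1)
    # Step 3: PF_i(t) = k_i / l, so distant particles get PF close to 1.
    return ranks / len(d)

Particles far from the global best thus receive a large PF_i(t), emphasizing the cognitive (spontaneity) term, while particles near the global best receive a small PF_i(t), emphasizing the social term that exploits the region around the global best.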
11. Bound-Constrained Search Spaces
Optimization problems with boundary constraints are considered, which means the search space S = [lb1, ub1] × [lb2, ub2] × ... × [lbn, ubn] consists of n real-valued parameters (x1, ..., xn), where each parameter xi, 1 ≤ i ≤ n, is bounded with respect to some interval [lbi, ubi]. Often S = [−r, r]^n, i.e., lbi = −r and ubi = r for 1 ≤ i ≤ n.
The velocity update rule is given by:
v_{i,t} = ω · v_{i,t−1} + c1 · r1 ⊙ (p_{i,t−1} − x_{i,t−1}) + c2 · r2 ⊙ (l_{i,t−1} − x_{i,t−1})
The position update rule is given by:
x_{i,t} = x_{i,t−1} + v_{i,t}
where c1, c2 and ω are user-defined parameters, ⊙ denotes component-wise vector multiplication, and r1 and r2 are vectors of random real numbers chosen uniformly at random from the range [0, 1].
12. Velocity Clamping
Velocity clamping is used to accelerate the algorithm's convergence and to avoid premature convergence. If the velocity of particle i in dimension j exceeds the maximum allowed velocity limit v_{max,j}, it is set to that maximum value. The velocity is updated as:
v_{ij}(t+1) = v'_{ij}(t+1) if v'_{ij}(t+1) < v_{max,j}, otherwise v_{ij}(t+1) = v_{max,j}
where v'_{ij}(t+1) is the velocity computed by the update rule. Larger values of v_{max,j} encourage global exploration, and smaller values encourage local exploitation.
13. Conclusion and Future Scope
It is essential to fully utilize the search capability of the Particle Swarm Optimization algorithm. In some cases, the algorithm suffers from stagnation when the particles in the swarm cannot retain the fast-converging feature of the PSO algorithm. If particles leave the boundary values of the search space, they need to be re-initialized. A great deal of work has been done with PSO in different fields, such as solving unimodal and multimodal problems and solving one-dimensional and multi-dimensional problems. Efficient problem-solving capabilities have been obtained through topological communication, the initial distribution of particles and parameter adjustments. The idea of PSO emerged from the swarming behaviors of bird flocks, fish schools, bee swarms and even human groups. PSO's advantage is its fast convergence, but the algorithm can get stuck in local optima and converge prematurely. To keep the algorithm from converging prematurely, its parameters are varied from those of the classical PSO algorithm in order to obtain better performance and results.
References
[1] J. Jiang, M. Tian and X. Wang, "Adaptive particle swarm optimization via disturbing acceleration coefficients", Journal of Xidian University, vol. 39, no. 4, (2012), pp. 74-80.
[2] G. Xu, "An adaptive parameter tuning of particle swarm optimization algorithm", Applied Mathematics and Computation, vol. 219, no. 9, (2013), pp. 4560-4569.
[3] M. Li, H. Kang and P. Zhou, "Hybrid optimization algorithm based on chaos, cloud and particle swarm optimization algorithm", Journal of Systems Engineering and Electronics, vol. 24, no. 2, (2013), pp. 324-334.
[4] X. Jin, Y. Liang and D. Tian, "Particle swarm optimization using dimension selection methods", Journal of Applied Mathematics and Computation, vol. 219, no. 10, (2013), pp. 5185-5197.
[5] J. Jiang, H. Ye, X. Lei, H. Meng and L. Wang, "Particle swarm optimization via convergence-divergence mechanism", Journal of Information & Computational Science, vol. 12, no. 4, (2015), pp. 1349-1356.
[6] M. Hu, T. Wu and J. D. Weir, "An intelligent augmentation of particle swarm optimization with multiple adaptive methods", Journal of Information Sciences, vol. 213, (2012), pp. 68-83.
[7] M. Kaucic, "A multi-start opposition-based particle swarm optimization algorithm with adaptive velocity for bound constrained global optimization", Journal of Global Optimization, vol. 55, no. 1, (2013), pp. 165-188.
[8] W. Chen, J. Zhang and Y. Lin, "Particle swarm optimization with an aging leader and challengers", IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, (2013), pp. 241-258.
[9] A. Engelbrecht, "Particle Swarm Optimization: Velocity Initialization", WCCI 2012 IEEE World Congress on Computational Intelligence, Brisbane, Australia, (2012) June, pp. 10-15.
[10] Y. Shi and R. Eberhart, "A modified particle swarm optimizer", Proceedings of the IEEE Congress on Evolutionary Computation, (1998) May, pp. 69-73.
[11] D. N. Wilke, S. Kok and A. A. Groenwold, "Comparison of linear and classical velocity update rules in particle swarm optimization: Notes on diversity", International Journal for Numerical Methods in Engineering, vol. 70, (2007), pp. 962-984.
[12] U. Paquet and A. P. Engelbrecht, "A new particle swarm optimiser for linearly constrained optimisation", IEEE Congress on Evolutionary Computation, vol. 1, (2003), pp. 227-233.
[13] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", IEEE International Conference on Neural Networks, vol. 4, (1995), pp. 1942-1948.
[14] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", Sixth International Symposium on Micro Machine and Human Science, (1995), pp. 39-43.
[15] M. Reza Bonyadi, Z. Michalewicz and X. Li, "An analysis of the velocity updating rule of the particle swarm optimization algorithm", Springer Science+Business Media New York, (2014).
[16] E. Avneet Kaur and E. Mandeep Kaur, "A review of parameters for improving the performance of particle swarm optimization", International Journal of Hybrid Information Technology, vol. 8, no. 4, (2015), pp. 7-14.
Authors
Er. Harman Preet Singh has done his B.Tech in ECE from Guru Nanak Dev University, Amritsar (India). He takes a keen interest in machines and electrical devices.
Er. Avneet Kaur has done her M.Tech in CSE from Guru Nanak Dev University, Regional Campus, Jalandhar (India). She has published eight papers in journals and conferences.