
Rev. Téc. Ing. Univ. Zulia. Vol. 39, Nº 7, 289-296, 2016. doi:10.21311/001.39.7.36

Autonomous Groups Particle Swarm Optimization Algorithm Based on Exponential Decreasing Inertia Weight
Haojun Li*
College of Education Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang,
China
*Corresponding author(E-mail: [email protected])
Zhongfeng Liu
College of Education Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang,
China
Wanliang Wang
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, China
Abstract
Aiming at the problem that particle swarm optimization easily falls into local optima, this paper proposes an autonomous groups particle swarm optimization algorithm based on an exponentially decreasing inertia weight. The exponentially decreasing inertia weight uses an exponential function to decrease the inertia weight over the iterations. The algorithm replaces the linear inertia weight of the autonomous groups particle swarm optimization algorithm with the exponentially decreasing inertia weight, in order to better balance the algorithm's global and local search and thereby improve its ability to avoid falling into local optima. Simulation experiments on six benchmark functions show that the exponentially decreasing inertia weight clearly enhances the performance of the autonomous groups particle swarm optimization algorithm: the improved algorithm is better at avoiding local optima, is more stable, and performs better on high-dimensional problems.
Key words: Particle Swarm Optimization, Autonomous Groups, Exponential Function, Inertia Weight, Global
Optimization.
1. INTRODUCTION
Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995) was proposed by Kennedy and Eberhart, inspired by the foraging behavior of bird flocks. Shi et al. proposed the inertia weight selection method that gave rise to the current standard PSO (Shi and Eberhart, 1998). Because it has few parameters and is simple to implement, the algorithm is very popular, and these advantages have made PSO widely used in many fields, such as multi-objective optimization (Moon et al., 2014; García et al., 2014; Shokrian and High, 2014). However, PSO is prone to falling into local optima in practice, and the problem becomes more pronounced as the problem dimension increases. Many studies have been devoted to solving this problem.
Because PSO has few parameters and its parameter tuning is simple, dynamically tuning the parameters has become a widely used way of improving the algorithm. The cognitive parameter and the social parameter affect the particles' ability to learn from the individual optimal solutions and the population optimal solution, and can balance the algorithm's global and local search. Owing to the complexity of optimization problems, constant or linearly tuned cognitive and social parameters are in many cases not good enough. To address this, Ziyu et al. introduced an exponential time-varying function to tune the cognitive and social parameters so that the algorithm converges to the global optimum (Ziyu and Dingxue, 2009); Bao et al. proposed an asymmetric time-varying acceleration-parameter tuning strategy to balance local and global search (Bao and Mao, 2009); Cui et al. proposed a nonlinear time-varying cognitive parameter and a time-varying social parameter, where the social parameter is a function of the cognitive parameter (Cui et al., 2008). However, these methods tune the cognitive and social parameters in the same way for every particle; because the particles then lack diversity, they cannot adequately solve the problem of easily converging to a local optimum. Mirjalili et al. proposed a mathematical model of groups, called the autonomous groups strategy, to form autonomous groups particle swarm optimization (AGPSO); AGPSO uses functions with different slopes, breakpoints and curvatures to tune the cognitive and social parameters, in order to better balance global and local search, but AGPSO's linear inertia weight limits further improvement of its performance (Mirjalili et al., 2014).
Because the inertia weight is also an important parameter affecting the performance of the algorithm, many papers focus on improving PSO through the inertia weight. A larger inertia weight makes particles more inclined to global exploration, while a smaller one makes them tend toward local exploitation. Inertia weight tuning strategies generally decrease the weight over time (Chauhan et al., 2013; Nickabadi et al., 2011; Arasomwan and Adewumi, 2013). Shi et al. proposed a strategy that reduces the inertia weight linearly, which improves the optimization effect, but the linear decreasing strategy cannot reflect the actual optimization process (Shi and Eberhart, 1999); Shi et al. also proposed using a fuzzy system to dynamically tune the inertia weight to improve the average fitness, but the extra parameters increase the complexity of the algorithm (Shi and Eberhart, 2001); Eberhart et al. proposed a random inertia weight tuning strategy, mainly applied to optimizing changing targets (Eberhart and Shi, 2001); Liu et al. and others proposed improved PSO algorithms whose global convergence abilities are improved, but whose convergence speeds are not stable (Liu et al., 2007; Tsai et al., 2013; Worasucheep, 2008; Zhou et al., 2010).
Balancing local and global search throughout the iterative process is important for finding the global optimal solution (Shi and Eberhart, 2001; Črepinšek et al., 2013). The aforementioned studies focus either on the inertia weight or on the cognitive and social parameters, but pay little attention to their joint effect on balancing global and local search. In view of this problem, this paper proposes an autonomous groups particle swarm optimization algorithm based on an exponentially decreasing inertia weight (AGWPSO). AGWPSO introduces a new inertia weight tuning method, namely using an exponential function to decrease the inertia weight, improving on the original linear inertia weight of AGPSO. The algorithm combines the exponentially decreasing inertia weight with the autonomous groups strategy to balance global and local search, in order to improve its ability to avoid falling into local optima.
2. STANDARD PSO ALGORITHM
PSO is an evolutionary computation technique inspired by the foraging behavior of bird flocks. Particles fly through the feasible space in search of the optimal solution. Each particle records its current position and its historical best position, and the particles share the best position found so far; that is, particles are influenced both by their own best solutions and by the population best solution. Formulas (1) and (2) update the velocities and positions of the particles in the iterative process.
V_{i,d}^{k+1} = w·V_{i,d}^{k} + c1·rand·(pBest_{i,d}^{k} − X_{i,d}^{k}) + c2·rand·(gBest_d − X_{i,d}^{k})   (1)

X_{i,d}^{k+1} = X_{i,d}^{k} + V_{i,d}^{k+1}   (2)
Here w is the inertia weight, which controls the stability of PSO and usually lies in the range 0.4–0.9; c1 is the cognitive parameter, which controls the particles' ability to learn from their individual optimal solutions; c2 is the social parameter, which controls the particles' ability to learn from the population optimal solution; c1 and c2 are both called acceleration coefficients and generally lie in (0,2]; V_{i,d}^{k} and X_{i,d}^{k} are the velocity and position of the d-th dimension of the i-th particle at the k-th iteration; rand is a random number drawn uniformly from [0,1], which gives PSO more random-search ability; pBest_{i,d}^{k} is the individual optimal position of the d-th dimension of the i-th particle at the k-th iteration; gBest_d is the population best position of the d-th dimension. It can be observed that PSO is governed by three parameters: w, c1 and c2. Dynamically tuning these parameters gives particles different behaviors.
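As a minimal illustration (not part of the original paper), the update rules (1) and (2) can be sketched in Python with NumPy; the function name `pso_step` and the array shapes are our own choices:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One iteration of the standard PSO update, formulas (1) and (2).

    X, V, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    """
    n, d = X.shape
    r1 = np.random.rand(n, d)  # "rand" for the cognitive term
    r2 = np.random.rand(n, d)  # "rand" for the social term
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # formula (1)
    X = X + V                                                  # formula (2)
    return X, V
```

Each particle's velocity blends its previous velocity (scaled by w) with a pull toward its own best position and a pull toward the population best.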
3. AUTONOMOUS GROUPS PARTICLE SWARM OPTIMIZATION ALGORITHM BASED ON
EXPONENTIAL DECREASING INERTIA WEIGHT
At present, many studies improve particle swarm optimization through better inertia weight tuning methods (Taherkhani and Safabakhsh, 2016; Chauhan, Deep, and Pant, 2013). This paper introduces an exponentially decreasing inertia weight to improve AGPSO by replacing its linear inertia weight, and then uses the exponentially decreasing inertia weight together with the autonomous groups strategy to balance local and global search, so that the algorithm is better able to avoid falling into local optima.
3.1. Autonomous Groups Particle Swarm Optimization Algorithm
The concept of autonomous groups is inspired by the diversity of animal populations and insect communities. In this method, each particle autonomously attempts to search the problem space with its own strategy, based on different tuning methods for c1 and c2. In AGPSO, the number of particles is first set, and the particles' positions and velocities are randomly initialized in the search space; the fitness function then determines the fitness values, the individual optimal solutions and the population optimal solution. The particles are randomly divided into four groups in advance, and particles in different groups update c1 and c2 with different methods: formulas (3), (4), (5) and (6) update c1 and c2 for group 1, group 2, group 3 and group 4, respectively. When updating their velocities and positions, particles with different c1/c2 updating strategies show different abilities to learn from the individual and population optimal solutions, which makes the particles diverse.
Because the autonomous groups use different strategies to update c1 and c2, their abilities of global exploration and local exploitation differ from those of standard PSO. Dynamic and diverse c1/c2 tuning modes give particles different behaviors and random-search abilities, which increases population diversity, balances local and global search in the iterative process, and improves the particles' ability to avoid falling into local optima (Mirjalili et al., 2014).
c1 = 3 − 2·exp[−(4·iter/Max_iter)²],  c2 = 4 − c1   (3)

c1 = 3 + 2·(iter/Max_iter)² − 2·(2·iter/Max_iter),  c2 = 4 − c1   (4)

c1 = 2.5 + (iter/Max_iter)² − 2·iter/Max_iter,  c2 = 4 − c1   (5)

c1 = 2.5 − exp[−(4·iter/Max_iter)²],  c2 = 4 − c1   (6)
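The four group strategies can be collected into one helper function, sketched below. This is an illustrative sketch (the function name is ours); formulas (4) and (5) are partly garbled in the source, so their implementation follows our reading of them and should be treated as an assumption:

```python
import math

def group_coefficients(group, it, max_iter):
    """c1/c2 tuning for the four autonomous groups, formulas (3)-(6).

    `it` is the current iteration; `max_iter` is the maximum iteration count.
    """
    t = it / max_iter
    if group == 1:
        c1 = 3.0 - 2.0 * math.exp(-(4.0 * t) ** 2)   # formula (3)
    elif group == 2:
        c1 = 3.0 + 2.0 * t ** 2 - 2.0 * (2.0 * t)    # formula (4), our reading
    elif group == 3:
        c1 = 2.5 + t ** 2 - 2.0 * t                  # formula (5), our reading
    else:
        c1 = 2.5 - math.exp(-(4.0 * t) ** 2)         # formula (6)
    c2 = 4.0 - c1                                    # shared by all groups
    return c1, c2
```

Because c2 = 4 − c1 in every group, the sum of the two acceleration coefficients is always 4, and the groups differ only in how c1 evolves over the iterations.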
3.2. Exponential Decreasing Inertia Weight
Formula (7) is the linear inertia weight tuning function used in AGPSO; formula (8) is the exponentially decreasing inertia weight tuning function used in AGWPSO.
w = wMax − iter·(wMax − wMin)/Max_iter   (7)

w = wMin·(wMax/wMin)^(1/(1 + iter/Max_iter))   (8)
wMax is the maximum inertia weight, set to 0.9; wMin is the minimum inertia weight, set to 0.4; iter is the current iteration; Max_iter is the maximum number of iterations. For formula (8), when iter = 0, w = wMax = 0.9; when iter = Max_iter, w = 0.4·(0.9/0.4)^(1/2) = 0.6, so w stays within [wMin, wMax].
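The two weight schedules, formulas (7) and (8), can be written out directly; this is a small sketch and the function names are ours:

```python
def linear_w(it, max_iter, w_max=0.9, w_min=0.4):
    """Linear inertia weight of AGPSO, formula (7)."""
    return w_max - it * (w_max - w_min) / max_iter

def exp_decreasing_w(it, max_iter, w_max=0.9, w_min=0.4):
    """Exponentially decreasing inertia weight of AGWPSO, formula (8).

    Starts at w_max (exponent 1 at it = 0) and decays toward
    w_min * (w_max / w_min) ** 0.5 = 0.6 at it = max_iter.
    """
    return w_min * (w_max / w_min) ** (1.0 / (1.0 + it / max_iter))
```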
Figure 1. Inertia weight (0.4–0.9) versus iteration (0–2000) for formula (7) and formula (8)
From Figure 1 it can be observed that, unlike the uniform change of formula (7), the changing rate of formula (8) is fast at first and slow later. Tuning the inertia weight with either function makes PSO converge, but with formula (7), once PSO falls into a local optimum it is difficult to jump out. With formula (8), the larger changing rate at the early stage gives PSO rapid global exploration to locate the region of the global optimum, and the smaller changing rate later gives PSO meticulous local exploitation at lower speed, ensuring convergence to the global optimal solution. Hence, using the exponential function to tune the inertia weight is more conducive to balancing global and local search.
The exponentially decreasing inertia weight uses the exponential function shown in formula (8) to decrease the inertia weight. This paper proposes the autonomous groups particle swarm optimization algorithm based on the exponentially decreasing inertia weight (AGWPSO), which combines the exponentially decreasing inertia weight with the autonomous groups strategy to better balance global and local search, and thereby improve PSO's ability to avoid falling into local optima.
3.3. Basic Steps of Autonomous Groups Particle Swarm Optimization Algorithm Based on Exponential
Decreasing Inertia Weight
Basic steps of AGWPSO:
Step 1: Set the number of particles, the maximum number of iterations, and the maximum and minimum values of the inertia weight.
Step 2: Randomly initialize the particles' positions in the feasible region and randomly initialize their velocities.
Step 3: Obtain the particles' fitness values through the fitness function, and record the individual optimal solutions and the population optimal solution.
Step 4: Tune the inertia weight using formula (8).
Step 5: The particles are divided into the pre-assigned groups 1 to 4; use formulas (3), (4), (5) and (6) to update the cognitive and social parameters of group 1, group 2, group 3 and group 4, respectively.
Step 6: Each group uses formula (1) to update the velocities of its particles.
Step 7: Use formula (2) to update the particles' positions.
Step 8: Check the termination condition; if it is satisfied, i.e. the maximum number of iterations is reached, go to step 9; otherwise return to step 4.
Step 9: Terminate and output the global optimal solution and the optimal position.
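Steps 1–9 can be combined into a compact sketch. This is an illustrative implementation under our own naming, group-assignment and velocity-initialization choices (and our reading of formulas (4) and (5)), not the authors' code:

```python
import numpy as np

def agwpso(f, dim, bounds, n_particles=40, max_iter=2000,
           w_max=0.9, w_min=0.4, seed=None):
    """Sketch of AGWPSO. Minimizes fitness function `f` over `bounds`."""
    low, high = bounds
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, (n_particles, dim))           # step 2: positions
    V = rng.uniform(-(high - low), high - low,
                    (n_particles, dim)) * 0.1                # step 2: velocities
    fit = np.apply_along_axis(f, 1, X)                       # step 3
    pbest, pbest_fit = X.copy(), fit.copy()
    g = int(np.argmin(fit))
    gbest, gbest_fit = X[g].copy(), float(fit[g])
    groups = np.arange(n_particles) % 4 + 1                  # pre-assigned groups

    for it in range(max_iter):
        t = it / max_iter
        w = w_min * (w_max / w_min) ** (1.0 / (1.0 + t))     # step 4: formula (8)
        # step 5: per-group c1/c2, formulas (3)-(6)
        c1 = np.where(groups == 1, 3 - 2 * np.exp(-(4 * t) ** 2),
             np.where(groups == 2, 3 + 2 * t ** 2 - 4 * t,
             np.where(groups == 3, 2.5 + t ** 2 - 2 * t,
                      2.5 - np.exp(-(4 * t) ** 2))))[:, None]
        c2 = 4.0 - c1
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # step 6: (1)
        X = np.clip(X + V, low, high)                              # step 7: (2)
        fit = np.apply_along_axis(f, 1, X)
        better = fit < pbest_fit                             # step 3 bookkeeping
        pbest[better], pbest_fit[better] = X[better], fit[better]
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])
    return gbest, gbest_fit                                  # step 9
```

For example, `agwpso(lambda x: float(np.sum(x ** 2)), 300, (-50.0, 50.0))` would run the sketch on the sphere function F1 with the paper's settings.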
4. EXPERIMENT AND RESULTS
4.1. Test Functions
Table 1 lists the six benchmark functions used to assess the performance of AGWPSO. They fall into three categories: unimodal functions (F1 and F2), multimodal functions (F3 and F4) and fixed-dimension multimodal functions (F5 and F6). The unimodal and multimodal functions evaluate the performance of PSO on high-dimensional problems, and the fixed-dimension multimodal functions allow a comprehensive study. The parameter Dim gives the dimension of each benchmark function, the parameter Range gives the range of its search space, and the parameter fmin is its minimum value.
Table 1. Benchmark functions and their parameters

Function | Dim | Range | fmin
F1(x) = Σ_{i=1}^{n} x_i² | 300 | [-50,50] | 0
F2(x) = Σ_{i=1}^{n-1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | 300 | [-50,50] | 0
F3(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e | 300 | [-32,32] | 0
F4(x) = (1/4000)·Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1 | 300 | [-600,600] | 0
F5(x) = (x₂ − (5.1/(4π²))·x₁² + (5/π)·x₁ − 6)² + 10·(1 − 1/(8π))·cos x₁ + 10 | 2 | [-5,5] | 0.398
F6(x) = [1 + (x₁ + x₂ + 1)²·(19 − 14x₁ + 3x₁² − 14x₂ + 6x₁x₂ + 3x₂²)] × [30 + (2x₁ − 3x₂)²·(18 − 32x₁ + 12x₁² + 48x₂ − 36x₁x₂ + 27x₂²)] | 2 | [-2,2] | 3
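The four high-dimensional benchmarks in Table 1 (F1–F4, i.e. the Sphere, Rosenbrock, Ackley and Griewank functions) can be written directly from their formulas; the function names are ours:

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares."""
    return np.sum(x ** 2)

def f2_rosenbrock(x):
    """F2: Rosenbrock valley."""
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def f3_ackley(x):
    """F3: Ackley function."""
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def f4_griewank(x):
    """F4: Griewank function."""
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

All four attain their minimum value 0 at the expected optima (the origin for F1, F3 and F4, and the all-ones vector for F2).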
4.2. Results and Discussion
To verify the performance of AGWPSO, the following algorithms were used in the simulation experiments:
(1) basic particle swarm optimization, SPSO (Shi and Eberhart, 1998);
(2) particle swarm optimization with exponential time-varying acceleration coefficients, TAPSO (Ziyu and Dingxue, 2009);
(3) improved particle swarm optimization, IPSO (Cui et al., 2008);
(4) autonomous groups particle swarm optimization, AGPSO (Mirjalili et al., 2014);
(5) autonomous groups particle swarm optimization based on the exponentially decreasing inertia weight, AGWPSO.
For ease of observation, the maximum number of iterations is set to 2000 for benchmark functions F1, F2, F3 and F4, and to 50 for F5 and F6. SPSO, IPSO, TAPSO and AGPSO tune the inertia weight through formula (7), while AGWPSO tunes it through formula (8); wMax, the maximum inertia weight, is 0.9 and wMin, the minimum, is 0.4. SPSO's cognitive and social parameters are both 2; the remaining algorithms tune c1 and c2 dynamically: TAPSO updates c1 and c2 by formula (3), IPSO by formula (4), and AGPSO and AGWPSO by formulas (3), (4), (5) and (6), group by group.
To compare the performance of all the algorithms, the simulation data were collected over 30 independent runs. After the iterations terminate, the means and variances of the optimal solutions are listed in Table 2, with the best results shown in bold.
Table 2. Simulation results of the algorithms

Function | Metric | SPSO | TAPSO | IPSO | AGPSO | AGWPSO
F1 | Mean | 7.0544E+03 | 6.2789E+03 | 6.7520E+03 | 4.5122E+03 | 3.6473E+03
F1 | Variance | 6.5615E+05 | 6.5742E+05 | 6.4754E+05 | 2.7888E+05 | 1.8548E+05
F2 | Mean | 6.8457E+07 | 5.3759E+07 | 6.7309E+07 | 3.2051E+07 | 2.4444E+07
F2 | Variance | 1.0866E+14 | 1.8519E+14 | 2.5655E+14 | 7.7867E+13 | 2.3693E+13
F3 | Mean | 1.4524E+01 | 1.4083E+01 | 1.3826E+01 | 1.4234E+01 | 1.2295E+01
F3 | Variance | 1.2598E+00 | 1.1201E+00 | 1.4022E+00 | 1.5953E+00 | 1.5720E-01
F4 | Mean | 1.0024E+01 | 1.4983E+01 | 5.8296E+00 | 5.0348E+00 | 3.9955E+00
F4 | Variance | 1.3178E+00 | 2.4638E+00 | 3.2234E-01 | 1.6733E-01 | 1.2533E-01
F5 | Mean | 3.9790E-01 | 3.9790E-01 | 3.9790E-01 | 3.9790E-01 | 3.9790E-01
F5 | Variance | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
F6 | Mean | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00
F6 | Variance | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
A unimodal function has only a global minimum and no local minima, so this kind of function is especially suitable for evaluating the convergence ability of algorithms. From the results for benchmark functions F1 and F2 in Table 2, the convergence accuracy of AGWPSO is higher than that of AGPSO on the two unimodal functions, and is the best among all the algorithms. From Figure 2 and Figure 3, it can be observed that AGWPSO has the best convergence ability on the unimodal functions and maintains a fast convergence speed in the later iterations. The simulation results show that AGWPSO improves the convergence ability of PSO on unimodal functions. The reason is that the autonomous groups strategy gives the particles diversity, allowing them to make full use of locations close to the optimal solution, while the exponentially decreasing inertia weight lets PSO quickly determine the region of the optimal solution at the early stage and keep a fast convergence speed toward the global optimum later.
Contrary to unimodal functions, multimodal functions have several local minima, whose number increases exponentially with the problem dimension, so they are suitable for testing an algorithm's ability to avoid falling into local optima. The results for benchmark functions F3 and F4 in Table 2 show that AGWPSO has better convergence precision than AGPSO on the multimodal functions and is the best of all the algorithms. From Figure 4 and Figure 5, it can be seen that in the later iterations the convergence speed of AGWPSO is the fastest and AGWPSO is not trapped in a local optimum. The simulation results show that AGWPSO improves PSO's ability to avoid falling into local optima. The reason is that the autonomous groups strategy gives PSO more random search, and the exponentially decreasing inertia weight gives PSO a smaller speed changing rate in the later iterations, providing the meticulous local search needed to avoid local optima.
Figure 2. Evolution of fitness at benchmark function F1 (best-so-far versus iteration, for SPSO, IPSO, TAPSO, AGPSO and AGWPSO)

Figure 3. Evolution of fitness at benchmark function F2 (best-so-far versus iteration, for SPSO, IPSO, TAPSO, AGPSO and AGWPSO)
From the convergence curves of the unimodal and multimodal functions and the experimental data, it can be seen that AGWPSO keeps evolving throughout the iterative process on benchmark functions F1, F2, F3 and F4 and has the best convergence accuracy. This shows that on these benchmark functions AGWPSO outperforms AGPSO and has an obvious advantage over the other algorithms. The simulation results on the unimodal and multimodal functions show that AGWPSO not only improves the convergence speed of PSO, but also improves the algorithm's ability to avoid falling into local optima.
Compared with the unimodal and multimodal functions, the fixed-dimension multimodal functions have only a few local minima. The results for benchmark functions F5 and F6 in Table 2 show that all the algorithms are equivalent on the two fixed-dimension multimodal functions; that is, all of them converge to the global optimal solution. Figure 6 and Figure 7 show that the convergence curves of all the algorithms are similar and that AGWPSO converges quickly. Because of the low dimension of these benchmark functions, the differences between the experimental results are small, which illustrates that the advantages of AGWPSO are greater and more easily observed on high-dimensional problems.
The data in Table 2 show that AGWPSO's mean values on the six benchmark functions are the best, illustrating that its convergence precision is the best among all the algorithms, and its variances are the smallest, illustrating that its stability is also the best. This shows that combining the exponentially decreasing inertia weight with the autonomous groups strategy improves both the convergence accuracy and the stability of PSO.
From the evolution figures of the six benchmark functions, it can be seen that AGWPSO maintains a faster convergence speed throughout the iterative process, and its evolutionary speed is higher than that of AGPSO. Except for the low-dimensional functions F5 and F6, on which all algorithms converge to the global optimum, AGWPSO and AGPSO keep evolving throughout the iterative process on the remaining functions, while the other algorithms quickly fall into local optima. This shows that among all the algorithms, AGWPSO is the best at avoiding local optima; that is, AGWPSO better balances global and local search and improves PSO's ability to avoid falling into local optima.
Figure 4. Evolution of fitness at benchmark function F3

Figure 5. Evolution of fitness at benchmark function F4

Figure 6. Evolution of fitness at benchmark function F5

Figure 7. Evolution of fitness at benchmark function F6
5. CONCLUSIONS
To address the problem that PSO easily falls into local optima, this paper has proposed AGWPSO. To test whether the proposed algorithm is effective, six benchmark functions were simulated, and the comparison with the other algorithms shows that AGWPSO is better at avoiding local optima, especially on high-dimensional problems. AGWPSO allows particles to have different individual and social behaviors, which gives the algorithm more random search, and it coordinates the cognitive coefficient, the social coefficient and the inertia weight to improve the algorithm's ability to avoid being trapped in local optima. In future research, problems such as tuning the number of groups and applying the algorithm to practical problems remain to be explored.
Acknowledgements
This work was supported by the National-sponsored Social Sciences Funding Program in China under Grant 16BTQ084 and the Humanities and Social Science Project of the Ministry of Education in China under Grant 15YJCZH023.
REFERENCES
Arasomwan, M. A., Adewumi, A. O. (2013) "On the performance of linear decreasing inertia weight particle
swarm optimization for global optimization", The Scientific World Journal, 78, pp.1648-1653.
Bao, G. Q., Mao, K. F. (2009) "Particle swarm optimization algorithm with asymmetric time varying
acceleration coefficients", Proc. of 2009 IEEE International Conference on Robotics and Biomimetics
(ROBIO), pp.2134-2139.
Chauhan, P., Deep, K., Pant, M. (2013) "Novel inertia weight strategies for particle swarm optimization",
Memetic Computing, 5(3), pp.229-251.
Črepinšek, M., Liu, S. H., Mernik, M. (2013) "Exploration and exploitation in evolutionary algorithms: A
survey", ACM Computing Surveys, 45(3), pp.533-545.
Cui, Z., Zeng, J., Yin, Y. (2008) "An improved PSO with time-varying accelerator coefficients", Proc. of 2008
Eighth International Conference on Intelligent Systems Design and Applications, pp.638-643.
Eberhart, R.C., Shi, Y. (2001) "Tracking and optimizing dynamic systems with particle swarms", Proceedings of
the 2001 Congress on Evolutionary Computation, pp.94-100.
García, I. C., Coello, C. A. C., Arias-Montano, A. (2014) "Mopsohv: A new hypervolume-based multi-objective
particle swarm optimizer", Proc. of 2014 IEEE Congress on Evolutionary Computation, pp.266-273.
Kennedy, J., Eberhart, R. (1995) "Particle swarm optimization", Proceedings of 1995 IEEE International
Conference on Neural Networks, pp.1942-1948.
Liu, Y., Qin, Z., Shi, Z., Lu, J. (2007) "Center particle swarm optimization", Neurocomputing, 70(4), pp.672-679.
Mirjalili, S., Lewis, A., Sadiq, A. S. (2014) "Autonomous particles groups for particle swarm optimization",
Arabian Journal for Science and Engineering, 39(6), pp.4683-4697.
Moon, S. K., Park, K. J., Simpson, T. W. (2014) "Platform design variable identification for a product family
using multi-objective particle swarm optimization", Research in Engineering Design, 25(2), pp.95-108.
Nickabadi, A., Ebadzadeh, M. M., Safabakhsh, R. (2011) "A novel particle swarm optimization algorithm with
adaptive inertia weight", Applied Soft Computing, 11(4), pp.3658-3670.
Shi, Y., Eberhart, R. C. (1998) "Parameter selection in particle swarm optimization", Proc. of International
Conference on Evolutionary Programming, 1447(25), pp.591-600.
Shi, Y., Eberhart, R. C. (1999) "Empirical study of particle swarm optimization", Proceedings of the 1999
Congress on Evolutionary Computation, pp.32-49.
Shi, Y., Eberhart, R. C. (2001) "Fuzzy adaptive particle swarm optimization", Proceedings of the 2001 Congress
on Evolutionary Computation, pp.101-106.
Shokrian, M., High, K. A. (2014) "Application of a multi objective multi-leader particle swarm optimization algorithm on NLP and MINLP problems", Computers & Chemical Engineering, 60(2), pp.57-75.
Taherkhani, M., Safabakhsh, R. (2016) "A novel stability-based adaptive inertia weight for particle swarm optimization", Applied Soft Computing, 38, pp.281-295.
Tsai, H. C., Tyan, Y. Y., Wu , Y. W., Lin, Y. H. (2013) "Gravitational particle swarm", Applied Mathematics and
Computation, 219(17), pp.9106-9117.
Worasucheep, C. (2008) "A particle swarm optimization with stagnation detection and dispersion", 2008 IEEE
Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 35(8),
pp.424-429.
Zhou, L., Shi, Y., Li, Y., Zhang, W. (2010) "Parameter selection, analysis and evaluation of an improved particle
swarm optimizer with leadership", Artificial Intelligence Review, 34(4), pp.343-367.
Ziyu, T., Dingxue, Z. (2009) "A modified particle swarm optimization with an adaptive acceleration
coefficients", Proc. of Asia-Pacific Conference on Information Processing, 2(6), pp.330-332.