A differential-based harmony search algorithm for the optimization of
continuous problems
Hosein Abedinpourshotorban a,b, Shafaatunnur Hasan a,b, Siti Mariyam Shamsuddin a,b,∗, Nur Fatimah As’Sahra b

a UTM Big Data Centre, Ibnu Sina Institute for Scientific and Industrial Research, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
b Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia

Article info
Article history:
Received 3 November 2015
Revised 4 February 2016
Accepted 7 May 2016
Available online 10 May 2016
Keywords:
Harmony search algorithm
Continuous optimization
Evolutionary optimization
Differential evolution
Meta-heuristics
Abstract
The performance of the Harmony Search (HS) algorithm is highly dependent on the parameter settings
and the initialization of the Harmony Memory (HM). To address these issues, this paper presents a new
variant of the HS algorithm, which is called the DH/best algorithm, for the optimization of globally continuous problems. The proposed DH/best algorithm introduces a new improvisation method that differs
from the conventional HS in two respects. First, the random initialization of the HM is replaced with a
new method that effectively initializes the harmonies and reduces randomness. Second, the conventional
pitch adjustment method is replaced by a new pitch adjustment method that is inspired by a Differential
Evolution (DE) mutation strategy known as DE/best/1. Two sets of experiments are performed to evaluate
the proposed algorithm. In the first experiment, the DH/best algorithm is compared with other variants of
HS based on 12 optimization functions. In the second experiment, the complete CEC2014 problem set is
used to compare the performance of the DH/best algorithm with six well-known optimization algorithms
from different families. The experimental results demonstrate the superiority of the proposed algorithm
in convergence, precision, and robustness.
© 2016 Elsevier Ltd. All rights reserved.
1. Introduction
Optimization refers to the process of selecting the best solution
from the set of all possible solutions to maximize or minimize the
cost of the problem (Moh’d Alia & Mandava, 2011). Optimization
problems can be categorized into discrete or continuous groups
based on the solution set (Velho, Carvalho, Gomes, & de Figueiredo,
2011). An additional category is based on the properties of the objective function, such as unimodal or multimodal.
Therefore, various optimization algorithms are required to
tackle different problems. There are two types of optimization algorithms: exact and approximate (Stützle, 1999). Exact algorithms
are guaranteed to find the best solution within a certain period of
time (Weise, 2009). However, real-world problems are mostly NP-hard, and finding solutions for this type of problem using exact algorithms consumes exponential amounts of time (Johnson, 1985; Michael & David, 1979). Thus, approximate algorithms have been
applied recently to find near-optimal solutions to NP-hard problems in reasonable amounts of time.
Meta-heuristics are approximate algorithms that are able to
find satisfactory solutions for optimization problems in reasonable
amounts of time (Blum & Roli, 2003, 2008). Meta-heuristics are
also used to address a major drawback of approximate local search
algorithms, which is finding local minima instead of global minima.
Differential Evolution (DE) (Price, Storn, & Lampinen, 2006;
Storn & Price, 1995, 1997) emerged in the late 1990s and is one of
the most competitive metaheuristic algorithms. The DE algorithm
is somewhat similar to the Genetic Algorithm (GA), but its solutions consist of real values instead of binary values, and it generally converges faster than the GA (Hegerty, Hung, & Kasprak, 2009).
The performance of DE greatly depends on the parameter settings (Islam, Das, Ghosh, Roy, & Suganthan, 2012). Many variants
of DE have been proposed to address different problems, but DE
still faces several difficulties in optimizing some types of functions
as has been pointed out in several recent publications (Hansen &
Kern, 2004; Ronkkonen, Kukkonen, & Price, 2005). However, due
to the optimization power of the DE algorithm, it is commonly
applied for the optimization of real world problems, such as optimizing compressor supply systems (Hancox & Derksen, 2005),
Fig. 1. General process of DE: initialization of parameter vectors, differential-based mutation, crossover, and greedy selection.
determining earthquake hypocenters (Růžek & Kvasnička, 2005)
and for 3D medical image registration (Salomon, Perrin, Heitz, &
Armspach, 2005).
Another recent and well-known meta-heuristic algorithm is
Harmony Search (HS), which was proposed by Geem, Kim, and
Loganathan (2001). HS is inspired by the way that musicians experiment and change the pitches of their instruments to improvise better harmonies. The HS algorithm has been applied to many
optimization problems, such as the optimization of heat exchangers, telecommunications, vehicle routing, pipe network design, and
so on (Cobos, Estupiñán, & Pérez, 2011; Geem, Kim, & Loganathan,
2002; Geem, Lee, & Park, 2005; Manjarres et al., 2013; Omran &
Mahdavi, 2008; Pan, Suganthan, Tasgetiren, & Liang, 2010; Wang &
Yan, 2013; Xiang, An, Li, He, & Zhang, 2014).
Although HS has achieved significant success, several shortcomings prevent it from rapidly converging toward global minima. HS
generally has inadequate local search power due to its reliance on
the parameter settings, which greatly affects its performance. The
Improved Harmony Search (IHS) (Mahdavi, Fesanghary, & Damangir, 2007) was proposed to address the local search issue of HS by
dynamically adjusting the Pitch Adjustment Rate (PAR) and Bandwidth (bw) as the algorithm runs. However, IHS places a high demand on the parameter settings prior to starting the optimization process. To dynamically adjust the HS control parameters with
respect to the evolution process and the search space of optimization problems, Chen, Pan, and Li (2012) introduced a new variant of HS called NDHS. Although NDHS outperforms conventional HS and IHS, some of its parameters must be set before the search process begins. To eliminate the labor that is associated with setting the parameters, a new parameter-less variant of HS, called GDHS, was proposed by Khalili, Kharrat, Salahshoor, and Sefat (2014), which outperforms the existing variants of HS.
Although many variants of HS have been proposed, the demand
for improvement in evolutionary algorithms is increasing as real
world problems become increasingly complicated. Motivated by the facts that the bw parameter of the HS algorithm is problem-dependent and significantly influences the performance of the algorithm, and that most of the existing methods address this issue by dynamically changing the bw based on the number of harmony improvisations instead of considering the problem's surface, this paper proposes a new variant of HS, called the DH/best algorithm, to eliminate the need for bw and to improve the accuracy and robustness of HS.
2. Methodology
In this section, we study the DE and HS algorithms in detail.
Later in the paper (Section 4), we propose a new hybrid algorithm
by combining these two algorithms.
2.1. Differential Evolution
The DE algorithm is a population-based global optimizer that
outperforms many optimization algorithms in terms of robustness
and convergence speed. Many versions of DE have been developed. The original version of DE is known as DE/rand/1/bin or “classic DE” (Storn & Price, 1997). The DE variant that is used in this paper is called DE/best/1; it differs from the classical version of DE only in terms of its mutation strategy. Experiments by Geem (2010) and Mezura-Montes, Velázquez-Reyes, and Coello Coello (2006) on various types of optimization functions demonstrated that the DE/best/1 scheme is the most competitive DE scheme regardless of the characteristics of the problem to be solved. The general process
of the DE algorithm is illustrated in Fig. 1.
Step 1: Initialization of parameter vectors
A random population of parameter vectors is initialized in this
step. The number of parameters of each vector is equal to the
number of problem parameters, and each parameter of the vector corresponds to one problem variable. In addition, the value of
each parameter is allocated randomly from the parameter range. A
DE parameter vector is defined as:

$$\vec{X}_{i,G} = [x_{1,G},\, x_{2,G},\, x_{3,G},\, \ldots,\, x_{D,G}] \qquad (1)$$

where $D$ is the number of parameters, which does not change during the optimization, and $G$ indicates the current generation and increases generation by generation.
Step 2: Differential-based mutation
The new parameter vectors are generated by adding the weighted difference between two randomly selected population vectors to the best (fittest) vector. The mutant vector is calculated by Eq. (2):

$$\vec{v}_{i,G+1} = \vec{X}_{best,G} + F \cdot \left(\vec{X}_{r1,G} - \vec{X}_{r2,G}\right) \qquad (2)$$

where $best$ is the index of the fittest vector in the population, and $r1$ and $r2$ are the indexes of randomly selected vectors from the population that are different from the index of the best vector.
Step 3: Crossover
To increase the diversity of the population, DE mixes the mutated vector's parameters with the parameters of another predetermined vector, which is called the target vector. The new vector is called the trial vector. The generation of the trial vector is formulated in Eq. (3):

$$u_{j,i,G+1} = \begin{cases} v_{j,i,G+1} & \text{if } randb(j) \le CR \text{ or } j = rnbr(i) \\ x_{j,i,G} & \text{if } randb(j) > CR \text{ and } j \ne rnbr(i) \end{cases}, \quad j = 1, 2, \ldots, D \qquad (3)$$

where $randb(j)$ is the $j$th evaluation of a random function, which generates uniform random numbers between 0 and 1, $CR$ is the crossover constant (between 0 and 1), which is determined by the user, and $rnbr(i)$ is a randomly selected index from $1, 2, \ldots, D$ that ensures that $u_{i,G+1}$ obtains at least one parameter from $v_{i,G+1}$.
Step 4: Greedy selection

The fitness of the generated trial vector ($u_{i,G+1}$) is compared to the fitness of the target vector ($x_{i,G}$) to determine whether or not it should become a member of the next generation. If the trial vector yields a better fitness, $x_{i,G+1}$ is set to $u_{i,G+1}$; otherwise, the target vector is retained.
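To make Steps 1-4 concrete, the Python sketch below runs one DE/best/1/bin generation following Eqs. (2) and (3) with greedy selection. It is a minimal illustration rather than the authors' implementation; the function names, the boundary clipping, and the sphere objective in the usage example are our own assumptions.

```python
import numpy as np

def de_best_1_generation(pop, fitness, objective, F=0.5, CR=0.9, lower=-100.0, upper=100.0):
    """One DE/best/1/bin generation: mutation (Eq. 2), binomial crossover (Eq. 3), greedy selection."""
    n, d = pop.shape
    best = int(np.argmin(fitness))                     # index of the fittest vector
    for i in range(n):
        # Step 2: differential-based mutation around the best vector
        r1, r2 = np.random.choice([k for k in range(n) if k != best], size=2, replace=False)
        mutant = np.clip(pop[best] + F * (pop[r1] - pop[r2]), lower, upper)
        # Step 3: binomial crossover with the target vector; j_rand keeps at least one mutant parameter
        j_rand = np.random.randint(d)
        mask = np.random.rand(d) <= CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # Step 4: greedy selection between trial and target vectors
        f_trial = objective(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness

# Usage on a simple sphere objective (illustrative only)
pop = np.random.uniform(-100, 100, size=(50, 30))
sphere = lambda x: float(np.sum(x ** 2))
fitness = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fitness = de_best_1_generation(pop, fitness, sphere)
```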
2.2. Harmony search algorithm

Geem et al. (2001) proposed the HS algorithm based on the improvisation of new harmonies in musicians' memory. HS is a population-based evolutionary algorithm in which the solutions are represented by the harmonies, and the population is represented by the Harmony Memory (HM). There are several control parameters in HS, such as the Harmony Memory Size (HMS), which determines the number of solutions (harmonies) inside the HM. Another HS parameter is the Harmony Memory Consideration Rate (HMCR), which determines whether pitches (decision variables) should be selected from the HM or randomly from the pre-defined range. Another important control parameter of HS is the Pitch Adjusting Rate (PAR), which determines the probability of adjusting the original value of the pitches selected from the HM. The last parameter of the HS algorithm is the bandwidth (bw), which determines the adjustment range. The conventional HS consists of four steps, as follows:

Step 1: Initialization of the harmony memory

The HM is equivalent to the population of other population-based optimization algorithms. The HM is initially randomly initialized by Eq. (4):

$$x_{ij} = L_j + rand(0, 1) \times \left(U_j - L_j\right) \qquad (4)$$

where $i = 1, 2, 3, \ldots, HMS$, $j = 1, 2, 3, \ldots, D$, $L_j$ and $U_j$ are the lower and upper boundaries of the $j$th variable, respectively, and $x_i$ is a $D$-dimensional solution. After the initialization of the HM, the cost of each solution is evaluated.

Step 2: Improvisation of a new harmony

In this step, a new harmony $x = (x_1, x_2, \ldots, x_D)$ is improvised based on three rules: memory consideration, pitch adjustment, and random selection. Algorithm 1 shows the improvisation procedure.

Algorithm 1. Improvisation of a new harmony.
1. For j = 1 to D do
2.   If rand(0, 1) ≤ HMCR then /* memory consideration */
3.     x_j = x_{i,j}, where i is a random integer from (1, ..., HMS)
4.     If rand(0, 1) ≤ PAR then /* pitch adjustment */
5.       x_j = x_j ± rand(0, 1) × bw
6.     end if
7.   else
8.     x_j = rand(L_j, U_j) /* random selection */
9.   end if
10. end for

Step 3: Updating the harmony memory

If the cost of the improvised x is better than that of the worst harmony x_w, then x_w is replaced by x in the HM.

Step 4: Checking the termination criteria

If the maximum number of harmony improvisations is reached or the cost of the best harmony x_B is better than the expected cost, then stop; otherwise, the improvisation of new harmonies continues by repeating steps 2-4.
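A minimal Python sketch of Algorithm 1 is shown below; the variable names and the representation of the HM as a list of lists are our own assumptions, not part of the original pseudocode.

```python
import random

def hs_improvise(HM, HMCR, PAR, bw, lower, upper):
    """Improvise one new harmony from the harmony memory HM following Algorithm 1."""
    HMS, D = len(HM), len(HM[0])
    new = [0.0] * D
    for j in range(D):
        if random.random() <= HMCR:                 # memory consideration
            new[j] = HM[random.randrange(HMS)][j]
            if random.random() <= PAR:              # pitch adjustment: x_j +/- rand(0,1) * bw
                new[j] += random.uniform(-1.0, 1.0) * bw
        else:                                       # random selection from the variable range
            new[j] = random.uniform(lower[j], upper[j])
    return new
```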
3. Competitive variants of the harmony search algorithm

In this section, we study several of the competitive variants of the harmony search algorithm in detail.

3.1. Improved harmony search

To improve the performance of HS, Mahdavi et al. (2007) proposed a dynamic approach to adjust the HS parameters, called Improved Harmony Search (IHS). In this algorithm, the parameters are updated based on the iterations of the algorithm. Eq. (5) is used to dynamically update the PAR:

$$PAR(t) = PAR_{min} + \frac{PAR_{max} - PAR_{min}}{NI} \times t \qquad (5)$$

where $t$ indicates the current iteration (algorithm run), $NI$ is the maximum number of iterations allowed (maximum number of harmony improvisations), $PAR(t)$ is the PAR of iteration $t$, and $PAR_{min}$ and $PAR_{max}$ are the minimum and maximum PARs, respectively. In addition, bw is adjusted dynamically by Eq. (6):

$$bw(t) = bw_{max} \times e^{\left(\frac{\ln\left(bw_{min}/bw_{max}\right)}{NI}\right) \times t} \qquad (6)$$

where $bw(t)$ is the bandwidth of iteration $t$, and $bw_{min}$ and $bw_{max}$ are the predetermined minimum and maximum bandwidths, respectively.

According to the authors, IHS outperforms HS in convergence and finds better solutions.
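As a small illustration of Eqs. (5) and (6), the sketch below computes the IHS parameter values for a given iteration. The function name is ours, and the default bounds shown are only the values listed later in Table 1, so treat them as assumptions.

```python
import math

def ihs_parameters(t, NI, par_min=0.35, par_max=0.99, bw_min=1e-6, bw_max=1.0):
    """IHS schedules: PAR grows linearly (Eq. 5) while bw decays exponentially (Eq. 6) with iteration t."""
    par = par_min + (par_max - par_min) / NI * t
    bw = bw_max * math.exp(math.log(bw_min / bw_max) / NI * t)
    return par, bw

# Example: bw_max = (U - L)/20 for a variable ranging over [-100, 100]
print(ihs_parameters(t=500, NI=1000, bw_max=(100 - (-100)) / 20))
```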
3.2. The novel dynamic harmony search

Chen et al. (2012) proposed a variant of HS called NDHS that differs from the conventional HS in two respects. First, memory consideration is performed by a tournament selection rule instead of random selection. Second, a new method is proposed to dynamically adjust the PAR and bw with respect to the evolution process.

$$x_{new}(j) = x_{ts}(j), \quad j = 1, 2, \ldots, n \qquad (7)$$

where $x_{ts}(j)$ is the $j$th pitch of harmony $x_{ts}$, which is selected by the tournament selection rule.

$$PAR(t) = PAR_{min} + (PAR_{max} - PAR_{min}) \times \exp(-kt/NI) \qquad (8)$$

where $PAR(t)$ is the pitch adjustment rate in iteration $t$, and $PAR_{min}$ and $PAR_{max}$ are the minimum and the maximum PARs, respectively. The authors suggested setting the value of $k/NI$ to 1e-5 based on experiments.

In NDHS, the improvisation process is divided into two stages. In the first phase, the bw is updated automatically with respect to $x_{new}(j)$ to globally explore the search space. During the second phase, the bw is computed based on the distance between $x_{new}(j)$ and a randomly selected pitch (taken from the same column in the HM) to locally explore the search space. The pitch adjustment process in NDHS is illustrated by Algorithm 2.

$$bw(x) = \begin{cases} \dfrac{U_j - L_j}{1000}, & x = 0 \\[4pt] x, & 0 < x \le 1 \\[4pt] \max\left(\dfrac{U_j - L_j}{50},\, 1\right), & x > 1 \end{cases} \qquad (9)$$

where $t$ is the current iteration, and $NI$ is the maximum number of iterations. In NDHS, bw is updated based on Eq. (9), which determines the value of bw according to the current decision variable.

According to the authors, NDHS outperforms HS and IHS in convergence and finds more accurate solutions.
Fig. 2. Example of initialized harmony memory by the proposed method.

Algorithm 2. Pitch adjustment procedure of NDHS.
1. if (rand(0, 1) < PAR)
2.   if (t < NI/2)
3.     x_new(j) = x_new(j) + rand(−1, 1) × bw(x_new(j))
4.   else
5.     x_new(j) = x_new(j) + rand(−1, 1) × bw(x_new(j) − x_a(j)), where a ∈ (1, 2, ..., HMS)
6.   endif
7. endif

Algorithm 3. DH/best HM initialization.
1. For j = 1 to D do
2.   For i = 1 to HMS do
3.     temp_i = ((i − 0.5)/HMS) × (U_j − L_j) + L_j
4.   end for
5.   Shuffle(temp) /* shuffle the temporary array (temp) */
6.   For i = 1 to HMS do
7.     x_{ij} = temp_i
8.   end for
9. end for
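For readers who prefer runnable code, the following Python sketch mirrors Algorithm 3. It is an illustrative implementation, not the authors' code; the function name and the list-of-lists representation of the HM are our own assumptions.

```python
import random

def initialize_hm(HMS, lower, upper):
    """Column-wise HM initialization of Algorithm 3: evenly spaced pitches per variable, then shuffled."""
    D = len(lower)
    HM = [[0.0] * D for _ in range(HMS)]
    for j in range(D):
        # (i - 0.5)/HMS for i = 1..HMS places HMS evenly spaced pitches strictly inside [L_j, U_j]
        temp = [((i - 0.5) / HMS) * (upper[j] - lower[j]) + lower[j] for i in range(1, HMS + 1)]
        random.shuffle(temp)          # decouple the variables from one another
        for i in range(HMS):
            HM[i][j] = temp[i]
    return HM

# Example mirroring Fig. 2: HMS = 5 with every variable ranging over [1, 5]
print(initialize_hm(5, lower=[1.0] * 3, upper=[5.0] * 3))
```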
3.3. Global dynamic harmony search

Khalili et al. (2014) proposed a parameter-less variant of HS called GDHS in which all of the parameters of HS and the domain (upper and lower limits) are dynamically updated as the algorithm runs. The HMCR, PAR, domain and bw are updated as follows:

$$HMCR = 0.9 + 0.2 \sqrt{\frac{iteration - 1}{max.imp - 1}\left(1 - \frac{iteration - 1}{max.imp - 1}\right)} \qquad (10)$$

$$PAR = 0.85 + 0.3 \sqrt{\frac{iteration - 1}{max.imp - 1}\left(1 - \frac{iteration - 1}{max.imp - 1}\right)} \qquad (11)$$

where $iteration$ is the current iteration, and $max.imp$ is the maximum number of improvisations.

$$bw = bw_{max} \times e^{\ln(0.001) \times \frac{iteration - 1}{max.imp - 1}} \qquad (12)$$

$$bw_{max} = \frac{U^{HM} - L^{HM}}{bw_{den}} \qquad (13)$$

$$bw_{den} = 20 \times \left|1 + \log_{10}\left(U^{initial} - L^{initial}\right)\right| \qquad (14)$$

where $U^{HM}$ and $L^{HM}$ are the maximum and minimum values of the variables inside the HM, respectively.

$$U^{new} = U^{HM} + bw_{max} \qquad (15)$$

$$L^{new} = L^{HM} - bw_{max} \qquad (16)$$

$$New\ limit = \left[L^{new},\, U^{new}\right] \qquad (17)$$

The pitches are adjusted according to Eq. (18), and a correction coefficient is used to increase or decrease bw according to the quality of the selected value from the HM. The coefficient is based on the position of the corresponding harmony in the sorted HM:

$$x_j = x_{ij} \pm bw \times coef \qquad (18)$$

$$coef = \left(1 + (HMS - i)\right) \times \left(1 - \frac{iteration - 1}{max.imp - 1}\right) \qquad (19)$$

where $i$ is the index of the selected harmony in the sorted HM, and $j$ is the $j$th pitch of the selected harmony $x_i$.
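The following Python sketch evaluates the GDHS schedules for a given improvisation count. It is only an illustration of Eqs. (10)-(14) as reconstructed above; the square-root form of Eqs. (10) and (11) and the function name are assumptions on our part, not the GDHS reference implementation.

```python
import math

def gdhs_parameters(iteration, max_imp, u_hm, l_hm, u_initial, l_initial):
    """GDHS schedules: HMCR (Eq. 10), PAR (Eq. 11) and bw (Eqs. 12-14) derived from the improvisation count."""
    r = (iteration - 1) / (max_imp - 1)                            # progress ratio in [0, 1]
    hmcr = 0.9 + 0.2 * math.sqrt(r * (1.0 - r))                    # Eq. (10)
    par = 0.85 + 0.3 * math.sqrt(r * (1.0 - r))                    # Eq. (11)
    bw_den = 20.0 * abs(1.0 + math.log10(u_initial - l_initial))   # Eq. (14)
    bw_max = (u_hm - l_hm) / bw_den                                # Eq. (13)
    bw = bw_max * math.exp(math.log(0.001) * r)                    # Eq. (12)
    return hmcr, par, bw
```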
4. Proposed DH/best algorithm

Because the bw has a profound effect on the performance of HS, a new pitch adjustment method is introduced in this paper. The proposed method updates the values of the pitches based on the distance between the pitches in the HM and eliminates the need to set the bw parameter. Moreover, a new HM initialization method is proposed to effectively initialize the harmonies and reduce the chance of finding local minima. DH/best follows the same steps as HS, but step 1 of HS is replaced by Algorithm 3, and step 2 of HS is replaced by Algorithm 4.

Algorithm 3 initializes the HM vertically (variable by variable) instead of horizontally. It also ensures that all of the solution variables are scattered throughout their search ranges in the HM. For example, Fig. 2 shows the HM that was initialized by the proposed initialization method when HMS = 5 and $x_j \in [1, 5]$.

As shown in Fig. 2, the columns of the HM are filled with pitches that are scattered over the variable ranges. This is achieved by dividing each pitch range by the HMS and spacing the initialized pitches by the resulting interval, starting from the lower boundary. The value −0.5 in Algorithm 3 helps to initialize the pitches so that they lie between the boundaries and not on the boundaries.
Algorithm 4. DH/best improvisation scheme.
1. For j = 1 to D do
2.   If rand(0, 1) ≤ HMCR then /* memory consideration */
3.     x_j = x_{i,j}, where i is a random integer from (1, ..., HMS)
4.     If rand(0, 1) ≤ PAR then
5.       x_j = x_{best,j} + rand(0, 1) × (x_{r1,j} − x_{r2,j}) /* new pitch adjustment */
6.       If x_j < L_j or x_j > U_j then
7.         x_j = rand(L_j, U_j) /* random selection */
8.       end if
9.     end if
10.  else
11.    x_j = rand(L_j, U_j) /* random selection */
12.  end if
13. end for
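A runnable Python sketch of Algorithm 4 is given below. It is an illustration under our own naming and data-structure assumptions (the HM as a list of lists, with costs in a parallel list); the HMCR and PAR defaults are taken from Table 1.

```python
import random

def dh_best_improvise(HM, costs, lower, upper, HMCR=0.99, PAR=0.9):
    """One DH/best improvisation (Algorithm 4): memory consideration with DE/best/1-style pitch adjustment."""
    HMS, D = len(HM), len(HM[0])
    best = min(range(HMS), key=lambda k: costs[k])     # index of the best (lowest-cost) harmony
    new = [0.0] * D
    for j in range(D):
        if random.random() <= HMCR:                    # memory consideration
            new[j] = HM[random.randrange(HMS)][j]
            if random.random() <= PAR:                 # differential pitch adjustment (no bw needed)
                r1, r2 = random.sample(range(HMS), 2)
                new[j] = HM[best][j] + random.random() * (HM[r1][j] - HM[r2][j])
                if new[j] < lower[j] or new[j] > upper[j]:
                    new[j] = random.uniform(lower[j], upper[j])   # repair by random selection
        else:                                          # random selection
            new[j] = random.uniform(lower[j], upper[j])
    return new
```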
Table 1
Parameter settings for the compared algorithms.

Algorithm | HMS | HMCR | PAR | bw | ts
HS | 5 | 0.9 | 0.3 | 0.01 | -
IHS | 5 | 0.95 | PARmin = 0.35, PARmax = 0.99 | bwmin = 1e-6, bwmax = (U − L)/20 | -
NDHS | 5 | 0.99 | PARmin = 0.01, PARmax = 0.99 | - | 2
GDHS | - | - | - | - | -
DH/best | 50 (100 for function H) | 0.99 | 0.9 | - | -
Table 2
Mean and standard deviation (±SD) of the errors for the optimization of the benchmark functions (D = 30).

Function | HS | IHS | NDHS | GDHS | DH/best
A | 1.05e+01 (2.81e+00) | 2.82e-01 (3.11e-01) | 0.00e+00 (0.00e+00) | 1.43e-134 (4.65e-134) | 1.57e-64 (4.62e-64)
B | 1.03e+04 (7.22e+03) | 1.17e+03 (4.78e+02) | 2.91e+01 (8.86e+00) | 1.14e-06 (8.39e-07) | 5.54e-25 (2.95e-24)
C | 1.64e-01 (5.98e-02) | 2.85e-02 (7.48e-03) | 1.91e-01 (1.21e-01) | 3.70e-03 (8.73e-03) | 5.71e-11 (3.07e-10)
D | 1.01e+02 (4.02e+01) | 4.07e+01 (2.81e+01) | 2.73e+01 (4.27e-01) | 2.34e+01 (1.17e+01) | 9.87e+00 (1.74e+01)
E | 2.88e+01 (8.76e+00) | 3.13e+00 (1.78e+00) | 7.68e-02 (1.47e-02) | 1.01e-06 (2.63e-06) | 1.27e-12 (2.73e-12)
F | 7.06e+00 (2.63e+00) | 3.86e+00 (4.55e+00) | 2.10e-02 (5.05e-03) | 3.04e-06 (1.02e-06) | 1.92e-13 (1.25e-12)
G | 4.15e+03 (1.44e+03) | 3.19e+03 (1.08e+03) | 8.47e+02 (5.32e+02) | 1.11e-02 (6.08e-03) | 3.12e-04 (4.45e-04)
H | 1.05e+04 (3.66e+03) | 1.37e+04 (4.40e+03) | 1.44e+04 (8.21e+03) | 1.21e+03 (1.15e+03) | 7.49e+02 (6.12e+02)
I | 3.61e+03 (3.15e+03) | 3.81e+03 (3.45e+03) | 5.93e+02 (1.66e+03) | 6.52e+02 (1.72e+03) | 4.13e+01 (5.19e+01)
J | 2.91e+00 (1.10e+00) | 2.35e+00 (1.35e+00) | 1.05e+00 (1.66e-01) | 4.77e-03 (4.73e-03) | 2.82e-03 (1.00e-02)
K | 1.51e+00 (9.70e-01) | 9.79e-01 (5.01e-01) | 2.89e+00 (1.51e+00) | 2.68e+00 (1.42e+00) | 4.77e-12 (1.69e-11)
L | 1.12e+00 (3.96e-01) | 5.02e-01 (2.09e-01) | 1.38e+00 (6.19e-01) | 4.54e-04 (8.77e-05) | 1.74e-13 (1.92e-13)
Table 3
Rank sum test results of the comparison between the DH/best algorithm and other variants of HS (D = 30).

DH/best vs. | HS | IHS | NDHS | GDHS
A | 1 | 1 | -1 | -1
B | 1 | 1 | 1 | 1
C | 1 | 1 | 1 | 1
D | 1 | 1 | 1 | 1
E | 1 | 1 | 1 | 1
F | 1 | 1 | 1 | 1
G | 1 | 1 | 1 | 1
H | 1 | 1 | 1 | 0
I | 1 | 1 | 1 | 1
J | 1 | 1 | 1 | 0
K | 1 | 1 | 1 | 1
L | 1 | 1 | 1 | 1
Because the DH/best algorithm improvises new harmonies by
selecting pitches from the HM and adjusting the pitches based on
the distance between the pitches in the HM, the proposed initialization method can profoundly improve the improvisation scheme
in the global search by covering all of the search space.
Algorithm 4 eliminates the need to set bw and adjusts the pitches based on the distances between the pitches in the HM by replacing the conventional HS pitch adjustment method with the DE/best/1 mutation. In the new pitch adjustment rule (line 5 of Algorithm 4), best is the index of the best harmony, and r1 and r2 are random integers from (1, 2, ..., HMS).
In contrast to IHS, which adjusts bw based on the algorithm
runs and does not consider the harmonies in the HM, the proposed algorithm adjusts pitches based on the distance between the
harmonies in the HM. This makes the DH/best algorithm problem-independent and effective in local and global searches.
On the other hand, the cooperation between the harmonies in
the DH/best algorithm is greater than in conventional HS. In the HS
algorithm, each pitch is selected randomly from a harmony inside
the HM, while in the DH/best algorithm, three different pitches
from different harmonies cooperate to determine the value of the
new pitch. In addition, a harmony is composed of several pitches,
and therefore many harmonies cooperate to form a new harmony.
This cooperation between harmonies in the DH/best algorithm results in a vast amount of knowledge about the search space and
leads to rapid convergence and high accuracy.
5. Computational evaluation
This section presents a comprehensive evaluation of the
DH/best algorithm based on two sets of experiments. In experiment A, we evaluated the proposed technique against other state-of-the-art HS variants based on the conventional function set used in other studies (Khalili et al., 2014; Yadav, Kumar, Panda, & Chang, 2012). In experiment B, we compared the DH/best algorithm to other state-of-the-art algorithms from different families: Artificial Bee Colony (ABC; Karaboga & Basturk, 2007), Group Search Optimizer (GSO; He, Wu, & Saunders, 2009), Comprehensive Learning Particle Swarm Optimizer (CLPSO; Liang, Qin, Suganthan, & Baskar, 2006), Self-adaptive DE (SaDE; Qin, Huang, & Suganthan, 2009), DE with an Ensemble of Parameters and Mutation Strategies
Table 4
Mean and standard deviation (±SD) of the errors for the optimization of the benchmark functions (D = 50).

Function | HS | IHS | NDHS | GDHS | DH/best
A | 6.60e+02 (1.24e+02) | 3.03e+01 (7.69e+00) | 0.00e+00 (0.00e+00) | 7.10e-89 (2.01e-88) | 5.77e-23 (4.76e-22)
B | 3.27e+07 (1.24e+07) | 6.02e+04 (4.77e+04) | 2.29e+03 (9.38e+02) | 1.46e-03 (4.48e-03) | 1.68e-16 (5.53e-16)
C | 2.21e+00 (5.02e-01) | 2.82e-01 (6.35e-02) | 1.06e+00 (3.07e-01) | 4.84e-02 (4.10e-02) | 4.38e-06 (2.23e-05)
D | 7.46e+02 (2.01e+02) | 9.77e+01 (4.12e+01) | 4.78e+01 (4.38e-01) | 6.34e+01 (4.10e+01) | 3.36e+01 (2.63e+01)
E | 9.02e+02 (1.64e+02) | 8.17e+01 (1.74e+01) | 1.15e+01 (2.31e+01) | 7.10e-05 (3.82e-05) | 3.61e-12 (4.09e-12)
F | 5.39e+02 (1.04e+02) | 3.39e+01 (8.91e+00) | 1.60e+01 (6.85e+00) | 1.25e-04 (3.58e-05) | 7.26e-11 (2.01e-10)
G | 3.26e+04 (6.90e+03) | 1.89e+04 (4.98e+03) | 1.26e+04 (3.07e+03) | 8.52e+01 (4.91e+01) | 4.35e+01 (3.69e+01)
H | 5.29e+04 (1.27e+04) | 5.09e+04 (1.05e+04) | 6.54e+04 (1.45e+04) | 2.76e+04 (7.85e+03) | 1.61e+04 (5.33e+03)
I | 2.52e+06 (8.65e+05) | 1.85e+04 (1.29e+04) | 1.60e+04 (8.35e+03) | 5.29e+02 (1.09e+03) | 1.36e+02 (2.53e+02)
J | 4.16e+03 (7.54e+02) | 2.19e+02 (4.80e+01) | 1.67e+01 (9.41e+00) | 1.74e-01 (4.33e-02) | 6.58e-02 (6.90e-02)
K | 5.19e+01 (6.10e+00) | 1.45e+01 (2.54e+00) | 9.01e+00 (2.67e+00) | 9.98e+00 (2.71e+00) | 1.40e-02 (3.37e-02)
L | 5.52e+00 (4.49e-01) | 2.24e+00 (2.62e-01) | 2.13e+00 (3.10e-01) | 2.47e-03 (3.80e-04) | 2.58e-10 (3.40e-10)
Fig. 3. Convergence of the error curves for the 30-D Shifted Sphere function.
Fig. 4. Convergence of the error curves for the 30-D Shifted Schwefel 1.2 function.
Fig. 5. Convergence of the error curves for the 30-D Shifted Schwefel 1.2 function with noise.
Fig. 6. Convergence of the error curves for the 30-D Shifted Rosenbrock function.
(EPSDE; Mallipeddi, Suganthan, Pan, & Tasgetiren, 2011), and Composite Differential Evolution (CoDE; Wang, Cai, & Zhang, 2011), using the CEC2014 problem set (Liang, Qu, & Suganthan, 2013). In addition, a comprehensive study of the effects of the DH/best parameter settings is provided.
There are two main reasons for applying two different function
sets. First, the authors of the HS variants that are compared in this
paper used a function set that is similar to the one we used in experiment A. Therefore, to avoid studying the best parameter setting
of each algorithm separately and to conduct a fair comparison, we
used a similar function set.
Secondly, we wanted to study the DH/best parameter setting
based on one function set and then apply the same parameter setting to optimize another function set to show that the DH/best
algorithm is robust and is able to optimize a vast range of functions using the same parameter setting. We also wanted to pro-
vide a fair platform for the comparison between our algorithm
and other state of the art algorithms in experiment B. Because
the DH/best algorithm parameter setting is not biased for the optimization of the CEC2014 benchmarks, we can compare our algorithm with other algorithms based on their original parameter
setting and conduct a fair comparison.
5.1. Experiment A
This section presents a comprehensive experimental evaluation of the DH/best algorithm in comparison with HS and three variants of HS (IHS, NDHS, and GDHS) using a set of 12 high-dimensional unimodal, multimodal, shifted and shifted rotated optimization functions, which are introduced in detail in the appendix (Ali, Khompatraporn, & Zabinsky, 2005; Jamil & Yang, 2013;
Suganthan et al., 2005).
Fig. 7. Convergence of the error curves for the 30-D Shifted Rotated Griewank function.
Fig. 8. Convergence of the error curves for the 30-D Shifted Rastrigin function.
Of the functions that were selected for experiment A, the
Sphere function is a separable unimodal function, while the Shifted
Schwefel 1.2 function and the noisy version of the Shifted Schwefel
1.2 function are non-separable unimodal functions. The Rosenbrock
function, Shifted Rosenbrock function and Shifted Ackley function
are non-separable multimodal functions, and the remaining optimization functions are separable multimodal functions in which
the number of local minima increases exponentially as the dimension of the function increases.
We evaluated the optimization algorithms based on 30- and 50-dimensional versions of the benchmark functions and set the maximum Number of Improvisations (NI) to 50 × 10³ for both dimensions. Tables 2 and 4 show the means and standard deviations of the errors based on 30 independent runs of the algorithms on each optimization function. To statistically analyze the results of the comparison, a non-parametric statistical test, Wilcoxon's rank sum test (García, Molina, Lozano, & Herrera, 2009; Yadav et al., 2012), is conducted at a 5% significance level for the mean error of the DH/best algorithm against the mean error of the other variants of HS; the statistical results are reported in Tables 3 and 5 for the 30- and 50-dimensional optimization problems, respectively. In Tables 3 and 5, a value of h equal to -1 or 1 indicates that the results of the DH/best algorithm are worse or better than the compared variants of HS, respectively, while a
value of 0 indicates that there is no significant difference.
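As an aside for readers who want to reproduce this kind of test, the sketch below shows how such a pairwise comparison can be carried out with SciPy's rank-sum routine; the helper name and the decision rule based on mean errors are our own simplifications, not the authors' exact procedure.

```python
from scipy.stats import ranksums
import numpy as np

def compare_errors(dh_best_errors, other_errors, alpha=0.05):
    """Wilcoxon rank-sum comparison of two samples of run errors; returns h in {-1, 0, 1} as in Tables 3 and 5."""
    _, p_value = ranksums(dh_best_errors, other_errors)
    if p_value >= alpha:
        return 0                     # no significant difference
    # 1: DH/best significantly better (lower mean error), -1: significantly worse
    return 1 if np.mean(dh_best_errors) < np.mean(other_errors) else -1
```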
The parameter settings for HS and IHS as well as those of
the NDHS algorithm are taken from (Chen et al., 2012) and were
held constant. The GDHS algorithm does not have any parameters.
Table 1 shows the parameter settings for all of the algorithms including the proposed DH/best algorithm.
Fig. 9. Convergence of the error curves for the 30-D Shifted Ackley function.
Table 5
Rank sum test results of the comparison between the DH/best algorithm and other variants of HS (D = 50).

DH/best vs. | HS | IHS | NDHS | GDHS
A | 1 | 1 | -1 | -1
B | 1 | 1 | 1 | 1
C | 1 | 1 | 1 | 1
D | 1 | 1 | 1 | 1
E | 1 | 1 | 1 | 1
F | 1 | 1 | 1 | 1
G | 1 | 1 | 1 | 1
H | 1 | 1 | 1 | 0
I | 1 | 1 | 1 | 1
J | 1 | 1 | 1 | 1
K | 1 | 1 | 1 | 1
L | 1 | 1 | 1 | 1
Table 6
Success rate, average improvisation number and standard deviations of iterations
(D = 30).
Algorithm BEV
A
10−5
B
10−5
C
10−5
D
20
E
10−2
F
10−2
HS
100%
1854
(486)
100%
1738
(97)
100%
5580
(622)
100%
43,113
(302)
100%
7998
(532)
78%
32,028
(817)
100%
7800
(1195)
14%
27,250
(9420)
98%
21,900
(12,037)
4%
46.052
(6032)
90%
42,684
(2791)
100%
19,594
(7956)
100%
3818
(1688)
50%
24,493
(6678)
100%
34,061
(1129)
100%
31,030
(440)
100%
3259
(316)
IHS
5.1.1. Comparison of HS variants
As demonstrated in Tables 3 and 5, DH/best outperforms the other variants of HS on almost all of the functions in both 30 and 50 dimensions. DH/best outperforms HS and IHS on all of the benchmark functions. Compared with NDHS and GDHS, the DH/best algorithm performs worse only on the Sphere function, where all of the algorithms find high-precision solutions. The results for the various optimization functions demonstrate the global and the local search power of the DH/best algorithm, which makes it powerful for the optimization of unimodal, multimodal, separable and non-separable functions.
5.1.2. Robustness and convergence analysis
To evaluate the robustness of the DH/best algorithm, its convergence speed and success rate are compared with those of the other variants of HS. In this experiment, we set the NI to 50 × 10³ and ran each algorithm 50 times on the 30-dimensional optimization functions. An algorithm stops when its Best Error Value (BEV) falls below the predefined threshold or the maximum NI is reached. The BEV threshold of each function is set to the lowest value that at least one of the compared algorithms is able to reach. The Success Rate (SR) and the Average Improvisation Number (AIN), which is the average number of fitness function evaluations required by the HS variants to optimize the 30-dimensional optimization functions (A-L), are reported in Tables 6 and 7.

Tables 6 and 7 demonstrate that the DH/best algorithm is more
robust than the other variants of HS because it achieved higher SRs
than the other algorithms. Moreover, the proposed algorithm out-
NDHS
GDHS
DH/best
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
Table 7
Success rate, average improvisation number and standard deviations of iterations
(D = 30) (continued).
Algorithm BEV
G
10−2
H
10+3
I
30
J
10−2
K
10−1
L
10−4
HS
100%
38,503
(989)
100%
30,477
(2915)
40%
35,406
(12,307)
30%
31,365
(10,792)
22%
26,065
(8812)
100%
9455
(3.109)
100%
9.110
(3.183)
48%
35,383
(1643)
80%
18,393
(11,370)
22%
30,451
(11,618)
98%
34,381
(1966)
100%
44,954
(346)
100%
5936
(5328)
26%
44,027
5115
64%
40,257
6463
4%
48,970
(521)
12%
46,986
(1763)
100%
21,237
(6853)
100%
48,571
(172)
100%
9993
(2713)
IHS
NDHS
GDHS
DH/best
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
SR
AIN
SD
Table 8
Evaluation of the components of the DH/best algorithm based on the success rate.

Algorithm | Shifted Rastrigin function | Shifted Ackley function
HS | 0% | 0%
HS + new harmony memory initialization | 0% | 0%
DH/best with random harmony memory initialization | 93.30% | 96.60%
DH/best | 100% | 100%
Table 9
The effect of the HMS on the mean and standard deviation of the error of function optimization (D = 30).

Function | HMS = 5 | HMS = 30 | HMS = 50 | HMS = 100
A | 4.94e-02 (2.89e-02) | 9.23e-19 (4.96e-18) | 1.57e-64 (4.62e-64) | 1.13e-45 (3.43e-45)
B | 3.26e+01 (2.40e+01) | 5.84e-09 (1.71e-08) | 5.54e-25 (2.95e-24) | 1.65e-28 (6.13e-29)
C | 5.69e-03 (2.85e-03) | 1.62e-04 (5.18e-04) | 5.71e-11 (3.07e-10) | 9.62e-15 (3.54e-15)
D | 2.95e+01 (3.21e+01) | 1.26e+01 (2.12e+01) | 9.87e+00 (1.74e+01) | 1.22e+01 (2.19e+01)
E | 2.58e-03 (6.06e-03) | 7.95e-11 (2.29e-10) | 1.27e-12 (2.73e-12) | 3.30e-12 (6.37e-12)
F | 4.92e-02 (3.27e-02) | 6.17e-05 (2.65e-04) | 1.92e-13 (1.25e-12) | 1.80e-13 (8.31e-14)
G | 7.88e+03 (2.48e+03) | 4.53e+01 (1.91e+01) | 3.12e-04 (4.45e-04) | 7.27e-03 (7.99e-03)
H | 3.85e+04 (1.18e+04) | 1.07e+04 (4.90e+03) | 4.35e+03 (4.33e+03) | 7.49e+02 (6.12e+02)
I | 2.57e+03 (6.33e+03) | 1.66e+03 (6.20e+03) | 4.13e+01 (5.19e+01) | 3.84e+01 (5.88e+01)
J | 7.53e-02 (3.81e-02) | 5.54e-02 (4.96e-02) | 2.82e-03 (1.00e-02) | 8.92e-03 (1.19e-02)
K | 8.35e-02 (1.76e-01) | 9.95e-02 (2.98e-01) | 4.77e-12 (1.69e-11) | 4.85e-12 (1.54e-11)
L | 5.73e-02 (1.89e-02) | 2.12e-04 (4.07e-04) | 1.74e-13 (1.92e-13) | 2.94e-14 (9.31e-15)
Table 10
The effect of the HMCR on the mean and standard deviation of the error of function optimization (D = 30).

Function | HMCR = 0.8 | HMCR = 0.9 | HMCR = 0.95 | HMCR = 0.99
A | 1.73e+02 (4.41e+01) | 3.61e-03 (2.31e-03) | 3.23e-17 (1.44e-16) | 1.57e-64 (4.62e-64)
B | 4.80e+06 (1.77e+06) | 1.47e+01 (1.26e+01) | 1.80e-11 (9.69e-11) | 5.54e-25 (2.95e-24)
C | 9.52e+00 (2.74e+00) | 2.31e+02 (5.21e-02) | 6.94e-04 (3.73e-03) | 5.71e-11 (3.07e-10)
D | 2.88e+02 (1.07e+02) | 5.47e+01 (3.40e+01) | 4.04e+01 (3.23e+01) | 9.87e+00 (1.74e+01)
E | 5.12e-02 (3.77e-02) | 3.55e-02 (3.51e-02) | 4.27e-10 (1.77e-09) | 1.27e-12 (2.73e-12)
F | 1.83e+02 (6.21e+01) | 4.74e-03 (3.95e-03) | 9.43e-13 (1.02e-12) | 1.92e-13 (1.25e-12)
G | 5.48e+03 (1.28e+03) | 5.01e+02 (1.28e+02) | 9.17e+00 (7.38e+00) | 3.12e-04 (4.45e-04)
H | 1.03e+04 (3.08e+03) | 3.59e+03 (1.52e+03) | 1.59e+03 (1.14e+03) | 4.35e+03 (4.33e+03)
I | 7.56e+05 (3.43e+05) | 7.45e+02 (1.52e+03) | 9.56e+01 (1.61e+02) | 4.13e+01 (5.19e+01)
J | 1.53e+00 (1.63e-01) | 3.70e-02 (2.05e-02) | 3.13e-02 (1.04e-02) | 2.82e-03 (1.00e-02)
K | 1.54e+02 (1.86e+01) | 3.84e+01 (2.48e+01) | 2.90e-10 (1.46e-09) | 4.77e-12 (1.69e-11)
L | 4.78e+00 (4.45e-01) | 2.46e-02 (1.40e-02) | 2.12e-09 (7.12e-09) | 1.74e-13 (1.92e-13)
Table 11
The effect of the PAR on the mean and standard deviation of the error of function optimization (D = 30).

Function | PAR = 0.3 | PAR = 0.5 | PAR = 0.7 | PAR = 0.9
A | 8.85e-17 (4.76e-16) | 2.87e-56 (1.21e-55) | 1.93e-61 (8.02e-61) | 1.57e-64 (4.62e-64)
B | 5.20e-14 (2.25e-13) | 3.37e-26 (1.32e-25) | 2.03e-27 (3.91e-27) | 5.54e-25 (2.95e-24)
C | 3.14e-04 (1.69e-03) | 1.52e-14 (2.16e-14) | 1.26e-14 (6.43e-15) | 5.71e-11 (3.07e-10)
D | 5.44e+01 (3.28e+01) | 3.44e+01 (3.24e+01) | 1.38e+01 (2.05e+01) | 9.87e+00 (1.74e+01)
E | 6.43e-03 (6.70e-03) | 3.86e-05 (6.26e-05) | 1.19e-11 (1.42e-11) | 1.27e-12 (2.73e-12)
F | 1.08e-11 (4.68e-11) | 5.13e-13 (8.26e-13) | 1.05e-12 (2.85e-12) | 1.92e-13 (1.25e-12)
G | 3.25e+02 (1.52e+02) | 7.71e+00 (4.77e+00) | 7.84e-02 (6.63e-02) | 3.12e-04 (4.45e-04)
H | 1.24e+03 (8.88e+02) | 7.71e+02 (5.33e+02) | 8.75e+02 (8.04e+02) | 4.35e+03 (4.33e+03)
I | 9.68e+01 (6.34e+01) | 5.40e+01 (5.79e+01) | 1.39e+02 (2.77e+02) | 4.13e+01 (5.19e+01)
J | 2.03e-02 (3.04e-02) | 2.63e-02 (2.75e-02) | 1.67e-02 (2.31e-02) | 2.82e-03 (1.00e-02)
K | 9.62e-09 (5.17e-08) | 1.70e-05 (9.17e-05) | 7.61e-13 (9.95e-13) | 4.77e-12 (1.69e-11)
L | 2.51e-12 (9.41e-12) | 1.74e-13 (6.05e-13) | 7.77e-14 (7.30e-14) | 1.74e-13 (1.92e-13)
performs the other variants of HS in convergence speed because
it requires a lower AIN to achieve the expected result. This is true
for all of the benchmark optimization functions except the Sphere
function, where the NDHS and GDHS algorithms have better convergence than the DH/best algorithm. Therefore, the DH/best algorithm is more reliable and efficient than the other variants of HS
in optimizing global optimization functions.
Figs. 3-9 show the convergence of the DH/best algorithm compared to the other variants of HS for the optimization of the shifted
and shifted rotated benchmark functions. The DH/best algorithm
clearly outperforms the other variants of HS in convergence.
5.1.3. Evaluation of the components of the DH/best algorithm
To evaluate the two proposed components of the DH/best algorithm (the new harmony memory initialization method and the
new differential-based pitch adjustment method), we studied them
separately using two highly multimodal functions (the Shifted Rastrigin function and the Shifted Ackley function) in 30-dimensional
mode. In this experiment, we evaluated all of the algorithms based on 50 × 10³ harmony improvisations. The SR is defined as the percentage of successful runs in which the global minimum is found with 10⁻⁷ precision over 30 independent evaluations of each algorithm
by each function. As shown in Table 8, the new harmony mem-
Table 12
Means and standard deviations of errors for the CEC 2014 benchmark functions (dimension = 30).
Function
CLPSO
ABC
GSO
SaDE
EPSDE
CoDE
DH/BEST
1.
1.20E+08
(3.16E+07)
3.15E+09
(7.96E+08)
4.04E+04
(1.16E+04)
4.96E+02
(7.33E+01)
2.10E+01
(7.07E-02)
2.85E+01
(1.47E+00)
2.92E+01
(5.31E+00)
1.62E+02
(1.27E+01)
2.51E+02
(1.22E+01)
4.26E+03
(2.91E+02)
6.30E+03
(4.23E+02)
2.47E+00
(3.52E-01)
6.79E-01
(8.40E-02)
5.14E+00
(2.71E+00)
8.27E+02
(4.27E+02)
1.29E+01
(2.28E-01)
4.31E+06
(1.72E+06)
1.12E+07
(6.16E+06)
4.87E+01
(1.61E+01)
2.06E+04
(9.05E+03)
7.15E+05
(3.96E+05)
4.98E+02
(1.27E+02)
3.37E+02
(4.88E+00)
2.70E+02
(3.52E+00)
2.22E+02
(2.80E+00)
1.01E+02
(5.94E-01)
5.00E+02
(3.04E+01)
1.61E+03
(1.80E+02)
9.25E+04
(3.55E+04)
3.02E+04
(8.06E+03)
3.08E+07
(1.90E+07)
1.82E+04
(1.24E+04)
2.13E+03
(1.51E+03)
1.05E+02
(2.63E+01)
2.04E+01
(4.71E-02)
1.92E+01
(1.61E+00)
3.12E-01
(1.30E-01)
6.45E+00
1.59E+00
1.25E+02
(2.12E+01)
1.01E+02
(7.57E+01)
2.90E+03
(2.51E+02)
4.66E-01
(8.11E-02)
3.82E-01
(4.32E-02)
2.39E-01
(2.98E-02)
1.38E+01
(3.13E+00)
1.13E+01
(3.30E-01)
5.98E+06
(3.90E+06)
7.51E+03
(7.16E+03)
1.81E+01
(7.10E+00)
1.49E+04
(9.14E+03)
8.89E+05
(6.78E+05)
4.55E+02
(1.21E+02)
3.19E+02
(2.36E+00)
2.30E+02
(1.72E+00)
2.12E+02
(1.59E+00)
1.00E+02
(5.90E-02)
4.84E+02
(1.51E+01)
1.18E+03
(1.46E+02)
1.71E+03
(4.92E+02)
8.24E+03
(3.90E+03)
2.20E+08
(5.38E+07)
1.44E+10 (2.10E+09)
6.27E+06
(3.83E+06)
9.09E+03
(4.82E+03)
2.04E+02
(3.86E+02)
1.04E+02
(2.81E+01)
2.08E+01
(4.50E-02)
7.37E+00
(5.31E+00)
1.04E-02
(1.76E-02)
3.25E+01
(4.08E+00)
1.45E+02
(7.90E+00)
1.36E+03
(2.70E+02)
5.67E+03
(2.94E+02)
1.69E+00
(2.33E-01)
3.64E-01
(6.60E-02)
2.96E-01
(3.93E-02)
1.47E+01
(1.47E+00)
1.23E+01
(2.54E-01)
6.00E+04
(5.26E+04)
4.23E+02
(6.10E+03)
8.21E+00
(1.09E+01)
1.07E+03
(1.62E+03)
5.80E+03
(6.74E+03)
5.73E+02
(1.13E+02)
3.19E+02
(2.20E-02)
2.39E+02
(4.61E+00)
2.19E+02
(1.82E+00)
1.00E+02
(7.86E-02)
6.52E+02
(5.05E+01)
9.06E+02
(4.88E+01)
1.39E+03
(2.40E+02)
2.82E+03
(6.99E+02)
1.52E+07
(2.77E+07)
4.22E+01
(9.19E+01)
4.32E+01
(1.89E+02)
3.33E+01
(4.17E+01)
2.10E+01
(6.23E-02)
2.94E+01
(1.25E+00)
3.04E-03
(6.06E-03)
3.06E+01
(6.02E+00)
1.21E+02
(1.23E+01)
2.14E+03
(3.22E+02)
5.63E+03
(3.19E+02)
1.11E+00
(1.11E-01)
4.02E-01
(7.74E-02)
3.67E-01
(3.03E-02)
1.25E+01
(1.20E+00)
1.25E+01
(2.54E-01)
4.86E+05
(2.19E+06)
1.62E+04
(6.17E+04)
1.62E+01
(1.05E+00)
8.91E+02
(2.56E+03)
4.88E+04
(1.04E+05)
6.44E+02
(1.22E+02)
3.17E+02
(1.59E-02)
2.39E+02
(5.02E+00)
2.14E+02
(4.34E+00)
1.00E+02
(6.34E-02)
1.10E+03
(3.31E+01)
3.97E+02
(1.45E+01)
2.18E+02
(1.35E+00)
1.00E+03
(2.63E+02)
7.81E+06
(5.32E+06)
3.28E+07
(1.32E+07)
2.20E+02
(7.34E+01)
1.71E+02
(1.34E+01)
2.08E+01
(4.55E-02)
3.01E+01
(1.40E+00)
1.30E+00
(9.26E-02)
1.05E+02
(7.50E+00)
2.00E+02
(1.22E+01)
3.34E+03
(2.25E+02)
6.17E+03
(2.72E+02)
1.72E+00
(2.22E-01)
5.86E-01
(8.43E-02)
4.00E-01
(5.41E-02)
2.07E+01
(1.87E+00)
1.26E+01
(2.75E-01)
1.64E+04
(9.04E+03)
5.43E+03
(2.73E+03)
1.30E+01
(8.47E-01)
1.13E+02
(1.98E+01)
2.67E+03
(4.83E+02)
4.89E+02
(9.56E+01)
3.16E+02
(2.68E-01)
2.41E+02
(3.53E+00)
2.08E+02
(1.24E+00)
1.01E+02
(8.04E-02)
5.07E+02
(1.28E+02)
1.14E+03
(2.19E+01)
7.30E+03
(3.08E+03)
5.57E+03
(1.19E+03)
1.01E+06
(6.31E+05)
8.95E+03
(8.43E+03)
1.75E+04
(1.20E+04)
8.12E+01
(3.97E+01)
2.10E+01
(4.85E-02)
1.46E+01
(2.38E+00)
2.71E-02
(3.17E-02)
1.29E+00
(1.05E+00)
7.77E+01
(1.94E+01)
3.93E+00
(2.70E+00)
3.78E+03
(2.38E+03)
3.04E+00
(4.05E-01)
5.34E-01
(1.12E-01)
3.65E-01
(1.43E-01)
1.22E+01
(5.31E+00)
1.21E+01
(4.52E-01)
3.60E+05
(2.54E+05)
5.16E+03
(6.36E+03)
2.11E+01
(2.55E+01)
2.30E+04
(1.38E+04)
1.79E+05
(1.38E+05)
5.11E+02
(1.93E+02)
3.15E+02
(3.88E-11)
2.30E+02
(1.74E+00)
2.08E+02
(4.88E+00)
1.43E+02
(6.97E+01)
6.47E+02
(1.86E+02)
1.13E+03
(1.89E+02)
1.28E+03
(3.68E+02)
2.77E+03
(8.40E+02)
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
9.96E+04 (1.76E+04)
1.84E+03
(2.51E+02)
2.04E+01
(8.21E-02)
1.26E+02
(6.15E+00)
9.62E+01
(1.92E+01)
4.17E+02
(3.38E+01)
8.09E+02
(6.48E+01)
7.66E+03
(1.01E+03)
1.76E+04
(1.39E+03)
1.14E+00
(2.12E-01)
5.54E-01
(4.85E-02)
5.59E+00
(1.01E+01)
2.98E+03
(1.75E+03)
4.48E+01
(8.60E-01)
4.50E+07
(1.46E+07)
8.16E+06
(1.06E+07)
2.32E+02
(4.93E+01)
1.26E+05
(3.31E+04)
1.88E+07
(6.27E+06)
3.05E+03
(4.81E+02)
4.26E+02
(1.53E+01)
4.03E+02
(8.92E+00)
3.35E+02
(2.00E+01)
2.05E+02
(1.55E+00)
3.55E+03
(1.85E+02)
1.99E+04
(3.25E+03)
8.13E+05
(6.96E+05)
7.73E+05
(2.26E+05)
ory initialization method increases the robustness of the DH/best
algorithm by increasing the SR and reducing the chance of finding local minima, and the new improvisation method improves the
search power and makes the main contribution to the performance
of the DH/best algorithm.
5.2. Effects of HMS, HMCR and PAR
This section investigates the effects of the DH/best parameters
on the optimization performance. Tables 9-11 show the results of
the optimization of the benchmark functions using different settings of HMS, HMCR and PAR.
The results in Table 9 demonstrate that the performance
of the DH/best algorithm depends on the size of the HM. A
small harmony size causes local minima to be found instead of
global minima. Therefore, HMS values between 50 and 100 are
suggested.
As illustrated in Table 10, HMCR values less than 0.99 deteriorate the performance of the proposed algorithm by increasing
random selections. However, randomness is necessary to bring di-
Table 13
Means and standard deviations of errors for the CEC 2014 benchmark functions (dimension = 50).
Function
CLPSO
ABC
GSO
SaDE
EPSDE
CoDE
DH/BEST
1.
1.48E+08
(2.57E+07)
6.81E+09
(1.12E+09)
9.86E+04
(1.68E+04)
9.77E+02
(1.39E+02)
2.11E+01
(4.87E-02)
5.08E+01
(2.46E+00)
6.32E+01
(8.63E+00)
2.92E+02
(1.87E+01)
4.73E+02
(2.13E+01)
7.62E+03
(5.19E+02)
1.14E+04
(5.09E+02)
2.67E+00
(3.28E-01)
7.61E-01
(8.47E-02)
1.60E+01
(3.71E+00)
3.31E+03
(1.93E+03)
2.24E+01
(2.33E-01)
1.77E+07
(4.74E+06)
2.51E+07
(8.22E+06)
9.23E+01
(1.19E+01)
5.17E+04
(1.06E+04)
5.71E+06
(2.15E+06)
1.36E+03
(1.71E+02)
3.90E+02
(8.19E+00)
3.39E+02
(6.72E+00)
2.46E+02
(5.30E+00)
1.10E+02
(2.83E+01)
1.33E+03
(3.66E+02)
3.02E+03
(4.20E+02)
4.10E+05
(1.64E+05)
6.00E+04
(1.52E+04)
2.81E+07
(1.01E+07)
2.88E+04
(4.11E+04)
1.06E+04
(3.66E+03)
1.59E+02
(2.76E+01)
2.04E+01
(3.53E-02)
3.78E+01
(2.65E+00)
5.72E-01
(1.36E-01)
1.26E+01
(1.74E+00)
2.58E+02
(2.83E+01)
2.29E+02
(1.07E+02)
5.74E+03
(3.27E+02)
4.71E-01
(5.73E-02)
4.51E-01
(4.11E-02)
2.98E-01
(2.50E-02)
3.14E+01
(6.02E+00)
1.97E+01
(4.02E-01)
1.01E+07
(4.96E+06)
9.92E+03
(9.94E+03)
3.33E+01
(1.06E+01)
3.96E+04
(1.29E+04)
7.30E+06
(4.36E+06)
1.14E+03
(1.89E+02)
3.57E+02
(7.30E+00)
2.71E+02
(1.78E+00)
2.22E+02
(2.80E+00)
1.01E+02
(6.75E-02)
1.08E+03
(3.78E+02)
2.15E+03
(3.42E+02)
3.32E+03
(1.46E+03)
1.61E+04
(4.10E+03)
2.39E+07
(6.94E+06)
4.88E+07
(2.58E+07)
2.00E+04
(6.02E+03)
2.92E+02
(5.23E+01)
2.00E+01
(2.90E-02)
4.72E+01
(4.07E+00)
1.81E+00
(5.33E-01)
7.70E+01
(1.60E+01)
3.09E+02
(4.95E+01)
1.01E+03
(3.73E+02)
6.79E+03
(9.11E+02)
6.03E-01
(1.91E-01)
5.59E-01
(6.64E-02)
3.09E-01
(2.80E-02)
1.47E+02
(3.90E+01)
2.28E+01
(3.05E-01)
4.00E+06
(1.42E+06)
1.28E+03
(9.26E+02)
5.37E+01
(3.36E+01)
1.94E+04
(1.11E+04)
3.36E+06
(1.65E+06)
1.48E+03
(3.27E+02)
3.54E+02
(1.97E+00)
2.71E+02
(7.39E+00)
2.48E+02
(8.18E+00)
1.94E+02
(2.55E+01)
1.63E+03
(9.43E+01)
7.04E+03
(1.17E+03)
5.43E+03
(2.32E+03)
5.17E+04
(1.36E+04)
8.10E+06
(2.58E+06)
4.00E+03
(4.80E+03)
1.12E+04
(4.35E+03)
1.70E+02
(2.86E+01)
2.12E+01
(4.19E-02)
1.81E+01
(3.56E+00)
4.05E-02
(2.36E-02)
6.47E+01
(6.76E+00)
2.93E+02
(1.99E+01)
3.12E+03
(3.81E+02)
1.12E+04
(3.98E+02)
2.10E+00
(2.57E-01)
5.26E-01
(6.49E-02)
3.34E-01
(3.54E-02)
4.27E+01
(5.85E+00)
2.19E+01
(2.50E-01)
6.85E+05
(4.67E+05)
6.74E+02
(6.56E+02)
5.44E+01
(2.70E+01)
5.65E+03
(3.80E+03)
2.14E+05
(1.48E+05)
1.06E+03
(2.02E+02)
3.46E+02
(8.70E-04)
2.76E+02
(3.57E+00)
2.17E+02
(9.12E+00)
1.87E+02
(3.45E+01)
7.73E+02
(9.54E+01)
1.58E+03
(1.87E+02)
1.89E+03
(4.18E+02)
1.57E+04
(2.75E+03)
2.44E+07
(4.69E+07)
7.13E+03
(6.98E+03)
5.06E+03
(3.58E+03)
8.52E+01
(3.18E+01)
2.12E+01
(3.99E-02)
5.92E+01
(2.47E+00)
9.35E-03
(1.30E-02)
1.05E+02
(1.23E+01)
2.75E+02
(1.56E+01)
5.91E+03
(7.31E+02)
1.13E+04
(3.57E+02)
1.63E+00
(1.87E-01)
5.04E-01
(6.57E-02)
4.32E-01
(1.37E-01)
3.47E+01
(4.33E+00)
2.21E+01
(3.65E-01)
1.40E+06
(2.33E+06)
1.36E+04
(3.03E+04)
3.03E+01
(9.87E+00)
9.37E+03
(1.44E+04)
1.03E+06
(2.68E+06)
1.53E+03
(1.90E+02)
3.47E+02
(9.27E-06)
2.74E+02
(5.69E+00)
2.17E+02
(1.21E+01)
1.01E+02
(8.47E-02)
1.87E+03
(3.49E+01)
3.95E+02
(1.56E+01)
2.31E+02
(5.99E+00)
1.98E+03
(3.38E+02)
1.21E+07
(4.48E+06)
1.89E+07
(9.45E+06)
4.16E+03
(1.89E+03)
1.44E+02
(1.55E+01)
2.10E+01
(6.56E-02)
5.57E+01
(2.67E+00)
1.20E+00
(7.20E-02)
2.30E+02
(1.45E+01)
3.80E+02
(1.89E+01)
7.26E+03
(3.84E+02)
1.21E+04
(4.27E+02)
2.47E+00
(2.74E-01)
6.53E-01
(6.56E-02)
4.31E-01
(8.50E-02)
3.78E+01
(2.26E+00)
2.28E+01
(3.26E-01)
1.81E+05
(1.24E+05)
3.62E+03
(2.31E+03)
3.62E+01
(1.08E+01)
5.04E+02
(3.17E+02)
2.12E+04
(1.61E+04)
1.44E+03
(1.59E+02)
3.55E+02
(1.77E-01)
2.83E+02
(1.80E+00)
2.18E+02
(1.94E+00)
1.04E+02
(1.82E+01)
1.28E+03
(1.47E+02)
1.92E+03
(1.26E+02)
2.00E+04
(7.15E+03)
1.97E+04
(2.00E+03)
2.09E+06
(7.68E+05)
6.22E+03
(8.29E+03)
2.22E+04
(1.04E+04)
1.09E+02
(3.72E+01)
2.12E+01
(3.63E-02)
3.03E+01
(3.97E+00)
3.63E-02
(4.10E-02)
8.00E-01
(9.93E-01)
1.54E+02
(3.14E+01)
3.13E+00
(1.41E+00)
1.10E+04
(3.44E+03)
3.94E+00
(3.10E-01)
6.74E-01
(1.03E-01)
5.06E-01
(2.75E-01)
3.42E+01
(1.23E+01)
2.19E+01
(3.57E-01)
6.78E+05
(5.32E+05)
1.72E+03
(1.54E+03)
4.07E+01
(2.77E+01)
1.58E+04
(1.17E+04)
6.85E+05
(3.74E+05)
1.00E+03
(2.89E+02)
3.44E+02
(3.60E-10)
2.71E+02
(1.52E+00)
2.16E+02
(4.58E+00)
1.73E+02
(6.11E+01)
1.20E+03
(9.68E+01)
2.29E+03
(6.01E+02)
1.96E+03
(6.04E+02)
1.28E+04
(2.28E+03)
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
versity to the HM and to avoid local minima. Therefore, a value
of 0.99 is suggested for the HMCR based on the experimental
results.
Table 11 shows that small values of PAR reduce the convergence
rate of the DH/best algorithm. Based on the experimental results,
PAR values greater than 0.7 are suggested.
5.3. Experiment B
To demonstrate the superiority of the DH/best algorithm, we
compared it to six well-known optimization algorithms from other
families. The Artificial Bee Colony (ABC; Akbari, Hedayatzadeh, Ziarati, & Hassanizadeh, 2012; Karaboga & Basturk, 2007) and the Group Search Optimizer (GSO; He et al., 2009) are recently published algorithms, while the Comprehensive Learning Particle Swarm Optimizer (CLPSO; Liang et al., 2006) is a competitive variant of PSO. Composite Differential Evolution (CoDE; Y. Wang et al., 2011), the differential evolution algorithm with an Ensemble of Parameters and Mutation Strategies (EPSDE; Mallipeddi et al., 2011), and Self-adaptive DE (SaDE; Qin et al., 2009) are competitive variants of DE.
Table 14
Summary of Wilcoxon's rank sum test at a 5% significance level.

Vs. DH/best |            | 30 Dimensions | 50 Dimensions
CLPSO | +(better)  | 3  | 4
      | −(worse)   | 24 | 24
      | ≈(no sig.) | 3  | 2
ABC   | +(better)  | 9  | 8
      | −(worse)   | 14 | 16
      | ≈(no sig.) | 7  | 6
GSO   | +(better)  | 4  | 5
      | −(worse)   | 22 | 20
      | ≈(no sig.) | 4  | 5
SaDE  | +(better)  | 11 | 10
      | −(worse)   | 15 | 16
      | ≈(no sig.) | 4  | 4
EPSDE | +(better)  | 11 | 8
      | −(worse)   | 16 | 15
      | ≈(no sig.) | 3  | 7
CoDE  | +(better)  | 7  | 8
      | −(worse)   | 17 | 17
      | ≈(no sig.) | 6  | 5
During our experiments, the population size of all of the algorithms was 50, which was used in previous studies (He et al.,
2009; Karaboga, Gorkemli, Ozturk, & Karaboga, 2014; Liang et al.,
2006; Mallipeddi et al., 2011; Qin et al., 2009). The other parameters of the algorithms were set as suggested in the original publications. The parameters of the DH/best algorithm are set as shown
in Table 1.
The DH/best algorithm and the other optimization algorithms
are compared based on the 30- and 50-dimensional versions of
the CEC 2014 benchmark functions because most of the compared
algorithms have tuned their parameters based on the previously
mentioned dimensions. The maximum number of fitness evaluations is set to D × 10³. We chose a lower number of fitness function evaluations than was used in the CEC2014 competition because of the importance of finding solutions for computationally expensive problems or problems that involve physical simulations to evaluate candidate solutions (Liu et al., 2013). The means and standard deviations of the errors of 30 independent runs of each algorithm are presented in Tables 12 and 13. The error is defined as the distance between the fitness of the best solution found by the optimization algorithm and the fitness of the optimum solution. The
highlighted values show the lowest mean errors of the compared
algorithms for each function.
To statistically analyze the results, the Wilcoxon’s rank sum test
(Derrac, García, Molina, & Herrera, 2011) is conducted at a 5% significance level. Table 14 shows the results of the statistical testing for the CEC 2014 problems. The symbols (+, −, ∼
=) indicate that
a given algorithm performed significantly better (+), significantly
worse (−), or not significantly different (∼
=) than DH/best.
Generally, the DH/best algorithm outperforms all of the compared algorithms based on the pairwise statistical comparison provided in Table 14. The statistical results show that the SaDE algorithm, which is the most competitive algorithm to the DH/best algorithm, outperforms DH/best in the optimization of nearly 30% of
the functions in both dimensions, while DH/best produces better
results than SaDE for the optimization of nearly 50% of the functions; for the remaining functions, there is no significant difference
between the results. When the DH/best algorithm is compared
with CLPSO, which is the least competitive algorithm among the
compared algorithms, DH/best outperforms CLPSO in the optimization of 80% of the functions, while CLPSO outperforms DH/best for
13% of the functions, and there is no significant difference between
the two in the optimization of 7% of the functions.
Because more than two algorithms are compared, they must be
compared using Friedman ranks (Abedinpourshotorban, Shamsuddin, Beheshti, & Jawawi, 2015; Derrac et al., 2011) to avoid transitivity between the results. Tables 15 and 16 show the results of the
Friedman ranks of both dimensions (30 and 50). The DH/best algo-
Table 15
Mean Friedman ranks of errors for the CEC 2014 benchmark functions (dimension = 30).
Function
CLPSO
ABC
GSO
SaDE
EPSDE
CoDE
DH/BEST
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
Sum
5.967
6.0 0 0
5.867
6.0 0 0
6.267
4.767
6.0 0 0
6.0 0 0
6.0 0 0
6.0 0 0
5.200
6.100
6.500
6.733
6.0 0 0
5.567
5.400
6.700
5.800
5.167
5.867
3.700
6.0 0 0
6.0 0 0
5.967
5.667
3.967
5.833
6.033
6.0 0 0
173.067
4.800
3.367
3.967
3.800
1.433
2.800
4.0 0 0
2.0 0 0
2.800
2.0 0 0
1.467
1.500
2.200
1.533
2.900
1.067
5.600
3.833
3.933
4.533
5.400
3.0 0 0
4.933
2.833
4.333
2.667
2.433
4.267
3.500
4.667
97.567
6.967
7.0 0 0
7.0 0 0
7.0 0 0
1.600
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
2.767
4.800
3.967
7.0 0 0
7.0 0 0
7.0 0 0
6.300
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
7.0 0 0
6.933
7.0 0 0
7.0 0 0
6.967
7.0 0 0
194.300
3.100
2.800
2.233
3.633
4.467
1.500
2.333
3.667
3.767
3.033
4.200
4.800
2.400
2.867
3.700
3.167
2.700
1.867
1.833
2.433
2.300
2.300
3.0 0 0
2.667
3.133
2.467
3.667
2.267
2.967
2.900
88.167
2.533
1.0 0 0
2.133
1.200
6.667
4.700
1.767
3.333
2.367
3.967
3.733
3.467
3.0 0 0
4.067
2.333
4.067
2.433
2.333
4.233
2.300
2.800
6.100
2.0 0 0
2.133
3.200
2.533
6.0 0 0
1.500
1.0 0 0
1.0 0 0
89.900
3.367
5.0 0 0
2.800
5.0 0 0
4.600
5.533
5.0 0 0
5.0 0 0
5.0 0 0
5.0 0 0
4.833
4.567
5.500
5.167
4.900
4.233
1.533
4.133
2.967
1.300
1.467
3.400
4.067
4.633
2.700
4.033
3.100
4.033
5.0 0 0
4.233
122.100
1.267
2.833
4.0 0 0
1.367
2.967
1.700
1.900
1.0 0 0
1.067
1.0 0 0
1.567
4.800
3.600
3.667
1.167
2.900
3.333
2.833
2.233
5.267
3.167
2.500
1.0 0 0
2.733
1.667
3.700
1.833
3.300
2.533
2.200
75.100
Table 16
Mean Friedman ranks of errors for the CEC 2014 benchmark functions (dimension = 50).

Function   CLPSO      ABC        GSO        SaDE       EPSDE      CoDE       DH/BEST
1.         6.967      5.400      4.933      2.833      3.000      3.700      1.167
2.         7.000      3.567      5.833      2.167      1.933      5.167      2.333
3.         7.000      3.200      5.167      3.567      2.967      2.600      3.500
4.         7.000      4.100      6.000      4.233      1.300      3.067      2.300
5.         5.967      2.000      1.000      4.300      6.933      4.767      3.033
6.         4.967      2.933      4.233      1.433      6.667      6.100      1.667
7.         7.000      4.000      5.967      2.967      1.500      5.033      1.533
8.         6.500      2.000      3.967      3.667      5.367      5.500      1.000
9.         7.000      2.533      4.333      4.033      3.233      5.867      1.000
10.        6.800      2.000      3.000      4.000      5.033      6.167      1.000
11.        4.667      1.233      2.100      5.300      4.467      6.100      4.133
12.        6.100      1.667      1.733      4.233      3.400      5.167      5.700
13.        6.567      1.100      3.800      3.200      2.667      5.367      5.300
14.        7.000      1.833      2.267      3.033      4.700      5.200      3.967
15.        7.000      2.400      6.000      4.367      1.867      3.633      2.733
16.        6.300      1.000      2.167      3.567      5.067      5.600      4.300
17.        6.833      6.100      5.033      3.200      2.500      1.300      3.033
18.        7.000      5.233      2.567      1.867      4.167      4.467      2.700
19.        6.733      3.267      4.100      4.067      3.100      3.600      3.133
20.        6.667      5.967      4.633      2.667      2.967      1.067      4.033
21.        6.267      6.300      5.167      2.433      3.100      1.000      3.733
22.        4.700      2.667      5.100      3.433      5.767      5.200      1.133
23.        7.000      5.600      5.400      3.000      2.000      4.000      1.000
24.        7.000      2.933      2.633      4.233      3.000      5.833      2.367
25.        6.400      4.333      6.600      3.333      1.867      3.000      2.467
26.        4.167      2.800      6.433      4.933      2.200      3.233      4.233
27.        4.167      3.000      5.800      1.367      6.967      3.633      3.067
28.        5.833      4.233      7.000      2.100      1.500      3.500      3.833
29.        7.000      3.967      4.867      2.600      1.000      5.967      2.600
30.        6.700      3.633      6.300      3.433      1.000      4.500      2.433
Sum        190.300    101.000    134.133    99.567     101.233    129.333    84.433
The DH/best algorithm has a lower summed rank than every other algorithm and thus outperforms all of the compared algorithms in both dimensions. The proposed DH/best algorithm is therefore a highly competitive optimizer that can be applied to the optimization of real-world problems.
In the future, applications of the DH/best algorithm to real-world problems will be investigated. In addition, self-adaptive parameter-setting approaches will be proposed to dynamically change the parameters of the DH/best algorithm during the optimization process, further improving the accuracy and search power of the algorithm.
6. Conclusion
This research presented a new variant of the HS algorithm, called the DH/best algorithm, for the optimization of continuous problems. The proposed algorithm introduces a novel harmony memory initialization method that spreads the initial harmonies over the entire search space to reduce the chance of becoming trapped in local minima. In addition, a new differential-based pitch adjustment method was proposed, which eliminates the need to set the value of bw and instead adjusts pitches based on the distances between the harmonies in the HM. The proposed pitch adjustment method also increases the cooperation between the harmonies, which results in gathering more information about the search space.
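As a rough sketch of the kind of update this describes (an illustration under our own naming assumptions; the exact DH/best improvisation rule is specified earlier in the paper), a DE/best/1-style pitch adjustment moves a decision variable from the best harmony by a scaled difference of two other harmonies, so the step size follows the spread of the HM rather than a fixed bw:

```python
import numpy as np

def de_best_1_style_adjust(hm, best_idx, dim, F=0.5, lower=-100.0, upper=100.0):
    """DE/best/1-style adjustment of one pitch (decision variable `dim`).
    hm: harmony memory of shape (HMS, n); best_idx: index of the best harmony.
    The scaling factor F and the clipping bounds are illustrative assumptions."""
    candidates = [i for i in range(hm.shape[0]) if i != best_idx]
    r1, r2 = np.random.choice(candidates, size=2, replace=False)
    # best pitch + F * (difference of two randomly chosen harmonies' pitches)
    new_pitch = hm[best_idx, dim] + F * (hm[r1, dim] - hm[r2, dim])
    return float(np.clip(new_pitch, lower, upper))
```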
In contrast with the conventional HS, the DH/best algorithm is problem-independent and was able to optimize all of the tested continuous optimization functions with the same parameter settings used in this paper.
Two sets of experiments were presented. First, the proposed DH/best algorithm was compared with the HS, IHS, NDHS and GDHS algorithms on 12 moderately to highly difficult high-dimensional optimization functions. The experimental results showed that the proposed algorithm outperforms the other variants of HS in convergence rate, accuracy and robustness.
We then compared the DH/best algorithm with six state-of-the-art algorithms from different families on a more comprehensive problem set (the full set of CEC2014 functions). The numerical analysis of the results demonstrated the superiority of the proposed algorithm.

Acknowledgements
The authors would like to thank Universiti Teknologi Malaysia (UTM) for its support in research and development, and the UTM Big Data Centre and the Soft Computing Research Group (SCRG) for the inspiration to make this study a success. This work
was supported by the Ministry of Higher Education (MOHE) under
the Fundamental Research Grant Scheme (FRGS 4F802 and FRGS
4F786).
Appendix. Function set A
In this appendix, we describe in detail the functions that are used in experiment A.
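For readers who wish to reproduce the test bed, the following minimal NumPy sketch (our own illustration, not code distributed with the paper) implements the first function and its shifted CEC'05 variant; the remaining functions can be coded in the same way directly from the definitions below.

```python
import numpy as np

def sphere(x):
    """f_Sphere(x) = sum_i x_i^2; global optimum f(0) = 0, -100 <= x_i <= 100."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def shifted_sphere(x, o, f_bias=-450.0):
    """CEC'05 shifted Sphere: sum_i z_i^2 + f_bias, with z = x - o and o the
    shifted global optimum (f_bias = -450 in the CEC'05 definition)."""
    z = np.asarray(x, dtype=float) - np.asarray(o, dtype=float)
    return float(np.sum(z ** 2) + f_bias)

# At the shifted optimum, the function value equals f_bias.
o = np.random.uniform(-80.0, 80.0, size=30)
assert abs(shifted_sphere(o, o) - (-450.0)) < 1e-9
```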
A The Sphere function is defined as
f_{Sphere}(x) = \sum_{i=1}^{n} x_i^2
where the global optimum x∗ = 0, and f(x∗) = 0 for −100 ≤ xi ≤ 100.
B The Qing function is defined as
f_{Qing}(x) = \sum_{i=1}^{n} \left( x_i^2 - i \right)^2
where the global optimum x∗ = ±√i, and f(x∗) = 0 for −500 ≤ xi ≤ 500.
C The Alpine01 function is defined as
f_{Alpine01}(x) = \sum_{i=1}^{n} \left| x_i \sin(x_i) + 0.1\,x_i \right|
where the global optimum x∗ = 0, and f(x∗) = 0 for −10 ≤ xi ≤ 10.
D The Rosenbrock function is defined as
f_{Rosenbrock}(x) = \sum_{i=1}^{n-1} \left[ 100\left( x_i^2 - x_{i+1} \right)^2 + \left( x_i - 1 \right)^2 \right]
where the global optimum x∗ = 1, and f(x∗) = 0 for −5 ≤ xi ≤ 10.
E The Schwefel26 function is defined as
f_{Schwefel26}(x) = 418.98288727\,n - \sum_{i=1}^{n} x_i \sin\left( \sqrt{|x_i|} \right)
where the global optimum x∗ = 420.9687, and f(x∗) = 0 for −500 ≤ xi ≤ 500.
F The Shifted sphere function (CEC’05) is defined as
f_{Sh\_Sphere}(x) = \sum_{i=1}^{n} z_i^2 + f_{bias}
where z = x − o, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = f_bias = −450 for −100 ≤ xi ≤ 100.
G The Shifted Schwefel 1.2 function (CEC’05) is defined as
f_{Sh\_Schwefel1.2}(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} z_j \right)^2 + f_{bias}
where z = x − o, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = f_bias = −450 for −100 ≤ xi ≤ 100.
H The Shifted Schwefel 1.2 function with noise (CEC’05) is defined as
f_{Sh\_noisy\_Schwefel1.2}(x) = \left[ \sum_{i=1}^{n} \left( \sum_{j=1}^{i} z_j \right)^2 \right] \times \left( 1 + 0.4\,|N(0,1)| \right) + f_{bias}
where z = x − o, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = f_bias = −450 for −100 ≤ xi ≤ 100.
I The Shifted Rosenbrock function (CEC’05) is defined as
f_{Sh\_Rosenbrock}(x) = \sum_{i=1}^{n-1} \left[ 100\left( z_i^2 - z_{i+1} \right)^2 + \left( z_i - 1 \right)^2 \right] + f_{bias}
where z = x − o + 1, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = f_bias = 390 for −100 ≤ xi ≤ 100.
J The Shifted rotated Griewank function (CEC’05) is defined as
f_{Sh\_r\_Griewank}(x) = \frac{1}{4000} \sum_{i=1}^{n} z_i^2 - \prod_{i=1}^{n} \cos\left( \frac{z_i}{\sqrt{i}} \right) + 1 + f_{bias}
where z = (x − o) × M, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, M is a linear transformation matrix, x∗ = o, and f(x∗) = f_bias = −180 for 0 ≤ xi ≤ 600.
K The Shifted Rastrigin function (CEC’05) is defined as
f_{Sh\_Rastrigin}(x) = \sum_{i=1}^{n} \left[ z_i^2 - 10\cos(2\pi z_i) + 10 \right] + f_{bias}
where z = x − o, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = f_bias = −330 for −5 ≤ xi ≤ 5.
L The Shifted Ackley function (CEC’2010) is defined as
f_{Sh\_Ackley}(x) = -20\,e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi z_i)} + 20 + e^{1}
where z = x − o, o = {o(1), o(2), . . . , o(n)} is the shifted global optimum, x∗ = o, and f(x∗) = 0 for −32 ≤ xi ≤ 32.

References
Abedinpourshotorban, H., Shamsuddin, S. M., Beheshti, Z., & Jawawi, D. N. (2015). Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm and Evolutionary Computation, 26, 8–22.
Akbari, R., Hedayatzadeh, R., Ziarati, K., & Hassanizadeh, B. (2012). A multi-objective
artificial bee colony algorithm. Swarm and Evolutionary Computation, 2, 39–52.
Ali, M. M., Khompatraporn, C., & Zabinsky, Z. B. (2005). A numerical evaluation of
several stochastic algorithms on selected continuous global optimization test
problems. Journal of Global Optimization, 31, 635–672.
Blum, C., & Roli, A. (2003). Metaheuristics in combinatorial optimization: Overview
and conceptual comparison. ACM Computing Surveys (CSUR), 35, 268–308.
Blum, C., & Roli, A. (2008). Hybrid metaheuristics: An introduction. In Hybrid metaheuristics (pp. 1–30). Springer.
Chen, J., Pan, Q.-k., & Li, J.-q. (2012). Harmony search algorithm with dynamic control parameters. Applied Mathematics and Computation, 219, 592–604.
Cobos, C., Estupiñán, D., & Pérez, J. (2011). GHS+ LEM: Global-best Harmony Search
using learnable evolution models. Applied Mathematics and Computation, 218,
2558–2578.
Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use
of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1,
3–18.
García, S., Molina, D., Lozano, M., & Herrera, F. (2009). A study on the use of non–
parametric tests for analyzing the evolutionary algorithms’ behaviour: A case
study on the CEC’2005 special session on real parameter optimization. Journal
of Heuristics, 15, 617–644.
Geem, Z., Kim, J., & Loganathan, G. (2002). Harmony search optimization: Application to pipe network design. International journal of modelling & simulation, 22,
125–133.
Geem, Z. W. (2010). State-of-the-art in the structure of harmony search algorithm.
In Recent advances in harmony search algorithm (pp. 1–10). Berlin Heidelberg:
Springer.
Geem, Z. W., Kim, J. H., & Loganathan, G. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76, 60–68.
Geem, Z. W., Lee, K. S., & Park, Y. (2005). Application of harmony search to vehicle
routing. American Journal of Applied Sciences, 2, 1552.
Hancox, E. P., & Derksen, R. W. (2005). Optimization of an industrial compressor supply system. In Differential evolution (pp. 339–351). Berlin Heidelberg:
Springer.
Hansen, N., & Kern, S. (2004). Evaluating the CMA evolution strategy on multimodal
test functions. In Parallel problem solving from Nature-PPSN VIII (pp. 282–291).
Springer.
He, S., Wu, Q. H., & Saunders, J. (2009). Group search optimizer: An optimization
algorithm inspired by animal searching behavior. Evolutionary Computation, IEEE
Transactions on, 13, 973–990.
Hegerty, B., Hung, C.C., & Kasprak, K. (2009). A comparative study on differential evolution and genetic algorithms for some combinatorial problems. Website: http://www.micai.org/2009/proceedings/.../cd/ws.../paper88.micai09.pdf.
Islam, S. M., Das, S., Ghosh, S., Roy, S., & Suganthan, P. N. (2012). An adaptive differential evolution algorithm with novel mutation and crossover strategies for
global numerical optimization. Systems, Man, and Cybernetics, Part B: Cybernetics,
IEEE Transactions on, 42, 482–500.
Jamil, M., & Yang, X. S. (2013). A literature survey of benchmark functions for global
optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4, 150–194.
Johnson, D. S. (1985). The NP-completeness column: An ongoing guide. Journal of
Algorithms, 6, 145–159.
Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical
function optimization: Artificial bee colony (ABC) algorithm. Journal of global
optimization, 39, 459–471.
Karaboga, D., Gorkemli, B., Ozturk, C., & Karaboga, N. (2014). A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artificial Intelligence
Review, 42, 21–57.
Khalili, M., Kharrat, R., Salahshoor, K., & Sefat, M. H. (2014). Global dynamic
harmony search algorithm: GDHS. Applied Mathematics and Computation, 228,
195–219.
Liang, J., Qu, B., & Suganthan, P. (2013). Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore.
Liang, J. J., Qin, A. K., Suganthan, P. N., & Baskar, S. (2006). Comprehensive learning particle swarm optimizer for global optimization of multimodal functions.
Evolutionary Computation, IEEE Transactions on, 10, 281–295.
Liu, B., Chen, Q., Zhang, Q., Liang, J., Suganthan, P., & Qu, B. (2013). Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore.
Mahdavi, M., Fesanghary, M., & Damangir, E. (2007). An improved harmony search
algorithm for solving optimization problems. Applied mathematics and computation, 188, 1567–1579.
332
H. Abedinpourshotorban et al. / Expert Systems With Applications 62 (2016) 317–332
Mallipeddi, R., Suganthan, P. N., Pan, Q. K., & Tasgetiren, M. F. (2011). Differential
evolution algorithm with ensemble of parameters and mutation strategies. Applied Soft Computing, 11, 1679–1696.
Manjarres, D., Landa-Torres, I., Gil-Lopez, S., Ser Del, J., Bilbao, N. M., Salcedo-Sanz, S., & Geem, Z. W. (2013). A survey on applications of the harmony
search algorithm. Engineering Applications of Artificial Intelligence, 26, 1818–1831.
Mezura-Montes, E., Velázquez-Reyes, J., & Coello Coello, A. C. (2006). A comparative study of differential evolution variants for global optimization. In Proceedings of the 8th annual conference on Genetic and evolutionary computation
(pp. 485–492). ACM.
Michael, R. G., & David, S. J. (1979). Computers and intractability: A guide to the theory
of NP-completeness.
Moh’d Alia, O., & Mandava, R. (2011). The variants of the harmony search algorithm:
An overview. Artificial Intelligence Review, 36, 49–68.
Omran, M. G., & Mahdavi, M. (2008). Global-best harmony search. Applied Mathematics and Computation, 198, 643–656.
Pan, Q. K., Suganthan, P. N., Tasgetiren, M. F., & Liang, J. J. (2010). A self-adaptive
global best harmony search algorithm for continuous optimization problems.
Applied Mathematics and Computation, 216, 830–848.
Price, K., Storn, R. M., & Lampinen, J. A. (2006). Differential evolution: A practical
approach to global optimization. Springer.
Qin, A. K., Huang, V. L., & Suganthan, P. N. (2009). Differential evolution algorithm
with strategy adaptation for global numerical optimization. Evolutionary Computation, IEEE Transactions on, 13, 398–417.
Ronkkonen, J., Kukkonen, S., & Price, K. V. (2005). Real-parameter optimization with
differential evolution. In Proceeding IEEE CEC: 1 (pp. 506–513).
Růžek, B., & Kvasnička, M. (2005). Determination of the earthquake hypocenter:
A challenge for the differential evolution algorithm. In Differential evolution
(pp. 379–391). Springer.
Salomon, M., Perrin, G. R., Heitz, F., & Armspach, J. P. (2005). Parallel differential
evolution: Application to 3-d medical image registration. In Differential evolution
(pp. 353–411). Springer.
Storn, R., & Price, K. (1995). Differential evolution-a simple and efficient adaptive
scheme for global optimization over continuous spaces. Berkeley: ICSI.
Storn, R., & Price, K. (1997). Differential evolution–a simple and efficient heuristic
for global optimization over continuous spaces. Journal of global optimization,
11, 341–359.
Stützle, T. G. (1999). Local search algorithms for combinatorial problems: Analysis, improvements, and new applications: 220. Germany: Infix Sankt Augustin.
Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y. P., Auger, A., & Tiwari, S. (2005). Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report 2005005.
Velho, L., Carvalho, P., Gomes, J., & de Figueiredo, L. (2011). Mathematical optimization in computer graphics and vision. Morgan Kaufmann.
Wang, X., & Yan, X. (2013). Global best harmony search algorithm with control parameters co-evolution based on PSO and its application to constrained optimal
problems. Applied Mathematics and Computation, 219, 10059–10072.
Wang, Y., Cai, Z., & Zhang, Q. (2011). Differential evolution with composite trial vector generation strategies and control parameters. Evolutionary Computation, IEEE
Transactions on, 15, 55–66.
Weise, T. (2009). Global optimization algorithms: Theory and application. Self-published.
Xiang, W.-l., An, M.-Q., Li, Y.-Z., He, R.-C., & Zhang, J.-F. (2014). An improved global-best harmony search algorithm for faster optimization. Expert Systems with
Applications, 41, 5788–5803.
Yadav, P., Kumar, R., Panda, S. K., & Chang, C. (2012). An intelligent tuned harmony
search algorithm for optimisation. Information Sciences, 196, 47–72.