COMPETITION AND COOPERATION IN EVOLUTIONARY
ALGORITHMS: A COMPARATIVE STUDY
Josef Tvrdı́k
University of Ostrava, Department of Computer Science
30. dubna 22, 701 03 Ostrava, Czech Republic
phone: +420/596160231, fax: +420/596120478
e-mail: [email protected]
Abstract. The aim of this paper is to study stochastic algorithms with a high degree of self-adaptation that could be used in "routine" tasks when a global optimization problem is to be solved. Such problems require algorithms that do not need special tuning of their input parameters. The generalized controlled random search algorithm with competing heuristics and two cooperative algorithms using differential evolution and the self-organizing migrating algorithm (SOMA) were compared experimentally on several test functions.
Keywords: global optimization, controlled random search, differential evolution, self-organizing migrating
algorithm, competition and cooperation in search.
Introduction
We consider the continuous global optimization problem, i.e. for a given objective function f : D → R, D ⊂ Rd, the point x* is to be found such that x* = arg min_{x∈D} f(x). The point x* is called the global minimum and D is the searching space. D is defined as D = ∏_{i=1}^{d} [ai, bi], ai < bi, i = 1, 2, . . . , d, and the objective function is computable, i.e. there is an algorithm capable of evaluating f(x) with sufficient accuracy at any point x ∈ D.
There are many stochastic algorithms for solving the problem formulated above, and their authors claim efficiency and reliability in searching for the "true" global minimum. The "true" global minimum is found by a search algorithm if the point with the minimal function value found during the process is sufficiently close to the global minimum. However, when using such algorithms we face the problem of setting their tuning parameters. The efficiency and reliability of many algorithms depend strongly on the values of the input parameters, and the recommendations of the authors are rather vague, see e.g. [7, 14]. A user is supposed to be able to change parameter values according to partial results obtained during the search process. Such an approach is not acceptable in tasks where global optimization is only one step on the way to the solution of a problem and the user has no experience in the fine art of parameter setting. Examples of such problems can be found in computational statistics, e.g. parameter estimation in non-linear models or maximization of likelihood functions.
It would be very useful to have a robust algorithm that is reliable enough at reasonable time consumption without the necessity of fine-tuning its input parameters. Such an ideal algorithm cannot be found, see the "No free lunch theorem", which states that no search algorithm can outperform the others for all objective functions [12]. But it is also known that some algorithms can outperform others on a relatively wide class of problems, both in convergence rate and in reliability of finding the global minimum. The aim of this paper is to study some population-based stochastic algorithms with a high degree of self-adaptation. It is supposed that self-adaptation is enhanced if cooperation and competition are involved in the algorithms. A small step towards such adaptive algorithms was performed by the competition of heuristics [8, 9, 10]. Here we focus on involving cooperation in search algorithms.
Algorithms for cooperation
Three classes of algorithms were chosen for cooperation, namely differential evolution (DE) [7], the self-organizing migrating algorithm (SOMA) [13], and generalized controlled random search with competing heuristics [10]. These algorithms use different search strategies and all have proven to be efficient in solving optimization problems. Differential evolution is used frequently with a very good reputation [5], and SOMA tops DE in some problems [14].
Differential evolution works with two populations P and Q of the same size N. A new trial point y is composed of a point x of the old population and a point u obtained by mutation. If f(y) < f(x), the point y is inserted into the new population Q instead of x. After completion of the new population Q, the old population P is replaced by Q and the search continues until the stopping condition is fulfilled. The most popular variant of DE (called DERAND) generates the point u by adding the weighted difference of two points,

u = r1 + F (r2 − r3),     (1)

where r1, r2 and r3 are three distinct points taken randomly from P (not coinciding with the current x) and F > 0 is an input parameter. The elements yj, j = 1, 2, . . . , d of the trial point y are built up by the crossover of its parents x and u using the following rule:
yj = uj  if Uj ≤ C or j = l,
yj = xj  if Uj > C and j ≠ l,     (2)
where l is a randomly chosen integer from {1, 2, . . . , d}, U1 , U2 , . . . , Ud are independent random variables
uniformly distributed in [0, 1), and C ∈ [0, 1] is an input parameter influencing the number of elements to
be exchanged by crossover. The recommended values F = 0.8 and C = 0.5 were used in our experiments.
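The generation of a DERAND trial point by the mutation (1) and the crossover (2) can be sketched as follows. The paper's own implementation is in Matlab (Appendix B); this Python sketch with a hypothetical function name `de_trial_point` is only illustrative:

```python
import numpy as np

def de_trial_point(P, i, F=0.8, C=0.5, rng=None):
    # DERAND: u = r1 + F*(r2 - r3) per Eq. (1), binomial crossover per Eq. (2)
    rng = rng or np.random.default_rng()
    N, d = P.shape
    # three distinct points from P, not coinciding with the current x = P[i]
    r1, r2, r3 = P[rng.choice([j for j in range(N) if j != i], 3, replace=False)]
    u = r1 + F * (r2 - r3)
    mask = rng.random(d) <= C        # elements taken from u ...
    mask[rng.integers(d)] = True     # ... and at least one (j = l) always is
    return np.where(mask, u, P[i])
```

With C close to 1 the trial point inherits almost all elements from the mutant u; with C close to 0 only the one forced element j = l is exchanged.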
Ali and Törn [1] suggested adapting the value of the scaling factor F within the search process according to the equation

F = max(Fmin, 1 − |fmax/fmin|)  if |fmax/fmin| < 1,
F = max(Fmin, 1 − |fmin/fmax|)  otherwise,     (3)
where fmin , fmax are respectively the minimum and maximum function values in the population and Fmin
is an input parameter ensuring F ∈ [Fmin , 1). The algorithm which uses Eq.(3) for evaluation of F with
Fmin = 0.5 is denoted DERADP in the following text.
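A direct transcription of Eq. (3) can look as follows. This is a Python sketch (the paper's implementation is the Matlab function decrs_radp in Appendix B, whose guards against division by zero are mirrored here):

```python
def adaptive_F(f_min, f_max, F_min=0.5):
    # Eq. (3): F = max(F_min, 1 - |f_max/f_min|) if |f_max/f_min| < 1,
    #          F = max(F_min, 1 - |f_min/f_max|) otherwise
    ratio = abs(f_max / f_min) if f_min != 0 else 1.0
    if ratio < 1:
        candidate = 1 - ratio
    elif f_max != 0:
        candidate = 1 - abs(f_min / f_max)
    else:
        candidate = F_min
    return max(F_min, candidate)
```

F approaches 1 while the spread of function values in the population is large relative to their magnitude and falls back to Fmin as the population converges.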
SOMA [13, 14] can be viewed as a simple model of the behaviour of predators hunting in a pack. The population P of size N migrates through the searching space D. In the all-to-one strategy of SOMA the leader (the point of P with the minimal function value) is found for the migration run and then the other N − 1 individuals of P move by steps towards the leader. Each individual records the coordinates of the minimal function value found on the way to the leader and uses it as its starting point in the next migration run. The movement of an individual towards the leader is shown graphically in Figure 1.

Figure 1: Movement to the leader

The way of the individual's movement from its starting position r0 is controlled by three input parameters, namely mass, step and prt. Parameter mass gives the relative size of skipping over the leader,

mass = ‖x'l − r0‖ / ‖xl − r0‖,

where ‖xl − r0‖ is the norm of the vector (xl − r0) and ‖x'l − r0‖ is the norm of the vector (x'l − r0); the meaning of xl and x'l can be seen in Fig. 1.
Parameter step controls the size δ of the skip towards the leader according to the relation δ = step · (xl − r0). Parameter prt is used for the selection of the dimensions in which the movement of an individual towards the leader is carried out. It is the probability of selection, prt ∈ [0, 1]. Dimensions are chosen at random for each individual's movement; dimension i is selected for change if Ui < prt, i = 1, 2, . . . , d, where U1, U2, . . . , Ud are independent random variables uniformly distributed in [0, 1). At least one dimension must be chosen, like in the crossover of differential evolution. The directions of possible movement of the individual are marked by full or dotted arrows in Fig. 1. The parameter values were set to mass = 1.9, step = 0.3 and prt = 0.5 in all numerical experiments with SOMA.
Generalized controlled random search with competing heuristics was described in [10]. A heuristic in this context is any stochastic rule generating a new trial point y ∈ D. The algorithm can be written briefly as follows:

generate P (population of N points in D at random);
find xmax (the point in P with the highest value of the objective function, fmax = f(xmax));
repeat
   choose a heuristic among the h heuristics according to its successfulness;
   generate a new trial point y ∈ D by the chosen heuristic;
   if f(y) < f(xmax) then
      replace xmax by y in P; find the new xmax;
   endif
until stopping condition;
At each iteration step a heuristic is chosen at random among the h heuristics at our disposal, the i-th one with probability qi, i = 1, 2, . . . , h. The probabilities change according to the successfulness of the heuristics in the preceding steps of the search process. A heuristic is successful in the current step of the evolutionary process if it generates a trial point y such that f(y) < fmax, and the measure of its success is weighted by

wi = (fmax − max(f(y), fmin)) / (fmax − fmin).     (4)
Thus wi ∈ (0, 1) and the probability qi is evaluated as

qi = (Wi + w0) / Σ_{j=1}^{h} (Wj + w0),     (5)
where Wi is the sum of the wi in the previous search and w0 > 0 is an input parameter of the algorithm. In order to avoid degeneration of the evolutionary process, the current values of qi are reset to their starting values (qi = 1/h) whenever any probability qi decreases below a given limit δ > 0.
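The evaluation of the probabilities (5) together with the degeneration reset can be sketched as follows (a Python sketch with the hypothetical name `heuristic_probs`; the paper's Matlab code keeps the weights in the vector wi and selects by the roulete function of Appendix B):

```python
def heuristic_probs(W, w0=0.5, delta=0.05):
    # Eq. (5): q_i = (W_i + w0) / sum_j (W_j + w0);
    # if any q_i falls below delta, the weights are cleared so that q_i = 1/h
    h = len(W)
    total = sum(W) + h * w0
    q = [(Wi + w0) / total for Wi in W]
    if min(q) < delta:
        return [1.0 / h] * h, [0.0] * h   # reset of the competition
    return q, W
```

For example, when successes accumulate only in the first heuristic, say W = [50, 0, 0, 0], the smallest probability is 0.5/52 ≈ 0.01 < δ = 0.05 and the reset is triggered.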
The variant of this algorithm called COMP4, with four competing heuristics and using Eq. (5) for the evaluation of the probabilities qi, is used in the tests carried out in this work. Three of the competing heuristics in COMP4 are based on randomized reflection in a simplex S (d + 1 points chosen from P) proposed in [3]. A new trial point y is generated from the simplex by the relation

y = g + U (g − xH),     (6)

where xH = arg max_{x∈S} f(x) and g is the centroid of the remaining d points of the simplex S. The multiplication factor U is a random variable distributed uniformly in [s, α − s), where α and s are input parameters, 0 < s < α/2. All the d + 1 points of the simplex are chosen at random in two heuristics, with α = 2, s = 0.5 and α = 4, s = 0.5, respectively. In the third heuristic one point of the simplex is the point of P with the minimal function value and the remaining d points of S are chosen at random; α = 2, s = 0.5 in the numerical experiments. The fourth heuristic used in COMP4 is based on differential evolution with the adaptive value of F according to Eq. (3). The input parameters for the competition were set to w0 = 0.5 and δ = 0.05.
The COMP4 algorithm proved to be efficient in difficult tasks of parameter estimation [11], where it outperformed the algorithm COMP1 with eight competing heuristics proposed and tested in [10].
None of the heuristics used for generating a new trial point y in the algorithms described above guarantees that y belongs to D. In the case y ∉ D, a so-called perturbation is applied, see [4].
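The perturbation implemented in Appendix B (the zrcad function) mirrors every out-of-range coordinate back into its interval repeatedly; a Python sketch of the same rule, with the hypothetical name `perturb`:

```python
def perturb(y, a, b):
    # mirror every out-of-range coordinate back into [a[i], b[i]]
    # (the 'zrcad' perturbation of Appendix B)
    y = list(y)
    for i in range(len(y)):
        while y[i] < a[i] or y[i] > b[i]:
            if y[i] > b[i]:
                y[i] = 2 * b[i] - y[i]     # reflect at the upper bound
            else:
                y[i] = 2 * a[i] - y[i]     # reflect at the lower bound
    return y
```

The while loop handles points that overshoot the box by more than its width, which a single reflection would not bring back inside.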
Experiments
The algorithms were tested on several functions. All the functions used in the tests are described in the Appendix of the full version of this paper on the CD; their definitions can also be found e.g. in [7] and [1]. The searching spaces D in all test tasks were symmetric, with the same interval in each dimension, ai = −bi, bi = bj, i, j = 1, 2, . . . , d. The values of bi were set as follows: 2.048 for Rosenbrock's function, 5.12 for the first De Jong's and Rastrigin's functions, 30 for Ackley's function, 400 for Griewangk's function and 500 for Schwefel's function. The population size was set to N = 10d for all the algorithms. The search for the global minimum was stopped if fmax − fmin < 1E−07 or the number of objective function evaluations exceeded the input upper limit 5000d. Four types of search result were distinguished:
Type 1 – correct, i.e. fmax − fmin < 1E−07 and fmin ≤ fnear,
Type 2 – slow convergence, fmin ≤ fnear but the search was stopped by the function evaluations limit,
Type 3 – premature convergence, fmax − fmin < 1E−07 but fmin > fnear,
Type 4 – failure, the search was stopped by the function evaluations limit and fmin > fnear.
The value of fnear is an input parameter for each function. It was set to 1E−06 for all the tested functions except Ackley's function and Schwefel's function, where fnear = 1E−03 and fnear = −418.982d, respectively. One hundred independent runs were carried out for each task. The type of result, the number of function evaluations (ne) and the time consumption in milliseconds per one function evaluation (cpu1) were recorded for each run. The value ne is the average number of objective function evaluations and vc is the variation coefficient of ne, defined as 100 × standard deviation/ne. All these characteristics were calculated only from the runs with result type 1. The reliability R of the search is the count of runs with result type 1, i.e. the percentage of correct results obtained by repeated search.
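The classification of a single run by the four result types above can be written compactly (a Python sketch; `result_type` is a hypothetical helper, the paper's experiments were run in Matlab):

```python
def result_type(f_max, f_min, f_near, tol=1e-7):
    # classify one run by the four result types defined above
    converged = (f_max - f_min) < tol     # convergence condition of the stopping rule
    near = f_min <= f_near                # best point close enough to the global minimum
    if converged and near:
        return 1   # correct
    if near:
        return 2   # slow convergence (stopped by the evaluations limit)
    if converged:
        return 3   # premature convergence
    return 4       # failure
```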
Preliminary results
The algorithms SOMA, DERAND, DERADP, and COMP4 were tested in preliminary experiments in order to decide which of them would be selected for cooperation. Three functions were chosen for the preliminary tests: the first De Jong's function is unimodal and very easy, Rosenbrock's function is also unimodal but rather difficult for search algorithms, and Ackley's function is multimodal with a medium level of difficulty. The results of these preliminary experiments are given in Table 1. If the search results were not all of type 1, the frequencies of the observed types are given in the columns denoted Typres. Both differential evolution
Table 1: Comparison of SOMA, DERAND, DERADP, and COMP4 algorithms

                         DERAND              DERADP              SOMA                 COMP4
                            Typres              Typres              Typres
Function     d    R     ne   (2/4)    R     ne   (2/4)    R     ne   (2/3/4)    R     ne
DeJong1      3  100   2676    0/0   100   3156    0/0   100   3420    0/0/0   100   1777
Ackley       2  100   2369    0/0   100   2452    0/0    78   3539   20/0/2   100   1713
Rosenbrock   2  100   4061    0/0   100   4843    0/0    48   4288   52/0/0   100   1156
Ackley      10    0     --   97/3     0     --  100/0    11  43624   0/82/7   100  20798
Rosenbrock  10    0     --  0/100     0     --  0/100     0     --  0/0/100   100  19686
and SOMA failed completely in finding the minimum of Rosenbrock's function with d = 10. SOMA tended to premature convergence (type 3) frequently in the case of Ackley's function with d = 10, whereas differential evolution often found the global minimum but without meeting the convergence condition fmax − fmin < 1E−07 (frequent stops with type 2). The COMP4 algorithm was the only one with total reliability in all five tasks, and its ne was also the lowest. The results for DERAND and DERADP differ very slightly both in reliability and in ne. SOMA surpasses differential evolution only on Ackley's function for d = 10, but its reliability is rather poor (R = 11). DERADP is more time-consuming than DERAND in ne. Due to the similarity of the preliminary results of DERAND and DERADP, the DERADP algorithm was not included in the cooperative algorithms tested in this paper.
Cooperation of algorithms
The cooperation of algorithms in the search for the global minimum can be implemented in many ways. A very simple model of cooperation was chosen in this study. All cooperating algorithms must be population-based and the size of the population is the same for all of them. Three algorithms are included in the cooperation, namely SOMA, DERAND and COMP4. Each algorithm works with the population for a period and then passes the result to the next algorithm; the length of the period is the same for each algorithm in the cooperation. SOMA must carry out k(N − 1) function evaluations in one migration run (k is the integer part of mass/step) and DERAND needs N function evaluations for completing a new population. In order to keep the periods of the cooperating algorithms approximately the same, SOMA performs one migration run, DERAND performs k generations and COMP4 carries out kN function evaluations per period. Such cooperation was implemented in two variants called SERIAL and PARALLEL. The basic idea of the SERIAL cooperative algorithm is described as follows:
generate P (population of N points in D at random);
repeat
   perform one migration run of SOMA;
   perform k generations of DERAND;
   perform N*k function evaluations by COMP4;
until stopping condition;
In the PARALLEL variant of the cooperative algorithm, SOMA and DERAND search for the minimum in parallel and then COMP4 finishes the iteration step on the population created by merging the better halves of the results obtained by SOMA and DERAND. The better half of a population is its N/2 points after sorting with respect to their objective function values. The basic idea of this parallel cooperation is described as follows:
generate two populations P1 and P2;
repeat
   perform one migration run of SOMA on P1;
   perform k generations of DERAND on P2;
   P := merge(better half of P1, better half of P2);
   perform N*k function evaluations by COMP4 on P;
   P1 := P;
   P2 := P;
until stopping condition;
The implementation of the algorithms in Matlab [6] is presented in Appendix B of the full version of the paper in the CD Proceedings. The results of numerical experiments are shown in Table 2.

Table 2: Comparison of cooperation and competition

                         PARALLEL            SERIAL              COMP4          COMP1 [10]
Function     d    R     ne    vc     R     ne    vc     R     ne    vc     R     ne
DeJong1      3  100   2773   5.3   100   2348  10.8   100   1777   5.6   100   1187
Ackley       2   99   2461   7.0    99   2047   8.1   100   1713   9.8   100   1227
Rosenbrock   2  100   1852  10.6   100   1836  11.2   100   1156  10.2   100    937
Rastrigin    2   99   1964  10.7   100   1641  12.7   100   1390  15.0    97    955
Schwefel     2   99   1678  11.1    91   1362  13.1    99   1032  11.9    98    817
Griewangk    2   63   2591  11.5    77   2843  19.1    72   2594  17.4    92   1405
Rastrigin    5   91  12704   6.8    94  11387  10.8    40  22789   8.3    97   5432
Ackley      10  100  36959   3.0   100  32643   3.6   100  20798   2.2   100  13604
Rosenbrock  10  100  42825   3.8    98  43522   3.2   100  19686   3.3   100  19340
Schwefel    10   97  30624   5.9    96  27178   5.8    99  27946  11.9   100  13087
Griewangk   10   63  32122  10.0    73  31212  13.8    97  28246  24.6    94  12239

Despite the expectation
the SERIAL cooperation was better than PARALLEL in almost all the test tasks. Comparing the results, the reliability R of SERIAL was lower in 3 of the 11 tasks, but the difference was statistically significant only in the case of Schwefel, d = 2, where the significance level obtained by the Fisher exact test in a four-fold table [2] is 0.018; the other differences are not significant. The time consumption measured by ne is slightly higher for SERIAL in two tasks only. Comparing the cooperative algorithms with COMP4, the only benefit brought by cooperation is in the task of Rastrigin, d = 5, where both the reliability and ne are better with very high significance for the cooperative algorithms. Moreover, this small benefit of cooperation is diminished by comparison with the results of the COMP1 algorithm [10]. On these eleven test tasks COMP1 outperformed the cooperative algorithms, as well as COMP4, both in reliability and in time consumption.
Conclusions
The cooperation and competition in population-based search algorithms were studied. The experimental results showed that the simple model of cooperation does not bring a significant improvement in the reliability and efficiency of the search for the global minimum. Thus, the competition of heuristics seems to be the more promising idea for the implementation of robust algorithms with high self-adaptation, suitable for "routine" tasks of global optimization without special setting of their input parameters. It was also found that the reliability and efficiency of algorithms with competing heuristics depend strongly on the selection of proper competing heuristics for different classes of problems. The COMP1 algorithm [10] outperformed COMP4 on the test tasks of this paper, where the objective functions in most tasks are multimodal with a "wild" shape and many local minima. On the other hand, COMP4 outperformed COMP1 in non-linear regression parameter estimation [11], where the objective functions mostly have the shape of shallow flat valleys. Such objective functions seem to be more frequent in practical optimization problems, and algorithms which are suitable for test tasks like Rosenbrock's function are usually also suitable for such problems. The selection of competing heuristics is a crucial question for the applicability of the algorithms and should be a centre of future research.
References
[1] Ali M. M., Törn A. (2004). Population set based global optimization algorithms: Some modifications
and numerical studies, Computers and Operations Research, 31, 1703 – 1725.
[2] Hintze, J. (2001). NCSS and PASS, Number Cruncher Statistical Systems. Kaysville, Utah.
WWW.NCSS.COM.
[3] Křivý I., Tvrdı́k J. (1995). The Controlled Random Search Algorithm in Optimizing Regression Models.
Comput. Statist. and Data Anal. 20, 229 – 234.
[4] Kvasnička, V., Pospı́chal, J., Tiňo, P. (2000). Evolutionary Algorithms (in Slovak). Slovak Technical
University, Bratislava.
[5] Lampinen J. (1999). A Bibliography of Differential Evolution Algorithm. Technical Report. Lappeenranta
University of Technology, Department of Information Technology, Laboratory of Information Processing,
available via Internet http://www.lut.fi/~jlampine/debiblio.htm
[6] MATLAB, version 6.5.1, The MathWorks, Inc. (2003).
[7] Storn R., Price K. (1997). Differential Evolution – a Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces. J. Global Optimization 11, 341 – 359.
[8] Tvrdı́k J., Křivý I., Mišı́k, L. (2001). Evolutionary Algorithm with Competing Heuristics. MENDEL
2001, 7th International Conference on Soft Computing (Ošmera, P. ed.). Technical University, Brno,
58 – 64.
[9] Tvrdı́k J., Mišı́k L., Křivý I. (2002). Competing Heuristics in Evolutionary Algorithms. 2nd EuroISCI, Intelligent Technologies - Theory and Applications (Sinčák P. et al. eds.), IOS Press, Amsterdam,
159 – 165.
[10] Tvrdı́k J. (2004). Generalized controlled random search and competing heuristics. MENDEL 2004,
10th International Conference on Soft Computing (Matoušek R. and Ošmera P. eds). University of
Technology, Brno, 228 – 233.
[11] Tvrdı́k J. (2004). Stochastic Algorithms in Regression-Parameter Estimate (in Czech). Proceedings of
ROBUST 2004 (Antoch J. and Dohnal G. eds). Czech Mathematical Society, Praha, 475 – 482.
[12] Wolpert D. H., Macready, W.G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions
on Evolutionary Computation, 1, 67 – 82.
[13] Zelinka, I., Lampinen J. (2000). SOMA – Self-Organizing Migrating Algorithm, MENDEL 2000, 6th
Int. Conference on Soft Computing, University of Technology, Brno, 177 – 187.
[14] Zelinka, I. (2002). Artificial intelligence in global optimization problems. (in Czech), BEN, Praha.
Acknowledgment
This research was supported by the grant 201/05/0284 of the Czech Grant Agency and by the institutional
research MSM 6198898701 solved in Institute for Research and Applications of Fuzzy Modeling, University
of Ostrava.
Appendix A – Test functions
First De Jong’s function

f(x) = Σ_{i=1}^{d} xi²

xi ∈ [−5.12, 5.12], x* = (0, 0, . . . , 0), f(x*) = 0

Rosenbrock’s function (banana valley)

f(x) = Σ_{i=1}^{d−1} [100(xi² − x_{i+1})² + (1 − xi)²]

xi ∈ [−2.048, 2.048], x* = (1, 1, . . . , 1), f(x*) = 0

Ackley’s function

f(x) = −20 exp(−0.02 √((1/d) Σ_{i=1}^{d} xi²)) − exp((1/d) Σ_{i=1}^{d} cos 2πxi) + 20 + exp(1)

xi ∈ [−30, 30], x* = (0, 0, . . . , 0), f(x*) = 0

Griewangk’s function

f(x) = Σ_{i=1}^{d} xi²/4000 − Π_{i=1}^{d} cos(xi/√i) + 1

xi ∈ [−400, 400], x* = (0, 0, . . . , 0), f(x*) = 0

Rastrigin’s function

f(x) = 10d + Σ_{i=1}^{d} [xi² − 10 cos(2πxi)]

xi ∈ [−5.12, 5.12], x* = (0, 0, . . . , 0), f(x*) = 0

Schwefel’s function

f(x) = −Σ_{i=1}^{d} xi sin(√|xi|)

xi ∈ [−500, 500], x* = (s, s, . . . , s), s = 420.97, f(x*) = −418.9829d
Appendix B – Algorithms in Matlab
SOMA – one migration run
function [P,fn_evals] = ...
         soma_to_one1(fn_name, fn_evalsin, P, a, b, mass, step, prt)
[N d] = size(P); d = d-1;
P = sortrows(P, d+1);
pocet_kroku = fix(mass/step);   % number of steps towards the leader
fn_evals = fn_evalsin;
leader = P(1,1:d);
for i = 2:N
    r0 = P(i,1:d);
    delta = (leader - r0)*step;
    yfy = go_ahead(fn_name, r0, delta, pocet_kroku, prt, a, b);
    fn_evals = fn_evals + pocet_kroku;
    if yfy(d+1) < P(i,d+1)      % trial point y is good
        P(i,:) = yfy;
    end
end

% find the best point y with function value fy
% on the way from r0 to the leader
function yfy = go_ahead(fn_name, r0, delta, pocet_kroku, prt, a, b)
d = length(r0);
tochange = find(rand(d,1) < prt);
if length(tochange) == 0        % at least one dimension is changed
    tochange = 1 + fix(d*rand(1));
end
C = zeros(pocet_kroku, d+1);
r = r0;
for i = 1:pocet_kroku
    r(tochange) = r(tochange) + delta(tochange);
    r = zrcad(r, a, b);
    fr = feval(fn_name, r);
    C(i,:) = [r, fr];
end
[fmin, indmin] = min(C(:,d+1));
yfy = C(indmin,:);              % the row with minimal fy is returned

% perturbation of y into <a,b>
function result = zrcad(y, a, b)
zrc = find(y < a | y > b);
for i = zrc
    while (y(i) < a(i) | y(i) > b(i))
        if y(i) > b(i)
            y(i) = 2*b(i) - y(i);
        elseif y(i) < a(i)
            y(i) = 2*a(i) - y(i);
        end
    end
end
result = y;
DERAND – one generation
function [Q,func_evals] = der_smp1(fn_name, func_evals, P, a, b, F, CR)
[N d] = size(P); d = d-1;
Q = P;
for i = 1:N                     % in each generation
    y = derand(P, F, CR, i);
    y = zrcad(y, a, b);         % perturbation
    fy = feval(fn_name, y);
    func_evals = func_evals + 1;
    if fy < P(i,d+1)            % trial point y is good
        Q(i,:) = [y fy];
    end
end                             % end of generation

function y = derand(P, F, CR, expt)
N = length(P(:,1));
d = length(P(1,:)) - 1;
y = P(expt(1),1:d);
vyb = nahvyb_expt(N, 3, expt);  % three points without expt
r1 = P(vyb(1),1:d);
r2 = P(vyb(2),1:d);
r3 = P(vyb(3),1:d);
v = r1 + F*(r2 - r3);
change = find(rand(1,d) < CR);
if length(change) == 0          % at least one element is changed
    change = 1 + fix(d*rand(1));
end
y(change) = v(change);

% random sample, k of N without repetition,
% numbers given in vector expt are not included
function vyb = nahvyb_expt(N, k, expt)
opora = 1:N;
if nargin == 3, opora(expt) = []; end
vyb = zeros(1,k);
for i = 1:k
    index = 1 + fix(rand(1)*length(opora));
    vyb(i) = opora(index);
    opora(index) = [];
end
COMP4 – N evaluations of objective function
function [P,fn_evals, ni, wi, nrst] = ...
         comp4_1(fn_name, fn_evalsin, P, a, b, delta, w0, ni, wi, nrst)
[N d] = size(P); d = d-1;
h = 4;                          % number of heuristics in the competition
fn_evals = fn_evalsin;
for i = 1:N
    [fmax, indmax] = max(P(:,d+1));
    [fmin, indmin] = min(P(:,d+1));
    [hh, p_min] = roulete(wi);
    if p_min < delta            % reset of the competition
        wi = zeros(1,h) + w0;
        nrst = nrst + 1;
    end
    switch hh                   % number of the selected heuristic
        case 1
            y = refl_rwd(P, [2 0.5]);
        case 2
            y = refl_rwd(P, [5 1.5]);
        case 3
            y = refl_bestrwd(P, 2, 0.5, indmin, P(indmin,1:d));
        case 4
            y = decrs_radp(P, [0.4 0.9 fmin fmax]);
    end
    y = zrcad(y, a, b);         % perturbation
    fy = feval(fn_name, y);
    fn_evals = fn_evals + 1;
    if fy < fmax                % trial point y is good for renewing the population
        P(indmax,:) = [y fy];
        ni(hh) = ni(hh) + 1;
        if fmax - fmin <= eps
            w = w0;
        else
            w = (fmax - max([fy fmin]))/(fmax - fmin);  % weight of success, Eq. (4)
        end
        wi(hh) = wi(hh) + w;    % update of the weight for probability qi
        [fmax, indmax] = max(P(:,d+1));
        [fmin, indmin] = min(P(:,d+1));
    end
end

function [res, p_min] = roulete(cutpoints)
% returns an integer from [1, length(cutpoints)]
% with probability proportional to cutpoints(i)/sum(cutpoints)
h = length(cutpoints);
ss = sum(cutpoints);
p_min = min(cutpoints)/ss;
cp(1) = cutpoints(1);
for i = 2:h
    cp(i) = cp(i-1) + cutpoints(i);
end
cp = cp/ss;
res = 1 + fix(sum(cp < rand(1)));

% randomized shifted reflection - the worst (with the highest f)
% point of the simplex is reflected
function y = refl_rwd(P, alpha)
shift_d = alpha(2);
alpha = alpha(1);
N = length(P(:,1)); d = length(P(1,:)) - 1;
vyb = nahvyb(N, d+1);           % simplex S chosen at random from P
S = P(vyb,:);
[x, indx] = max(S(:,d+1));      % the worst point is reflected
x = S(indx,1:d);
S(indx,:) = [];
S(:,d+1) = [];
g = mean(S);
y = g + (g - x)*(shift_d + (alpha - 2*shift_d)*rand(1));

% randomized shifted reflection - the worst (with the highest f)
% point of the simplex is reflected,
% the centroid g is computed also from xmin, see Ali and Torn 2004
function y = refl_bestrwd(P, alpha, shift_d, indmin, xmin)
N = length(P(:,1)); d = length(P(1,:)) - 1;
vyb = nahvyb_expt(N, d, indmin); % d points chosen at random from P except indmin
S = P(vyb,:);
[x, indx] = max(S(:,d+1));      % the worst point is reflected
x = S(indx,1:d);
S(indx,:) = [];
S(:,d+1) = [];
g = (xmin + sum(S))/d;
y = g + (g - x)*(shift_d + (alpha - 2*shift_d)*rand(1));

% heuristic - de_rand with binomial crossover,
% F is adapted according to Eq. (3), see Ali and Torn 2004
function y = decrs_radp(P, vecpar)
Fmin = vecpar(1); CR = vecpar(2); fmin = vecpar(3); fmax = vecpar(4);
N = length(P(:,1)); d = length(P(1,:)) - 1;
vyb = nahvyb(N, 4);             % four random points
r1 = P(vyb(1),1:d);
r2 = P(vyb(2),1:d);
r3 = P(vyb(3),1:d);
y = P(vyb(4),1:d);
pom1 = Fmin; pom2 = 1;
if abs(fmin) > 0, pom2 = abs(fmax/fmin); end
if pom2 < 1
    pom1 = 1 - pom2;
elseif abs(fmax) > 0
    pom1 = 1 - abs(fmin/fmax);
end
F = max([Fmin, pom1]);
v = r1 + F*(r2 - r3);
change = find(rand(1,d) < CR);
if length(change) == 0          % at least one element is changed
    change = 1 + fix(d*rand(1));
end
y(change) = v(change);
SERIAL – the core of Matlab code
% serial cooperation
% core of code
P = zeros(N, d+1);
for i = 1:N
    P(i,1:d) = a + (b - a).*rand(1,d);
    P(i,d+1) = feval(fn_name, P(i,1:d));
end                             % 0-th generation initialized
fmax = max(P(:,d+1));
fmin = min(P(:,d+1));
func_evals = N;
ni = zeros(1,h); nrst = 0; wi = zeros(1,h) + w0;
while (fmax - fmin > my_eps) & (func_evals < d*max_evals)   % main loop
    [P, func_evals] = soma_to_one1(fn_name, func_evals, P, a, b, mass, step, prt);
    for i = 1:fix(mass/step)
        [P, func_evals] = der_smp1(fn_name, func_evals, P, a, b, F, C);
    end
    for i = 1:fix(mass/step)
        [P, func_evals, ni, wi, nrst] = ...
            comp4_1(fn_name, func_evals, P, a, b, delta, w0, ni, wi, nrst);
    end
    fmax = max(P(:,d+1));
    fmin = min(P(:,d+1));
end                             % main loop - end
PARALLEL – the core of Matlab code
% parallel cooperation
% core of code
for i = 1:N
    P1(i,1:d) = a + (b - a).*rand(1,d);
    P2(i,1:d) = a + (b - a).*rand(1,d);
    P1(i,d+1) = feval(fn_name, P1(i,1:d));
    P2(i,d+1) = feval(fn_name, P2(i,1:d));
end                             % 0-th generation initialized
fmax = max(P1(:,d+1));
fmin = min(P1(:,d+1));
func_evals = 2*N;
ni = zeros(1,h); nrst = 0; wi = zeros(1,h) + w0;
while (fmax - fmin > my_eps) & (func_evals < d*max_evals)   % main loop
    [P1, func_evals] = soma_to_one1(fn_name, func_evals, P1, a, b, mass, step, prt);
    for i = 1:fix(mass/step)
        [P2, func_evals] = der_smp1(fn_name, func_evals, P2, a, b, F, C);
    end
    P1 = sortrows(P1, d+1);
    P2 = sortrows(P2, d+1);
    P = [P1(1:N/2,:); P2(1:N/2,:)];   % merge the better halves
    for i = 1:fix(mass/step)
        [P, func_evals, ni, wi, nrst] = ...
            comp4_1(fn_name, func_evals, P, a, b, delta, w0, ni, wi, nrst);
    end
    fmax = max(P(:,d+1));
    fmin = min(P(:,d+1));
    P1 = P; P2 = P;
end                             % main loop - end