
Non-Uniform Mapping in
Binary-Coded Genetic Algorithms
Kalyanmoy Deb, Yashesh D. Dhebar, and N. V. R. Pavan
Kanpur Genetic Algorithms Laboratory (KanGAL)
Indian Institute of Technology Kanpur
PIN 208016, U.P., India
deb,[email protected],[email protected]
KanGAL Report No. 2012011
Abstract. Binary-coded genetic algorithms (BGAs) traditionally use a
uniform mapping to decode strings to corresponding real-parameter variable values. In this paper, we suggest a non-uniform mapping scheme
for creating solutions towards better regions in the search space, dictated by BGA’s population statistics. Both variable-wise and vector-wise
non-uniform mapping schemes are suggested. Results on five standard
test problems reveal that the proposed non-uniform mapping BGA (or
NBGA) is much faster in converging close to the true optimum than the
usual uniformly mapped BGA. With these baseline results, an adaptive
NBGA approach is then suggested to make the algorithm parameter-free. Results are promising and should encourage further attention to
non-uniform mapping strategies with binary-coded GAs.
1 Introduction
Genetic algorithms (GAs) originally started with a binary-coded representation scheme [1, 2], in which variables are first coded as binary strings comprising Boolean variables (0 and 1). Since such a string of Boolean variables resembled a natural chromosomal structure of multiple genes concatenated into a chromosome, the developers of GAs thought of applying genetic operators (recombination and mutation) to such binary strings, hence the name genetic algorithms.
For handling problems having real-valued variables, every real-valued variable x_i is represented using a binary substring s^i = (s_i1, s_i2, ..., s_iℓ_i), where s_ij ∈ {0, 1}. All n variables are then represented by ℓ = Σ_{i=1}^{n} ℓ_i bits. Early GA researchers suggested a uniform mapping scheme to compute x_i from a substring s^i:

    x_i = x_i^(L) + [(x_i^(U) − x_i^(L)) / (2^ℓ_i − 1)] DV(s^i),        (1)

where ℓ_i = |s^i| is the number of Boolean variables (or bits) used to represent variable x_i, x_i^(L) and x_i^(U) are the lower and upper bounds of variable x_i, and DV(s^i) is the decoded value of string s^i, given as follows: DV(s^i) = Σ_{j=1}^{ℓ_i} 2^(j−1) s_ij. This
mapping scheme is uniform in the range [x_i^(L), x_i^(U)], because there is a constant gap between two consecutive values of a variable, and the gap is equal to (x_i^(U) − x_i^(L))/(2^ℓ_i − 1). The uniform mapping mentioned above covers the entire search space with a uniformly distributed set of points. Without any knowledge about
good and bad regions in the search space, this is a wise thing to do and early
users of BGA have rightly used such a scheme. However, due to its uniformity,
BGA requires a large number of iterations to converge to the requisite optimum.
Moreover, in every generation, the GA population usually has the information
about the current-best or best-so-far solution. This solution can be judiciously
used to make a faster search.
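As an illustration of Equation 1, a minimal Python sketch of the uniform decoding is given below; the function names and the convention that the first bit of the substring is the least-significant bit are our own choices for this sketch, not something prescribed by the report.

    def decoded_value(substring):
        """DV(s^i): treat the first bit of the list as the least-significant bit."""
        return sum(bit * 2**j for j, bit in enumerate(substring))

    def uniform_map(substring, x_low, x_high):
        """Equation (1): uniformly map a binary substring into [x_low, x_high]."""
        ell = len(substring)
        return x_low + (x_high - x_low) / (2**ell - 1) * decoded_value(substring)

    # Example with the variable bounds used later in the paper:
    # DV([1, 0, 1, 0]) = 5, so the decoded value is -8 + 18 * 5/15 = -2.0.
    print(uniform_map([1, 0, 1, 0], -8.0, 10.0))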
In this paper, we suggest the possibility of a non-uniform mapping scheme in which the above equation is modified so that binary substrings map non-uniformly in the search space. If the biasing can be done to have more points
near the good regions of the search space, such a non-uniform mapping may
be beneficial in creating useful solutions quickly. The binary genetic algorithm
(BGA) framework allows such a mapping and we investigate the effect of such
a mapping in this paper.
In the remainder of the paper, we first describe the proposed non-uniform mapping scheme and then describe the overall algorithm (we call it NBGA). Thereafter, we present simulation results of NBGA on a number of standard problems and compare its performance with the original uniformly mapped BGA. The NBGA approach involves an additional parameter. Finally, we propose an adaptive scheme (ANBGA) that does not require this additional parameter and present simulation results with the adaptive NBGA approach as well. Conclusions of this extensive study are then made.
2 Past Efforts of Non-Uniform Mapping in BGA
Despite BGAs being suggested in the early sixties, it is surprising that not many studies exist on non-uniform mapping schemes for coding binary substrings to real values. However, a few studies are worth mentioning.
ARGOT [3] was an attempt to adaptively map fixed binary strings to the decoded variable space. The methodology used several environmentally triggered operators to alter intermediate mappings. These intermediate mappings are based on internal measurements such as parameter convergence, parameter variance and parameter 'positioning' within a possible range of parameter values. The dynamic parameter encoding (DPE) approach [4] adjusts the accuracy of encoded parameters dynamically. In the beginning of a GA simulation, a string encodes only the most significant bits of each parameter (say, four bits or so), and when the GA begins to converge, the most significant bits are recorded and dropped from the encoding. New bits are introduced for additional precision. In the delta coding approach [5], after every run, the population is reinitialized with the substring coding for each parameter representing the distance
or ∆ value away from the corresponding parameter in the best solution of the previous run, thereby forming a new hypercube around the best solution. The size of the hypercube is controlled by adjusting the number of bits used for encoding.
All of the above were proposed in the late eighties and early nineties, and have
not been followed up adequately. Non-uniform mapping in BGA can produce
faster convergence if some information about a good region in the search space is
identified. Here, we suggest a simple procedure that uses the population statistics
to create a non-uniform mapping in BGAs.
3 Proposed Non-Uniformly Mapped BGA (NBGA)
In NBGA, we create more solutions near the best-so-far solution (x^{b,t}) at any generation t. We suggest a polynomial mapping function for this purpose with a user-defined parameter η. Let us consider Figure 1, in which a is the lower bound and b the upper bound of variable x_i, and x_i^{b,t} is the i-th variable value of the best-so-far solution at the current generation t. Let us also assume that the substring s^i decodes to the variable value x_i using Equation 1 with a = x_i^(L) and b = x_i^(U).

Fig. 1. Non-uniform mapping scheme is shown. Area acdxa is equal to area aex′a.

Fig. 2. Vector-wise mapping scheme is shown.

The quantity in the fraction of Equation 1 varies between zero and one, as the decoded value DV(s^i) varies between zero and (2^ℓ_i − 1). Assuming the fraction ζ = DV(s^i)/(2^ℓ_i − 1), which lies in [0, 1], we observe a linear behavior between x_i and ζ for the uniform mapping used in BGA: x_i = a + ζ(b − a), or ζ = (x_i − a)/(b − a).
For NBGA, we re-map the value x_i to obtain x'_i (m : x_i → x'_i) so that there is more concentration of points near x_i^{b,t}, as follows (with η ≥ 0) and as shown in Figure 1:

    m(ζ) = kζ^η,        (2)

where k is a constant, chosen in such a manner that the point x_i = x_i^{b,t} remains at its own place, that is, ∫_0^{(x_i^{b,t}−a)/(b−a)} m(ζ) dζ = (x_i^{b,t} − a)/(b − a). This
condition gives us k = (η + 1)(b − a)^η / (x_i^{b,t} − a)^η. Substituting k into Equation 2 and equating the area under the mapped curve, ∫_0^{(x'_i−a)/(b−a)} m(ζ) dζ, with the area under the uniform mapping, (x_i − a)/(b − a), we have the following mapping:

    x'_i = a + [(x_i − a)(x_i^{b,t} − a)^η]^{1/(1+η)}.        (3)
Figure 1 shows how the points in a ≤ x_i ≤ x_i^{b,t} are re-mapped to move towards x_i^{b,t} for η > 0. For a point in the range x_i^{b,t} ≤ x_i ≤ b, a mapping similar to the above will cause the solution to move towards x_i^{b,t} as well, as shown in the figure. It is clear that, depending on the location of x_i^{b,t} at a generation, points to its left and right will get re-mapped towards the best-so-far point, thereby providing a non-uniform mapping of the binary substrings in the range [a, b].
The above non-uniform mapping scheme can be implemented in two ways: (i)
variable-wise and (ii) vector-wise. We describe them in the following subsections.
3.1 Variable-wise Mapping
In variable-wise mapping, we perform the above re-mapping scheme for every variable, one at a time, taking the corresponding variable value of the best-so-far solution (x_i^{b,t}) and computing the corresponding re-mapped value x'_i. This re-mapping operation is performed before an individual is evaluated. For an individual string s, first the substring s^i is identified for the i-th variable, and then x'_i is calculated using Equation 3. The variable value of the best-so-far solution of the previous generation is used. Thus, the initial generation does not use the non-uniform mapping, but all population members from generation 1 onwards are mapped non-uniformly as described. It is important to note that the original string s is unaltered. The individual is then evaluated using the variable vector x'. The selection, recombination and mutation operators are performed on the string s as usual.
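A minimal sketch of this variable-wise re-mapping is given below. It assumes the uniform_map helper from the earlier sketch; the mirrored formula used on the right of x_i^{b,t} is our own symmetric construction of the "similar mapping" mentioned above, and the function names are ours.

    def nonuniform_remap(x, x_best, a, b, eta):
        """Pull a uniformly decoded value x in [a, b] towards x_best; eta = 0
        recovers the uniform mapping (Equation 3 on the left of x_best)."""
        if x <= x_best:
            return a + ((x - a) * (x_best - a) ** eta) ** (1.0 / (1.0 + eta))
        # Assumed mirror of Equation 3 for points between x_best and b.
        return b - ((b - x) * (b - x_best) ** eta) ** (1.0 / (1.0 + eta))

    def evaluate_variable_wise(substrings, x_best_prev, a, b, eta, f):
        """Decode every substring uniformly, re-map it towards the previous
        generation's best-so-far component, and evaluate f; the binary
        strings themselves stay unaltered."""
        x_prime = [nonuniform_remap(uniform_map(s, a, b), xb, a, b, eta)
                   for s, xb in zip(substrings, x_best_prev)]
        return f(x_prime), x_prime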
3.2 Vector-wise Mapping
In the vector-wise mapping, we coordinate the mapping of all variables in a systematic manner. The mapping operation is explained through Figure 2 for a two-variable case. For any individual s, its location (x) in the n-dimensional variable space can be identified using the original uniform mapping. Also, the location of the best-so-far solution (x^{b,t}) in the n-dimensional space is located next. Thereafter, these two points are joined by a straight line and the line is extended to find the extreme points (A and B) of the bounded search space. Parameterizing x' = A + d(x − A), and substituting a = 0, x_i = 1 and x_i^{b,t} = ‖x^{b,t} − A‖/‖x − A‖ in Equation 3, we compute the re-mapped point x'_i and save it as d'. Thereafter, the re-mapped variable vector can be computed as follows:

    x' = A + d'(x − A).        (4)
The re-mapped vector will be created along the line shown in the figure and will
be placed in between x and xb,t . The vector-wise re-mapping is done for every
population member using the best-so-far solution and new solutions are created.
They are then evaluated and used in the selection operator. Recombination and
mutation operators are used as usual. Binary strings are unaltered. The vector-wise re-mapping is likely to work well on problems having linkage among
variables. We present the results obtained from both methods in the following
section.
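The sketch below illustrates one possible reading of the vector-wise re-mapping (Equation 4): the boundary point A is taken as the intersection of the box boundary with the line through x^{b,t} and x, extended beyond x. The use of NumPy and the helper boundary_point are our own choices and not part of the report.

    import numpy as np

    def boundary_point(x, direction, lower, upper):
        """Farthest point x + t*direction (t >= 0) inside the box defined by the
        per-variable bound arrays lower and upper; assumes x lies inside the box."""
        t_max = np.inf
        for xi, di, lo, hi in zip(x, direction, lower, upper):
            if di > 0:
                t_max = min(t_max, (hi - xi) / di)
            elif di < 0:
                t_max = min(t_max, (lo - xi) / di)
        return x + t_max * direction

    def vector_wise_remap(x, x_best, lower, upper, eta):
        """Equation (4): move the whole decoded vector x towards x_best along
        the line A--x--x_best; the binary string itself stays unaltered."""
        x, x_best = np.asarray(x, float), np.asarray(x_best, float)
        if np.allclose(x, x_best):
            return x.copy()
        A = boundary_point(x, x - x_best, lower, upper)  # extreme point behind x
        denom = np.linalg.norm(x - A)
        if denom == 0.0:                                 # x already sits on the boundary
            return x.copy()
        d_best = np.linalg.norm(x_best - A) / denom
        # One-dimensional re-mapping of d = 1 (the point x) with a = 0 in Equation (3):
        d_prime = (1.0 * d_best ** eta) ** (1.0 / (1.0 + eta))
        return A + d_prime * (x - A)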
4 Results
We use binary tournament selection, a multi-variable crossover in which a single-point crossover is applied to each substring representing a variable, and the usual bit-wise mutation operator.
We consider five standard unconstrained objective functions, which are given below:
    Sphere:      f(x) = Σ_{i=1}^{n} x_i^2,                                                              (5)

    Ellipsoidal: f(x) = Σ_{i=1}^{n} i x_i^2,                                                            (6)

    Ackley:      f(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,   (7)

    Schwefel:    f(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2,                                                (8)

    Rosenbrock:  f(x) = Σ_{i=1}^{n−1} [100 (x_{i+1} − x_i^2)^2 + (1 − x_i)^2].                          (9)
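For completeness, minimal NumPy definitions of the five test functions, as written in Equations 5-9 above, are sketched below; x is assumed to be a one-dimensional NumPy array, and the vectorized style is our own choice.

    import numpy as np

    def sphere(x):        # Equation (5)
        return np.sum(x ** 2)

    def ellipsoidal(x):   # Equation (6)
        return np.sum(np.arange(1, len(x) + 1) * x ** 2)

    def ackley(x):        # Equation (7)
        n = len(x)
        return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

    def schwefel(x):      # Equation (8)
        return np.sum(np.cumsum(x) ** 2)

    def rosenbrock(x):    # Equation (9)
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)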
All problems are considered for n = 20 variables. The following parameter values are fixed for all runs:

– Population size = 100,
– String length of each variable = 15,
– Lower bound = −8, upper bound = 10,
– Selection type: binary tournament selection,
– Crossover: variable-wise crossover with crossover probability = 0.9,
– Mutation: bit-wise mutation with mutation probability = 0.0033.
An algorithm terminates when either of the following two scenarios takes place: (i) a maximum generation count of tmax = 3,000 is reached, or (ii) the difference between the best objective function value and the corresponding optimal function value becomes 0.01 or less. For each function, 50 runs are performed and the number of function evaluations at termination of each run is recorded. Thereafter, the minimum, median and maximum numbers of function evaluations (FE) are
reported. Note that a run is called a success (S) if a solution whose objective function value differs from the optimal function value by at most 0.01 is achieved; otherwise the run is called a failure (F). For failed runs, the obtained objective function values (f) are reported.
4.1 Linear Increase in η
Figure 1 suggests that a large value of η will bring solutions close to the current
best solution. During initial generations, the use of a large η may not be a good
strategy. Thus, first, we use an η-update scheme in which η is increased from
zero linearly with generations to a user-specified maximum value ηmax at the
maximum generation tmax :
    η(t) = (t / tmax) ηmax.        (10)
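A small sketch of this schedule (Equation 10) is given below; the default values are the tmax used in this study and one of the ηmax settings examined later, and the per-generation increment ηmax/tmax is made explicit in the comment.

    def eta_linear(t, t_max=3000, eta_max=10000):
        """Equation (10): eta grows linearly from 0 at t = 0 to eta_max at t = t_max."""
        return (t / t_max) * eta_max

    # The rate of increase is eta_max / t_max per generation (10000/3000, about 3.33);
    # at generation 23, for example, eta is about 76.67.
    print(eta_linear(23))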
We use different values of ηmax (10, 100, 1000, 5000, 10000, and 15000) for the variable-wise mapping case and compare the results with the usual uniformly mapped binary-coded GA. The results for the sphere function are tabulated in Table 1.
Table 1. Performance of linearly increasing η schemes for the sphere function.

  Method                  FE or f   S or F   min      median   max
  BGA                     FE        S = 17   9,701    13,201   23,501
                          f         F = 33   0.0517   0.0557   0.157
  NBGA (ηmax = 10)        FE        S = 50   10,501   12,901   176,701
  NBGA (ηmax = 100)       FE        S = 50   7,401    10,101   87,901
  NBGA (ηmax = 1,000)     FE        S = 50   3,901    5,901    19,501
  NBGA (ηmax = 5,000)     FE        S = 50   2,301    2,901    3,501
  NBGA (ηmax = 10,000)    FE        S = 50   1,801    2,301    2,801
  NBGA (ηmax = 15,000)    FE        S = 50   1,901    2,301    3,101
The first aspect to note is that the original BGA (with uniform mapping) is not able to solve the sphere problem in 33 out of 50 runs within 3,000 generations, whereas all NBGA runs with ηmax ≥ 10 are able to find the desired solution in all 50 runs. This is a remarkable performance by the proposed NBGA.
The best NBGA performance is observed with ηmax = 10,000. Note that although such a high ηmax value is set, the median run with 2,301 function evaluations required only 23 generations. At that generation, the η value was η = (23/3000) × 10000, or 76.67. The important point about setting ηmax = 10,000 is that it sets an appropriate rate of increase of η with generations (about 3.33 per generation) for the overall algorithm to work best. An increase of η of 1.67 to 3.33 per generation seems to work well for the sphere function; however, an increase of η of 5 per generation (equivalent to ηmax =
15,000) is found to be not so good for this problem. Since the sphere function is a unimodal problem, a large effective η maps solutions closer to the best-so-far solution. But a solution too close to the best-so-far solution (achieved with a large η) may cause premature convergence; hence a deterioration in performance is observed.
Figure 3 shows the variation in function value of the population-best solution with generation for the best run on the sphere function. It is clear that NBGA is much faster than BGA.
Fig. 3. Population-best objective function value with generation for the sphere function.

Fig. 4. Population-best objective function value with generation for the ellipsoidal function.
Next, we consider the ellipsoidal function; the results are shown in Table 2.

Table 2. Performance of linearly increasing η schemes for the ellipsoidal function.

  Method                  FE or f   S or F   min       median    max
  BGA                     f         F = 50   0.2374    4.5797    29.6476
  NBGA (ηmax = 10)        FE        S = 11   40,001    236,501   284,201
                          f         F = 39   0.01      0.0564    3.3013
  NBGA (ηmax = 100)       FE        S = 46   24,501    78,001    198,101
                          f         F = 04   0.0137    0.01986   0.0246
  NBGA (ηmax = 1,000)     FE        S = 50   12,001    31,401    125,201
  NBGA (ηmax = 5,000)     FE        S = 50   7,301     21,601    48,601
  NBGA (ηmax = 10,000)    FE        S = 50   6,401     8,501     29,801
  NBGA (ηmax = 15,000)    FE        S = 50   5,701     7,701     21,501

For this problem, ηmax = 15,000 turns out to be the best option. This corresponds to an increase of η of 5 per generation. At the final generation of the best run (the one with the minimum number of function evaluations), an effective η = (57/3000) × 15000, or 285, was used. This problem is also unimodal; hence a larger η produced faster convergence. Notice here that the original BGA (with uniform mapping
of variables) is not able to find the desired solution (close to the true optimum) in any of its 50 runs within the maximum of 3,000 generations, whereas all runs with ηmax ≥ 1,000 are able to solve the problem in all 50 runs with a small number of function evaluations. Figure 4 shows the variation in function value of the population-best solution with generation for the best run on the ellipsoidal function. Again, NBGA is much faster than BGA.
The performance of the BGA and NBGA approaches on Ackley's function is shown in Table 3.

Table 3. Performance of linearly increasing η schemes for Ackley's function.

  Method                  FE or f   S or F   min       median    max
  BGA                     f         F = 50   0.249     1.02      1.805
  NBGA (ηmax = 10)        f         F = 50   0.0208    0.1113    0.703
  NBGA (ηmax = 100)       FE        S = 33   74,901    122,801   232,101
                          f         F = 17   0.01073   0.0203    0.08
  NBGA (ηmax = 1,000)     FE        S = 50   20,601    32,401    203,701
  NBGA (ηmax = 5,000)     FE        S = 50   11,001    40,101    110,701
  NBGA (ηmax = 10,000)    FE        S = 50   10,301    31,601    66,701
  NBGA (ηmax = 15,000)    FE        S = 36   9,201     25,701    51,301
                          f         F = 14   1.1551    1.1551    2.0133

The best performance is observed with ηmax = 10,000, with
100% successful runs. Here, η ≈ 1,054 is reached at the final generation of the median NBGA run. Importantly, a rate of increase of 3.33 per generation seems to be the best choice for Ackley's function. The BGA is found to fail in all 50 runs. Figure 5 shows the objective function value of the population-best solution with generation for Ackley's function; the results from the median run are shown. The proposed NBGA approach is found to be much faster than BGA.
Fig. 5. Population-best objective function value with generation for Ackley's function.

Fig. 6. Population-best objective function value with generation for Schwefel's function.
Next, we consider Schwefel's function; Table 4 tabulates the results.

Table 4. Performance of linearly increasing η schemes for Schwefel's function.

  Method                  FE or f   S or F   min       median    max
  BGA                     f         F = 50   4.7183    17.417    37.6447
  NBGA (ηmax = 10)        f         F = 50   2.129     6.2925    16.093
  NBGA (ηmax = 100)       f         F = 50   1.1814    2.3364    3.8168
  NBGA (ηmax = 1,000)     FE        S = 19   73,501    251,001   298,501
                          f         F = 31   0.0107    0.0436    0.1954
  NBGA (ηmax = 5,000)     FE        S = 50   27,601    144,401   292,801
  NBGA (ηmax = 10,000)    FE        S = 50   35,501    104,901   167,301
  NBGA (ηmax = 15,000)    FE        S = 50   25,701    74,401    124,301

Here,
ηmax = 15,000 performs the best and the original BGA fails to make any of its 50 runs successful. Figure 6 shows the objective function value of the population-best solution with generation for Schwefel's function. The proposed NBGA approach is found to be much faster than BGA.
Next, we consider Rosenbrock's function; Table 5 shows the results.

Table 5. Performance of linearly increasing η schemes for Rosenbrock's function.

  Method                  FE or f   S or F   min       median    max
  BGA                     f         F = 50   0.3686    42.78     281.7013
  NBGA (ηmax = 10)        f         F = 50   0.1328    0.4836    90.1493
  NBGA (ηmax = 100)       f         F = 50   0.0865    0.1918    17.0241
  NBGA (ηmax = 1,000)     f         F = 50   0.0505    13.2236   76.1873
  NBGA (ηmax = 5,000)     f         F = 50   0.0112    10.4476   19.3426
  NBGA (ηmax = 10,000)    FE        S = 27   200,301   247,001   288,101
                          f         F = 23   0.012     0.3443    19.0727
  NBGA (ηmax = 15,000)    FE        S = 24   109,001   183,701   213,801
                          f         F = 26   0.0112    3.9888    69.2418

This
function is considered to be a difficult function to solve for the optimum. BGA could not find the desired optimum in any of its 50 runs. Also, NBGAs with small ηmax values could not solve the problem. But NBGA with ηmax = 10,000 is able to find the desired solution in 27 out of 50 runs.
It is clear from the above results that (i) the proposed non-uniform mapping method (NBGA) performs much better than the usual BGA, and (ii) in general, a high value of ηmax produces better performance.
5 Adaptive NBGA
An increasing η with a rate of about 3 to 5 per generation seems to work well, but the optimal performance depends on the problem. In this section, we propose an adaptive approach in which, instead of a predefined increase in η with generation, the algorithm decides whether to increase or decrease η after every few generations.
In the adaptive NBGA, we initialize η to be zero at the initial generation. Thereafter, we keep track of the current-best¹ objective function value, f(x^{cb,t}), every five generations, and re-calculate ηmax as follows:

    ηmax = ηmax − 100 [f(x^{cb,t}) − f(x^{cb,t−5})] / f(x^{cb,t−5}).        (11)
The above equation suggests that if the current-best is better than the previous
current-best solution (five generations ago), ηmax is increased by an amount
proportional to the percentage decrease in objective function value. Since an increase
in ηmax causes decoded values to move closer to the best-so-far solution, an
improvement in current-best solution is rewarded by a further movement towards
the best-so-far solution. On the other hand, if the new current-best is worse than
the previous current-best solution, ηmax is reduced. In effect, an effort is being
made to increase diversity of the population by not moving the solutions much
towards the best-so-far solution.
When the change in the current-best function value is less than 0.001 over three consecutive updates of ηmax, it is likely that the population diversity is depleted; in such a case we set ηmax = 0 and also set the mutation probability to 0.5 to introduce diversity into the population. From the next generation onwards, the mutation probability is set back to 0.0033 as usual, but ηmax is again free to be updated using Equation 11. After ηmax is updated every five generations, η is varied from ηmax/5 to ηmax over the next five generations in uniform steps.
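A minimal sketch of this adaptive scheme is given below. The class structure, attribute names, and the exact bookkeeping of the three consecutive small changes are our own choices; only the update rule of Equation 11, the reset to ηmax = 0 with mutation probability 0.5, and the ηmax/5-to-ηmax ramp follow the description above.

    class AdaptiveEtaMax:
        """Tracks the current-best objective value every five generations and
        adapts eta_max as in Equation (11)."""

        def __init__(self, p_mut_usual=0.0033):
            self.eta_max = 0.0
            self.f_prev = None        # current-best value five generations ago
            self.small_changes = 0    # consecutive updates with |change| < 0.001
            self.p_mut_usual = p_mut_usual

        def update(self, f_current_best):
            """Call every five generations; returns the mutation probability to use next."""
            p_mut = self.p_mut_usual
            if self.f_prev is not None:
                change = f_current_best - self.f_prev
                # Equation (11): an improvement (change < 0) increases eta_max,
                # a worsening decreases it (assuming positive objective values).
                self.eta_max -= 100.0 * change / self.f_prev
                self.small_changes = self.small_changes + 1 if abs(change) < 0.001 else 0
                if self.small_changes >= 3:
                    # Diversity appears depleted: reset eta_max and mutate heavily once.
                    self.eta_max, self.small_changes, p_mut = 0.0, 0, 0.5
            self.f_prev = f_current_best
            return p_mut

        def eta_schedule(self):
            """eta for the next five generations: eta_max/5 up to eta_max in uniform steps."""
            return [self.eta_max * k / 5.0 for k in range(1, 6)]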
5.1 Results with Adaptive NBGA Approach
All five problems are chosen, and parameter settings identical to those used in Section 4 are used here. Table 6 compares the performance of the adaptive NBGA approach with both variable-wise and vector-wise non-uniform mappings. In both cases, an identical adaptation scheme (Equation 11) is used. The table also compares the adaptive NBGA results with the original variable-wise NBGA approach discussed earlier.
Figure 7 shows the variation of ηmax and the population-best objective value with generation on three problems. It can be observed that both variable-wise and vector-wise non-uniform mapping approaches perform well. However, since the adaptive approach does not require a pre-defined ηmax value and is still able to perform almost similarly to the pre-defined increasing η scheme, we recommend the use of the adaptive NBGA approach.
¹ The current-best solution is different from the best-so-far solution, as the former is the population-best solution at the current generation.
Table 6. Comparison of performance between the adaptive NBGA and original NBGA approaches.

  Function      Method                  FE or f   S or F   min       median    max
  Sphere        Variable-wise           FE        S = 50   2,001     3,401     4,501
                Vector-wise             FE        S = 50   2,001     2,601     3,501
                NBGA (ηmax = 10,000)    FE        S = 50   1,801     2,301     2,801
  Ellipsoidal   Variable-wise           FE        S = 50   5,401     8,501     14,601
                Vector-wise             FE        S = 50   5,101     7,501     17,101
                NBGA (ηmax = 15,000)    FE        S = 50   5,701     7,701     21,501
  Ackley        Variable-wise           FE        S = 50   8,901     29,901    69,001
                Vector-wise             FE        S = 49   9,101     25,101    241,001
                                        f         F = 01   1.841     1.841     1.841
                NBGA (ηmax = 10,000)    FE        S = 50   10,301    31,601    66,701
  Schwefel      Variable-wise           FE        S = 50   24,501    58,001    103,601
                Vector-wise             FE        S = 50   18,801    75,601    113,901
                NBGA (ηmax = 15,000)    FE        S = 50   25,701    74,401    124,301
  Rosenbrock    Variable-wise           f         F = 50   0.019     0.092     12.413
                Vector-wise             f         F = 50   0.023     0.083     12.888
                NBGA (ηmax = 10,000)    FE        S = 27   200,301   247,001   288,101
                                        f         F = 23   0.012     0.3443    19.0727
6 Conclusions
In this paper, we have suggested two non-uniform mapping schemes for binary-coded genetic algorithms (BGAs). Using the best-so-far solution information at every generation, the variable-wise and vector-wise non-uniformly coded BGAs (or NBGAs) map binary substrings to a variable value that is somewhat closer to the best-so-far solution than its original decoded value. The non-uniform mapping requires a parameter ηmax, which takes a non-negative value. The variable-wise approach with an increasing ηmax-update is applied to five standard test problems and the performance of the proposed NBGA has been found to be much better than that of the BGA approach that uses a uniform mapping scheme.
Thereafter, the NBGA approach has been made adaptive by iteratively updating the ηmax parameter associated with the NBGA approach. Starting with ηmax = 0 (uniform mapping) at the initial generation, the adaptive NBGA approach has been found to update the ηmax parameter adaptively so as to produce results similar to those of the increasing ηmax-update strategy. Both variable-wise and vector-wise non-uniform approaches have been found to perform equally well.
The research in evolutionary algorithms originally started with binary-coded
GAs, but recently their use has been limited due to the advent of real-parameter
GAs and other EC methodologies. Instead of BGA’s usual choice of uniform
mapping, the effective use of a non-uniformly mapped representation scheme
demonstrated in this paper should cause a resurrection of BGAs in the near
future.
Fig. 7. Variations in ηmax with generation number for variable-wise and vector-wise non-uniform mapping are shown: (a) variable-wise on the sphere function, (b) vector-wise on the sphere function, (c) variable-wise on Ackley's function, (d) vector-wise on Ackley's function, (e) variable-wise on Rosenbrock's function, (f) vector-wise on Rosenbrock's function.
References

1. J. H. Holland. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press, 1975.
2. D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
3. C. G. Shaefer. The ARGOT strategy: Adaptive representation genetic optimizer technique. In Proceedings of the Second International Conference on Genetic Algorithms, pages 50-58, 1987.
4. N. N. Schraudolph and R. K. Belew. Dynamic parameter encoding for genetic algorithms. Technical Report LAUR90-2795, Los Alamos, NM: Los Alamos National Laboratory, 1990.
5. D. Whitley, K. Mathias, and P. Fitzhorn. Delta coding: An iterative search strategy for genetic algorithms. In Proceedings of the Fourth International Conference on Genetic Algorithms, pages 77-84. San Mateo, CA: Morgan Kaufmann, 1991.