
2006 IEEE Conference on Systems, Man, and Cybernetics
October 8-11, 2006, Taipei, Taiwan
Solving Nonlinear Constrained Optimization Problems
by the ε Constrained Differential Evolution
Tetsuyuki Takahama, Setsuko Sakai and Noriyuki Iwane
Abstract— The ε constrained method is an algorithm transformation method, which can convert algorithms for unconstrained problems into algorithms for constrained problems using the ε level comparison, which compares search points based on their constraint violations. We propose the ε constrained differential evolution εDE, which is the combination of the ε constrained method and differential evolution (DE). DE is a simple, fast and stable search algorithm that is robust to multi-modal problems. It is expected that the εDE is robust to multi-modal problems, runs very fast and finds very high quality solutions. The effectiveness of the εDE is shown by comparing it with various methods on well-known nonlinear constrained problems.
I. INTRODUCTION
Constrained optimization problems, especially nonlinear
optimization problems, where objective functions are minimized under given constraints, are very important and frequently appear in the real world. In this study, the following
optimization problem (P) with inequality constraints, equality constraints, upper bound constraints and lower bound
constraints will be discussed.
(P)  minimize    f(x)
     subject to  gj(x) ≤ 0,  j = 1, ..., q
                 hj(x) = 0,  j = q + 1, ..., m
                 li ≤ xi ≤ ui,  i = 1, ..., n,        (1)
where x = (x1 , x2 , · · · , xn ) is an n dimensional vector,
f (x) is an objective function, gj (x) ≤ 0 and hj (x) = 0
are q inequality constraints and m − q equality constraints,
respectively. Functions f, gj and hj are linear or nonlinear
real-valued functions. Values ui and li are the upper bound
and the lower bound of xi , respectively. Also, let the feasible
space in which every point satisfies all constraints be denoted
by F and the search space in which every point satisfies the
upper and lower bound constraints be denoted by S (⊃ F).
There exist many studies on solving constrained optimization problems using evolutionary algorithms (EAs) [1], [2]
and particle swarm optimization (PSO) [3]. However, there
are some problems: (1) The ability to solve multi-modal
problems is insufficient. EAs for constrained optimization
can locate a feasible region and find feasible solutions in
uni-modal problems. However, when solving multi-modal problems that have many local solutions in a feasible region, even if the EAs can locate the feasible region, they are sometimes trapped in a local solution and cannot find an optimal solution. Thus a method that is robust to multi-modal problems is required. (2) The stability and efficiency of searches are insufficient. Even when solving the same problem, EAs sometimes find good solutions but sometimes find only very bad solutions, because the initial search points are usually generated randomly and the search process includes many stochastic operations.

This work is supported in part by a Grant-in-Aid for Scientific Research (C) (No. 16500083, 17510139) of the Japan Society for the Promotion of Science. T. Takahama and N. Iwane are with the Department of Intelligent Systems, Hiroshima City University, Asaminami-ku, Hiroshima, 731-3194 Japan ([email protected]). S. Sakai is with the Faculty of Commercial Sciences, Hiroshima Shudo University, Asaminami-ku, Hiroshima, 731-3195 Japan ([email protected]).
To overcome these problems, we propose the εDE [4], defined by applying the ε constrained method [5] to differential evolution (DE) [6], [7]. DE is an evolutionary algorithm for global optimization that incorporates both crossover and mutation into a simple operation: a scaled difference between two randomly chosen parents is added to another randomly chosen parent. By incorporating DE, problems (1) and (2) can be overcome, since DE is a simple, fast and stable search algorithm that is robust to multi-modal problems. The εDE is stable because it uses a simple and stable selection and replacement mechanism without extra stochastic operations. The εDE is also efficient because it uses only simple arithmetic operations and does not use any rank-based operations or mutations based on Gaussian and Cauchy distributions.
The effectiveness of the εDE is shown by comparing it with various methods on some well known problems from [2], where the performance of many methods, including EA-based methods and mathematical methods, was compared.
II. PREVIOUS WORKS
EAs for constrained optimization can be classified into
several categories according to the way the constraints are
treated as follows:
(1) Constraints are only used to see whether a search
point is feasible or not [8]. Approaches in this category are
usually called death penalty methods. In this category, the
searching process begins with one or more feasible points
and continues to search for new points within the feasible
region. When a new search point is generated and the point
is not feasible, the point is repaired or discarded. In this
category, generating initial feasible points is difficult and
computationally demanding when the feasible region is very
small.
(2) The constraint violation, which is the sum of the
violation of all constraint functions, is combined with the
objective function. The penalty function method is in this
category [9]–[13]. In the penalty function method, an extended objective function is defined by adding the constraint
violation to the objective function as a penalty. The optimization of the objective function and the constraint violation
is realized by the optimization of the extended objective
function. The main difficulty of the penalty function method
is the selection of an appropriate value for the penalty
coefficient that adjusts the strength of the penalty.
(3) The constraint violation and the objective function are
used separately. In this category, both the constraint violation
and the objective function are optimized by a lexicographic
order in which the constraint violation precedes the objective function. Deb [14] proposed a method that adopts the
extended objective function, which realizes the lexicographic
ordering, and utilizes a genetic algorithm (GA). Takahama
and Sakai proposed the α constrained method [15] and ε
constrained method [5] that adopt a lexicographic ordering
with relaxation of the constraints and utilize evolutionary
algorithms. Runarsson and Yao [16] proposed the stochastic
ranking method that adopts the stochastic lexicographic
order, which ignores the constraint violation with some
probability, and utilizes an evolution strategy (ES). Mezura-Montes and Coello [17] proposed a comparison mechanism
that is equivalent to the lexicographic ordering and is utilized
in ES. Venkatraman and Yen [18] proposed a two-step optimization method using GA, which first optimizes the constraint violation and then the objective function. These methods were
successfully applied to various problems.
(4) The constraints and the objective function are optimized by multiobjective optimization methods. In this category, constrained optimization problems are solved as
multiobjective optimization problems in which the objective
function and the constraint functions are objectives to be optimized [19]–[24]. But in many cases, solving multiobjective
optimization problems is a more difficult and expensive task
than solving single objective optimization problems.
It has been shown that the methods in the third category
have better performance than methods in other categories
in many benchmark problems. In particular, the α and the ε constrained methods are quite new and unique approaches to constrained optimization. We call these methods algorithm transformation methods, because they do not convert the objective function but instead convert an algorithm for unconstrained optimization into an algorithm for constrained optimization by replacing the ordinary comparisons in direct search methods with the α level and the ε level comparisons. These methods can be applied to various unconstrained direct search methods to obtain constrained optimization algorithms.
We showed the advantage of the α constrained method by
applying the method to Powell’s method, nonlinear simplex
method, GAs and PSO, and by constructing the αPowell
[15], the αSimplex [25], [26], the αGA [27] and the αPSO
[28], respectively. Also, we applied the ε constrained method
to PSO and GA, and proposed the εPSO [5], the hybrid
algorithm of the εPSO and εGA [29] and adaptive algorithm
of the εPSO [30] to improve the stability of the εPSO.
These α and ε constrained algorithms showed much higher performance than other algorithms. In this study, we propose the εDE, which is robust to multi-modal problems and can find very high quality solutions efficiently and stably.
III. THE ε CONSTRAINED METHOD
A. Constraint violation and ε level comparisons
In the ε constrained method, a constraint violation φ(x) is defined. The constraint violation can be given by the maximum of all constraint violations, or by their sum:
φ(x) = max{ maxj max{0, gj(x)}, maxj |hj(x)| }          (2)
φ(x) = Σj ||max{0, gj(x)}||^p + Σj ||hj(x)||^p          (3)
where p is a positive number.
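For concreteness, the two violation measures in eqs. (2) and (3) can be sketched in Python (our illustration; the helper names phi_max and phi_sum are not from the paper):

```python
# Sketch of the constraint-violation measures (2) and (3).
# g is a list of inequality-constraint functions (g_j(x) <= 0 required),
# h is a list of equality-constraint functions (h_j(x) = 0 required).

def phi_max(x, g, h):
    """Eq. (2): the maximum violation over all constraints."""
    viol = [max(0.0, gj(x)) for gj in g] + [abs(hj(x)) for hj in h]
    return max(viol, default=0.0)

def phi_sum(x, g, h, p=2):
    """Eq. (3): the sum of the p-th powers of all violations."""
    return (sum(max(0.0, gj(x)) ** p for gj in g)
            + sum(abs(hj(x)) ** p for hj in h))
```

Under either definition, φ(x) = 0 exactly when x is feasible.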
The ε level comparison is defined as an order relation on
a pair of objective function value and constraint violation
(f (x), φ(x)). If the constraint violation of a point is greater
than 0, the point is not feasible and its worth is low. The
ε level comparisons are defined basically as a lexicographic
order in which φ(x) precedes f (x), because the feasibility
of x is more important than the minimization of f (x). This
precedence can be adjusted by the parameter ε.
Let f1 (f2) and φ1 (φ2) be the function value and the constraint violation at a point x1 (x2), respectively. Then, for any ε satisfying ε ≥ 0, the ε level comparisons <ε and ≤ε between (f1, φ1) and (f2, φ2) are defined as follows:
(f1, φ1) <ε (f2, φ2) ⇔  f1 < f2,  if φ1, φ2 ≤ ε
                         f1 < f2,  if φ1 = φ2           (4)
                         φ1 < φ2,  otherwise

(f1, φ1) ≤ε (f2, φ2) ⇔  f1 ≤ f2,  if φ1, φ2 ≤ ε
                         f1 ≤ f2,  if φ1 = φ2           (5)
                         φ1 < φ2,  otherwise
In the case of ε = ∞, the ε level comparisons <∞ and ≤∞ are equivalent to the ordinary comparisons < and ≤ between function values. Also, in the case of ε = 0, <0 and ≤0 are equivalent to the lexicographic orders in which the constraint violation φ(x) precedes the function value f(x).
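In code, the ε level comparison <ε of eq. (4) amounts to a few branches (a minimal Python sketch; the name eps_less is ours):

```python
def eps_less(fp1, fp2, eps):
    """ε level comparison (f1, φ1) <ε (f2, φ2) of eq. (4)."""
    f1, p1 = fp1
    f2, p2 = fp2
    if (p1 <= eps and p2 <= eps) or p1 == p2:
        return f1 < f2      # both ε-feasible, or equal violation: compare f
    return p1 < p2          # otherwise: compare constraint violations
```

With eps = float('inf') this reduces to the ordinary comparison f1 < f2, and with eps = 0.0 it is the lexicographic order in which φ precedes f.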
B. The properties of the ε constrained method
The ε constrained method converts a constrained optimization problem into an unconstrained one by replacing
the order relation in direct search methods with the ε
level comparison. An optimization problem solved by the ε
constrained method, that is, a problem in which the ordinary
comparison is replaced with the ε level comparison, (P≤ε ),
is defined as follows:
(P≤ε)  minimize≤ε  f(x)                                 (6)
where minimize≤ε denotes the minimization based on the ε
level comparison ≤ε . Also, a problem (Pε ) is defined such
that the constraints of (P), that is, φ(x) = 0, is relaxed and
replaced with φ(x) ≤ ε:
(Pε)  minimize    f(x)
      subject to  φ(x) ≤ ε                              (7)
It is obvious that (P0 ) is equivalent to (P).
For the three types of problems, (Pε ), (P≤ε ) and (P), the
following theorems are given based on the ε constrained
method [5].
Theorem 1: If an optimal solution of (Pε) exists, any optimal solution of (P≤ε) is an optimal solution of (Pε).
Theorem 2: If an optimal solution of (P) exists, any
optimal solution of (P≤0 ) is an optimal solution of (P).
Theorem 3: Let {εn} be a strictly decreasing non-negative sequence that converges to 0. Let f(x) and φ(x) be continuous functions of x. Assume that an optimal solution x* of (P0) exists and that an optimal solution x̂n of (P≤εn) exists for every εn. Then, any accumulation point of the sequence {x̂n} is an optimal solution of (P0).
Theorems 1 and 2 show that a constrained optimization problem can be transformed into an equivalent unconstrained optimization problem by using the ε level comparison. So, if the ε level comparison is incorporated into an existing unconstrained optimization method, constrained optimization problems can be solved. Theorem 3 shows that, in the ε constrained method, an optimal solution of (P0) can be obtained by letting ε converge to 0, just as an optimal solution can be obtained in the penalty method by increasing the penalty coefficient to infinity.
IV. THE εDE

In this section, we first describe differential evolution. Then, we describe the εDE, which is the integration of the ε constrained method and differential evolution.

A. Differential Evolution

Differential evolution (DE) is a variant of ES proposed by Storn and Price [6], [7]. DE is a stochastic direct search method using a population, or multiple search points. DE has been successfully applied to optimization problems including non-linear, non-differentiable, non-convex and multi-modal functions. It has been shown that DE is fast and robust on such functions.

In DE, initial individuals are randomly generated within the search space and form an initial population. Each individual contains n genes as decision variables, i.e., a decision vector. At each generation or iteration, all individuals are selected as parents. Each parent is processed as follows: The mutation process begins by choosing 1 + 2 num individuals from the parents other than the parent being processed. The first individual is a base vector. All subsequent individuals are paired to create num difference vectors. The difference vectors are scaled by the scaling factor F and added to the base vector. The resulting vector is then recombined, or crossovered, with the parent. The probability of recombination at an element is controlled by the crossover factor CR. This crossover process produces a trial vector. Finally, for survivor selection, the trial vector is accepted for the next generation if the trial vector is better than the parent.

In this study, the DE/rand/1/exp variant, where the number of difference vectors is 1 (num = 1), is used.

B. The algorithm of the εDE

The algorithm of the εDE based on the DE/rand/1/exp variant, which is used in this study, is as follows:

Step0 Initialization. N initial individuals xi are generated as the initial search points, forming an initial population P(0) = {xi, i = 1, 2, ..., N}. The ε level is fixed to 0 in this study.
Step1 Termination condition. If the number of generations (iterations) exceeds the maximum generation Tmax, the algorithm is terminated.
Step2 Mutation. For each individual xi, three individuals xp1, xp2 and xp3 are chosen from the population without overlapping xi or each other. A new vector x′ is generated from the base vector xp1 and the difference vector xp2 − xp3 as follows:

x′ = xp1 + F(xp2 − xp3)                                 (8)

where F is a scaling factor.
Step3 Crossover. The vector x′ is crossovered with the parent xi. A crossover point j is chosen randomly from all dimensions [1, n]. The element at the j-th dimension of the trial vector xnew is inherited from the j-th element of the vector x′. The elements of subsequent dimensions are inherited from x′ with exponentially decreasing probability defined by a crossover factor CR; otherwise, they are inherited from the parent xi. In actual processing, Step2 and Step3 are integrated into one operation.
Step4 Survivor selection. The trial vector xnew is accepted for the next generation if it is better than the parent xi under the ε level comparison.
Step5 Go back to Step1.
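Putting Steps 0–5 together, a minimal Python sketch of the εDE (DE/rand/1/exp with the ε level fixed to 0 by default set to a parameter) might look as follows. This is our illustration, not the authors' code; in particular, clipping trial elements to the bounds and the random-seed handling are our assumptions:

```python
import random

def eps_de(f, phi, bounds, N=20, F=0.7, CR=0.9, Tmax=499, eps=0.0, seed=1):
    """Sketch of the εDE with the DE/rand/1/exp variant."""
    rng = random.Random(seed)
    n = len(bounds)

    def less(a, b):                       # ε level comparison <ε, eq. (4)
        (f1, p1), (f2, p2) = a, b
        if (p1 <= eps and p2 <= eps) or p1 == p2:
            return f1 < f2
        return p1 < p2

    # Step0: N random initial individuals within the bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(N)]
    vals = [(f(x), phi(x)) for x in pop]

    for _ in range(Tmax):                 # Step1: generation loop
        for i in range(N):
            # Step2: mutation, eq. (8), with one difference vector.
            p1, p2, p3 = rng.sample([k for k in range(N) if k != i], 3)
            v = [pop[p1][d] + F * (pop[p2][d] - pop[p3][d]) for d in range(n)]
            # Step3: exponential crossover starting at a random dimension.
            trial = pop[i][:]
            j = rng.randrange(n)
            for _ in range(n):
                trial[j] = min(max(v[j], bounds[j][0]), bounds[j][1])
                j = (j + 1) % n
                if rng.random() >= CR:
                    break
            # Step4: survivor selection by the ε level comparison.
            tv = (f(trial), phi(trial))
            if less(tv, vals[i]):
                pop[i], vals[i] = trial, tv
    # Return the best individual under the lexicographic order (φ, f),
    # which matches the ε level comparison at ε = 0.
    best = min(range(N), key=lambda k: (vals[k][1], vals[k][0]))
    return pop[best], vals[best]
```

For example, minimizing f(x) = x0² + x1² subject to x0 ≥ 1 converges to a point near (1, 0) with an objective value close to 1.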
V. SOLVING NONLINEAR OPTIMIZATION PROBLEMS
A. Test problems and experimental conditions
In this study, three problems are tested: Himmelblau's problem, the welded beam design problem and the pressure vessel design problem, which have been solved by various methods and compared in [2]. In the following, the results for the εPSO, the εGA and the hybrid algorithm of the εPSO and εGA are taken from [29], and the other results are taken from [2].
The parameters for the ε constrained method are as follows: the constraint violation φ is given by the square sum of all constraints (p = 2) in eq. (3), and the ε level is set to 0. This means that the problems are solved using a lexicographic order in which the constraint violation precedes the objective function. The parameters for DE are: the number of individuals N = 20, F = 0.7 and CR = 0.9. The maximum number of iterations is T = 124 (2,500 evaluations), T = 249 (5,000 evaluations) or T = 499 (10,000 evaluations). For each problem, 30 independent runs are performed.
B. Himmelblau’s nonlinear optimization problem
Himmelblau’s problem was originally given by Himmelblau [31], and it has been used as a benchmark for several
GA-based methods that use penalties. In this problem, there
are 5 decision variables, 6 nonlinear inequality constraints and 10 boundary conditions.

This problem was originally solved using the Generalized Reduced Gradient method (GRG) [31]. Gen and Cheng [32] used a genetic algorithm based on both local and global reference. The problem was solved using the death penalty [2] in the first category and various penalty function approaches in the second category, such as static penalty [9], dynamic penalty [10], annealing penalty [11], adaptive penalty [12] and Co-evolutionary penalty [13]. Also, the problem was solved using MGA (multiobjective genetic algorithm) [21] in the fourth category.

Experimental results on the problem are shown in Table I. The columns labeled Best, Average, Worst and S.D. are the best value, the average value, the worst value and the standard deviation of the best agent in each run, respectively. For some methods, the number of function evaluations is shown in parentheses.

TABLE I
RESULT OF HIMMELBLAU'S PROBLEM

Algorithm             Best         Average      Worst        S.D.
εDE (2,500)           -31011.7391  -30979.3300  -30925.7070   20.4843
εDE (5,000)           -31025.0348  -31023.8356  -31018.9192    1.2120
εDE (10,000)          -31025.5601  -31025.5579  -31025.5490    0.0022
hybrid (5,000)        -31016.8002  -30952.2531  -30855.6951   38.8166
hybrid (50,000)       -31025.5168  -31023.9699  -31012.5285    2.6167
εPSO (5,000)          -31011.9988  -30947.3262  -30762.8890   55.8631
εGA (5,000)           -30987.1366  -30945.5421  -30904.1387   19.9409
GRG [31]              -30373.949   N/A          N/A           N/A
Gen [32]              -30183.576   N/A          N/A           N/A
Death [2]             -30790.271   -30429.371   -29834.385    234.555
Static [9]            -30790.2716  -30446.4618  -29834.3847   226.3428
Dynamic [10]          -30903.877   -30539.9156  -30106.2498   200.035
Annealing [11]        -30829.201   -30442.126   -29773.085    244.619
Adaptive [12]         -30903.877   -30448.007   -29926.1544   249.485
Co-evolutionary [13]  -31020.859   -30984.2407  -30792.4077   73.6335
MGA [21]              -31005.7966  -30862.8735  -30721.0418   73.240

The good approaches were the εDE, the hybrid algorithm, the εPSO, the εGA and the Co-evolutionary penalty method. Note that MGA and the other penalty based approaches performed only 5,000 function evaluations, while the Co-evolutionary penalty method performed 900,000 function evaluations.

With only 2,500 function evaluations, the εDE found better solutions on average than almost all other methods, the exceptions being the hybrid algorithm with 50,000 evaluations and the Co-evolutionary penalty method with 900,000 evaluations. With 5,000 evaluations, the εDE showed better performance than all other methods on all values except the hybrid algorithm with 50,000 evaluations. With 10,000 evaluations, the εDE found better solutions than all other methods on all values. The εDE found better solutions with only 1/5 to 1/2 of the evaluations of the other methods. So, the εDE is the method that finds very good solutions most efficiently and most stably. The εDE found the objective value −31025.0348 in 5,000 evaluations and −31025.5601 in 10,000 evaluations, values that could not be found
by other methods.
Also, the εDE ran very fast: the execution time for solving this problem was only 0.0066 seconds (5,000 evaluations) on a notebook PC with a 1.3 GHz Mobile Pentium III, which is about 2.3 times faster than the hybrid algorithm with 5,000 evaluations at 0.0151 seconds.
In the ε constrained method, the objective function and
the constraints are treated separately. The ε level comparison
can compare two points only based on their constraint
violations without evaluating the objective functions if the
constraint violations are different. So, the εDE can often omit the evaluation of the objective function when one or both of the points are infeasible. When the εDE performed 5,000 function evaluations, the number of evaluations of the constraints was 5,000; however, the number of evaluations of the objective function was only 2169.2 on average (about 57% omitted). This feature of the εDE is very important when the evaluation of the objective function is heavily time-consuming.
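This saving can be sketched as a survivor selection that evaluates the objective lazily (an illustrative sketch of ours; the paper does not give an implementation). Each individual is a dict holding `x`, a cached `f` that may still be `None`, and `phi`:

```python
def select_survivor(parent, trial, f, phi, eps=0.0, stats=None):
    """Return the winner of parent vs. trial under <ε, evaluating the
    objective f only when the ε level comparison actually needs it."""
    if trial.get('phi') is None:
        trial['phi'] = phi(trial['x'])
    decided_by_phi = not ((trial['phi'] <= eps and parent['phi'] <= eps)
                          or trial['phi'] == parent['phi'])
    if decided_by_phi:                    # violations alone decide: skip f
        return trial if trial['phi'] < parent['phi'] else parent
    for ind in (trial, parent):           # lazily fill in missing objectives
        if ind.get('f') is None:
            ind['f'] = f(ind['x'])
            if stats is not None:
                stats['f_evals'] = stats.get('f_evals', 0) + 1
    return trial if trial['f'] < parent['f'] else parent
```

As long as the search is comparing infeasible points with different violations, f is never called; it is only evaluated once points become ε-feasible or tie in violation.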
C. Welded beam design
A welded beam is designed for minimum cost subject to
constraints on shear stress (τ ), bending stress in the beam
(σ), buckling load on the bar (Pc ), end deflection of the beam
(δ) and side constraints [33]. There are 4 design variables:
weld thickness h(x1 ), length of weld l(x2 ), width of the
beam t(x3 ), thickness of the beam b(x4 ). The problem has
7 inequality constraints.
Experimental results on the problem are shown in Table II. The good approaches were the εDE, the hybrid algorithm, the εPSO, the εGA and the Co-evolutionary penalty method. Note that the Co-evolutionary penalty method performed 900,000 function evaluations.

TABLE II
RESULT OF WELDED BEAM DESIGN

Algorithm             Best     Average  Worst    S.D.
εDE (2,500)           1.7267   1.7339   1.7423   0.0039
εDE (5,000)           1.7249   1.7249   1.7250   0.0000
εDE (10,000)          1.7249   1.7249   1.7249   0.0000
hybrid (5,000)        1.7268   1.7635   1.9173   0.0463
hybrid (50,000)       1.7249   1.7251   1.7270   0.0006
εPSO (5,000)          1.7258   1.8073   2.1427   0.1200
εGA (5,000)           1.7852   1.8236   1.9422   0.0329
Death [2]             2.0821   3.1158   4.5138   0.6625
Static [9]            2.0469   2.9728   4.5741   0.6196
Dynamic [10]          2.1062   3.1556   5.0359   0.7006
Annealing [11]        2.0713   2.9533   4.1261   0.4902
Adaptive [12]         1.9589   2.9898   4.8404   0.6515
Co-evolutionary [13]  1.7483   1.7720   1.7858   0.0112
MGA [21]              1.8245   1.9190   1.9950   0.0538

With only 2,500 function evaluations, the εDE found better solutions on average than almost all other methods except the hybrid algorithm with 50,000 evaluations. With 5,000 evaluations, the εDE showed better
performance than all other methods on all values, except
that the hybrid algorithm can find the same best value with
50,000 evaluations. With 10,000 evaluations, the εDE found
the same best solution constantly in all runs. The εDE found better solutions with only 1/5 to 1/2 of the evaluations of the other methods. For this problem, when the number of evaluations was 5,000, the substantial number of evaluations of the objective function was only 2318.5 on average (about 54% omitted). So, the εDE is the method that finds very good solutions most efficiently and most stably. Also, the εDE ran very fast: the execution time for solving this problem was only 0.0079 seconds (5,000 evaluations), which is about 1.9 times faster than the hybrid algorithm with 5,000 evaluations at 0.0154 seconds.
D. Pressure vessel design

A pressure vessel is a cylindrical vessel capped at both ends by hemispherical heads. The vessel is designed to minimize the total cost, including the cost of material, forming and welding [34]. There are 4 design variables: thickness of the shell Ts(x1), thickness of the head Th(x2), inner radius R(x3), and length of the cylindrical section of the vessel not including the head L(x4). Ts and Th are integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, and R and L are continuous. The problem has 4 inequality constraints.

This problem was solved by Sandgren [35] using Branch and Bound and by Kannan and Kramer [34] using an augmented Lagrangian multiplier approach. Also, the problem was solved by Deb [36] using Genetic Adaptive Search (GeneAS) in the third category. In the εDE, to solve this mixed integer problem, new decision variables x′1 and x′2 are introduced, and the value of xi, i = 1, 2 is given by 0.0625 multiplied by the integer part of x′i.

Experimental results on the problem are shown in Table III. The good approaches were the εDE, the hybrid algorithm, the εPSO, the εGA, MGA and the Co-evolutionary penalty method. Note that the hybrid algorithm, the εPSO, the εGA and MGA performed 50,000 function evaluations, the Co-evolutionary penalty method performed 900,000 function evaluations, and the other penalty based approaches performed 2,500,000 function evaluations.

TABLE III
RESULT OF PRESSURE VESSEL DESIGN

Algorithm             Best        Average     Worst       S.D.
εDE (2,500)           6060.5071   6087.7730   6131.0306    18.7853
εDE (5,000)           6059.7144   6065.8780   6090.5266    12.3242
εDE (10,000)          6059.7143   6065.8767   6090.5262    12.3248
hybrid (50,000)       6059.7143   6112.6750   6410.0868    91.4934
εPSO (50,000)         6059.7143   6154.4386   6410.0868   132.6205
εGA (50,000)          6074.6305   6172.9887   6335.7302    72.0513
Sandgren [35]         8129.1036   N/A         N/A         N/A
Kannan [34]           7198.0428   N/A         N/A         N/A
Deb [36]              6410.3811   N/A         N/A         N/A
Death [2]             6127.4143   6616.9333   7572.6591   358.8497
Static [9]            6110.8117   6656.2616   7242.2035   320.8196
Dynamic [10]          6213.6923   6691.5606   7445.6923   322.7647
Annealing [11]        6127.4143   6660.8631   7380.4810   330.7516
Adaptive [12]         6110.8117   6689.6049   7411.2532   330.4483
Co-evolutionary [13]  6288.7445   6293.8432   6308.1497     7.4133
MGA [21]              6069.3267   6263.7925   6403.4500    97.9445

With only 2,500 evaluations, the εDE showed better performance than the hybrid algorithm with 50,000 evaluations on all values except the best value, than the εPSO with 50,000 evaluations on all values except the best value, than the Co-evolutionary penalty method on all values except the standard deviation, and than the other methods on all values. With 10,000 evaluations, the εDE found better solutions than all other methods on all values, except that the hybrid algorithm and the εPSO can find the same best value with 50,000 evaluations and that the standard deviation of the Co-evolutionary penalty method is slightly smaller than that of the εDE. For this problem, when the number of evaluations was 5,000, the substantial number of evaluations of the objective function was 2766.2 on average (about 45% omitted). So, the εDE is the method that finds very good solutions most efficiently and most stably. Also, the εDE ran very fast: the execution time for solving this problem was only 0.0123 seconds (10,000 evaluations), which is about 10 times faster than the hybrid algorithm with 50,000 evaluations at 0.1221 seconds.

This problem is a difficult multi-modal problem. The other methods cannot find good solutions constantly even with 50,000 evaluations, but the εDE can find good solutions constantly with only 5,000 evaluations. The εDE found better solutions with only 1/20 to 1/10 of the evaluations of the other methods. This result highlights the robustness of the εDE to multi-modal problems.
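The mixed-integer handling described in Section V-D, where the discrete thicknesses are obtained as 0.0625 times the integer part of auxiliary continuous variables x′1 and x′2, can be sketched as follows (the helper name is ours, for illustration only):

```python
def decode_thickness(x_prime, step=0.0625):
    """Map a continuous auxiliary variable x'_i to a discrete plate
    thickness x_i = step * int(x'_i), as in Section V-D."""
    return step * int(x_prime)
```

This lets the DE operators work on continuous variables throughout, while the objective and constraints see only the admissible discrete thicknesses.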
VI. CONCLUSIONS
Differential evolution is a recently proposed variant of evolution strategy. DE is known as a simple, efficient and robust search algorithm for unconstrained optimization problems. In this study, we proposed the εDE by applying the ε constrained method to DE and showed that the εDE can solve constrained optimization problems. We showed that the εDE solved three benchmark problems most effectively and most stably compared with many other methods.
We showed the advantage of the ε constrained method.
In the ε constrained method, the objective function and
the constraints are treated separately, and the evaluation of
the objective function can often be omitted. For benchmark
problems, the εDE omitted about 45% to 57% of the objective function evaluations. Therefore, the εDE can find optimal solutions very efficiently, especially from the viewpoint of the number of evaluations of the objective function.
In the future, we will apply the εDE to various real world
problems that have large numbers of decision variables and
constraints.
REFERENCES
[1] Z. Michalewicz, “A survey of constraint handling techniques in evolutionary computation methods,” in Proceedings of the 4th Annual Conference on Evolutionary Programming. Cambridge, Massachusetts:
The MIT Press, 1995, pp. 135–155.
[2] C. A. C. Coello, “Theoretical and numerical constraint-handling
techniques used with evolutionary algorithms: A survey of the state of
the art,” Computer Methods in Applied Mechanics and Engineering,
vol. 191, no. 11-12, pp. 1245–1287, Jan. 2002.
[3] G. Coath and S. K. Halgamuge, “A comparison of constraint-handling
methods for the application of particle swarm optimization to constrained nonlinear optimization problems,” in Proc. of IEEE Congress
on Evolutionary Computation, Canberra, Australia, 2003, pp. 2419–
2425.
[4] T. Takahama and S. Sakai, “Constrained optimization by the ε
constrained differential evolution with gradient-based mutation and
feasible elites,” in Proc. of the 2006 World Congress on Computational
Intelligence, to appear.
[5] ——, “Constrained optimization by ε constrained particle swarm
optimizer with ε-level control,” in Proc. of the 4th IEEE International Workshop on Soft Computing as Transdisciplinary Science and
Technology (WSTST’05), May 2005, pp. 1019–1029.
[6] R. Storn and K. Price, “Minimizing the real functions of the ICEC’96
contest by differential evolution,” in Proc. of the International Conference on Evolutionary Computation, 1996, pp. 842–844.
[7] ——, “Differential evolution – a simple and efficient heuristic for
global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, pp. 341–359, 1997.
[8] X. Hu and R. C. Eberhart, “Solving constrained nonlinear optimization
problems with particle swarm optimization,” in Proc. of the Sixth
World Multiconference on Systemics, Cybernetics and Informatics,
Orlando, Florida, 2002.
[9] A. Homaifar, S. H. Y. Lai, and X. Qi, “Constrained optimization via
genetic algorithms,” Simulation, vol. 62, no. 4, pp. 242–254, 1994.
[10] J. Joines and C. Houck, “On the use of non-stationary penalty
functions to solve nonlinear constrained optimization problems with
GAs,” in Proceedings of the first IEEE Conference on Evolutionary
Computation, D. Fogel, Ed. Orlando, Florida: IEEE Press, 1994, pp.
579–584.
[11] Z. Michalewicz and N. Attia, “Evolutionary optimization of constrained problems,” in Proceedings of the 3rd Annual Conference on
Evolutionary Programming, A. Sebald and L. Fogel, Eds. River Edge,
NJ: World Scientific Publishing, 1994, pp. 98–108.
[12] A. B. Hadj-Alouane and J. C. Bean, “A genetic algorithm for the
multiple-choice integer program,” Operations Research, vol. 45, pp.
92–101, 1997.
[13] C. A. C. Coello, “Use of a self-adaptive penalty approach for engineering optimization problems,” Computers in Industry, vol. 41, no. 2,
pp. 113–127, 2000.
[14] K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering,
vol. 186, no. 2/4, pp. 311–338, 2000.
[15] T. Takahama and S. Sakai, “Tuning fuzzy control rules by the α
constrained method which solves constrained nonlinear optimization
problems,” Electronics and Communications in Japan, Part3: Fundamental Electronic Science, vol. 83, no. 9, pp. 1–12, Sept. 2000.
[16] T. P. Runarsson and X. Yao, “Stochastic ranking for constrained
evolutionary optimization,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284–294, Sept. 2000.
[17] E. Mezura-Montes and C. A. C. Coello, “A simple multimembered
evolution strategy to solve constrained optimization problems,” IEEE
Trans. on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, Feb.
2005.
[18] S. Venkatraman and G. G. Yen, “A generic framework for constrained
optimization using genetic algorithms,” IEEE Trans. on Evolutionary
Computation, vol. 9, no. 4, pp. 424–435, Aug. 2005.
[19] E. Camponogara and S. N. Talukdar, “A genetic algorithm for constrained and multiobjective optimization,” in 3rd Nordic Workshop on
Genetic Algorithms and Their Applications (3NWGA), J. T. Alander,
Ed. Vaasa, Finland: University of Vaasa, Aug. 1997, pp. 49–62.
[20] P. D. Surry and N. J. Radcliffe, “The COMOGA method: Constrained
optimisation by multiobjective genetic algorithms,” Control and Cybernetics, vol. 26, no. 3, pp. 391–412, 1997.
[21] C. A. C. Coello, “Constraint-handling using an evolutionary multiobjective optimization technique,” Civil Engineering and Environmental
Systems, vol. 17, pp. 319–346, 2000.
[22] T. Ray, K. M. Liew, and P. Saini, “An intelligent information
sharing strategy within a swarm for unconstrained and constrained
optimization problems,” Soft Computing – A Fusion of Foundations,
Methodologies and Applications, vol. 6, no. 1, pp. 38–44, Feb. 2002.
[23] T. P. Runarsson and X. Yao, “Evolutionary search and constraint
violations,” in Proceedings of the 2003 Congress on Evolutionary
Computation, vol. 2. Piscataway, New Jersey: IEEE Service Center,
Dec. 2003, pp. 1414–1419.
[24] A. H. Aguirre, S. B. Rionda, C. A. C. Coello, G. L. Lizárraga, and
E. M. Montes, “Handling constraints using multiobjective optimization
concepts,” International Journal for Numerical Methods in Engineering, vol. 59, no. 15, pp. 1989–2017, Apr. 2004.
[25] T. Takahama and S. Sakai, “Learning fuzzy control rules by αconstrained simplex method,” Systems and Computers in Japan,
vol. 34, no. 6, pp. 80–90, June 2003.
[26] ——, “Constrained optimization by applying the α constrained
method to the nonlinear simplex method with mutations,” IEEE
Transactions on Evolutionary Computation, vol. 9, no. 5, pp. 437–
451, Oct. 2005.
[27] ——, “Constrained optimization by α constrained genetic algorithm
(αGA),” Systems and Computers in Japan, vol. 35, no. 5, pp. 11–22,
May 2004.
[28] ——, “Constrained optimization by the α constrained particle swarm
optimizer,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 9, no. 3, pp. 282–289, May 2005.
[29] T. Takahama, S. Sakai, and N. Iwane, “Constrained optimization by
the ε constrained hybrid algorithm of particle swarm optimization and
genetic algorithm,” in Proc. of the 18th Australian Joint Conference
on Artificial Intelligence, Dec. 2005, pp. 389–400, Lecture Notes in
Computer Science 3809.
[30] T. Takahama and S. Sakai, “Solving constrained optimization problems
by the ε constrained particle swarm optimizer with adaptive velocity
limit control,” in Proc. of the 2nd IEEE International Conference on
Cybernetics & Intelligent Systems, June 2006, pp. 683–689.
[31] D. Himmelblau, Applied Nonlinear Programming. New York: McGraw-Hill, 1972.
[32] M. Gen and R. Cheng, Genetic Algorithms & Engineering Design.
New York: Wiley, 1997.
[33] S. S. Rao, Engineering Optimization, 3rd ed. New York: Wiley, 1996.
[34] B. K. Kannan and S. N. Kramer, “An augmented lagrange multiplier
based method for mixed integer discrete continuous optimization and
its applications to mechanical design,” Journal of mechanical design,
Transactions of the ASME, vol. 116, pp. 318–320, 1994.
[35] E. Sandgren, “Nonlinear integer and discrete programming in mechanical design,” in Proc. of the ASME Design Technology Conference,
Kissimmee, Florida, 1988, pp. 95–105.
[36] K. Deb, “GeneAS: A robust optimal design technique for mechanical
component design,” in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds. Berlin: Springer,
1997, pp. 497–514.