Appendix A: Optimization Theory

Computational Intelligence: Second Edition
Contents
Optimization Problems
Optima Types
Optimization Method Classes
Unconstrained Optimization
Constraint Handling Methods
Multi-Solution Problems
Dynamic Optimization Problems
Multi-Objective Optimization Problems
Basic Ingredients
Each optimization problem consists of the following basic
ingredients:
An objective function, f, which represents the quantity to be
optimized
A set of unknowns or variables, x, which affect the value
of the objective function
A set of constraints, which restrict the values that can be
assigned to the unknowns
Problem Classification
Optimization problems are classified based on a number of
characteristics:
The number of variables that influence the objective function
The type of variables
The degree of nonlinearity of the objective function
The constraints used
The number of optima
The number of optimization criteria
Whether the objective function changes over time
Global Minimum
The solution x∗ ∈ F is a global minimum of the objective
function, f, if
f(x∗) < f(x), ∀x ∈ F, x ≠ x∗
where F ⊆ S is the feasible space and S the search space.
Local Minimum
Strong local minimum: The solution, x∗N ∈ N ⊆ F, is a strong
local minimum of f, if
f(x∗N) < f(x), ∀x ∈ N, x ≠ x∗N
where N ⊆ F is a set of feasible points in the neighborhood of x∗N.
Weak local minimum: The solution, x∗N ∈ N ⊆ F, is a weak
local minimum of f, if
f(x∗N) ≤ f(x), ∀x ∈ N
where N ⊆ F is a set of feasible points in the neighborhood of x∗N.
Figure A.1: Types of Optima for Unconstrained Problems
[Figure: a one-dimensional function annotated with a weak local minimum, strong local minima, and the global minimum]
Classes of Optimization Algorithms
Based on the type of solution:
Local search algorithms
Global search algorithms
Based on the way in which candidate solutions are updated:
Deterministic
Stochastic
Classes of Optimization Algorithms (cont)
Based on the problem characteristics:
unconstrained methods, used to optimize unconstrained
problems
constrained methods, used to find solutions in constrained
search spaces
multi-objective optimization methods for problems with
more than one objective to optimize
multi-solution (niching) methods with the ability to locate
more than one solution
dynamic methods with the ability to locate and track
changing optima
Definition
Unconstrained optimization problem:
minimize f(x)
subject to x = (x1, x2, . . . , xnx)
xj ∈ dom(xj)
where x ∈ F = S, and dom(xj) is the domain of variable xj.
For
a continuous problem, the domain of each variable is R, i.e.
xj ∈ R
an integer problem, xj ∈ Z
a general discrete problem, dom(xj) is a finite set of values
a binary problem, x ∈ B, i.e. xj ∈ {0, 1}
Some Optimization Algorithms
Algorithm 1: General Local Search Algorithm
Find starting point x(0) ∈ S;
t = 0;
repeat
Evaluate f (x(t));
Calculate a search direction, q(t);
Calculate step length η(t);
Set x(t + 1) to x(t) + η(t)q(t);
t = t + 1;
until stopping condition is true ;
Return x(t) as the solution;
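A minimal Python sketch of Algorithm 1. The choice of search direction q(t) (the negative finite-difference gradient) and the fixed step length η are illustrative assumptions; the function names are hypothetical:

```python
import random

def local_search(f, x0, eta=0.01, max_iter=1000, h=1e-6):
    """General local search: repeatedly evaluate f, compute a search
    direction q(t), and step x(t+1) = x(t) + eta * q(t).
    Here q(t) is the negative forward-difference gradient."""
    x = list(x0)
    for t in range(max_iter):
        fx = f(x)
        # Search direction: negative numerical gradient of f at x
        q = []
        for j in range(len(x)):
            xh = list(x)
            xh[j] += h
            q.append(-(f(xh) - fx) / h)
        # Update step: x(t+1) = x(t) + eta * q(t)
        x = [xj + eta * qj for xj, qj in zip(x, q)]
    return x

# Usage: minimize the sphere function from a random starting point
sphere = lambda x: sum(xj**2 for xj in x)
x0 = [random.uniform(-5, 5) for _ in range(3)]
x_star = local_search(sphere, x0)
```

With a smooth unimodal function such as the sphere, the iterates contract toward the origin; for multimodal functions the same loop only reaches the nearest local minimum.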
Some Optimization Algorithms (cont)
Beam search
Tabu search
Gradient descent
LeapFrog
Some Optimization Algorithms (cont)
Algorithm 2: Simulated Annealing Algorithm
Create initial solution, x(0), set initial temperature, T(0), and t = 0;
repeat
Generate new solution, x, and determine quality, f(x);
Calculate the acceptance probability,
P = 1 if f(x) < f(x(t)), otherwise P = e^(−(f(x) − f(x(t)))/(c_b T));
if U(0, 1) ≤ P then
x(t + 1) = x;
end
t = t + 1;
until stopping condition is true;
Return x(t) as the solution;
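A sketch of Algorithm 2 in Python, assuming c_b = 1, a Gaussian perturbation to generate candidates, and a geometric cooling schedule (all illustrative choices); tracking the best solution seen so far is a common practical addition not shown on the slide:

```python
import math
import random

def simulated_annealing(f, x0, T0=10.0, alpha=0.99, step=0.5, max_iter=2000):
    """Simulated annealing: accept a worse candidate with probability
    exp(-(f(x) - f(x_t)) / T); the temperature T is cooled geometrically.
    Returns the best solution encountered."""
    x, T = list(x0), T0
    best = list(x0)
    for t in range(max_iter):
        # Generate a new candidate by perturbing the current solution
        cand = [xj + random.gauss(0.0, step) for xj in x]
        delta = f(cand) - f(x)
        # Acceptance probability: 1 if better, Boltzmann factor otherwise
        accept_p = 1.0 if delta < 0 else math.exp(-delta / T)
        if random.random() <= accept_p:
            x = cand
        if f(x) < f(best):
            best = list(x)
        T *= alpha  # cool the temperature
    return best

rastrigin = lambda x: sum(xj**2 - 10 * math.cos(2 * math.pi * xj) + 10 for xj in x)
best = simulated_annealing(rastrigin, [4.0, -4.0])
```

Early on, the high temperature lets the search accept uphill moves and escape local minima; as T shrinks, the acceptance test degenerates to pure hill-climbing.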
Examples: Ackley
f(x) = −20 e^(−0.2 √((1/nx) Σ_{j=1}^{nx} xj²)) − e^((1/nx) Σ_{j=1}^{nx} cos(2πxj)) + 20 + e

with xj ∈ [−30, 30] and f∗(x) = 0.0
[Figure: surface and cross-section plots of the Ackley function, f(x1, x2) and f(x1, 0)]
Examples: Rastrigin
f(x) = Σ_{j=1}^{nx} (xj² − 10 cos(2πxj) + 10)

with xj ∈ [−5.12, 5.12] and f∗(x) = 0.0
[Figure: surface and cross-section plots of the Rastrigin function, f(x1, x2) and f(0, x2)]
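Both benchmark functions, Ackley and Rastrigin, can be implemented directly from their definitions; a minimal sketch (function names are illustrative):

```python
import math

def ackley(x):
    """Ackley function: global minimum f*(x) = 0 at x = (0, ..., 0),
    with each x_j in [-30, 30]."""
    nx = len(x)
    s1 = sum(xj**2 for xj in x) / nx
    s2 = sum(math.cos(2 * math.pi * xj) for xj in x) / nx
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rastrigin(x):
    """Rastrigin function: global minimum f*(x) = 0 at x = (0, ..., 0),
    with each x_j in [-5.12, 5.12]."""
    return sum(xj**2 - 10 * math.cos(2 * math.pi * xj) + 10 for xj in x)
```

At the origin, the two exponential terms of Ackley cancel the constants 20 + e exactly, and every term of Rastrigin vanishes, so both return (numerically) zero at the global minimum.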
Definition
Constrained optimization problem:
minimize f(x), x = (x1, . . . , xnx)
subject to gm(x) ≤ 0, m = 1, . . . , ng
hm(x) = 0, m = ng + 1, . . . , ng + nh
xj ∈ dom(xj)
where ng and nh are the number of inequality and equality
constraints, respectively
Figure A.3: Constrained Problem Illustration
[Figure: a one-dimensional objective function f(x) with an inequality constraint dividing the domain into feasible and infeasible space; the constrained extremum differs from the unconstrained global minimum]
Constraint Handling Methods
The following issues need to be considered:
How should two feasible solutions be compared?
How should two infeasible solutions be compared?
should the infeasible solution with the best objective function
value be preferred?
should the solution with the least number of constraint
violations or lowest degree of violation be preferred?
should a balance be found between best objective function
value and degree of violation?
Should it be assumed that any feasible solution is better than
any infeasible solution? Alternatively, can objective function
value and degree of violation be optimally balanced?
Penalty Methods
Minimization using the Penalty method:
minimize
F (x, t) = f (x, t) + λp(x, t)
where λ is the penalty coefficient and p(x, t) is the (possibly)
time-dependent penalty function
What are the problems with penalty methods?
Figure A.4: Effect of Penalty Functions
[Figure: (e) the original function f(x1, x2); (f) the same function with penalty p(x1, x2) = 3x1 and λ = 0.05]
Constructing the Penalty Function
p(xi, t) = Σ_{m=1}^{ng+nh} λm(t) pm(xi)

where

pm(xi) = max{0, gm(xi)}^α if m ∈ [1, . . . , ng] (inequality)
pm(xi) = |hm(xi)|^α if m ∈ [ng + 1, . . . , ng + nh] (equality)

with α a positive constant, representing the power of the penalty;
λm(t) represents a time-varying degree to which violation of the
m-th constraint contributes to the overall penalty
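The penalty construction can be sketched in Python; the helper names (penalty, F, g) and the single-constraint example are hypothetical, with α = 2 and a constant penalty coefficient assumed for illustration:

```python
def penalty(x, g_list, h_list, lambdas, alpha=2):
    """p(x) = sum_m lambda_m * p_m(x), where
    p_m(x) = max(0, g_m(x))**alpha for inequality constraints and
    p_m(x) = abs(h_m(x))**alpha for equality constraints."""
    terms = [max(0.0, gm(x))**alpha for gm in g_list] + \
            [abs(hm(x))**alpha for hm in h_list]
    return sum(lam * t for lam, t in zip(lambdas, terms))

# Example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1)
f = lambda x: x[0]**2
g = lambda x: 1.0 - x[0]
F = lambda x, lam: f(x) + penalty(x, [g], [], [lam])

print(F([2.0], 10.0))  # feasible point: penalty is zero -> 4.0
print(F([0.0], 10.0))  # infeasible point: 0 + 10 * max(0, 1)^2 = 10.0
```

Feasible points are unaffected by the penalty term, while the cost of infeasible points grows with the degree of violation and with λ.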
Convert to an Unconstrained Problem
Convert the constrained (primal) problem to an unconstrained
problem by defining the Lagrangian for the constrained problem:

L(x, λg, λh) = f(x) + Σ_{m=1}^{ng} λgm gm(x) + Σ_{m=ng+1}^{ng+nh} λhm hm(x)

Then maximize the Lagrangian (dual problem):
maximize_{λg,λh} L(x, λg, λh)
subject to λgm ≥ 0, m = 1, . . . , ng
Convert to an Unconstrained Problem (cont)
The vector x∗ that solves the primal problem, as well as the
Lagrange multiplier vectors, λ∗g and λ∗h, can be found by solving
the min-max problem,

min_x max_{λg,λh} L(x, λg, λh)
Definition
Multi-solution problem: Find a set of solutions,
X = {x∗1 , x∗2 , . . . , x∗nX }, such that each x∗ ∈ X is a minimum of the
general optimization problem
Definition
Dynamic optimization problem:
minimize f(x, ϖ(t)), x = (x1, . . . , xnx), ϖ(t) = (ϖ1(t), . . . , ϖnϖ(t))
subject to gm(x) ≤ 0, m = 1, . . . , ng
hm(x) = 0, m = ng + 1, . . . , ng + nh
xj ∈ dom(xj)
where ϖ(t) is a vector of time-dependent objective function
control parameters. The objective is to find

x∗(t) = min_x f(x, ϖ(t))

where x∗(t) is the optimum found at time step t.
Dynamic Environment Types
Type I environments, where the location of the optimum in
problem space is subject to change. The change in the
optimum, x∗(t), is quantified by the severity parameter, ζ,
which measures the jump in the location of the optimum.
Type II environments, where the location of the optimum
remains the same, but the value, f(x∗(t)), of the optimum
changes.
Type III environments, where both the location of the
optimum and its value change.
Example
f(x, ϖ) = |f1(x, ϖ) + f2(x, ϖ) + f3(x, ϖ)|

with

f1(x, ϖ) = ϖ1 (1 − x1)² e^(−x1² − (x2 − 1)²)
f2(x, ϖ) = −0.1 (x1/5 − ϖ2 x1³ − x2⁵) e^(−x1² − x2²)
f3(x, ϖ) = 0.5 e^(−(x1 + 1)² − x2²)
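A Python sketch of this dynamic test function, assuming the three component functions as reconstructed above (the function name f_dynamic is hypothetical); evaluating it at two parameter settings shows how ϖ reshapes the landscape:

```python
import math

def f_dynamic(x, w):
    """Time-varying test function f(x, w(t)) = |f1 + f2 + f3|, where the
    control parameters w = (w1, w2) shift the landscape over time."""
    x1, x2 = x
    f1 = w[0] * (1 - x1)**2 * math.exp(-x1**2 - (x2 - 1)**2)
    f2 = -0.1 * (x1 / 5 - w[1] * x1**3 - x2**5) * math.exp(-x1**2 - x2**2)
    f3 = 0.5 * math.exp(-(x1 + 1)**2 - x2**2)
    return abs(f1 + f2 + f3)

# The same point has a different objective value at two "time steps"
val_a = f_dynamic([0.0, 0.0], (3.0, 1.0))
val_b = f_dynamic([0.0, 0.0], (1.0, 1.0))
```

A dynamic optimizer must track x∗(t) as successive changes to ϖ move or rescale the peaks of this surface.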
Figure A.8
[Surface plots of f(x1, x2) for one setting of the control parameters ϖ]
Figure A.8 (cont)
[Surface plots of f(x1, x2) for a changed setting of the control parameters ϖ]
Definition
Multi-objective problem:
minimize f(x)
subject to gm(x) ≤ 0, m = 1, . . . , ng
hm(x) = 0, m = ng + 1, . . . , ng + nh
x ∈ [xmin, xmax]^nx
where
f(x) = (f1(x), f2(x), . . . , fnk(x)) ∈ O ⊆ R^nk
O is referred to as the objective space
The search space, S, is also referred to as the decision space
Meaning of an Optimum
The problem is the presence of conflicting objectives
A balance needs to be achieved between these objectives
A balance is achieved when a solution cannot improve any objective
without degrading one or more of the other objectives
There is not just one solution
Solutions are referred to as non-dominated solutions
The set of solutions is referred to as the Pareto-optimal set, and the
corresponding objective vectors are referred to as the Pareto front
Weighted Aggregation Methods
Definition:
minimize Σ_{k=1}^{nk} ωk fk(x)
subject to gm(x) ≤ 0, m = 1, . . . , ng
hm(x) = 0, m = ng + 1, . . . , ng + nh
x ∈ [xmin, xmax]^nx
ωk ≥ 0, k = 1, . . . , nk

It is also usually assumed that Σ_{k=1}^{nk} ωk = 1
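A weighted-sum scalarization can be sketched as follows; the helper name weighted_sum and the two single-variable sample objectives are hypothetical:

```python
def weighted_sum(objectives, weights):
    """Combine multiple objectives into one scalar objective using
    non-negative weights that sum to 1 (the usual normalization)."""
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return lambda x: sum(w * fk(x) for w, fk in zip(weights, objectives))

# Two conflicting objectives: pulled toward x = 0 and toward x = 2
f1 = lambda x: x[0]**2
f2 = lambda x: (x[0] - 2)**2
F = weighted_sum([f1, f2], [0.5, 0.5])
print(F([1.0]))  # 0.5*1 + 0.5*1 = 1.0
```

Each choice of weight vector yields one scalar problem, and hence (at most) one trade-off solution, which is why the method must be rerun with different weights to trace out a front.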
Weighted Aggregation Methods (cont)
Aggregation methods have the following problems:
The algorithm has to be applied repeatedly to find different
solutions if a single-solution algorithm is used
It is difficult to choose the best weight values, ωk, since these
are problem-dependent
Aggregation methods can generate only Pareto-optimal
solutions that lie on the convex hull of the Pareto front;
solutions on concave regions of the front cannot be obtained,
regardless of the values of ωk
Pareto-Optimality
Domination: A decision vector, x1, dominates a decision vector, x2
(denoted by x1 ≺ x2), if and only if
x1 is not worse than x2 in all objectives, i.e.
fk(x1) ≤ fk(x2), ∀k = 1, . . . , nk, and
x1 is strictly better than x2 in at least one objective, i.e.
∃k ∈ {1, . . . , nk} : fk(x1) < fk(x2).
So, solution x1 is better than solution x2 if x1 ≺ x2 (i.e. x1
dominates x2), which happens when f(x1) ≺ f(x2)
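The two conditions above translate directly into a dominance test on objective vectors (minimization assumed; the function name is illustrative):

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 under minimization:
    f1 is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better

print(dominates((1.0, 2.0), (2.0, 2.0)))  # True: better in f1, equal in f2
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: each is better somewhere
```

Note that dominance is only a partial order: two vectors that each win on a different objective are mutually non-dominated, which is exactly why a Pareto front contains many solutions.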
Figure A.5: Illustration of Dominance
[Figure: objective space with axes f1 and f2; the shaded region above and to the right of the point (f1(x), f2(x)) contains the objective vectors dominated by f(x)]
More Definitions
Pareto-optimal: A decision vector, x∗ ∈ F, is Pareto-optimal if
there does not exist a decision vector, x ≠ x∗ ∈ F, that dominates
it; that is, no x ∈ F satisfies x ≺ x∗. An objective vector, f∗(x), is
Pareto-optimal if x is Pareto-optimal.
Pareto-optimal set: The set of all Pareto-optimal decision vectors
forms the Pareto-optimal set, P∗. That is,
P∗ = {x∗ ∈ F | ∄x ∈ F : x ≺ x∗}
Pareto-optimal front: Given the objective vector, f(x), and the
Pareto-optimal set, P∗, the Pareto-optimal front, PF∗ ⊆ O, is
defined as
PF∗ = {f = (f1(x∗), f2(x∗), . . . , fnk(x∗)) | x∗ ∈ P∗}
Examples of MOPs
With a convex, non-uniform Pareto front:

f(x) = (f1(x), f2(x))
f1(x) = x1
f2(x) = g(x)(1 − √(f1(x)/g(x)))

where

g(x) = 1 + (9/(nx − 1)) Σ_{j=2}^{nx} xj

With a partially concave and partially convex Pareto front, using
the above equations, but with

f2(x) = g(x)(1 − (f1(x)/g(x))^(1/4) − (f1(x)/g(x))⁴)
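These two example objectives can be sketched in Python, assuming the second variant uses the fourth root and fourth power as written above (function names are hypothetical):

```python
def g(x):
    """Shared helper g(x) = 1 + 9/(nx - 1) * sum_{j=2}^{nx} x_j."""
    nx = len(x)
    return 1.0 + 9.0 * sum(x[1:]) / (nx - 1)

def mop_convex(x):
    """Objectives with a convex, non-uniform Pareto front."""
    f1 = x[0]
    f2 = g(x) * (1.0 - (f1 / g(x))**0.5)
    return (f1, f2)

def mop_mixed(x):
    """Objectives with a partially concave, partially convex front."""
    f1 = x[0]
    r = f1 / g(x)
    f2 = g(x) * (1.0 - r**0.25 - r**4)
    return (f1, f2)

print(mop_convex([0.25, 0.0, 0.0]))  # g(x) = 1, so (0.25, 0.5)
```

On the Pareto front, g(x) = 1 (all xj = 0 for j ≥ 2), so the front is traced simply by sweeping x1 over [0, 1].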
Figure A.6: Example Pareto-Optimal Fronts
[Figure: the two example Pareto-optimal fronts plotted in (f1, f2) space: a convex, non-uniform front and a partially concave, partially convex front]