3. Linear Programming and Genetic Algorithms

Linear Programming Problem

min_x f(x) = c1·x1 + c2·x2 + … + cn·xn
s.t.  a11·x1 + a12·x2 + … + a1n·xn ≤ b1
      a21·x1 + a22·x2 + … + a2n·xn ≤ b2
      ⋮
      am1·x1 + am2·x2 + … + amn·xn ≤ bm
      li ≤ xi ≤ ui,  i = 1,…,n   (bounds)
which can be written in the form:

min_x f(x) = cᵀx
s.t.  A·x ≤ b
      l ≤ x ≤ u
Assuming a feasible solution exists, it will occur at a corner (vertex) of the feasible region, that is, at an intersection of constraints holding as equalities. The simplex linear programming algorithm systematically searches the corners of the feasible region in order to locate the optimum.
Example: Consider the problem:

max_x f(x) = 8.1·x1 + 10.8·x2
s.t.  0.8·x1 + 0.44·x2 ≤ 24,000   (a)
      0.05·x1 + 0.1·x2 ≤ 2,000    (b)
      0.1·x1 + 0.36·x2 ≤ 6,000    (c)
      x1 ≥ 0,  x2 ≥ 0
Plotting contours of f(x) and the constraints produces:

[Figure: contours of f(x), increasing towards the upper right, with constraint boundaries (a), (b), (c); the solution lies at a corner of the feasible region. Axes: x1, x2 (×10^4).]
The maximum occurs at the intersection of (a) and (b):

0.8·x1 + 0.44·x2 = 24,000
0.05·x1 + 0.1·x2 = 2,000
⇒ x1 = 26,207,  x2 = 6,897
giving f = 8.1·x1 + 10.8·x2 = 286,760
At the other intersections (corners) of the feasible region:

intersection    x                   f
x1 = 0, (c)     (0, 16,667)         180,000
(b), (c)        (15,000, 12,500)    256,500
(a), (b)        (26,207, 6,897)     286,760   (max)
(a), x2 = 0     (30,000, 0)         243,000
Solution using MATLAB Optimisation Toolbox Routine LP

f=[-8.1,-10.8];                 % negated cost: max f becomes min -f
A=[0.8 0.44;0.05 0.1;0.1 0.36];
b=[24000;2000;6000];
vlb=[0;0];vub=[40000;30000];
[x,lambda,how]=lp(f,A,b,vlb,vub);
disp('solution x ='),disp(x)
disp('f ='),disp(-f*x)
disp('constraint values ='),disp(A*x-b)
disp('Lagrangian multipliers'),disp(lambda)
disp(how)

DEMO output:
solution x = 1.0e+004 * [2.6207  0.6897]
f = 2.8676e+005
constraint values = [0.0000  0.0000  -896.5517]
Lagrangian multipliers = [4.6552  87.5172  0  0  0  0  0]
ok
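The corner-search idea can also be checked directly in plain Python. The sketch below (a hypothetical helper, not the Toolbox routine and not the simplex method itself) enumerates the intersections of pairs of constraint boundaries for the example above, keeps the feasible ones, and evaluates f at each corner:

```python
from itertools import combinations

# Brute-force corner search for the worked LP example: the optimum of a
# linear objective over a polygonal feasible region lies at a corner.
boundaries = [              # a1*x1 + a2*x2 = rhs on each boundary
    (0.8, 0.44, 24000.0),   # (a)
    (0.05, 0.10, 2000.0),   # (b)
    (0.10, 0.36, 6000.0),   # (c)
    (1.0, 0.0, 0.0),        # x1 = 0
    (0.0, 1.0, 0.0),        # x2 = 0
]

def intersect(p, q):
    """Solve the 2x2 system formed by two boundary lines."""
    (a, b, e), (c, d, g) = p, q
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None                      # parallel boundaries
    return ((e * d - b * g) / det, (a * g - e * c) / det)

def feasible(pt, tol=1e-6):
    x1, x2 = pt
    return (x1 >= -tol and x2 >= -tol and
            0.8 * x1 + 0.44 * x2 <= 24000 + tol and
            0.05 * x1 + 0.10 * x2 <= 2000 + tol and
            0.10 * x1 + 0.36 * x2 <= 6000 + tol)

corners = []
for p, q in combinations(boundaries, 2):
    pt = intersect(p, q)
    if pt is not None and feasible(pt):
        corners.append(pt)

best = max(corners, key=lambda pt: 8.1 * pt[0] + 10.8 * pt[1])
print(best)   # the (a)-(b) corner, roughly (26206.9, 6896.6)
```

Five feasible corners are found, matching the table above; the best one is the intersection of (a) and (b), in agreement with the LP routine.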
GENETIC ALGORITHMS
Refs: Goldberg, D.E.: 'Genetic Algorithms in Search, Optimization and Machine Learning' (Addison-Wesley, 1989)
Michalewicz, Z.: 'Genetic Algorithms + Data Structures = Evolution Programs' (Springer-Verlag, 1992)
Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They start with a group of knowledge structures which are usually coded into binary strings (chromosomes). These structures are evaluated within some environment and the strength (fitness) of a structure is defined. The fitness of each chromosome is calculated and a new set of chromosomes is then formulated by random selection and reproduction. Each chromosome is selected with a probability determined by its fitness and, hence, chromosomes with the higher fitness values will tend to survive and those with lower fitness values will tend to become extinct.
The selected chromosomes then undergo
certain genetic operations such as crossover,
where chromosomes are paired and randomly
exchange information, and mutation, where
individual chromosomes are altered. The
resulting chromosomes are re-evaluated and
the process is repeated until no further
improvement in overall fitness is achieved. In
addition, there is often a mechanism to
preserve the current best chromosome
(elitism).
“Survival of the fittest”
Genetic Algorithm Flow Diagram

[Diagram: Initial Population and Coding → Selection (with Elitism, "survival of the fittest") → Crossover (mating) → Mutation → back to Selection]
Components of a Genetic Algorithm (GA)
• a genetic representation
• a way to create an initial population of potential solutions
• an evaluation function rating solutions in terms of their "fitness"
• genetic operators that alter the composition of children during reproduction
• values of various parameters (population size, probabilities of applying genetic operators, etc.)
Differences from Conventional Optimisation
•GAs work with a coding of the parameter set,
not the parameters themselves
•GAs search from a population of points, not a
single point
•GAs use probabilistic transition rules, not
deterministic rules
•GAs have the capability of finding a global
optimum within a set of local optima
164
Initial Population and Coding

Consider the problem:  max_x f(x),  x ∈ Rⁿ

where, without loss of generality, we assume that f is always positive (achieved by adding a positive constant if necessary). Also assume: ai ≤ xi ≤ bi, i = 1,2,…,n.

Suppose we wish to represent xi to d decimal places. That is, each range [ai, bi] needs to be cut into (bi − ai)·10^d equal sizes. Let mi be the smallest integer such that:

(bi − ai)·10^d ≤ 2^mi − 1

Then xi can be coded as a binary string of length mi. Also, to interpret the string, we use:

xi = ai + decimal('binary string')·(bi − ai)/(2^mi − 1)
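The length rule and the decoding formula can be sketched as two small helpers (hypothetical names, for illustration only):

```python
def chromosome_length(a, b, d):
    """Smallest m such that (b - a) * 10**d <= 2**m - 1."""
    m = 1
    while (b - a) * 10**d > 2**m - 1:
        m += 1
    return m

def decode(bits, a, b):
    """Interpret a binary string as a real value in [a, b]."""
    m = len(bits)
    return a + int(bits, 2) * (b - a) / (2**m - 1)

print(chromosome_length(-1, 2, 2))   # → 9
```

For instance, on [−1, 2] with two decimal places, 9 bits suffice (300 ≤ 2⁹ − 1 = 511), and decode('011101101', -1, 2) ≈ 0.3914.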
Each chromosome (population member) is represented by a binary string of length m = m1 + m2 + … + mn, where the first m1 bits map x1 into a value from the range [a1,b1], the next group of m2 bits map x2 into a value from the range [a2,b2], etc.; the last mn bits map xn into a value from the range [an,bn].

To initialise a population, we need to decide upon the number of chromosomes (pop_size). We then initialise the bit patterns, often randomly, to provide an initial set of potential solutions.
Selection (roulette wheel principle)

We mathematically construct a 'roulette wheel' with slots sized according to fitness values. Spinning this wheel will then select a new population according to these fitness values, with the chromosomes with the highest fitness having the greatest chance of selection. The procedure is:

1) Calculate the fitness value eval(vi) for each chromosome vi (i = 1,...,pop_size)
2) Find the total fitness of the population:
   F = Σ_{i=1}^{pop_size} eval(vi)
3) Calculate the probability of selection, pi, for each chromosome vi (i = 1,...,pop_size):
   pi = eval(vi)/F
4) Calculate a cumulative probability qi for each chromosome vi (i = 1,...,pop_size):
   qi = Σ_{j=1}^{i} pj
The selection process is based on spinning the roulette wheel pop_size times; each time we select a single chromosome for a new population as follows:

1) Generate a random number r in the range [0,1]
2) If r ≤ q1, select the first chromosome v1; otherwise select the ith chromosome vi such that:
   q_{i−1} < r ≤ qi,  (2 ≤ i ≤ pop_size)

Note that some chromosomes may be selected more than once: the best chromosomes get more copies and the worst die off ("survival of the fittest"). All the chromosomes selected then replace the previous set to obtain a new population.
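Steps 1)-4) and the wheel spin can be sketched as follows (a minimal illustration; the injectable random source is only there to make the example reproducible):

```python
import random

def roulette_select(fitness, rng=random.random):
    """Pick one index with probability proportional to fitness,
    building the cumulative probabilities q_i on the fly."""
    total = sum(fitness)            # total fitness F
    r = rng()                       # spin the wheel
    q = 0.0
    for i, f in enumerate(fitness):
        q += f / total              # cumulative probability q_i
        if r <= q:
            return i
    return len(fitness) - 1         # guard against rounding near r = 1
```

With the val column of the worked example below, r = 0.18 falls between q1 = 0.09 and q2 = 0.20, so the second chromosome is selected.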
[Figure: example roulette wheel with twelve segments p1,...,p12, each segment area proportional to pi, i = 1,...,12]
Crossover

We choose a parameter value pc as the probability of crossover. Then the expected number of chromosomes to undergo the crossover operation will be pc·pop_size. We proceed as follows (for each chromosome in the new population):

1) Generate a random number r from the range [0,1].
2) If r < pc, then select the given chromosome for crossover,

ensuring that an even number is selected overall. Now we mate the selected chromosomes randomly:
For each pair of chromosomes we generate a random number pos from the range [1, m−1], where m is the number of bits in each chromosome. The number pos indicates the position of the crossing point. Two chromosomes:

(b1 b2 … b_pos | b_pos+1 … b_m)
(c1 c2 … c_pos | c_pos+1 … c_m)

are replaced by a pair of their offspring (children):

(b1 b2 … b_pos | c_pos+1 … c_m)
(c1 c2 … c_pos | b_pos+1 … b_m)
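The bit exchange just described amounts to a one-line helper (a sketch; pos counts bits from the left):

```python
def crossover(parent1, parent2, pos):
    """Single-point crossover: keep bits 1..pos, swap the tails."""
    return (parent1[:pos] + parent2[pos:],
            parent2[:pos] + parent1[pos:])
```

For example, crossing '000001110' and '001001101' at bit 4 yields '000001101' and '001001110', as in the worked example later in this section.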
Mutation

We choose a parameter value pm as the probability of mutation. Mutation is performed on a bit-by-bit basis, giving the expected number of mutated bits as pm·m·pop_size. Every bit, in all chromosomes in the whole population, has an equal chance to undergo mutation, that is, change from a 0 to a 1 or vice versa. The procedure is:

For each chromosome in the current population, and for each bit within the chromosome:
1) Generate a random number r from the range [0,1].
2) If r < pm, mutate the bit.
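The bit-by-bit procedure can be sketched as (again with an injectable random source so the behaviour can be checked deterministically):

```python
import random

def mutate(chromosome, pm, rng=random.random):
    """Flip each bit independently with probability pm."""
    return ''.join(('1' if b == '0' else '0') if rng() < pm else b
                   for b in chromosome)
```

With rng always returning a value below pm every bit flips; with a value above pm the chromosome is unchanged.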
Elitism
It is usual to have a means for ensuring
that the best value in a population is not
lost in the selection process. One way is to
store the best value before selection and,
after selection, replace the poorest value
with this stored best value.
Example:  max_x f(x) = x·sin(10πx) + 1.0,  −1 ≤ x ≤ 2

[Figure: plot of f(x) over −1 ≤ x ≤ 2, an oscillating curve with many local maxima; the global max near x = 1.85 is marked]
Let us work to a precision of two decimal places. Then the chromosome length m must satisfy:

(2 − (−1))·10² ≤ 2^m − 1  ⇒  2^m ≥ 301  ⇒  m = 9

Also let pop_size = 10, pc = 0.25, pm = 0.04.
To ensure that a positive fitness value is always achieved we will work with val = f(x) + 2.
Consider that the initial population has been randomly selected as follows (giving also the corresponding values of x, val, probabilities and accumulated probabilities):

        population    x       val    p      q
v1      011101101     0.39    2.89   0.09   0.09
v2      000111010    -0.66    3.63   0.11   0.20
v3*     110100010     1.45    4.44   0.14   0.34
v4      101011110     1.05    4.04   0.13   0.47
v5      001001101    -0.55    2.45   0.08   0.55
v6      000100110    -0.78    2.48   0.08   0.63
v7      000001110    -0.92    2.51   0.08   0.71
v8      000111000    -0.67    3.53   0.11   0.82
v9      011111111     0.50    3.05   0.09   0.91
v10     100110011     0.80    3.06   0.09   1.00

* fittest member of the population

Note for v1:  dec(v1) = 1 + 2² + 2³ + 2⁵ + 2⁶ + 2⁷ = 237
⇒ x = −1 + 237·3/(2⁹ − 1) = 0.39

F = Σ val = 32.08  ⇒  p1 = 2.89/32.08 = 0.09
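The tabulated x and val entries can be checked numerically, assuming val(x) = f(x) + 2 with f(x) = x·sin(10πx) + 1.0 as defined above:

```python
import math

def decode9(bits):
    """Decode a 9-bit chromosome into x in [-1, 2]."""
    return -1 + int(bits, 2) * 3 / (2**9 - 1)

def val(bits):
    """val = f(x) + 2 for the example's fitness function."""
    x = decode9(bits)
    return x * math.sin(10 * math.pi * x) + 1.0 + 2.0

print(round(decode9('011101101'), 2))   # v1 → 0.39
print(round(val('110100010'), 2))       # v3 → 4.44
```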
Selection

Assume 10 random numbers, range [0,1], have been obtained as follows:

r:  0.47  0.61  0.72  0.03  0.18  0.69  0.83  0.68  0.54  0.83

These will select:

r:  0.47  0.61  0.72  0.03  0.18  0.69  0.83  0.68  0.54  0.83
    v4    v6    v8    v1    v2    v7    v9    v7    v5    v9

giving the new population:
        Population          selected into    Population
        before selection    new position     after selection
v1      011101101           4                101011110
v2      000111010           5                000100110
v3      110100010           -                000111000
v4      101011110           1                011101101
v5      001001101           9                000111010
v6      000100110           2                000001110
v7      000001110           6, 8             011111111
v8      000111000           3                000001110
v9      011111111           7, 10            001001101
v10     100110011           -                011111111

Note that the best chromosome v3 in the original population has not been selected and would be destroyed unless elitism is applied.
Crossover (pc = 0.25)

Assume the 10 random numbers:

 1     2     3     4     5     6     7     8     9     10
0.07  0.94  0.57  0.36  0.31  0.14  0.60  0.07  0.07  1.00

These will select v1, v6, v8, v9 for crossover.
Now assume 2 more random numbers in the range [1,8] are obtained: 7.20 and 3.35, i.e. bits 8 and 4.
Mating v1 and v6, crossing over at bit 8:

v1  1 0 1 0 1 1 1 1 | 0
v6  0 0 0 0 0 1 1 1 | 0

no change (the exchanged bits are identical)

Mating v8 and v9, crossing over at bit 4:

v8  0 0 0 0 | 0 1 1 1 0
v9  0 0 1 0 | 0 1 1 0 1

produces

v8  0 0 0 0 | 0 1 1 0 1
v9  0 0 1 0 | 0 1 1 1 0

giving the new population:
        population          population
        before crossover    after crossover
v1      101011110           101011110
v2      000100110           000100110
v3      000111000           000111000
v4      011101101           011101101
v5      000111010           000111010
v6      000001110           000001110
v7      011111111           011111111
v8      000001110           000001101
v9      001001101           001001110
v10     011111111           011111111
Mutation (pm = 0.04)

Suppose a random number generator selects bit 2 of v2 and bit 8 of v9 to mutate, resulting in:

        population          x       val
        after mutation
v1      101011110            1.05    4.04
v2      010100110           -0.02    3.02
v3      000111000           -0.67    3.53
v4      011101101            0.39    2.89
v5      000111010           -0.66    3.63
v6      000001110           -0.92    2.51
v7      011111111            0.50    3.05
v8**    000001101           -0.92    2.37
v9      001001100           -0.55    2.45
v10     011111111            0.50    3.05

** weakest member of the population
Total fitness F = 30.54
Elitism

So far the iteration has resulted in a decrease in overall fitness (from 32.08 to 30.54). However, if we now apply elitism we replace v8 (the weakest) in the current population by v3 from the original population, to produce:

        population          x       val
        after elitism
v1      101011110            1.05    4.04
v2      010100110           -0.02    3.02
v3      000111000           -0.67    3.53
v4      011101101            0.39    2.89
v5      000111010           -0.66    3.63
v6      000001110           -0.92    2.51
v7      011111111            0.50    3.05
v8      110100010            1.45    4.44
v9      001001100           -0.55    2.45
v10     011111111            0.50    3.05

Total fitness F = 32.61
resulting now in an increase of overall
fitness (from 32.08 to 32.61) at the end of
the iteration.
The GA would now start again by
computing a new roulette wheel and
repeating selection, crossover, mutation
and elitism; repeating this procedure for a
pre-selected number of iterations.
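The whole cycle (selection, crossover, mutation, elitism) can be put together in a short sketch. Parameter values follow the worked example; the function names, the seed and the random choices are illustrative assumptions, not the MATLAB program used later:

```python
import math
import random

M, POP, PC, PM = 9, 10, 0.25, 0.04      # as in the worked example

def fitness(bits):
    """val = f(x) + 2 > 0 for x decoded from a 9-bit string."""
    x = -1 + int(bits, 2) * 3 / (2**M - 1)
    return x * math.sin(10 * math.pi * x) + 1.0 + 2.0

def select_one(pop):
    """One roulette wheel spin over the current population."""
    r = random.uniform(0, sum(fitness(v) for v in pop))
    q = 0.0
    for v in pop:
        q += fitness(v)
        if r <= q:
            return v
    return pop[-1]

def next_generation(pop):
    best = max(pop, key=fitness)                  # remember the elite
    pop = [select_one(pop) for _ in pop]          # selection
    for i in range(0, POP - 1, 2):                # crossover in pairs
        if random.random() < PC:
            pos = random.randint(1, M - 1)
            pop[i], pop[i + 1] = (pop[i][:pos] + pop[i + 1][pos:],
                                  pop[i + 1][:pos] + pop[i][pos:])
    pop = [''.join(('1' if b == '0' else '0') if random.random() < PM else b
                   for b in v) for v in pop]      # bitwise mutation
    worst = min(range(POP), key=lambda i: fitness(pop[i]))
    pop[worst] = best                             # elitism
    return pop

random.seed(0)
pop = [''.join(random.choice('01') for _ in range(M)) for _ in range(POP)]
for _ in range(50):
    pop = next_generation(pop)
x_best = -1 + int(max(pop, key=fitness), 2) * 3 / (2**M - 1)
print(round(x_best, 2))   # typically close to the global maximum near 1.85
```

Because the elite chromosome is always restored, the best fitness in the population never decreases from one generation to the next.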
Final results from a MATLAB GA program using parameters:
pop_size = 30, m = 22, pc = 0.25, pm = 0.01, 40 iterations

[Figure: best and average values of the fitness against iteration number; histograms of the x distribution of the 30 chromosomes, finally clustered near the global maximum]
Tabulated results (x, val pairs for the 30 chromosomes):

   x      val       x      val       x      val
1.8500  4.8500   1.8503  4.8502   1.8500  4.8500
1.8496  4.8495   1.8500  4.8500   1.8500  4.8500
0.3503  2.6497   1.8504  4.8502   1.8269  4.3663
1.8504  4.8502   1.8503  4.8502   1.8500  4.8500
1.8265  4.3520   1.8503  4.8502   1.8386  4.7222
1.8500  4.8500   1.8496  4.8495   1.8500  4.8500
1.8503  4.8502   1.8504  4.8502   1.8500  4.8500
1.8500  4.8500   1.8503  4.8502   1.8500  4.8500
1.8496  4.8495   1.8496  4.8495   1.8503  4.8502
1.8500  4.8500   1.8500  4.8500   1.8968  ...

The optimum: val = 4.8502 at x = 1.8504. Hence:

max_x f(x) = x·sin(10πx) + 1.0,  −1 ≤ x ≤ 2  ⇒  f = 2.8502 at x = 1.8504

remembering that val(x) = f(x) + 2.
DEMO
ON-LINE OPTIMISATION: INTEGRATED SYSTEM OPTIMISATION AND PARAMETER ESTIMATION (ISOPE)

An important application of numerical optimisation is the determination and maintenance of optimal steady-state operation of industrial processes, achieved through selection of regulatory controller set-point values. Often, the optimisation criterion is chosen in terms of maximising profit, minimising costs, achieving a desired quality of product, minimising energy usage, etc. The scheme is of a two-layer hierarchical structure:
[Diagram: OPTIMISATION (based on steady-state model) passes set points to REGULATORY CONTROL (e.g. PID controllers), which passes control signals to the INDUSTRIAL PROCESS (inputs → outputs); measurements are fed back up the hierarchy.]

Note that the steady-state values of the outputs are determined by the controller set-points assuming, of course, that the regulatory controllers maintain stability.
The set points are calculated by solving an optimisation problem, usually based on the optimisation of a performance criterion (index) subject to a steady-state mathematical model of the industrial process. Note that it is not practical to adjust the set points directly using a 'trial and error' technique because of process uncertainty and non-repeatability of measurements of the outputs. Inevitably, the steady-state model will be an approximation of the real industrial process, the approximation being both in structure and parameters. We call this the model-reality difference problem.
ISOPE Principle

ROP - Real Optimisation Problem
• Complex
• Intractable

MOP - Model Based Optimisation Problem
• Simplified (e.g. Linear-Quadratic)
• Tractable

Can we find the correct solution of ROP by iterating on MOP in an appropriate way?
YES: by applying Integrated System Optimisation And Parameter Estimation (ISOPE).
Iterative Optimisation and Parameter Estimation

In order to cope with model-reality differences, parameter estimation can be used, giving the following standard two-step approach:

1. Apply the current set point values and, once transients have died away, take measurements of the real process outputs. Use these measurements to estimate the steady-state model parameters corresponding to these set point values. This is the parameter estimation step.

2. Solve the optimisation problem of determining the extremum of the performance index subject to the steady-state model with current parameter values. This is the optimisation step and the solution will provide new values of the controller set points.

The method is iteratively applied through repeated application of steps 1 and 2 until convergence is achieved.
Standard Two-Step Approach

MODEL BASED OPTIMISATION:   min_c J(c, y)  s.t.  y = f(c, α)
    ↓ c                         ↑ α
PARAMETER ESTIMATION:       find α such that y(c, α) = y*(c)
    ↓ c                         ↑ y*
REGULATORY CONTROL → REAL PROCESS:   y* = f*(c)
min J  ( y  2) 2  c2
Example
c
where y  c  
(model)
y*  2c  1 (reality)
real solution
J *  ( y *  2) 2  c 2  (2c  1  2) 2  c 2  (2c  1) 2  c 2
dJ *
At a min,
 0  4(2c  1)  2c  0  c  0.4
dc
giving y *  18
. and J *  0.2
Now consider the two-step approach:parameter estimation
y ( c ,  )  y * ( c)
i.e. c    2c  1    c  1
196
optimisation:

min_c J = (y − 2)² + c²  s.t.  y = c + α, with α given
J = (c + α − 2)² + c²,  dJ/dc = 2(c + α − 2) + 2c
= 0 for a min when 2(c + α − 2) + 2c = 0 ⇒ c = 1 − 0.5α

Hence, at iteration k:
α_k = c_k + 1
c_{k+1} = 1 − 0.5α_k
i.e.
c_{k+1} = 1 − 0.5c_k − 0.5 = −0.5c_k + 0.5

This first-order difference equation will converge (i.e. is stable) since |−0.5| < 1, giving
c∞ = −0.5c∞ + 0.5 ⇒ c∞ = 0.3333
and y∞ = 2c∞ + 1 = 1.6667,  J∞ = 0.2222,  α∞ = 1.3333
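The recursion above can be run directly. A minimal sketch of the standard two-step loop for this example:

```python
# Standard two-step iteration for the example:
#   alpha_k = c_k + 1  (parameter estimation)
#   c_{k+1} = 1 - 0.5 * alpha_k  (model based optimisation)
c = 0.0
for _ in range(50):
    alpha = c + 1            # parameter estimation step
    c = 1 - 0.5 * alpha      # optimisation step
print(round(c, 4))   # → 0.3333, not the real optimum c = 0.4
```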
HENCE, THE STANDARD TWO-STEP APPROACH DOES NOT CONVERGE TO THE CORRECT SOLUTION!!!

[Figure: iterations 1-5 of the standard two-step approach plotted on the J* versus c curve; final solution c = 0.3333, J* = 0.2222]
Integrated Approach

The standard two-step approach fails, in general, to converge to the correct solution because it does not properly take account of the interaction between the parameter estimation problem and the system optimisation problem.

Initially, we use an equality v = c to decouple the set points used in the estimation problem from those in the optimisation problem. We then consider an equivalent integrated problem:

min_c J(c, y)
s.t.  y = f(c, α)            (model based optimisation)
      v = c                  (decoupling)
      y(v, α) = y*(v), i.e. f(v, α) = f*(v)   (parameter estimation)

This is clearly equivalent to the real optimisation problem ROP:

min_c J(c, y*)  s.t.  y* = f*(c)
If we also write the model based optimisation problem as (by eliminating y in J(c,y)):

min_c F(c, α),  where F(c, α) = J(c, f(c, α))

giving the equivalent problem:

min_c F(c, α)
s.t.  v = c
      y(v, α) = y*(v)
Form the Lagrangian (with multipliers λ and ξ):

L(c, v, α) = F(c, α) + λᵀ(v − c) + ξᵀ(y(v, α) − y*(v))

with associated optimality conditions:

∇_c L = ∇_c F − λ = 0                                  (1)
∇_v L = λ + [(∂y/∂v)ᵀ − (∂y*/∂v)ᵀ] ξ = 0               (2)
∇_α L = ∇_α F + (∂y/∂α)ᵀ ξ = 0                         (3)

together with:
v = c
y(v, α) = y*(v)
Condition (1) gives rise to the modified optimisation problem:

min_c { F(c, α) − λᵀc }

which is the same as:

min_c { J(c, y) − λᵀc }  s.t.  y = f(c, α)

and the modifier λ is given from (2) and (3):

λ = [(∂y/∂v)ᵀ − (∂y*/∂v)ᵀ] [(∂y/∂α)ᵀ]⁻¹ ∇_α F
Modified Two-Step Approach

MODIFIER:
λ = [(∂y/∂c)ᵀ − (∂y*/∂c)ᵀ] [(∂y/∂α)ᵀ]⁻¹ ∇_α F,  where F(c, α) = J(c, f(c, α))

MODEL BASED OPTIMISATION:   min_c { J(c, y) − λᵀc }  s.t.  y = f(c, α)
    ↓ c                         ↑ α, λ   (c, y passed to the modifier block)
PARAMETER ESTIMATION:       find α such that y(c, α) = y*(c)
    ↓ c                         ↑ y*
REGULATORY CONTROL → REAL PROCESS:   y* = f*(c)
Modified two-step algorithm

The starting point of iteration k is an estimated set point vector v_k.

Step 1: parameter estimation
Apply the current set points v_k and measure the corresponding real process outputs y*_k. Also, compute the model outputs y_k = f(v_k, α) and determine the model parameters α_k such that y_k = y*_k.

Step 2: modified optimisation
(i) compute the modifier vector

λ_k = [(∂y/∂v)ᵀ − (∂y*/∂v)ᵀ] [(∂y/∂α)ᵀ]⁻¹ ∇_α F(v_k, α_k)

with the derivatives evaluated at (v_k, α_k).
(ii) solve the modified optimisation problem

min_c { J(c, y) − (λ_k)ᵀc }  s.t.  y = f(c, α_k)

to produce a new estimated set point vector c_k.
(iii) update the set points (relaxation)

v_{k+1} = v_k + K(c_k − v_k)

where the matrix K, usually diagonal, is chosen to regulate stability. (Note: if K = I, then v_{k+1} = c_k.)
We then repeat from Step 1 until convergence is achieved.
Acquisition of derivatives

The computation of modifier λ requires the derivatives:

∂y/∂v, ∂y/∂α, ∇_α F = [∂F/∂α]ᵀ   (model based: not usually a problem)
∂y*/∂v                           (based on the real measurements: quite a problem)
Some methods for obtaining y
v
*
(i) applying perturbations to the set points
and computing the derivatives by finite
differences.
(ii) using a dynamic model to estimate the
derivatives (recommended method).
(iii) estimate the derivative matrix using
Broyden’s method (recommended
method).
208
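Method (i) can be sketched as a central difference on the measured outputs. The measurement function below is the example's reality y* = 2v + 1, used purely as a hypothetical stand-in for real plant measurements:

```python
def dy_star_dv(measure, v, h=1e-3):
    """Central-difference estimate of dy*/dv from two perturbed
    set-point measurements."""
    return (measure(v + h) - measure(v - h)) / (2 * h)

print(dy_star_dv(lambda v: 2 * v + 1, 0.3))   # close to the true slope 2
```

In practice measurement noise makes this estimate delicate, which is exactly why the text flags ∂y*/∂v as "quite a problem" and recommends methods (ii) and (iii).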
Note: digital filtering can be used with effect to smooth the computed values of λ.
Note: when λ = 0 we arrive back at the standard two-step approach. From the expression for λ we see that the standard two-step approach will only achieve the correct solution when the model structure is chosen such that:

∂y/∂c = ∂y*/∂c

a condition rarely achieved in practice.
Example: Consider the same example as used previously:

min_c J = (y − 2)² + c²
where  y = c + α    (model)
       y* = 2c + 1  (reality)

where the real solution is: c = 0.4, y* = 1.8, J* = 0.2

parameter estimation: this is unchanged. Hence α = v + 1

modifier:
λ = [(∂y/∂v)ᵀ − (∂y*/∂v)ᵀ] [(∂y/∂α)ᵀ]⁻¹ ∇_α F
where y = v + α ⇒ ∂y/∂v = 1, ∂y/∂α = 1;  y* = 2v + 1 ⇒ ∂y*/∂v = 2
F = J(v, y(v, α)) = (v + α − 2)² + v²  ⇒  ∇_α F = 2(v + α − 2)
Hence:  λ = (1 − 2)·(1)⁻¹·2(v + α − 2) = −2(v + α − 2)
modified optimisation:

min_c { (y − 2)² + c² − λc }  s.t.  y = c + α,  with α, λ given

i.e.  F_m = (c + α − 2)² + c² − λc
∂F_m/∂c = 2(c + α − 2) + 2c − λ = 0
⇒ c = 1 − 0.5α + 0.25λ
At iteration k (with K = 1, so v_{k+1} = c_k):

α_k = v_k + 1
λ_k = −2(v_k + α_k − 2)
v_{k+1} = c_k = 1 − 0.5α_k + 0.25λ_k

i.e.

v_{k+1} = 1 − 0.5(v_k + 1) + 0.25·(−2)·(v_k + (v_k + 1) − 2)
⇒ v_{k+1} = −1.5v_k + 1

IF this difference equation converges the result will be:
v∞ = −1.5v∞ + 1 ⇒ v∞ = 0.4   (Note: c∞ = v∞)
which is the correct solution.
However, it does not converge since |−1.5| > 1 (the eigenvalue is outside the unit circle).
Hence, it is necessary to apply relaxation in the algorithm to produce the iterative scheme:

α_k = v_k + 1
λ_k = −2(v_k + α_k − 2)
c_k = 1 − 0.5α_k + 0.25λ_k
v_{k+1} = v_k + g(c_k − v_k) = g·c_k + (1 − g)·v_k

where g is a gain parameter (g > 0).

Then:
v_{k+1} = g(−1.5v_k + 1) + (1 − g)v_k = (1 − 2.5g)v_k + g
This will converge provided |1 − 2.5g| < 1, i.e. 0 < g < 0.8
(hence, typically use g = 0.4).

Then: v∞ = (1 − 2.5g)v∞ + g ⇒ v∞ = 0.4
Then: α∞ = 1.4,  λ∞ = 0.4,  c∞ = 0.4
and: y∞ = y* = 1.8,  J* = 0.2

i.e. THE CORRECT REAL SOLUTION IS OBTAINED
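The modified two-step scheme for this example, with the relaxation gain g = 0.4 suggested above, can be sketched as:

```python
# ISOPE modified two-step iteration for the worked example.
g, v = 0.4, 0.0
for _ in range(20):
    alpha = v + 1                        # parameter estimation
    lam = -2 * (v + alpha - 2)           # modifier
    c = 1 - 0.5 * alpha + 0.25 * lam     # modified optimisation
    v = v + g * (c - v)                  # relaxation update
J_real = (2 * v + 1 - 2)**2 + v**2       # real performance index
print(round(v, 4), round(J_real, 4))     # → 0.4 0.2
```

Unlike the standard two-step loop, which settles at c = 0.3333, this recovers the correct real optimum c = 0.4, J* = 0.2.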
ISOPE (THE MODIFIED TWO-STEP APPROACH) ACHIEVES THE CORRECT STEADY-STATE REAL PROCESS OPTIMUM IN SPITE OF MODEL-REALITY DIFFERENCES
Modified Two-Step, g = 1 (divergence)

[Figure: iterations 1-5 on the J* versus v curve move away from the optimum]
Modified Two-Step, g = 0.2: Final Solution v = 0.3999, J* = 0.2

[Figure: iterations 1-5 on the J* versus v curve converge to the final solution]
Modified Two-Step, g = 0.4: Final Solution v = 0.4, J* = 0.2

[Figure: the iterations on the J* versus v curve reach the final solution immediately]

When g = 0.4 convergence is achieved in a single iteration. This is because the eigenvalue is zero (|1 − 2.5g| = 0).
Example:

min_{c1,c2} J = (y1 − 2)² + (y2 − 3)² + c1² + c2²

s.t.  y1* = 2c1 + 1
      y2* = c1 + 3c2 + 2      (reality)

      y1 = 0.1c1 + α1
      y2 = c1 + 5.5c2 + α2    (model)
[Figure: iteration paths in the (v1, v2) plane; the standard two-step approach settles at J = 0.6563, while the modified (ISOPE) approach reaches J = 0.2353]
DEMO