On a Numerical Algorithm for Uncertain System

By
Bankole Abiola 1 and Ade Ibiwoye 2
Department of Actuarial Science
Faculty of Business Administration
University of Lagos
E-mail:[email protected]
ABSTRACT: A numerical method for computing stable control signals for systems with bounded input disturbance is developed. The algorithm elaborates on the gradient technique [Ibiejugba & Onumanyi, 1984] and the variable metric method [Dixon, 1975] for computing the control variable in linear and non-linear optimization problems. The method is developed for an integral quadratic problem subject to a dynamical system with input-bounded uncertainty.
Key words: Stable Control, Gradient Technique, Variable Metric Method, Input-bounded Uncertainty.
1. Senior Lecturer Department of Actuarial Science, University of Lagos. Author to
whom all correspondence concerning this paper should be addressed.
2. Associate Professor of Actuarial Science, University of Lagos
Introduction: This paper is concerned with the development of a numerical method for the computation of controls for systems with bounded input disturbances. Existing numerical methods for computing control signals in optimisation problems do not consider situations in which uncertainties are involved in the system. This work addresses such problems; as a starting point, we develop the method for an integral quadratic problem subject to a dynamical system with input-bounded uncertainty.
Formulation of Problem:
Consider

\min \int_0^T \{ x(t)^T Q x(t) + u(t)^T R u(t) \}\, dt \quad \ldots (1)

subject to

\dot{x}(t) = A x(t) + B u(t) + C v(t) \quad \ldots (2)

and

x(0) = x_0 \ \text{given}, \quad t \in [0, T], \quad \ldots (3)

where x(t) \in X \subset \mathbb{R}^N is the state vector, u(t) \in U \subset \mathbb{R}^M is the control vector, and v(t) \in V \subset \mathbb{R}^M is the uncertain vector, with U \cap V = \emptyset. T is the final time. Q \in \mathbb{R}^{N \times N} is positive semi-definite (symmetric) and R \in \mathbb{R}^{M \times M} is a positive definite matrix.
We state the following assumptions:
A1: We assume that A is a stable matrix; if it is not, there exists a matrix K \in \mathbb{R}^{M \times N} such that \bar{A} = A + BK is stable.
Given that PD^N denotes the set of positive definite symmetric members of \mathbb{R}^{N \times N}, then

P\bar{A} + \bar{A}^T P + Q_1 = 0, \quad Q_1 \in PD^N \quad \ldots (4)
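The Lyapunov equation (4) can be solved numerically. The sketch below is illustrative only: the matrices Abar and Q1 are example choices, not data from the paper, and SciPy's standard Lyapunov solver is used.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example stable matrix and positive definite Q1 (illustrative choices).
Abar = np.array([[0.0, 1.0],
                 [-1.0, -0.5]])   # eigenvalues in the open left half-plane
Q1 = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a@X + X@a.T = q, so take
# a = Abar.T and q = -Q1 to obtain Abar.T@P + P@Abar = -Q1, i.e. (4).
P = solve_continuous_lyapunov(Abar.T, -Q1)

residual = P @ Abar + Abar.T @ P + Q1
assert np.allclose(residual, 0.0)          # (4) is satisfied
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite
```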
A2: v(t) is a Carathéodory function, and there exist \underline{m}, \bar{m} \in \mathbb{R} such that \underline{m} \le v(t) \le \bar{m}.
A3: (A, B) is controllable.
For the subsequent development of this paper, we now consider

\dot{x}(t) = A x(t) \quad \ldots (5)

Let \Phi(t_0, t) be a transition matrix for (5); then we can write the solution of (2) and (3) as

x(t) = \Phi(t_0, t) x_0 + \int_0^t \Phi(t, \tau) \{ B u(\tau) + C v(\tau) \}\, d\tau \quad \ldots (6)
We calculate the transition matrix using the following formula:

f(A) = \sum_{k=1}^{n} f(\lambda_k) \prod_{\substack{j=1 \\ j \neq k}}^{n} \frac{A - \lambda_j I}{\lambda_k - \lambda_j} \quad \ldots (7)

where \lambda_k denotes the k-th eigenvalue of the matrix A.
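As a sketch, the interpolation formula (7) can be implemented directly, under the assumption (which (7) requires) that A has distinct eigenvalues. Here it is checked against an eigendecomposition-based matrix exponential:

```python
import numpy as np

def matrix_function(A, f):
    """Evaluate f(A) by the interpolation formula (7).
    Assumes A has distinct eigenvalues."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    I = np.eye(n, dtype=complex)
    result = np.zeros((n, n), dtype=complex)
    for k in range(n):
        term = f(lam[k]) * I
        for j in range(n):
            if j != k:
                term = term @ (A - lam[j] * I) / (lam[k] - lam[j])
        result += term
    return result

# Example: the transition matrix exp(A t) at t = 1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
Phi = matrix_function(A, np.exp)

# Cross-check via eigendecomposition: exp(A) = V diag(exp(lam)) V^{-1}
lam, V = np.linalg.eig(A)
expA = V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)
assert np.allclose(Phi, expA)
```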
Let B[\mathbb{R}^N, \mathbb{R}^M] denote the set of bounded linear maps between \mathbb{R}^N and \mathbb{R}^M. We now define a map

L : B[\mathbb{R}^N, \mathbb{R}^M] \to B[\mathbb{R}^N, \mathbb{R}^M]

such that

Lu = \int_0^t \Phi(t, \tau) B u(\tau)\, d\tau \quad \ldots (8)

and similarly define

Kv = \int_0^t \Phi(t, \tau) C v(\tau)\, d\tau \quad \ldots (9)
For arbitrary elements w_1, w_2 \in \mathbb{R}^N, define the inner product on \mathbb{R}^N as

\langle w_1, w_2 \rangle = \int_0^T w_1(t)^T w_2(t)\, dt \quad \ldots (10)

Let

z_1 = \Phi(t_0, t) x_0 \quad \ldots (11)

Therefore

x(t) = z_1 + Lu + Kv \quad \ldots (12)
We can now write (1) as

J(u, v) = \langle x(t), Q x(t) \rangle + \langle u(t), R u(t) \rangle = \langle z_1 + Lu + Kv, Q(z_1 + Lu + Kv) \rangle + \langle u, Ru \rangle \quad \ldots (13)

Therefore, expanding and dropping the constant term \langle z_1, Q z_1 \rangle, which does not affect the optimization,

J(u, v) = \langle u, (L^T Q L + R) u \rangle + 2 \langle u, L^T Q K v \rangle + 2 \langle z_1, Q L u \rangle + 2 \langle v, K^T Q z_1 \rangle + \langle v, K^T Q K v \rangle \quad \ldots (14)
We state the following remarks.
Remark (1): J(u, v) is convex with respect to u \in U \subset \mathbb{R}^M.
Proof: Consider

\langle u, (L^T Q L + R) u \rangle = \langle Lu, Q L u \rangle + \langle u, R u \rangle \ge 0,

with strict inequality for u \neq 0 since R is positive definite. This implies convexity.
Remark (2): Under the assumption that u(t) is a minimizer for J(u, v) and v(t) is a maximizer, J(u, v) is concave with respect to v \in \mathbb{R}^M.
Proof: Let \ell_i be the i-th l-dimensional row vector of the matrix K^T Q K. Denote the norm of v by \|v\|. Since v is assumed bounded, there exists \tilde{m} such that \|v\| \le \tilde{m}. Now consider

\langle v, K^T Q K v \rangle = \sum_{i=1}^{l} (\ell_i v)^2 \le \|v\|^2 \sum_{i=1}^{l} \|\ell_i\|^2 \le \tilde{m}^2\, Tr[K^T Q K]^2 = d^2 \quad \ldots (15)

so the quadratic term in v is bounded above.
Assumption:
A4: By the Minimax theorem [Abiola, 2009] we assume v(t) = m^0 u(t), where \underline{m} \le v(t) \le \bar{m} and m^0 = \frac{1}{2}(\bar{m} + \underline{m}). Using A4, we write J(u, v) as

J(u) = \langle u, [(L^T Q L + R) + ((m^0)^2 + 2 m^0)(K^T Q L)] u \rangle + 2 \langle z_1, Q L u \rangle + 2 m^0 \langle u, K^T Q z_1 \rangle \quad \ldots (16)
We state the following proposition, which follows directly from (14); its proof is immediate.
Proposition (1.1)
For every m^0 \ge 0,

(L^T Q L + R) + ((m^0)^2 + 2 m^0)(K^T Q L) > 0.
Numerical Method
The problem under consideration is the minimization of J(u) defined in (16) with respect to the control signal u(t) \in \mathbb{R}^n.
J(u) is non-linear, continuous and twice differentiable. We wish to find, numerically, u^* such that

J(u^*) \le J(u) \quad \forall u : \|u - u^*\| < \varepsilon.

Let \nabla J(u) denote the gradient of J(u), and set

\nabla J(u) = g(u) \quad \ldots (17)

Let

\nabla^2 J(u) = A(u) \quad \ldots (18)

where A(u) denotes the Hessian matrix at the point u. We assume that A(u) is non-degenerate at the point u^*. The first-order condition for a minimum is

\nabla J(u) = 0 \quad \ldots (19)

A second-order condition for a minimum is

\nabla^2 J(u) = A(u) > 0 \quad \ldots (20)

We see from Proposition (1.1) that A(u) > 0.
One way of minimizing an unconstrained optimization problem is by application of Newton's method, which is described briefly below.
We consider the problem

\min F(x), \quad x \in \mathbb{R}^n \quad \ldots (21)

Find x^* such that \forall x : \|x - x^*\| < \varepsilon, \; F(x^*) \le F(x).
Suppose \nabla F(x) = g(x) is the gradient of F(x) at the point x and \nabla^2 F(x) = A(x) is the Hessian matrix. Then at each iteration the function F(x) is approximated by a quadratic model \varphi, which is effectively minimized:

\varphi(x + p) = F(x) + g(x)^T p + \frac{1}{2} p^T A(x) p \quad \ldots (22)

Using the first-order condition for a minimum, we have

\nabla \varphi(x + p) = g(x) + A(x) p = 0 \quad \ldots (23)

Solving equation (23) we have

A(x) p = -g(x) \quad \ldots (24)

On the basis of (24) we state Newton's algorithm.
Newton's Algorithm
Step 1: Calculate F(x_k), g(x_k), A(x_k).
Step 2: Check whether \|g(x_k)\| \le \varepsilon for a predetermined \varepsilon; if so, stop; else
Step 3: Solve A(x_k) P_k = -g(x_k) for P_k.
Step 4: Set x_{k+1} = x_k + P_k.
Step 5: Set k = k + 1 and go to Step 1.
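A minimal sketch of the steps above in Python. The objective below is an illustrative quartic-plus-quadratic function with minimum at the origin, not the paper's J(u):

```python
import numpy as np

def newton(grad, hess, x0, eps=1e-8, max_iter=50):
    """Newton's algorithm, Steps 1-5 above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:      # Step 2: stopping test
            break
        p = np.linalg.solve(hess(x), -g)  # Step 3: A(x_k) P_k = -g(x_k)
        x = x + p                         # Step 4
    return x

# Illustrative objective: F(x) = x1^4 + x2^2
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])

x_star = newton(grad, hess, x0=[1.0, 1.0])
assert np.linalg.norm(grad(x_star)) <= 1e-8
```

Note that on this quartic the iteration converges only linearly in the first coordinate, illustrating that the second-order rate of Newton's method depends on the Hessian being well behaved at the minimizer.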
The Newton's algorithm described above can be shown to have second-order convergence [Dixon, 1975], because it minimizes a quadratic model at each iteration. The global convergence of Newton's method is, however, not guaranteed, because the iteration can diverge from arbitrary starting points; and since the Newton equations have to be solved at each iteration, the method breaks down when A(x_k) is singular. To circumvent the drawback posed by the possibility of singularity of A(x_k), a variable metric algorithm was proposed [Dixon, 1975].
The scheme is as follows:

x_{k+1} = x_k - \alpha_k P_k g(x_k) \quad \ldots (25)

x_0, P_0 are guessed arbitrarily, and \alpha_k is determined by a suitable line search method. P_0 is a positive definite matrix, and P_k is normally intended to be an approximation of the inverse of the Hessian matrix \nabla^2 F(x).
This matrix is updated using information obtained in each iterative step as follows:

P_{k+1} = \Psi(P_k, v_k, y_k, z_k) \quad \ldots (26)

where v_k = x_{k+1} - x_k, \; y_k = g_{k+1} - g_k, and z_k is a vector parameter.
Varieties of variable metric algorithms differ in the way in which the approximate inverse matrices are constructed, and many methods for determining P_k have been proposed. For example, [Dixon, 1975] made reference to a proposed updating rule for P_k of the form

P_{k+1} = P_k + D \quad \ldots (27)

where D is required to be as small as possible in some norm. He suggested the Frobenius norm

\varphi = \|D\|^2 = Tr[W D W D^T] \quad \ldots (28)

where W is a positive definite matrix. D is assumed symmetric, so that

D^T - D = 0 \quad \ldots (29)

and the quasi-Newton condition

D y - r = 0, \quad r = v - P y \quad \ldots (30)

is also satisfied.
Minimising (28) subject to (29) and (30) by the Lagrange method, an analytical solution for D was derived as

D = \frac{\sigma (v - P y - b \sigma)^T + (v - P y - b \sigma) \sigma^T}{y^T \sigma} \quad \ldots (31)

where

\sigma = W^{-1} y, \qquad b = \frac{\tfrac{1}{2}(y^T v - y^T P y)}{y^T W^{-1} y}.
We next consider a method for constructing an approximate inverse of the Hessian matrix which also takes into account the uncertain nature of the problem under investigation. This is the crux of this paper.
Minimising Algorithm
Consider the problem defined in (1)-(3) and let

H(x, u, v, \lambda) = x(t)^T Q x(t) + u(t)^T R u(t) + \lambda^T (A x(t) + B u(t) + C v(t)) \quad \ldots (32)

be the system Hamiltonian.
Compute the gradient of (32) with respect to u(t) as follows:

H(x, u + h, v, \lambda) = x(t)^T Q x(t) + (u + h)(t)^T R (u + h)(t) + \lambda^T (A x(t) + B (u + h)(t) + C v(t)) \quad \ldots (33)

Since R is symmetric, R^T = R. Therefore, to first order in h,

H(x, u + h, v, \lambda) - H(x, u, v, \lambda) = 2 u(t)^T R h + \lambda^T B h \quad \ldots (34)

Now,

\lim_{h \to 0} \frac{H(x, u + h, v, \lambda) - H(x, u, v, \lambda)}{h} = 2 u(t)^T R + \lambda^T B \quad \ldots (35)

Therefore

\frac{\partial H}{\partial u} = 2 u(t)^T R + \lambda^T B \quad \ldots (36)

We now choose

\lambda(t) = 2 P x(t) \quad \ldots (37)

where P is a positive definite matrix calculated from (4); we then have

\frac{\partial H}{\partial u} = 2 (u(t)^T R + x(t)^T P B) \quad \ldots (38)

Using (6), with v replaced by m^0 u as in A4, this is expressible as

\nabla H(u) = 2 \left[ R u(t) + B^T P \left( \Phi(t_0, t) x_0 + \int_0^t \Phi(t, \tau) B u(\tau)\, d\tau + m^0 \int_0^t \Phi(t, \tau) C u(\tau)\, d\tau \right) \right] \quad \ldots (39)
We can now propose the following algorithm. Define J(u) by equation (16) and \nabla H(u) by (39); then:

Step I: Choose u_0 arbitrarily, compute J(u_0) and \nabla H(u_0), and set \nabla H(u_k) = g_k, \; F_k = -g_k, \; k = 0.

Step II: If \|g_k\| \le \varepsilon for a predetermined \varepsilon, stop; else

Step III: Set u_{k+1} = u_k + \alpha_k W_k F_k(u_k), where

\alpha_k = \frac{\langle g_k, g_k \rangle}{\langle F_k, \hat{A} F_k \rangle}, \qquad \hat{A} = (L^T Q L + R) + ((m^0)^2 + 2 m^0)(K^T Q L),

W_k = P^{-1} \left[ \int_0^t \nabla H(u_k)^T P^{-1} \nabla H(u_k)\, d\tau \right]^{-\frac{1}{2}}

To avoid the possibility of W_k being negative definite, P is a positive definite matrix calculated from

P\bar{A} + \bar{A}^T P + Q = 0

The sequences are updated by the gradient technique, which is defined as follows:

g_{k+1} = g_k + \alpha_k \hat{A} F_k, \qquad F_{k+1} = -g_{k+1} + \beta_k F_k, \qquad \beta_k = \frac{\langle g_{k+1}, g_{k+1} \rangle}{\langle g_k, g_k \rangle}

Step IV: Set k = k + 1 and go to Step II.

The convergence of this algorithm is similar to that of the conjugate gradient algorithm given by [Ibiejugba and Abiola, 1985].
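The recurrences in Steps I-IV mirror the classical conjugate gradient iteration. As an illustrative sketch, here they are applied to a generic symmetric positive definite \hat{A} (without the W_k scaling, which requires the paper's integral operators):

```python
import numpy as np

def conjugate_gradient(A_hat, b, u0, eps=1e-10, max_iter=100):
    """Minimise J(u) = 0.5 u^T A_hat u - b^T u using the recurrences
    of Steps I-IV: alpha_k, g_{k+1}, beta_k, F_{k+1}."""
    u = np.asarray(u0, dtype=float)
    g = A_hat @ u - b                  # gradient of J at u0
    F = -g                             # initial direction (Step I)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:   # Step II: stopping test
            break
        alpha = (g @ g) / (F @ A_hat @ F)
        u = u + alpha * F              # Step III: control update
        g_new = g + alpha * (A_hat @ F)
        beta = (g_new @ g_new) / (g @ g)
        F = -g_new + beta * F
        g = g_new                      # Step IV: next iteration
    return u

# Example symmetric positive definite system
A_hat = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
u_star = conjugate_gradient(A_hat, b, u0=[0.0, 0.0])
assert np.allclose(A_hat @ u_star, b)   # gradient vanishes at the minimiser
```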
In what now follows we show how the operator W_k is derived.
Suppose at iterations k and k + 1

u_{k+1} = u_k + \Delta u_k \quad \ldots (40)

Let S(u) be such that

S(u) = \int_0^T \left( \frac{\partial H}{\partial u} \right)^T \Delta u_k(t)\, dt \quad \ldots (41)

where \frac{\partial H}{\partial u} = \nabla H(u), as defined in (39).
Let a distance function between u_{k+1} and u_k be defined as follows:

(\delta S)^2 = \int_0^T (\Delta u_k)^T P (\Delta u_k)\, d\tau \quad \ldots (42)

where P is a positive definite matrix calculated as before. We consider only the case \delta S \neq 0, and suppose there is an infinitesimal change in \Delta u_k normalised so that

\int_0^T \left( \frac{d (\Delta u_k)}{ds} \right)^T P \left( \frac{d (\Delta u_k)}{ds} \right) d\tau = 1 \quad \ldots (43)

Then the corresponding change in (41) is given by

\frac{dS(u)}{ds} = \int_0^T \left( \frac{\partial H}{\partial u} \right)^T \left( \frac{d (\Delta u_k)}{ds} \right) d\tau \quad \ldots (44)
We now wish to minimise (44) by choosing \frac{d(\Delta u_k)}{ds} as the control. In doing this we use the Lagrange multiplier technique: let \lambda be the multiplier, and set

\bar{S} = \int_0^T \left\{ \left( \frac{\partial H}{\partial u} \right)^T \frac{d (\Delta u_k)}{ds} + \lambda \left( \frac{d (\Delta u_k)}{ds} \right)^T P \left( \frac{d (\Delta u_k)}{ds} \right) \right\} d\tau \quad \ldots (45)

We define the Hamiltonian as follows:

\bar{H} = \left( \frac{\partial H}{\partial u} \right)^T \frac{d (\Delta u_k)}{ds} + \lambda \left( \frac{d (\Delta u_k)}{ds} \right)^T P \left( \frac{d (\Delta u_k)}{ds} \right) \quad \ldots (46)

\frac{\partial \bar{H}}{\partial \left( \frac{d (\Delta u_k)}{ds} \right)} = \frac{\partial H}{\partial u} + 2 \lambda P \frac{d (\Delta u_k)}{ds} \quad \ldots (47)

Setting (47) equal to zero,

\frac{\partial \bar{H}}{\partial \left( \frac{d (\Delta u_k)}{ds} \right)} = 0 \quad \ldots (48)

Therefore

\frac{d (\Delta u_k)}{ds} = -\frac{1}{2\lambda} P^{-1} \frac{\partial H}{\partial u} \quad \ldots (49)
Using (49) in (43) we get

\frac{1}{2\lambda} \left[ \int_0^T \left( \frac{\partial H}{\partial u} \right)^T P^{-1} \left( \frac{\partial H}{\partial u} \right) d\tau \right]^{\frac{1}{2}} = 1, \qquad \text{i.e.} \qquad 2\lambda = \left[ \int_0^T \left( \frac{\partial H}{\partial u} \right)^T P^{-1} \left( \frac{\partial H}{\partial u} \right) d\tau \right]^{\frac{1}{2}} \quad \ldots (50)

Therefore

\frac{d (\Delta u_k)}{ds} = -\, \frac{P^{-1} \left( \frac{\partial H}{\partial u} \right)}{\left[ \int_0^T \left( \frac{\partial H}{\partial u} \right)^T P^{-1} \left( \frac{\partial H}{\partial u} \right) d\tau \right]^{\frac{1}{2}}} \quad \ldots (51)

which is the direction underlying the scaling operator W_k in Step III.
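A finite-dimensional sanity check of (51) (a sketch: the vector g stands for \partial H / \partial u sampled at grid points, and the integral is replaced by a sum): the normalised direction satisfies the constraint (43).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)       # positive definite P, as in the text
g = rng.standard_normal(n)        # stands for dH/du

Pinv_g = np.linalg.solve(P, g)
norm = np.sqrt(g @ Pinv_g)        # [g^T P^{-1} g]^{1/2}, cf. (50)
d = -Pinv_g / norm                # the direction (51)

# Constraint (43): the step has unit length in the P-metric
assert np.isclose(d @ P @ d, 1.0)
```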
Numerical Example (see Corless and Leitmann (1981))
Consider the following system:

\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = x_1(t) + u_2(t) + v_2(t) \quad \ldots (52)

The objective function is defined as

\text{Minimise } J(u, v, x) = \frac{1}{2} \int \left( u_1^2(t) + u_2^2(t) \right) dt
From the above it is easy to see that

R = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{pmatrix}, \qquad B = C = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

We also note that A is unstable; we therefore select K = (k_1 \;\; k_2), with k_1 < 0, \; k_2 < 0, such that

\bar{A} = A + BK = \begin{pmatrix} 0 & 1 \\ -1 & k_2 \end{pmatrix}

Let

Q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

Then, solving

P\bar{A} + \bar{A}^T P + Q = 0

for P in terms of k_2, and taking k_2 = -\frac{1}{2}, we obtain

P = \begin{pmatrix} 2 & \frac{1}{2} \\ \frac{1}{2} & 2 \end{pmatrix}

whose inverse is computed to be

P^{-1} = \begin{pmatrix} \frac{8}{15} & -\frac{2}{15} \\ -\frac{2}{15} & \frac{8}{15} \end{pmatrix}
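A quick numerical check of the matrices above (a sketch: it verifies the stated inverse, the positive definiteness of P, and the stability of the feedback-modified system matrix with k_2 = -1/2):

```python
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 2.0]])
P_inv = np.array([[ 8/15, -2/15],
                  [-2/15,  8/15]])

assert np.allclose(P @ P_inv, np.eye(2))    # P_inv really is P^{-1}
assert np.all(np.linalg.eigvalsh(P) > 0)    # P is positive definite

# The stabilised matrix A_bar = A + BK with k2 = -1/2 has its
# eigenvalues in the open left half-plane, so A_bar is stable.
A_bar = np.array([[0.0, 1.0],
                  [-1.0, -0.5]])
assert np.all(np.linalg.eigvals(A_bar).real < 0)
```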
Applying the proposed algorithm, we have the following table of results.

Table 1: Control and Objective Function

Iteration | Time | Control u   | Objective function
1         | 1    | -44.193350  | 2592.106
1         | 2    | 45.114150   | 5597.7884
2         | 1    | -31.34126   | 1372.045
2         | 2    | -47.49781   | 2853.739
3         | 1    | -23.35895   | 851.0466
3         | 2    | 26.9681     | 1857.691
4         | 1    | -19.87427   | 629.528
4         | 2    | 24.146440   | 1389.023
5         | 1    | 34.4169     | 606.081
6         | 1    | 23.739      | 587.460
7         | 1    | 21.9034     | 502.018
8         | 1    | 19.6593     | 422.2720
9         | 1    | 18.934      | 383.1967
10        | 1    | 17.881      | 343.480
11        | 1    | 13.191      | 296.097
12        | 1    | 2.3015      | 13.71743
Conclusion: We achieve the minimum of the objective function in just 12 iterations, which can be considered good. In a subsequent paper we shall consider the application of this algorithm to a water quality control system in which uncertainties are involved. Furthermore, we have demonstrated in this paper that it is possible to derive a numerical algorithm for a class of control problems in which uncertainties are involved.
References
1. Abiola, B. (2009). Control of Dynamical Systems in the Presence of Bounded Uncertainty. PhD thesis, Department of Mathematics, University of Agriculture, Abeokuta.
2. Corless, M. and Leitmann, G. (1981). Continuous State Feedback Guaranteeing Uniform Ultimate Boundedness for Uncertain Dynamic Systems. IEEE Transactions on Automatic Control, Vol. AC-26, No. 5.
3. Dixon, L.C.W. (1975). On the Convergence of the Variable Metric Method with Numerical Derivatives and the Effect of Noise in the Function Evaluation. Optimization and Operations Research, Springer-Verlag, Heidelberg, New York.
4. Ibiejugba, M.A. and Onumanyi (1984). A Control Operator and Some of its Applications. Journal of Mathematical Analysis.