Introduction to Control Theory Including Optimal Control — Nguyen Tan Tien, 2002.3

3. State Space Formulation

3.1 Introduction (Review)

3.1.1 Eigenvalues and eigenvectors

Consider a matrix $A$ of order $n \times n$. If there exists a vector $x \neq 0$ and a scalar $\lambda$ such that

$$A x = \lambda x$$

then $x$ is called an eigenvector of $A$ and $\lambda$ is called an eigenvalue of $A$. The above equation can be written in the form

$$[\lambda I - A]\, x = 0$$

where $I$ is the unit matrix (of order $n \times n$). It is known that this homogeneous equation has a non-trivial (that is, nonzero) solution only if the matrix $[\lambda I - A]$ is singular, that is, if

$$\det(\lambda I - A) = 0$$

This is an equation in $\lambda$ of great importance. We denote it by $c(\lambda)$, so that

$$c(\lambda) = \det(\lambda I - A) = 0$$

It is called the characteristic equation of $A$. Written out in full, this equation has the form

$$c(\lambda) = \begin{vmatrix} \lambda - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & \lambda - a_{22} & \cdots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \cdots & \lambda - a_{nn} \end{vmatrix} = 0$$

On expansion of the determinant, $c(\lambda)$ is seen to be a polynomial of degree $n$ in $\lambda$, having the form

$$c(\lambda) = \lambda^n + b_1 \lambda^{n-1} + \cdots + b_{n-1}\lambda + b_n = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = 0$$

$\lambda_1, \lambda_2, \ldots, \lambda_n$, the roots of $c(\lambda) = 0$, are the eigenvalues of $A$. Assuming that $A$ has distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, the corresponding eigenvectors $x_1, x_2, \ldots, x_n$ are linearly independent. The (partitioned) matrix

$$X = [x_1 \;\; x_2 \;\; \cdots \;\; x_n]$$

is called the modal matrix of $A$, and

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = \operatorname{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$$

is called the eigenvalue matrix of $A$. Since $A x_i = \lambda_i x_i$ $(i = 1, 2, \ldots, n)$, it follows that $A X = X \Lambda$, hence

$$\Lambda = X^{-1} A X$$
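As a minimal numerical sketch of the modal matrix and the relation $\Lambda = X^{-1} A X$ (NumPy assumed; the example matrix is our own choice, and it reappears later in the chapter in Example 3.5):

```python
# Illustration: eigenvalues, eigenvectors (columns of the modal matrix X),
# and the diagonalisation Lambda = X^{-1} A X.
import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])

# Columns of X are the eigenvectors x_i
eigvals, X = np.linalg.eig(A)

# X^{-1} A X should be the diagonal eigenvalue matrix Lambda
Lam = np.linalg.inv(X) @ A @ X
print(np.round(eigvals, 6))   # eigenvalues -1 and -2 (ordering may vary)
print(np.round(Lam, 6))       # diagonal matrix with those eigenvalues
```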
3.1.2 The Cayley–Hamilton theorem

Given a square matrix $A$ (of order $n \times n$) and non-negative integers $r$ and $s$, we have

$$A^r A^s = A^{r+s} = A^{s+r} = A^s A^r$$

This property leads to the following definition.

Definition. Corresponding to a polynomial in a scalar variable $\lambda$,

$$f(\lambda) = \lambda^k + b_1 \lambda^{k-1} + \cdots + b_{k-1}\lambda + b_k$$

define the (square) matrix $f(A)$, called a matrix polynomial, by

$$f(A) = A^k + b_1 A^{k-1} + \cdots + b_{k-1} A + b_k I$$

where $I$ is the unit matrix of order $n \times n$.

For example, corresponding to $f(\lambda) = \lambda^2 - 2\lambda + 3$ and $A = \begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}$, we have

$$f(A) = \begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix} - 2\begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix} + 3\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix}$$

Of particular interest to us are polynomials $f$ having the property that $f(A) = 0$. For every matrix $A$ one such polynomial can be found by the Cayley–Hamilton theorem, which states:

Every square matrix $A$ satisfies its own characteristic equation.

For the above example, the characteristic equation of $A$ is

$$c(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) = \lambda^2 - 4\lambda + 4$$

where $\lambda_1, \lambda_2$ are the eigenvalues of $A$, so that

$$c(A) = A^2 - 4A + 4I = \begin{bmatrix} 1 & 1 \\ -1 & 3\end{bmatrix}\begin{bmatrix} 1 & 1 \\ -1 & 3\end{bmatrix} - 4\begin{bmatrix} 1 & 1 \\ -1 & 3\end{bmatrix} + 4\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix}$$

In fact the Cayley–Hamilton theorem guarantees the existence of a polynomial $c(\lambda)$ of degree $n$ such that $c(A) = 0$.
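The two matrix-polynomial computations above can be sketched numerically (NumPy assumed):

```python
# Verifying the matrix-polynomial example and the Cayley-Hamilton theorem
# for the 2x2 matrix used in the text.
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])
I = np.eye(2)

# f(A) = A^2 - 2A + 3I from the matrix-polynomial example
fA = A @ A - 2 * A + 3 * I

# Characteristic polynomial c(lambda) = lambda^2 - 4 lambda + 4
# (trace = 4, determinant = 4); Cayley-Hamilton says c(A) = 0.
cA = A @ A - 4 * A + 4 * I
print(fA)   # [[ 1.  2.] [-2.  5.]]
print(cA)   # the zero matrix
```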
3.2 State Space Forms

Consider the system equation in the form

$$y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1}\dot{y} + a_n y = u \qquad (3.1)$$

It is assumed that $y(0), \dot{y}(0), \ldots, y^{(n-1)}(0)$ are known.

If we define $x_1 = y,\; x_2 = \dot{y},\; \ldots,\; x_n = y^{(n-1)}$, then (3.1) is written as

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = x_3, \quad \ldots, \quad \dot{x}_{n-1} = x_n,$$
$$\dot{x}_n = -a_n x_1 - a_{n-1} x_2 - \cdots - a_1 x_n + u$$

which can be written as a vector-matrix differential equation

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_{n-1} \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u \qquad (3.2)$$

that is, as $\dot{x} = A x + B u$, where $x$, $A$ and $B$ are defined in equation (3.2). The output of the system is

$$y = [1 \;\; 0 \;\; \cdots \;\; 0]\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} \qquad (3.3)$$

that is, $y = C x$, where $C = [1 \;\; 0 \;\; \cdots \;\; 0]$. The combination of equations (3.2) and (3.3) in the form

$$\begin{cases} \dot{x} = A x + B u \\ y = C x \end{cases} \qquad (3.4)$$

is known as the state equations of the system considered. The matrix $A$ in (3.2) is said to be in companion form. The components of $x$ are called the state variables $x_1, x_2, \ldots, x_n$, and the corresponding $n$-dimensional space is called the state space. Any state of the system is represented by a point in the state space.

In general, a MIMO system has state equations of the form

$$\begin{cases} \dot{x} = A x + B u \\ y = C x + D u \end{cases} \qquad (3.14)$$

and a SISO system has state equations of the form (3.4).
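A minimal sketch of building the companion-form matrices (3.2)–(3.3) from the coefficients of (3.1); the helper name is our own, not the text's:

```python
# Constructing A (companion form), B and C from a = [a1, ..., an] of
# y^(n) + a1 y^(n-1) + ... + an y = u.
import numpy as np

def companion_form(a):
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # superdiagonal of ones
    A[-1, :] = -np.array(a[::-1])    # last row: [-a_n, ..., -a_1]
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, B, C

# The system of Example 3.1: y''' - 2y'' + y' - 2y = u, so a = [-2, 1, -2]
A, B, C = companion_form([-2.0, 1.0, -2.0])
print(A)   # last row is [2, -1, 2]
```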
Example 3.1

Obtain two forms of state equations of the system defined by

$$\dddot{y} - 2\ddot{y} + \dot{y} - 2y = u$$

where the matrix $A$ corresponding to one of the forms should be diagonal.

(a) $A$ in companion form. Let the state variables be $x_1 = y$, $x_2 = \dot{y}$, $x_3 = \ddot{y}$. Then

$$\begin{bmatrix}\dot x_1\\ \dot x_2\\ \dot x_3\end{bmatrix} = \begin{bmatrix}0&1&0\\0&0&1\\2&-1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}+\begin{bmatrix}0\\0\\1\end{bmatrix}u, \qquad y = [1\;\;0\;\;0]\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} \qquad (3.5)$$

(b) $A$ in diagonal form. Let

$$x_1 = \tfrac15 y + \tfrac15\ddot y, \qquad x_2 = \tfrac15(2+i)y - \tfrac12 i\dot y + \tfrac1{10}(-1+2i)\ddot y, \qquad x_3 = \tfrac15(2-i)y + \tfrac12 i\dot y - \tfrac1{10}(1+2i)\ddot y \qquad (3.6)$$

where $i^2 = -1$. Then

$$\begin{bmatrix}\dot x_1\\ \dot x_2\\ \dot x_3\end{bmatrix} = \begin{bmatrix}2&0&0\\0&i&0\\0&0&-i\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} + \frac1{10}\begin{bmatrix}2\\-1+2i\\-1-2i\end{bmatrix}u, \qquad y = [1\;\;1\;\;1]\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} \qquad (3.7)$$

(The transformation that produces the choice (3.6) is derived systematically in Example 3.8.)

3.3 Using the transfer function to define state variables

It is sometimes possible to define suitable state variables by considering the partial fraction expansion of the transfer function. For example, given the system differential equation

$$\ddot y + 3\dot y + 2y = \dot u + 3u$$

the corresponding transfer function is

$$G(s) = \frac{Y(s)}{U(s)} = \frac{s+3}{(s+1)(s+2)} = \frac{2}{s+1} - \frac{1}{s+2}$$

Let

$$X_1(s) = \frac{2U(s)}{s+1}, \qquad X_2(s) = -\frac{U(s)}{s+2}$$

Hence $Y(s) = X_1(s) + X_2(s)$. On taking the inverse Laplace transforms, we obtain

$$\dot x_1 = -x_1 + 2u, \qquad \dot x_2 = -2x_2 - u$$

so the state equations are

$$\begin{bmatrix}\dot x_1\\ \dot x_2\end{bmatrix} = \begin{bmatrix}-1&0\\0&-2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix} + \begin{bmatrix}2\\-1\end{bmatrix}u, \qquad y = [1\;\;1]\begin{bmatrix}x_1\\x_2\end{bmatrix}$$

We can of course make a different choice of state variables, for example

$$X_1(s) = \frac{U(s)}{s+1}, \qquad X_2(s) = \frac{U(s)}{s+2}$$

Then $Y(s) = 2X_1(s) - X_2(s)$, and on taking inverse transforms

$$\dot x_1 = -x_1 + u, \qquad \dot x_2 = -2x_2 + u$$

Now the state equations are

$$\begin{bmatrix}\dot x_1\\ \dot x_2\end{bmatrix} = \begin{bmatrix}-1&0\\0&-2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix} + \begin{bmatrix}1\\1\end{bmatrix}u, \qquad y = [2\;\;-1]\begin{bmatrix}x_1\\x_2\end{bmatrix}$$
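Both choices of state variables in Section 3.3 must realize the same transfer function; a quick numerical sketch (NumPy assumed, helper name our own) evaluates $C(sI-A)^{-1}B$ at sample frequencies and compares with $G(s) = (s+3)/((s+1)(s+2))$:

```python
# Checking that the two state-variable choices give the same G(s).
import numpy as np

def tf_eval(A, B, C, s):
    """Evaluate C (sI - A)^{-1} B at a (possibly complex) frequency s."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0]

A = np.diag([-1.0, -2.0])
B1, C1 = np.array([[2.0], [-1.0]]), np.array([[1.0, 1.0]])   # first choice
B2, C2 = np.array([[1.0], [1.0]]), np.array([[2.0, -1.0]])   # second choice

for s in (0.5, 1.0 + 2.0j, 3.0):
    g = (s + 3) / ((s + 1) * (s + 2))
    print(abs(tf_eval(A, B1, C1, s) - g), abs(tf_eval(A, B2, C2, s) - g))
```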
3.4 Direct solution of the state equation

By a solution to the state equation

$$\dot x = A x + B u \qquad (3.15)$$

we mean finding $x$ at any time $t$, given $u(t)$ and the value of $x$ at the initial time $t_0 = 0$, that is, given $x(t_0) = x_0$. It is instructive to consider first the solution of the corresponding scalar differential equation

$$\dot x = a x + b u \qquad (3.16)$$

given $x = x_0$ at $t = 0$. The solution of (3.15) is found by an analogous method. Multiplying the equation by the integrating factor $e^{-at}$, we obtain

$$e^{-at}(\dot x - a x) = e^{-at} b u \quad\text{or}\quad \frac{d}{dt}\left[e^{-at}x\right] = e^{-at}bu$$

Integrating the result between $0$ and $t$ gives

$$e^{-at}x - x_0 = \int_0^t e^{-a\tau} b\, u(\tau)\,d\tau$$

that is,

$$x = \underbrace{e^{at}x_0}_{\text{complementary function}} + \underbrace{\int_0^t e^{a(t-\tau)}\, b\, u(\tau)\,d\tau}_{\text{particular integral}} \qquad (3.17)$$

Example 3.3

Solve the equation $\dot x + 3x = 4t$, given that $x = 2$ when $t = 0$.

Since $a = -3$, $b = 4$ and $u = t$, on substituting into equation (3.17) we obtain

$$x = 2e^{-3t} + 4e^{-3t}\int_0^t e^{3\tau}\tau\,d\tau = 2e^{-3t} + 4e^{-3t}\left(\tfrac13 t e^{3t} - \tfrac19 e^{3t} + \tfrac19\right) = \tfrac{22}{9}e^{-3t} + \tfrac43 t - \tfrac49$$

To use this method to solve the vector-matrix equation (3.15), we must first define a matrix function which is the analogue of the exponential, and we must also define the derivative and the integral of a matrix.

Definition. Since $e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}$ (all $z$), we define

$$e^A = \sum_{n=0}^{\infty}\frac{1}{n!}A^n = A^0 + A + \frac{1}{2!}A^2 + \cdots = I + A + \frac{1}{2!}A^2 + \cdots$$

(where $A^0 \equiv I$) for every square matrix $A$.

Example 3.4

Evaluate $e^{At}$ given $A = \begin{bmatrix}1&0&0\\-1&0&0\\1&1&1\end{bmatrix}$.

We must calculate $A^2, A^3, \ldots$:

$$A^2 = \begin{bmatrix}1&0&0\\-1&0&0\\1&1&1\end{bmatrix} = A = A^3 = \cdots$$

It follows that

$$e^{At} = I + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \cdots = I + A\left(t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots\right) = I + A(e^t - 1)$$

$$= \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix} + \begin{bmatrix}e^t-1&0&0\\-e^t+1&0&0\\e^t-1&e^t-1&e^t-1\end{bmatrix} = \begin{bmatrix}e^t&0&0\\1-e^t&1&0\\e^t-1&e^t-1&e^t\end{bmatrix}$$
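The collapse of the series in Example 3.4 relies only on $A^2 = A$; a small numerical sketch (NumPy assumed) compares the truncated exponential series with the closed form $I + A(e^t - 1)$:

```python
# The matrix of Example 3.4 is idempotent (A^2 = A), so the exponential
# series collapses to e^{At} = I + A (e^t - 1).
import math
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])
t = 0.7

closed = np.eye(3) + A * (math.exp(t) - 1.0)

# Truncated series sum_{n=0}^{24} (At)^n / n!
series, term = np.eye(3), np.eye(3)
for n in range(1, 25):
    term = term @ (A * t) / n
    series = series + term

print(np.max(np.abs(series - closed)))   # tiny truncation error
```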
In fact, the evaluation of the matrix exponential is not quite as difficult as it may appear at first sight. We can make use of the Cayley–Hamilton theorem, which assures us that every square matrix satisfies its own characteristic equation. One direct and useful method is the following.

We know that if $A$ has distinct eigenvalues, say $\lambda_1, \lambda_2, \ldots, \lambda_n$, there exists a non-singular matrix $P$ such that

$$P^{-1} A P = \operatorname{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_n\} = \Lambda$$

We then have $A = P \Lambda P^{-1}$, so that

$$A^2 = (P\Lambda P^{-1})(P\Lambda P^{-1}) = P\Lambda(P^{-1}P)\Lambda P^{-1} = P\Lambda^2 P^{-1}$$

and in general

$$A^r = P \Lambda^r P^{-1} \qquad (r = 1, 2, \ldots)$$

If we consider a matrix polynomial, say $f(A) = A^2 - 2A + I$, we can write it as

$$f(A) = P\Lambda^2 P^{-1} - 2P\Lambda P^{-1} + P I P^{-1} = P(\Lambda^2 - 2\Lambda + I)P^{-1}$$

Since, with $f(\lambda) = \lambda^2 - 2\lambda + 1$,

$$\Lambda^2 - 2\Lambda + I = \operatorname{diag}\{\lambda_1^2 - 2\lambda_1 + 1,\; \lambda_2^2 - 2\lambda_2 + 1,\; \ldots,\; \lambda_n^2 - 2\lambda_n + 1\} = \operatorname{diag}\{f(\lambda_1), f(\lambda_2), \ldots, f(\lambda_n)\}$$

we obtain

$$f(A) = P \operatorname{diag}\{f(\lambda_1), f(\lambda_2), \ldots, f(\lambda_n)\}\, P^{-1}$$

The result holds in the general case when $f(x)$ is a polynomial of degree $n$. Generalizing the above by taking $f(A) = e^{At}$, we obtain

$$e^{At} = P \operatorname{diag}\{e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}\}\, P^{-1} \qquad (3.18)$$

In the solution to the unforced system equation $x = e^{At}x_0$, the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are called the poles, and $e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}$ are called the modes of the system.
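Equation (3.18) can be sketched numerically (NumPy assumed); the matrix is the one that appears in Example 3.5 below, and the expected entries are the closed form worked out there:

```python
# Computing e^{At} by diagonalisation, equation (3.18).
import math
import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])
t = 0.5

lam, P = np.linalg.eig(A)                        # eigenvalues -1, -2
expAt = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)

# Closed-form entries for this A (Example 3.5):
e1, e2 = math.exp(-t), math.exp(-2 * t)
expected = np.array([[2*e1 - e2, 2*e1 - 2*e2],
                     [-e1 + e2, -e1 + 2*e2]])
print(np.max(np.abs(expAt - expected)))   # essentially zero
```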
We now define the derivative and the integral of a matrix $A(t)$ whose elements are functions of $t$.

Definition. Let $A(t) = [a_{ij}(t)]$; then

(1) $\dfrac{d}{dt}A(t) = \dot A(t) = \left[\dfrac{d}{dt}a_{ij}(t)\right]$, and

(2) $\displaystyle\int A(t)\,dt = \left[\int a_{ij}(t)\,dt\right]$

that is, each element of the matrix is differentiated (or integrated). For example, if $A = \begin{bmatrix}6t & \sin 2t\\ t^2+2 & 3\end{bmatrix}$, then

$$\dot A = \begin{bmatrix}6 & 2\cos 2t\\ 2t & 0\end{bmatrix}, \qquad \int A\,dt = \begin{bmatrix}3t^2 & -\frac12\cos 2t\\ \frac13 t^3 + 2t & 3t\end{bmatrix} + C \quad (C \text{ is a constant matrix})$$

From the definition a number of rules follow: if $\alpha$ and $\beta$ are constants, and $A$ and $B$ are matrices,

(i) $\dfrac{d}{dt}(\alpha A + \beta B) = \alpha\dot A + \beta\dot B$

(ii) $\displaystyle\int_a^b(\alpha A + \beta B)\,dt = \alpha\int_a^b A\,dt + \beta\int_a^b B\,dt$

(iii) $\dfrac{d}{dt}(AB) = A\dot B + \dot A B$

(iv) if $A$ is a constant matrix, then $\dfrac{d}{dt}\left(e^{At}\right) = Ae^{At}$

Note that although the above rules are analogous to the rules for scalar functions, we must not be lulled into accepting for matrix functions all the rules valid for scalar functions. For example, although $\frac{d}{dt}(x^n) = nx^{n-1}\dot x$ for scalar functions, it is not true in general that $\frac{d}{dt}(A^n) = nA^{n-1}\dot A$. For example, when

$$A = \begin{bmatrix}t^2 & 2t\\ 0 & 3\end{bmatrix}, \qquad \frac{d}{dt}(A^2) = \begin{bmatrix}4t^3 & 6t^2+6\\ 0 & 0\end{bmatrix} \quad\text{but}\quad 2A\dot A = \begin{bmatrix}4t^3 & 4t^2\\ 0 & 0\end{bmatrix}$$

so that $\frac{d}{dt}(A^2) \neq 2A\dot A$.
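The correct rule here follows from (iii): $\frac{d}{dt}(A^2) = A\dot A + \dot A A$, which differs from $2A\dot A$ because $A$ and $\dot A$ need not commute. A numerical sketch (NumPy assumed) with the matrix from the text:

```python
# Confirming d(A^2)/dt = A*Adot + Adot*A (not 2*A*Adot) for a
# time-varying matrix, via a central-difference derivative.
import numpy as np

def Amat(t):
    return np.array([[t**2, 2*t],
                     [0.0, 3.0]])

def Adot(t):
    return np.array([[2*t, 2.0],
                     [0.0, 0.0]])

t, h = 1.0, 1e-6
A, Ad = Amat(t), Adot(t)

# Central-difference derivative of A(t)^2
dA2 = (Amat(t + h) @ Amat(t + h) - Amat(t - h) @ Amat(t - h)) / (2 * h)

print(np.max(np.abs(dA2 - (A @ Ad + Ad @ A))))   # ~0: product rule holds
print(np.max(np.abs(dA2 - 2 * A @ Ad)))          # not small: naive rule fails
```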
Example 3.5

Find $e^{At}$ given $A = \begin{bmatrix}0 & 2\\ -1 & -3\end{bmatrix}$.

The eigenvalues of $A$ are $\lambda_1 = -1$ and $\lambda_2 = -2$. It can be verified that $P = \begin{bmatrix}2 & 1\\ -1 & -1\end{bmatrix}$ and that $P^{-1} = \begin{bmatrix}1 & 1\\ -1 & -2\end{bmatrix}$ ($P$ can be found from $A = P\Lambda P^{-1}$). Using (3.18), we obtain

$$e^{At} = \begin{bmatrix}2 & 1\\ -1 & -1\end{bmatrix}\begin{bmatrix}e^{-t} & 0\\ 0 & e^{-2t}\end{bmatrix}\begin{bmatrix}1 & 1\\ -1 & -2\end{bmatrix} = \begin{bmatrix}2e^{-t} - e^{-2t} & 2e^{-t} - 2e^{-2t}\\ -e^{-t} + e^{-2t} & -e^{-t} + 2e^{-2t}\end{bmatrix}$$

From the above discussion it is clear that we can also write $e^{At}$ in its spectral or modal form as

$$e^{At} = A_1 e^{\lambda_1 t} + A_2 e^{\lambda_2 t} + \cdots + A_n e^{\lambda_n t} \qquad (3.19)$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$ and $A_1, A_2, \ldots, A_n$ are matrices of the same order as the matrix $A$. In the above example we can write

$$e^{At} = \begin{bmatrix}2 & 2\\ -1 & -1\end{bmatrix}e^{-t} + \begin{bmatrix}-1 & -2\\ 1 & 2\end{bmatrix}e^{-2t}$$

We now return to our original problem, to solve equation (3.15): $\dot x = A x + B u$. Rewrite the equation in the form

$$\dot x - Ax = Bu \;\Rightarrow\; e^{-At}(\dot x - Ax) = e^{-At}Bu \;\Rightarrow\; \frac{d}{dt}\left(e^{-At}x\right) = e^{-At}Bu$$

On integration, this becomes

$$e^{-At}x(t) - x(0) = \int_0^t e^{-A\tau}B\,u(\tau)\,d\tau$$

so that

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}B\,u(\tau)\,d\tau \qquad (3.20)$$
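A numerical sketch of the solution formula (3.20) (NumPy assumed): the right-hand side, evaluated by quadrature, should agree with direct Euler integration of $\dot x = Ax + Bu$. The system and step input are those used in Example 3.6 below.

```python
# Checking (3.20) against Euler integration for a step input u(t) = 1.
import numpy as np

A = np.array([[0.0, 2.0], [-1.0, -3.0]])
B = np.array([0.0, 1.0])
x0 = np.array([1.0, -1.0])
T = 2.0

def expm_via_eig(M):
    lam, P = np.linalg.eig(M)
    return (P @ np.diag(np.exp(lam)) @ np.linalg.inv(P)).real

# Right-hand side of (3.20): e^{AT} x0 + integral_0^T e^{A(T-tau)} B dtau
taus = np.linspace(0.0, T, 2001)
dtau = taus[1] - taus[0]
integral = sum(expm_via_eig(A * (T - tau)) @ B for tau in taus[1:-1]) * dtau
integral += 0.5 * dtau * (expm_via_eig(A * T) @ B + B)   # trapezoid ends

formula = expm_via_eig(A * T) @ x0 + integral

# Direct Euler integration of the state equation
x, N = x0.copy(), 20000
dt = T / N
for _ in range(N):
    x = x + dt * (A @ x + B)     # u(t) = 1

print(np.max(np.abs(formula - x)))   # small discretisation error
```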
Example 3.6

A system is characterized by the state equation

$$\begin{bmatrix}\dot x_1\\ \dot x_2\end{bmatrix} = \begin{bmatrix}0 & 2\\ -1 & -3\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

The forcing function is $u(t) = 1$ for $t \ge 0$, and $x(0) = [1 \;\; -1]^T$. Find the state $x$ of the system at time $t$.

We have already evaluated $e^{At}$ in Example 3.5. It follows that

$$e^{At}x(0) = \begin{bmatrix}2e^{-t}-e^{-2t} & 2e^{-t}-2e^{-2t}\\ -e^{-t}+e^{-2t} & -e^{-t}+2e^{-2t}\end{bmatrix}\begin{bmatrix}1\\ -1\end{bmatrix} = e^{-2t}\begin{bmatrix}1\\ -1\end{bmatrix}$$

and

$$\int_0^t e^{A(t-\tau)}B\,u(\tau)\,d\tau = \int_0^t e^{A(t-\tau)}\begin{bmatrix}0\\1\end{bmatrix}d\tau = \int_0^t \begin{bmatrix}2e^{-(t-\tau)} - 2e^{-2(t-\tau)}\\ -e^{-(t-\tau)} + 2e^{-2(t-\tau)}\end{bmatrix}d\tau = \begin{bmatrix}1 - 2e^{-t} + e^{-2t}\\ e^{-t} - e^{-2t}\end{bmatrix}$$

Hence the state of the system at time $t$ is

$$x(t) = \begin{bmatrix}e^{-2t}\\ -e^{-2t}\end{bmatrix} + \begin{bmatrix}1 - 2e^{-t} + e^{-2t}\\ e^{-t} - e^{-2t}\end{bmatrix} = \begin{bmatrix}1 - 2e^{-t} + 2e^{-2t}\\ e^{-t} - 2e^{-2t}\end{bmatrix}$$

The matrix $e^{At}$ in the solution equation (3.20) is of special interest to control engineers; they call it the state-transition matrix and denote it by $\Phi(t)$, that is,

$$\Phi(t) = e^{At} \qquad (3.21)$$

For the unforced system (that is, $u(t) = 0$) the solution (3.20) becomes $x(t) = \Phi(t)x(0)$, so that $\Phi(t)$ transforms the system from its state $x(0)$ at the initial time $t = 0$ to the state $x(t)$ at some subsequent time $t$, hence the name given to the matrix. Since $e^{At}e^{-At} = I$, it follows that $[e^{At}]^{-1} = e^{-At}$, that is,

$$\Phi^{-1}(t) = e^{-At} = \Phi(-t)$$

Also

$$\Phi(t)\,\Phi(-\tau) = e^{At}e^{-A\tau} = e^{A(t-\tau)} = \Phi(t-\tau)$$

With this notation equation (3.20) becomes

$$x(t) = \Phi(t)x(0) + \int_0^t \Phi(t-\tau)B\,u(\tau)\,d\tau \qquad (3.22)$$
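The transition-matrix properties above can be sketched exactly (NumPy assumed) with $A = \begin{bmatrix}0&1\\0&0\end{bmatrix}$, our own choice here: since $A^2 = 0$, the series gives $e^{At} = I + At$ exactly.

```python
# Phi(-t) = Phi(t)^{-1} and Phi(t) Phi(-tau) = Phi(t - tau),
# using A = [[0,1],[0,0]] for which e^{At} = [[1, t], [0, 1]] exactly.
import numpy as np

def Phi(t):
    return np.array([[1.0, t],
                     [0.0, 1.0]])

t, tau = 1.3, 0.4
print(Phi(t) @ Phi(-t))      # the identity matrix
print(Phi(t) @ Phi(-tau))    # equals Phi(t - tau)
```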
3.5 Solution of the state equation by Laplace transforms

Since the state equation is in vector form, we must first define the Laplace transform of a vector.

Definition. Let $x(t) = [x_1 \;\; x_2 \;\; \cdots \;\; x_n]^T$. We define

$$\mathcal{L}[x(t)] = \begin{bmatrix}\mathcal{L}[x_1(t)]\\ \mathcal{L}[x_2(t)]\\ \vdots\\ \mathcal{L}[x_n(t)]\end{bmatrix} = \begin{bmatrix}X_1(s)\\ X_2(s)\\ \vdots\\ X_n(s)\end{bmatrix} = X(s)$$

From this definition we can find all the results we need. For example,

$$\mathcal{L}[\dot x(t)] = \begin{bmatrix}sX_1(s) - x_1(0)\\ sX_2(s) - x_2(0)\\ \vdots\\ sX_n(s) - x_n(0)\end{bmatrix} = sX(s) - x(0)$$

Now we can solve the state equation (3.15). Taking the transform of $\dot x = Ax + Bu$, we obtain

$$sX(s) - x(0) = AX(s) + BU(s), \quad\text{where } U(s) = \mathcal{L}[u(t)]$$

or

$$(sI - A)X(s) = x(0) + BU(s)$$

Unless $s$ happens to be an eigenvalue of $A$, the matrix $(sI - A)$ is non-singular, so the above equation can be solved, giving

$$X(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}BU(s) \qquad (3.23)$$

and the solution $x(t)$ is found by taking the inverse Laplace transform of equation (3.23).

Definition. $(sI - A)^{-1}$ is called the resolvent matrix of the system.

On comparing equations (3.22) and (3.23), we find that

$$\Phi(t) = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$$

Not only is the use of transforms a relatively simple method for evaluating the transition matrix, but it also allows us to calculate the state $x(t)$ without having to evaluate integrals.
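The identity $\Phi(t) = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$ can be sketched numerically in the forward direction (NumPy assumed): the Laplace transform of the closed-form $\Phi(t)$ entries, computed by quadrature at a sample $s$, should reproduce the resolvent matrix.

```python
# Numerical check: integral_0^inf e^{-st} Phi(t) dt = (sI - A)^{-1} at s = 1,
# for the 2x2 system used in the examples of this chapter.
import numpy as np

A = np.array([[0.0, 2.0], [-1.0, -3.0]])
s = 1.0

t = np.linspace(0.0, 40.0, 40001)
dt = t[1] - t[0]
e1, e2, k = np.exp(-t), np.exp(-2.0 * t), np.exp(-s * t)

def trap(y):
    """Trapezoidal rule on the grid t."""
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

# Entries of Phi(t) = e^{At} for this A (worked out in Example 3.5)
laplace = np.array([[trap((2*e1 - e2) * k), trap((2*e1 - 2*e2) * k)],
                    [trap((-e1 + e2) * k), trap((-e1 + 2*e2) * k)]])

resolvent = np.linalg.inv(s * np.eye(2) - A)
print(np.max(np.abs(laplace - resolvent)))   # small quadrature error
```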
Example 3.7

Use Laplace transforms to evaluate the state $x(t)$ of the system described in Example 3.6:

$$\begin{bmatrix}\dot x_1\\ \dot x_2\end{bmatrix} = \begin{bmatrix}0 & 2\\ -1 & -3\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

For this system

$$(sI - A) = \begin{bmatrix}s & -2\\ 1 & s+3\end{bmatrix}$$

so that

$$(sI - A)^{-1} = \frac{1}{s(s+3)+2}\begin{bmatrix}s+3 & 2\\ -1 & s\end{bmatrix} = \begin{bmatrix}\dfrac{s+3}{(s+1)(s+2)} & \dfrac{2}{(s+1)(s+2)}\\[2mm] \dfrac{-1}{(s+1)(s+2)} & \dfrac{s}{(s+1)(s+2)}\end{bmatrix} = \begin{bmatrix}\dfrac{2}{s+1} - \dfrac{1}{s+2} & \dfrac{2}{s+1} - \dfrac{2}{s+2}\\[2mm] -\dfrac{1}{s+1} + \dfrac{1}{s+2} & -\dfrac{1}{s+1} + \dfrac{2}{s+2}\end{bmatrix}$$

So that

$$\mathcal{L}^{-1}\{(sI - A)^{-1}\} = \begin{bmatrix}2e^{-t} - e^{-2t} & 2e^{-t} - 2e^{-2t}\\ -e^{-t} + e^{-2t} & -e^{-t} + 2e^{-2t}\end{bmatrix} = \Phi(t)$$

Hence the complementary function is as in Example 3.6. For the particular integral, we note that since $\mathcal{L}\{u(t)\} = \frac1s$,

$$(sI - A)^{-1}BU(s) = \frac{1}{(s+1)(s+2)}\,\frac1s\begin{bmatrix}s+3 & 2\\ -1 & s\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix} = \begin{bmatrix}\dfrac{2}{s(s+1)(s+2)}\\[2mm] \dfrac{1}{(s+1)(s+2)}\end{bmatrix} = \begin{bmatrix}\dfrac1s - \dfrac{2}{s+1} + \dfrac{1}{s+2}\\[2mm] \dfrac{1}{s+1} - \dfrac{1}{s+2}\end{bmatrix}$$

On taking the inverse Laplace transform, we obtain

$$\mathcal{L}^{-1}\{(sI - A)^{-1}BU(s)\} = \begin{bmatrix}1 - 2e^{-t} + e^{-2t}\\ e^{-t} - e^{-2t}\end{bmatrix}$$

which is the particular integral part of the solution obtained in Example 3.6.

3.6 The transformation from the companion to the diagonal state form

Note that the choice of the state vector is not unique. We now assume that with one choice of the state vector the state equations are

$$\dot x = Ax + Bu, \qquad y = Cx \qquad (3.24)$$

where $A$ is a matrix of order $n \times n$ and $B$ and $C$ are matrices of appropriate order. Consider any non-singular matrix $T$ of order $n \times n$ and let

$$x = Tz \qquad (3.25)$$

Then $z$ is also a state vector, and equation (3.24) can be written as

$$\dot z = T^{-1}ATz + T^{-1}Bu, \qquad y = CTz$$

or as

$$\dot z = A_1 z + B_1 u, \qquad y = C_1 z \qquad (3.26)$$

where $A_1 = T^{-1}AT$, $B_1 = T^{-1}B$, $C_1 = CT$. The transformation (3.25) is called a state transformation, and the matrices $A$ and $A_1$ are similar. We are particularly interested in this transformation when $A$ is in companion form, such as in (3.2), and $A_1$ is diagonal (usually denoted by $\Lambda$).
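Because $A_1 = T^{-1}AT$ is similar to $A$, the eigenvalues are unchanged by a state transformation; a small numerical sketch (NumPy assumed, with an arbitrary $T$ of our own choosing):

```python
# A state transformation x = T z gives A1 = T^{-1} A T, similar to A,
# and therefore with the same eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])            # eigenvalues -1, -2
T = rng.standard_normal((2, 2)) + 2 * np.eye(2)     # almost surely non-singular

A1 = np.linalg.inv(T) @ A @ T
print(sorted(np.linalg.eigvals(A).real))
print(sorted(np.linalg.eigvals(A1).real))           # same eigenvalues
```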
The matrix $A$ has the companion form

$$A = \begin{bmatrix}0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}\end{bmatrix}$$

so that its characteristic equation is

$$\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0$$

By solving this equation we obtain the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. It is assumed that the matrix $A$ has distinct eigenvalues. Corresponding to the eigenvalue $\lambda_i$ there is the eigenvector $x_i$ such that

$$Ax_i = \lambda_i x_i \qquad (i = 1, 2, \ldots, n) \qquad (3.27)$$

We define the matrix $V$ whose columns are the eigenvectors,

$$V = [x_1 \;\; x_2 \;\; \cdots \;\; x_n]$$

$V$ is called the modal matrix; it is non-singular and can be used as the transformation matrix $T$ above. We can write the $n$ equations defined by (3.27) as

$$AV = V\Lambda \qquad (3.28)$$

where

$$\Lambda = \operatorname{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$$

From equation (3.28) we obtain

$$\Lambda = V^{-1}AV \qquad (3.29)$$

The corresponding eigenvectors have an interesting form. Consider one of the eigenvalues $\lambda$ and the corresponding eigenvector $x = [\alpha_1 \;\; \alpha_2 \;\; \cdots \;\; \alpha_n]^T$. Then the equation $Ax = \lambda x$ corresponds to the system of equations

$$\alpha_2 = \lambda\alpha_1, \quad \alpha_3 = \lambda\alpha_2, \quad \ldots, \quad \alpha_n = \lambda\alpha_{n-1}$$

Setting $\alpha_1 = 1$, we obtain $x = [1 \;\; \lambda \;\; \lambda^2 \;\; \cdots \;\; \lambda^{n-1}]^T$, so the modal matrix in this case takes the form

$$V = \begin{bmatrix}1 & 1 & \cdots & 1\\ \lambda_1 & \lambda_2 & \cdots & \lambda_n\\ \vdots & \vdots & \ddots & \vdots\\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1}\end{bmatrix} \qquad (3.30)$$

In this form $V$ is called a Vandermonde matrix; it is non-singular since it has been assumed that $\lambda_1, \lambda_2, \ldots, \lambda_n$ are distinct. We now consider the problem of obtaining the transformation from the companion form to the diagonal form.
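A minimal numerical sketch of (3.29)–(3.30) (NumPy assumed; the companion matrix and its eigenvalues are our own illustrative choice):

```python
# For a companion matrix with distinct eigenvalues, the Vandermonde
# matrix (3.30) built from those eigenvalues diagonalises it.
import numpy as np

# Companion matrix of lambda^3 + 6 lambda^2 + 11 lambda + 6 = 0,
# whose roots are -1, -2, -3.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])

lams = np.array([-1.0, -2.0, -3.0])
V = np.vander(lams, increasing=True).T   # columns are [1, lam, lam^2]^T
Lam = np.linalg.inv(V) @ A @ V
print(np.round(Lam, 8))                  # diag{-1, -2, -3}
```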
Example 3.8

Having chosen the state vector so that

$$\dddot y - 2\ddot y + \dot y - 2y = u$$

is written as the (companion-form) state equations

$$\begin{bmatrix}\dot x_1\\ \dot x_2\\ \dot x_3\end{bmatrix} = \begin{bmatrix}0&1&0\\0&0&1\\2&-1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} + \begin{bmatrix}0\\0\\1\end{bmatrix}u, \qquad y = [1\;\;0\;\;0]\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$$

find the transformation which will transform this into a state equation with $A$ in diagonal form.

The characteristic equation of $A$ is

$$\det(\lambda I - A) = \lambda^3 - 2\lambda^2 + \lambda - 2 = (\lambda - 2)(\lambda^2 + 1) = 0$$

that is, $\lambda_1 = 2$, $\lambda_2 = i$ and $\lambda_3 = -i$. From (3.30) the modal matrix is

$$V = \begin{bmatrix}1 & 1 & 1\\ 2 & i & -i\\ 4 & -1 & -1\end{bmatrix}$$

and its inverse can be shown to be

$$V^{-1} = \frac{1}{20}\begin{bmatrix}4 & 0 & 4\\ 8+4i & -10i & -2+4i\\ 8-4i & 10i & -2-4i\end{bmatrix}$$

The transformation is defined by equation (3.25), that is, $x = Vz$ or $z = V^{-1}x$. The original choice for $x$ was $x = [y \;\; \dot y \;\; \ddot y]^T$, so the transformation $V^{-1}x$ gives the new choice of state vector

$$\begin{bmatrix}z_1\\ z_2\\ z_3\end{bmatrix} = \frac{1}{20}\begin{bmatrix}4 & 0 & 4\\ 8+4i & -10i & -2+4i\\ 8-4i & 10i & -2-4i\end{bmatrix}\begin{bmatrix}y\\ \dot y\\ \ddot y\end{bmatrix}$$

The state equations are now in the form of equation (3.26), that is,

$$\dot z = A_1 z + B_1 u, \qquad y = C_1 z$$

where

$$A_1 = V^{-1}AV = \operatorname{diag}\{2, i, -i\}, \qquad B_1 = V^{-1}B = \frac{1}{10}\begin{bmatrix}2\\ -1+2i\\ -1-2i\end{bmatrix}, \qquad C_1 = CV = [1\;\;1\;\;1]$$

(Compare the state variables (3.6) and the diagonal state equations (3.7) of Example 3.1.)
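Example 3.8 can be verified numerically with complex arithmetic (NumPy assumed):

```python
# Verifying A1 = V^{-1} A V, B1 = V^{-1} B and C1 = C V for Example 3.8.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, -1.0, 2.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

lams = np.array([2.0, 1.0j, -1.0j])
V = np.vstack([lams**0, lams**1, lams**2])   # Vandermonde modal matrix (3.30)

A1 = np.linalg.inv(V) @ A @ V
B1 = np.linalg.inv(V) @ B
C1 = C @ V

print(np.round(A1, 8))        # diag{2, i, -i}
print(np.round(B1 * 10, 8))   # [2, -1+2i, -1-2i]^T
print(np.round(C1, 8))        # [1, 1, 1]
```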
3.7 The transfer function from the state equation

Consider the system in state space form (3.24):

$$\dot x = Ax + Bu, \qquad y = Cx$$

Taking Laplace transforms (with zero initial conditions), we obtain

$$sX(s) = AX(s) + BU(s) \;\Rightarrow\; X(s) = (sI - A)^{-1}BU(s)$$

and $Y(s) = CX(s)$, so that

$$G(s) = \frac{Y(s)}{U(s)} = C(sI - A)^{-1}B \qquad (3.31)$$

Example 3.9

Calculate the transfer function of the system whose state equations are

$$\dot x = \begin{bmatrix}1 & -2\\ 3 & -4\end{bmatrix}x + \begin{bmatrix}1\\ 2\end{bmatrix}u, \qquad y = [1 \;\; -2]\,x$$

From (3.31),

$$G(s) = [1 \;\; -2]\begin{bmatrix}s-1 & 2\\ -3 & s+4\end{bmatrix}^{-1}\begin{bmatrix}1\\ 2\end{bmatrix} = [1\;\; -2]\begin{bmatrix}\dfrac{s+4}{(s+1)(s+2)} & \dfrac{-2}{(s+1)(s+2)}\\[2mm] \dfrac{3}{(s+1)(s+2)} & \dfrac{s-1}{(s+1)(s+2)}\end{bmatrix}\begin{bmatrix}1\\ 2\end{bmatrix} = -\frac{3s+2}{(s+1)(s+2)}$$

Example 3.10

Given that the system

$$\dot x = Ax + Bu, \qquad y = Cx$$

has the transfer function $G(s)$, find the transfer function of the system

$$\dot z = T^{-1}ATz + T^{-1}Bu, \qquad y = CTz$$

If the transfer function of the transformed system is $G_1(s)$, then

$$G_1(s) = CT(sI - T^{-1}AT)^{-1}T^{-1}B = CT\{T^{-1}(sI - A)T\}^{-1}T^{-1}B = C(sI - A)^{-1}B = G(s)$$

so that $G_1(s) = G(s)$: a state transformation leaves the transfer function unchanged.
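A minimal numerical sketch of (3.31) for Example 3.9 (NumPy assumed), evaluating $C(sI-A)^{-1}B$ at sample frequencies and comparing with the closed form:

```python
# G(s) = C (sI - A)^{-1} B for Example 3.9 vs -(3s+2)/((s+1)(s+2)).
import numpy as np

A = np.array([[1.0, -2.0], [3.0, -4.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, -2.0]])

def G(s):
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

for s in (0.0, 1.0, 2.0 + 1.0j):
    closed = -(3*s + 2) / ((s + 1) * (s + 2))
    print(abs(G(s) - closed))   # ~0 at each test frequency
```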