4 State-space solutions and realizations

4.2 Solutions of LTI State Equations
• Consider the LTI state equation
$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
• Premultiplying both sides of the first equation by $e^{-At}$ yields
$$e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t)$$
• This implies
$$\frac{d}{dt}\left(e^{-At}x(t)\right) = e^{-At}Bu(t)$$
• Integrating from 0 to t yields
$$e^{-A\tau}x(\tau)\Big|_{\tau=0}^{t} = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau$$
• Thus, we have
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
• The final solution is
$$y(t) = Ce^{At}x(0) + C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t)$$
• The solutions can also be computed by the Laplace transform:
$$\hat{x}(s) = (sI - A)^{-1}[x(0) + B\hat{u}(s)]$$
$$\hat{y}(s) = C(sI - A)^{-1}[x(0) + B\hat{u}(s)] + D\hat{u}(s)$$
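• As an illustration, here is a minimal symbolic sketch of the Laplace-domain formula for the zero-input case (û(s) = 0); the matrix A and initial state are arbitrary assumed values:

```python
import sympy as sp

# Symbolic check of x̂(s) = (sI - A)^{-1} x(0) with û(s) = 0;
# A and x(0) are assumed example values.
s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])
x0 = sp.Matrix([1, 0])

xhat = (s * sp.eye(2) - A).inv() * x0   # x̂(s) = (sI - A)^{-1} x(0)
x_t = xhat.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))
print(sp.simplify(x_t))                 # equals e^{At} x(0)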
• Three methods of computing $e^{At}$:
1. Using Theorem 3.5: find a polynomial h(λ) with $h(\lambda_i) = f(\lambda_i)$ on the spectrum of A, where $f(\lambda) = e^{\lambda t}$ (or $f(\lambda) = (s-\lambda)^{-1}$ to compute $(sI-A)^{-1}$); then $e^{At} = h(A)$.
2. Using the Jordan form of A: let $A = Q\hat{A}Q^{-1}$; then $e^{At} = Qe^{\hat{A}t}Q^{-1}$.
3. Using the infinite power series
$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!}\,t^k A^k$$
• See Examples 4.1 and 4.2
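• Below is a minimal numerical sketch comparing method 3 (a truncated power series) with SciPy's built-in matrix exponential; the matrix A and time t are assumed example values:

```python
import math
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example matrix
t = 0.5

# Method 3: truncated power series e^{At} ≈ Σ_{k=0}^{N} (1/k!) t^k A^k
eAt_series = sum(
    np.linalg.matrix_power(A, k) * t**k / math.factorial(k) for k in range(20)
)

# Reference value from SciPy (Padé approximation with scaling and squaring)
print(np.allclose(eAt_series, expm(A * t)))   # True for well-scaled A*t
```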
4.2.1 Discretization
• Because
$$\dot{x}(t) = \lim_{T\to 0}\frac{x(t+T) - x(t)}{T}$$
• we can approximate an LTI system as
$$x(t+T) = x(t) + Ax(t)T + Bu(t)T$$
• This gives the discrete-time state-space equation
$$x((k+1)T) = (I + TA)x(kT) + TBu(kT)$$
$$y(kT) = Cx(kT) + Du(kT)$$
• Instead, evaluating the exact solution at t = kT and t = (k+1)T yields
$$x[k] := x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT-\tau)}Bu(\tau)\,d\tau$$
and
$$x[k+1] := x((k+1)T) = e^{A(k+1)T}x(0) + \int_0^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau$$
• If u(t) is piecewise constant over each sampling period, i.e., u(t) = u[k] := u(kT) for kT ≤ t < (k+1)T, then the substitution α = kT + T − τ gives
$$x[k+1] = e^{AT}x[k] + \left(\int_0^T e^{A\alpha}\,d\alpha\right)Bu[k]$$
• The continuous-time state equation becomes
$$x[k+1] = A_d x[k] + B_d u[k]$$
$$y[k] = C_d x[k] + D_d u[k]$$
with
$$A_d = e^{AT} \qquad B_d = \left(\int_0^T e^{A\tau}\,d\tau\right)B \qquad C_d = C \qquad D_d = D$$
• If A is nonsingular, $B_d = A^{-1}(A_d - I)B$.
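• A minimal numerical sketch of this zero-order-hold discretization; A, B, and T below are assumed example values (with A nonsingular):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])        # assumed example (nonsingular)
B = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(A * T)                                 # A_d = e^{AT}
Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)    # B_d = A^{-1}(A_d - I)B

# Compare with the Euler approximation A_d ≈ I + TA from the limit above
print(np.linalg.norm(Ad - (np.eye(2) + T * A)))  # small for small T
print(Ad, Bd)
```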
4.2.2 Solution of Discrete-Time Equations
• Consider
$$x[k+1] = Ax[k] + Bu[k]$$
$$y[k] = Cx[k] + Du[k]$$
• Compute
$$x[1] = Ax[0] + Bu[0]$$
$$x[2] = Ax[1] + Bu[1] = A^2x[0] + ABu[0] + Bu[1]$$
• Proceeding forward, for k > 0,
$$x[k] = A^k x[0] + \sum_{m=0}^{k-1} A^{k-1-m}Bu[m]$$
$$y[k] = CA^k x[0] + \sum_{m=0}^{k-1} CA^{k-1-m}Bu[m] + Du[k]$$
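• A short sketch verifying this closed-form solution against the recursion; all matrices, the initial state, and the input sequence are assumed example values:

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = [np.array([[1.0]]) for _ in range(4)]   # constant input

# Recursion: x[k+1] = A x[k] + B u[k]
x = x0
for k in range(4):
    x = A @ x + B @ u[k]

# Closed form: x[4] = A^4 x0 + Σ_{m=0}^{3} A^{3-m} B u[m]
x_cf = np.linalg.matrix_power(A, 4) @ x0 + sum(
    np.linalg.matrix_power(A, 3 - m) @ B @ u[m] for m in range(4)
)
print(np.allclose(x, x_cf))   # True
```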
• Key computation: $A^k = Q\hat{A}^k Q^{-1}$
• Suppose the Jordan form of A is
$$\hat{A} = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix}$$
Then
$$A^k = Q\begin{bmatrix} \lambda_1^k & k\lambda_1^{k-1} & \tfrac{1}{2}k(k-1)\lambda_1^{k-2} & 0 & 0 \\ 0 & \lambda_1^k & k\lambda_1^{k-1} & 0 & 0 \\ 0 & 0 & \lambda_1^k & 0 & 0 \\ 0 & 0 & 0 & \lambda_1^k & 0 \\ 0 & 0 & 0 & 0 & \lambda_2^k \end{bmatrix}Q^{-1}$$
4.3 Equivalent State Equations
• Definition 4.1 Let P be an n×n real nonsingular matrix and let $\bar{x} = Px$. Then the state equation
$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t)$$
$$y(t) = \bar{C}\bar{x}(t) + \bar{D}u(t)$$
where $\bar{A} = PAP^{-1}$, $\bar{B} = PB$, $\bar{C} = CP^{-1}$, $\bar{D} = D$, is said to be equivalent to (4.24), and $\bar{x} = Px$ is called an equivalence transformation.
• Equivalent state equations have the same characteristic polynomial and, consequently, the same set of eigenvalues and the same transfer matrix.
• Theorem 4.1 Two LTI state equations {A, B, C, D} and $\{\bar{A}, \bar{B}, \bar{C}, \bar{D}\}$ are zero-state equivalent, i.e., have the same transfer matrix, if and only if $D = \bar{D}$ and
$$CA^m B = \bar{C}\bar{A}^m\bar{B}, \qquad m = 0, 1, 2, \ldots$$
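• As a quick numerical illustration of Theorem 4.1, the sketch below builds an equivalent realization from a random P and checks the first few Markov parameters $CA^mB$; all matrices are arbitrary assumed examples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
P = rng.standard_normal((n, n))   # nonsingular with probability 1

Abar = P @ A @ np.linalg.inv(P)   # Ā = PAP^{-1}
Bbar = P @ B                      # B̄ = PB
Cbar = C @ np.linalg.inv(P)       # C̄ = CP^{-1}

# Markov parameters CA^mB must match for all m (checked here for m = 0..5)
ok = all(
    np.allclose(C @ np.linalg.matrix_power(A, m) @ B,
                Cbar @ np.linalg.matrix_power(Abar, m) @ Bbar)
    for m in range(6)
)
print(ok)   # True
```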
4.3.1 Canonical Forms
• Let λ1, λ2, α+jβ, and α-jβ be the eigenvalues
and q1, q2, q3, and q4 be the corresponding
eigenvectors. Define Q = [q1 q2 q3 q4]. Then
we have
$$J := \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \alpha+j\beta & 0 \\ 0 & 0 & 0 & \alpha-j\beta \end{bmatrix} = Q^{-1}AQ$$
• The modal form of A can be obtained as
$$\bar{Q}^{-1}J\bar{Q} := \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&1\\ 0&0&j&-j \end{bmatrix}\begin{bmatrix} \lambda_1&0&0&0\\ 0&\lambda_2&0&0\\ 0&0&\alpha+j\beta&0\\ 0&0&0&\alpha-j\beta \end{bmatrix}\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0.5&-0.5j\\ 0&0&0.5&0.5j \end{bmatrix} = \begin{bmatrix} \lambda_1&0&0&0\\ 0&\lambda_2&0&0\\ 0&0&\alpha&\beta\\ 0&0&-\beta&\alpha \end{bmatrix} =: \bar{A}$$
• The two transformations (to Jordan form and then to modal form) can be combined into one as
$$P^{-1} = Q\bar{Q} = [q_1\ q_2\ q_3\ q_4]\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0.5&-0.5j\\ 0&0&0.5&0.5j \end{bmatrix} = [q_1\ q_2\ \mathrm{Re}(q_3)\ \mathrm{Im}(q_3)]$$
• The modal form of another example:
$$\bar{A} = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 & 0 \\ 0 & \alpha_1 & \beta_1 & 0 & 0 \\ 0 & -\beta_1 & \alpha_1 & 0 & 0 \\ 0 & 0 & 0 & \alpha_2 & \beta_2 \\ 0 & 0 & 0 & -\beta_2 & \alpha_2 \end{bmatrix}$$
• Its similarity transformation is
$$P^{-1} = [q_1\ \mathrm{Re}(q_2)\ \mathrm{Im}(q_2)\ \mathrm{Re}(q_4)\ \mathrm{Im}(q_4)]$$
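• A sketch of building the real modal form from complex eigenvectors; the example A below (one real eigenvalue and one complex pair) is an assumption:

```python
import numpy as np

A = np.array([[ 1.0,  0.0, 0.0],
              [ 0.0,  2.0, 5.0],
              [ 0.0, -5.0, 2.0]])   # eigenvalues: 1, 2 ± 5j
w, V = np.linalg.eig(A)

# Pick the real eigenvector and one eigenvector of the conjugate pair
real_idx = [i for i in range(3) if abs(w[i].imag) < 1e-9]
cplx_idx = [i for i in range(3) if w[i].imag > 1e-9]
q1 = V[:, real_idx[0]].real
q2 = V[:, cplx_idx[0]]

Pinv = np.column_stack([q1, q2.real, q2.imag])   # P^{-1} = [q1 Re(q2) Im(q2)]
A_modal = np.linalg.inv(Pinv) @ A @ Pinv
print(np.round(A_modal, 6))   # block diagonal: [λ1] and [[α, β], [-β, α]]
```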
4.4 Realizations
• The realization problem: given the input-output description of an LTI system
$$\hat{y}(s) = \hat{G}(s)\hat{u}(s),$$
find a state-space equation
$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
• A transfer matrix $\hat{G}(s)$ is said to be realizable if there exists a finite-dimensional state equation, or simply {A, B, C, D}, such that
$$\hat{G}(s) = C(sI - A)^{-1}B + D$$
• Theorem 4.2 A transfer matrix $\hat{G}(s)$ is realizable if and only if $\hat{G}(s)$ is a proper rational matrix.
$$\hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s) = D + C(sI-A)^{-1}B = D + \frac{1}{\det(sI-A)}\,C[\mathrm{Adj}(sI-A)]B$$
• Since $C(sI-A)^{-1}B$ is strictly proper, $C(sI-A)^{-1}B + D$ is proper (and not strictly proper if D is nonzero).
• Let $d(s) = s^r + \alpha_1 s^{r-1} + \cdots + \alpha_{r-1}s + \alpha_r$ be the least common denominator of all entries of $\hat{G}_{sp}(s)$. Then $\hat{G}_{sp}(s)$ can be expressed as
$$\hat{G}_{sp}(s) = \frac{1}{d(s)}N(s) = \frac{1}{d(s)}\left[N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_{r-1}s + N_r\right]$$
• We claim that the set of equations
$$\dot{x} = \begin{bmatrix} -\alpha_1 I_p & -\alpha_2 I_p & \cdots & -\alpha_{r-1}I_p & -\alpha_r I_p \\ I_p & 0 & \cdots & 0 & 0 \\ 0 & I_p & \cdots & 0 & 0 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_p & 0 \end{bmatrix}x + \begin{bmatrix} I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}u$$
$$y = [N_1\ N_2\ \cdots\ N_{r-1}\ N_r]\,x + \hat{G}(\infty)u$$
is a realization of $\hat{G}(s)$.
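• A minimal sketch of this block-companion realization for a scalar (p = 1) example; the target transfer function $\hat{g}(s) = (s+3)/(s^2+3s+2)$ is an assumption:

```python
import numpy as np

p = 1                                    # input/output dimension (scalar here)
alpha = [3.0, 2.0]                       # d(s) = s^2 + α1 s + α2
N = [np.eye(p), 3.0 * np.eye(p)]         # N(s) = N1 s + N2 = s + 3
r = len(alpha)

Ip = np.eye(p)
A = np.zeros((r * p, r * p))
A[:p, :] = np.hstack([-a * Ip for a in alpha])   # first block row: -α_i I_p
A[p:, :-p] = np.eye((r - 1) * p)                 # sub-diagonal identity blocks
B = np.vstack([Ip] + [np.zeros((p, p))] * (r - 1))
C = np.hstack(N)
D = np.zeros((p, p))                             # Ĝ(∞) = 0 here

# Check C(sI - A)^{-1}B + D against ĝ(s) at a test point
s = 1.0 + 2.0j
G = C @ np.linalg.solve(s * np.eye(r * p) - A, B) + D
print(G, (s + 3) / (s**2 + 3 * s + 2))           # should agree
```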
4.5 Solution of Linear Time-Varying (LTV) Equations
• Consider
$$\dot{x}(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t) + D(t)u(t)$$
• Assume that every entry of A(t) is a
continuous function of t.
• First, discuss the solutions of
$$\dot{x}(t) = A(t)x(t)$$
• The solution of the scalar time-varying equation $\dot{x} = a(t)x$ due to x(0) is
$$x(t) = e^{\int_0^t a(\tau)\,d\tau}\,x(0)$$
• Extending this to the matrix case would give
$$x(t) = e^{\int_0^t A(\tau)\,d\tau}\,x(0)$$
with
$$e^{\int_0^t A(\tau)\,d\tau} = I + \int_0^t A(\tau)\,d\tau + \frac{1}{2}\left(\int_0^t A(\tau)\,d\tau\right)\left(\int_0^t A(s)\,ds\right) + \cdots$$
• But
$$\frac{d}{dt}\,e^{\int_0^t A(\tau)\,d\tau} = A(t) + \frac{1}{2}\left[A(t)\int_0^t A(s)\,ds + \left(\int_0^t A(\tau)\,d\tau\right)A(t)\right] + \cdots \;\neq\; A(t)\,e^{\int_0^t A(\tau)\,d\tau}$$
in general, because A(t) need not commute with its integral; so the scalar formula does not extend to the matrix case.
• Arranging n solutions as X = [x1 x2 … xn], we have
$$\dot{X}(t) = A(t)X(t)$$
• If X(t0) is nonsingular, i.e., the n initial states are linearly independent, then X(t) is called a fundamental matrix of $\dot{x}(t) = A(t)x(t)$.
• See Example 4.8
• Definition 4.2 Let X(t) be any fundamental matrix of $\dot{x}(t) = A(t)x(t)$. Then
$$\Phi(t, t_0) := X(t)X^{-1}(t_0)$$
is called the state transition matrix of $\dot{x}(t) = A(t)x(t)$. The state transition matrix is also the unique solution of
$$\frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0)$$
with the initial condition $\Phi(t_0, t_0) = I$.
• The important properties of the state transition matrix:
$$\Phi(t, t) = I$$
$$\Phi^{-1}(t, t_0) = [X(t)X^{-1}(t_0)]^{-1} = X(t_0)X^{-1}(t) = \Phi(t_0, t)$$
• See Example 4.9
• We claim that the solution of the LTV system is
$$x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau = \Phi(t, t_0)\left[x_0 + \int_{t_0}^{t}\Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau\right]$$
$$y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t)$$
• The zero-input response:
$$x(t) = \Phi(t, t_0)x_0$$
• The zero-state response:
$$y(t) = C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t) = \int_{t_0}^{t}\left[C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t-\tau)\right]u(\tau)\,d\tau$$
• The impulse response matrix:
$$G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t-\tau) = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t-\tau)$$
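• Since Φ(t, t0) rarely has a closed form, a practical route is to integrate $\partial\Phi/\partial t = A(t)\Phi$ numerically from Φ(t0, t0) = I; the A(t) below is an illustrative assumption:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[0.0, 1.0], [-1.0, -t]])   # assumed example A(t)

t0, t1, n = 0.0, 2.0, 2
sol = solve_ivp(lambda t, phi: (A(t) @ phi.reshape(n, n)).ravel(),
                (t0, t1), np.eye(n).ravel(), rtol=1e-9, atol=1e-12)
Phi = sol.y[:, -1].reshape(n, n)
print(Phi)   # Φ(2, 0); the zero-input response is x(2) = Φ(2, 0) x0
```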
4.5.1 Discrete-Time Case
• Consider the discrete-time state equation
$$x[k+1] = A[k]x[k] + B[k]u[k]$$
$$y[k] = C[k]x[k] + D[k]u[k]$$
• As in the continuous-time case, the discrete state transition matrix satisfies
$$\Phi[k+1, k_0] = A[k]\Phi[k, k_0]$$
with $\Phi[k_0, k_0] = I$.
• Its solution can be obtained directly as
$$\Phi[k, k_0] = A[k-1]A[k-2]\cdots A[k_0]$$
• The solution of the discrete-time system:
$$x[k] = \Phi[k, k_0]x_0 + \sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m]$$
$$y[k] = C[k]\Phi[k, k_0]x_0 + C[k]\sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m] + D[k]u[k]$$
• The impulse response:
$$G[k, m] = C[k]\Phi[k, m+1]B[m] + D[m]\delta[k-m]$$
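• A small sketch of the product formula $\Phi[k, k_0] = A[k-1]\cdots A[k_0]$; the time-varying A[k] is an illustrative assumption:

```python
import numpy as np

def A(k):
    return np.array([[1.0, 0.1 * k], [0.0, 0.9]])   # assumed example A[k]

def Phi(k, k0):
    out = np.eye(2)
    for m in range(k0, k):      # builds A[k-1] ··· A[k0]
        out = A(m) @ out
    return out

# Zero-input response x[k] = Φ[k, k0] x0
x0 = np.array([1.0, 1.0])
print(Phi(5, 0) @ x0)
```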
4.6 Equivalent Time-Varying Equations
• Let $\bar{x}(t) = P(t)x(t)$, where P(t) is nonsingular and continuously differentiable for all t. The state equation
$$\dot{\bar{x}} = \bar{A}(t)\bar{x}(t) + \bar{B}(t)u$$
$$y = \bar{C}(t)\bar{x} + \bar{D}(t)u$$
where
$$\bar{A}(t) = [P(t)A(t) + \dot{P}(t)]P^{-1}(t)$$
$$\bar{B}(t) = P(t)B(t) \qquad \bar{C}(t) = C(t)P^{-1}(t) \qquad \bar{D}(t) = D(t)$$
is said to be equivalent to (4.69), and P(t) is called an equivalence transformation.
• $\bar{X}(t) = P(t)X(t)$ is a fundamental matrix of the equivalent equation.
• Theorem 4.3 Let A0 be an arbitrary constant matrix. Then there exists an equivalence transformation that transforms (4.69) into (4.70) with $\bar{A}(t) = A_0$.
• Periodic state equation: A(t+T) = A(t) for all t.
• Then
$$\dot{X}(t+T) = A(t+T)X(t+T) = A(t)X(t+T)$$
• Thus X(t+T) is also a fundamental matrix, and
$$X(t+T) = X(t)X^{-1}(0)X(T)$$
• Let $Q = X^{-1}(0)X(T)$, a constant nonsingular matrix.
• There exists a constant matrix $\bar{A}$ such that $e^{\bar{A}T} = Q$ (Problem 3.24).
• Thus
$$X(t+T) = X(t)e^{\bar{A}T}$$
• Define
$$P(t) := e^{\bar{A}t}X^{-1}(t)$$
• Note that P(t) is periodic with period T.
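• One way to solve $e^{\bar{A}T} = Q$ for $\bar{A}$ (cf. Problem 3.24) is the matrix logarithm; Q and T below are assumed example values:

```python
import numpy as np
from scipy.linalg import logm, expm

Q = np.array([[0.8, 0.5], [-0.5, 0.8]])   # assumed constant nonsingular Q
T = 2.0
Abar = logm(Q) / T            # may be complex if Q has negative real eigenvalues
print(np.allclose(expm(Abar * T), Q))     # True
```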
• Theorem 4.4 Consider (4.69) with A(t) = A(t+T) for all t and some T > 0. Let X(t) be a fundamental matrix, and let $\bar{A}$ be the constant matrix with $e^{\bar{A}T} = X^{-1}(0)X(T)$. Then (4.69) is Lyapunov equivalent to
$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + P(t)B(t)u(t)$$
$$y(t) = C(t)P^{-1}(t)\bar{x}(t) + D(t)u(t)$$
where $P(t) = e^{\bar{A}t}X^{-1}(t)$.
4.7 Time-Varying Realizations
• Theorem 4.5 A q×p impulse response matrix G(t, τ) is realizable if and only if G(t, τ) can be decomposed as
$$G(t, \tau) = M(t)N(\tau) + D(t)\delta(t-\tau)$$
for all t ≥ τ, where M(t) and N(τ) are, respectively, q×n and n×p matrices for some integer n.