
Chapter 1: Mathematical Descriptions of Systems
Control System Design Steps
The design of a controller that can modify the
behavior and response of a plant to meet certain
performance requirements can be a tedious and
challenging problem in many control applications.
[Block diagram: inputs u → Plant → outputs y]
The control design task is to choose the input u(t) so
that the output response y(t) satisfies certain given
performance requirements.
The following control design steps are often followed by most control engineers:
Step 1 (Modeling): The task of the control engineer in
this step is to understand the processing mechanism of
the plant, which takes a given input signal u(t) and
produces the output response y(t), to the point that he or
she can describe it in the form of some mathematical
equations. For example,
[Figure: an input u(t) and the corresponding output response yc(t), from which a first-order model can be identified]
Though the controlled physical system may be very
complex, from the above response, the system can be
described approximately by the following first-order
system:
T·(dyc/dt) + a·yc = u  ⟹  Yc(s)/U(s) = 1/(Ts + a)
In many cases, however, the resulting model may be too complicated for control design. Two approaches often used to obtain a simplified model are (1) linearization around operating points, and (2) model-order reduction techniques.
Step 2 (System analysis). Once we obtain a
mathematical model, system analysis can be carried
out, which includes controllability and observability
analysis, stability analysis and other performance
analysis. For example, let the system be described by
Y(s)/U(s) = 1/(s² + s + 1)
From this transfer function, the system's stability and dynamic performance can be analyzed.
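The kind of analysis meant here can be sketched numerically. The following is an illustrative check (not part of the original text) that finds the poles of the example transfer function and tests stability:

```python
import numpy as np

# Illustrative sketch: analyze Y(s)/U(s) = 1/(s^2 + s + 1) by computing
# the roots of the denominator polynomial (the poles of the system).
den = [1.0, 1.0, 1.0]        # coefficients of s^2 + s + 1
poles = np.roots(den)

# The system is stable iff every pole has a negative real part.
stable = all(p.real < 0 for p in poles)
print(poles)                 # complex pair at -0.5 +/- j*sqrt(3)/2
print(stable)                # True
```

Both poles lie in the open left half-plane, so this particular example is stable.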
Step 3 (Controller design): If the analysis shows that
the system performance does not meet the
requirements, a controller is needed. For example, for
the following system,
[Block diagram: reference R(s) = 1/s → proportional controller K → plant 1/(Ts + 1) → output C(s), unity feedback]
a proportional controller can be designed to
improve the steady state performance.
A PD controller can be used to stabilize the system:
[Block diagram: reference R(s) = 1/s → PD controller Kp(1 + Td·s) → control u = up + ud → plant 1/(Js²) → output C(s), unity feedback]
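The stabilizing effect of the derivative term can be sketched numerically. The values of J, Kp and Td below are illustrative assumptions, not values from the text; with unity feedback, the closed-loop denominator is J·s² + Kp·Td·s + Kp:

```python
import numpy as np

# Illustrative sketch: unity-feedback loop with plant 1/(J s^2) and
# PD controller Kp(1 + Td s); closed-loop denominator J s^2 + Kp Td s + Kp.
J, Kp, Td = 1.0, 4.0, 0.5                # assumed example values
poles_pd = np.roots([J, Kp * Td, Kp])    # with derivative action
poles_p = np.roots([J, 0.0, Kp])         # proportional control only

print(all(p.real < 0 for p in poles_pd))         # True: PD stabilizes 1/(J s^2)
print(max(abs(p.real) for p in poles_p) < 1e-9)  # True: P only leaves poles on the imaginary axis
```

With Td = 0, the closed loop of the double integrator has purely imaginary poles (sustained oscillation); the Kp·Td·s damping term moves them into the left half-plane.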
Step 4 (Implementation): In this step, a controller
designed in step 3, which is shown to meet the
performance requirements for the plant model, is
ready to be applied to the plant. The implementation can be done using a digital computer.
What does linear systems theory study?
Linear system theory studies physical systems that can
be expressed by linear mathematical models. For
example: Consider the following R-C network
[Figure: R-C network with input voltage u and output voltage yc]
whose mathematical model is a linear differential equation:
ẏc = −(1/τ)·yc + (1/τ)·u
where τ = RC is the time constant of the network.
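As a quick check of this model, a short simulation can compare a forward-Euler solution of the R-C equation with the analytic unit-step response 1 − e^{−t/τ}; the time constant and step size below are illustrative choices, not values from the text:

```python
import math

# Illustrative sketch: forward-Euler simulation of the R-C model
# dy/dt = -(1/tau)*y + (1/tau)*u, compared with the analytic unit-step
# response 1 - exp(-t/tau). tau and the step size dt are assumed values.
tau, dt, t_end = 0.5, 1e-4, 2.0
y, t = 0.0, 0.0
while t < t_end:
    u = 1.0                            # unit-step input
    y += dt * (-(y / tau) + u / tau)   # Euler update of the linear ODE
    t += dt

exact = 1.0 - math.exp(-t_end / tau)
print(abs(y - exact) < 1e-3)           # True: Euler tracks the analytic response
```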
How many descriptions are there for linear
systems?
There are two descriptions for linear control systems:
Input-output description and state-space description.
• Input-output description gives a mathematical relation between the input and output of the system, for example, transfer functions.
• State-space description describes both the input-output relation and the behavior of the internal states of a system.
Therefore, with the state-space description, one can obtain full information about a system, which makes it possible to design a high-performance system. However, in many cases it is difficult to obtain the state-space description.
The two descriptions both have advantages and
disadvantages. Which description should be used
depends on the problem, on the data available, and on the
question asked.
§1-1 The Input-Output Description
1. Why is I/O description necessary?
For the input-output description, the knowledge of the
internal structure of a system is assumed to be unavailable;
the only access to the system is the input and output.
Under this assumption, a system may be considered as a
“black box” shown below.
[Figure: u → Black Box → y]
Clearly, what we can do to a black box is to apply all
kinds of inputs and measure their corresponding
outputs, and then try to abstract key properties of the
system from these input-output pairs.
A system may have more than one input terminal or
more than one output terminal.
[Figure: u → Black Box → y]
u = [u1 u2 ... up]^T ∈ R^p,  y = [y1 y2 ... yq]^T ∈ R^q
Definition: A system is said to be a single-variable
system if and only if it has only one input and one
output.
Correspondingly, a system is said to be a multivariable system if it has more than one input terminal or more than one output terminal.
[Block diagram of a multivariable system: two inputs v1 and v2 enter feedback loops with gains K1 and K2 and blocks 2/(s + 1) and 3/(s + 2); the states x1, x2 give the outputs y1, y2]
2. Relaxedness
Question
What conditions should be satisfied if the
output can be excited solely and uniquely by
the input?
This question is important because, if for an input-output described system the same input corresponds to more than one different output, then the input-output description is of no use for determining the key properties of the system.
Example. Consider the following second-order system:
ÿc + 2·ẏc + yc = u,  t ≥ t0 = 0
with initial conditions ẏc(0) and yc(0).
[Figure: u → system → yc]
Assume that only input and output signals are available.
If the initial conditions ẏc(0) and yc(0) are not available, the output cannot be determined solely and uniquely by the input u, since the output is excited by both the input signal and the initial conditions; hence, it is impossible to determine which part of the output is excited by the input.
In classic control theory, we always assume that all
the initial conditions of a given system are zero and
therefore, the output can be excited by input solely
and uniquely:
yc(t) = ∫_0^t h(t − τ)·u(τ) dτ  ⟺  yc = Hu = h * u
If the concept of energy is applicable to a system, the
system is said to be relaxed at time t1, if no energy is
stored in the system at that instant. Therefore,
ẏc(0) and yc(0) represent the energy the system has stored up to time 0.
As in the engineering literature, we shall assume that every system is relaxed at −∞. Consequently, if an input u(−∞, +∞) is applied at t = −∞, the corresponding output will be excited solely and uniquely by u. Hence, under the relaxedness assumption, it is legitimate to write
y = Hu
Definition: A system is said to be an initially relaxed system if it is relaxed at time −∞.
For a relaxed system, we have
y=Hu
(1-1)
where H is some operator that specifies uniquely the
output y in terms of the input u of the system. The
equation (1-1) can be expressed in the following
form:
y (t )  Hu (  ,  )
t  ( , )
(1-2)
)
Generally, f (t ,t represents
a function defined on the time
interval (t1, t2).
1
2
Operator H:
Note that a transfer function is a typical linear operator:
y(s) = H(s)·u(s)
which maps the signal u(s) into y(s), and can be expressed as
H: u(s) → H(s)·u(s)
In the time domain, the operator H is a convolution:
y(t) = ∫_0^t h(t − τ)·u(τ) dτ
H: u → Hu := h * u
i.e. y(t) = Hu = Hu[0, t] = ∫_0^t h(t − τ)·u(τ) dτ,  t ≥ 0
where h(t) is the impulse response function.
3. Linearity
Definition: A relaxed system is said to be linear if and
only if
H(α1·u1 + α2·u2) = α1·Hu1 + α2·Hu2    (1-3)
for any inputs u1 and u2, and any real (or complex) numbers α1 and α2. Otherwise, the relaxed system is said to be nonlinear.
In engineering literature, the condition of (1-3) is often
written as
Additivity:  H(u1 + u2) = Hu1 + Hu2
Homogeneity:  H(α·u) = α·Hu
Together, these two conditions constitute the principle of superposition.
Example. The Laplace transform, L: f(t) → F(s), is such a linear operator.
Example. Consider a system described by the
following differential equation:
x  3x  2x  f (t ),
t  0, x (0)  x (0)  0
f
H
x
which is a linear system.t In fact, we have
x (t )  Hf   h (t  t ) f (t )d t
It is easy to verify that
0
H (a1f1  a 2 f2 )  a1Hf1  a 2Hf2 , a1, a 2 R
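The superposition claim for this example can also be checked numerically. The sketch below (grid and test inputs are illustrative) uses the impulse response h(t) = e^{−t} − e^{−2t}, the partial-fraction inverse transform of 1/((s+1)(s+2)):

```python
import numpy as np

# Illustrative sketch: check H(a1 f1 + a2 f2) = a1 H f1 + a2 H f2 for the
# relaxed system x'' + 3x' + 2x = f, whose impulse response is
# h(t) = e^{-t} - e^{-2t} (partial fractions of 1/((s+1)(s+2))).
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t) - np.exp(-2.0 * t)

def H(f):
    """Zero-state response via the convolution x = h * f."""
    return np.convolve(h, f)[: len(t)] * dt

f1, f2 = np.sin(t), np.ones_like(t)    # two arbitrary test inputs
a1, a2 = 2.0, -3.0

lhs = H(a1 * f1 + a2 * f2)
rhs = a1 * H(f1) + a2 * H(f2)
print(np.allclose(lhs, rhs))           # True: superposition holds
```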
Example. Consider the following system whose
input and output are related by
y(t) = f(u) = u²(t)/u(t − 1)  if u(t − 1) ≠ 0,  and  y(t) = 0  if u(t − 1) = 0
It is easy to verify that the input-output pair satisfies
the property of homogeneity but not the property of
additivity. Therefore, the system is not a linear system.
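The claim can be verified on sample values; in the sketch below (the numbers are illustrative), u and d stand for a value of u(t) and of u(t − 1):

```python
import math

# Illustrative sketch: the map y(t) = u(t)^2 / u(t-1) (0 when u(t-1) = 0)
# satisfies homogeneity but not additivity, so it is not linear.
def f(u, d):
    # u plays the role of u(t), d the role of u(t-1)
    return u**2 / d if d != 0 else 0.0

u1, d1 = 2.0, 1.0
u2, d2 = 3.0, 4.0
a = 5.0

homogeneous = math.isclose(f(a * u1, a * d1), a * f(u1, d1))
additive = math.isclose(f(u1 + u2, d1 + d2), f(u1, d1) + f(u2, d2))
print(homogeneous, additive)   # True False
```

Homogeneity holds because (a·u)²/(a·d) = a·(u²/d), while the cross terms in (u1 + u2)² break additivity.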
Impulse response function of a relaxed system
We need the concept of the δ-function, or impulse function, which can be derived by introducing a pulse function δΔ(t − t1):
• Pulse function:
δΔ(t − t1) = 0 for t < t1;  1/Δ for t1 ≤ t < t1 + Δ;  0 for t ≥ t1 + Δ
[Figure: a rectangular pulse of height 1/Δ on the interval [t1, t1 + Δ)]
As  approaches zero, the limiting “function” is called
-function:
d(t  t 1 )  lim d (t  t 1 )
 0
d(t  t1 )
t  t1
t
• δ-function: the δ-function has the properties that
∫_{t1−ε}^{t1+ε} δ(t − t1) dt = 1
for any positive ε, and that
∫_{−∞}^{∞} f(t)·δ(t − t1) dt = f(t1)
for any function f that is continuous at t1.
Every piecewise continuous input u(t) can be approximated by a series of pulse functions. Since every pulse can be described by
δΔ(t − tn)·u(tn)·Δ = u(tn),  t ∈ [tn, tn + Δ)
we have
u(t) ≈ Σ_n δΔ(t − tn)·u(tn)·Δ
• Impulse response function
y = Hu ≈ H[Σ_n δΔ(t − tn)·u(tn)·Δ]
  = (additivity)  Σ_n H[δΔ(t − tn)·u(tn)·Δ]
  = (homogeneity) Σ_n [H δΔ(t − tn)]·u(tn)·Δ    (1-7)
since each u(tn)·Δ is a constant. Let tn = nΔ and let Δ → 0: the summation becomes an integration and the pulse function δΔ(t − tn) tends to a δ-function. Consequently, as Δ → 0, (1-7) becomes
y = ∫_{−∞}^{∞} [H δ(t − τ)]·u(τ) dτ    (1-8)
The physical meaning of H δ(t − τ) is that it is the output of a relaxed system due to an impulse function applied at time τ. Define
g(·, τ) = H δ(t − τ)
where the first variable denotes the time at which the output is observed. For convenience, we write
g(t, τ) = H δ(t − τ)
Hence,
y(t) = ∫_{−∞}^{∞} g(t, τ)·u(τ) dτ    (1-10)
• Impulse-response matrix: If a system has p input terminals and q output terminals, and if the system is initially relaxed, the input-output description (1-10) can be extended to
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ
where G(t, τ) is the q×p matrix
G(t, τ) = [ g11(t, τ)  g12(t, τ)  ...  g1p(t, τ) ]
          [ g21(t, τ)  g22(t, τ)  ...  g2p(t, τ) ]
          [    ...        ...            ...     ]
          [ gq1(t, τ)  gq2(t, τ)  ...  gqp(t, τ) ]
and gij(t, ) is the response at time t at the ith output
terminal due to an impulse function applied at time  at the
jth input terminal, the inputs at other terminals being
identically zero.
In other words, the jth column of G(t, τ) is the response at the q output terminals when an impulse δ(t − τ) is applied at the jth input terminal, all other inputs being zero:
[ g1j(t, τ)  ...  gij(t, τ)  ...  gqj(t, τ) ]^T = H [ 0  ...  0  δ(t − τ)  0  ...  0 ]^T
where the impulse appears in the jth entry of the input vector.
4. Causality
Definition: A system is said to be causal if the output
of the system at time t does not depend on the input
applied after time t; it depends only on the input
applied before and at time t. In short, the past affects
the future, but not conversely.
If a relaxed system is causal, the output is identically
zero before any input is applied. Hence, a linear
system is causal if and only if
G (t , t )  0 t  t , t  (, )
Consequently, the input-output description of a linear,
causal, relaxed system becomes
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ = ∫_{−∞}^{t} G(t, τ)u(τ) dτ    (1-14)
Example: We often use the truncation operator to define
causality. The definition of a truncation operator is
y(t) = (Pα u)(t) = u(t)  for t ≤ α,  and  0  for t > α
and can be illustrated by the following figure, which chops off the input after time α:
[Figure: an input u(t) and its truncation Pα(u(t)); the past and present (t ≤ α) are kept, the future (t > α) is set to zero]
A system is said to be causal if the following equation holds:
PT(Hu) = PT(H PT u),  for all T    (*)
i.e., PT(y) = PT(H PT u).
Equation (*) means that the input on t > T does not affect the output on t ≤ T: the future input does not affect the past and present output.
[Figure: an input u and its output Hu; truncating the input after time T leaves the truncated output PT(y) unchanged]
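Condition (*) can be checked numerically for a concrete causal system. In the sketch below (the impulse response, input, grid, and truncation time T are all illustrative choices), the truncation operator is applied on a discrete time grid:

```python
import numpy as np

# Illustrative sketch: verify P_T(Hu) = P_T(H P_T u) for a causal
# convolution system with impulse response h(t) = e^{-t}.
dt = 1e-2
t = np.arange(0.0, 4.0, dt)
h = np.exp(-t)

def H(u):
    """Causal convolution y(t) = integral over [0, t] of h(t - tau) u(tau)."""
    return np.convolve(h, u)[: len(t)] * dt

def P(u, T):
    """Truncation operator: chop the signal off after time T."""
    return np.where(t <= T, u, 0.0)

u = np.cos(3.0 * t) + 1.0
T = 2.0
lhs = P(H(u), T)
rhs = P(H(P(u, T)), T)
print(np.allclose(lhs, rhs))   # True: future input does not affect past output
```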
5. Relaxedness at time t0
• Definition of relaxedness at time t0
Definition: A system is said to be relaxed at time t0 if and only if the output y[t0, +∞) is solely and uniquely excited by u[t0, +∞).
If a system is known to be relaxed at t0, then its input-output relation can be written as
y[t0, +∞) = Hu[t0, +∞)
Definition: A linear system is said to be relaxed at t0 if and only if
y(t) = ∫_{t0}^{∞} G(t, τ)u(τ) dτ,  t ∈ [t0, +∞)
In particular, if the system is causal, then
y(t) = ∫_{t0}^{t} G(t, τ)u(τ) dτ,  t ∈ [t0, +∞)
Example: Consider the system
ẏc = −yc/T + u,  yc(t0) = 0.
We have
yc(t) = ∫_{t0}^{t} e^{−(t−τ)/T}·u(τ) dτ,  t ∈ [t0, +∞)
It is clear that the system is relaxed at t0 and causal, because yc(t), t ∈ [t0, +∞), can be determined solely and uniquely by u(t), t ∈ [t0, +∞).
Example: Consider the same system but with a nonzero initial condition:
ẏc = −yc/T + u,  yc(t0) ≠ 0.
We have
yc(t) = e^{−(t−t0)/T}·yc(t0) + ∫_{t0}^{t} e^{−(t−τ)/T}·u(τ) dτ,  t ≥ t0
Though the output can be determined uniquely, the system is not relaxed at t0, because yc(t), t ∈ [t0, +∞), cannot be determined solely and uniquely by u(t), t ∈ [t0, +∞).
Example: If a linear system satisfies u(−∞, t0) ≡ 0, then the system is relaxed at t0. As a matter of fact,
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ = ∫_{−∞}^{t0} G(t, τ)u(τ) dτ + ∫_{t0}^{∞} G(t, τ)u(τ) dτ
where the first term vanishes because u(−∞, t0) ≡ 0. Hence
y(t) = ∫_{t0}^{∞} G(t, τ)u(τ) dτ,  t ≥ t0  ⟹  y[t0, +∞) = Hu[t0, +∞)
which is exactly the definition of relaxedness at t0. However, u(−∞, t0) ≡ 0 is only a sufficient condition for relaxedness at t0.
Example: Consider a unit-time-delay system
y(t) = Hu = u(t − 1),  t ∈ (−∞, ∞)
It is easy to verify that H is a linear operator. Let
u(t) ≠ 0,  t ∈ (−∞, t0 − 1)
and
u(t) = 0,  t ∈ [t0 − 1, t0)
Then the system is relaxed at t0. In fact, to determine y[t0, +∞) by u[t0, +∞) solely and uniquely, one only needs to know that u[t0 − 1, t0) is zero.
[Figure: an input u(t) that vanishes on [t0 − 1, t0) and the delayed output y(t) = u(t − 1), which then vanishes on [t0, t0 + 1)]
It is clear that if u[t0 − 1, t0) is zero, then y[t0, +∞) can be determined by u[t0, +∞) solely and uniquely, regardless of u(t), t ∈ (−∞, t0 − 1).
• Criterion
Theorem: The system described by
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ
is relaxed at t0 if and only if u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0.
Proof: Necessity. If a system is relaxed at t0, the output y(t) for t ≥ t0 can be expressed as
y(t) = ∫_{t0}^{∞} G(t, τ)u(τ) dτ,  t ∈ [t0, +∞)
Hence, u[t0, +∞) ≡ 0 implies
y(t) = ∫_{t0}^{∞} G(t, τ)u(τ) dτ = 0,  t ∈ [t0, +∞)
Sufficiency: We shall show that if
u[t0, +∞) ≡ 0  ⟹  y[t0, +∞) ≡ 0
then the system is relaxed at t0. Since
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ = ∫_{−∞}^{t0} G(t, τ)u(τ) dτ + ∫_{t0}^{∞} G(t, τ)u(τ) dτ,  t ∈ [t0, +∞)
the assumption that u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0 gives
∫_{−∞}^{t0} G(t, τ)u(τ) dτ = 0,  t ∈ [t0, +∞)
In words, the net effect of u(−∞, t0) on the output y(t) for t ≥ t0 is zero. Hence,
y(t) = ∫_{−∞}^{t0} G(t, τ)u(τ) dτ + ∫_{t0}^{∞} G(t, τ)u(τ) dτ = ∫_{t0}^{∞} G(t, τ)u(τ) dτ,  t ∈ [t0, +∞)
Q.E.D.
The relaxedness of the system can be determined from
the behavior of the system after t0 without knowing the
previous history of the system. Certainly, it is impractical
or impossible to observe the output from time t0 to infinity;
fortunately, for a large class of systems, it is not
necessary to do so.
The following corollary gives a more applicable criterion
for a system that is relaxed at t0.
Corollary: If the impulse response matrix G(t, τ) can be decomposed into
G(t, τ) = M(t)·N(τ),
and if every element of M(t) is analytic on (−∞, ∞), then the system is relaxed at t0 if, for a fixed positive ε,
u[t0, t0 + ε) ≡ 0  implies  y[t0, t0 + ε) ≡ 0.
Remark: This is an important result: because ε is a finite positive number, the corollary can actually be applied in engineering practice.
Example: Consider a system with input u and output yc described by
yc(t) = ∫_{t0}^{t} G(t, τ)u(τ) dτ = ∫_{t0}^{t} e^{A(t−τ)}·u(τ) dτ,  t ∈ [t0, +∞)
Using the properties of the matrix exponential, G(t, τ) can be decomposed into
G(t, τ) = M(t)·N(τ) = e^{At}·e^{−Aτ},  t ∈ [t0, +∞)
where M(t) = e^{At} is analytic.
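The decomposition rests on the matrix-exponential identity e^{A(t−τ)} = e^{At}·e^{−Aτ}, which holds because At and −Aτ commute. A small numerical check (the matrix A and the times t, τ are illustrative; the eigendecomposition-based expm assumes A is diagonalizable):

```python
import numpy as np

# Illustrative sketch: check e^{A(t-tau)} = e^{At} e^{-A tau}, the identity
# behind G(t, tau) = M(t) N(tau) with M(t) = e^{At}, N(tau) = e^{-A tau}.
def expm(M):
    """Matrix exponential via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example matrix
t, tau = 1.3, 0.4                          # assumed example times

G = expm(A * (t - tau))
MN = expm(A * t) @ expm(-A * tau)
print(np.allclose(G, MN))   # True
```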
Appendix: Analytic Function
Let D be an open interval of the real line R and let f(·) be a function defined on D; that is, to each point t in D, a unique number f(t) is assigned.
A function of a real variable, f(·), is said to be an element of class C^n on D if its nth derivative f^(n)(t) exists and is continuous for all t in D. C^∞ is the class of functions having derivatives of all orders.
Definition: A function of a real variable, f(t), is said to be analytic on D if f(t) is an element of C^∞ and if for each t0 in D there exists a positive real number ε0 such that, for all t in (t0 − ε0, t0 + ε0), f(t) is representable by a Taylor series about the point t0:
f(t) = Σ_{n=0}^{∞} [(t − t0)^n / n!]·f^(n)(t0)
For instance, polynomials, exponential functions, and
sinusoidal functions are analytic in the entire real line.
Theorem: If a function f is analytic on D and if f is
known to be identically zero on an arbitrarily small
nonzero interval in D, then the function f is identically
zero on D.
Proof. If the function is identically zero on an
arbitrarily small nonzero interval, say, (t0, t1), then the
function and its derivatives are all equal to zero on (t0,
t1). By analytic continuation, the function can be
shown to be identically zero.
Q.E.D.
The process of analytic continuation: let D1 ⊂ D be an interval on which f(t) ≡ 0, with f analytic on D. Then
f(t) ≡ 0 for t ∈ D1  ⟹  f^(n)(t) = 0 for t ∈ D1 and every n
so for any t0 ∈ D1 there exists ε0 > 0 such that
f(t) = Σ_{n=0}^{∞} [(t − t0)^n / n!]·f^(n)(t0) = 0,  t ∈ (t0 − ε0, t0 + ε0)
Repeating this argument extends the zero set step by step over all of D.
Corollary 1-1: If the impulse response matrix G(t, τ) can be decomposed as
G(t, τ) = M(t)·N(τ)
and each element of M(t) is analytic on (−∞, +∞), then the system is relaxed at t0 if, for a constant ε > 0, u[t0, t0 + ε) ≡ 0 implies y[t0, t0 + ε) ≡ 0.
Proof: We only need to prove that u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0, i.e. that
y(t) = M(t)·∫_{−∞}^{t0} N(τ)u(τ) dτ = 0,  t ≥ t0
Let u[t0, +∞) ≡ 0. Then,
y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ = ∫_{−∞}^{t0} G(t, τ)u(τ) dτ = M(t)·∫_{−∞}^{t0} N(τ)u(τ) dτ,  t ∈ [t0, +∞)
Moreover,
u[t0, +∞) ≡ 0  ⟹  u[t0, t0 + ε) ≡ 0  ⟹  y[t0, t0 + ε) ≡ 0  ⟹
y(t) = M(t)·∫_{−∞}^{t0} N(τ)u(τ) dτ = 0,  t ∈ [t0, t0 + ε)    (A.2)
Because
∫_{−∞}^{t0} N(τ)u(τ) dτ =: c
is a constant, the assumption that M(t) is analytic implies that y(t) = M(t)·c is analytic on [t0, +∞). By (A.2), y(t) vanishes on [t0, t0 + ε), so by analytic continuation
y(t) = 0 for all t ≥ t0,  i.e.  ∫_{−∞}^{t0} G(t, τ)u(τ) dτ = 0,  t ∈ [t0, +∞)
Hence u[t0, +∞) ≡ 0 ⟹ y[t0, +∞) ≡ 0. This completes the proof.
This is an important result: for any system satisfying Corollary 1-1, relaxedness can be determined by observing the output over any nonzero interval of time. If the output is zero on this interval, then the system is relaxed at that moment.
Example: Consider the system
ẋ = Ax + Bu
where A and B are constant matrices of appropriate dimensions. We have
x(t) = e^{A(t−t0)}·x(t0) + ∫_{t0}^{t} e^{A(t−τ)}·B·u(τ) dτ
If x(t0) = 0, the energy of the system at time t0 is zero and the system is relaxed at t0. Then
x(t) = ∫_{t0}^{t} e^{A(t−τ)}·B·u(τ) dτ
so u[t0, +∞) ≡ 0 implies x[t0, +∞) ≡ 0.
6. Time Invariance
If the characteristics of a system do not change with time,
then the system is said to be time invariant. In order to
define it precisely, we need the concept of a shifting
operator Q.
[Figure: an input u(t) and the corresponding output y(t), together with the same input and output each shifted by α seconds]
Example: Consider the following linear time-invariant system that is relaxed at time t = 0:
ẏ + y = u,  y(0) = 0
• Response to u = 1(t):
y(t) = 1 − e^{−t},  t ≥ 0
• Response to u = 1(t − 1): taking the Laplace transform,
Y(s) = [1/(s + 1)]·(1/s)·e^{−s} = (1/s)·e^{−s} − [1/(s + 1)]·e^{−s}
so that
y(t) = 1 − e^{−(t−1)},  t ≥ 1
the same waveform delayed by one second.
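The pair of step responses above can be reproduced numerically; the sketch below (grid choices are illustrative) convolves the impulse response e^{−t} with the step and the delayed step and confirms the one-second shift:

```python
import numpy as np

# Illustrative sketch: the system y' + y = u (relaxed at 0) has impulse
# response e^{-t}. Time invariance says the response to the delayed step
# 1(t-1) is the response to 1(t), shifted by 1 second.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                       # impulse response g(t) = e^{-t}

def H(u):
    """Causal convolution y(t) = integral over [0, t] of h(t - tau) u(tau)."""
    return np.convolve(h, u)[: len(t)] * dt

shift = int(round(1.0 / dt))         # grid index of the 1-second delay
u_step = np.ones_like(t)             # 1(t)
u_delayed = np.zeros_like(t)
u_delayed[shift:] = 1.0              # 1(t - 1) on the grid

y1, y2 = H(u_step), H(u_delayed)
print(np.allclose(y2[shift:], y1[:-shift]))            # True: a shifted copy
print(np.allclose(y1, 1.0 - np.exp(-t), atol=1e-2))    # True: matches 1 - e^{-t}
```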
• Definition of shifting operator and time-invariant system
The effect of the shifting operator Qα is illustrated in Figure 1-5: the output Qα·u is equal to the input delayed by α seconds.
[Figure 1-5: The effect of a shifting operator on a signal — Qα·u is u delayed by α seconds]
Definition: A relaxed system is said to be time invariant if and only if
Qα y = Qα Hu = H Qα u    (1-18)
for any input u and any real number α. Otherwise, the relaxed system is said to be time varying.
In other words, no matter at what time an input is applied to a relaxed time-invariant system, the waveform of the output remains the same.
[Figure: for a time-invariant system, shifting the input by α seconds shifts the output waveform Qα·y by α seconds]
Example: Prove that for a constant α, the shifting operator Qα is a linear time-invariant system, and compute its impulse response and transfer function.
Proof: The linearity of Qα is obvious. From the definition of time invariance, we only need to show that
Qβ Qα u = Qα Qβ u
for any β ∈ R. In fact,
Qβ Qα u = Qβ u(t − α) = u(t − α − β) = Qα u(t − β) = Qα Qβ u
Hence, the system is a linear time-invariant system, whose impulse response is
Qα δ(t − τ) = δ(t − (τ + α))
and whose transfer function is
L[δ(t − (τ + α))] = e^{−(τ+α)s}.
• Impulse response function of a time-invariant system
If a system is linear and time invariant, the impulse response function reduces to
g(t, τ) = H δ(t − τ) = g(t − τ, 0)
In fact, from the property of time invariance, we have
Qα g(t, τ) = Qα H δ(t − τ) = H Qα δ(t − τ)
The left-hand side is Qα g(t, τ) = g(t − α, τ), while the right-hand side is H δ(t − (τ + α)) = g(t, τ + α); hence
g(t − α, τ) = g(t, τ + α)
which implies that
g(t, τ) = g(t − α, τ − α)  for any t, τ, α.
In particular, by choosing α = τ, we have
g(t, τ) = g(t − τ, 0)  for all t, τ
For simplicity, write
g(t, τ) = g(t − τ)  for all t, τ
Hence, the impulse response g(t, τ) of a relaxed linear time-invariant system depends only on the difference of t and τ.
Example: Consider the following relaxed, linear, causal, time-invariant system:
ġ + g = δ(t − τ),  g(0) = 0,  t ≥ 0
Its impulse response is
g(t, τ) = e^{−(t−τ)} = g(t − τ),  t ≥ τ
• Multivariable systems
For all t and τ, we have
G(t, τ) = G(t − τ, 0) = G(t − τ)
Hence, the input-output pair of a linear, causal, time-invariant system which is relaxed at t0 satisfies
y(t) = ∫_{t0}^{t} G(t − τ)u(τ) dτ    (1-19)
For time-invariant systems, without loss of generality, we can choose t0 = 0. Then,
y(t) = ∫_0^t G(t − τ)u(τ) dτ    (1-20)
7. Transfer-function matrix
• Transfer-function matrix: Taking the Laplace transform of
y(t) = ∫_0^t G(t − τ)u(τ) dτ    (1-20)
gives
Y(s) = L[y(t)] = ∫_0^∞ y(t)·e^{−st} dt
From the Laplace transform of the convolution,
y(s) = G(s)u(s)    (1-22)
where
G(s) = ∫_0^∞ G(t)·e^{−st} dt
is the Laplace transform of the impulse response matrix
and is called the transfer-function matrix.
Here, the elements of a transfer-function matrix are assumed to be rational functions of s.
Example: The impulse response matrix of a system is
G(t) = [ e^{−t}    e^{−t}·cos t ]
       [ sin t     e^{−5t}      ]
Find its transfer-function matrix.
Solution: Taking the Laplace transform of each element of G(t) yields
G(s) = [ 1/(s + 1)     (s + 1)/((s + 1)² + 1) ]
       [ 1/(s² + 1)    1/(s + 5)              ]
• Proper and strictly proper: We assume that the numerator polynomial and denominator polynomial of every element of G(s) have no common factor.
Definition: A rational transfer function G(s) is said to be proper if G(∞) is a finite constant, and strictly proper if G(∞) = 0.
• Zeros and poles of a transfer-function matrix
We can define the zeros and poles of rational transfer
function matrices by extending the definition of
“transfer function” in classical control theory.
Assumption: Let G(s) be a q×p rational transfer-function matrix with rank r. Here, the rank of a transfer-function matrix is defined as the highest order of its non-zero minors.
Example: Consider the following transfer-function matrices:
G1(s) = [ 1/(s + 1)   1/(s + 1) ]      G2(s) = [ 1/(s + 1)   1/(s + 1) ]
        [ 1/(s + 1)   1/(s + 1) ]              [ 1/(s + 1)   2/(s + 1) ]
G3(s) = [ 1/((s + 1)(s + 2))   1/(s + 1) ]
        [ 1/(s + 1)            1/(s + 1) ]
rank(G1) = 1,  rank(G2) = 2,  rank(G3) = 2
Definition 1-5: The characteristic polynomial of a proper rational matrix G(s) is defined to be the least common denominator of all minors of G(s).
Note that in computing the characteristic polynomial of
a rational matrix, every minor of the matrix must be
reduced to an irreducible one; otherwise we will obtain
an erroneous result.
Example: Consider the following transfer-function matrix:
G(s) = [ 1/(s + 1)    0           (s − 1)/((s + 1)(s + 2)) ]
       [ −1/(s − 1)   1/(s + 2)   1/(s + 2)                ]
From Definition 1-5, the least common denominator of the minors of order 1 is (s + 1)(s − 1)(s + 2). The minors of order 2 are
1/((s + 1)(s + 2)),   2/((s + 1)(s + 2)),   −(s − 1)/((s + 1)(s + 2)²)
so the least common denominator of the minors of order 2 is
(s + 1)(s + 2)²
Hence, the characteristic polynomial of G(s) is
(s + 1)(s − 1)(s + 2)²
which has four poles: −1, −2, −2, and +1.
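Definition 1-5 can be applied mechanically. Assuming sympy is available, the sketch below forms every minor of the example G(s), reduces each to lowest terms, and takes the least common denominator:

```python
from functools import reduce
from itertools import combinations
import sympy as sp

# Illustrative sketch: characteristic polynomial of G(s) as the least
# common denominator of all irreducible minors (Definition 1-5).
s = sp.symbols('s')
G = sp.Matrix([[1/(s + 1), 0, (s - 1)/((s + 1)*(s + 2))],
               [-1/(s - 1), 1/(s + 2), 1/(s + 2)]])

minors = list(G)                                 # minors of order 1: the entries
for cols in combinations(range(G.cols), 2):      # all 2x2 minors
    minors.append(G.extract([0, 1], list(cols)).det())

# Reduce each nonzero minor to lowest terms, then take the LCM of denominators.
dens = [sp.fraction(sp.cancel(m))[1] for m in minors if m != 0]
char_poly = sp.factor(reduce(sp.lcm, dens))
print(char_poly)        # factors as (s - 1)(s + 1)(s + 2)^2
```

Note the `cancel` step: as remarked above, every minor must be reduced to an irreducible fraction before forming the common denominator.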
Definition 1-6: Let the denominators of all the minors of order r of G(s) be replaced by the characteristic polynomial. Then the greatest common factor of their numerators is defined as the zero polynomial of G(s).
Example: Consider the same transfer-function matrix
G(s) = [ 1/(s + 1)    0           (s − 1)/((s + 1)(s + 2)) ]
       [ −1/(s − 1)   1/(s + 2)   1/(s + 2)                ]
whose characteristic polynomial is (s + 1)(s − 1)(s + 2)². Writing the minors of order 2 over the characteristic polynomial gives the numerators
(s − 1)(s + 2),   2(s − 1)(s + 2),   −(s − 1)²
whose greatest common divisor is (s − 1); hence the zero polynomial of G(s) is (s − 1), and G(s) has one zero at s = 1.
Example: Consider the following transfer-function matrices:
G1(s) = [ 1/(s + 1)        1/(s + 1) ]      G2(s) = [ 1.0001/(s + 1)   1/(s + 1) ]
        [ 1/(s + 1)        1/(s + 1) ]              [ 1/(s + 1)        1/(s + 1) ]
Find their characteristic polynomials.
The four minors of order 1 of G1(s) are
1/(s + 1),  1/(s + 1),  1/(s + 1),  1/(s + 1)
The minor of order 2 is zero; hence, the characteristic polynomial is s + 1.
The four minors of order 1 of G2(s) are
1.0001/(s + 1),  1/(s + 1),  1/(s + 1),  1/(s + 1)
and the minor of order 2 is
0.0001/(s + 1)²
Hence, its characteristic polynomial is (s + 1)². The characteristic polynomial can thus be very sensitive to small changes in the entries of G(s).
Example: Consider the following transfer-function matrix:
G(s) = [ s/(s + 1)²   1/((s + 1)(s + 2))   1/(s + 3) ]
       [ 1/(s + 1)    1/(s + 2)            1/s       ]
The minors of order 1 are the elements of G(s), and the minors of order 2 are
(s − 1)/((s + 1)²(s + 2)),
1/(s + 1)² − 1/((s + 1)(s + 3)) = 2/((s + 1)²(s + 3)),
(3 − s²)/(s(s + 1)(s + 2)(s + 3))
The least common denominator of all minors of G(s), i.e., the characteristic polynomial of G(s), is
s(s + 1)²(s + 2)(s + 3).
Summary
• Relaxed at −∞:  y = Hu
• Linearity:  y(t) = ∫_{−∞}^{∞} G(t, τ)u(τ) dτ
• Causality:  y(t) = ∫_{−∞}^{t} G(t, τ)u(τ) dτ,  since G(t, τ) = 0 for t < τ
• Relaxed at t0:  y(t) = ∫_{t0}^{t} G(t, τ)u(τ) dτ,  t ≥ t0
• Time invariance:  y(t) = ∫_{t0}^{t} G(t − τ)u(τ) dτ,  t ≥ t0; choosing t0 = 0,  y(t) = ∫_0^t G(t − τ)u(τ) dτ,  t ≥ 0
• Laplace transform:  y(s) = G(s)u(s)
• SISO:  y(t) = ∫_0^t g(t − τ)u(τ) dτ,  t ≥ 0, so y(s) = g(s)u(s), where g(s) is the rational transfer function studied in classical control theory
• MIMO:  y(s) = G(s)u(s); poles and zeros are the extension of those in classical control theory (p. 234)