Lecture Notes on Control Systems/D. Ghose/2012

9.5 The Transfer Function
Consider the n-th order linear, time-invariant dynamical system,

a_0 y + a_1 \frac{dy}{dt} + a_2 \frac{d^2 y}{dt^2} + \cdots + a_n \frac{d^n y}{dt^n} = b_0 u + b_1 \frac{du}{dt} + b_2 \frac{d^2 u}{dt^2} + \cdots + b_m \frac{d^m u}{dt^m}
with zero initial conditions on all derivatives. Taking the Laplace transform on both
sides, we get,
a_0 Y(s) + a_1 s Y(s) + a_2 s^2 Y(s) + \cdots + a_n s^n Y(s) = b_0 U(s) + b_1 s U(s) + b_2 s^2 U(s) + \cdots + b_m s^m U(s)
From which,

\frac{Y(s)}{U(s)} = \frac{b_0 + b_1 s + b_2 s^2 + \cdots + b_m s^m}{a_0 + a_1 s + a_2 s^2 + \cdots + a_n s^n} = \frac{\sum_{k=0}^{m} b_k s^k}{\sum_{j=0}^{n} a_j s^j}

\Rightarrow \quad G(s) = \frac{N(s)}{D(s)}
where G(s) is called the transfer function. It is defined from the differential equation under zero initial conditions.
G(s) is said to be proper if m ≤ n.
G(s) is said to be strictly proper if m < n.
Note: We will understand these terms better a little later. However, in most
cases we will be dealing with strictly proper systems.
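Properness can be checked directly from the polynomial degrees. A minimal sketch (the function name `properness` and the coefficient convention, highest power of s first, are our own choices, not from the notes):

```python
def properness(num, den):
    """Classify G(s) = N(s)/D(s) by relative degree.

    num, den: polynomial coefficients, highest power of s first.
    Leading coefficients are assumed nonzero.
    """
    m = len(num) - 1  # degree of the numerator N(s)
    n = len(den) - 1  # degree of the denominator D(s)
    if m < n:
        return "strictly proper"
    if m == n:
        return "proper"
    return "improper"

print(properness([1, 0], [1, 3, 2]))   # G(s) = s/(s^2 + 3s + 2)
print(properness([1, 0, 0], [1, 1]))   # G(s) = s^2/(s + 1)
```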
Let us do some more algebraic manipulation. Let,

K = \frac{b_m}{a_n}, \qquad \bar{b}_k = \frac{b_k}{b_m} \;\; (k = 0, \ldots, m-1), \qquad \bar{a}_j = \frac{a_j}{a_n} \;\; (j = 0, \ldots, n-1)
Then,

G(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}
     = \frac{b_m \left( s^m + \frac{b_{m-1}}{b_m} s^{m-1} + \cdots + \frac{b_1}{b_m} s + \frac{b_0}{b_m} \right)}{a_n \left( s^n + \frac{a_{n-1}}{a_n} s^{n-1} + \cdots + \frac{a_1}{a_n} s + \frac{a_0}{a_n} \right)}
     = K \, \frac{s^m + \bar{b}_{m-1} s^{m-1} + \cdots + \bar{b}_1 s + \bar{b}_0}{s^n + \bar{a}_{n-1} s^{n-1} + \cdots + \bar{a}_1 s + \bar{a}_0}

In factored form,

G(s) = K \, \frac{(s - z_1)(s - z_2) \cdots (s - z_m)}{(s - p_1)(s - p_2) \cdots (s - p_n)}, \qquad m \le n
1. The numerator roots z1 , · · · , zm are called system zeros.
2. The denominator roots p1 , · · · , pn are called the system poles.
Figure 9.9: A sketch of the poles and zeros along the real line
3. The denominator polynomial is called the characteristic polynomial.
4. The transfer function is the Laplace transform of the impulse response function.^4
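The factored form above can be multiplied back out into polynomial coefficients. A small sketch (the helpers `poly_mul_linear` and `poly_from_roots` are our own) that expands K(s - z_1)...(s - z_m) numerically:

```python
def poly_mul_linear(c, r):
    """Multiply the polynomial with coefficients c (highest power first) by (s - r)."""
    out = [0.0] * (len(c) + 1)
    for i, ci in enumerate(c):
        out[i] += ci          # contribution of ci * s^(...) * s
        out[i + 1] -= r * ci  # contribution of ci * (-r)
    return out

def poly_from_roots(roots, gain=1.0):
    """Expand gain * prod(s - r) into coefficients, highest power first."""
    c = [gain]
    for r in roots:
        c = poly_mul_linear(c, r)
    return c

num = poly_from_roots([0.0])          # one zero at s = 0  -> N(s) = s
den = poly_from_roots([-1.0, -2.0])   # poles at -1, -2    -> D(s) = s^2 + 3s + 2
print(num, den)   # [1.0, 0.0] [1.0, 3.0, 2.0]
```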
An Example. An example of why m > n (that is, a system which is not proper) is a bad idea.
Newton’s law says f = mv̇.
Let us identify the input and the output.
Figure 9.10: Force applied on a mass
Figure 9.11: Case 1
Case 1: Let us say that input is v and output is f . Also, let v be a step input.
Looks like a bad idea!!!
Case 2: Let the input be f and the output be v, and let f be the unit force.
Figure 9.12: Case 2
Looks OK.
Now, look at the transfer function.
^4 To see this, put u(t) = δ(t); then U(s) = 1. So, Y(s) = G(s) and y(t) = \mathcal{L}^{-1}[G(s)].
f(t) = m\dot{v} \;\Rightarrow\; F(s) = msV(s) (assuming zero initial condition)

For Case 1: F(s) = msV(s) \;\Rightarrow\; \frac{F(s)}{V(s)} = ms (which is an improper transfer function)

Note that V(s) = \frac{e^{-st_0}}{s} \;\Rightarrow\; F(s) = m e^{-st_0}. Thus, the delayed step in v must be caused by a delayed impulse in f.
For Case 2: V(s) = \frac{1}{ms} F(s) \;\Rightarrow\; \frac{V(s)}{F(s)} = \frac{1}{ms} (which is strictly proper).

So, F(s) = \frac{e^{-st_0}}{s} \;\Rightarrow\; V(s) = \frac{e^{-st_0}}{ms^2} = \frac{1}{s} \cdot \frac{e^{-st_0}}{ms}. This may be interpreted as the integral of a delayed step function, giving rise to a ramp function.
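The ramp behaviour of Case 2 can be seen numerically. A minimal forward-Euler sketch of m\dot{v} = f with a delayed unit-step force (the values m = 2, t_0 = 1 and the step size are arbitrary choices of ours):

```python
# Forward-Euler integration of m * dv/dt = f(t), v(0) = 0,
# where f(t) is a unit step delayed by t0.
m, t0 = 2.0, 1.0    # example mass and delay (arbitrary)
dt, T = 1e-3, 3.0   # integration step and final time

v, t = 0.0, 0.0
while t < T:
    f = 1.0 if t >= t0 else 0.0   # delayed unit-step force
    v += (f / m) * dt             # dv = (f/m) dt
    t += dt

# For t > t0 the exact velocity is the ramp (t - t0)/m.
print(v)   # close to (3.0 - 1.0)/2.0 = 1.0
```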
9.6 Initial and Final Value Theorems

Initial Value Theorem
Since

\mathcal{L}\left[\frac{df}{dt}\right] = sF(s) - f(0)

and since s \to \infty \;\Rightarrow\; e^{-st} \to 0, we have,

\lim_{s \to \infty} \mathcal{L}\left[\frac{df}{dt}\right] = \lim_{s \to \infty} \int_0^\infty \frac{df}{dt} e^{-st} \, dt = 0

So,

\lim_{s \to \infty} [sF(s) - f(0)] = 0

Therefore,

f(0) = \lim_{s \to \infty} sF(s)
Final Value Theorem

\lim_{s \to 0} \mathcal{L}\left[\frac{df}{dt}\right] = \lim_{s \to 0} \int_0^\infty \frac{df}{dt} e^{-st} \, dt = \int_0^\infty \frac{df}{dt} \, dt = f(\infty) - f(0)

Therefore,

\lim_{s \to 0} [sF(s) - f(0)] = f(\infty) - f(0)

and hence,

\lim_{s \to 0} sF(s) = f(\infty) = \lim_{t \to \infty} f(t)
However, the final value theorem is meaningful only if the following conditions are met.

1. The Laplace transforms of f(t) and \frac{df}{dt} exist.

2. \lim_{t \to \infty} f(t) exists.

3. All poles of F(s) lie in the left half plane, except possibly one at the origin.

4. There are no poles on the imaginary axis.
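As a sanity check on the theorem (our own example, not from the notes): take f(t) = 1 - e^{-t}, whose transform is F(s) = 1/(s(s+1)). The conditions above hold, and both limits agree:

```python
import math

def F(s):
    # Laplace transform of f(t) = 1 - exp(-t)
    return 1.0 / (s * (s + 1.0))

# lim_{s -> 0} s F(s), approximated by evaluating at a small s
final_from_s = 1e-6 * F(1e-6)

# lim_{t -> inf} f(t), approximated by evaluating at a large t
final_from_t = 1.0 - math.exp(-20.0)

print(final_from_s, final_from_t)   # both close to 1
```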
9.7 Partial Fraction Expansion
The reason we introduced the Laplace transform is to devise an easy way to find the
system response.
Figure 9.13: Input-output representation
One way to do this would be by using the convolution integral,

y(t) = \int_0^t g(t - \tau) u(\tau) \, d\tau
But if the Laplace transforms are known for g and u, then
Y (s) = G(s)U (s)
and then y(t) is obtained by finding the inverse Laplace transform.
To achieve this, we need to break up Y (s) into pieces for which the inverse Laplace
transforms are available, and then use the Laplace transform tables to find the inverse
Laplace transform of the complete Y (s).
Let,

G(s) = \frac{N(s)}{D(s)} = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0} = K \, \frac{\prod_{k=1}^{m} (s - z_k)}{\prod_{j=1}^{n} (s - p_j)}
Note that G(s) is not necessarily the transfer function of the system. It could be the
output Y (s) or any other function in the s-domain having a numerator polynomial and
a denominator polynomial of appropriate order.
Case 1: Distinct Poles: p_i \neq p_j for i \neq j
Then, the partial fraction expansion of G(s) is

G(s) = \sum_{i=1}^{n} \frac{k_i}{s - p_i}

where k_i are called the residues. How do we find k_i?
To find k_i: Multiply G(s) by (s - p_i) and let s \to p_i.

k_i = \lim_{s \to p_i} (s - p_i) G(s) = \lim_{s \to p_i} \left[ (s - p_i) \sum_{j=1}^{n} \frac{k_j}{s - p_j} \right]
Example. Let

G(s) = \frac{s}{s^2 + 3s + 2}

It is easy to find the poles of G(s),

G(s) = \frac{s}{(s + 1)(s + 2)}

Since the poles are distinct (p_1 = -1, p_2 = -2), we may expand G(s) into partial fractions as,

G(s) = \frac{k_1}{s + 1} + \frac{k_2}{s + 2}
To find the residues,

k_1 = \lim_{s \to -1} (s + 1) G(s) = \lim_{s \to -1} \frac{s}{s + 2} = \frac{-1}{-1 + 2} = -1

k_2 = \lim_{s \to -2} (s + 2) G(s) = \lim_{s \to -2} \frac{s}{s + 1} = \frac{-2}{-2 + 1} = 2
So,

G(s) = \frac{-1}{s + 1} + \frac{2}{s + 2}
Verify that the above is indeed the same as the original G(s).
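The limit formula for residues can also be checked numerically by evaluating (s - p_i)G(s) a small step away from the pole (the helper `residue` and the offset `eps` are our own devices, not part of the notes):

```python
def G(s):
    # G(s) = s / ((s + 1)(s + 2)) from the example above
    return s / ((s + 1.0) * (s + 2.0))

def residue(G, p, eps=1e-7):
    """Approximate k_i = lim_{s -> p_i} (s - p_i) G(s) by stepping eps off the pole."""
    s = p + eps
    return (s - p) * G(s)

k1 = residue(G, -1.0)   # expect -1
k2 = residue(G, -2.0)   # expect  2
print(k1, k2)
```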
Case 2: A pole of multiple order (repeated pole). Let

G(s) = \frac{N(s)}{(s - p_1)(s - p_2) \cdots (s - p_{i-1})(s - p_i)^l (s - p_{i+1}) \cdots (s - p_n)}
In the above, the pole p_i has order l > 1. All other poles have order 1. Then the partial fraction expansion is given by,

G(s) = \underbrace{\frac{k_1}{s - p_1} + \frac{k_2}{s - p_2} + \cdots + \frac{k_{i-1}}{s - p_{i-1}} + \frac{k_{i+1}}{s - p_{i+1}} + \cdots + \frac{k_n}{s - p_n}}_{\text{simple poles}} + \underbrace{\frac{A_1}{s - p_i} + \frac{A_2}{(s - p_i)^2} + \cdots + \frac{A_l}{(s - p_i)^l}}_{\text{repeated pole}}
where, for the simple poles,

k_j = (s - p_j) G(s) \big|_{s = p_j}

and, for the repeated pole,

A_l = (s - p_i)^l G(s) \big|_{s = p_i}

A_{l-1} = \frac{d}{ds} \left[ (s - p_i)^l G(s) \right] \Big|_{s = p_i}

A_{l-2} = \frac{1}{2!} \frac{d^2}{ds^2} \left[ (s - p_i)^l G(s) \right] \Big|_{s = p_i}

\vdots

A_2 = \frac{1}{(l-2)!} \frac{d^{l-2}}{ds^{l-2}} \left[ (s - p_i)^l G(s) \right] \Big|_{s = p_i}

A_1 = \frac{1}{(l-1)!} \frac{d^{l-1}}{ds^{l-1}} \left[ (s - p_i)^l G(s) \right] \Big|_{s = p_i}
Example. Consider a system which has a response given by the differential equation,
ẍ + 2ẋ + x = 0, x(0) = a, ẋ(0) = b
Taking Laplace transform on both sides,
s^2 X(s) - s x(0) - \dot{x}(0) + 2(s X(s) - x(0)) + X(s) = 0

s^2 X(s) - sa - b + 2(s X(s) - a) + X(s) = 0
From which,
X(s) = \frac{as + (2a + b)}{s^2 + 2s + 1} = \frac{as + (2a + b)}{(s + 1)^2}
The partial fraction expansion is then,
X(s) = \frac{A_1}{s + 1} + \frac{A_2}{(s + 1)^2}
where,

A_2 = (s + 1)^2 X(s) \big|_{s = -1} = as + (2a + b) \big|_{s = -1} = -a + 2a + b = a + b

A_1 = \frac{d}{ds} \left[ (s + 1)^2 X(s) \right] \Big|_{s = -1} = a
So,

X(s) = \frac{a}{s + 1} + \frac{a + b}{(s + 1)^2}
Verify that the above is indeed the same as the original X(s).
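The two residue formulas for the double pole can be checked numerically for particular initial conditions (the values a = 1, b = 2, the helper `H`, and the central-difference derivative are our own choices):

```python
a, b = 1.0, 2.0   # example initial conditions x(0) = a, xdot(0) = b

def X(s):
    return (a * s + (2 * a + b)) / (s + 1.0) ** 2

def H(s):
    # H(s) = (s + 1)^2 X(s); the double pole cancels, leaving a*s + (2a + b)
    return (s + 1.0) ** 2 * X(s)

eps = 1e-6
A2 = H(-1.0 + eps)                                  # ~ H(-1)      = a + b
A1 = (H(-1.0 + eps) - H(-1.0 - eps)) / (2 * eps)    # ~ dH/ds|_{-1} = a
print(A2, A1)   # approximately 3.0 and 1.0
```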
Case 3: Complex poles

G(s) = (transfer function terms having distinct poles)
       + (transfer function terms having repeated poles)
       + (transfer function terms having complex poles)

The complex roots are expressed as,

\frac{\bar{k}_1 s + \bar{k}_2}{(s + a)^2 + b^2}

When b > 0,

(s + a)^2 + b^2 = (s + a)^2 - (jb)^2 = (s + a + jb)(s + a - jb)
Since the complex roots are distinct, one can use the method of distinct roots as
given earlier to obtain the residues. But in that case the residues will also be
complex. On further manipulations we can get back the real numbers. Finally we
can use the following inverse Laplace transforms,
\mathcal{L}^{-1} \left[ \frac{b}{(s + a)^2 + b^2} \right] = e^{-at} \sin bt

\mathcal{L}^{-1} \left[ \frac{s + a}{(s + a)^2 + b^2} \right] = e^{-at} \cos bt
Example.
G(s) = \frac{s + 3}{s^3 + 3s^2 + 6s + 4}
     = \frac{s + 3}{s^3 + s^2 + 2s^2 + 2s + 4s + 4}
     = \frac{s + 3}{s^2(s + 1) + 2s(s + 1) + 4(s + 1)}
     = \frac{s + 3}{(s^2 + 2s + 4)(s + 1)}
     = \frac{s + 3}{(s + 1)[(s + 1)^2 + (\sqrt{3})^2]}
So, the poles are,

p_1 = -1, \qquad p_2 = -1 - j\sqrt{3}, \qquad p_3 = -1 + j\sqrt{3}
Since all the poles are distinct, by partial fraction expansion,

G(s) = \frac{k_1}{s + 1} + \frac{k_2}{s + 1 + j\sqrt{3}} + \frac{k_3}{s + 1 - j\sqrt{3}}
and, the residues are computed as,

k_1 = (s + 1) G(s) \big|_{s = -1} = \frac{s + 3}{(s + 1)^2 + 3} \Big|_{s = -1} = \frac{-1 + 3}{3} = \frac{2}{3}

k_2 = (s + 1 + j\sqrt{3}) G(s) \big|_{s = -1 - j\sqrt{3}} = \frac{s + 3}{(s + 1)(s + 1 - j\sqrt{3})} \Big|_{s = -1 - j\sqrt{3}} = \frac{-1 - j\sqrt{3} + 3}{(-j\sqrt{3})(-j2\sqrt{3})} = \frac{2 - j\sqrt{3}}{-6} = -\frac{1}{3} + j\frac{\sqrt{3}}{6}
Similarly,

k_3 = (s + 1 - j\sqrt{3}) G(s) \big|_{s = -1 + j\sqrt{3}} = \frac{s + 3}{(s + 1)(s + 1 + j\sqrt{3})} \Big|_{s = -1 + j\sqrt{3}} = \frac{-1 + j\sqrt{3} + 3}{(j\sqrt{3})(j2\sqrt{3})} = \frac{2 + j\sqrt{3}}{-6} = -\frac{1}{3} - j\frac{\sqrt{3}}{6}
Substituting these values,

G(s) = \frac{2/3}{s + 1} + \frac{-(1/3) + j(\sqrt{3}/6)}{s + 1 + j\sqrt{3}} + \frac{-(1/3) - j(\sqrt{3}/6)}{s + 1 - j\sqrt{3}}
     = \frac{2/3}{s + 1} + \left( -\frac{2}{3} \right) \frac{s + 1}{(s + 1)^2 + 3} + \frac{1}{\sqrt{3}} \cdot \frac{\sqrt{3}}{(s + 1)^2 + 3}
Applying the inverse transform we get,

g(t) = \frac{2}{3} e^{-t} - \frac{2}{3} e^{-t} \cos \sqrt{3}\,t + \frac{1}{\sqrt{3}} e^{-t} \sin \sqrt{3}\,t
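Before trusting the inverse transform, the complex partial fraction expansion itself can be verified at a few sample s values using Python's built-in complex arithmetic (1j is the imaginary unit; the sample points are arbitrary choices of ours):

```python
# Check G(s) = (s + 3)/((s + 1)[(s + 1)^2 + 3]) against its expansion.
r3 = 3 ** 0.5

def G(s):
    return (s + 3) / ((s + 1) * ((s + 1) ** 2 + 3))

def G_pf(s):
    # residues computed above: k1 = 2/3, and k2, k3 a complex-conjugate pair
    k1 = 2 / 3
    k2 = -1 / 3 + 1j * r3 / 6
    k3 = -1 / 3 - 1j * r3 / 6
    return k1 / (s + 1) + k2 / (s + 1 + 1j * r3) + k3 / (s + 1 - 1j * r3)

for s in (0.0, 0.5, 2.0, 1.0 + 0.7j):
    assert abs(G(s) - G_pf(s)) < 1e-12
print("expansion verified")
```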
An alternative way to solve the same problem is by equating the coefficients. This avoids the complication of using imaginary numbers. Let,

G(s) = \frac{s + 3}{(s + 1)[(s + 1)^2 + 3]} = \frac{k_1}{s + 1} + \frac{k_2 s + k_3}{(s + 1)^2 + 3}
So,

\frac{s + 3}{(s + 1)[(s + 1)^2 + 3]} = \frac{k_1[(s + 1)^2 + 3] + (s + 1)(k_2 s + k_3)}{(s + 1)[(s + 1)^2 + 3]}

Since the denominators are the same, the numerators must also be the same. Thus,

s + 3 = (k_1 + k_2) s^2 + (2k_1 + k_2 + k_3) s + 4k_1 + k_3
Comparing the coefficients of the powers of s,
k1 + k2 = 0
2k1 + k2 + k3 = 1
4k1 + k3 = 3
Solving,

k_1 = \frac{2}{3}, \qquad k_2 = -\frac{2}{3}, \qquad k_3 = \frac{1}{3}
The rest follows in the same way as before.
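The elimination behind the three coefficient equations can be checked with exact rational arithmetic (a sketch; the substitution order is our own choice):

```python
from fractions import Fraction

# Coefficient-matching equations for s^2, s^1, s^0:
#   k1 + k2        = 0
#   2k1 + k2 + k3  = 1
#   4k1 + k3       = 3
# Substitute k2 = -k1 and k3 = 3 - 4k1 into the middle equation:
#   2k1 - k1 + 3 - 4k1 = 1  =>  -3k1 = -2  =>  k1 = 2/3
k1 = Fraction(2, 3)
k2 = -k1             # = -2/3
k3 = 3 - 4 * k1      # =  1/3

# Confirm all three equations hold exactly.
assert k1 + k2 == 0
assert 2 * k1 + k2 + k3 == 1
assert 4 * k1 + k3 == 3
print(k1, k2, k3)   # 2/3 -2/3 1/3
```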
Question. How would you handle repeated complex roots?