Chapter 7
The Laplace Transform
In this chapter we will explore a method for solving linear differential equations
with constant coefficients that is widely used in electrical engineering. It involves
the transformation of an initial-value problem into an algebraic equation, which
is easily solved, and then the inverse transformation back to the solution of the
original problem, thereby bypassing the need to solve for arbitrary constants in
the general solution. The technique is especially well suited for finding generalized solutions to systems driven by impulses or by discontinuous or periodic
forcing functions.
7.1 Definition and Basic Properties
Given a function f(t) defined for t ≥ 0, its Laplace transform F(s) is defined as

    F(s) = \int_0^\infty e^{-st} f(t)\,dt.    (1)
Notice that the variable s appears as a parameter in an improper integral. We
say that the Laplace transform exists if this improper integral converges for
all sufficiently large s. The notational convention used here is common, though
not universal: The uppercase version of the function’s name denotes its Laplace
transform, and s is used for the transform’s independent variable. Before we
go further, let’s illustrate this definition and notation with a couple of simple
examples.
• Example 1 Consider a constant function f(t) = c, t ≥ 0. Its Laplace transform is

    F(s) = \int_0^\infty e^{-st} c\,dt = -\frac{c}{s}\, e^{-st}\Big|_{t=0}^{\infty} = -\frac{c}{s}\Bigl(\lim_{t\to\infty} e^{-st} - e^0\Bigr) = -\frac{c}{s}(0-1) = \frac{c}{s},

provided that s > 0. Note that the improper integral diverges when s ≤ 0.
• Example 2 Let f(t) = e^{at}, t ≥ 0. Its Laplace transform is

    F(s) = \int_0^\infty e^{-st} e^{at}\,dt = \int_0^\infty e^{-(s-a)t}\,dt = -\frac{1}{s-a}\Bigl(\lim_{t\to\infty} e^{-(s-a)t} - e^0\Bigr) = -\frac{1}{s-a}(0-1) = \frac{1}{s-a},

provided that s > a. Note that the improper integral diverges when s ≤ a.
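Transforms like these can be checked with a computer algebra system. The following sketch uses Python's SymPy library (an assumption of this aside, not part of the text) to reproduce the results of Examples 1 and 2 for a sample constant and a sample exponent:

```python
# Check Examples 1 and 2 symbolically; SymPy is assumed to be installed.
from sympy import symbols, exp, laplace_transform, simplify, S

t, s = symbols('t s', positive=True)

# Example 1 with the illustrative constant c = 3: transform should be c/s.
F_const = laplace_transform(S(3), t, s, noconds=True)

# Example 2 with the illustrative exponent a = 2: transform should be 1/(s - a).
F_exp = laplace_transform(exp(2*t), t, s, noconds=True)
```

The `noconds=True` flag drops SymPy's convergence conditions, which here correspond to s > 0 and s > a respectively.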
The Laplace transform F(s) of a function f(t) is the result of applying a
linear operator to f. Denoting this linear operator by L, we can write

    Lf = F,  or  L[f](s) = F(s),

where F is given by (1). Using this notation, the result of Example 2, for instance, is that

    L[e^{at}](s) = \frac{1}{s-a},  s > a.
It is easy to check that the operator L is linear:

    L[cf](s) = \int_0^\infty e^{-st} c f(t)\,dt = c \int_0^\infty e^{-st} f(t)\,dt = c\,F(s),

    L[f+g](s) = \int_0^\infty e^{-st}\bigl(f(t)+g(t)\bigr)\,dt = \int_0^\infty e^{-st} f(t)\,dt + \int_0^\infty e^{-st} g(t)\,dt = F(s) + G(s).

Indeed, the linearity of L is a simple consequence of the linearity properties of
integration.
Our study of Laplace transforms is primarily for the purpose of solving initial-value problems

    y'' + p y' + q y = f,  y(0) = y_0,  y'(0) = v_0,    (2)

where p and q are constants. Later (starting in Section 3) we will consider generalized solutions that can arise when the nonhomogeneous term f is piecewise
continuous on [0, ∞). A function f is piecewise continuous on [0, ∞) if

(i) f has at most finitely many discontinuities in any bounded interval [0, T],
(ii) \lim_{t\to 0^+} f(t) exists, and
(iii) at every number a > 0 both \lim_{t\to a^-} f(t) and \lim_{t\to a^+} f(t) exist.

Note that condition (iii) is automatically true at any point where f is continuous,
and it says that all of f's discontinuities in (0, ∞) are simple "jump" discontinuities. Also, piecewise continuity guarantees integrability on any bounded interval [0, T].
We also need a condition that will guarantee the existence of a function's
Laplace transform. We say that a function f is exponentially bounded on
[0, ∞) if there are constants M and r such that

    |f(t)| ≤ M e^{rt}  for all t ≥ 0.
If f is piecewise continuous, then \int_0^T e^{-st} f(t)\,dt will exist for all T > 0 and all
s. If f is also exponentially bounded, then

    \int_0^T e^{-st} |f(t)|\,dt \le \int_0^T M e^{(r-s)t}\,dt = \frac{M}{r-s}\bigl(e^{(r-s)T} - 1\bigr);

thus \int_0^T e^{-st}|f(t)|\,dt converges as T → ∞, provided that s > r. This means that
the Laplace transform of |f| exists, which implies that the Laplace transform of
f exists.
Throughout the rest of this chapter we will be concerned almost exclusively
with functions defined on [0, ∞). The phrases “piecewise continuous” and “exponentially bounded” should always be understood to mean “piecewise continuous
on [0, ∞)” and “exponentially bounded on [0, ∞).”
We also emphasize that throughout this chapter we will deal only with exponentially bounded functions, since exponential boundedness guarantees the existence of the Laplace transform. Also,
in this section and the next, all of our examples will involve ordinary elementary functions—polynomials, sinusoidal functions, exponential functions, etc.—
that are clearly continuous everywhere. Not until Section 3 will we begin to
consider differential equations with nonhomogeneous terms that are piecewise continuous.
The First Differentiation Theorem
The reason why Laplace transforms are useful in solving differential equations
is embodied in the following theorem, which (together with the corollary that
follows) we will refer to as the first differentiation theorem.
THEOREM 1 Suppose that y is continuous and exponentially bounded and
that y' is piecewise continuous. Then L[y'] exists, and

    L[y'](s) = s\,Y(s) - y(0).
Proof. First, for T > 0 we use integration by parts to compute

    \int_0^T e^{-st} y'(t)\,dt = e^{-st} y(t)\Big|_{t=0}^{T} - \int_0^T (-s)e^{-st} y(t)\,dt
                               = e^{-sT} y(T) - y(0) + s \int_0^T e^{-st} y(t)\,dt.

This computation is valid because of the piecewise continuity of y' and the continuity of y (at 0 and T). Now, since y is assumed to be exponentially bounded,
it follows that, for s sufficiently large,

    \lim_{T\to\infty} e^{-sT} y(T) = 0,  and  Y(s) = \lim_{T\to\infty} \int_0^T e^{-st} y(t)\,dt  exists.
Therefore, L[y'] exists, and

    L[y'](s) = -y(0) + s\,Y(s).

By repeated application of Theorem 1, we arrive at the following corollary.

COROLLARY 1 Suppose that y and y' are continuous and exponentially
bounded and that y'' is piecewise continuous. Then L[y''] exists, and

    L[y''](s) = s^2 Y(s) - s\,y(0) - y'(0).

Moreover, if y, y', ..., y^{(n-1)} are continuous and exponentially bounded, and if
y^{(n)} is piecewise continuous, then L[y^{(n)}] exists, and

    L[y^{(n)}](s) = s^n Y(s) - s^{n-1} y(0) - s^{n-2} y'(0) - \cdots - y^{(n-1)}(0).
Because of the properties stated in Theorem 1 and Corollary 1, the Laplace
transform is particularly well suited for solving linear initial-value problems
with constant coefficients. The following are two examples that indicate the
basic idea.
• Example 3 Consider the first-order initial-value problem

    y' + 2y = 4,  y(0) = 1,

and let Y denote the transform of the solution y. By applying the operator L
to both sides of the differential equation and using the result of Example 1 on
the right side, we find that

    s\,Y(s) - y(0) + 2Y(s) = \frac{4}{s}.

Now we use the given initial value and solve for Y(s):

    (s+2)Y(s) = \frac{4}{s} + 1;  so  Y(s) = \frac{s+4}{s(s+2)}.

Now our job is to find the function y(t) that has this transform. A partial
fraction expansion (consult your calculus book) reveals that

    Y(s) = \frac{2}{s} - \frac{1}{s+2}.

The results of Examples 1 and 2 now tell us that Y(s) is the transform of

    y(t) = 2 - e^{-2t}.
• Example 4 Consider the initial-value problem

    y'' + 3y' + 2y = 0,  y(0) = 1,  y'(0) = 0,
and let Y denote the transform of the solution y. By applying the operator L
to both sides of the differential equation, we find that

    s^2 Y(s) - s\,y(0) - y'(0) + 3\bigl(s\,Y(s) - y(0)\bigr) + 2Y(s) = 0.

Using the given initial values and rearranging, this becomes

    (s^2 + 3s + 2)Y(s) - s - 3 = 0.

Now we solve for Y(s), finding

    Y(s) = \frac{s+3}{s^2+3s+2} = \frac{s+3}{(s+1)(s+2)}.

Now we look for the function y(t) that has this transform. A partial fraction
expansion reveals that

    Y(s) = \frac{2}{s+1} - \frac{1}{s+2}.

From the result of Example 2 above, we see that this is the transform of

    y(t) = 2e^{-t} - e^{-2t}.
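The whole computation of Example 4 can be mirrored in a computer algebra system. The sketch below uses Python's SymPy (an assumption of this aside; the book itself mentions Mathematica and Maple): the transformed equation is solved algebraically for Y(s), and the inverse transform recovers y(t).

```python
# Mirror Example 4: y'' + 3y' + 2y = 0, y(0) = 1, y'(0) = 0.
from sympy import (symbols, Eq, solve, inverse_laplace_transform,
                   factor, simplify, exp)

t, s = symbols('t s', positive=True)
Y = symbols('Y')

# Transform both sides using L[y''] = s^2 Y - s y(0) - y'(0), L[y'] = s Y - y(0).
transformed = Eq(s**2*Y - s*1 - 0 + 3*(s*Y - 1) + 2*Y, 0)

# Step 2 of the procedure: solve the algebraic equation for Y(s).
Ysol = solve(transformed, Y)[0]
print(factor(Ysol))  # (s + 3)/((s + 1)*(s + 2))

# Step 3: invert to recover the solution y(t).
y = inverse_laplace_transform(Ysol, s, t)
```

Declaring t as a positive symbol lets SymPy drop the Heaviside factor it would otherwise attach to the inverse transform.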
The General Procedure
Examples 3 and 4 each illustrate a general procedure for solving initial value
problems with the help of Laplace transforms:
(1) Transform each side of the differential equation, using the given initial
values.
(2) Solve the resulting algebraic equation for the transform Y (s) of the solution y(t).
(3) Find the solution y by identifying the transform Y (s) with known transforms.
The final step amounts to finding the inverse transform of Y (s). An important
question to ask is whether the operator L is actually invertible. In other words,
given a transform Y (s), is there a unique function y(t), t ≥ 0, such that Ly = Y ?
The answer to this is no, unless we require the function y to be continuous.
However, this is not a difficulty in the context of solving differential equations,
since solutions will be continuous.
Let's now consider what happens in general when we apply the Laplace
transform technique to a linear second-order initial-value problem with constant
coefficients. Suppose that the differential equation is

    P(D)y = f,  with  P(D) = D^2 + pD + qI,

and we have initial conditions

    y(0) = y_0,  y'(0) = v_0.
The Laplace transform of the nonhomogeneous term is simply Lf = F. Transforming the left side produces

    L P(D)y = L D^2 y + p\,L D y + q\,L y
            = s^2 Y(s) - s\,y_0 - v_0 + p\bigl(s\,Y(s) - y_0\bigr) + q\,Y(s)
            = P(s)\,Y(s) - (s\,y_0 + v_0 + p\,y_0).

The transformed initial-value problem therefore becomes

    P(s)\,Y(s) - (s\,y_0 + v_0 + p\,y_0) = F(s),

and solving for Y(s) gives us

    Y(s) = \frac{\gamma(s)}{P(s)} + \frac{F(s)}{P(s)},    (3)

where \gamma(s) = s\,y_0 + v_0 + p\,y_0. The term F(s)/P(s) in (3) is the transform of the
rest solution of P(D)y = f, and the term \gamma(s)/P(s) in (3) is the transform of the
solution of the homogeneous equation P(D)y = 0 that satisfies the given initial
conditions. The same statements are true for all linear differential equations
with constant coefficients, regardless of order.
Partial Fraction Expansions
Steps 1 and 2 of the general procedure just described are easy, provided that the
transform of the nonhomogeneous term is known. The challenging part of solving
any problem in this way lies in step 3. Extensive tables of known transforms are
available to help in this task. As in Examples 3 and 4, partial fraction expansion
is often a useful tool for expressing Y(s) in terms of known transforms.
Computer algebra systems such as Mathematica and Maple have the ability
to compute partial fraction expansions as well as all of the “standard” Laplace
transforms and inverse Laplace transforms. (You can even compute partial fraction expansions with the expand() function on the TI-89 and Voyage calculators.) Nevertheless, it is beneficial to have some skill in efficiently computing
partial fraction expansions “by hand.”
A rational function N(s)/P(s) with degree(N) < degree(P) may be expanded into a sum
of rational functions with denominators corresponding to the linear and irreducible
quadratic factors of P(s) and with numerators that are either constant or linear. A
simple linear factor s − a of P(s) results in a single term of the form \frac{A}{s-a}, and
repeated linear factors (s − a)^k can give rise to terms

    \frac{A_1}{s-a} + \frac{A_2}{(s-a)^2} + \cdots + \frac{A_k}{(s-a)^k}.
A simple irreducible quadratic factor s^2 + bs + c of P(s) results in a term of the form
\frac{Bs+C}{s^2+bs+c}, and repeated quadratic factors (s^2 + bs + c)^k can give rise to terms

    \frac{B_1 s + C_1}{s^2+bs+c} + \frac{B_2 s + C_2}{(s^2+bs+c)^2} + \cdots + \frac{B_k s + C_k}{(s^2+bs+c)^k}.

The number of undetermined constants needed in any partial fraction expansion is equal
to the degree of the denominator P(s).
• Example 5 For any numerator N(s) with degree less than 10, there are constants
A_1, A_2, A_3, B, C_1, D_1, C_2, D_2, E, and F such that

    \frac{N(s)}{s^3(s+1)(s^2+1)^2(s^2+s+1)}

is equal to

    \frac{A_1}{s} + \frac{A_2}{s^2} + \frac{A_3}{s^3} + \frac{B}{s+1} + \frac{C_1 s + D_1}{s^2+1} + \frac{C_2 s + D_2}{(s^2+1)^2} + \frac{Es+F}{s^2+s+1}.
The problem of computing the constants in a partial fraction expansion ultimately
consists of solving a system of linear equations. There are various ways of assembling a
suitable system of equations, one of which is to clear fractions, expand products, and
equate coefficients. This approach is interesting from a theoretical point of view but
usually leads to a system of equations that requires considerable effort to solve. On the
other hand, one can usually assemble a much simpler set of equations with the help of
a few simple algebraic manipulations and evaluations at wisely chosen values of s. The
following examples illustrate these techniques.
• Example 6 Let's find the partial fraction expansion of

    \frac{s^2}{(s+1)(s+2)(s+3)},

which is typical of cases involving denominators with distinct linear factors. We seek
constants A_1, A_2, and A_3 such that

    \frac{s^2}{(s+1)(s+2)(s+3)} = \frac{A_1}{s+1} + \frac{A_2}{s+2} + \frac{A_3}{s+3}.

Multiplying both sides by s + 1 gives

    \frac{s^2}{(s+2)(s+3)} = A_1 + A_2\,\frac{s+1}{s+2} + A_3\,\frac{s+1}{s+3},

into which we substitute s = −1, obtaining

    \frac{1}{2} = A_1.

Multiplying both sides instead by s + 2 gives

    \frac{s^2}{(s+1)(s+3)} = A_1\,\frac{s+2}{s+1} + A_2 + A_3\,\frac{s+2}{s+3},
into which we substitute s = −2, obtaining

    -4 = A_2.

Multiplying both sides by s + 3 gives

    \frac{s^2}{(s+1)(s+2)} = A_1\,\frac{s+3}{s+1} + A_2\,\frac{s+3}{s+2} + A_3,

into which we substitute s = −3, obtaining

    \frac{9}{2} = A_3.

Thus our partial fraction expansion is

    \frac{s^2}{(s+1)(s+2)(s+3)} = \frac{1/2}{s+1} + \frac{-4}{s+2} + \frac{9/2}{s+3}.

Now, looking back over the preceding calculations, it is evident that an elegant shortcut
amounts to evaluating the left-hand side at each root of the denominator after removing
the corresponding factor from the denominator; that is,

    A_1 = \frac{s^2}{(s+2)(s+3)}\bigg|_{s=-1} = \frac{1}{2},\quad
    A_2 = \frac{s^2}{(s+1)(s+3)}\bigg|_{s=-2} = -4,\quad
    A_3 = \frac{s^2}{(s+1)(s+2)}\bigg|_{s=-3} = \frac{9}{2}.
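Computer algebra systems compute such expansions directly. As a check on Example 6 (using Python's SymPy, an assumption of this aside), the `apart()` function reproduces the three coefficients found above:

```python
# Verify Example 6: partial fractions of s^2/((s+1)(s+2)(s+3)).
from sympy import symbols, apart, simplify, Rational

s = symbols('s')
expr = s**2 / ((s + 1)*(s + 2)*(s + 3))

# apart() returns the full partial fraction expansion in one call.
expansion = apart(expr, s)

# The coefficients obtained by the cover-up shortcut in the text.
expected = Rational(1, 2)/(s + 1) - 4/(s + 2) + Rational(9, 2)/(s + 3)
```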
• Example 7 Let's find the partial fraction expansion of

    \frac{s^2}{(s+4)^3},

which is typical of cases involving denominators with repeated linear factors. We seek
constants A_1, A_2, and A_3 such that

    \frac{s^2}{(s+4)^3} = \frac{A_1}{s+4} + \frac{A_2}{(s+4)^2} + \frac{A_3}{(s+4)^3}.

We can find A_3 as in the preceding example. Multiplying both sides by (s + 4)^3 gives

    s^2 = A_1(s+4)^2 + A_2(s+4) + A_3,

into which we substitute s = −4, obtaining

    16 = A_3.

To find A_1, we'll multiply both sides by s + 4, obtaining

    \frac{s^2}{(s+4)^2} = A_1 + \frac{A_2}{s+4} + \frac{A_3}{(s+4)^2}.
Now letting s → ∞ gives

    1 = A_1.

So far we have

    \frac{s^2}{(s+4)^3} = \frac{1}{s+4} + \frac{A_2}{(s+4)^2} + \frac{16}{(s+4)^3}.

At this point we can simply evaluate both sides at s = −3 (so that s + 4 = 1) and solve
for A_2:

    9 = 1 + A_2 + 16;  so  A_2 = -8.

Thus our expansion is

    \frac{s^2}{(s+4)^3} = \frac{1}{s+4} - \frac{8}{(s+4)^2} + \frac{16}{(s+4)^3}.
• Example 8 Let's find the partial fraction expansion of

    \frac{s^2}{(s+2)(s^2+s+1)},

in which the factor s^2 + s + 1 is said to be irreducible, since it has no real zeros. We seek
constants A, B, and C such that

    \frac{s^2}{(s+2)(s^2+s+1)} = \frac{A}{s+2} + \frac{Bs+C}{s^2+s+1}.

Multiplying both sides by s + 2 gives

    \frac{s^2}{s^2+s+1} = A + \frac{(Bs+C)(s+2)}{s^2+s+1},

into which we substitute s = −2, obtaining

    \frac{4}{3} = A.

Now, letting s → ∞ in the same equation gives

    1 = A + B;  so  B = 1 - A = -\frac{1}{3}.

With only C left to find, we can simply evaluate at s = 0, which gives

    0 = \frac{A}{2} + C;  so  C = -\frac{A}{2} = -\frac{2}{3}.
Thus our expansion is

    \frac{s^2}{(s+2)(s^2+s+1)} = \frac{1}{3}\left(\frac{4}{s+2} - \frac{s+2}{s^2+s+1}\right).

• Example 9 Let's find the partial fraction expansion of

    \frac{5s^3}{(s^2+4)(s^2+2s+2)},
whose denominator consists of two nonrepeated, irreducible quadratics. We seek constants A, B, C, and D such that

    \frac{5s^3}{(s^2+4)(s^2+2s+2)} = \frac{As+B}{s^2+4} + \frac{Cs+D}{s^2+2s+2}.

Let's first evaluate each side at s = 0, which gives us a simple equation involving only
B and D:

    0 = B/4 + D/2,  i.e.,  B = -2D.

Next, let's multiply both sides by s and let s → ∞. That gives us

    5 = A + C.

Now we'll multiply both sides by s^2 + 4, obtaining

    \frac{5s^3}{s^2+2s+2} = As + B + \frac{(Cs+D)(s^2+4)}{s^2+2s+2},

into which we'll substitute s = 2i:

    \frac{-40i}{-4+4i+2} = 2Ai + B + 0.

Simplifying this gives

    B + 2Ai = 4(-2 + i),

from which we conclude that

    A = 2  and  B = -8.

Now returning to the relationships 5 = A + C and B = −2D, we obtain

    C = 3  and  D = 4.

Finally, the desired expansion is

    \frac{5s^3}{(s^2+4)(s^2+2s+2)} = \frac{2(s-4)}{s^2+4} + \frac{3s+4}{s^2+2s+2}.
• Example 10 Let's find the partial fraction expansion of

    \frac{s^2(1-s)}{(s^2+2s+2)^2},

whose denominator consists of a repeated irreducible quadratic. We seek constants A,
B, C, and D such that

    \frac{s^2(1-s)}{(s^2+2s+2)^2} = \frac{As+B}{s^2+2s+2} + \frac{Cs+D}{(s^2+2s+2)^2}.

As in the previous example, we begin by evaluating both sides at s = 0 and then by
multiplying both sides by s and letting s → ∞, which give

    0 = B/2 + D/4  and  -1 = A + 0.
Next, let's clear fractions:

    s^2(1-s) = (As+B)(s^2+2s+2) + Cs + D.

The roots of s^2 + 2s + 2 are −1 ± i, so we'll evaluate both sides at s = −1 + i:

    (-1+i)^2(2-i) = C(-1+i) + D.

Simplifying this gives

    D - C + Ci = -2 - 4i,

from which we conclude that

    C = -4  and  D = -6.

Now returning to the relationship B/2 + D/4 = 0, we find that B = 3, and since we
have already found that A = −1, our expansion is

    \frac{s^2(1-s)}{(s^2+2s+2)^2} = \frac{-s+3}{s^2+2s+2} - \frac{2(2s+3)}{(s^2+2s+2)^2}.
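Since repeated irreducible quadratics are the easiest place to drop a sign, it is worth confirming Example 10's constants with a computer algebra system. A sketch using Python's SymPy (assumed available):

```python
# Verify Example 10: A = -1, B = 3, C = -4, D = -6 for
# s^2(1-s)/(s^2+2s+2)^2 = (3-s)/(s^2+2s+2) - (4s+6)/(s^2+2s+2)^2.
from sympy import symbols, apart, simplify, I, expand

s = symbols('s')
q = s**2 + 2*s + 2
expr = s**2 * (1 - s) / q**2

expansion = apart(expr, s)
expected = (3 - s)/q - (4*s + 6)/q**2

# The complex-evaluation step: at the root s = -1 + i of q, the
# cleared-fractions identity reduces to s^2(1-s) = Cs + D.
lhs_at_root = expand((-1 + I)**2 * (2 - I))
```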
Problems
In Problems 1 through 4, use the results of Examples 1 and 2 to write down the Laplace
transform of the given function. Express the result as a single quotient.
1. f(t) = 3 - 5e^t
2. f(t) = 2e^{-t} + 3e^{-2t}
3. f(t) = cosh bt
4. f(t) = sinh bt
In Problems 5 through 10, find the Laplace transform Y (s) of the solution of the given
initial-value problem.
5. y' + 3y = e^{-t}, y(0) = 0
6. y'' + y = 0, y(0) = 1, y'(0) = -1
7. y'' + y' + y = 0, y(0) = 0, y'(0) = 1
8. y'' + 3y' + 2y = 0, y(0) = 1, y'(0) = 0
9. y'' - 4y = e^{-t}, y(0) = 1, y'(0) = 1
10. y''' - y = 1, y(0) = y'(0) = y''(0) = 0
In Problems 11 through 14, suppose that f has the Laplace transform F (s), and write
down by inspection the transform Y (s) of the differential equation’s rest solution.
11. y'' + y' + y = f
12. y'' + ω^2 y = f
13. y''' + y = f
14. y''' - y' + 2y = f
15. Show that y = te^{at} satisfies

    y'' - 2ay' + a^2 y = 0,  y(0) = 0,  y'(0) = 1,

and use that fact to find the Laplace transform of te^{at}.
16. (a) Use the fact that y = cos ωt is the solution of

    y'' + ω^2 y = 0,  y(0) = 1,  y'(0) = 0,

to find the Laplace transform of cos ωt.
(b) Similarly find the Laplace transform of sin ωt.
17. (a) Use the fact that y = t sin ωt satisfies

    y'' + ω^2 y = 2ω cos ωt,  y(0) = 0,  y'(0) = 0,

to find the Laplace transform of t sin ωt. Make use of the known transform of
cos ωt from Problem 16.
(b) Similarly find the Laplace transform of t cos ωt.
Assuming that degree(N ) < degree(P ), write down the form of the partial fraction
expansion of each of the rational functions in Problems 18 through 21.
18. \frac{N(s)}{s(s-1)^2}
19. \frac{N(s)}{s^2(s^2+2s+2)}
20. \frac{N(s)}{(s^2+1)(s^2+4)^2}
21. \frac{N(s)}{s^4-1}
Use the method described at the end of Example 6 to find the partial fraction expansion
of the rational functions in Problems 22 through 24.
22. \frac{s}{(s-1)(s-2)}
23. \frac{12}{s(s+1)(s-3)}
24. \frac{2s}{(s-1)(s+1)}
Use the method of Example 7 to help find the partial fraction expansion of the rational
function in each of Problems 25 through 27.
25. \frac{s}{(s-1)^3}
26. \frac{s^2}{(s+2)^3}
27. \frac{s^2}{(s+3)^4}
Use the method of Example 8 to find the partial fraction expansion of the rational
functions in Problems 28 through 30.
28. \frac{2s}{(s+1)(s^2+1)}
29. \frac{5s^2}{(s+2)(s^2+1)}
30. \frac{s^2+4}{(s+1)(s^2+2s+2)}
Use the method of Example 9 to find the partial fraction expansion of the rational functions
in Problems 31 and 32.

31. \frac{8}{(s^2+1)(s^2+4s+5)}
32. \frac{8s^2}{(s^2+4s+13)(s^2+4s+5)}
Use the method of Example 10 to find the partial fraction expansion of the rational functions in Problems 33 through 35.

33. \frac{s^2}{(s^2+4s+5)^2}
34. \frac{s^2}{(s^2+2s+5)^2}
35. \frac{s^3}{(s^2+2s+5)^2}
36. Find general formulas for the partial fraction expansions of

(a) \frac{s^2}{(s^2+a^2)^2}
(b) \frac{s^3}{(s^2+a^2)^2}
Solve Problems 37 through 43 as in Examples 3 and 4.
37. y' + y = 1, y(0) = 0
38. y' - y = e^{-t}, y(0) = 0
39. y'' - y = 0, y(0) = 0, y'(0) = 1
40. y'' - 4y = 8, y(0) = y'(0) = 0
41. y'' + 4y' + 3y = 6, y(0) = y'(0) = 0
42. y'' + 3y' + 2y = e^{-3t}, y(0) = y'(0) = 0
43. y''' + 2y'' - y' - 2y = 6, y(0) = y'(0) = y''(0) = 0
44. Suppose that L[f(t)](s) = F(s). Prove the following "scaling theorems."

(a) L[cf(ct)] = F(s/c)
(b) L[f(t/c)] = c\,F(cs)
45. Use a comparison test to show that if the improper integral in (1) converges for some
s = s0 , then it converges for all s > s0 .
46. (a) Argue that f(t) = e^{t^2} does not have a Laplace transform (i.e., that the defining
improper integral is divergent for all s).
(b) Argue that f(t) = t^t does not have a Laplace transform. (Hint: t^t = e^{t \ln t}.)
47. Let 0 ≤ a < b. Assume that the fundamental theorem of calculus,

    \int_a^b f'(t)\,dt = f(b) - f(a),

is true when f' is continuous on [a, b]. Prove that the same is true if f is continuous
and f' is piecewise continuous on [a, b].
7.2 More Transforms and Further Properties
In this section we will find Laplace transforms for more of the elementary functions that commonly arise as solutions of linear differential equations with constant coefficients. We will also derive additional properties of Laplace transforms
that will help us in finding both transforms and inverse transforms.
First, let's recall the results of Examples 1 and 2 in Section 1:

    L[c](s) = \frac{c}{s},  s > 0;    L[e^{at}](s) = \frac{1}{s-a},  s > a.    (1, 2)
Note that the first of these is actually a consequence of the second (with a = 0)
and linearity. Also, the computation that produced the second of these remains
valid if a is a complex constant, provided that s > Re(a). That is,
    L[e^{(α+iβ)t}](s) = \frac{1}{s-α-iβ} = \frac{s-α+iβ}{(s-α)^2+β^2},  s > α.

So by the Euler-DeMoivre formula we have

    L[e^{αt}(\cos βt + i \sin βt)](s) = \frac{s-α+iβ}{(s-α)^2+β^2},  s > α.
By equating real and imaginary parts we arrive at

    L[e^{αt} \cos βt](s) = \frac{s-α}{(s-α)^2+β^2},  s > α,    (3)

    L[e^{αt} \sin βt](s) = \frac{β}{(s-α)^2+β^2},  s > α,    (4)
and, in particular,

    L[\cos βt] = \frac{s}{s^2+β^2},  s > 0,    L[\sin βt] = \frac{β}{s^2+β^2},  s > 0.    (5, 6)
The First Shift Theorem
Notice that the transforms of e^{αt} cos βt and e^{αt} sin βt in (3) and (4) can be
viewed as shifted transforms of cos βt and sin βt obtained by replacing s with
s − α. It turns out that the same thing happens whenever any function is multiplied by e^{αt}. To see this, suppose that f(t) has a known transform F(s) and
consider the transform of e^{αt} f(t):

    L[e^{αt} f(t)](s) = \int_0^\infty e^{-st} e^{αt} f(t)\,dt = \int_0^\infty e^{-(s-α)t} f(t)\,dt = F(s-α).

Thus we have what we will call the first shift theorem:

    L[e^{αt} f(t)](s) = F(s-α).
• Example 1 Let's find the inverse transform of

    F(s) = \frac{s}{s^2+4s+13}.

Since the denominator is an irreducible quadratic, the key is to complete the
square and then use the first shift theorem. Completing the square shows that
the denominator is (s+2)^2 + 9. Now in order to take advantage of the shift theorem,
we must express the numerator in terms of s + 2 as well:

    F(s) = \frac{s+2-2}{(s+2)^2+9} = \frac{s+2}{(s+2)^2+9} - \frac{2}{(s+2)^2+9}.

The first term on the right is the transform of e^{-2t} cos 3t by (3), and the second
is the transform of -\frac{2}{3} e^{-2t} sin 3t by (4). Therefore,

    f(t) = e^{-2t}\left(\cos 3t - \frac{2}{3}\sin 3t\right).
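The forward direction of such computations is easy to confirm with a computer algebra system. A sketch using Python's SymPy (assumed available; not part of the text) checks the shifted cosine transform (3) and then the full result of Example 1:

```python
# Check the first shift theorem on f(t) = e^{-2t} cos 3t:
# its transform should be the shifted cosine transform (s+2)/((s+2)^2 + 9).
from sympy import symbols, exp, cos, sin, laplace_transform, simplify, Rational

t, s = symbols('t s', positive=True)

F = laplace_transform(exp(-2*t)*cos(3*t), t, s, noconds=True)
target = (s + 2)/((s + 2)**2 + 9)

# The complete inverse-transform result of Example 1 should
# transform back to s/(s^2 + 4s + 13).
f = exp(-2*t)*(cos(3*t) - Rational(2, 3)*sin(3*t))
F_example = laplace_transform(f, t, s, noconds=True)
```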
• Example 2 Suppose that we wish to find the rest solution of the equation

    y'' + 2y' + 10y = 10.

Transforming each side of the equation (and using y(0) = y'(0) = 0) gives us

    (s^2 + 2s + 10)\,Y(s) = \frac{10}{s}.

So the transform of the solution is

    Y(s) = \frac{10}{s(s^2+2s+10)} = \frac{1}{s} - \frac{s+2}{s^2+2s+10}.
The denominator in the second term is (s+1)^2 + 9. To take advantage of this,
we need the numerator in terms of s + 1. So we write

    Y(s) = \frac{1}{s} - \frac{s+1}{(s+1)^2+9} - \frac{1}{(s+1)^2+9}

and observe (by (1), (3), and (4)) that Y(s) is the Laplace transform of

    y = 1 - e^{-t}\cos 3t - \frac{1}{3}e^{-t}\sin 3t.
The Second Differentiation Theorem
The first shift theorem describes the transform of the product e^{αt} f(t) in terms
of the transform of f. A similar result describing the transform of the product
t^n f(t) would also be useful. To that end, let us consider a function f(t) with
known transform F(s):

    \int_0^\infty e^{-st} f(t)\,dt = F(s),  s > s_0.

Differentiating with respect to s, we obtain*

    \int_0^\infty e^{-st}\bigl(-t f(t)\bigr)\,dt = \frac{d}{ds}F(s),  s > s_0.
Since the left side here is the transform of −tf(t), we have found what we shall call
the second differentiation theorem:

    L[tf(t)](s) = -\frac{d}{ds}F(s).

Repeated application of this result easily produces

    L[t^n f(t)](s) = (-1)^n \frac{d^n}{ds^n}F(s),  n = 1, 2, 3, \ldots.
The second differentiation theorem is useful for the derivation of a number
of transforms. For instance, with the constant function f(t) = 1, t ≥ 0, we find
that L[t](s) = -\frac{d}{ds}s^{-1}; that is,

    L[t](s) = \frac{1}{s^2},  s > 0.

Repeating the same procedure produces the transforms of t^2, t^3, and so on:

    L[t^n](s) = \frac{n!}{s^{n+1}},  s > 0.    (7)
* Here we have made the questionable move of differentiating under the integral sign; however,
it can be justified in this situation by a theorem from advanced calculus.
Combining (7) with the first shift theorem yields

    L[t^n e^{at}](s) = \frac{n!}{(s-a)^{n+1}},  s > a.    (8)
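Both (8) and the differentiation rule behind it can be spot-checked symbolically. A sketch using Python's SymPy (assumed available; the particular values n = 2, a = 3 are illustrative choices, not from the text):

```python
# Spot-check L[t^n e^{at}] = n!/(s-a)^{n+1} and the second differentiation theorem.
from sympy import symbols, exp, diff, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

# Formula (8) with n = 2, a = 3: expect 2!/(s-3)^3.
F = laplace_transform(t**2 * exp(3*t), t, s, noconds=True)

# Second differentiation theorem: (-1)^2 d^2/ds^2 [1/(s-3)] should agree.
G = diff(1/(s - 3), s, 2)
```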
The second differentiation theorem also produces

    L[t \cos βt] = \frac{s^2-β^2}{(s^2+β^2)^2},  s > 0,    (9)

    L[t \sin βt] = \frac{2βs}{(s^2+β^2)^2},  s > 0.    (10)

To put (9) into a form more useful for finding inverse transforms, we note that

    \frac{1}{s^2+β^2} - \frac{s^2-β^2}{(s^2+β^2)^2} = \frac{2β^2}{(s^2+β^2)^2}.

Now we combine (6) with (9) to obtain

    L\left[\frac{\sin βt}{β} - t\cos βt\right] = \frac{2β^2}{(s^2+β^2)^2},  s > 0.    (11)
The following two examples illustrate the use of (8) and (11) in the context of
solving initial-value problems.
• Example 3 Consider the initial-value problem

    y'' + 4y' + 4y = 0,  y(0) = 1,  y'(0) = 1.

Transforming each side of the equation gives us

    s^2 Y(s) - s - 1 + 4\bigl(s\,Y(s) - 1\bigr) + 4\,Y(s) = 0,

which simplifies to

    (s^2 + 4s + 4)\,Y(s) = s + 5.

So the transform of the solution is

    Y(s) = \frac{s+5}{s^2+4s+4} = \frac{s+2+3}{(s+2)^2} = \frac{1}{s+2} + \frac{3}{(s+2)^2}.

Therefore, because of (2) and (8), we see that Y is the transform of

    y = e^{-2t} + 3te^{-2t} = (1 + 3t)e^{-2t},

which is easily verified as the solution of the initial-value problem.
• Example 4 Consider the initial-value problem

    y'' + 4y = \sin 2t,  y(0) = 1,  y'(0) = 0.
Transforming each side of the equation gives us

    s^2 Y(s) - s + 4\,Y(s) = \frac{2}{s^2+4}.

So the transform of the solution is

    Y(s) = \frac{\dfrac{2}{s^2+4} + s}{s^2+4} = \frac{2}{(s^2+4)^2} + \frac{s}{s^2+4}.

By (5) and (11), we see that

    y = \frac{1}{4}\left(\frac{1}{2}\sin 2t - t\cos 2t\right) + \cos 2t.
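As always, the answer can be verified by substituting it back into the differential equation. A sketch using Python's SymPy (assumed available) checks both the equation and the initial conditions of Example 4:

```python
# Verify Example 4: y = (1/4)((1/2) sin 2t - t cos 2t) + cos 2t solves
# y'' + 4y = sin 2t with y(0) = 1, y'(0) = 0.
from sympy import symbols, sin, cos, diff, simplify, Rational

t = symbols('t')
y = Rational(1, 4)*(Rational(1, 2)*sin(2*t) - t*cos(2*t)) + cos(2*t)

# Substitute into the differential equation ...
residual = simplify(diff(y, t, 2) + 4*y - sin(2*t))

# ... and evaluate the initial conditions.
y0 = y.subs(t, 0)
v0 = diff(y, t).subs(t, 0)
```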
Problems
In Problems 1 through 3, find the inverse transform (a) by partial fraction expansion and
(b) by completing the square and using the first shift theorem with the basic transforms
of cosh and sinh: L[\sinh bt] = \frac{b}{s^2-b^2} and L[\cosh bt] = \frac{s}{s^2-b^2}.

1. \frac{2}{s^2+2s}
2. \frac{2s}{s^2+4s+3}
3. \frac{4s}{s^2-6s+5}
Find the inverse transform of each expression in Problems 4 through 15.

4. \frac{s}{s^2+4s+5}
5. \frac{s+1}{s^2+6s+13}
6. \frac{s+8}{s^2+10s+34}
7. \frac{2s}{s^2+2s+5}
8. \frac{s}{(s+2)^2}
9. \frac{2s^2}{(s+1)^3}
10. \frac{s}{(s+3)^3}
11. \frac{s+1}{(s-1)^3}
12. \frac{12s}{(s^2+4)^2}
13. \frac{16}{(s^2+4)^2}
14. \frac{4s^2}{(s^2+4)^2}
15. \frac{s^3}{(s^2+4)^2}
Use formulas (10) and (11) and the first shift theorem to find the inverse transform of
each expression in Problems 16 through 18.

16. \frac{16}{(s^2+2s+5)^2}
17. \frac{s}{(s^2+2s+5)^2}
18. \frac{2s+6}{(s^2+2s+2)^2}
In Problems 19 through 21, use the Laplace transform to solve the initial-value problem.
19. y'' + 4y' + 3y = 0, y(0) = 1, y'(0) = -4
20. y'' + 6y' + 13y = 0, y(0) = 0, y'(0) = 2
21. y'' + 4y' + 4y = 0, y(0) = 0, y'(0) = 1
In Problems 22 through 27, use the Laplace transform to find the rest solution of the
differential equation.
22. y'' + 4y' + 3y = 4e^{-t}
23. y'' + 6y' + 13y = 169t
24. y'' + 4y' + 4y = e^{-2t}
25. y'' + 4y = 6 sin t
26. y'' + 9y = 9 sin 3t
27. y'' + y = cos t
28. The purpose of this problem is to derive the Laplace transform of \sqrt{t}.

(a) Use the definition of the Laplace transform and integration by parts to show that

    L[\sqrt{t}\,](s) = \frac{1}{2s}\int_0^\infty \frac{e^{-st}}{\sqrt{t}}\,dt.

(b) Use the substitution st = x^2 to arrive at

    L[\sqrt{t}\,](s) = \frac{k}{s^{3/2}}  where  k = \int_0^\infty e^{-x^2}\,dx.

(c) After observing that

    k^2 = \left(\int_0^\infty e^{-x^2}\,dx\right)\left(\int_0^\infty e^{-y^2}\,dy\right) = \int_0^\infty\!\int_0^\infty e^{-(x^2+y^2)}\,dx\,dy,

use polar coordinates to show that k^2 = π/4. Finally conclude that

    L[\sqrt{t}\,](s) = \frac{\sqrt{π}}{2s^{3/2}}.
29. Let f be continuous on [0, ∞) with Laplace transform F(s). Apply the first differentiation theorem to the antiderivative \int_0^t f(τ)\,dτ to obtain the first integration
theorem:

    L\left[\int_0^t f(u)\,du\right](s) = \frac{F(s)}{s}.
30. In this problem we will derive the second integration theorem:

    L\left[\frac{1}{t}f(t)\right](s) = \int_s^\infty F(σ)\,dσ.

(a) Suppose that f has the transform F(s) and that \frac{1}{t}f(t) has the transform Φ(s).
Apply the second differentiation theorem to show that Φ'(s) = −F(s). Hence Φ
is some antiderivative of −F.

(b) By the result of Problem 36, we know that Φ(s) → 0 as s → ∞. Show that
\int_s^\infty F(σ)\,dσ has that property, as well as being an antiderivative of −F(s), and
therefore must be the desired transform Φ(s).
In Problems 31 through 34, use the second integration theorem (see Problem 30) to find
the Laplace transform of the given function.

31. \frac{1-e^{at}}{t}
32. \frac{1-\cos βt}{t}
33. \frac{\sin βt}{t}
34. \frac{\sinh bt}{t}
35. Use the second integration theorem (see Problem 30) and the result of Problem 28
to find the Laplace transform of f(t) = 1/\sqrt{t}.
36. Asymptotic Behavior of the Laplace Transform. Suppose that f(t) is bounded on
every bounded interval [a, b], a ≥ 0. Suppose further that the Laplace transform
L[f](s) = F(s) exists for all s > s_0, where s_0 ≥ 0. By the following sequence of
steps, prove that

    \lim_{s\to\infty} F(s) = 0.
(a) Argue that e^{-s_1 t} f(t) → 0 as t → ∞ for any s_1 > s_0.

(b) Argue that because of (a) there is a time T_1 > 0 such that |e^{-s_1 t} f(t)| < 1 for all
t ≥ T_1, and consequently |f(t)| < e^{s_1 t} for all t ≥ T_1.

(c) Use the result of (b) to obtain

    |F(s)| < \int_0^{T_1} e^{-st}|f(t)|\,dt + \int_{T_1}^\infty e^{-st} e^{s_1 t}\,dt.

(d) Use the result of (c) and the fact that f is bounded on [0, T_1] to obtain

    |F(s)| < \frac{M}{s} + \frac{1}{s-s_1}  for all s > s_1,

where M is a bound on |f(t)| for all 0 ≤ t ≤ T_1.

(e) Finally, conclude that F(s) → 0 as s → ∞. Also observe the stronger fact that
sF(s) is bounded for large s.
37. Combine the result of Problem 36 with the first differentiation theorem to show that
if f and f' have Laplace transforms and f is continuous, then

    \lim_{s\to\infty} s\,F(s) = f(0).
38. The error function erf and the complementary error function erfc are

    \operatorname{erf}(t) = \frac{2}{\sqrt{π}}\int_0^t e^{-x^2}\,dx  and  \operatorname{erfc}(t) = 1 - \operatorname{erf}(t) = \frac{2}{\sqrt{π}}\int_t^\infty e^{-x^2}\,dx.

(a) Show that y = e^{-t^2/4} satisfies the initial-value problem

    y' + \frac{1}{2}t\,y = 0,  y(0) = 1.

Then show that the transform Y of y satisfies

    s\,Y(s) - 1 - \frac{1}{2}Y'(s) = 0.

(b) Use the appropriate integrating factor (see Section 2.1) to solve for Y(s) subject
to the condition Y(s) → 0 as s → ∞ (cf. Problem 36). Conclude that

    L[e^{-t^2/4}](s) = \sqrt{π}\, e^{s^2} \operatorname{erfc}(s).

(c) Use the first integration theorem (see Problem 29) and the result of part (b) to
show that

    L[\operatorname{erf}(t/2)](s) = \frac{1}{s} e^{s^2} \operatorname{erfc}(s).

(d) Finally, use the "scaling theorem" L[f(t/k)](s) = k\,F(ks) (see Problem 44 in
Section 7.1) to show that, for k > 0,

    L[e^{-t^2/(4k^2)}](s) = k\sqrt{π}\, e^{k^2 s^2} \operatorname{erfc}(ks)  and  L\left[\operatorname{erf}\left(\frac{t}{2k}\right)\right](s) = \frac{1}{s} e^{k^2 s^2} \operatorname{erfc}(ks).
In Problems 39 and 40, use the first integration theorem (see Problem 29) to help in
finding the solution of the given integro-differential initial-value problem.

39. y' + \int_0^t y(τ)\,dτ = t,  y(0) = 0
40. y'' + \int_0^t y(τ)\,dτ = 0,  y(0) = 3,  y'(0) = 0
7.3 Heaviside Functions and Piecewise-Defined Inputs
The Laplace transform is particularly useful for solving initial-value problems in
which the differential equation has a piecewise-defined nonhomogeneous term.
In order to express such piecewise-defined functions in a convenient manner, we
will use the Heaviside unit step function, which is defined by
    h(t) = { 0, if t < 0;  1, if t ≥ 0 }.    (1)
This function may be thought of as a “switch” that “turns on” at time t = 0.
A switch that turns on at time t = c may be obtained by a simple shift:
    h(t − c) = { 0, if t < c;  1, if t ≥ c }.    (2)
Figures 1a–c show the graphs of h(t), h(t − 3), and h(t − 3) − h(t − 5).
Figure 1a        Figure 1b        Figure 1c
For further illustration, Figure 2 shows the graphs of
(a) h(t) + h(t − 2),
(b) h(t) − 2h(t − 1) + h(t − 2),
(c) h(t) + h(t − 1) + h(t − 2) − 3h(t − 3).
Figure 2a        Figure 2b        Figure 2c
• Example 1 Let’s look closely at the function
f (t) = t h(t) + (2 − 2t)h(t − 1) + (t − 2)h(t − 2)
with the goal of expressing it in “piecewise form.” When t < 0, all three of
the Heaviside functions are “off,” and so f (t) = 0. When 0 ≤ t < 1, only
the Heaviside function h(t) in the first term is “on,” and so f (t) = t. When
1 ≤ t < 2, both Heaviside functions in the first two terms are “on,” and so
f(t) = t + (2 − 2t) = 2 − t. When t ≥ 2, all three Heaviside functions are "on," and so f(t) = t + (2 − 2t) + (t − 2) = 0. Thus, f has the piecewise form
    f(t) = { 0, t < 0;  t, 0 ≤ t < 1;  2 − t, 1 ≤ t < 2;  0, 2 ≤ t }.
Its graph is shown on the right.
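Decompositions like the one in Example 1 are easy to sanity-check numerically. Below is a minimal sketch, assuming NumPy is available; `np.heaviside(t, 1)` uses the convention h(0) = 1, matching definition (1).

```python
import numpy as np

def h(t):
    # Heaviside step with h(0) = 1, matching the text's definition (1)
    return np.heaviside(t, 1.0)

def f_heaviside(t):
    # f expressed with Heaviside "switches", as in Example 1
    return t * h(t) + (2 - 2*t) * h(t - 1) + (t - 2) * h(t - 2)

def f_piecewise(t):
    # the same function in piecewise form
    return np.piecewise(t,
        [t < 0, (0 <= t) & (t < 1), (1 <= t) & (t < 2), t >= 2],
        [0.0, lambda t: t, lambda t: 2 - t, 0.0])

t = np.linspace(-1.0, 3.0, 401)
assert np.allclose(f_heaviside(t), f_piecewise(t))
```

The same comparison works for any of the Heaviside decompositions in this section.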
A switch that is "on" during a given time interval a ≤ t < b is especially useful and can be constructed as
    h(t − a) − h(t − b) = { 0, if t < a;  1, if a ≤ t < b;  0, if b ≤ t }.    (3)
The product of any function g(t) with h(t−a)−h(t−b) is a function that agrees
with g(t) when a ≤ t < b and is zero elsewhere. For instance, Figure 3 shows
the graphs of
(a) t (h(t) − h(t − 1)),
(b) (sin t) (h(t) − h(t − π)),
(c) (2 − t)(h(t − 1) − h(t − 3)).
Figure 3a        Figure 3b        Figure 3c
When a piecewise-defined function consists of multiple nonzero pieces, it may
be viewed as a sum of functions of this type. This provides a convenient way of
expressing the function in terms of Heaviside functions, as is illustrated in the
next example.
• Example 2 Consider the function
    f(t) = { 0, t < 0;  t^2, 0 ≤ t < 1;  (t − 1)^2, 1 ≤ t < 2;  0, 2 ≤ t }.
We can express f in terms of Heaviside functions as follows. We begin by thinking of f as the sum of simpler functions f_1 and f_2, where
    f_1(t) = { t^2, 0 ≤ t < 1;  0, elsewhere }  and  f_2(t) = { (t − 1)^2, 1 ≤ t < 2;  0, elsewhere }.
These functions can be expressed as
    f_1(t) = t^2 (h(t) − h(t − 1))  and  f_2(t) = (t − 1)^2 (h(t − 1) − h(t − 2)).
So we have
    f(t) = t^2 (h(t) − h(t − 1)) + (t − 1)^2 (h(t − 1) − h(t − 2)),
which we rearrange and write as
    f(t) = t^2 h(t) + ((t − 1)^2 − t^2) h(t − 1) − (t − 1)^2 h(t − 2).
Another way of obtaining an expression of a piecewise-defined function in
terms of Heaviside functions is often convenient as well. Suppose that f is given
in piecewise form as
 0,
t < 0;



f
(t),
0 ≤ t < t1 ;
0


f1 (t), t1 ≤ t < t2 ;
f (t) = f (t), t ≤ t < t ;
2
2
3



.
.

..
 ..
The representation of f in terms of Heaviside functions is
f (t) = f0 (t)h(t) + (f1 (t) − f0 (t))h(t − t1 ) + (f2 (t) − f1 (t))h(t − t2 ) + · · · ,
the form of the typical term being (fnew − fprev )h(t − c). This is illustrated in
the next example.
• Example 3 The function
    f(t) = { 0, t < 0;  t, 0 ≤ t < 1;  1, 1 ≤ t < 2;  3 − t, 2 ≤ t < 3;  0, 3 ≤ t }
may be written in terms of Heaviside functions as
f (t) = t h(t) + (1 − t) h(t − 1) + (3 − t − 1) h(t − 2) + (0 − (3 − t)) h(t − 3),
which after simplification becomes
f (t) = t h(t) + (1 − t) h(t − 1) + (2 − t) h(t − 2) + (t − 3) h(t − 3).
The Second Shift Theorem
Given a function f (t) with Laplace transform F (s), consider the function
h(t − c)f (t − c)
where c ≥ 0. The graph of this function is the same as that of f shifted to
the right by c units and zeroed out for t < c. This is illustrated in Figure 5.
Figure 5
The Laplace transform of h(t − c)f(t − c) is computed as follows:
    L[h(t − c)f(t − c)](s) = ∫_0^∞ e^{−st} h(t − c)f(t − c) dt
                           = ∫_c^∞ e^{−st} f(t − c) dt = ∫_0^∞ e^{−s(t+c)} f(t) dt = e^{−cs} F(s).
This computation gives us the second shift theorem:
    L[h(t − c)f(t − c)](s) = e^{−cs} F(s).
As a simple consequence (with f(t − c) = 1), we obtain the transform of any simple Heaviside function:
    L[h(t − c)](s) = e^{−cs}/s.    (4)
Note that with c = 0 this gives L[h(t)](s) = 1/s, which we have already seen as the transform of the function f(t) = 1, t ≥ 0.
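Equation (4) can be spot-checked by numerical integration of the defining integral; a small sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

def laplace_of_shifted_step(s, c):
    # \int_0^infty e^{-st} h(t - c) dt reduces to \int_c^infty e^{-st} dt
    value, _ = quad(lambda t: np.exp(-s * t), c, np.inf)
    return value

for s in (0.5, 1.0, 3.0):
    for c in (0.0, 1.0, 2.5):
        assert abs(laplace_of_shifted_step(s, c) - np.exp(-c * s) / s) < 1e-8
```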
• Example 4 Suppose that we want to find the inverse transform of
    F(s) = (1 − 2e^{−s} + e^{−2s})/s.
By splitting this into three terms,
    F(s) = 1/s − 2e^{−s}/s + e^{−2s}/s,
we recognize that
    f(t) = h(t) − 2h(t − 1) + h(t − 2).
This is the function shown in Figure 2b.
• Example 5 Suppose that we want to find the inverse transform of
    F(s) = (1 − 2e^{−s} + e^{−2s})/s^2.
After splitting this into three terms and recalling that 1/s^2 = L[t](s), the second shift theorem tells us that
    f(t) = t h(t) − 2(t − 1)h(t − 1) + (t − 2)h(t − 2).
This is the function in Example 1.
• Example 6 Let's find the inverse transform of
    F(s) = π(1 + e^{−s})/(s^2 + π^2).
After splitting this into two terms and recalling that π/(s^2 + π^2) = L[sin πt](s), the second shift theorem tells us that
    f(t) = h(t) sin πt + h(t − 1) sin(π(t − 1)).
Since sin(π(t − 1)) = −sin πt, this becomes
    f(t) = (h(t) − h(t − 1)) sin πt = { sin πt, 0 ≤ t < 1;  0, 1 ≤ t }.
As we saw in Examples 4–6, the second shift theorem makes it very straightforward to write down the inverse transform of the product of e^{−cs} and a known transform F(s). However, a slightly different form of the second shift theorem is often more convenient for the purpose of computing transforms. If we let g(t) = f(t − c), then f(t) = g(t + c) and so F(s) = L[g(t + c)](s). Thus the second shift theorem is equivalent to
    L[h(t − c)g(t)](s) = e^{−cs} L[g(t + c)](s).    (5)
• Example 7 Consider the function
f (t) = t h(t) − t h(t − 1),
whose graph is shown in Figure 3a. To find the transform of f we apply the
second shift theorem in the form of (5) as follows:
    F(s) = L[t h(t)](s) − L[t h(t − 1)](s)
         = L[t](s) − e^{−s} L[t + 1](s)
         = 1/s^2 − e^{−s}(1/s^2 + 1/s) = (1 − e^{−s}(1 + s))/s^2.
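The closed form obtained in Example 7 can be checked against a direct numerical evaluation of the defining integral (the function equals t on [0, 1) and is zero afterward); a sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(s):
    # transform of f(t) = t (h(t) - h(t - 1)): integrate e^{-st} t over [0, 1]
    value, _ = quad(lambda t: np.exp(-s * t) * t, 0.0, 1.0)
    return value

def F_closed(s):
    return (1 - np.exp(-s) * (1 + s)) / s**2

for s in (0.5, 1.0, 2.0, 5.0):
    assert abs(F_numeric(s) - F_closed(s)) < 1e-10
```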
• Example 8 Let's find the transform of f(t) = e^t h(t) + (e^{−t} − e^t)h(t − 1). Using (5), we have
    F(s) = L[e^t h(t)](s) + L[(e^{−t} − e^t)h(t − 1)](s)
         = 1/(s − 1) + e^{−s} L[e^{−(t+1)} − e^{t+1}](s)
         = 1/(s − 1) + e^{−s}( e^{−1}/(s + 1) − e/(s − 1) ) = (1 − e^{1−s})/(s − 1) + e^{−1−s}/(s + 1).
Initial-Value Problems
Our main purpose here, of course, is to use all of this to help us solve initial-value problems with piecewise-defined inputs. We will illustrate with the following example.
• Example 9 Consider an undamped spring-mass system with a natural frequency of ω/2π = 1 cycle per unit time, so that ω = 2π. Suppose that the mass is initially at rest and the platform to which the spring is attached undergoes motion described by the function seen in Figure 2b. Thus we wish to find the rest solution of the equation
    y'' + 4π^2 y = 4π^2 (h(t) − 2h(t − 1) + h(t − 2)).
Formally taking Laplace transforms, we find
    Y(s) = (1 − 2e^{−s} + e^{−2s})/s · 4π^2/(s^2 + 4π^2).
A partial fraction expansion shows that
    Y(s) = (1 − 2e^{−s} + e^{−2s})(1/s − s/(s^2 + 4π^2)).
The second factor is the transform of 1 − cos 2πt; therefore
    y(t) = (1 − cos 2πt)h(t) − 2(1 − cos 2π(t − 1))h(t − 1) + (1 − cos 2π(t − 2))h(t − 2)
by the second shift theorem. Since cos 2πt = cos 2π(t − k) for all integers k and all t, the solution has the simple piecewise form
    y(t) = { 0, t < 0;  1 − cos 2πt, 0 ≤ t < 1;  cos 2πt − 1, 1 ≤ t < 2;  0, 2 ≤ t }.
The graphs of this solution and the forcing function are seen in Figure 6.
Figure 6
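The generalized solution of Example 9 can be cross-checked by integrating the ODE numerically, one smooth piece of the forcing at a time. A sketch assuming SciPy; the grid sizes and tolerances are illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

w2 = 4 * np.pi**2
# forcing f = h(t) - 2h(t-1) + h(t-2): constant on each interval below
pieces = [(0.0, 1.0, 1.0), (1.0, 2.0, -1.0), (2.0, 3.0, 0.0)]

state = [0.0, 0.0]            # rest initial conditions y(0) = y'(0) = 0
ts, ys = [], []
for t0, t1, fval in pieces:
    rhs = lambda t, u, fv=fval: [u[1], w2 * (fv - u[0])]
    sol = solve_ivp(rhs, (t0, t1), state, rtol=1e-10, atol=1e-12,
                    dense_output=True)
    state = sol.y[:, -1]      # hand the endpoint state to the next piece
    grid = np.linspace(t0, t1, 101)
    ts.append(grid)
    ys.append(sol.sol(grid)[0])

t = np.concatenate(ts)
y = np.concatenate(ys)

def y_exact(t):
    h = lambda t: np.heaviside(t, 1.0)
    return ((1 - np.cos(2*np.pi*t)) * h(t)
            - 2*(1 - np.cos(2*np.pi*(t - 1))) * h(t - 1)
            + (1 - np.cos(2*np.pi*(t - 2))) * h(t - 2))

assert np.max(np.abs(y - y_exact(t))) < 1e-6
```

Splitting at the discontinuities of the forcing keeps the integrator on smooth problems, which is why the agreement is so tight.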
We point out that in Example 9 we applied the first differentiation theorem to transform each side of the differential equation without taking care to check the hypotheses of that theorem. However, it is straightforward to verify a posteriori that the computation was justified. It is clear that y is continuous and exponentially bounded, and easy computations reveal that y' is also continuous and exponentially bounded and that y'' is piecewise continuous.
The solution in Example 9 is not a solution in the usual sense, because y'' fails to exist at the points where f is discontinuous. Such a solution is called a generalized solution.
Definition. Let p and q be constants, and let f be piecewise continuous on [0, ∞). A generalized solution of
    y'' + p y' + q y = f
is a continuous function y on [0, ∞) such that (i) y' is continuous and (ii) the differential equation is satisfied at each point where f is continuous.
The following theorem on generalized solutions justifies the computation in Example 9 and similar problems. We omit the proof.
THEOREM 1 Let p and q be constants, and let f be piecewise continuous on [0, ∞). Then for any numbers y_0 and v_0, the initial-value problem
    y'' + p y' + q y = f,  y(0) = y_0,  y'(0) = v_0,
has a unique generalized solution y on [0, ∞). Moreover, y'' is piecewise continuous, and if f is exponentially bounded, then so are y, y', and y''.
Problems
In Problems 1–3, write the given function f in piecewise form. Then sketch the graph.
1. f (t) = h(t) + h(t − 1) + h(t − 2) − h(t − 3) − h(t − 4)
2. f(t) = 4t(1 − t)(h(t) − h(t − 1))
3. f (t) = t h(t) + (1 − t)h(t − 1) − h(t − 2)
7.3. Heaviside Functions and Piecewise-Defined Inputs
211
In Problems 4–6, graph the given function without writing its piecewise form.
4. f (t) = 3h(t) − h(t − 1) − h(t − 2) − h(t − 3)
5. f (t) = h(t) + (1 − t)h(t − 1) + (t − 2)h(t − 2)
6. f (t) = h(t) − 2h(t − 1) + h(t − 2) + h(t − 3) − 2h(t − 4) + h(t − 5)
In Problems 7–9, express the given function in terms of Heaviside functions.
7. [graph]        8. [graph]        9. [graph]
Let ⌊x⌋ denote the "floor" (or greatest integer) function; that is,
    ⌊x⌋ = n, where n ≤ x < n + 1 with n an integer.
Write each of the functions in Problems 10 through 13 in terms of Heaviside functions.
Also sketch the graph.
10. f(t) = { t − ⌊t⌋, 0 ≤ t < 3;  0, elsewhere }        11. f(t) = { (−1)^{⌊t⌋}, 0 ≤ t < 3;  0, elsewhere }
12. f(t) = { 3 − ⌊t⌋, 0 ≤ t < 3;  0, elsewhere }        13. f(t) = { t − ⌊t + .5⌋, 0 ≤ t < 2;  0, elsewhere }
Problems n = 14, . . . , 26: Find the Laplace transform of the function in Problem n−13.
In Problems 27 through 32, find the inverse Laplace transform and sketch its graph.
27. (1 − 2e^{−2s} + e^{−3s})/s        28. (1 − 2e^{−2s} + e^{−3s})/s^2
29. (s + e^{−2s} − 2se^{−3s} − se^{−4s})/s^2        30. (1 − e^{−s} + e^{−2s})/(s^2 + 4π^2)
31. 4(1 − 2e^{−πs} + e^{−2πs})/(s(s^2 + 4))        32. (2 − 3e^{−πs} + e^{−2πs})/(s^2 + 1)
In Problems 33 and 34, solve the initial-value problem; then graph the solution and the
nonhomogeneous term on the right side of the differential equation.
33. y' − y = −e h(t − 1), y(0) = 1
34. y' + y = (t + 1)(1 − h(t − 2)), y(0) = 0
In Problems 35 and 36, solve the initial-value problem; then graph the solution and the
forcing function.
35. y'' + π^2 y = π^2 (h(t − 2) − h(t − 4)), y(0) = 1, y'(0) = 0
36. y'' + y = t h(t) − t h(t − 2π), y(0) = 0, y'(0) = 0
In Problems 37–39, obtain the transform of the given function by direct application of
the definition of the transform, and then check by finding the transform with the second
shift theorem.
37. f (t) = h(t − 1) − h(t − 2)
38. f (t) = h(t) − 2h(t − 1) + h(t − 2)
39. f(t) = e^t (h(t) − h(t − 1))
40. A piecewise constant function that is right-continuous for all t, zero on (−∞, t_0), and discontinuous at t_0, t_1, t_2, . . . can be expressed as
    f(t) = j_0 h(t − t_0) + j_1 h(t − t_1) + j_2 h(t − t_2) + · · ·
where each j_i is a constant. Show that j_i = lim_{t→t_i^+} f(t) − lim_{t→t_i^−} f(t). (Thus j_i is the "jump" of f(t) at t_i.) Show further that
    F(s) = (j_0 e^{−t_0 s} + j_1 e^{−t_1 s} + j_2 e^{−t_2 s} + · · ·)/s.
Using these results, redo Problems 9 and 22.
7.4 Periodic Inputs
A function f defined on [0, ∞) is periodic if there is a number p > 0 such that
f (t + p) = f (t) for all t ≥ 0.
The least such p is the function’s period . Such a function may be regarded as
the periodic extension of the function
f0 (t) = f (t)(h(t) − h(t − p)),
which agrees with f on [0, p) and is zero elsewhere. We will see that the Laplace
transform of the periodic extension f of f0 can be easily expressed in terms of
the transform of f0 .
Since f (t + p) = f (t) for all t ≥ 0, it follows that f (t) = f (t − p) for all t ≥ p.
Thus we can express f0 as
f0 (t) = h(t)f (t) − h(t − p)f (t − p),
which allows use of the second shift theorem, producing the transform
F0 (s) = F (s) − e−ps F (s),
where F = Lf as usual. Now we solve for F(s), finding
    F(s) = F_0(s)/(1 − e^{−ps}).    (1)
Thus the transform of the periodic extension of f0 is simply the transform of f0
divided by 1 − e−ps . The transform F0 = Lf0 in the numerator can be obtained
either by expressing f0 in terms of Heaviside functions and applying the second
shift theorem or by evaluating the definite integral
    F_0(s) = ∫_0^p e^{−st} f(t) dt.    (2)
• Example 1 The function
    f(t) = (−1)^{⌊t⌋},  t ≥ 0
is the periodic extension, with period 2, of
    f_0(t) = h(t) − 2h(t − 1) + h(t − 2).
Graphs of f and f_0 are shown below.
The transform of f_0 is
    F_0(s) = (1 − 2e^{−s} + e^{−2s})/s;
therefore, the transform of f is*
    F(s) = (1 − 2e^{−s} + e^{−2s})/(s(1 − e^{−2s})) = ((1 − e^{−s})/(1 + e^{−s})) · (1/s) = tanh(s/2)/s.
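Formula (1) lends itself to a quick numerical check of Example 1: integrate e^{−st} f(t) over one period and divide by 1 − e^{−ps}. A sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def F_periodic(s, p=2.0):
    # f0(t) = (-1)^floor(t) on one period [0, 2): 1 on [0, 1), -1 on [1, 2)
    f0 = lambda t: (-1.0) ** np.floor(t)
    # 'points' flags the interior discontinuity for the quadrature routine
    value, _ = quad(lambda t: np.exp(-s * t) * f0(t), 0.0, p, points=[1.0])
    return value / (1.0 - np.exp(-p * s))

for s in (0.5, 1.0, 2.0):
    assert abs(F_periodic(s) - np.tanh(s / 2) / s) < 1e-8
```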
• Example 2 The function
    f(t) = |sin t|,  t ≥ 0
can be viewed as the periodic extension of
    f_0(t) = sin t (h(t) − h(t − π)) = h(t) sin t + h(t − π) sin(t − π)
with period π. These functions are plotted below.
The Laplace transform of f_0 is
    F_0(s) = (1 + e^{−πs}) · 1/(s^2 + 1);
* Recall that tanh x = (e^x − e^{−x})/(e^x + e^{−x}) = (1 − e^{−2x})/(1 + e^{−2x}).
consequently, the transform of f is
    F(s) = ((1 + e^{−πs})/(1 − e^{−πs})) · 1/(s^2 + 1) = coth(πs/2)/(s^2 + 1).
Inverse transforms
The key to finding the inverse transform of a given transform with 1 − e^{−ps} as a factor in its denominator is the geometric series:
    1/(1 − x) = 1 + x + x^2 + x^3 + · · · ,  if |x| < 1.
Since 0 < e^{−ps} < 1 for all s > 0 (and p > 0), we have the expansion
    1/(1 − e^{−ps}) = 1 + e^{−ps} + e^{−2ps} + e^{−3ps} + · · · ,  s > 0.
Although the transform of any periodic function can be expressed in the form
(1), a factor of 1 − e−ps in the denominator does not guarantee that the inverse
transform will be periodic, as we shall see in the first of the following examples.
• Example 3 The transform
    F(s) = 1/(s(1 − e^{−s}))
can be written as the series
    F(s) = 1/s + e^{−s}/s + e^{−2s}/s + e^{−3s}/s + · · · .
Taking inverse transforms term by term, we find that
    f(t) = h(t) + h(t − 1) + h(t − 2) + h(t − 3) + · · · ,
which is easily recognizable as f(t) = ⌊t + 1⌋ = 1 + ⌊t⌋, t ≥ 0, a shifted copy of the floor function.
• Example 4 The transform
    F(s) = (1 − e^{−s})/(s(1 − e^{−2s})) = 1/(s(1 + e^{−s}))
can be written as the series
    F(s) = 1/s − e^{−s}/s + e^{−2s}/s − e^{−3s}/s + · · · .
Taking inverse transforms term by term, we find that
    f(t) = h(t) − h(t − 1) + h(t − 2) − h(t − 3) + · · · ,
which can be expressed compactly as f(t) = (1 + (−1)^{⌊t⌋})/2, t ≥ 0.
• Example 5 The transform
    F(s) = 1/((s^2 + 1)(1 − e^{−πs}))
can be written as
    F(s) = (1/(s^2 + 1))(1 + e^{−πs} + e^{−2πs} + e^{−3πs} + · · ·).
Since the first factor is the transform of sin t, we apply successive shifts to obtain
    f(t) = h(t) sin t + h(t − π) sin(t − π) + h(t − 2π) sin(t − 2π) + · · · .
Since sin(t − kπ) = (−1)^k sin t for any integer k, we can write this function as
    f(t) = (h(t) − h(t − π) + h(t − 2π) − h(t − 3π) + · · ·) sin t,
which is the same as
    f(t) = (1/2)(sin t + |sin t|).
• Example 6 Let f be as in Example 1; that is, let f be the periodic extension of
    f_0(t) = h(t) − 2h(t − 1) + h(t − 2)
with period 2. Let's find the rest solution of the equation
    y'' + 4π^2 y = 4π^2 f(t).
The transform of f is
    F(s) = (1 − 2e^{−s} + e^{−2s})/(s(1 − e^{−2s}));
so the transform of the solution is
    Y(s) = (1 − 2e^{−s} + e^{−2s})/(1 − e^{−2s}) · 4π^2/(s(s^2 + 4π^2)).
The first factor has the series expansion
    (1 − 2e^{−s} + e^{−2s})/(1 − e^{−2s}) = (1 − 2e^{−s} + e^{−2s})(1 + e^{−2s} + e^{−4s} + e^{−6s} + · · ·)
                                          = 1 − 2e^{−s} + 2e^{−2s} − 2e^{−3s} + · · · ,
and the second factor has the partial fraction expansion
    4π^2/(s(s^2 + 4π^2)) = 1/s − s/(s^2 + 4π^2).
So the transform can be written as
    Y(s) = (1 − 2e^{−s} + 2e^{−2s} − 2e^{−3s} + · · ·)(1/s − s/(s^2 + 4π^2)).
Recognizing the second factor as the transform of 1 − cos 2πt, we apply successive shifts, obtaining the inverse transform
    y(t) = (1 − cos 2πt)h(t) − 2(1 − cos 2π(t − 1))h(t − 1) + 2(1 − cos 2π(t − 2))h(t − 2) − 2(1 − cos 2π(t − 3))h(t − 3) + · · · .
Since cos 2π(t − k) = cos 2πt for any integer k, we finally arrive at the solution
    y(t) = (1 − cos 2πt)(h(t) − 2h(t − 1) + 2h(t − 2) − 2h(t − 3) + · · ·) = (−1)^{⌊t⌋}(1 − cos 2πt).
Notice that this is precisely the periodic extension with period 2 of the solution in Example 9 in Section 7.3.
Problems
The functions in Problems 1 through 3 are the same as those in Problems 7 through 9
in Section 7.3. For each of them, find the Laplace transform of the periodic extension
with period p = 4.
1. [graph]        2. [graph]        3. [graph]
4. Let f0 (t) = h(t) − h(t − 1) − h(t − 2) + h(t − 3).
(a) Sketch the graph of the periodic extension of f0 with period p = 3 and find its
Laplace transform.
(b) Sketch the graph of the periodic extension of f0 with period p = 4 and find its
Laplace transform.
5. Let f(t) = t − ⌊t⌋, t ≥ 0. Graph f, describe it as a periodic extension of some representative function f_0, and then find its Laplace transform.
6. Let f(t) = (t − ⌊t⌋)^2, t ≥ 0. Graph f, describe it as a periodic extension of some representative function f_0, and then find its Laplace transform.
In Problems 7 through 10, find the inverse Laplace transform and sketch its graph.
7. s/((s^2 + 1)(1 − e^{−πs}))        8. (1 − e^{−2πs})/((s^2 + 1)(1 − e^{−4πs}))
9. (1 − e^{−s})/(s^2(1 − e^{−2s}))        10. (1 − e^{−s})/(s^2(1 + e^{−2s}))
In Problems 11 and 12, find the rest solution of the equation and plot its graph. Note that each equation has a forcing function similar to t − ⌊t⌋ from Problem 5.
11. y'' + 4π^2 y = 4π^2 (1 − (t − ⌊t⌋))        12. y'' + π^2 y = π^3 (t − ⌊t⌋)
13. Let f be the periodic extension of f_0(t) = h(t) − 2h(t − 1) + h(t − 2) with period p = 4. Find the rest solution of the equation
    y'' + 4π^2 y = 4π^2 f(t)
and sketch its graph along with the graph of f.
A function f defined on [0, ∞) is said to be antiperiodic with "antiperiod" k if f(t + k) = −f(t) for all t ≥ 0. For example, sin t is antiperiodic with antiperiod π, and (−1)^{⌊t⌋} is antiperiodic with antiperiod 1. An antiperiodic function with antiperiod k can be described as the antiperiodic extension of
    f_0(t) = f(t)(h(t) − h(t − k)),
which vanishes outside the interval [0, k]. Since f(t + k) = −f(t) for all t ≥ 0, it follows that f_0(t) can be written as
    f_0(t) = f(t)h(t) + h(t − k)f(t − k).
In Problems 14 through 16, assume that f is the antiperiodic extension of f0 with
antiperiod k.
14. Show that the transform of f can be expressed in terms of the transform of f_0 as
    F(s) = F_0(s)/(1 + e^{−ks}) = (1/(1 + e^{−ks})) ∫_0^k e^{−st} f(t) dt.
15. Let φ(t) = |f(t)| for all t ≥ 0. To electrical engineers, |f(t)| is known as the full-wave rectification of f(t).
(a) Show that φ is periodic with period k and thus
    Φ(s) = (1/(1 − e^{−ks})) ∫_0^k e^{−st} |f(t)| dt.
(b) Conclude that if f(t) ≥ 0 for 0 ≤ t < k, then the following relationship holds between the Laplace transforms of f(t) and φ(t) = |f(t)|:
    Φ(s) = ((1 + e^{−ks})/(1 − e^{−ks})) F(s) = coth(ks/2) F(s).
16. The function g(t) = (1/2)(f(t) + |f(t)|) is the half-wave rectification of f(t).
(a) Sketch the graph of the half-wave rectification of sin 2πt.
(b) Show that if f(t) ≥ 0 for 0 ≤ t < k, then the following relationship holds between the Laplace transforms of f(t) and g(t) = (1/2)(f(t) + |f(t)|):
    G(s) = (1/(1 − e^{−2ks})) ∫_0^k e^{−st} f(t) dt = F(s)/(1 − e^{−ks}).
17. Use the result of Problem 14 to rework Example 1.
18. Use the result of Problem 15 to rework Example 2.
19. Use the result of Problem 15 in a backward fashion to obtain the result of Example
1 from the transform L[h(t)] = 1/s.
In Problems 20 and 21, sketch a graph of the given function, and use the appropriate
result from either Problem 15 or 16, along with the first shift theorem, to find its Laplace
transform.
20. g(t) = e^{−t} |sin πt|        21. g(t) = (1/2) e^{−t} (sin πt + |sin πt|).
7.5 Impulses and the Dirac Distribution
Consider a spring-mass system being acted upon by some external force. By Newton's second law, force is the rate of change in momentum:
    F(t) = dµ/dt,
where µ = mv. Thus, as illustrated in Figure 1, a force acting on the mass over a time interval [t_0, t_1] results in a change in momentum given by
    ∆µ = µ(t_1) − µ(t_0) = ∫_{t_0}^{t_1} F(t) dt.
Figure 1
A force acting on the mass over a very small time interval [t, t + ∆t] might result, for instance, from striking the mass with a hammer. It will be useful to imagine an approximation to such a force in the form of an impulse—an idealized force concentrated at a single point t_0 in time and resulting in an instantaneous change in momentum. Our goal here is to develop a model for such a force.
Consider a unit change in momentum caused by a force F(t) whose value is a constant b between time t = 0 and t = ∆t and zero at all other times; that is,
    F(t) = (h(t) − h(t − ∆t)) b  and  ∆µ = ∫_0^{∆t} b dt = 1.
This arrangement is always possible no matter how short the time interval is. In fact, b and ∆t are related by b∆t = 1; therefore, b = 1/∆t, and so
    F(t) = (h(t) − h(t − ∆t))/∆t.    (1)
The limit of such forces as ∆t → 0 defines a unit impulse at time t = 0 and may be thought of as a generalized derivative of the Heaviside function h(t).
Clearly this will not be a function in the usual sense, but we will use notation that seems to suggest that it is. To capture the limiting behavior of F as ∆t → 0, we define the symbol δ(t), with units of time^{−1} in the current context, by the following properties:
    (i) δ(t) = 0 for all t ≠ 0;        (ii) ∫_{−∞}^{∞} δ(t) dt = 1.
We emphasize that δ(t) is not a function of t in the usual sense, because (i) and
(ii) are contradictory in the realm of ordinary calculus. Instead, δ(t) is a peculiar invention that serves as a “limit” for a sequence of functions whose values
converge to zero except at a single point, but whose integrals do not converge to
zero. It is an example of a class of “generalized functions” called distributions.
This particular distribution δ(t) is known as the Dirac distribution, or the
Dirac delta function, after the great physicist Paul Dirac.
Let g(t) be a continuous function on (−∞, ∞). Since δ(t) represents the "limit" as ∆t → 0 of the function F(t) in (1), we define
    ∫_{−∞}^{∞} g(t)δ(t) dt = lim_{∆t→0} ∫_{−∞}^{∞} g(t) (h(t) − h(t − ∆t))/∆t dt = lim_{∆t→0} (1/∆t) ∫_0^{∆t} g(t) dt.
This last quantity is the limit as ∆t → 0 of the average value of g(t) over [0, ∆t]. Since g is continuous, this limit must be g(0). Therefore, it is consistent with (i) and (ii) above to define
    (iii) ∫_{−∞}^{∞} g(t)δ(t) dt = g(0) for any continuous g on (−∞, ∞).
A unit impulse at time t = c is represented by δ(t − c), the unit impulse at t = 0 shifted to occur at t = c. A simple change of variables applied to (iii) gives
    (iv) ∫_{−∞}^{∞} g(t)δ(t − c) dt = g(c) for any continuous g on (−∞, ∞).
These last two properties put us in a position to state the Laplace transforms of δ(t) and δ(t − c). Applying (iii) and (iv) with g(t) = e^{−st}, we find that
    L[δ(t)](s) = 1  and  L[δ(t − c)](s) = e^{−cs}.    (2)
Initial-Value Problems
Impulses can cause discontinuities in the derivatives of the solution of a differential equation, or in the solution itself in the case of a first-order equation. Therefore, initial values should be understood in the sense of left-sided limits. For example, in a second-order initial-value problem, the initial conditions y(0) = y_0 and y'(0) = v_0 are understood to mean
    y(0^−) = lim_{t→0^−} y(t) = y_0  and  y'(0^−) = lim_{t→0^−} y'(t) = v_0.    (3)
The rest solution of a second-order equation is understood to be the solution y
for which y(t) = 0 for all t ≤ 0, which implies that each of the statements in (3)
holds with y0 = v0 = 0.
• Example 1 Suppose that an undamped spring-mass system with mass m, stiffness k, and natural angular frequency ω = √(k/m) is initially at rest. At time t = 0, the mass is struck with a hammer, imparting an instantaneous transfer of momentum ∆µ. The subsequent motion may be modeled with the equation
    m y'' + k y = ∆µ · δ(t).
(Note that all terms have units of force.) With zero initial values, the transformed equation is
    m s^2 Y(s) + k Y(s) = ∆µ · 1.
Thus the transform of the rest solution is
    Y(s) = ∆µ/(m s^2 + k) = (∆µ/m)/(s^2 + ω^2),
and consequently the solution is
    y(t) = (∆µ/(mω)) h(t) sin ωt.
For t ≥ 0, this is the same as the solution of m y'' + k y = 0 with y(0) = 0 and initial velocity y'(0) = ∆µ/m. Also note that, if viewed on (−∞, ∞), the solution is continuous, but its derivative has a jump discontinuity at t = 0.
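The idealization can be tested against the narrow-pulse forcing (1) that it replaces. The sketch below (assuming SciPy; m = k = ∆µ = 1 are illustrative values) integrates m y'' + k y = pulse during the pulse, lets the system ring freely afterward, and compares with (∆µ/mω) sin ωt.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, dmu = 1.0, 1.0, 1.0       # illustrative values; omega = sqrt(k/m) = 1
omega = np.sqrt(k / m)
dt = 1e-3                        # width of the approximating pulse

# Phase 1: constant force dmu/dt acts on [0, dt], starting from rest.
rhs_on = lambda t, u: [u[1], (dmu / dt - k * u[0]) / m]
s1 = solve_ivp(rhs_on, (0.0, dt), [0.0, 0.0], rtol=1e-10, atol=1e-12)

# Phase 2: free vibration on [dt, 10], continuing from the phase-1 state.
rhs_off = lambda t, u: [u[1], -k * u[0] / m]
s2 = solve_ivp(rhs_off, (dt, 10.0), s1.y[:, -1], rtol=1e-10, atol=1e-12,
               dense_output=True)

t = np.linspace(dt, 10.0, 1001)
impulse_solution = (dmu / (m * omega)) * np.sin(omega * t)
# The pulse response approaches the impulse response as dt -> 0.
assert np.max(np.abs(s2.sol(t)[0] - impulse_solution)) < 1e-3
```

Shrinking `dt` further drives the discrepancy toward zero, which is the content of the limit defining δ(t).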
• Example 2 Suppose that an undamped spring-mass system with mass m, stiffness k, and natural angular frequency ω = √(k/m) is initially in motion with position y(0) = 0 and velocity y'(0) = v_0. At a later time t = t_1, the mass is struck with a hammer, imparting an instantaneous transfer of momentum ∆µ. The motion of the mass may be modeled by
    m y'' + k y = ∆µ · δ(t − t_1),  y(0) = 0,  y'(0) = v_0.
The transformed equation is
    m(s^2 Y(s) − v_0) + k Y(s) = ∆µ e^{−t_1 s};
so the transform of the solution is
    Y(s) = (∆µ e^{−t_1 s} + m v_0)/(m s^2 + k) = ( ∆µ e^{−t_1 s}/(mω) + v_0/ω ) · ω/(s^2 + ω^2).
Since the second factor is the transform of sin ωt, we apply the second shift theorem to obtain
    y(t) = (∆µ/(mω)) h(t − t_1) sin(ω(t − t_1)) + (v_0/ω) h(t) sin ωt.
Note that if ∆µ = mv_0 and t_1 = (2k + 1)π/ω for some integer k ≥ 0, then for t ≥ t_1 we have
    y(t) = (v_0/ω) sin(ωt − (2k + 1)π) + (v_0/ω) sin ωt = (v_0/ω)(−sin ωt + sin ωt) = 0,
and so the impulse instantaneously stops the motion.
Circuits. Let Q(t) be the charge on the capacitor in a simple RC, LC, or RLC circuit. (See Sections 2.2.3 and 5.1.) If current flows for some period of time [t_0, t_1], then there is a resulting change in the capacitor's charge given by
    ∆Q = Q(t_1) − Q(t_0) = ∫_{t_0}^{t_1} Q'(t) dt.
Since Q'(t) is precisely the current I(t), this becomes
    ∆Q = ∫_{t_0}^{t_1} I(t) dt.
So, just as an instantaneous unit change in the momentum of a mass can be
described with the Dirac distribution, so can an instantaneous unit change in
the charge on a capacitor. A (nearly) instantaneous change in charge might
occur due to a quick discharge across a spark gap or a quick charge from a brief
spike in source voltage.
• Example 3 Consider an RC circuit with R = C = 1 and an initial charge of Q_0 = 0 on the capacitor. A voltage source produces large spikes at t = 0 and t = 1 and is zero elsewhere. With y representing the charge, the model is
    y' + y = a_0 δ(t) + a_1 δ(t − 1),  y(0) = 0,
where a_0 and a_1 are the changes in charge that occur due to the voltage spikes. The transform of the solution is
    Y(s) = (a_0 + a_1 e^{−s}) · 1/(s + 1).
Thus the solution is
    y = a_0 e^{−t} h(t) + a_1 e^{−(t−1)} h(t − 1) = (a_0 h(t) + a_1 e h(t − 1)) e^{−t}.
Problems
Evaluate each integral in Problems 1 through 6.
1. ∫_{−∞}^{∞} (t^2 + 3) δ(t − 2) dt        2. ∫_{−1}^{1} (t^2 + 3) δ(t) dt        3. ∫_{−∞}^{∞} δ(t)/(1 + t^2) dt
4. ∫_{−π}^{π} δ(t − π/6) sin t dt        5. ∫_{−1}^{1} δ(t − π) cos t dt        6. ∫_{−∞}^{∞} δ(t + 1) e^{−t} dt
In Problems 7 through 12, find and graph the rest solution.
7. y'' = δ(t) − δ(t − 1)        8. y'' = δ(t) − 2δ(t − 1) + δ(t − 2)
9. y'' + y = δ(t − 1)        10. y'' + y = δ(t) + δ(t − π)
11. y'' + 3y' + 2y = δ(t) − δ(t − ln 3)        12. y'' + 2y' + y = δ(t − 1) − δ(t − 2)
In Problems 13-15, find and graph the solution satisfying y(0) = 1.
13. y' = δ(t − 1)        14. y' + y = δ(t − ln 2) − δ(t − ln 3)        15. y' + y = −δ(t − ln 2)
16. Consider the rest solution of y' − y = δ(t) − a δ(t − 1). Find a so that the solution is zero for all t ≥ 1.
17. Show that if p and q are constants, then the rest solution of y'' + p y' + q y = δ(t) and the solution of y'' + p y' + q y = 0, y(0) = 0, y'(0) = 1, have the same Laplace transform; hence the solutions agree for all t ≥ 0.
18. (a) Using Laplace transforms, find the solution of y' = δ(t), y(0^−) = 0. What interpretation of δ(t) does this imply?
    (b) Let a > 0 and find the solution of y' = (1/a)(h(t) − h(t − a)), y(0^−) = 0 in terms of a. Sketch the graph of the forcing function and the solution for a = 1, 0.5, and 0.1. What happens as a → 0^+?
19. (a) Let a > 0, and find the rest solution of y'' + y = (1/a)(h(t) − h(t − a)) in terms of a.
    (b) Show that if t ≥ a, then y(t) = (1/a)(cos(t − a) − cos t).
    (c) Argue that if t > 0, then y(t) → sin t as a → 0^+.
    (d) Compare with the rest solution of y'' + y = δ(t).
20. Find an impulse-driven system whose rest solution is as in the following figure.
[graph]
7.6 Convolution
In this section we will study integral equations of the form
    y(t) + ∫_0^t g(t − x)y(x) dx = f(t),    (1)
where f and g are given functions of t. An equation of this form is called a linear Volterra integral equation of convolution type. General linear Volterra integral equations are of the form
    α(t)y(t) + ∫_a^t g(t, x)y(x) dx = f(t).
When g(t, x) can be expressed as a function of t − x as in (1), the integral involved is called a convolution integral, which defines the convolution of the two functions g and y, sometimes denoted by g ∗ y. That is,
    (g ∗ y)(t) = ∫_0^t g(t − x)y(x) dx,
and so (1) can be written in the abbreviated form
    y + g ∗ y = f.
Integral equations (and integro-differential equations) arise in applications where behavior is influenced by past history as well as the state at time t. Convolution integrals, in particular, arise when the contribution at time t of the history at time x < t depends upon the elapsed time t − x.
• Example 1 The convolution of t and t^2 is
    t ∗ t^2 = ∫_0^t (t − x)x^2 dx = [ (t/3)x^3 − (1/4)x^4 ]_{x=0}^{x=t} = (1/12)t^4.
The convolution of e^t and e^{−t} is
    e^t ∗ e^{−t} = ∫_0^t e^{t−x} e^{−x} dx = ∫_0^t e^t e^{−2x} dx = [ −(1/2)e^t e^{−2x} ]_{x=0}^{x=t} = (1/2)e^t(1 − e^{−2t}) = sinh t.
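Both convolutions in Example 1 can be spot-checked by numerical quadrature; a sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def conv(g, y, t):
    # (g * y)(t) = \int_0^t g(t - x) y(x) dx
    value, _ = quad(lambda x: g(t - x) * y(x), 0.0, t)
    return value

for t in (0.5, 1.0, 2.0):
    assert abs(conv(lambda u: u, lambda x: x**2, t) - t**4 / 12) < 1e-10
    assert abs(conv(np.exp, lambda x: np.exp(-x), t) - np.sinh(t)) < 1e-10
```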
It is clear from the linearity properties of integration that for a given g the operator K defined by Ky = g ∗ y is linear; that is,
    g ∗ (c y) = c (g ∗ y)  and  g ∗ (u + v) = g ∗ u + g ∗ v.
It is also true that convolution is commutative; that is, g ∗ y = y ∗ g. To see this, simply make a change of variable τ = t − x in the integral as follows:
    g ∗ y = ∫_0^t g(t − x)y(x) dx = ∫_t^0 g(τ)y(t − τ)(−dτ) = ∫_0^t y(t − τ)g(τ) dτ = y ∗ g.
The Convolution Theorem
With an eye toward solving (1), we would like to know how to express the Laplace transform of g ∗ y in terms of the individual transforms of g and y. The answer is provided by the convolution theorem:
    L[g ∗ y](s) = G(s)Y(s),
where the transform exists for all s such that both transforms G and Y exist.
To establish this result, first note that by the definition of the transform,
    L[g ∗ y](s) = ∫_0^∞ e^{−st} ( ∫_0^t g(t − x)y(x) dx ) dt = ∫_0^∞ ∫_0^t e^{−st} g(t − x)y(x) dx dt.
The planar region over which we are integrating is {(t, x) | 0 ≤ x ≤ t < ∞}; so interchanging the order of integration produces
    L[g ∗ y](s) = ∫_0^∞ ∫_x^∞ e^{−st} g(t − x)y(x) dt dx = ∫_0^∞ y(x) ( ∫_x^∞ e^{−st} g(t − x) dt ) dx.
This step can be justified by the absolute convergence of the improper integrals involved, which follows from the existence of the Laplace transforms of g and y. Now the change of variable τ = t − x in the inner integral gives us
    L[g ∗ y](s) = ∫_0^∞ y(x) ( ∫_0^∞ e^{−s(τ+x)} g(τ) dτ ) dx
                = ∫_0^∞ e^{−sx} y(x) ( ∫_0^∞ e^{−sτ} g(τ) dτ ) dx
                = ∫_0^∞ e^{−sx} y(x) G(s) dx = G(s) ∫_0^∞ e^{−sx} y(x) dx = G(s)Y(s).
The next two examples illustrate how the convolution theorem can be used
to help calculate certain convolution integrals.
• Example 2 Consider the convolution
\[
\cos t * \sin t = \int_0^t \cos(t-x)\sin x\,dx.
\]
According to the convolution theorem,
\[
L[\cos t * \sin t](s) = \frac{s}{s^2+1}\cdot\frac{1}{s^2+1} = \frac{s}{(s^2+1)^2}.
\]
By (10) in Section 7.2, we recognize therefore that
\[
\int_0^t \cos(t-x)\sin x\,dx = \frac{t}{2}\,\sin t.
\]
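As a quick numerical confirmation of this identity (our own sketch; the helper and test points are assumptions), we can approximate the convolution integral directly and compare it with (t/2) sin t:

```python
import math

def convolve(f, g, t, n=4000):
    # Trapezoidal approximation of (f * g)(t) = integral of f(t-x) g(x), x in [0, t].
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        x = k * h
        total += f(t - x) * g(x)
    return total * h

# Example 2: cos t * sin t = (t/2) sin t
for t in (0.5, 1.0, 2.3):
    assert abs(convolve(math.cos, math.sin, t) - 0.5 * t * math.sin(t)) < 1e-6
```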
• Example 3 Consider the convolution
\[
\sqrt{t} * \sqrt{t} = \int_0^t \sqrt{t-x}\,\sqrt{x}\,dx.
\]
According to the convolution theorem and Problem 28 in Section 7.2,
\[
L[\sqrt{t} * \sqrt{t}](s) = \left(\frac{\sqrt{\pi}}{2s^{3/2}}\right)^{\!2} = \frac{\pi}{4s^3}.
\]
Therefore we conclude that
\[
\int_0^t \sqrt{t-x}\,\sqrt{x}\,dx = \frac{\pi}{8}\,t^2.
\]
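This result can also be spot-checked numerically. The sketch below is ours (not from the text); a larger subdivision count is used because the square-root integrand has an unbounded derivative at the endpoints, which slows the trapezoidal rule's convergence.

```python
import math

def convolve(f, g, t, n=20000):
    # Trapezoidal approximation of (f * g)(t) = integral of f(t-x) g(x), x in [0, t].
    # The large n compensates for the endpoint singularities of sqrt.
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        x = k * h
        total += f(t - x) * g(x)
    return total * h

# Example 3: sqrt(t) * sqrt(t) = (pi/8) t^2
t = 1.7  # arbitrary test point (our choice)
assert abs(convolve(math.sqrt, math.sqrt, t) - math.pi * t**2 / 8) < 1e-4
```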
Integral Equations
We will now look at a few examples to indicate how the convolution theorem
can be used to solve linear Volterra integral equations of convolution type.
• Example 4 Consider the integral equation
\[
y(t) + \int_0^t y(x)\,dx = 1.
\]
Viewing the integral as 1 * y, we see that the transform of the solution satisfies
\[
Y(s) + \frac{1}{s}\,Y(s) = \frac{1}{s}.
\]
Solving for Y(s) yields Y(s) = 1/(s+1), and so y(t) = e^{-t}. Note that differentiating this integral equation results in y' + y = 0 and that the integral equation implies the initial value y(0) = 1. Thus it is no surprise that y = e^{-t}.
• Example 5 Consider the integral equation
\[
y(t) + \int_0^t (t-x)y(x)\,dx = 1.
\]
Viewing the integral as t * y, we see that the transform of the solution satisfies
\[
Y(s) + \frac{1}{s^2}\,Y(s) = \frac{1}{s}.
\]
Solving for Y(s) produces Y(s) = s/(s^2 + 1), and consequently y(t) = cos t.
• Example 6 Consider the integral equation
\[
y(t) + 3\int_0^t \sin(t-x)\,y(x)\,dx = 4.
\]
Viewing the integral as (sin t) * y, we see that Y(s) satisfies
\[
Y(s) + \frac{3}{s^2+1}\,Y(s) = \frac{4}{s}.
\]
Solving for Y(s) produces
\[
Y(s) = 4\,\frac{s^2+1}{s(s^2+4)} = \frac{1}{s} + \frac{3s}{s^2+4}.
\]
Therefore, the solution is y(t) = 1 + 3 cos 2t.
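A solution obtained through transforms can always be substituted back into the integral equation. The following verification sketch is our own (the helper `convolve` and the test points are assumptions): it checks numerically that y(t) = 1 + 3 cos 2t makes the residual y(t) + 3 (sin t * y)(t) - 4 vanish.

```python
import math

def convolve(f, g, t, n=4000):
    # Trapezoidal approximation of (f * g)(t) = integral of f(t-x) g(x), x in [0, t].
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        x = k * h
        total += f(t - x) * g(x)
    return total * h

y = lambda t: 1.0 + 3.0 * math.cos(2.0 * t)  # candidate solution from Example 6
for t in (0.5, 1.0, 2.0):
    residual = y(t) + 3.0 * convolve(math.sin, y, t) - 4.0
    assert abs(residual) < 1e-5
```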
The Green’s Function Revisited
In Section 6.2, it was shown that, given any two linearly independent solutions u and v of the homogeneous equation y'' + p y' + q y = 0, the rest solution of
\[
y'' + p\,y' + q\,y = f
\]
could be obtained by variation of constants and expressed as
\[
y(t) = \int_0^t G_0(t,x)f(x)\,dx,
\]
where G_0(t,x) is the Green's function
\[
G_0(t,x) = \frac{u(x)v(t) - u(t)v(x)}{u(x)v'(x) - u'(x)v(x)}
\]
corresponding to the operator P(D) = D^2 + pD + qI. Laplace transform methods and the convolution theorem provide another approach to finding the Green's function for an operator P(D) with constant coefficients.
Suppose that P (D) has constant coefficients and that f (t) is a given function
with Laplace transform F (s). Recall that the Laplace transform of the rest
solution of P (D)y = f is
\[
Y(s) = \frac{1}{P(s)}\,F(s).
\]
Thus, by the convolution theorem,
\[
y = \int_0^t G_0(t-x)f(x)\,dx, \quad\text{where}\quad LG_0 = \frac{1}{P(s)}.
\]
Thus we can find the Green’s function for P (D) by finding the inverse Laplace
transform of 1/P (s). An interesting implication is that this type of Green’s
function G0 —for operators with constant coefficients—is always a function of
just one variable.
• Example 7 The transform of the rest solution of y'' + y = f is
\[
Y(s) = \frac{1}{s^2+1}\,F(s).
\]
Thus LG_0 = 1/(s^2 + 1). Therefore G_0(t) = sin t, and by the convolution theorem,
\[
y = \int_0^t \sin(t-x)f(x)\,dx.
\]
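This representation is easy to test on a sample forcing function. In the sketch below (our own check, with f(t) = 1 chosen by us), the rest solution should be y(t) = 1 - cos t, which indeed satisfies y'' + y = 1 with y(0) = y'(0) = 0; the code compares the numerically computed convolution against that closed form.

```python
import math

def convolve(f, g, t, n=4000):
    # Trapezoidal approximation of (f * g)(t) = integral of f(t-x) g(x), x in [0, t].
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        x = k * h
        total += f(t - x) * g(x)
    return total * h

# Rest solution of y'' + y = f via y = sin * f, with the sample f(t) = 1.
for t in (0.5, 1.5, 3.0):
    y = convolve(math.sin, lambda u: 1.0, t)
    assert abs(y - (1.0 - math.cos(t))) < 1e-6
```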
• Example 8 Consider the third-order operator P(D) = D^3 + 2D^2 - D - 2I, which factors as (D + I)(D - I)(D + 2I). The corresponding Green's function G_0 is the inverse transform of
\[
\frac{1}{P(s)} = \frac{1}{(s+1)(s-1)(s+2)} = \frac{1}{6}\left(\frac{1}{s-1} - \frac{3}{s+1} + \frac{2}{s+2}\right).
\]
Therefore,
\[
G_0(t) = \frac{1}{6}\left(e^t - 3e^{-t} + 2e^{-2t}\right),
\]
and the rest solution of y''' + 2y'' - y' - 2y = f can be represented by
\[
y = \frac{1}{6}\int_0^t \left(e^{t-x} - 3e^{-(t-x)} + 2e^{-2(t-x)}\right)f(x)\,dx.
\]
It is interesting to note that, since the Green’s function G0 (t) corresponding
to the operator P (D) has the Laplace transform 1/P (s), it is actually the rest
solution of the equation
P (D)y = δ(t),
where δ(t) is the Dirac distribution. Consequently, for any f with a Laplace
transform, the rest solution of P (D)y = f (t) is the convolution of f with the
rest solution of P (D)y = δ(t). Therefore, having found the rest solution of
P (D)y = δ(t), one has, in a sense, also found the rest solution of P (D)y = f (t)
for any f .
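As a concrete check of this interpretation, take the Green's function from Example 8: being the rest solution of P(D)y = δ(t), it should satisfy y(0) = y'(0) = 0, y''(0) = 1, and the homogeneous equation for t > 0. The sketch below is our own verification (the hand-differentiated derivatives G1, G2, G3 are assumptions computed from G_0):

```python
import math

# Green's function from Example 8 and its derivatives (hand-differentiated).
G  = lambda t: (math.exp(t) - 3 * math.exp(-t) + 2 * math.exp(-2 * t)) / 6
G1 = lambda t: (math.exp(t) + 3 * math.exp(-t) - 4 * math.exp(-2 * t)) / 6
G2 = lambda t: (math.exp(t) - 3 * math.exp(-t) + 8 * math.exp(-2 * t)) / 6
G3 = lambda t: (math.exp(t) + 3 * math.exp(-t) - 16 * math.exp(-2 * t)) / 6

# Rest-solution-of-delta conditions: y(0) = y'(0) = 0, y''(0) = 1.
assert abs(G(0.0)) < 1e-12 and abs(G1(0.0)) < 1e-12
assert abs(G2(0.0) - 1.0) < 1e-12
# Homogeneous equation y''' + 2y'' - y' - 2y = 0 for t > 0.
for t in (0.3, 1.0, 2.5):
    assert abs(G3(t) + 2 * G2(t) - G1(t) - 2 * G(t)) < 1e-9
```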
Problems
In Problems 1 through 4, use the definition (and commutativity) to compute the convolution.
1. t ∗ t
2. t ∗ √t
3. 1 ∗ cos t
4. t ∗ e^{-t}
In Problems 5 through 8, evaluate the convolution by means of the convolution theorem.
5. sin t ∗ sin 2t
6. e^t ∗ e^t
7. e^t ∗ cos t
8. t^m ∗ t^n, for integers m, n > 0
Solve the given integral equation in each of Problems 9 through 14.
9. y + e^{-t} ∗ y = 1
10. e^t ∗ y = e^{-t}
11. y + t ∗ y = t
12. y + t ∗ y = t^2
13. y − t ∗ y = h(t) − h(t − 1)
14. y − t ∗ y = |1 − t|
Solve the given integro-differential equation in Problems 15 through 17.
15. y' + e^{-2t} ∗ y = h(t), y(0) = 0
16. y' + y − 2t ∗ y = 0, y(0) = 5
17. y' − (cos t) ∗ y = h(t), y(0) = 0
In Problems 18 and 19, use the results of Problems 15 and 16 in Section 7.4 to help in
solving the given integral equation.
18. (cos t) ∗ y = | sin t |
19. 2(cos t) ∗ y = sin t + | sin t |
Suppose that u and v have Laplace transforms and that u(0) = u_0, u'(0) = u_1, v(0) = v_0, v'(0) = v_1. Use the convolution theorem and the first differentiation theorem to formally verify the identities in Problems 20 through 22.
20. u ∗ v' − u' ∗ v = u_0 v − v_0 u
21. (u ∗ v)' = u ∗ v' + v_0 u = u' ∗ v + u_0 v
22. (u ∗ v)'' = u ∗ v'' + v_1 u + v_0 u' = u'' ∗ v + u_1 v + u_0 v'
23. Conclude from Problem 21 that y = t ∗ e^{at} satisfies y' = ay + t. Also, the definition of convolution implies that y(0) = 0. Use this to compute t ∗ e^{at}.
24. Conclude from Problem 22 that y = g ∗ sin at satisfies y'' + a^2 y = ag. Also conclude from Problem 21 and the definition of convolution that y'(0) = y(0) = 0. Use this to compute cos bt ∗ sin at.
25. Let y be the rest solution of y'' + p y' + q y = f, where p and q are constants with q ≠ 0, and let u be the solution of u'' + p u' = 0, u(0) = 0, u'(0) = 1.
(a) Using the results of Problems 20 through 22, show that y satisfies the linear Volterra integral equation of convolution type
y + q u ∗ y = u ∗ f.
(b) Express the problem y'' + y' + y = e^t, y(0) = y'(0) = 0, as a linear Volterra integral equation of convolution type.
26. Suppose that f and g have Laplace transforms and that P is a polynomial with
constant coefficients. Show that u is the rest solution of P (D)y = f , if and only if
g ∗ u is the rest solution of P (D)y = g ∗ f.
In Problems 27 through 30, find the Green’s function G0 (t) of the given operator, and
write down the rest solution of P (D)y = f .
27. P(D)y = y'' − y
28. P(D)y = y'' + 2y' + 2y
29. P(D)y = y'' + 3y' + 2y
30. P(D)y = y' + y
In Problems 31 through 33, find the Green’s function G0 (t) of the given operator, and
write down the solution of T y = f, y(0) = 0.
31. T y = y' + 2e^{-3t} ∗ y
32. T y = y' + e^{-2t} ∗ y
33. T y = y' + 3 cos t ∗ y