
MAP 3305-Engineering Mathematics 1
Fall 2012
Notes on Linear Equations
For adults only
We state here some basic results about linear equations, of order two and higher (and lower). If the order is two, we study
the equation
y'' + p(t)y' + q(t)y = g(t).    (1)
We recall that a function y = φ(t) (sometimes also written y = y(t)) is a solution of this equation in an interval (α, β) if and
only if φ is twice differentiable in (α, β) and if for EVERY t in (α, β) (that is, for every t such that α < t < β) one has
φ''(t) + p(t)φ'(t) + q(t)φ(t) = g(t).
The linear equation is called homogeneous if g ≡ 0 (g is identically 0).
1 Existence and uniqueness of solutions of the initial value problem.
Theorem 1. Assume p, q, g are continuous in an interval I = (α, β), α < β (possibly α = −∞ or β = ∞). Then for every
t0 ∈ I, every pair of real numbers y0 , y1 , there exists precisely one function y = φ(t) defined and twice differentiable for all
t ∈ I such that
φ''(t) + p(t)φ'(t) + q(t)φ(t) = g(t)
for all t ∈ I and such that φ(t0) = y0, φ'(t0) = y1.
In other words: If p, q, g are continuous in I, then the initial value problem
y'' + p(t)y' + q(t)y = g(t),
y(t0) = y0,
y'(t0) = y1,
has a unique solution in I for every choice of y0 , y1 ; t0 ∈ I.
The proof of this theorem is outside of the scope of this course. However, every other general result that we see
about linear second order differential equations is an easy consequence of this theorem.
The order n analogue is:
Theorem 1’. Assume p0 , p1 , . . . , pn−1 , g are continuous functions in an interval I = (α, β), α < β (possibly α = −∞ or
β = ∞). Then for every t0 ∈ I, every n real numbers y0 , y1 , . . . , yn−1 , there exists a unique solution in I of the initial value
problem
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = g(t),
y(t0) = y0,
y'(t0) = y1,
· · ·
y^(n−1)(t0) = yn−1.
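As an aside (not part of the notes), Theorem 1' can be watched in action with the computer algebra system sympy: given an initial value problem with continuous coefficients, a symbolic solver returns exactly one solution.

```python
# Illustration (mine, not from the notes): sympy's dsolve returns the one and
# only solution of an initial value problem, as Theorems 1 and 1' guarantee.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + y = 0 with y(0) = 2, y'(0) = -1, on the interval I = (-oo, oo)
sol = sp.dsolve(sp.Eq(y(t).diff(t, 2) + y(t), 0), y(t),
                ics={y(0): 2, y(t).diff(t).subs(t, 0): -1})
print(sol.rhs)  # mathematically equal to 2*cos(t) - sin(t)
```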
We concentrate now on the homogeneous equation, at least until further notice. The following theorem, whose proof is a
simple calculus exercise, is formulated for homogeneous equations of order two but it holds for homogeneous equations of all
orders (and is just as simple to prove). Incidentally, proving a theorem means justifying it, explaining why the statements
made in the theorem have to be true. Frequently proofs contain important ideas and, almost always, knowing the proof of a
result helps in understanding the result.
Theorem 2. If y = y(t) is a solution of the homogeneous equation y'' + py' + qy = 0, then so is y = cy(t) for all real numbers
c. If y = y1 (t), y = y2 (t) are solutions of the homogeneous equation, so is y = c1 y1 (t) + c2 y2 (t) for any choice of real numbers
c1 , c2 .
We notice: y = 0 is always a solution of the homogeneous equation.
2 Fundamental sets of solutions of the homogeneous equation.
By Theorem 2, if you have any non-zero solution of a homogeneous differential equation, you have automatically many
solutions; you just multiply the solution you have by any constant to get a new and different solution (except if the solution
you had was the zero solution, with which you can’t do a great deal). If you have two solutions, you have even more ways of
forming new solutions, since you can multiply both by constants and then add the results. With three solutions, you get even
more. Or do you? Some natural questions are: If we start with one, or two, or three, or more, solutions of the homogeneous
equation, and multiply the solutions we start with by diverse constants and then add, how many different solutions of the
homogeneous equation can we get? Can we get them all? And, if so, how many solutions do we have to have originally?
We consider an example, trying to illustrate these questions. The equation itself is not terribly important, and we won’t
learn any method to solve equations looking like this one. We consider the third order homogeneous equation:
(t^2 − 2t + 2)y''' − t^2 y'' + 2ty' − 2y = 0.
We could write it in the standard form
y''' − (t^2/(t^2 − 2t + 2)) y'' + (2t/(t^2 − 2t + 2)) y' − (2/(t^2 − 2t + 2)) y = 0,
but this just makes working with it more awkward. The leading coefficient, namely t^2 − 2t + 2, is never zero so we can take
I = (−∞, ∞).
The function y = e^t is a solution of the equation. We discovered this by sheer luck. However one discovers a non-zero solution, once it has been found it is easy to verify that it really is a solution. With this one solution in hand, we can get many more
solutions. Here are a few expressions that, when interpreted as functions, solve the equation:
0,  e^t,  2e^t,  −e^t,  √2 e^t,  e^{t−2},  −57π^2 e^t
and zillions more. Notice that e^{t−2} = e^{−2} e^t, so it is a constant times e^t.
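If you want to check any of these claims mechanically (an aside using sympy; the helper name `residual` is mine), substituting a candidate into the left hand side and simplifying should give identically 0:

```python
# Sanity check (not part of the original notes): verify with sympy that
# y = e^t, and constant multiples of it, solve
# (t^2 - 2t + 2) y''' - t^2 y'' + 2t y' - 2y = 0.
import sympy as sp

t = sp.symbols('t')

def residual(y):
    """Left-hand side of the ODE evaluated at a candidate solution y(t)."""
    return sp.simplify((t**2 - 2*t + 2)*sp.diff(y, t, 3)
                       - t**2*sp.diff(y, t, 2) + 2*t*sp.diff(y, t) - 2*y)

print(residual(sp.exp(t)))               # 0
print(residual(-57*sp.pi**2*sp.exp(t)))  # 0: constant multiples work too
```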
Are all solutions of this form? The answer is NO. We can easily verify that y = t also solves the differential equation, yet there is no constant C that will allow us to write t = Ce^t. The solution y = t is totally unrelated to the solution y = e^t.
Now that we have two solutions, we can add a whole new (infinite) bunch of solutions to the previous list. Here are a few
that could not be obtained using e^t alone:
t,  2t,  −t,  e^t + t,  −3e^t + 5t,  (1/125) e^t − ((7 + √e)/(5 √3.72)) t,  . . . .
Are now all solutions of this form? The answer is again NO. The function y = t^2 is also a solution, and try as one may, there
is no way we can write t^2 = C1 e^t + C2 t, with C1 , C2 constants. If such constants C1 , C2 could be found, they would have to
work for t = 0. Setting t = 0, the alleged equation t^2 = C1 e^t + C2 t becomes 0 = C1 , so C1 must be zero. That means that
the equation really is t^2 = C2 t; since the equation has to hold for all t, and t is non-zero except at one point, we can cancel t to get C2 = t.
Well, if C2 = t it isn't constant. Case closed.
With the solution y = t^2, we can now keep on expanding our list of solutions. We can add expressions such as
t^2,  2t + t^2,  −3e^t + 5t^2,  6e^t + 4t + 21t^2,  etc.
It turns out that now all solutions are of this form; the general solution of the equation we are considering is given by
y = C1 e^t + C2 t + C3 t^2.
How can we be sure? Thanks to Theorem 1, more precisely 1’, it is easy to be sure. Theorem 1’ says that all initial value
problems have a solution, and no more than one solution. If we can show that we can satisfy every initial value problem at
some point t0 ∈ I, then we have all solutions.
(Do you see why this is so? If not, why not? Should some sentences be read again?)
In our interval I = (−∞, ∞), the easiest point to work with is t0 = 0. At 0, the general initial value problem for this third order equation is: Solve
(t^2 − 2t + 2)y''' − t^2 y'' + 2ty' − 2y = 0,
y(0) = y0,
y'(0) = y1,
y''(0) = y2.
We should be able to solve this problem no matter what y0 , y1 , y2 are. Can we do it? What we are asking is: Can we
find constants C1 , C2 , C3 for every possible choice of y0 , y1 , y2 so that the function y = C1 e^t + C2 t + C3 t^2 (which solves the
differential equation) satisfies the initial conditions, namely
(C1 e^t + C2 t + C3 t^2)|t=0 = y0,
(d/dt)(C1 e^t + C2 t + C3 t^2)|t=0 = y1,
(d^2/dt^2)(C1 e^t + C2 t + C3 t^2)|t=0 = y2.
If we carry out the differentiations and evaluations, this works out to:
C1 = y0,
C1 + C2 = y1,
C1 + 2C3 = y2.
The solution is C1 = y0 , C2 = y1 − y0 , C3 = (y2 − y0 )/2. Every initial value problem can be satisfied indeed; it follows that
we have found our general solution. That is,
y = C1 e^t + C2 t + C3 t^2
is indeed the general solution of the equation we are considering.
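As a sanity check (a sympy aside, mine), the triangular system above can be solved symbolically and gives the same constants:

```python
# Check (sympy, mine): solve the system C1 = y0, C1 + C2 = y1, C1 + 2*C3 = y2
# for the constants in the general solution y = C1*e^t + C2*t + C3*t^2.
import sympy as sp

y0, y1, y2, C1, C2, C3 = sp.symbols('y0 y1 y2 C1 C2 C3')
sol = sp.solve([sp.Eq(C1, y0), sp.Eq(C1 + C2, y1), sp.Eq(C1 + 2*C3, y2)],
               [C1, C2, C3])
print(sol)  # C1 = y0, C2 = y1 - y0, C3 = (y2 - y0)/2
```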
If you followed all this (and, if you didn’t, why not?) it should be fairly clear that if we want to get the general solution
of an n-th order linear homogeneous equation, it suffices to find n solutions
y = φ1 (t), y = φ2 (t), . . . , y = φn (t)
with the property that at some point t0 of the interval I the n × n system of linear equations in the unknowns C1 , . . . , Cn
φ1(t0)C1 + φ2(t0)C2 + · · · + φn(t0)Cn = y0,
φ1'(t0)C1 + φ2'(t0)C2 + · · · + φn'(t0)Cn = y1,
φ1''(t0)C1 + φ2''(t0)C2 + · · · + φn''(t0)Cn = y2,
· · ·
φ1^(n−1)(t0)C1 + φ2^(n−1)(t0)C2 + · · · + φn^(n−1)(t0)Cn = yn−1    (2)
can be solved for every choice of constants y0 , y1 , . . . , yn−1 . It should also be fairly clear that if it can be done at one point t0
it can be done at all points t ∈ I. In fact, by Theorem 1’, if we can solve all initial value problems at t0 , we get all solutions
uniquely. If we get all solutions, we can solve any initial value problem based at any point of I whatsoever. Think about it.
Linear algebra gives a condition under which a system of n linear equations in n unknowns has a solution no matter what
the right hand sides are: The determinant of the system must be different from zero. Don’t worry too much if you haven’t
seen determinants larger than three by three, we hardly are going to use them. However, it saves time to do things at a
somewhat more general level, and see that what is true for order two is also true, mutatis mutandis, for order three, four,
five, etc.
So here are a few definitions and facts. The definitions formalize our previous discussion, the facts are immediate
consequences of what we did so far.
Definition. Let f1 , f2 , . . . , fn be n functions (at least) n − 1 times differentiable on the interval I. The Wronskian of these
functions is a new function on I defined at each t ∈ I as the following determinant:
W(f1, . . . , fn)(t) = | f1(t)        f2(t)        · · ·  fn(t)        |
                       | f1'(t)       f2'(t)       · · ·  fn'(t)       |
                       | · · ·        · · ·        · · ·  · · ·        |
                       | f1^(n−1)(t)  f2^(n−1)(t)  · · ·  fn^(n−1)(t)  |
For example, if n = 2, then
W(f, g)(t) = | f(t)   g(t)  |
             | f'(t)  g'(t) | = f(t)g'(t) − g(t)f'(t).
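As an aside (sympy; the helper name `wronskian` is my own), the definition translates directly into code: stack the successive derivatives in rows and take the determinant. For the three solutions e^t, t, t^2 of our third order example, the Wronskian works out to e^t (t^2 − 2t + 2), which never vanishes:

```python
# Sketch (helper name `wronskian` is mine): compute the Wronskian straight
# from the definition, as a determinant of successive derivatives.
import sympy as sp

t = sp.symbols('t')

def wronskian(fs, t):
    """W(f1, ..., fn)(t): rows are the 0th through (n-1)st derivatives."""
    n = len(fs)
    M = sp.Matrix([[sp.diff(f, t, i) for f in fs] for i in range(n)])
    return sp.simplify(M.det())

# The three solutions of the third order example above; the result equals
# exp(t)*(t**2 - 2*t + 2), which is never zero on (-oo, oo).
print(wronskian([sp.exp(t), t, t**2], t))
```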
Definition. A set {φ1 , . . . , φn } is said to be a fundamental set of solutions of the linear homogeneous equation
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = 0
in the interval I if (and only if) the following conditions hold:
1. Each function φj is a solution of the linear homogeneous differential equation (j = 1, . . . , n).
2. W(φ1 , φ2 , . . . , φn)(t) ≠ 0 for all t ∈ I.
Theorem 3. Let {φ1 , . . . , φn } be a fundamental set of solutions of the linear homogeneous differential equation
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = 0
in the interval I. Then the general solution of this equation is given by
y = C1 φ1(t) + · · · + Cn φn(t),
where C1 , C2 , . . . , Cn are constants.
That y = C1 φ1(t) + · · · + Cn φn(t) is the general solution of the equation in question means (precisely):
1. For every choice of constants C1 , . . . , Cn , the function y = C1 φ1(t) + · · · + Cn φn(t) is a solution of the differential equation
in question.
2. For every solution y = y(t) of the differential equation in question there is a unique choice of constants C1 , . . . , Cn such
that y(t) = C1 φ1(t) + · · · + Cn φn(t) for all t ∈ I.
The converse of Theorem 3 also holds:
Theorem 4. Let φ1 , . . . , φn be n functions defined on the interval I. If
y = C1 φ1(t) + · · · + Cn φn(t)
is the general solution of the linear homogeneous equation
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = 0
in the interval I, then {φ1 , . . . , φn } is a fundamental set of solutions of this equation.
We mentioned that if the n × n system of linear equations (2) could be solved at t0 for all choices of the right hand sides,
then the same property had to hold at all t ∈ I. That property was equivalent to the Wronskian (which is the determinant
of the system) being non-zero. We formulate this as another theorem.
Theorem 5. Let φ1 , . . . , φn be n solutions of the linear homogeneous equation
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = 0
in the interval I, in which the coefficients p0 , p1 , . . . , pn−1 are continuous. If W(φ1 , . . . , φn)(t0) ≠ 0 at some point t0 ∈
I, then W(φ1 , . . . , φn)(t) ≠ 0 for all points t ∈ I. Equivalently, if W(φ1 , . . . , φn)(t0) = 0 at some point t0 ∈ I, then
W(φ1 , . . . , φn)(t) = 0 for all points t ∈ I.
However, we should be careful. If we consider the differential equation
t^2 y'' − 2ty' + 2y = 0
it turns out (as one can easily verify) that y = t and y = t2 are solutions. Now
W(t, t^2) = | t   t^2 |
            | 1   2t  | = 2t^2 − t^2 = t^2,
so that W(t, t^2) = 0 at t = 0, but nowhere else. What happens is that, in standard form, the equation in question is
y'' − (2/t) y' + (2/t^2) y = 0
and Theorem 5 applies to the interval I = (0, ∞) or to I = (−∞, 0), but does not apply to any interval containing 0.
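This little computation is easy to check by hand, or mechanically (a sympy aside, mine):

```python
# Check (mine, with sympy): W(t, t^2) = t^2, which vanishes exactly at t = 0,
# the point where the standard form of the equation has discontinuous coefficients.
import sympy as sp

t = sp.symbols('t')
W = sp.Matrix([[t, t**2],
               [1, 2*t]]).det()
print(sp.expand(W))  # t**2
print(W.subs(t, 0))  # 0
```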
To conclude this section, we mention another property a set of solutions can have, equivalent to its being a fundamental
set of solutions.
Theorem 6. Assume that φ1 , . . . , φn are solutions of the homogeneous linear differential equation
y^(n) + pn−1(t)y^(n−1) + · · · + p0(t)y = 0
in some interval I where the coefficient functions are continuous. Then {φ1 , . . . , φn } is a fundamental set of solutions of the
equation if and only if it is linearly independent, in the sense that the only way one can have
c1 φ1(t) + c2 φ2(t) + · · · + cn φn(t) = 0
for all t ∈ I, where c1 , . . . , cn are constants, is if one selects c1 = 0, c2 = 0, . . ., cn = 0.
The condition given in this theorem is sometimes easier to check than the non-vanishing of the Wronskian, especially
if n = 2, where the theorem's condition is easily seen to be equivalent to neither of φ1 , φ2 being identically zero, and φ2 not being a
constant times φ1 . For example φ1 (t) = cos t, φ2 (t) = sin t are solutions of the equation y 00 + y = 0 (in all of (−∞, ∞)).
Obviously, there is no constant C such that sin t = C cos t for all t, thus {cos t, sin t} is a fundamental set of solutions for this
equation, and y = C1 cos t + C2 sin t is the general solution.
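One more sympy aside (mine): the Wronskian of cos t and sin t is identically 1, confirming that {cos t, sin t} is a fundamental set:

```python
# Check (sympy, mine): W(cos t, sin t) = cos^2 t + sin^2 t = 1, never zero,
# so {cos t, sin t} is a fundamental set of solutions of y'' + y = 0.
import sympy as sp

t = sp.symbols('t')
W = sp.Matrix([[sp.cos(t), sp.sin(t)],
               [sp.diff(sp.cos(t), t), sp.diff(sp.sin(t), t)]]).det()
print(sp.simplify(W))  # 1
```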
3 What to do.
To solve a linear homogeneous differential equation of order n:
1. Find n solutions φ1 , . . . , φn .
2. Verify that the solutions are linearly independent, for example by computing the Wronskian W (φ1 , . . . , φn )(t) and
verifying that it isn’t zero.
3. The general solution is y = C1 φ1 (t) + · · · + Cn φn (t). If that’s all that was asked for, we are done. If an initial value
problem has to be solved, use the initial values to set up the system of type (2); solve it and we are done.
Of these steps, the only truly hard one is step 1. There is no general method for finding fundamental sets of solutions of
linear homogeneous equations, except in a few cases of which the easiest is the case of constant coefficients.
If the coefficients are constant there is a relatively simple way of finding a fundamental set of solutions. It should be
noted that this method (or any analogue of it that one may fantasize) does not work if the coefficients are not
constant. Consider the equation
an y^(n) + an−1 y^(n−1) + · · · + a1 y' + a0 y = 0,
where a0 , a1 , . . . , an are real constants, an ≠ 0. We associate with it the algebraic equation
an r^n + an−1 r^{n−1} + · · · + a1 r + a0 = 0.    (3)
Generically, this equation has n roots. However, we need to be a bit more precise. The roots come in several flavors. Two
flavors are real and complex non-real. Another set of flavors is the order of the root. It is a well known College Algebra (or
High School math) fact that r1 is a root of the equation an r^n + an−1 r^{n−1} + · · · + a1 r + a0 = 0 if and only if r − r1 divides
the polynomial an rn + an−1 rn−1 + · · · + a1 r + a0 evenly, with remainder 0. In this case, sometimes r − r1 divides again the
quotient, sometimes it doesn’t. The order of the root r1 is the number of times we can keep dividing by r − r1 with remainder
zero. A more direct definition is: The root r1 of an r^n + an−1 r^{n−1} + · · · + a1 r + a0 = 0 has order k ≥ 1 if and only if we can
factor
an r^n + an−1 r^{n−1} + · · · + a1 r + a0 = (r − r1)^k q(r)
where q(r) is a polynomial (necessarily of degree n − k) that does not have r1 as a root; i.e., q(r1) ≠ 0. Roots of order 1 are
called simple roots.
We are ready to state the rules for constructing a fundamental set of solutions for a linear homogeneous equation with
constant coefficients. First, find all the distinct roots of (3). Then each root of order k contributes precisely k functions to
the fundamental set, as follows. Suppose r is one of these roots.
1. If r is real of order 1 (simple, real), then it contributes the function
e^{rt}.
2. If r is real, of order k > 1, it contributes the k functions
e^{rt}, te^{rt}, . . . , t^{k−1} e^{rt}.
3. If r = λ + µi is a complex root, µ ≠ 0, then λ − µi is also a root (we assume a0 , . . . , an are real) and both are of the
same order. We have two ways to proceed. If the order is 1, we get from this pair of roots two complex valued solutions,
namely
e^{(λ+iµ)t} and e^{(λ−iµ)t}.
If we want to stick to real valued solutions (and mostly we will), this pair of simple roots produces the two functions
e^{λt} cos µt,   e^{λt} sin µt.
Aside: The relation between the complex and the real valued case is given by Euler's formulas
e^{a+ib} = e^a (cos b + i sin b),   e^{a−ib} = e^a (cos b − i sin b),
e^a cos b = (1/2)(e^{a+ib} + e^{a−ib}),   e^a sin b = (1/(2i))(e^{a+ib} − e^{a−ib}).
4. If r = λ ± iµ are complex roots of order k, they contribute the following 2k functions to the fundamental set:
e^{λt} cos µt, te^{λt} cos µt, . . . , t^{k−1} e^{λt} cos µt,
e^{λt} sin µt, te^{λt} sin µt, . . . , t^{k−1} e^{λt} sin µt.
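The bookkeeping in rules 1 through 4 can be sketched as code (an aside; the function name `fundamental_set` and the use of sympy are mine, and `sp.roots` only succeeds when it can find the roots exactly):

```python
# Sketch (mine): build a fundamental set for a_n y^(n) + ... + a_0 y = 0
# from the roots of the characteristic polynomial and their orders.
import sympy as sp

t = sp.symbols('t')

def fundamental_set(coeffs):
    """coeffs = [a_n, ..., a_1, a_0], real. Returns a fundamental set."""
    r = sp.symbols('r')
    poly = sp.Poly(sum(a*r**i for i, a in enumerate(reversed(coeffs))), r)
    fs = []
    for root, k in sp.roots(poly).items():  # root -> its order k
        lam, mu = sp.re(root), sp.im(root)
        if mu == 0:                         # real root: e^{rt}, ..., t^{k-1} e^{rt}
            fs += [t**j*sp.exp(lam*t) for j in range(k)]
        elif mu > 0:                        # one of each conjugate pair
            fs += [t**j*sp.exp(lam*t)*sp.cos(mu*t) for j in range(k)]
            fs += [t**j*sp.exp(lam*t)*sp.sin(mu*t) for j in range(k)]
    return fs

print(fundamental_set([1, 6, 9]))  # y'' + 6y' + 9y = 0 -> [exp(-3*t), t*exp(-3*t)]
```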
4 Constant coefficient examples
The case n = 2 is fairly simple. The equation
ar^2 + br + c = 0
can only have either two distinct real roots, a single real root of order 2, or two simple complex roots, of which one is the
complex conjugate of the other. We present a few examples of each case.
1. Solve
6y'' − 7y' + 2y = 0.
The so called characteristic equation is 6r^2 − 7r + 2 = 0; solving by the quadratic formula gives
r = (7 ± √(7^2 − 4 · 6 · 2))/12 = (7 ± 1)/12;
in other words there are two distinct roots 1/2, 2/3. The general solution is
y = C1 e^{t/2} + C2 e^{2t/3}.
2. Solve
y'' + 6y' + 9y = 0.
The characteristic equation is r2 + 6r + 9 = 0, which has the single root r = −3, of order two. A fundamental set is
given by
{e^{−3t}, te^{−3t}}.
The general solution of the differential equation is
y = C1 e^{−3t} + C2 te^{−3t}.
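As a quick check (a sympy aside, mine), both members of the fundamental set really do solve the equation:

```python
# Check (sympy, mine): e^{-3t} and t e^{-3t} both solve y'' + 6y' + 9y = 0.
import sympy as sp

t = sp.symbols('t')
for y in (sp.exp(-3*t), t*sp.exp(-3*t)):
    print(sp.simplify(sp.diff(y, t, 2) + 6*sp.diff(y, t) + 9*y))  # 0
```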
3. Solve
y'' − y = 0.
The equation r^2 − 1 = 0 has the two distinct roots r = 1 and r = −1. Thus {e^t, e^{−t}} is a fundamental set of solutions.
Consider now
cosh t = (1/2)(e^t + e^{−t}),   sinh t = (1/2)(e^t − e^{−t}).
Being (linear) combinations of solutions, they are solutions. Moreover
W(cosh t, sinh t) = | cosh t   sinh t |
                    | sinh t   cosh t | = cosh^2 t − sinh^2 t = 1 ≠ 0.
Thus {cosh t, sinh t} is another fundamental set of the same equation. In many ways, it is a better set. Say we want to
solve the initial value problem
y'' − y = 0,    y(0) = 2,  y'(0) = −1.
Using the first set, we write the general solution in the form
y = C1 e^t + C2 e^{−t}, thus y' = C1 e^t − C2 e^{−t}.
Setting t = 0 and equating to the desired initial values gives the equations
C1 + C2 = 2,
C1 − C2 = −1.
The system is pretty easy to solve, yet solving does involve some (truly minimal) effort. We get C1 = 1/2, C2 = 3/2;
the solution of the initial value problem is
y = (1/2) e^t + (3/2) e^{−t}.
On the other hand, if we had used the second set, we would have written the general solution in the form
y = C1 cosh t + C2 sinh t, thus y' = C1 sinh t + C2 cosh t.
Setting t = 0 and equating to the desired initial values gives at once C1 = 2, C2 = −1 and the solution to the initial
value problem is
y = 2 cosh t − sinh t.
(This is the same solution as before, of course).
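One can also let a computer confirm that the two answers agree (a sympy aside, mine):

```python
# Check (sympy, mine): the two forms of the IVP solution are the same function,
# 2*cosh(t) - sinh(t) == (1/2)*e^t + (3/2)*e^{-t}.
import sympy as sp

t = sp.symbols('t')
y_exp = sp.exp(t)/2 + 3*sp.exp(-t)/2
y_hyp = 2*sp.cosh(t) - sp.sinh(t)
print(sp.expand((y_exp - y_hyp).rewrite(sp.exp)))  # 0
```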
4. Solve
y'' + y' + y = 0.
The roots of r^2 + r + 1 = 0 are given by
r = (−1 ± √(−3))/2 = −1/2 ± (√3/2) i.
These are complex, non-real roots; they correspond to the case λ = −1/2, µ = √3/2 in the notation used above. A
fundamental set is given by
{ e^{−t/2} cos(√3 t/2),  e^{−t/2} sin(√3 t/2) },
the general solution of the equation by
y = C1 e^{−t/2} cos(√3 t/2) + C2 e^{−t/2} sin(√3 t/2).
5. Solve
y'' + y = 0.
The equation r^2 + 1 = 0 has the roots r = ±√(−1) = ±i. This corresponds to the case λ = 0, µ = 1. A fundamental set
is {cos t, sin t}, the general solution is given by y = C1 cos t + C2 sin t.
Higher order equations with constant coefficients can get a bit messier. We consider an example that has a bit of everything.
Of course, I set it up so it works out; factoring equations of degree 5 or higher is not something that can be done in a nice
closed form (without the aid of some numerical calculations), except in exceptional circumstances. Please, don't worry, you
won't run into a monster like this one in a quiz or exam.
Solve
2y^{(13)} − 7y^{(12)} − 22y^{(11)} + 36y^{(10)} + 154y^{(9)} + 265y^{(8)} − 262y^{(7)} − 1826y^{(6)}
− 4400y^{(5)} − 1508y^{(4)} + 10824y''' + 27000y'' + 26784y' + 12960y = 0.
The only reason this is not absolutely horrible is that it was carefully set up. The left hand side of the characteristic equation
2r^{13} − 7r^{12} − 22r^{11} + 36r^{10} + 154r^9 + 265r^8 − 262r^7 − 1826r^6 − 4400r^5
− 1508r^4 + 10824r^3 + 27000r^2 + 26784r + 12960 = 0
factors and the characteristic equation can be rewritten in the form
(2r + 5)(r − 3)^4 (r^2 + 2r + 2)^3 (r^2 + 4) = 0
showing clearly that there are the following roots:
1. The simple real root r = −5/2. It contributes e^{−5t/2} to a fundamental set.
2. The real root r = 3 of order 4. It contributes e^{3t}, te^{3t}, t^2 e^{3t}, and t^3 e^{3t} to the fundamental set we are constructing.
3. The complex, non-real root pair r = −1 ± i, both of order 3. They contribute the functions e^{−t} cos t, e^{−t} sin t, te^{−t} cos t,
te^{−t} sin t, t^2 e^{−t} cos t, and t^2 e^{−t} sin t to the set.
4. The complex non real pair of simple roots r = ±2i. They contribute sin 2t, cos 2t to the set.
The fundamental set we obtain this way is:
{ e^{−5t/2}, e^{3t}, te^{3t}, t^2 e^{3t}, t^3 e^{3t}, e^{−t} cos t, e^{−t} sin t, te^{−t} cos t, te^{−t} sin t,
t^2 e^{−t} cos t, t^2 e^{−t} sin t, sin 2t, cos 2t }.
The general solution of this equation is
y = C1 e^{−5t/2} + C2 e^{3t} + C3 te^{3t} + C4 t^2 e^{3t} + C5 t^3 e^{3t} + C6 e^{−t} cos t + C7 e^{−t} sin t
+ C8 te^{−t} cos t + C9 te^{−t} sin t + C10 t^2 e^{−t} cos t + C11 t^2 e^{−t} sin t + C12 sin 2t + C13 cos 2t.
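The factorization above is easy to check mechanically (a sympy aside, mine): expanding the factored form reproduces the degree 13 characteristic polynomial coefficient by coefficient.

```python
# Verify (sympy, mine): the factored characteristic equation expands back to
# the original degree 13 polynomial.
import sympy as sp

r = sp.symbols('r')
factored = (2*r + 5)*(r - 3)**4*(r**2 + 2*r + 2)**3*(r**2 + 4)
coeffs = sp.Poly(sp.expand(factored), r).all_coeffs()
print(coeffs)
# [2, -7, -22, 36, 154, 265, -262, -1826, -4400, -1508, 10824, 27000, 26784, 12960]
```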