Calculus IV, MA 214
Summer II 2014
Lecture Notes
Section 3.2
In Section 3.1, we examined second order homogeneous linear equations with constant coefficients, that is, equations of the form y'' + ay' + by = 0. Apart from developing the idea of the characteristic equation, we introduced two important theorems we will utilize again today, namely Theorem 3.2.1 (Existence and Uniqueness) and Theorem 3.2.2 (Principle of Superposition). We mentioned, but did not fully define, the notion of a general solution, which we described as the set of all linear combinations of two fixed linearly independent solutions, y1 and y2. We could (and did) write this more concisely as C1 y1 + C2 y2.
The goal of Section 3.2 is twofold. Firstly, we want to develop theory that tells us whether, given any other solution φ(t), it is included in this general solution. In other words, can we write φ(t) = C1 y1 + C2 y2 for some C1 and C2? Secondly, we want to consider more complicated second order homogeneous linear equations, i.e., those whose coefficients may not be constant.
Def: In order to lighten the notational load, we now define a particular differential operator. Let p and q be continuous functions on an open interval I. Then for any function φ which is twice differentiable on I, we define
L[φ] = φ'' + pφ' + qφ
Then L[φ] is itself a function on I, and we have
L[φ](t) = φ''(t) + p(t)φ'(t) + q(t)φ(t)
Ex: Suppose p(t) = 5t, q(t) = sin(t) and φ(t) = t^3. These are all defined on the whole real line. Then
L[φ](t) = 6t + (5t)(3t^2) + sin(t)·t^3 = 6t + 15t^3 + t^3 sin(t)
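As a quick check (not part of the original notes), this computation can be reproduced symbolically; the following sympy sketch simply evaluates L[φ] = φ'' + pφ' + qφ for the functions above.

```python
import sympy as sp

t = sp.symbols('t')
phi = t**3
p = 5*t
q = sp.sin(t)

# L[phi] = phi'' + p*phi' + q*phi
L_phi = sp.diff(phi, t, 2) + p*sp.diff(phi, t) + q*phi
print(sp.expand(L_phi))   # 15*t**3 + t**3*sin(t) + 6*t
```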
Let’s remind ourselves of the two important theorems from last time (Theorem 3.2.1 and
Theorem 3.2.2 from B&D).
Theorem 1 (Existence and Uniqueness). Consider the initial value problem
y'' + p(t)y' + q(t)y = g(t),    y(t0) = y0,   y'(t0) = y0',    (1)
where p, q, and g are continuous on an open interval I that contains the point t0 . Then
there is exactly one solution y = φ(t) of this problem, and the solution exists throughout the
interval I.
Ex.1 Suppose we have the initial value problem
(t − 1)(t + 5)y'' + t^2 y' − 17y = 0;    y(−1) = 3,   y'(−1) = 1
What is the longest interval in which the solution is guaranteed to exist? Dividing through by (t − 1)(t + 5) gives us
y'' + [t^2/((t − 1)(t + 5))] y' − [17/((t − 1)(t + 5))] y = 0,
meaning if we define p(t) = t^2/((t − 1)(t + 5)) and q(t) = −17/((t − 1)(t + 5)), then p and q are continuous everywhere except t = 1 and t = −5. Since our t0 = −1 falls between these values, Theorem 1 says that the longest interval in which a solution is guaranteed to exist and be unique is −5 < t < 1.
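The same conclusion can be read off mechanically: p and q fail to be continuous exactly at the zeros of the leading coefficient. The short sympy sketch below (an illustration, not part of the original notes) just locates those zeros.

```python
import sympy as sp

t = sp.symbols('t')
lead = (t - 1)*(t + 5)              # leading coefficient of y''

# p(t) and q(t) are continuous except where the leading coefficient vanishes
singular_points = sp.solve(sp.Eq(lead, 0), t)
print(sorted(singular_points))      # [-5, 1]; t0 = -1 lies between them,
                                    # so the guaranteed interval is -5 < t < 1
```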
We can now state the second theorem using our new notation.
Theorem 2 (Principle of Superposition). If y1 and y2 are two solutions of the differential
equation
L[y] = y'' + p(t)y' + q(t)y = 0    (2)
then the linear combination c1 y1 + c2 y2 is also a solution for any values of the constants c1 and c2.
We begin our investigation by considering the following question: Given two solutions y1 and y2 of equation (2) and initial conditions y(t0) = y0 and y'(t0) = y0', when is it possible to find c1 and c2 such that c1 y1 + c2 y2 satisfies these initial conditions?
Stated another way, we need to solve the system of linear equations
c1 y1(t0) + c2 y2(t0) = y0
c1 y1'(t0) + c2 y2'(t0) = y0'
Sparing the reader some algebra, we arrive at
c1 = [y0 y2'(t0) − y0' y2(t0)] / [y1(t0)y2'(t0) − y1'(t0)y2(t0)],    c2 = [−y0 y1'(t0) + y0' y1(t0)] / [y1(t0)y2'(t0) − y1'(t0)y2(t0)]
Note that the denominator is the same for both. Therefore, if y1(t0)y2'(t0) − y1'(t0)y2(t0) ≠ 0, it is always possible to find c1 and c2 which satisfy the initial conditions. However, we note that y1(t0)y2'(t0) − y1'(t0)y2(t0) is just the following determinant:
W = | y1(t0)   y2(t0)  |
    | y1'(t0)  y2'(t0) |  = y1(t0)y2'(t0) − y1'(t0)y2(t0)    (3)
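The formulas for c1 and c2 come from solving a 2×2 linear system, and it is easy to rederive them symbolically. The sympy sketch below (an illustration, not part of the original notes) uses plain symbols standing for the numbers y1(t0), y2(t0), y1'(t0), y2'(t0), y0, and y0'.

```python
import sympy as sp

# symbols standing for the numbers y1(t0), y2(t0), y1'(t0), y2'(t0), y0, y0'
y1, y2, dy1, dy2, y0, dy0, c1, c2 = sp.symbols('y1 y2 dy1 dy2 y0 dy0 c1 c2')

sol = sp.solve(
    [sp.Eq(c1*y1 + c2*y2, y0),       # c1*y1(t0) + c2*y2(t0) = y0
     sp.Eq(c1*dy1 + c2*dy2, dy0)],   # c1*y1'(t0) + c2*y2'(t0) = y0'
    [c1, c2])

# Both agree with the formulas above; the common denominator is the Wronskian
print(sp.simplify(sol[c1]))   # equivalent to (y0*dy2 - dy0*y2)/(y1*dy2 - dy1*y2)
print(sp.simplify(sol[c2]))   # equivalent to (dy0*y1 - y0*dy1)/(y1*dy2 - dy1*y2)
```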
Def: The determinant W is called the Wronskian determinant, or simply the Wronskian.
This allows us to state the result of our previous discussion more precisely.
Theorem 3 (Theorem 3.2.3 from B&D). Suppose that y1 and y2 are two solutions of
L[y] = y'' + p(t)y' + q(t)y = 0
and that the initial conditions y(t0) = y0 and y'(t0) = y0' are assigned. Then it is always possible to choose the constants c1, c2 so that
y = c1 y1(t) + c2 y2(t)
satisfies the differential equation and the initial conditions if and only if the Wronskian
W = y1 y2' − y1' y2
is not zero at t0.
Note: In particular, this says that if the Wronskian of y1 and y2 is zero at some t0, then there exists a pair of initial conditions y0 and y0' for which it is not possible to choose constants which make y = c1 y1 + c2 y2 satisfy these initial conditions.
Ex.2 (Example 3 from B&D) We have shown previously that y1(t) = e^{−2t} and y2(t) = e^{−3t} are both solutions of the differential equation y'' + 5y' + 6y = 0. We now consider the Wronskian
W = | e^{−2t}     e^{−3t}   |
    | −2e^{−2t}   −3e^{−3t} |  = −3e^{−5t} − (−2e^{−5t}) = −e^{−5t}
Since W = −e^{−5t} is never zero, it follows from Theorem 3 that a linear combination c1 y1 + c2 y2 can be constructed that satisfies the equation and any prescribed initial conditions, regardless of the value t0 at which the initial conditions are stipulated.
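For readers who want to double-check such computations, the Wronskian is just a 2×2 determinant, so it is easy to verify symbolically (a sketch, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t')
y1 = sp.exp(-2*t)
y2 = sp.exp(-3*t)

# Wronskian = determinant of the matrix with rows (y1, y2) and (y1', y2')
W = sp.Matrix([[y1, y2], [sp.diff(y1, t), sp.diff(y2, t)]]).det()
print(sp.simplify(W))   # -exp(-5*t), which is never zero
```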
We have been using the term general solution without any real proof that it is in fact general, that is, that any other solution can be represented by an appropriate choice of constants in the general solution. The next theorem justifies this use.
Theorem 4 (Theorem 3.2.4 from B&D). Suppose that y1 and y2 are two solutions of the
differential equation
L[y] = y'' + p(t)y' + q(t)y = 0
Then the family of solutions
y = c1 y1 (t) + c2 y2 (t)
with arbitrary coefficients c1 and c2 includes every solution of L[y] = y'' + p(t)y' + q(t)y = 0
if and only if there is a point t0 where the Wronskian of y1 and y2 is not zero.
Proof. ⇐= Assume that there is some t0 where the Wronskian of y1 and y2 is not zero. Then Theorem 3 says that regardless of how we choose initial conditions y0 and y0', it will be possible to find c1, c2 such that c1 y1 + c2 y2 satisfies these initial conditions. Now suppose we are given some solution φ(t). We want to show that φ(t) = c1 y1 + c2 y2 for some c1 and c2. The key is to let y0 = φ(t0) and y0' = φ'(t0). Suppose b1 and b2 are coefficients such that b1 y1 + b2 y2 satisfies these initial conditions. However, φ(t) also satisfies these initial conditions, so by Theorem 1 (Existence/Uniqueness) they must be the same solution. That is, φ(t) = b1 y1 + b2 y2. Since φ was arbitrary, we see that any solution can be written in the form c1 y1 + c2 y2.
=⇒ Here we will prove the contrapositive, i.e., that if the Wronskian of y1 and y2 is zero everywhere, then there exists some solution φ(t) of L[y] = y'' + p(t)y' + q(t)y = 0 which cannot be written in the form c1 y1 + c2 y2. So suppose the Wronskian is zero everywhere. By Theorem 3, for any t0 there exists a pair of initial conditions y0 and y0' such that it is impossible to find c1, c2 such that c1 y1 + c2 y2 satisfies these initial conditions. However, by Theorem 1 there exists some solution φ(t) that satisfies the initial conditions y0 and y0'. This φ(t) cannot be represented as a linear combination of y1 and y2, which is what we wanted to show.
Note: Of particular importance, Theorem 4 tells us that if we can find two solutions y1 and y2 whose Wronskian is not identically 0 (in other words, it is not zero everywhere), then we have in essence found all solutions to the differential equation.
Therefore we are justified in our use of the term general solution. It is also common to say that y1 and y2 form a fundamental set of solutions if and only if their Wronskian is not identically zero.
Ex.3 (Example 4 from B&D) In section 3.1, we saw that if we have a homogeneous linear
second order equation with constant coefficients, we can find solutions by examining the roots
of the characteristic equation. Suppose r1 and r2 are roots of the characteristic equation and
recall that we have shown that this implies e^{r1 t} and e^{r2 t} are solutions.
W = | e^{r1 t}      e^{r2 t}    |
    | r1 e^{r1 t}   r2 e^{r2 t} |  = r2 e^{(r1+r2)t} − r1 e^{(r1+r2)t} = (r2 − r1)e^{(r1+r2)t}
Thus W = 0 if and only if r1 = r2. Then Theorem 4 implies e^{r1 t} and e^{r2 t} form a fundamental set of solutions if and only if r1 ≠ r2.
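The same determinant can be computed with symbolic r1 and r2; the following sketch (not part of the original notes) is one way to confirm the factorization.

```python
import sympy as sp

t, r1, r2 = sp.symbols('t r1 r2')
y1 = sp.exp(r1*t)
y2 = sp.exp(r2*t)

W = sp.Matrix([[y1, y2], [sp.diff(y1, t), sp.diff(y2, t)]]).det()
# factors as (r2 - r1)*exp((r1 + r2)*t), up to how sympy arranges the product
print(sp.factor(W))
```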
It is natural to ask when it is possible to find such a fundamental set. The following theorem
answers that question in the affirmative.
Theorem 5 (Theorem 3.2.5 from B&D). Consider the differential equation
L[y] = y'' + p(t)y' + q(t)y = 0
whose coefficients p and q are continuous on some open interval I. Choose some point t0 in I. Let y1 be the solution satisfying the initial conditions y1(t0) = 1, y1'(t0) = 0 and let y2 be the solution satisfying the initial conditions y2(t0) = 0, y2'(t0) = 1. Then y1 and y2 form a fundamental set of solutions.
Proof. The existence of y1 and y2 is guaranteed by Theorem 1. Then we check and see that
W = | y1(t0)   y2(t0)  |
    | y1'(t0)  y2'(t0) |  = 1 · 1 − 0 · 0 = 1
Thus the Wronskian is nonzero at this t0, so Theorem 4 says they form a fundamental set of solutions.
This theorem is important in that it guarantees that a fundamental set of solutions exists.
However, this is not a very practical way to go about finding them.
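In simple cases, though, the prescription of Theorem 5 can be carried out explicitly. As an illustration (this concrete example is not from the original notes), take y'' + y = 0 with t0 = 0: the two prescribed solutions turn out to be cos(t) and sin(t), and their Wronskian is 1.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + y(t), 0)   # y'' + y = 0, with t0 = 0

# y1: y(0) = 1, y'(0) = 0   and   y2: y(0) = 0, y'(0) = 1
y1 = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0}).rhs
y2 = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1}).rhs
print(y1, y2)                            # cos(t) sin(t)

W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)
print(W)                                 # 1
```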
The last theorem we will discuss is due to a mathematician named Abel and gives a very
interesting result about the Wronskian of two solutions.
Theorem 6 (Theorem 3.2.6 from B&D). If y1 and y2 are solutions of the differential equation
L[y] = y'' + p(t)y' + q(t)y = 0
where p and q are continuous on an open interval I, then the Wronskian satisfies
W(y1, y2) = C · e^{−∫p(t)dt}    (4)
where C is a certain constant that depends on y1 and y2, but not on t.
Proof. We note that y1 and y2 satisfy the equations
y1'' + p(t)y1' + q(t)y1 = 0
y2'' + p(t)y2' + q(t)y2 = 0
Our first goal is to eliminate q(t). We multiply the first equation by −y2 and multiply the second equation by y1, giving us
−y2 y1'' − p(t)y2 y1' − q(t)y2 y1 = 0
y1 y2'' + p(t)y1 y2' + q(t)y1 y2 = 0
Then adding them together gives us
(y1 y2'' − y1'' y2) + p(t)(y1 y2' − y1' y2) = 0    (5)
Note that if we let
W = y1 y2' − y1' y2
as usual, then
W'(t) = (y1 y2'' + y1' y2') − (y1' y2' + y1'' y2) = y1 y2'' − y1'' y2
Then we can write equation (5) as
W' + p(t)W = 0
However, this is just a first order linear differential equation. Thus we can solve it using the integrating factor e^{∫p(t)dt} to get the equation
W e^{∫p(t)dt} = C   =⇒   W = C e^{−∫p(t)dt}
Notes on Theorem 6:
• Theorem 6 implies that W(y1, y2)(t) is either zero for all t in I (the case where C = 0) or W(y1, y2)(t) is never zero in I (the case where C ≠ 0).
• The Wronskian of any fundamental set of solutions can be determined up to a constant
without ever first solving the differential equation.
Ex.4: Consider the equation
y'' + 2ty' + sin(t)y = 0
Suppose that y1 and y2 are two solutions. Then Theorem 6 implies that the Wronskian of y1 and y2 is
W(y1, y2) = C e^{−∫2t dt} = C e^{−t^2}
where again C depends on y1 and y2.
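Abel's formula can also be checked on an equation whose solutions we already know. For the equation from Ex.2, y'' + 5y' + 6y = 0, we have p(t) = 5, and the sketch below (an illustration, not part of the original notes) confirms that the Wronskian of e^{−2t} and e^{−3t} has the form C e^{−∫5 dt} = C e^{−5t}, with C = −1 for this particular pair.

```python
import sympy as sp

t, C = sp.symbols('t C')

# Solutions of y'' + 5y' + 6y = 0 (from Ex.2), where p(t) = 5
y1 = sp.exp(-2*t)
y2 = sp.exp(-3*t)

W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)
abel = C*sp.exp(-sp.integrate(5, t))      # Abel's formula: C*exp(-5*t)

print(W)                                  # -exp(-5*t)
print(sp.solve(sp.Eq(W, abel), C))        # [-1], so C = -1 for this pair
```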