Series Solutions

Introduction. Throughout these pages I will assume that you are familiar with power
series and the concept of the radius of convergence of a power series.
Let us consider the example of the second order differential equation
y'' + 4y = 0, y(0)=1, y'(0)=0.
You probably already know that the solution is given by y(t) = cos(2t); recall that
this function has as its Taylor series
cos(2t) = 1 - (2t)^2/2! + (2t)^4/4! - ...,
and that this representation is valid for all t.
Do you remember how to compute the Taylor series expansion for a given function?
If everything works out, then
y(t) = Σ_{n=0}^∞ a_n t^n, where a_n = y^(n)(0)/n!.
Here y^(n)(0) denotes the nth derivative of y at the point t=0.
What does this mean? To compute the Taylor series for the solution to our differential
equation, we just have to compute its derivatives. Note that the initial conditions give
us a head start: y(0)=1, so a0=1. y'(0)=0, so a1=0.
We can rewrite the differential equation as
y''=-4y,
so in particular
y''(0) = -4 y(0) = -4.
Consequently a2 = -4/2! = -2. By now, we have figured out that the solution to our
differential equation has as its second degree Taylor polynomial the function
1 - 2t^2.
Next we differentiate
y''=-4y
on both sides with respect to t, to obtain
y'''=-4y',
so in particular
y'''(0)=-4y'(0)=0,
yielding a3=0.
We can continue in this fashion as long as we like: Differentiating
y'''=-4y'
yields
y(4)=-4y'',
in particular
y(4)(0) = -4 y''(0) = 16,
so a4 = 16/4! = 2/3. At this point we have figured out that the solution to our differential
equation has as its fourth degree Taylor polynomial the function
1 - 2t^2 + (2/3) t^4.
We expect that this Taylor polynomial is reasonably close to the solution y(t) of the
differential equation, at least close to t=0. In the picture below, the solution
y(t) = cos(2t)
is drawn in red, while the power series approximation
1 - 2t^2 + (2/3) t^4
is depicted in blue:
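If you want to double-check such hand computations, the derivative bookkeeping is easy to automate. Here is a small sketch (Python with exact fractions is my choice here, not part of the original discussion) that iterates the relation y^(n+2)(0) = -4 y^(n)(0):

```python
from fractions import Fraction

# Sketch: compute Taylor coefficients a_n = y^(n)(0)/n! for y'' = -4y,
# y(0)=1, y'(0)=0, by iterating the relation y^(n+2)(0) = -4*y^(n)(0).
def taylor_coeffs(n_terms):
    derivs = [Fraction(1), Fraction(0)]      # y(0), y'(0)
    while len(derivs) < n_terms:
        derivs.append(-4 * derivs[-2])       # differentiate y'' = -4y repeatedly
    fact = 1
    coeffs = []
    for n, d in enumerate(derivs):
        fact = fact * n if n else 1          # running factorial n!
        coeffs.append(d / fact)
    return coeffs

print(taylor_coeffs(5))    # a0..a4: 1, 0, -2, 0, 2/3 -- matching the values above
```

The same loop works for any equation of the form y'' = c·y.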
The method outlined also works, in theory, for non-linear differential equations, even
though the computational effort usually becomes prohibitive after the first few steps.
Let's consider the example
y'' + sin(y) = 0, y(0)=0, y'(0)=1.
We try to find the Taylor polynomials for the solution, of the form
y(t) = Σ_{n=0}^∞ a_n t^n, where a_n = y^(n)(0)/n!.
The initial conditions yield a0=0 and a1=1. To find a2, we rewrite the differential
equation as
y'' = -sin(y)
and plug in t=0:
y''(0) = -sin(y(0)) = -sin(0) = 0;
consequently a2=0.
Next we differentiate
y'' = -sin(y)
on both sides with respect to t, to obtain
y''' = -cos(y) y',
so in particular
y'''(0) = -cos(0) y'(0) = -1,
yielding a3 = -1/3! = -1/6. Note that since we were differentiating with respect to t, we
had to use the chain rule to find the derivative of the term sin(y).
Let's continue a little bit longer: We differentiate
y''' = -cos(y) y'
to obtain
y(4) = sin(y) (y')^2 - cos(y) y''.
Consequently y(4)(0)=0, and thus a4=0.
Differentiating
y(4) = sin(y) (y')^2 - cos(y) y''
yields
y(5) = cos(y) (y')^3 + 3 sin(y) y' y'' - cos(y) y'''.
You can check that y(5)(0) = y'(0)^3 - y'''(0) = 1 - (-1) = 2, and thus a5 = 2/5! = 1/60.
The fifth degree Taylor polynomial approximation to the solution of our differential
equation has the form
t - (1/6) t^3 + (1/60) t^5.
We again expect that this Taylor polynomial is reasonably close to the solution y(t) of
the differential equation, at least close to t=0. In the picture below, the solution, as
computed by a numerical method, is drawn in red, while the power series
approximation
t - (1/6) t^3 + (1/60) t^5
is depicted in blue:
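Since we only computed the solution up to the fifth Taylor coefficient, it is reassuring to compare the polynomial t - (1/6)t^3 + (1/60)t^5 against a numerical solution of the initial value problem y'' + sin(y) = 0, y(0)=0, y'(0)=1. The following sketch (Python; the RK4 integrator and step size are my choices, not part of the original discussion) does that near t=0:

```python
import math

# Sketch: sanity-check the fifth degree Taylor polynomial t - t^3/6 + t^5/60
# for y'' = -sin(y), y(0)=0, y'(0)=1, against a numerical solution
# (classical RK4 applied to the first order system y' = v, v' = -sin(y)).
def rk4_pendulum(t_end, h=1e-3):
    y, v = 0.0, 1.0
    for _ in range(round(t_end / h)):
        k1y, k1v = v, -math.sin(y)
        k2y, k2v = v + h/2*k1v, -math.sin(y + h/2*k1y)
        k3y, k3v = v + h/2*k2v, -math.sin(y + h/2*k2y)
        k4y, k4v = v + h*k3v, -math.sin(y + h*k3y)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y

def taylor5(t):
    return t - t**3/6 + t**5/60

# Near t=0 the two agree to many digits.
assert abs(rk4_pendulum(0.1) - taylor5(0.1)) < 1e-6
```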
The next sections will develop an organized method to find power series solutions for
second order linear differential equations. Here are a couple of examples to practice
what you have learned so far:
Exercise 1:
Find the fifth degree Taylor polynomial of the solution to the differential equation
y'' = 3y, y(0)=1, y'(0)=-1.
Answer. Since y(0)=1, a0=1. Similarly y'(0)=-1 implies that a1=-1.
Since y''=3y, we obtain
y''(0)=3y(0)=3,
and thus a2=3/2.
Differentiating y''=3y yields
y'''=3y',
in particular
y'''(0)=3y'(0)=-3,
so a3=-3/3!=-1/2.
Since y(4)=3 y'', we get a4=9/4!=3/8.
One more time: y(5) = 3y''', so a5 = 3(- 3)/5! = - 3/40.
The Taylor approximation has the form
f(t) = 1 - t + (3/2) t^2 - (1/2) t^3 + (3/8) t^4 - (3/40) t^5.
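Instead of differentiating repeatedly, one can also read off the coefficients from the relation a_{n+2} = 3 a_n/((n+2)(n+1)), which follows from y'' = 3y by the methods developed in the later sections. A short sketch (Python with exact fractions, my choice) confirms the values above:

```python
from fractions import Fraction

# Sketch: iterate a_{n+2} = 3*a_n/((n+2)(n+1)), which follows from y'' = 3y,
# starting from a0=1, a1=-1, and compare with Exercise 1.
a = [Fraction(1), Fraction(-1)]
for n in range(4):
    a.append(3 * a[n] / ((n + 2) * (n + 1)))

assert a == [1, -1, Fraction(3, 2), Fraction(-1, 2), Fraction(3, 8), Fraction(-3, 40)]
```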
Exercise 2:
Find the fourth degree Taylor polynomial of the solution to the differential equation
y'' = -y^2, y(0)=1, y'(0)=0.
Answer. The initial conditions yield a0=1 and a1=0. Since y''=-y^2, we obtain that
y''(0) = -y(0)^2 = -1, and thus a2 = -1/2! = -1/2.
Differentiating y''=-y^2 yields
y''' = -2 y y',
so y'''(0) = -2 y(0) y'(0) = 0, and hence a3=0.
When we differentiate
y''' = -2 y y',
we obtain
y(4) = -2 (y')^2 - 2 y y'',
and consequently
y(4)(0) = -2 y'(0)^2 - 2 y(0) y''(0) = 2, so a4 = 2/4! = 1/12.
The fourth degree Taylor approximation has the form
1 - (1/2) t^2 + (1/12) t^4.
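Even for this non-linear equation the coefficients can be generated mechanically: the Taylor coefficients of y^2 are the Cauchy product of the a_n with themselves, so matching powers of t in y'' = -y^2 gives (n+2)(n+1) a_{n+2} = -(y^2)_n. A sketch (Python with exact fractions, my choice; the Cauchy-product route is an alternative to the repeated differentiation used above):

```python
from fractions import Fraction

# Sketch: for y'' = -y^2, the coefficient of t^n in y^2 is the Cauchy product
# sum_j a_j * a_{n-j}; matching powers gives (n+2)(n+1)*a_{n+2} = -(y^2)_n.
a = [Fraction(1), Fraction(0)]        # y(0)=1, y'(0)=0
for n in range(3):
    sq_n = sum(a[j] * a[n - j] for j in range(n + 1))   # coeff of t^n in y^2
    a.append(-sq_n / ((n + 2) * (n + 1)))

assert a == [1, 0, Fraction(-1, 2), 0, Fraction(1, 12)]
```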
Derivatives and Index Shifting
Given a power series
y(t) = Σ_{n=0}^∞ a_n t^n,
we can find its derivative by differentiating term by term:
y'(t) = Σ_{n=1}^∞ n a_n t^(n-1).
Here we used that the derivative of the term a_n t^n equals a_n n t^(n-1). Note that the start of
the summation changed from n=0 to n=1, since the constant term a0 has 0 as its
derivative.
The second derivative is computed similarly:
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2).
Taking the derivative of a power series does not change its radius of convergence, so
y(t), y'(t) and y''(t) will all have the same radius of convergence.
The rest of this section is devoted to "index shifting". Consider the example
∫_2^5 f(x+1) dx.
Using a simple substitution u=x+1, we can rewrite this integral as
∫_3^6 f(u) du,
or changing the dummy variable u back to x, we get:
∫_3^6 f(x) dx.
The expression (x+1) is "shifted down" by one unit to x, while the limits of integration
are "shifted up" by one unit from 2 to 3, and 5 to 6.
Summation is just a special case of integration, so an analogous "index shifting" will
work:
Σ_{n=2}^5 a_{n+1} = Σ_{n=3}^6 a_n.
You should convince yourself that both of these expressions are indeed the same, by
writing out explicitly the four terms of each of the two formulas!
Let's try this for our derivative formulas:
y'(t) = Σ_{n=1}^∞ n a_n t^(n-1) = Σ_{n=0}^∞ (n+1) a_{n+1} t^n.
We shifted each occurrence of n in the expression up by one unit, while the limits of
summation were shifted down by one unit, from 1 to 0, and from ∞ to ∞.
You should once again convince yourself that the first and the last formula are indeed
the same, by writing out explicitly the first few terms of each of the two formulas!
As a last example, let's shift the formula for the second derivative by 2 units:
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2) = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n.
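A quick finite check of such an index shift can be done numerically; the following sketch (Python, my addition) compares both sides of the second-derivative shift, truncated at a finite N, for arbitrary sample coefficients:

```python
from fractions import Fraction

# Sketch: check that sum_{n=2..N-1} n(n-1) a_n t^(n-2) equals
# sum_{n=0..N-3} (n+2)(n+1) a_{n+2} t^n for a finite list of coefficients.
a = [3, 1, 4, 1, 5, 9, 2, 6]          # arbitrary sample coefficients
t = Fraction(1, 2)                    # exact arithmetic, no rounding
N = len(a)
lhs = sum(n * (n - 1) * a[n] * t**(n - 2) for n in range(2, N))
rhs = sum((n + 2) * (n + 1) * a[n + 2] * t**n for n in range(N - 2))
assert lhs == rhs
```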
Example. Let us look (again) at the example
y''+4y=0.
Using other techniques it is not hard to see that the solutions are of the form
y(t) = A sin(2t) + B cos(2t).
We want to illustrate how to find power series solutions for a second-order linear
differential equation.
The generic form of a power series is
y(t) = Σ_{n=0}^∞ a_n t^n.
We have to determine the right choice for the coefficients (an).
As in other techniques for solving differential equations, once we have a "guess" for
the solutions, we plug it into the differential equation. Recall from the previous
section that
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2).
Plugging this information into the differential equation we obtain:
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) + 4 Σ_{n=0}^∞ a_n t^n = 0.
Our next goal is to simplify this expression such that only one summation sign "Σ"
remains. The obstacle we encounter is that the powers of both sums are different, t^(n-2)
for the first sum and t^n for the second sum. We make them the same by shifting the
index of the first sum by 2 units to obtain
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n + 4 Σ_{n=0}^∞ a_n t^n = 0.
Now we can combine the two sums as follows:
Σ_{n=0}^∞ [(n+2)(n+1) a_{n+2} t^n + 4 a_n t^n] = 0,
and factor out t^n:
Σ_{n=0}^∞ [(n+2)(n+1) a_{n+2} + 4 a_n] t^n = 0.
Next we need a result you probably already know in the case of polynomials: A
polynomial is identically equal to zero if and only if all of its coefficients are equal to
zero. This result also holds true for power series:
Theorem. A power series is identically equal to zero if and only if all of its
coefficients are equal to zero.
This theorem applies directly to our example: The power series on the left is
identically equal to zero, consequently all of its coefficients are equal to 0:
(n+2)(n+1) a_{n+2} + 4 a_n = 0 for all n = 0, 1, 2, ....
Solving these equations for the "highest index" n+2, we can rewrite this as
a_{n+2} = -4 a_n / ((n+2)(n+1)) for all n = 0, 1, 2, ....
These equations are known as the "recurrence relations" of the differential
equation. The recurrence relations contain all the information about the coefficients
we need.
Recall that
y(t) = a0 + a1 t + a2 t^2 + a3 t^3 + ...,
in particular y(0)=a0, and
y'(t) = a1 + 2 a2 t + 3 a3 t^2 + ...,
in particular y'(0)=a1. This means we can think of the first two coefficients a0 and a1
as the initial conditions of the differential equation.
How can we evaluate the next coefficient a2? Let us read our recurrence relations for
the case n=0:
a2 = -4 a0 / (2·1) = -2 a0.
Reading off the recurrence relation for n=1 yields
a3 = -4 a1 / (3·2) = -(2/3) a1.
Continue ad nauseam:
a4 = -4 a2 / (4·3) = (2/3) a0,
a5 = -4 a3 / (5·4) = (2/15) a1,
a6 = -4 a4 / (6·5) = -(4/45) a0,
a7 = -4 a5 / (7·6) = -(4/315) a1, ....
What do we know about the solutions to our differential equation at this point? They
look like this:
y(t) = a0 (1 - 2t^2 + (2/3) t^4 - (4/45) t^6 + ...) + a1 (t - (2/3) t^3 + (2/15) t^5 - (4/315) t^7 + ...).
Of course the power series inside the parentheses are the Taylor series of the familiar
functions cos(2t) and (1/2) sin(2t):
y(t) = a0 cos(2t) + (a1/2) sin(2t),
so we have found the general solution of the differential equation (with a0 instead of
B, and a1/2 instead of A).
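The recurrence relation is also easy to check by machine; this sketch (Python with exact fractions, my choice) iterates a_{n+2} = -4 a_n/((n+2)(n+1)) and compares with the Taylor coefficients of a0 cos(2t) + (a1/2) sin(2t):

```python
from fractions import Fraction
from math import factorial

# Sketch: iterate a_{n+2} = -4*a_n/((n+2)(n+1)) and compare with the Taylor
# coefficients of a0*cos(2t) + (a1/2)*sin(2t), namely
# a_{2k} = (-1)^k 4^k a0/(2k)!  and  a_{2k+1} = (-1)^k 4^k a1/(2k+1)!.
a0, a1 = Fraction(2), Fraction(3)     # arbitrary initial conditions
a = [a0, a1]
for n in range(10):
    a.append(Fraction(-4) * a[n] / ((n + 2) * (n + 1)))

for n, an in enumerate(a):
    k = n // 2
    init = a0 if n % 2 == 0 else a1
    assert an == Fraction((-1)**k * 4**k, factorial(n)) * init
```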
The series solutions method is mainly used to find power series solutions of
differential equations whose solutions cannot be written in terms of familiar functions
such as polynomials, exponential or trigonometric functions. This means that in
general you will not be able to perform the last few steps of what we just did (fewer
worries!); all we can try to do is to come up with a general expression for the
coefficients of the power series solutions.
As another introductory example, let's find the solution to the initial value problem
y'' - 4y' + 3y = 0, y(0)=1, y'(0)=2.
The generic form of a power series is
y(t) = Σ_{n=0}^∞ a_n t^n.
We have to determine the right choice for the coefficients (an).
Start by plugging this "guess" into the differential equation. Recall from the previous
section that
y'(t) = Σ_{n=1}^∞ n a_n t^(n-1)
and
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2).
Plugging this information into the differential equation we obtain:
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) - 4 Σ_{n=1}^∞ n a_n t^(n-1) + 3 Σ_{n=0}^∞ a_n t^n = 0.
Our next goal is to simplify this expression such that only one summation sign "Σ"
remains. The obstacle we encounter is that the powers of the three sums are different,
t^(n-2) for the first sum, t^(n-1) for the second sum and t^n for the third. We make them the
same by shifting the index of the first two sums to obtain
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n - 4 Σ_{n=0}^∞ (n+1) a_{n+1} t^n + 3 Σ_{n=0}^∞ a_n t^n = 0.
Now we can combine the sums as follows:
Σ_{n=0}^∞ [(n+2)(n+1) a_{n+2} t^n - 4(n+1) a_{n+1} t^n + 3 a_n t^n] = 0,
and factor out t^n:
Σ_{n=0}^∞ [(n+2)(n+1) a_{n+2} - 4(n+1) a_{n+1} + 3 a_n] t^n = 0.
Since the power series on the left is identically equal to zero, all of its coefficients are
equal to zero:
(n+2)(n+1) a_{n+2} - 4(n+1) a_{n+1} + 3 a_n = 0 for all n = 0, 1, 2, ....
Solving these equations for the "highest index" n+2, we can rewrite this as
a_{n+2} = (4(n+1) a_{n+1} - 3 a_n) / ((n+2)(n+1)) for all n = 0, 1, 2, ....
These recurrence relations contain all the information about the coefficients we need.
Recall that y(0)=a0 and y'(0)=a1, so our initial conditions imply that a0=1 and a1=2.
Reading off the recurrence relations we can compute the next coefficients:
a2 = (4 a1 - 3 a0)/2 = 5/2 = 5/2!,
a3 = (8 a2 - 3 a1)/6 = 14/6 = 14/3!,
a4 = (12 a3 - 3 a2)/12 = 41/24 = 41/4!, ....
Can you see a pattern evolving for the numerators? Going from an to an+1 we multiply
by 3 and subtract 1. With mostly patience, experience and some magic, we can see
that the nth numerator is of the form
(3^n + 1)/2.
This implies that
a_n = (3^n + 1)/(2·n!).
Consequently our solution has the power series:
y(t) = Σ_{n=0}^∞ (3^n + 1)/(2·n!) t^n = (1/2) Σ_{n=0}^∞ (3t)^n/n! + (1/2) Σ_{n=0}^∞ t^n/n!.
We can rewrite this to retrieve the solution in more familiar form:
y(t) = (e^(3t) + e^t)/2.
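Here is a sketch (Python with exact fractions, my choice) that iterates the recurrence relation a_{n+2} = (4(n+1) a_{n+1} - 3 a_n)/((n+2)(n+1)) from a0=1, a1=2 and checks the closed form a_n = (3^n+1)/(2·n!):

```python
from fractions import Fraction
from math import factorial

# Sketch: iterate a_{n+2} = (4(n+1)a_{n+1} - 3a_n)/((n+2)(n+1)) from a0=1,
# a1=2 and compare with a_n = (3^n + 1)/(2*n!), i.e. the Taylor
# coefficients of (e^(3t) + e^t)/2.
a = [Fraction(1), Fraction(2)]
for n in range(10):
    a.append((4 * (n + 1) * a[n + 1] - 3 * a[n]) / ((n + 2) * (n + 1)))

for n, an in enumerate(a):
    assert an == Fraction(3**n + 1, 2 * factorial(n))
```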
Airy's Equation
The general form of a homogeneous second order linear differential equation looks as
follows:
y''+p(t) y'+q(t) y=0.
The series solutions method is used primarily when the coefficients p(t) or q(t) are
non-constant.
One of the easiest examples of such a case is Airy's Equation
y''-t y=0,
which is used in physics to model the diffraction of light.
We want to find power series solutions for this second-order linear differential
equation.
The generic form of a power series is
y(t) = Σ_{n=0}^∞ a_n t^n.
We have to determine the right choice for the coefficients (an).
As in other techniques for solving differential equations, once we have a "guess" for
the solutions, we plug it into the differential equation. Recall that
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2).
Plugging this information into the differential equation we obtain:
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) - t Σ_{n=0}^∞ a_n t^n = 0,
or equivalently
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) - Σ_{n=0}^∞ a_n t^(n+1) = 0.
Our next goal is to simplify this expression such that (basically) only one summation
sign "Σ" remains. The obstacle we encounter is that the powers of both sums are
different, t^(n-2) for the first sum and t^(n+1) for the second sum. We make them the same by
shifting the index of the first sum up by 2 units and the index of the second sum down
by one unit to obtain
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n - Σ_{n=1}^∞ a_{n-1} t^n = 0.
Now we run into the next problem: the second sum starts at n=1, while the first sum
has one more term and starts at n=0. We split off the 0th term of the first sum:
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n = 2 a2 + Σ_{n=1}^∞ (n+2)(n+1) a_{n+2} t^n.
Now we can combine the two sums as follows:
2 a2 + Σ_{n=1}^∞ [(n+2)(n+1) a_{n+2} t^n - a_{n-1} t^n] = 0,
and factor out t^n:
2 a2 + Σ_{n=1}^∞ [(n+2)(n+1) a_{n+2} - a_{n-1}] t^n = 0.
The power series on the left is identically equal to zero, consequently all of its
coefficients are equal to 0:
2 a2 = 0, and (n+2)(n+1) a_{n+2} - a_{n-1} = 0 for all n = 1, 2, 3, ....
We can slightly rewrite this as
a2 = 0, and a_{n+2} = a_{n-1} / ((n+2)(n+1)) for all n = 1, 2, 3, ....
These equations are known as the "recurrence relations" of the differential
equation. The recurrence relations permit us to compute all coefficients in terms of a0
and a1.
We already know from the 0th recurrence relation that a2=0. Let's compute a3 by
reading off the recurrence relation for n=1:
a3 = a0 / (3·2) = a0/6.
Let us continue:
a4 = a1 / (4·3) = a1/12,
a5 = a2 / (5·4) = 0,
a6 = a3 / (6·5) = a0/180,
a7 = a4 / (7·6) = a1/504, ....
The hardest part, as usual, is to recognize the patterns evolving; in this case we have
to consider three cases:
1. All the terms
a2, a5, a8, ...
are equal to zero. We can write this in compact form as
a_{3k+2} = 0 for all k = 0, 1, 2, ....
2. All the terms
a3, a6, a9, ...
are multiples of a0. We can be more precise:
a_{3k} = a0 / (2·3·5·6···(3k-1)(3k)) for all k = 1, 2, ....
(Plug in k=1,2,3,4 to check that this works!)
3. All the terms
a4, a7, a10, ...
are multiples of a1. We can be more precise:
a_{3k+1} = a1 / (3·4·6·7···(3k)(3k+1)) for all k = 1, 2, ....
(Plug in k=1,2,3,4 to check that this works!)
Thus the general form of the solutions to Airy's Equation is given by
y(t) = a0 (1 + Σ_{k=1}^∞ t^(3k) / (2·3·5·6···(3k-1)(3k))) + a1 (t + Σ_{k=1}^∞ t^(3k+1) / (3·4·6·7···(3k)(3k+1))).
Note that, as always, y(0)=a0 and y'(0)=a1. Thus it is trivial to determine a0 and a1
when you want to solve an initial value problem.
In particular
y1(t) = 1 + Σ_{k=1}^∞ t^(3k) / (2·3·5·6···(3k-1)(3k))
and
y2(t) = t + Σ_{k=1}^∞ t^(3k+1) / (3·4·6·7···(3k)(3k+1))
form a fundamental system of solutions for Airy's Differential Equation.
Below you see a picture of these two solutions. Note that for negative t, the solutions
behave somewhat like the oscillating solutions of y''+y=0, while for positive t, they
behave somewhat like the exponential solutions of the differential equation y''-y=0.
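The three patterns can be checked by machine as well; this sketch (Python with exact fractions, my choice) iterates the recurrence relations a2 = 0, a_{n+2} = a_{n-1}/((n+2)(n+1)) for Airy's Equation:

```python
from fractions import Fraction

# Sketch: iterate the Airy recurrence a_{n+2} = a_{n-1}/((n+2)(n+1)), a2 = 0,
# and check the three patterns: a_{3k+2} = 0,
# a_{3k} = a0/(2*3*5*6*...*(3k-1)(3k)), a_{3k+1} = a1/(3*4*6*7*...*(3k)(3k+1)).
a0, a1 = Fraction(1), Fraction(1)
a = [a0, a1, Fraction(0)]
for n in range(1, 16):
    a.append(a[n - 1] / ((n + 2) * (n + 1)))

prod0, prod1 = 1, 1
for k in range(1, 5):
    prod0 *= (3 * k - 1) * (3 * k)    # factors 2*3, 5*6, 8*9, ...
    prod1 *= (3 * k) * (3 * k + 1)    # factors 3*4, 6*7, 9*10, ...
    assert a[3 * k] == a0 / prod0
    assert a[3 * k + 1] == a1 / prod1
    assert a[3 * k + 2] == 0
```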
In the next section we will investigate what one can say about the radius of
convergence of power series solutions.
The Radius of Convergence of Series Solutions
In the last section we looked at one of the easiest examples of a second-order linear
homogeneous equation with non-constant coefficients: Airy's Equation
y''-t y=0,
which is used in physics to model the diffraction of light.
We found out that
y1(t) = 1 + Σ_{k=1}^∞ t^(3k) / (2·3·5·6···(3k-1)(3k))
and
y2(t) = t + Σ_{k=1}^∞ t^(3k+1) / (3·4·6·7···(3k)(3k+1))
form a fundamental system of solutions for Airy's Differential Equation.
The natural questions arise: for which values of t do these series converge, and for
which values of t do these series solve the differential equation?
The first question could be answered by finding the radius of convergence of the
power series, but it turns out that there is an elegant Theorem, due to Lazarus Fuchs
(1833-1902), which solves both of these questions simultaneously.
Fuchs's Theorem. Consider the differential equation
y''+p(t) y'+q(t) y=0
with initial conditions of the form y(0)=y0 and y'(0)=y'0.
Let r>0. If both p(t) and q(t) have Taylor series, which converge on the interval (-r,r), then
the differential equation has a unique power series solution y(t), which also converges on
the interval (-r,r).
In other words, the radius of convergence of the series solution is at least as big as the
minimum of the radii of convergence of p(t) and q(t).
In particular, if both p(t) and q(t) are polynomials, then y(t) solves the differential
equation for all t.
Since in the case of Airy's Equation p(t)=0 and q(t)=-t are both polynomials, the
fundamental set of solutions y1(t) and y2(t) converge and solve Airy's Equation for all t.
Let us look at some other examples:
Hermite's Equation of order n has the form
y''-2ty'+2ny=0,
where n is usually a non-negative integer. As in the case of Airy's Equation, both
p(t)=-2t and q(t)=2n are polynomials, thus Hermite's Equation has power series
solutions which converge and solve the differential equation for all t.
Legendre's Equation of order α has the form
(1-t^2) y'' - 2t y' + α(α+1) y = 0,
where α is a real number.
Be careful! We have to rewrite this equation to be able to apply Fuchs's Theorem.
Let's divide by 1-t^2:
y'' - (2t/(1-t^2)) y' + (α(α+1)/(1-t^2)) y = 0.
Now the coefficient in front of y'' is 1 as required.
What is the radius of convergence of the power series representations of
2t/(1-t^2) and α(α+1)/(1-t^2)?
(The center, as in all our examples, will be t=0.) We really have to investigate this
question only for the function
1/(1-t^2),
since multiplication by a polynomial (-2t, and α(α+1), respectively) does not
change the radius of convergence.
The geometric series
1/(1-x) = Σ_{n=0}^∞ x^n = 1 + x + x^2 + x^3 + ...
converges when -1<x<1. If we substitute x=t^2, we obtain the power series
representation we seek:
1/(1-t^2) = Σ_{n=0}^∞ t^(2n),
which will be convergent when -1<x=t^2<1, i.e., when -1<t<1. Thus both
2t/(1-t^2) and α(α+1)/(1-t^2)
will converge on the interval (-1,1). Consequently, by Fuchs's result, series solutions
to Legendre's Equation will converge and solve the equation on the interval (-1,1).
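A quick numerical illustration of this radius of convergence (Python, my addition): inside the interval (-1,1), say at t=1/2, the partial sums of the geometric series approach 1/(1-t^2) rapidly:

```python
# Sketch: partial sums of the geometric series 1/(1-t^2) = 1 + t^2 + t^4 + ...
# converge quickly for |t| < 1; here we check this at t = 1/2.
t = 0.5
target = 1 / (1 - t**2)                  # = 4/3
partial = sum(t**(2 * n) for n in range(40))
assert abs(partial - target) < 1e-12
```

At |t| ≥ 1 the same partial sums would grow without bound, in line with the interval (-1,1) found above.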
Bessel's Equation of order ν has the form
t^2 y'' + t y' + (t^2 - ν^2) y = 0,
where ν is a non-negative real number.
Once again we have to be careful! Let's divide by t^2:
y'' + (1/t) y' + ((t^2 - ν^2)/t^2) y = 0.
Now the coefficient in front of y'' is 1 as required by Fuchs's Theorem.
The function
p(t) = 1/t
has a singularity at t=0, thus p(t) fails to have a Taylor series
with center t=0. Consequently, Fuchs's result does not even guarantee the existence of
power series solutions to Bessel's equation.
As it turns out, Bessel's Equation indeed does not always have solutions that can be
written as power series. Nevertheless, there is a method similar to the one presented
here to find the solutions to Bessel's Equation. If you are interested in Bessel's
Equation, look up the section on "The Method of Frobenius" in a differential
equations or advanced engineering mathematics textbook.
Hermite's Equation
Hermite's Equation of order k has the form
y''-2ty'+2ky=0,
where k is usually a non-negative integer.
We know from the previous section that this equation will have series solutions which
both converge and solve the differential equation everywhere.
Hermite's Equation is our first example of a differential equation that has a
polynomial solution.
As usual, the generic form of a power series is
y(t) = Σ_{n=0}^∞ a_n t^n.
We have to determine the right choice for the coefficients (an).
As in other techniques for solving differential equations, once we have a "guess" for
the solutions, we plug it into the differential equation. Recall that
y'(t) = Σ_{n=1}^∞ n a_n t^(n-1)
and
y''(t) = Σ_{n=2}^∞ n(n-1) a_n t^(n-2).
Plugging this information into the differential equation we obtain:
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) - 2t Σ_{n=1}^∞ n a_n t^(n-1) + 2k Σ_{n=0}^∞ a_n t^n = 0,
or after rewriting slightly:
Σ_{n=2}^∞ n(n-1) a_n t^(n-2) - Σ_{n=1}^∞ 2n a_n t^n + Σ_{n=0}^∞ 2k a_n t^n = 0.
Next we shift the first summation up by two units:
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n - Σ_{n=1}^∞ 2n a_n t^n + Σ_{n=0}^∞ 2k a_n t^n = 0.
Before we can combine the terms into one sum, we have to overcome another slight
obstacle: the second summation starts at n=1, while the other two start at n=0.
Evaluate the 0th term of the second sum: 2·0·a0·t^0 = 0. Consequently, we do
not change the value of the second summation if we start at n=0 instead of n=1:
Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} t^n - Σ_{n=0}^∞ 2n a_n t^n + Σ_{n=0}^∞ 2k a_n t^n = 0.
Thus we can combine all three sums as follows:
Σ_{n=0}^∞ [(n+2)(n+1) a_{n+2} - 2n a_n + 2k a_n] t^n = 0.
Therefore our recurrence relations become:
(n+2)(n+1) a_{n+2} - 2n a_n + 2k a_n = 0 for all n = 0, 1, 2, ....
After simplification, this becomes
a_{n+2} = 2(n-k) a_n / ((n+2)(n+1)) for all n = 0, 1, 2, ....
Let us look at the special case where k=5 and the initial conditions are given as
y(0)=0, y'(0)=1.
In this case, all even coefficients will be equal
to zero, since a0=0 and each coefficient is a multiple of its second predecessor.
What about the odd coefficients? a1=1, consequently
a3 = 2(1-5) a1 / (3·2) = -4/3,
and
a5 = 2(3-5) a3 / (5·4) = 4/15.
What about a7?
a7 = 2(5-5) a5 / (7·6) = 0.
Since a7=0, all odd coefficients from now on will be equal to zero, since each
coefficient is a multiple of its second predecessor.
Consequently, the solution has only 3 non-zero coefficients, and hence is a
polynomial. This polynomial
t - (4/3) t^3 + (4/15) t^5
(or a multiple of this polynomial) is called the Hermite Polynomial of order 5.
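The termination of the series is easy to watch in a short computation; this sketch (Python with exact fractions, my choice) iterates the recurrence relation a_{n+2} = 2(n-k) a_n/((n+2)(n+1)) for k=5 with a0=0, a1=1:

```python
from fractions import Fraction

# Sketch: iterate the Hermite recurrence a_{n+2} = 2(n-k)*a_n/((n+2)(n+1))
# for k=5 with a0=0, a1=1; the factor (n-k) kills a7, so the series
# terminates and the solution is the polynomial t - (4/3)t^3 + (4/15)t^5.
k = 5
a = [Fraction(0), Fraction(1)]
for n in range(10):
    a.append(Fraction(2 * (n - k)) * a[n] / ((n + 2) * (n + 1)))

assert a[3] == Fraction(-4, 3)
assert a[5] == Fraction(4, 15)
assert all(a[n] == 0 for n in range(12) if n not in (1, 3, 5))
```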
It turns out that the Hermite Equation of positive integer order k always has a
polynomial solution of order k. We can even be more precise: If k is odd, the initial
value problem
y''-2ty'+2ky=0, y(0)=0, y'(0)=1
will have a polynomial solution, while for k even, the initial value problem
y''-2ty'+2ky=0, y(0)=1, y'(0)=0
will have a polynomial solution.
Exercise 1:
Find the Hermite Polynomials of order 1 and 3.
Answer. Recall that the recurrence relations are given by
a_{n+2} = 2(n-k) a_n / ((n+2)(n+1)).
We have to evaluate these coefficients for k=1 and k=3, with initial conditions a0=0,
a1=1.
When k=1,
a3 = 2(1-1) a1 / (3·2) = 0.
Consequently all odd coefficients other than a1 will be zero. Since a0=0, all even
coefficients will be zero, too. Thus
H1(t)=t.
When k=3,
a3 = 2(1-3) a1 / (3·2) = -2/3,
and
a5 = 2(3-3) a3 / (5·4) = 0.
Consequently all odd coefficients other than a1 and a3 will be zero. Since a0=0, all
even coefficients will be zero, too. Thus
H3(t) = t - (2/3) t^3.
Exercise 2:
Find the Hermite Polynomials of order 2, 4 and 6.
Answer. Recall that the recurrence relations are given by
a_{n+2} = 2(n-k) a_n / ((n+2)(n+1)).
We have to evaluate these coefficients for k=2, k=4 and k=6, with initial conditions
a0=1, a1=0.
When k=2,
a2 = 2(0-2) a0 / (2·1) = -2,
while
a4 = 2(2-2) a2 / (4·3) = 0.
Consequently all even coefficients other than a2 will be zero. Since a1=0, all odd
coefficients will be zero, too. Thus
H2(t)=1-2t2.
When k=4,
a2 = 2(0-4) a0 / (2·1) = -4,
a4 = 2(2-4) a2 / (4·3) = 4/3,
and a6 = 2(4-4) a4 / (6·5) = 0.
Consequently all even coefficients other than a2 and a4 will be zero. Since a1=0, all
odd coefficients will be zero, too. Thus
H4(t) = 1 - 4t^2 + (4/3) t^4.
You can check that
H6(t) = 1 - 6t^2 + 4t^4 - (8/15) t^6.
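All of these small cases can be generated with one helper; the following sketch (Python with exact fractions; the function name hermite_coeffs is mine) runs the recurrence relation a_{n+2} = 2(n-k) a_n/((n+2)(n+1)) for any order k with the polynomial-producing initial conditions:

```python
from fractions import Fraction

# Sketch: run the Hermite recurrence a_{n+2} = 2(n-k)*a_n/((n+2)(n+1)) with
# a0=0, a1=1 for k odd and a0=1, a1=0 for k even, and return the nonzero
# coefficients {power: coefficient} of the resulting polynomial.
def hermite_coeffs(k, n_max=12):
    a = [Fraction(0), Fraction(1)] if k % 2 else [Fraction(1), Fraction(0)]
    for n in range(n_max - 2):
        a.append(Fraction(2 * (n - k)) * a[n] / ((n + 2) * (n + 1)))
    return {n: c for n, c in enumerate(a) if c != 0}

assert hermite_coeffs(1) == {1: 1}
assert hermite_coeffs(2) == {0: 1, 2: -2}
assert hermite_coeffs(3) == {1: 1, 3: Fraction(-2, 3)}
assert hermite_coeffs(4) == {0: 1, 2: -4, 4: Fraction(4, 3)}
assert hermite_coeffs(6) == {0: 1, 2: -6, 4: 4, 6: Fraction(-8, 15)}
```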
Exercise 3:
Consider the Hermite Equation of order 5:
y''-2ty'+10y=0.
Find the solution satisfying the initial conditions a0=1, a1=0.
Answer. This time the solution will not be a polynomial.
Since a1=0, all odd coefficients will be zero. Let's compute a few of the even
coefficients:
a2 = 2(0-5) a0 / (2·1) = -5,
a4 = 2(2-5) a2 / (4·3) = 5/2,
a6 = 2(4-5) a4 / (6·5) = -1/6,
a8 = 2(6-5) a6 / (8·7) = -1/168, ....
From this it is not too hard to come up with the general formula:
a_{2n} = (2^n / (2n)!) · (-5)·(-3)·(-1)·1·3···(2n-7) for n = 1, 2, 3, ...
(the numerator product runs over the factors 2j-5 for j = 0, 1, ..., n-1).
This leads to the formula
y(t) = 1 + Σ_{n=1}^∞ ((2^n / (2n)!) · (-5)·(-3)·(-1)·1·3···(2n-7)) t^(2n)
for the solution to the given initial value problem.
The solution converges and solves the equation for all real numbers.
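As a final check, this sketch (Python with exact fractions, my choice) iterates the recurrence relation a_{n+2} = 2(n-5) a_n/((n+2)(n+1)) with a0=1, a1=0 and compares the even coefficients with the general formula above:

```python
from fractions import Fraction
from math import factorial

# Sketch: for k=5 with a0=1, a1=0 the even coefficients never terminate;
# we check the first few against the product formula
# a_{2n} = 2^n * (-5)(-3)(-1)*1*3*...*(2n-7) / (2n)!.
k = 5
a = [Fraction(1), Fraction(0)]
for n in range(14):
    a.append(Fraction(2 * (n - k)) * a[n] / ((n + 2) * (n + 1)))

prod = 1
for n in range(1, 8):
    prod *= 2 * (n - 1) - 5                   # factors -5, -3, -1, 1, 3, ...
    assert a[2 * n] == Fraction(2**n * prod, factorial(2 * n))
```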