CHAPTER 7. POWER SERIES METHODS

7.3 Singular points and the method of Frobenius

Note: 1 or 1.5 lectures, §8.4 and §8.5 in [EP], §5.4–§5.7 in [BD]
While the behaviour of ODEs at singular points is more complicated, certain singular points are not
especially difficult to handle. Let us look at some examples before giving a general method. We may
be lucky and obtain a power series solution using the method of the previous section, but in general
we may have to try other things.
7.3.1 Examples
Example 7.3.1: Let us first look at a simple first order equation

    2x y' - y = 0.

Note that x = 0 is a singular point. If we only try to plug in

    y = \sum_{k=0}^\infty a_k x^k,
we obtain

    0 = 2x y' - y
      = 2x \sum_{k=1}^\infty k a_k x^{k-1} - \sum_{k=0}^\infty a_k x^k
      = -a_0 + \sum_{k=1}^\infty (2k a_k - a_k) x^k.
First, a0 = 0. Next, the only way to solve 0 = 2kak − ak = (2k − 1)ak for k = 1, 2, 3, . . . is for ak = 0
for all k. Therefore we only get the trivial solution y = 0. We need a nonzero solution to get the
general solution.
Let us try y = x^r for some real number r. Consequently our solution, if we can find one, may
only make sense for positive x. Then y' = r x^{r-1}. So

    0 = 2x y' - y = 2x r x^{r-1} - x^r = (2r - 1) x^r.
Therefore r = 1/2, or in other words y = x^{1/2}. Multiplying by a constant, the general solution for
positive x is

    y = C x^{1/2}.

Note that this solution is not even differentiable at x = 0. The derivative must "blow up"
at the origin; that much is clear from the differential equation itself.
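We can sanity-check this numerically. The following sketch (with an arbitrary choice of the constant C) verifies that y = C x^{1/2} satisfies 2xy' - y = 0 at a few sample points, approximating y' by a central difference:

```python
import math

def y(x, C=3.0):
    # candidate solution y = C*sqrt(x) for x > 0; C is an arbitrary constant
    return C * math.sqrt(x)

def dy(x, h=1e-6):
    # central finite-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

# residual of 2*x*y' - y at a few sample points: should be ~0
for x in [0.5, 1.0, 2.0, 5.0]:
    print(f"x={x}: residual = {2 * x * dy(x) - y(x):.2e}")
```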
Not every problem at a singular point has a solution of the form y = x^r, of course. But perhaps we
can combine the methods. What we will do is to try a solution of the form

    y = x^r f(x),

where f(x) is an analytic function.
Example 7.3.2: Suppose that we have the equation

    4x^2 y'' - 4x^2 y' + (1 - 2x) y = 0,

and again note that x = 0 is a singular point.
Let us try

    y = x^r \sum_{k=0}^\infty a_k x^k = \sum_{k=0}^\infty a_k x^{k+r},
where r is a real number, not necessarily an integer. Again, if such a solution exists, it may only
exist for positive x. First let us find the derivatives:

    y' = \sum_{k=0}^\infty (k+r) a_k x^{k+r-1},

    y'' = \sum_{k=0}^\infty (k+r)(k+r-1) a_k x^{k+r-2}.
Plugging into our equation we obtain

    0 = 4x^2 y'' - 4x^2 y' + (1 - 2x) y
      = 4x^2 \sum_{k=0}^\infty (k+r)(k+r-1) a_k x^{k+r-2}
        - 4x^2 \sum_{k=0}^\infty (k+r) a_k x^{k+r-1}
        + (1 - 2x) \sum_{k=0}^\infty a_k x^{k+r}
      = \sum_{k=0}^\infty 4(k+r)(k+r-1) a_k x^{k+r}
        - \sum_{k=0}^\infty 4(k+r) a_k x^{k+r+1}
        + \sum_{k=0}^\infty a_k x^{k+r}
        - \sum_{k=0}^\infty 2 a_k x^{k+r+1}
      = \sum_{k=0}^\infty 4(k+r)(k+r-1) a_k x^{k+r}
        - \sum_{k=1}^\infty 4(k+r-1) a_{k-1} x^{k+r}
        + \sum_{k=0}^\infty a_k x^{k+r}
        - \sum_{k=1}^\infty 2 a_{k-1} x^{k+r}
      = \bigl(4r(r-1) + 1\bigr) a_0 x^r
        + \sum_{k=1}^\infty \bigl[ 4(k+r)(k+r-1) a_k - 4(k+r-1) a_{k-1} + a_k - 2 a_{k-1} \bigr] x^{k+r}
      = \bigl(4r(r-1) + 1\bigr) a_0 x^r
        + \sum_{k=1}^\infty \Bigl[ \bigl(4(k+r)(k+r-1) + 1\bigr) a_k - \bigl(4(k+r-1) + 2\bigr) a_{k-1} \Bigr] x^{k+r}.

Therefore, to have a solution we must first have (4r(r-1) + 1) a_0 = 0. Supposing that a_0 ≠ 0, we
obtain

    4r(r-1) + 1 = 0.
This equation is called the indicial equation. We notice that this particular indicial equation has a
double root at r = 1/2.

OK, so we know what r has to be. We obtained that simply by looking at the coefficient of x^r.
All other coefficients of x^{k+r} also have to be zero, so

    \bigl(4(k+r)(k+r-1) + 1\bigr) a_k - \bigl(4(k+r-1) + 2\bigr) a_{k-1} = 0.
If we plug in r = 1/2 and solve for a_k, we get

    a_k = \frac{4(k + 1/2 - 1) + 2}{4(k + 1/2)(k + 1/2 - 1) + 1} a_{k-1} = \frac{1}{k} a_{k-1}.

Let us set a_0 = 1. Then

    a_1 = \frac{1}{1} a_0 = 1, \quad a_2 = \frac{1}{2} a_1 = \frac{1}{2}, \quad a_3 = \frac{1}{3} a_2 = \frac{1}{3 \cdot 2}, \quad a_4 = \frac{1}{4} a_3 = \frac{1}{4 \cdot 3 \cdot 2}, \quad \ldots

Extrapolating, we notice that

    a_k = \frac{1}{k(k-1)(k-2) \cdots 3 \cdot 2} = \frac{1}{k!}.

In other words,

    y = \sum_{k=0}^\infty a_k x^{k+r} = \sum_{k=0}^\infty \frac{1}{k!} x^{k+1/2} = x^{1/2} \sum_{k=0}^\infty \frac{1}{k!} x^k = x^{1/2} e^x.
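The recurrence and the closed form are easy to check numerically; a quick sketch builds the a_k from a_k = a_{k-1}/k and compares the partial Frobenius sum against x^{1/2} e^x:

```python
import math

# build coefficients from the recurrence a_k = a_{k-1}/k with a_0 = 1
N = 20
a = [1.0]
for k in range(1, N + 1):
    a.append(a[-1] / k)

# each a_k should equal 1/k!
assert all(abs(a[k] - 1 / math.factorial(k)) < 1e-15 for k in range(N + 1))

# the Frobenius series x^(1/2) * sum(a_k x^k) should match sqrt(x)*e^x
x = 1.7
partial = x ** 0.5 * sum(a[k] * x ** k for k in range(N + 1))
print(partial, math.sqrt(x) * math.exp(x))
```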
That was lucky! In general, we will not be able to write the series in terms of elementary functions.

We have one solution, let us call it y_1 = x^{1/2} e^x. But what about a second solution? If we want a
general solution, we need two linearly independent solutions. Picking a_0 to be a different constant
only gets us a constant multiple of y_1, and we do not have any other r to try; we only have one
root of the indicial equation. Well, there are powers of x floating around and we are taking
derivatives, so perhaps the logarithm (the antiderivative of x^{-1}) is around as well. It turns out we want
to try for another solution of the form
    y_2 = \sum_{k=0}^\infty b_k x^{k+r} + (\ln x) y_1,

which in our case is

    y_2 = \sum_{k=0}^\infty b_k x^{k+1/2} + (\ln x) x^{1/2} e^x.
We would now differentiate this equation, substitute into the differential equation again and solve
for b_k. A long computation would ensue and we would obtain some recursion relation for b_k. In
fact, the reader can try this to obtain for example the first three terms

    b_1 = b_0 - 1, \quad b_2 = \frac{2 b_1 - 1}{4}, \quad b_3 = \frac{6 b_2 - 1}{18}, \quad \ldots

We would then fix b_0 and obtain a solution y_2. Then we write the general solution as y = A y_1 + B y_2.
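As a numerical sanity check (a sketch, not the full computation), we can take the b_k above with b_0 fixed at 1, truncate y_2 after b_3, and plug it into 4x²y'' - 4x²y' + (1 - 2x)y using finite differences. The residual then comes only from the dropped terms b_4, b_5, ..., so it should shrink rapidly as x → 0:

```python
import math

def y1(x):
    return math.sqrt(x) * math.exp(x)

# first few coefficients from the recursion above, with b0 fixed at 1
b0 = 1.0
b1 = b0 - 1.0
b2 = (2.0 * b1 - 1.0) / 4.0
b3 = (6.0 * b2 - 1.0) / 18.0

def y2(x):
    # truncated Frobenius-plus-logarithm candidate solution
    series = sum(b * x ** (k + 0.5) for k, b in enumerate([b0, b1, b2, b3]))
    return series + math.log(x) * y1(x)

def residual(x, h=1e-5):
    # finite-difference evaluation of 4x^2 y'' - 4x^2 y' + (1-2x) y
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h ** 2
    return 4 * x * x * d2 - 4 * x * x * d1 + (1 - 2 * x) * y2(x)

for x in [0.2, 0.1, 0.05]:
    print(f"x={x}: residual = {residual(x):.3e}")
```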
7.3.2 The method of Frobenius
Before giving the general method, let us clarify when the method applies. Let

    p(x) y'' + q(x) y' + r(x) y = 0

be an ODE. As before, if p(x_0) = 0, then x_0 is a singular point. If, furthermore, the limits

    \lim_{x \to x_0} (x - x_0) \frac{q(x)}{p(x)}    and    \lim_{x \to x_0} (x - x_0)^2 \frac{r(x)}{p(x)}

both exist and are finite, then we say that x_0 is a regular singular point.
Example 7.3.3: Often, and for the rest of this section, x_0 = 0. Consider

    x^2 y'' + x(1+x) y' + (\pi + x^2) y = 0.

Then we write

    \lim_{x \to 0} x \frac{q(x)}{p(x)} = \lim_{x \to 0} x \frac{x(1+x)}{x^2} = \lim_{x \to 0} (1 + x) = 1,

and

    \lim_{x \to 0} x^2 \frac{r(x)}{p(x)} = \lim_{x \to 0} x^2 \frac{\pi + x^2}{x^2} = \lim_{x \to 0} (\pi + x^2) = \pi,

so 0 is a regular singular point.
On the other hand, if we make the slight change

    x^2 y'' + (1+x) y' + (\pi + x^2) y = 0,

then

    \lim_{x \to 0} x \frac{q(x)}{p(x)} = \lim_{x \to 0} x \frac{1+x}{x^2} = \lim_{x \to 0} \frac{1+x}{x} = DNE.

Here DNE stands for "does not exist." So the point 0 is a singular point, but not a regular singular
point.
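We can probe these limits numerically; a small sketch evaluates x q(x)/p(x) and x² r(x)/p(x) at shrinking x for both equations above:

```python
import math

p = lambda x: x ** 2
q = lambda x: x * (1 + x)      # first equation: both limits finite
r = lambda x: math.pi + x ** 2

for x in [1e-2, 1e-4, 1e-6]:
    print(x * q(x) / p(x), x ** 2 * r(x) / p(x))   # approach 1 and pi

q2 = lambda x: 1 + x           # modified equation: first limit blows up
for x in [1e-2, 1e-4, 1e-6]:
    print(x * q2(x) / p(x))
```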
Let us now discuss the general Method of Frobenius*. Let us only consider the method at the
point x = 0 for simplicity. The main idea is the following theorem.

Theorem 7.3.1 (Method of Frobenius). Suppose that

    p(x) y'' + q(x) y' + r(x) y = 0    (7.3)

has a regular singular point at x = 0. Then there exists at least one solution of the form

    y = x^r \sum_{k=0}^\infty a_k x^k.

* Named after the German mathematician Ferdinand Georg Frobenius (1849–1917).
The method itself is really a series of steps.

(i) We seek a solution of the form

    y = \sum_{k=0}^\infty a_k x^{k+r}.

We plug this y into equation (7.3). A solution of this form is called a Frobenius-type solution.
(ii) We obtain a series which must be zero. Setting the first coefficient (the coefficient of x^r) to
zero, we obtain the indicial equation, which is a quadratic polynomial in r.
(iii) If the indicial equation has two real roots r_1 and r_2 such that r_1 - r_2 is not an integer, then we
obtain two linearly independent solutions

    y_1 = x^{r_1} \sum_{k=0}^\infty a_k x^k    and    y_2 = x^{r_2} \sum_{k=0}^\infty a_k x^k,

by solving all the relations that appear.
(iv) If the indicial equation has a double root r, then we find one solution

    y_1 = x^r \sum_{k=0}^\infty a_k x^k,

and then we obtain a new solution by plugging

    y_2 = x^r \sum_{k=0}^\infty b_k x^k + (\ln x) y_1

into equation (7.3) and solving for the coefficients b_k.
(v) If the indicial equation has two real roots such that r_1 - r_2 is an integer, then one solution is

    y_1 = x^{r_1} \sum_{k=0}^\infty a_k x^k,

and the second linearly independent solution is of the form

    y_2 = x^{r_2} \sum_{k=0}^\infty b_k x^k + C (\ln x) y_1,

where we plug y_2 into (7.3) and solve for the constants b_k and C.
(vi) Finally, if the indicial equation has complex roots, then solving for a_k in the solution

    y = x^{r_1} \sum_{k=0}^\infty a_k x^k

will result in a complex-valued function; all the a_k will be complex numbers. We obtain our
two linearly independent solutions* by taking the real and imaginary parts of y.
Note that the main idea is to find at least one Frobenius-type solution. If we are lucky and find
two, we are done. If we only get one, there is a variety of other methods (different from the above) to
obtain a second solution, for example reduction of order; see Exercise 2.1.8 on page 50.
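The case analysis in steps (iii) through (vi) can be sketched in code. The helper below (a rough sketch; the function name and labels are our own) classifies the roots of an indicial equation Ar² + Br + C = 0:

```python
import cmath

def indicial_case(A, B, C):
    """Classify the roots of the indicial equation A r^2 + B r + C = 0
    into the cases of the method of Frobenius (a rough sketch)."""
    disc = B * B - 4 * A * C
    r1 = (-B + cmath.sqrt(disc)) / (2 * A)
    r2 = (-B - cmath.sqrt(disc)) / (2 * A)
    if disc < 0:
        return "complex roots (case vi)"
    diff = (r1 - r2).real
    if abs(diff) < 1e-12:
        return "double root (case iv)"
    if abs(diff - round(diff)) < 1e-12:
        return "roots differing by an integer (case v)"
    return "roots not differing by an integer (case iii)"

# indicial equation 4r(r-1)+1 = 4r^2 - 4r + 1 = 0 from Example 7.3.2:
print(indicial_case(4, -4, 1))   # -> double root (case iv)
```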
7.3.3 Bessel functions

An important class of functions that arises commonly in physics is the Bessel functions†. For
example, these functions arise when solving the wave equation in two and three dimensions. First
we have Bessel's equation of order p:

    x^2 y'' + x y' + (x^2 - p^2) y = 0.
We allow p to be any number, not just an integer, although integers and multiples of 1/2 are most
important in applications. When we plug

    y = \sum_{k=0}^\infty a_k x^{k+r}

into Bessel's equation of order p, we obtain the indicial equation

    r(r-1) + r - p^2 = (r - p)(r + p) = 0.
Therefore we obtain two roots, r_1 = p and r_2 = -p. If p is not an integer, then following the method of
Frobenius and setting a_0 = 1, we can obtain linearly independent solutions of the form

    y_1 = x^p \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{2^{2k} k! \, (k+p)(k-1+p) \cdots (2+p)(1+p)},

    y_2 = x^{-p} \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{2^{2k} k! \, (k-p)(k-1-p) \cdots (2-p)(1-p)}.
* See Joseph L. Neuringer, The Frobenius method for complex roots of the indicial equation, International Journal
of Mathematical Education in Science and Technology, Volume 9, Issue 1, 1978, 71–77.
† Named after the German astronomer and mathematician Friedrich Wilhelm Bessel (1784–1846).
Exercise 7.3.1: a) Verify that the indicial equation of Bessel’s equation of order p is (r−p)(r+p) = 0.
b) Suppose that p is not an integer. Carry out the computation to obtain the solutions y1 and y2
above.
Bessel functions will be convenient constant multiples of y_1 and y_2. First we must define the
gamma function:

    \Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt.
The gamma function has a wonderful property:

    \Gamma(x+1) = x \Gamma(x).

From this property, one can show that \Gamma(n) = (n-1)! when n is a positive integer. Furthermore, we can
compute that

    \Gamma(k+p+1) = (k+p)(k-1+p) \cdots (2+p)(1+p) \, \Gamma(1+p),
    \Gamma(k-p+1) = (k-p)(k-1-p) \cdots (2-p)(1-p) \, \Gamma(1-p).
Exercise 7.3.2: Verify the above identities using Γ(x + 1) = xΓ(x).
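These identities are also easy to confirm numerically; the sketch below checks Γ(n) = (n - 1)! and the first identity for a sample non-integer p, using Python's built-in math.gamma:

```python
import math

# Gamma reproduces factorials: Gamma(n) = (n-1)! for positive integers n
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# Gamma(k+p+1) = (k+p)(k-1+p)...(1+p) * Gamma(1+p); sample values p = 0.3, k = 4
p, k = 0.3, 4
prod = 1.0
for j in range(1, k + 1):
    prod *= j + p
print(math.gamma(k + p + 1), prod * math.gamma(1 + p))
```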
We define the Bessel functions of the first kind of order p and -p as

    J_p(x) = \frac{1}{2^p \Gamma(1+p)} y_1 = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, \Gamma(k+p+1)} \left(\frac{x}{2}\right)^{2k+p},

    J_{-p}(x) = \frac{1}{2^{-p} \Gamma(1-p)} y_2 = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, \Gamma(k-p+1)} \left(\frac{x}{2}\right)^{2k-p}.

As these are constant multiples of the solutions we found above, they are both solutions to Bessel's
equation of order p. The constants are picked for convenience.
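To see the series definition in action, here is a short sketch that evaluates J_p by its series; for p = 1/2 it can be compared with the known closed form J_{1/2}(x) = √(2/(πx)) sin(x) (a standard identity, not derived here):

```python
import math

def J(p, x, terms=30):
    # Bessel function of the first kind via its defining series
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + p + 1))
               * (x / 2) ** (2 * k + p) for k in range(terms))

x = 2.3
print(J(0.5, x), math.sqrt(2 / (math.pi * x)) * math.sin(x))
```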
When p is not an integer, J_p and J_{-p} are linearly independent. When n is an integer, we obtain

    J_n(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, (k+n)!} \left(\frac{x}{2}\right)^{2k+n}.

In this case it turns out that

    J_n(x) = (-1)^n J_{-n}(x),
and so we do not obtain a second linearly independent solution. The other solution is the so-called
Bessel function of the second kind. These make sense only for integer orders n and are defined as limits
of linear combinations of J_p(x) and J_{-p}(x) as p approaches n in the following way:

    Y_n(x) = \lim_{p \to n} \frac{\cos(p\pi) J_p(x) - J_{-p}(x)}{\sin(p\pi)}.
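We can watch this limit numerically. The sketch below evaluates the quotient for non-integer orders approaching n = 0 (the helper name and probe offsets are our own choices); the two probe values should be close to each other:

```python
import math

def J(p, x, terms=40):
    # series for J_p; valid here since p is not an integer
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + p + 1))
               * (x / 2) ** (2 * k + p) for k in range(terms))

def Y_probe(n, x, eps):
    # the quotient defining Y_n, evaluated at a non-integer order near n
    p = n + eps
    return (math.cos(p * math.pi) * J(p, x) - J(-p, x)) / math.sin(p * math.pi)

x = 1.0
print(Y_probe(0, x, 0.02), Y_probe(0, x, 0.005))
```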
As each linear combination of J_p(x) and J_{-p}(x) is a solution to Bessel's equation of order p, then as
we take the limit as p goes to n, we see that Y_n(x) is a solution to Bessel's equation of order n. It also turns out
that Y_n(x) and J_n(x) are linearly independent. Therefore, when n is an integer, we have the general
solution to Bessel's equation of order n:

    y = A J_n(x) + B Y_n(x),

for arbitrary constants A and B. Note that Y_n(x) goes to negative infinity at x = 0. Many mathematical
software packages have these functions J_n(x) and Y_n(x) defined, so they can be used just like, say,
sin(x) and cos(x). In fact, they have some similar properties. For example, -J_1(x) is the derivative of
J_0(x), and in general the derivative of J_n(x) can be written as a linear combination of J_{n-1}(x) and
J_{n+1}(x). Furthermore, these functions oscillate, although they are not periodic. See Figure 7.4 for
graphs of Bessel functions.
[Figure 7.4: Plot of J_0(x) and J_1(x) in the first graph and Y_0(x) and Y_1(x) in the second graph.]
Example 7.3.4: Other equations can sometimes be solved in terms of the Bessel functions. For
example, given a positive constant \lambda,

    x y'' + y' + \lambda^2 x y = 0

can be changed to x^2 y'' + x y' + \lambda^2 x^2 y = 0. Then changing variables t = \lambda x, we obtain via the chain
rule the equation in y and t:

    t^2 y'' + t y' + t^2 y = 0,

which can be recognized as Bessel's equation of order 0. Therefore the general solution is y(t) =
A J_0(t) + B Y_0(t), or in terms of x:

    y = A J_0(\lambda x) + B Y_0(\lambda x).

This equation comes up, for example, when finding the fundamental modes of vibration of a circular
drum, but we digress.
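As a check, we can verify numerically that y = J_0(λx) satisfies xy'' + y' + λ²xy = 0, using the series for J_0 and finite differences (a sketch; the sample point and λ are arbitrary choices):

```python
import math

def J0(x, terms=40):
    # series for the Bessel function J_0
    return sum((-1) ** k / math.factorial(k) ** 2 * (x / 2) ** (2 * k)
               for k in range(terms))

lam = 3.0
y = lambda x: J0(lam * x)

def residual(x, h=1e-4):
    # finite-difference evaluation of x y'' + y' + lam^2 x y
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return x * d2 + d1 + lam ** 2 * x * y(x)

print(residual(0.7))  # should be ~0
```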
7.3.4 Exercises
Exercise 7.3.3: Find a particular (Frobenius-type) solution of x^2 y'' + x y' + (1+x) y = 0.

Exercise 7.3.4: Find a particular (Frobenius-type) solution of x y'' - y = 0.

Exercise 7.3.5: Find a particular (Frobenius-type) solution of y'' + (1/x) y' - x y = 0.

Exercise 7.3.6: Find the general solution of 2x y'' + y' - x^2 y = 0.

Exercise 7.3.7: Find the general solution of x^2 y'' - x y' - y = 0.

Exercise 7.3.8: In the following equations classify the point x = 0 as ordinary, regular singular, or
singular but not regular singular.

a) x^2 (1 + x^2) y'' + x y = 0
b) x^2 y'' + y' + y = 0
c) x y'' + x^3 y' + y = 0
d) x y'' + x y' - e^x y = 0
e) x^2 y'' + x^2 y' + x^2 y = 0

Exercise 7.3.101: In the following equations classify the point x = 0 as ordinary, regular singular,
or singular but not regular singular.

a) y'' + y = 0
b) x^3 y'' + (1+x) y = 0
c) x y'' + x^5 y' + y = 0
d) sin(x) y'' - y = 0
e) cos(x) y'' - sin(x) y = 0

Exercise 7.3.102: Find the general solution of x^2 y'' - y = 0.

Exercise 7.3.103: Find a particular solution of x^2 y'' + (x - 3/4) y = 0.

Exercise 7.3.104 (Tricky): Find the general solution of x^2 y'' - x y' + y = 0.
SOLUTIONS TO SELECTED EXERCISES

6.3.101: \frac{1}{2}(cos t + sin t - e^{-t})
6.3.102: 5t - 5 sin t
6.3.103: \frac{1}{2}(sin t - t cos t)
6.3.104: \int_0^t f(\tau) \bigl(1 - cos(t - \tau)\bigr) \, d\tau
6.4.101: x(t) = t
6.4.102: x(t) = e^{-at}
6.4.103: x(t) = (cos * sin)(t) = \frac{1}{2} t sin(t)
6.4.104: \delta(t) - sin(t)
6.4.105: 3\delta(t - 1) + 2t
7.1.101: Yes. Radius of convergence is 10.
7.1.102: Yes. Radius of convergence is e.
7.1.103: \frac{1}{1-x} = -\frac{1}{1-(2-x)}, so \frac{1}{1-x} = \sum_{n=0}^\infty (-1)^{n+1} (x-2)^n, which converges for 1 < x < 3.
7.1.104: \sum_{n=7}^\infty \frac{1}{(n-7)!} x^n
7.1.105: f(x) - g(x) is a polynomial. Hint: Use Taylor series.
7.2.101: a_2 = 0, a_3 = 0, a_4 = 0, recurrence relation (for k ≥ 5): a_k = -2 a_{k-5}, so:
y(x) = a_0 + a_1 x - 2 a_0 x^5 - 2 a_1 x^6 + 4 a_0 x^{10} + 4 a_1 x^{11} - 8 a_0 x^{15} - 8 a_1 x^{16} + \cdots
7.2.102: a) a_2 = \frac{1}{2}, and for k ≥ 3 we have a_k = a_{k-3} + 1, so
y(x) = a_0 + a_1 x + \frac{1}{2} x^2 + (a_0 + 1) x^3 + (a_1 + 1) x^4 + \frac{3}{2} x^5 + (a_0 + 2) x^6 + (a_1 + 2) x^7 + \frac{5}{2} x^8 + (a_0 + 3) x^9 + (a_1 + 3) x^{10} + \cdots
b) y(x) = \frac{1}{2} x^2 + x^3 + x^4 + \frac{3}{2} x^5 + 2 x^6 + 2 x^7 + \frac{5}{2} x^8 + 3 x^9 + 3 x^{10} + \cdots
7.2.103: Applying the method of this section directly we obtain a_k = 0 for all k, and so y(x) = 0 is
the only solution we find.
7.3.101: a) ordinary, b) singular but not regular singular, c) regular singular, d) regular singular, e)
ordinary.
7.3.102: y = A x^{(1+\sqrt{5})/2} + B x^{(1-\sqrt{5})/2}
7.3.103: y = x^{3/2} \sum_{k=0}^\infty \frac{(-1)^k}{k! \, (k+2)!} x^k (Note that for convenience we did not pick a_0 = 1.)
7.3.104: y = A x + B x \ln(x)