Solving second order ordinary differential equations
1 Revision
Consider the equation
y'' + a_1(x)y' + a_0(x)y = r(x)    (1)
satisfied by the function y(x), where a prime indicates differentiation with respect to x. The equation is linear, so we know that its solution has the form y(x) = y_CF(x) + y_PI(x), where y_CF(x) is the general solution of (1) with r = 0 and, as the equation is second order, contains two arbitrary constants. So y_CF(x) = Ay_1(x) + By_2(x), where y_1 and y_2 both satisfy (1) with r = 0 and are linearly independent, meaning there are no constants C_1, C_2, not both zero, such that C_1y_1 + C_2y_2 = 0 for all x. The function y_PI(x) is any one function that satisfies (1). It is often difficult to find: up until now the method has been guided guesswork, but there are general methods that we shall investigate shortly.
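As a concrete illustration of the structure y = y_CF + y_PI, here is a quick sympy check. The equation y'' + 3y' + 2y = 4x and its particular integral are our own choices for illustration, not taken from the notes:

```python
import sympy as sp

x = sp.symbols('x')
A, B = sp.symbols('A B')

# Constant-coefficient example y'' + 3y' + 2y = 4x (a1 = 3, a0 = 2, r = 4x).
y1 = sp.exp(-x)       # first homogeneous solution
y2 = sp.exp(-2*x)     # second homogeneous solution
yPI = 2*x - 3         # a particular integral, found by trial
y = A*y1 + B*y2 + yPI

# The residual vanishes for every A and B: yCF kills the left side,
# yPI produces the forcing.
residual = sp.simplify(y.diff(x, 2) + 3*y.diff(x) + 2*y - 4*x)
print(residual)  # 0
```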
2 Linear dependence - the Wronskian
If two functions y_1, y_2 are linearly dependent then C_1y_1(x) + C_2y_2(x) = 0 for all x, for some constants C_1, C_2, not both zero. Differentiating this expression yields C_1y_1'(x) + C_2y_2'(x) = 0. If we treat this pair as two equations for the values of C_1, C_2, a non-trivial solution exists only if the determinant vanishes:

( y_1   y_2  ) ( C_1 )   ( 0 )
( y_1'  y_2' ) ( C_2 ) = ( 0 )    ⇒    W = y_1y_2' − y_1'y_2 = 0.    (2)
The expression W (x) is called the Wronskian of the functions y1,2 . See http://en.wikipedia.org/wiki/Wronskian.
Exercises
1. Can you extend the idea to more than two functions, a situation which would be relevant to differential
equations of higher order?
2. Show that the functions {ex , e−x } are linearly independent. How about {1, x, x2 } or {1, x, 3 − x}?
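A quick way to experiment with Exercise 2 is to compute Wronskians symbolically. This is a small sketch (the helper `wronskian` is ours, not part of the notes); for a pair of functions, W ≡ 0 signals linear dependence:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(f, g):
    # W = f g' - f' g; identically zero when f and g are proportional
    return sp.simplify(f*g.diff(x) - f.diff(x)*g)

print(wronskian(sp.exp(x), sp.exp(-x)))  # -2, so e^x, e^{-x} are independent
print(wronskian(x, 3*x))                 # 0, a dependent pair
print(wronskian(sp.Integer(1), 3 - x))   # -1, an independent pair
```

For the triple {1, x, 3 − x} of Exercise 2 one needs the 3 × 3 Wronskian of Exercise 1, which does vanish, since 3 − x = 3·1 − 1·x.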
3 Reduction of order
It is not easy to find two linearly independent functions to form y_CF. The reduction of order method allows one to find a second solution if a first is known. Let y = u(x) be a solution of (1) with r = 0 and let us look for a second solution of the form y(x) = u(x)v(x). Substitution gives

uv'' + 2u'v' + u''v + a_1(uv' + u'v) + a_0uv = v(u'' + a_1u' + a_0u) + uv'' + (2u' + a_1u)v' = 0,

where the bracketed term vanishes as u satisfies (1) with r = 0,
so that, if we introduce z = v', we get a first order equation for z and we have reduced the order of the equation. Dividing by uz, we have

z'/z + 2u'/u + a_1(x) = 0  ⇒  ln z + ln u² + ∫^x a_1(t) dt = constant  ⇒  v'(x) = z(x) = (A/u²) e^{−∫^x a_1(t) dt},
and integrating again, and recalling y = uv,

y(x) = A u(x) ∫^x (1/u²(t)) e^{−∫^t a_1(s) ds} dt + B u(x),    (3)

where the first term is the new solution and the second is a multiple of the original solution.
Rather than remembering (3), it is sometimes better to proceed from first principles. See also
http://en.wikipedia.org/wiki/Reduction_of_order.
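As a sanity check on the procedure, here is a sympy sketch applying formula (3) to y'' + 2y' + y = 0 (so a_1 = 2) with the known solution u = e^{−x}; the choice of equation and the integration limits are ours, made for illustration:

```python
import sympy as sp

x, t, s = sp.symbols('x t s')

# Reduction of order, formula (3), for y'' + 2y' + y = 0 with u = exp(-x).
a1 = sp.Integer(2)
u = sp.exp(-x)

inner = sp.exp(-sp.integrate(a1, (s, 0, t)))   # e^{-∫^t a1 ds} = e^{-2t}
v_prime = inner / u.subs(x, t)**2              # integrand of (3); equals 1 here
y_new = sp.simplify(u * sp.integrate(v_prime, (t, 0, x)))
print(y_new)  # x*exp(-x), the expected second solution
```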
Example
Legendre's equation of order one is

(1 − x²)y'' − 2xy' + 2y = 0.
It's easy to spot that y = x is one solution. To find a second, write y = xv and substitute to get

(1 − x²)(xv'' + 2v') − 2x(xv' + v) + 2xv = (1 − x²)(xv'' + 2v') − 2x²v' = 0,  ⇒  v''/v' + (2 − 4x²)/(x(1 − x²)) = 0,

and, using partial fractions and integrating,

ln v' + ∫^x [1/(t − 1) + 2/t + 1/(1 + t)] dt = ln v' + ln(x − 1) + 2 ln x + ln(1 + x) = constant  ⇒  v' = B/(x²(x² − 1))

for a constant B. Using partial fractions to integrate again,

v(x) = B ∫^x [1/(2(t − 1)) − 1/t² − 1/(2(t + 1))] dt = B [1/x + (1/2) ln((1 − x)/(1 + x))],

so that

y(x) = xv(x) = B [1 + (x/2) ln((1 − x)/(1 + x))].

Restoring the constant of integration in v contributes a multiple of the solution we know already, Ax, so the general solution to Legendre's equation of order one is

y(x) = Ax + B [1 + (x/2) ln((1 − x)/(1 + x))].
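The second solution for Legendre's equation of order one, y = 1 + (x/2) ln((1 − x)/(1 + x)), can be checked by direct substitution; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Check that y satisfies (1 - x^2) y'' - 2x y' + 2y = 0.
y = 1 + (x/2)*sp.log((1 - x)/(1 + x))
residual = sp.simplify((1 - x**2)*y.diff(x, 2) - 2*x*y.diff(x) + 2*y)
print(residual)  # 0
```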
4 The Wronskian of the functions in the complementary function
Here we consider some properties of W(x) = y_1y_2' − y_1'y_2, where y_1, y_2 satisfy

y_1'' + a_1y_1' + a_0y_1 = 0,    (4)
y_2'' + a_1y_2' + a_0y_2 = 0.    (5)

Taking y_1(5) − y_2(4) gives y_1y_2'' − y_1''y_2 + a_1(y_1y_2' − y_1'y_2) = 0. Now W' = (y_1y_2' − y_1'y_2)' = y_1'y_2' + y_1y_2'' − y_1''y_2 − y_1'y_2' = y_1y_2'' − y_1''y_2. Hence

dW/dx + a_1W = 0  ⇒  W = C e^{−∫^x a_1(t) dt}.    (6)
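As a check on (6), here is a sympy sketch using Legendre's equation of order one in standard form (divide by 1 − x², so a_1 = −2x/(1 − x²)) together with its two solutions, x and 1 + (x/2) ln((1 − x)/(1 + x)):

```python
import sympy as sp

x = sp.symbols('x')

# Verify dW/dx + a1 W = 0 for Legendre's equation of order one,
# written as y'' + a1 y' + a0 y = 0 with a1 = -2x/(1 - x^2).
a1 = -2*x/(1 - x**2)
y1 = x
y2 = 1 + (x/2)*sp.log((1 - x)/(1 + x))

W = y1*y2.diff(x) - y1.diff(x)*y2
residual = sp.simplify(W.diff(x) + a1*W)
print(residual)  # 0
```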
5 Variation of Parameters
We now return to looking for solutions to (1) with r(x) non-zero. We presume we have the two solutions y_1(x) and y_2(x) which make up the complementary function, and look for a solution to (1) of the form y(x) = A(x)y_1(x) + B(x)y_2(x) for suitable functions A(x) and B(x). There is too much freedom in this choice: we could manage with B = 0, as we do in the method of reduction of order, since we can after all write any function as A(x)y_1(x) for appropriate A(x). We use this freedom to our advantage. Evaluating y' gives
y' = A'y_1 + B'y_2 + Ay_1' + By_2' = Ay_1' + By_2'

if we use our freedom to insist

A'y_1 + B'y_2 = 0.    (7)
With this choice, differentiating again gives y'' = A'y_1' + B'y_2' + Ay_1'' + By_2'', so that we have avoided second derivatives of A and B in this expression, and substitution into (1) gives

A[y_1'' + a_1y_1' + a_0y_1] + B[y_2'' + a_1y_2' + a_0y_2] + A'y_1' + B'y_2' = r(x),    (8)

where both bracketed terms are zero as y_1 and y_2 are in the CF.
We now have two linear equations for A' and B', namely (7) and (8). Solving these leads to

dA/dx = −ry_2/W  ⇒  A(x) = −∫^x r(t)y_2(t)/W(t) dt,   and   dB/dx = ry_1/W  ⇒  B(x) = ∫^x r(t)y_1(t)/W(t) dt,   where W = y_1y_2' − y_1'y_2.    (9)
Since y = Ay_1 + By_2, constants of integration, a and b say, in A and B will lead to reproduction of the complementary function ay_1 + by_2. Putting all this together leads to

y(x) = ay_1 + by_2 + ∫^x [y_1(t)y_2(x) − y_1(x)y_2(t)] r(t)/W(t) dt.    (10)

See http://en.wikipedia.org/wiki/Variation_of_parameters.
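Formulae (9) and (10) can be exercised symbolically. The sketch below applies them to y'' + y = sec x (an equation chosen by us for illustration), with y_1 = cos x, y_2 = sin x and W = 1, and checks the result by substitution:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Variation of parameters for y'' + y = sec(x): y1 = cos, y2 = sin,
# so W = y1*y2' - y1'*y2 = 1, and (9) gives A and B directly.
r = 1/sp.cos(t)
A = -sp.integrate(r*sp.sin(t), (t, 0, x))   # A = -∫ r y2 / W dt
B = sp.integrate(r*sp.cos(t), (t, 0, x))    # B =  ∫ r y1 / W dt
yPI = A*sp.cos(x) + B*sp.sin(x)             # the classical x sin x + cos x ln cos x
residual = sp.simplify(yPI.diff(x, 2) + yPI - 1/sp.cos(x))
print(residual)  # 0
```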
Example
Solve y'' + 2y' + y = e^{−x} ln x. The auxiliary equation for the homogeneous version is α² + 2α + 1 = (α + 1)² = 0, so the CF is y = Ae^{−x} + Bxe^{−x} = Ay_1 + By_2, say. If y = A(x)y_1(x) + B(x)y_2(x) is the solution of the forced equation then, if we choose A'y_1 + B'y_2 = 0, y' = Ay_1' + By_2' and y'' = A'y_1' + B'y_2' + Ay_1'' + By_2''. Since y_1, y_2 satisfy the homogeneous equation, substitution leads to A'y_1' + B'y_2' = e^{−x} ln x. Solving these two equations for A' and B' gives A'(y_2y_1' − y_1y_2') = e^{−x} ln x · y_2 and B'(y_2y_1' − y_1y_2') = −e^{−x} ln x · y_1. Now y_2y_1' − y_1y_2' = xe^{−x}(−e^{−x}) − e^{−x}(e^{−x} − xe^{−x}) = −e^{−2x}. So A' = e^{−x} ln x · xe^{−x}/(−e^{−2x}) = −x ln x and B' = −e^{−x} ln x · e^{−x}/(−e^{−2x}) = ln x. Integrating, ignoring constants of integration, A = −x² ln x/2 + x²/4, B = x ln x − x, and y_PI = Ae^{−x} + Bxe^{−x} = x²e^{−x}[(1/2) ln x − 3/4]. Finally y = Ay_1 + By_2 + y_PI.
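The particular integral above is easy to verify by direct substitution; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Check yPI = x^2 e^{-x} (ln(x)/2 - 3/4) against y'' + 2y' + y = e^{-x} ln x.
yPI = x**2*sp.exp(-x)*(sp.log(x)/2 - sp.Rational(3, 4))
residual = sp.simplify(yPI.diff(x, 2) + 2*yPI.diff(x) + yPI
                       - sp.exp(-x)*sp.log(x))
print(residual)  # 0
```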
6 Generalised Transforms
Recall some results involving the Laplace transform ȳ(s) and Fourier transform ŷ(k) of the function y(x):

ȳ(s) = ∫_0^∞ e^{−sx} y(x) dx,   \overline{y'} = sȳ − y(0),   \overline{xy} = −ȳ',   y(x) = (1/2πi) ∫_γ e^{sx} ȳ(s) ds,    (11)

ŷ(k) = (1/√(2π)) ∫_{−∞}^∞ e^{−ikx} y(x) dx,   \widehat{xy} = iŷ',   \widehat{y'} = ikŷ,   y(x) = (1/√(2π)) ∫_{−∞}^∞ e^{ikx} ŷ(k) dk.    (12)
In both cases notice that multiplying by x corresponds to a differentiation of the transform, differentiation with
respect to x corresponds to multiplication of the transform by the transform variable (s or k) and y is expressed as
a contour integral in the transform variable, although the contour is the real axis in the case of the Fourier
transform. Here we extend this idea and look for solutions of a differential equation
(a_1x + a_0)y'' + (b_1x + b_0)y' + (c_1x + c_0)y = 0,    (13)

by looking for solutions of the form

y(x) = ∫_C e^{xt} f(t) dt  ⇒  y'(x) = ∫_C t e^{xt} f(t) dt  ⇒  y''(x) = ∫_C t² e^{xt} f(t) dt,    (14)

substituting into (13) and seeing what happens. We need to find an appropriate contour C and function f.
To illustrate things, we will start with the rather straightforward case of constant coefficients, so that a_1 = b_1 = c_1 = 0. Substitution gives

a_0 ∫_C t²e^{xt}f(t) dt + b_0 ∫_C te^{xt}f(t) dt + c_0 ∫_C e^{xt}f(t) dt = ∫_C [a_0t² + b_0t + c_0] e^{xt}f(t) dt = 0.
This integral is guaranteed to be zero, as required, if C is closed and the integrand is regular inside C (http://en.wikipedia.org/wiki/Cauchy%27s_integral_theorem). Since e^{xt} is regular, we need (a_0t² + b_0t + c_0)f(t) to be regular inside our C. Here regular means analytic or holomorphic (http://en.wikipedia.org/wiki/Holomorphic_function). However the solution, proportional to ∫_C e^{xt}f(t) dt, should not be the zero solution: if f itself were regular inside C, Cauchy's theorem would give y(x) = 0, so we need f to have singularities.
Let α and β be the roots of a_0t² + b_0t + c_0 = 0, which are the roots of the auxiliary equation; then we look for f such that (t − α)(t − β)f(t) is free of singularities in some region, which we encircle with C, but such that f(t) itself has singularities. We choose f(t) = A(t − α)^{−1} + B(t − β)^{−1}, so that (a_0t² + b_0t + c_0)f(t) = a_0[A(t − β) + B(t − α)], which is regular as required. With this choice

y(x) = ∫_C e^{xt} [A/(t − α) + B/(t − β)] dt.
The contour C = C_α gives one solution, y_1(x); C = C_β gives a second, independent solution y_2(x), since it is not possible to deform C_β into C_α without crossing a singularity of f. The contour C_0 gives the zero solution, as e^{xt}f(t) is regular inside C_0. We can find explicit forms for these solutions here, using Cauchy's integral formula:
∮_C g(t)/(t − t_0) dt = 2πi g(t_0)   if g(t) is regular inside C.

[Figure: the singularities at t = α and t = β, with contours C_α and C_β each encircling one of them, C_0 enclosing neither, and C_{α+β} enclosing both.]
(http://en.wikipedia.org/wiki/Cauchy%27s_integral_formula)
With C = C_α, t_0 = α and g(t) = e^{xt}(A + B(t − α)/(t − β)), this gives y_1(x) = 2πi g(α) = 2πi Ae^{αx} = Ãe^{αx}. Similarly, C_β generates y_2(x) = B̃e^{βx}. This is of course the solution we would have found by elementary methods. A sum of these two solutions, giving the general solution, would be generated with the contour C_{α+β} encircling both singularities.
The method works too if the auxiliary equation has a double root, a_0t² + b_0t + c_0 = a_0(t − α)². We choose f(t) = A(t − α)^{−2} + B(t − α)^{−1} and C_α gives the solution

y(x) = ∮_{C_α} e^{xt} [A/(t − α)² + B/(t − α)] dt.
This can be evaluated using the residue theorem (http://en.wikipedia.org/wiki/Residue_Theorem). You will recall that the residue of g(t), say, at a pole, here t = α, is the coefficient of (t − α)^{−1} in the Laurent expansion of g(t) about t = α (http://en.wikipedia.org/wiki/Residue_%28complex_analysis%29). The easiest way to find that here is to write

g(t) = e^{xt} [A/(t − α)² + B/(t − α)] = e^{αx} e^{x(t−α)} [A/(t − α)² + B/(t − α)] ≈ e^{αx} (1 + x(t − α) + · · ·) [A/(t − α)² + B/(t − α)]
≈ Ae^{αx}/(t − α)² + (Ax + B)e^{αx}/(t − α) + · · ·   as t → α.
The residue is therefore (Ax + B)eαx and the residue theorem gives y(x) as 2πi times this, i.e. y(x) = eαx (Ã + B̃x),
as with elementary methods.
Consider now the general case. Substitution of (14) into (13) requires

∫_C { x[a_1t² + b_1t + c_1] + [a_0t² + b_0t + c_0] } e^{xt} f(t) dt = 0.    (15)
There are two ways of proceeding from here. The first is to try to find a function g(t) such that the integrand is d[e^{xt}g(t)]/dt, so that we are left with the requirement that [e^{xt}g(t)]_C = 0. The second is to use integration by parts on the xe^{xt} term in (15). We will illustrate the first method here and the second in later examples.
Since d[e^{xt}g(t)]/dt = e^{xt}(g' + xg), comparing coefficients of xe^{xt} with (15) we identify

g(t) = (a_1t² + b_1t + c_1)f(t),   g'(t) = (a_0t² + b_0t + c_0)f(t),  ⇒  g'/g = (a_0t² + b_0t + c_0)/(a_1t² + b_1t + c_1),  ⇒  ln g = ∫ (a_0t² + b_0t + c_0)/(a_1t² + b_1t + c_1) dt.

Once g(t) has been calculated, f(t) can be found through f(t) = g(t)/(a_1t² + b_1t + c_1) or f(t) = g'(t)/(a_0t² + b_0t + c_0). The solution is, recall,

y(x) = ∫_C e^{xt} f(t) dt,   with C chosen so that [e^{xt}g(t)]_C = 0.
Example
Solve

xy'' + 4y' − xy = 0,   x > 0,

looking for a solution of the form y(x) = ∫_C e^{xt} f(t) dt. We will see the relevance of the condition x > 0 as we proceed. Equation (14) reminds us of the form of the derivatives of y, and substitution implies we need

∫_C (xt² + 4t − x)e^{xt} f(t) dt = ∫_C d/dt [e^{xt}g(t)] dt = [e^{xt}g(t)]_C = 0,    (16)
where we write

(xt² + 4t − x)e^{xt}f(t) = d/dt [e^{xt}g(t)] = e^{xt}g'(t) + xe^{xt}g(t),   g' = 4tf,   g = (t² − 1)f.

So

g'/g = 4t/(t² − 1)  ⇒  ln g = 2 ln(t² − 1)  ⇒  g(t) = (t² − 1)²   and   f(t) = g(t)/(t² − 1) = t² − 1.
The solution therefore is

y(x) = ∫_C e^{xt}(t² − 1) dt,   where C is chosen so that [e^{xt}(t² − 1)²]_C = 0.
There are zeros of e^{xt}(t² − 1)² at t = 1, t = −1 and as t → −∞, and joining any two of these zeros with a contour C will make [e^{xt}(t² − 1)²]_C = 0. This is where the condition x > 0 comes into play. There are two ways of doing this join to get two independent solutions: C_1, joining −1 and 1, generating the solution y_1(x), and C_2, joining −∞ and −1, generating y_2(x). A contour joining −∞ and 1 would generate y_1 + y_2, which is not independent of y_1 and y_2. Similarly a contour from 1 to −1 will generate −y_1. Contours C_1 and C_2 can run along the real axis, so that

y_1(x) = ∫_{−1}^{1} e^{xt}(t² − 1) dt,   y_2(x) = ∫_{−∞}^{−1} e^{xt}(t² − 1) dt,   y(x) = Ay_1(x) + By_2(x).
We now turn to looking at the properties of these solutions. Putting x = 0 gives e^{xt} = 1 and y_1(0) = ∫_{−1}^{1} (t² − 1) dt, which is a finite number. In contrast, y_2(0) = ∫_{−∞}^{−1} (t² − 1) dt, which is an improper divergent integral, diverging at its lower limit. However for positive x, however small, e^{xt} → 0 as t → −∞ sufficiently quickly for the integral y_2(x) to converge, giving a finite result. We conclude that y_2(x) has a singularity at x = 0.
Examining the limit x → ∞ shows that y_1(x) → ∞, as it has an interval of integration in which t > 0 and so e^{xt} in the integrand gets arbitrarily large. In contrast, the interval in y_2(x), (−∞, −1), has t < 0, so that as x → ∞ the factor e^{xt} → 0 and the integral y_2(x) → 0. We will see more of this type of argument later in the course, where we will cover techniques for extracting more information about the behaviour of these integral solutions.
An alternative method of solving this type of second order equations is to look for a power series solution about
x = 0, as discussed in Methods 4. This gives a lot of information about the behaviour of the solution as x → 0 but
very little about the behaviour away from x = 0 and, although it is possible to extract the behaviour as x → ∞
from the series solution as x → 0, it is not straightforward. The current method gives a form of solution with an explicit connection between the solution at x = 0 and the solution as x → ∞. In the current example we see y_1 is finite at x = 0 but grows exponentially as x → ∞. In contrast, y_2 is finite as x → ∞ but grows as x → 0. Hence there is no solution finite as both x → 0 and x → ∞.
It is possible to get explicit forms for the solutions as these integrals can be evaluated. Note that if this is not the
case, as explained above, the integral form of the solution is still very valuable. Splitting the range of integration in
y1 and making a change of variable t → −t in one range gives,
y_1(x) = ∫_{−1}^{0} e^{xt}(t² − 1) dt + ∫_0^1 e^{xt}(t² − 1) dt = ∫_0^1 e^{−xt}(t² − 1) dt + ∫_0^1 e^{xt}(t² − 1) dt = 2 ∫_0^1 cosh(xt)(t² − 1) dt = (4/x³)[sinh x − x cosh x],

integrating by parts twice. The growth at infinity is clear, and l'Hôpital's rule shows that it is finite at x = 0. In y_2(x), make the substitution t → −t and integrate by parts twice:

y_2(x) = ∫_1^∞ e^{−xt}(t² − 1) dt = (2e^{−x}/x³)(1 + x).
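Both closed forms can be checked against the original equation xy'' + 4y' − xy = 0 by direct substitution; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The explicit forms obtained for x y'' + 4 y' - x y = 0.
y1 = 4*(sp.sinh(x) - x*sp.cosh(x))/x**3
y2 = 2*sp.exp(-x)*(1 + x)/x**3

r1 = sp.simplify(x*y1.diff(x, 2) + 4*y1.diff(x) - x*y1)
r2 = sp.simplify(x*y2.diff(x, 2) + 4*y2.diff(x) - x*y2)
print(r1, r2)  # 0 0
```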
We will now solve the same equation using the second method, direct substitution and then integration by parts.
From (16) we need

0 = ∫_C {x(t² − 1) + 4t} e^{xt} f(t) dt = [e^{xt}(t² − 1)f(t)]_C + ∫_C e^{xt} { 4tf(t) − d/dt [(t² − 1)f(t)] } dt,

using parts on the xe^{xt} · f(t)(t² − 1) part of the integral. So we need

0 = [e^{xt}(t² − 1)f(t)]_C + ∫_C e^{xt} (4ft − 2ft − (t² − 1)f') dt.

We ensure this by insisting f satisfies (t² − 1)f' = 2ft and then choosing C so that [e^{xt}(t² − 1)f(t)]_C = 0, just as before.
We can now see explicitly that this method only works if the equation is linear and the degree of the polynomial coefficients in the differential equation is less than the order of the equation. If the coefficients were quadratic in a second order equation, then two integrations by parts would be needed to get rid of the x²f term; these would lead to terms in f'', a second order equation would need to be solved for f, and we would likely be no better off.
We also see the connection with Laplace and Fourier transforms in that differentiation of the function y leads to a
multiplication of the generalised transform f by t and that multiplication of y by x leads to differentiation of the
transform through the integration by parts.
Another example
xy'' + (3x − 1)y' − 9y = 0,   x > 0,   y(x) = ∫_C e^{xt} f(t) dt.
Substitution, gathering together the terms that carry xe^{xt}, gives

∫_C [t²f + 3tf] xe^{xt} dt − ∫_C [tf + 9f] e^{xt} dt = 0.

Integration by parts in the first integral changes the x into a −d/dt, giving

[(t²f + 3tf)e^{xt}]_C − ∫_C { d/dt [t²f + 3tf] + (t + 9)f } e^{xt} dt = 0,

which we ensure by insisting that the integrand in the integral is identically zero and then choosing C appropriately:

0 = d/dt [t²f + 3tf] + (t + 9)f = f'(t² + 3t) + f[2t + 3 + t + 9]  ⇒  f'/f = −(3t + 12)/(t(t + 3)) = 1/(t + 3) − 4/t,

using partial fractions. Integration gives f = (t + 3)/t⁴ and we have a solution

y(x) = ∫_C e^{xt}(t + 3)/t⁴ dt,   with C chosen so that [(t² + 3t) · e^{xt}(t + 3)/t⁴]_C = [(t + 3)²e^{xt}/t³]_C = 0.
The function (t + 3)²e^{xt}/t³ is zero in only two places, t = −3 and t → −∞, and although one suitable C can be found by joining these two, with C_2, a second solution needs a new idea. The function is single valued (the exponents are integers, not fractions), so any closed contour C will make the bracketed term vanish. Most closed contours will give a zero solution for y(x), except those which encircle a singularity of e^{xt}f(t) = e^{xt}(t + 3)/t⁴, which give a non-zero y(x). Such a contour is C_1, which encircles the singularity at the origin.

[Figure: C_2 runs from −∞ to −3 along the real axis; C_1 is a closed contour encircling the origin.]
We have

y_1(x) = (1/2πi) ∮_{C_1} e^{xt}(t + 3)/t⁴ dt,   y_2(x) = ∫_{−∞}^{−3} e^{xt}(t + 3)/t⁴ dt = ∫_3^∞ e^{−xt}(3 − t)/t⁴ dt.
The 2πi factor ensures that y_1 is real. An explicit form for y_1 can be found using Cauchy's integral formula for derivatives (http://en.wikipedia.org/wiki/Cauchy%27s_integral_formula),

g^{(n)}(t_0) = (n!/2πi) ∫_C g(t)/(t − t_0)^{n+1} dt,

so that

y_1(x) = (1/3!) d³/dt³ [e^{xt}(t + 3)]|_{t=0} = (1/6) [x³e^{xt}(t + 3) + 3x²e^{xt} · 1 + 0 + 0]_{t=0} = x²(x + 1)/2,

where we have used Leibnitz' theorem to quickly evaluate the third derivative. This solution is therefore a simple polynomial in x. The second solution y_2(x) has an integrand that, for x > 0, decays as t → ∞, and so y_2(x) → 0 as x → ∞.
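The polynomial form of y_1 is easily confirmed by substitution into the equation; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# The solution of x y'' + (3x - 1) y' - 9 y = 0 extracted with
# Cauchy's formula for derivatives.
y1 = x**2*(x + 1)/2
residual = sp.expand(x*y1.diff(x, 2) + (3*x - 1)*y1.diff(x) - 9*y1)
print(residual)  # 0
```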
(Later in this course we will see how to find out exactly how it decays.) The function y2 (x) has a singularity at
x = 0 as its second derivative there is infinite. Differentiating under the integral sign gives
y_2(x) = ∫_3^∞ e^{−xt}(3 − t)/t⁴ dt,   y_2'(x) = ∫_3^∞ −te^{−xt}(3 − t)/t⁴ dt = −∫_3^∞ e^{−xt}(3 − t)/t³ dt,   y_2''(x) = ∫_3^∞ e^{−xt}(3 − t)/t² dt.
For x > 0 all these integrals converge due to the exponential decay of the integrand as t → ∞. However for x = 0 they yield
y_2(0) = ∫_3^∞ (3 − t)/t⁴ dt,   y_2'(0) = −∫_3^∞ (3 − t)/t³ dt,   y_2''(0) = ∫_3^∞ (3 − t)/t² dt.
The first two integrals converge, giving finite values for y_2(0) and y_2'(0), but the third diverges due to the 1/t behaviour of the integrand as t → ∞. Hence y_2''(0) does not exist.
Example - Airy's Equation
Airy's equation is y'' − xy = 0. It is important as it bridges the exponential behaviour of y'' − y = 0 and the oscillatory behaviour of y'' + y = 0. Its solutions are oscillatory in x < 0 and exponentially growing/decaying in x > 0. The solution that contains no component of the growing solution, and so decays, is denoted Ai(x). The second tabulated solution is written Bi(x). See http://en.wikipedia.org/wiki/Airy_function.

[Figure: graphs of Ai(x) and Bi(x) on −10 < x < 4, oscillating for x < 0, with Ai decaying and Bi growing for x > 0.]

We look for a solution y(x) = ∫_C f(t)e^{xt} dt. Direct substitution implies that we need

0 = ∫_C (t² − x)f(t)e^{xt} dt = −[f(t)e^{xt}]_C + ∫_C (t²f + f')e^{xt} dt,

so we choose f' = −t²f, solve for f = e^{−t³/3}, and then choose C so that [e^{xt}f]_C = [e^{xt−t³/3}]_C = 0. There are no zeros of e^{xt−t³/3} except possibly as |t| → ∞. As |t| → ∞ the behaviour of e^{xt−t³/3} = e^{xt}e^{−t³/3} is determined by the sign of the real part of t³: if it is positive the function is exponentially small, but it grows to be exponentially large where the sign is negative. To investigate the sign put t = R(cos θ + i sin θ), so that t³ = R³(cos 3θ + i sin 3θ), with real part R³ cos 3θ. This is positive where −5π/2 < 3θ < −3π/2, −π/2 < 3θ < π/2 or 3π/2 < 3θ < 5π/2, i.e. −5π/6 < θ < −π/2, −π/6 < θ < π/6 or π/2 < θ < 5π/6. Hence any contour C that originates at infinity in one of these sectors and then exits to infinity in another one will ensure [e^{xt−t³/3}]_C = 0, and the solution will be ∫_C e^{xt−t³/3} dt.

[Figure: the complex t-plane divided by the rays arg(t) = ±π/6, ±π/2 and ±5π/6; the contours C_1, C_2 and C_3 each join infinity in one sector of exponential decay to infinity in another.]

Alternatively any closed C will give a solution, but this will be the zero solution as the exponential function is free of singularities. Similarly any contour that starts at infinity in one sector and then exits by the same sector will also give the zero solution. This is because the contour can be "closed at infinity" within the sector, where the integrand can be made arbitrarily small by making |t| large. The closed contour contains no singularities and so the integral for y(x) gives the zero solution. To be more explicit, consider the integral along a contour C̃_1 extending to infinity within a single sector and generating the solution ỹ. The contour made up of the union of the parts of C̃_1 with modulus less than R and the arc C̃_1(R), on which |t| = R, is a closed contour giving a zero value for y. The difference between the integral giving ỹ and this zero integral consists of contributions along C̃_1 with |t| > R and along C̃_1(R), and so can be made as small as we please as R → ∞. Hence the integral along C̃_1, i.e. ỹ, must be as close as we please to zero, and so must be zero. The similar argument applied to a contour C̃_2 which enters and exits in different sectors fails, as the integrand is not exponentially small along C̃_2(R).

Hence we identify three contours giving three non-zero solutions for y, y_i(x) = ∫_{C_i} e^{xt−t³/3} dt. However only two of these are independent, as y_1(x) + y_2(x) + y_3(x) = 0, since the contours C_1, C_2 and C_3 can be "joined at infinity" to give a closed contour containing no singularities. The tabulated solutions Ai(x) and Bi(x) are given by
Ai(x) = (1/2πi) y_1(x) = (1/2πi) ∫_{C_1} e^{xt−t³/3} dt,   Bi(x) = (1/2π) (y_2(x) − y_3(x)).    (17)
We will consider Ai(x) further and make a specific choice for C_1 in (17), choosing C_1 to run parallel to the imaginary axis: C_1 = C(ε) given by t = −ε + is, −∞ < s < ∞. Remember we will get the same function Ai(x) whatever C_1 we choose, as long as it enters and exits the complex t-plane in the right sectors, so the result will be independent of ε as long as ε > 0. We can take the limiting case ε → 0, giving a limiting contour running along the imaginary axis. The integrand in this case does not become small as |s| → ∞, but it does not grow, and it oscillates increasingly rapidly. The integral is not absolutely convergent but does converge.
Ai(x) = (1/2πi) ∫_{−∞}^{∞} e^{(is)x−(is)³/3} i ds = (1/2π) ∫_{−∞}^{∞} e^{i(sx+s³/3)} ds
= (1/2π) ∫_{−∞}^{0} e^{i(sx+s³/3)} ds + (1/2π) ∫_0^{∞} e^{i(sx+s³/3)} ds   (put s → −s in the first integral)
= (1/2π) ∫_0^{∞} [e^{−i(sx+s³/3)} + e^{i(sx+s³/3)}] ds = (1/π) ∫_0^{∞} cos(sx + s³/3) ds.    (18)
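Formula (18) can be sanity-checked numerically. The sketch below evaluates the oscillatory integral at x = 0, truncating at s = 20 (our choice: one integration by parts shows the neglected tail is O(1/s²)), and compares with scipy's tabulated Airy function:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import airy

# Numerical check of (18) at x = 0, where Ai(0) ≈ 0.3550.
x = 0.0
s = np.linspace(0.0, 20.0, 2_000_001)   # fine grid: the phase varies like s^3/3
val = trapezoid(np.cos(x*s + s**3/3), s) / np.pi
print(abs(val - airy(x)[0]) < 1e-2)  # True
```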