4 Boundary Value Problems

In this chapter we deal with boundary value problems of the form
$$(p(t)x'(t))' + r(t)x(t) = f(t), \tag{4.1}$$
$$B_1(x(a), x'(a), x(b), x'(b)) = 0, \quad B_2(x(a), x'(a), x(b), x'(b)) = 0, \tag{4.2}$$
where $B_1$ and $B_2$ are linear functions of their variables, $p(t) > 0$ on $[a, b]$, and $r(t), q(t)$ are continuous functions on $[a, b]$. We note that any second order equation of the form
$$x'' + A(t)x' + B(t)x + \lambda C(t)x = 0$$
can be transformed into the self-adjoint (normal) form above by multiplying the equation with $p(t) = e^{\int_0^t A(s)\,ds}$. Since $p' = Ap$, we get
$$(px')' = Apx' + px'' = Apx' + p(-Ax' - Bx - \lambda Cx) = -pBx - \lambda pCx.$$
Hence, setting $r(t) = p(t)B(t)$ and $q(t) = p(t)C(t)$, the equation becomes the self-adjoint eigenvalue form $(p(t)x'(t))' + r(t)x(t) + \lambda q(t)x(t) = 0$ studied below.
Solvability features of boundary value problems are similar to those of the linear system $Ax = b$. In this case we have the following. Let $A$ be an $n \times n$ matrix whose eigenvectors form a basis of $\mathbb{R}^n$, and let $B = \{e_1, e_2, \dots, e_n\}$ be an orthonormal basis of eigenvectors of $A$; that is, $\langle e_i, e_j\rangle = 0$ if $i \neq j$ and $\langle e_i, e_i\rangle = 1$. Now let $b \in \mathbb{R}^n$. Then $b = \sum_{i=1}^n b_i e_i$, where $b_i = \langle b, e_i\rangle$. Since $B$ is a basis of $\mathbb{R}^n$, we may write the solution as
$$x = x_1 e_1 + x_2 e_2 + \dots + x_n e_n.$$
So it is enough to solve for the $x_i$'s. Substituting this in the equation $Ax = b$, we get
$$A\sum_{i=1}^n x_i e_i = \sum_{i=1}^n b_i e_i \implies \sum_{i=1}^n x_i A e_i = \sum_{i=1}^n b_i e_i.$$
Therefore,
$$\sum_{i=1}^n \lambda_i x_i e_i = \sum_{i=1}^n b_i e_i.$$
Since $e_1, e_2, \dots, e_n$ are linearly independent, we get
$$\lambda_i x_i = b_i, \quad i = 1, 2, \dots, n. \tag{4.3}$$
From equation (4.3), we get the following assertions, which are known as the Fredholm alternative theorems. In fact these hold for "compact operators" on infinite dimensional spaces.

1. If zero is not an eigenvalue of $A$, then the problem $Ax = b$ has the unique solution $x = \sum_{i=1}^n \frac{b_i}{\lambda_i} e_i$. That is, if $A$ is non-singular, then we have a unique solution.

2. If $\lambda_m = 0$ for some $m$, then a solution exists only if $b_m = 0$, in which case $x_m$ is arbitrary. Since $B$ is orthonormal, $b_m = \langle b, e_m\rangle = 0$, and $e_m$ satisfies $Ae_m = 0$. That is, solutions exist if $b$ is orthogonal to every solution of $Ax = 0$.

3. On the other hand, if $b$ is orthogonal to every solution of $Ax = 0$ and $\lambda_m = 0$ is an eigenvalue of $A$, then $b_m = 0$. Hence $x_m$ is arbitrary in equation (4.3) and infinitely many solutions exist.

4. It is also clear that the problem $Ax = b$ has NO SOLUTION if $\lambda_m = 0$ and $\langle b, e_m\rangle \neq 0$.
The above method tells us that once we know the eigenvalues, and the eigenvectors form a basis of $\mathbb{R}^n$, it is easy to determine whether the system has a unique solution, infinitely many solutions, or no solution. We will show that this approach generalizes to differential equations and can be used to solve non-homogeneous boundary value problems.
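The dichotomy above is easy to check numerically. The following is a minimal sketch (assuming NumPy; the helper `solve_by_eigenbasis` and the $2\times 2$ test matrix are illustrative, not from the notes) carrying out exactly the computation in (4.3): expand $b$ in the eigenbasis, divide by the eigenvalues where possible, and detect the no-solution case.

```python
import numpy as np

def solve_by_eigenbasis(A, b, tol=1e-12):
    """Solve Ax = b for symmetric A via its orthonormal eigenbasis, as in (4.3)."""
    lam, E = np.linalg.eigh(A)      # eigenvalues lam_i, eigenvectors as columns
    c = E.T @ b                     # coordinates b_i = <b, e_i>
    x = np.zeros_like(c)
    for i in range(len(lam)):
        if abs(lam[i]) > tol:
            x[i] = c[i] / lam[i]    # lambda_i x_i = b_i
        elif abs(c[i]) > tol:       # lambda_m = 0 and <b, e_m> != 0
            raise ValueError("no solution: b is not orthogonal to ker(A)")
        # else: lambda_m = 0 and b_m = 0, so x_m is arbitrary (we pick 0)
    return E @ x

A = np.array([[1.0, 1.0], [1.0, 1.0]])                # singular: eigenvalues 0 and 2
print(solve_by_eigenbasis(A, np.array([1.0, 1.0])))   # one of infinitely many solutions
# solve_by_eigenbasis(A, np.array([1.0, -1.0]))       # would raise: no solution
```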
4.1 Oscillation theory
In this section we study qualitative properties of solutions, such as their oscillatory behaviour. We consider the equation
$$x'' + p(t)x = 0, \tag{4.4}$$
where $p(t)$ is continuous everywhere. We have the following definitions.

Definition 4.1.1. We say that a nontrivial solution $x(t)$ of (4.4) is oscillatory if for any number $T$, $x(t)$ has infinitely many zeros in the interval $(T, \infty)$. We also say that the equation (4.4) is oscillatory if it has an oscillatory solution.
We have the following theorem, which is a consequence of the uniqueness theorem.

Theorem 4.1.2. Let $x_1(t)$ and $x_2(t)$ be two linearly independent solutions of (4.4) and let $a$ and $b$ be two consecutive zeros of $x_1(t)$ with $a < b$. Then $x_2(t)$ has exactly one zero in the interval $(a, b)$.
Proof. We note that $x_2(a) \neq 0 \neq x_2(b)$; otherwise the Wronskian of $x_1, x_2$ vanishes. W.l.o.g. we assume that $x_1(t) > 0$ in $(a, b)$. Suppose, by contradiction, that $x_2(t) > 0$ on $[a, b]$. Let
$$h(t) = \frac{x_1(t)}{x_2(t)}.$$
Then $h$ is differentiable on $[a, b]$ and $h(a) = h(b) = 0$. Therefore, by Rolle's theorem, there exists $c \in (a, b)$ such that $h'(c) = 0$. But $h'(c) = 0$ implies $x_2(c)x_1'(c) - x_1(c)x_2'(c) = 0$, which implies the Wronskian of $x_1, x_2$ is zero at $c$. This is a contradiction. So $x_2(t)$ has a zero in $(a, b)$.
Suppose $x_2(t)$ has two zeros $t_1, t_2$ in $(a, b)$. Then interchanging the roles of $x_1$ and $x_2$ in the above argument yields a zero $d$ of $x_1(t)$ in $(t_1, t_2)$. This contradicts the fact that $a$ and $b$ are two consecutive zeros of $x_1(t)$.
///
Corollary 4.1.3. If (4.4) has one oscillatory solution, then all solutions are oscillatory.
Next we study the number of zeros of solutions and their dependence on the coefficients. First let us consider the equation
$$x'' + k^2x = 0.$$
The linearly independent solutions are $x_1(t) = \sin kt$ and $x_2(t) = \cos kt$, which are oscillatory. To start with, let us take $k = 1$. Then the solution $x_1(t) = \sin t$ has exactly one zero in $(0, 2\pi)$. If we take $k = 2$, then $x_1(t) = \sin 2t$ has three zeros in $(0, 2\pi)$. So we notice that as $k$ increases the zeros move towards the left and the number of zeros in the interval $(0, 2\pi)$ increases. In other words, the larger the coefficient $k$ is, the more rapidly the solutions oscillate. In the general case this is a consequence of the following theorem.
Theorem 4.1.4. Consider the two equations
$$x'' + p(t)x = 0, \tag{4.5}$$
$$y'' + q(t)y = 0. \tag{4.6}$$
Suppose that $x(t)$ is a nontrivial solution of (4.5) with consecutive zeros at $a$ and $b$. Assume further that $p(t), q(t) \in C([a, b])$ with $p(t) \le q(t)$, $p(t) \not\equiv q(t)$. If $y(t)$ is any nontrivial solution of (4.6) such that $y(a) = 0$, then there exists another zero $c$ of $y(t)$ in $(a, b)$.
Proof. Assume that the assertion of the theorem is false. Without loss of generality, we can assume that $x(t) > 0$ on $(a, b)$; otherwise replace $x$ by $-x$. Similarly, we can also assume that $y(t) > 0$ on $(a, b)$. Multiplying the first equation by $y(t)$ and the second equation by $x(t)$ and subtracting, we get
$$y(t)x''(t) - x(t)y''(t) + (p(t) - q(t))x(t)y(t) = 0.$$
Integrating from $a$ to $b$ and using the integration by parts formula, we get
$$\Big[xy' - yx'\Big]_a^b + \int_a^b (q - p)(t)\,x(t)y(t)\,dt = 0. \tag{4.7}$$
Since $x(a) = x(b) = y(a) = 0$, the above equation can be written as
$$y(b)x'(b) = \int_a^b\big(q(t) - p(t)\big)x(t)y(t)\,dt.$$
The RHS of the above equation is positive, while the LHS is non-positive since $y(b) \ge 0$ and $x'(b) \le 0$. This contradiction proves the theorem.
///
Example: Show that between any two zeros of $\cos t$ there is a zero of $2\sin t - 3\cos t$.
The proof follows from Theorem 4.1.2 by noting that both functions are solutions of $x'' + x = 0$.
///
Corollary 4.1.5. If $l = \lim_{t\to\infty} q(t) > 0$, then $x'' + q(t)x = 0$ is an oscillatory equation.

Proof. Since $\lim_{t\to\infty} q(t) = l > 0$, we can choose a number $T$ such that $q(t) > l/2$ for $t \ge T$. Now using Theorem 4.1.4 with $p(t) = l/2$, we get that for $t \ge T$ every solution of $x'' + q(t)x = 0$ has a zero between any two consecutive zeros of a solution of $x'' + (l/2)x = 0$. The assertion follows from the fact that $x'' + (l/2)x = 0$ is oscillatory.
///
Example: Show that
$$x'' + \frac{2t^6 - 2t^4 + 3t - 1}{t^6 + 3t^2 + 1}\,x = 0$$
is an oscillatory equation.
The proof follows from the above corollary, since $\lim_{t\to\infty}\frac{2t^6 - 2t^4 + 3t - 1}{t^6 + 3t^2 + 1} = 2$.
///
Corollary 4.1.6. If $p(t) \le 0$, $p(t) \not\equiv 0$, then no solution of $x'' + p(t)x = 0$ can have more than one zero.

Proof. We apply Theorem 4.1.4 with the given $p(t)$ and with $q(t) \equiv 0$. Suppose a nontrivial solution of $x'' + p(t)x = 0$ has two consecutive zeros $t_1, t_2$. By Theorem 4.1.4, every solution of $y'' = 0$ vanishing at $t_1$ has a zero between $t_1$ and $t_2$, which is impossible: the general solution of $y'' = 0$ is $at + b$, which cannot have two zeros unless it is trivial. This contradiction proves the corollary.
///
We note that if $x(t)$ is a solution of
$$x'' + p(t)x' + q(t)x = 0, \tag{4.8}$$
then $y(t) = x(t)e^{\frac{1}{2}\int p(t)\,dt}$ satisfies the equation
$$y'' + \left(q - \frac{p^2}{4} - \frac{p'}{2}\right)y = 0. \tag{4.9}$$
Using this, we can study the oscillation properties of equation (4.8).
4.2 Eigenvalue problems
In this section, we study Sturm-Liouville eigenvalue problems and their applications to boundary value problems of the type
$$(p(t)x'(t))' + r(t)x(t) + \lambda q(t)x(t) = 0, \quad t \in (a, b),$$
$$B_1(x(a), x'(a), x(b), x'(b)) = 0, \quad B_2(x(a), x'(a), x(b), x'(b)) = 0,$$
where $B_1 = c_1x(a) + c_2x(b) + c_3x'(a) + c_4x'(b)$ and $B_2 = d_1x(a) + d_2x(b) + d_3x'(a) + d_4x'(b)$. The constants $c_i$ are such that not all of them are zero; similarly, not all $d_i$'s are zero.
For simplicity, we will study the existence and properties of eigenvalues for the following eigenvalue problem:
$$x''(t) + \lambda q(t)x(t) = 0, \quad t \in (a, b), \tag{4.1}$$
$$x(a) = 0, \quad x(b) = 0. \tag{4.2}$$
It is clear that x(t) = 0 is a solution of (4.1), (4.2).
Definition 4.2.1. A scalar $\lambda$ is called an eigenvalue if the problem (4.1)-(4.2) has a non-trivial solution $x(t)$. The function $x(t)$ is called an eigenfunction corresponding to $\lambda$.
Example 4.2.2. Find the eigenvalues of the eigenvalue problem
$$x'' + \lambda x = 0, \quad x(a) = x(b) = 0.$$
Solution: Case 1: First we look for negative eigenvalues, so suppose $\lambda < 0$. The characteristic polynomial is $m^2 + \lambda = 0$, so the general solution is $x(t) = c_1e^{-\sqrt{|\lambda|}\,t} + c_2e^{\sqrt{|\lambda|}\,t}$. Substituting the boundary conditions $x(a) = x(b) = 0$, we get
$$c_1e^{-\sqrt{|\lambda|}\,a} + c_2e^{\sqrt{|\lambda|}\,a} = 0,$$
$$c_1e^{-\sqrt{|\lambda|}\,b} + c_2e^{\sqrt{|\lambda|}\,b} = 0.$$
These equations have only the solution $c_1 = c_2 = 0$. Therefore, there is no negative eigenvalue for the problem.
Case 2: If $\lambda = 0$, the general solution is $x(t) = A + Bt$. This is a linear function and cannot vanish at two points unless it is identically zero. So zero is not an eigenvalue.
Case 3: If $\lambda > 0$, the general solution is $x_\lambda(t) = c_1\sin\sqrt{\lambda}\,t + c_2\cos\sqrt{\lambda}\,t$. Imposing the boundary conditions $x(a) = x(b) = 0$:
$$c_1\sin\sqrt{\lambda}\,a + c_2\cos\sqrt{\lambda}\,a = 0,$$
$$c_1\sin\sqrt{\lambda}\,b + c_2\cos\sqrt{\lambda}\,b = 0.$$
This system has a nontrivial solution if and only if
$$\begin{vmatrix}\sin\sqrt{\lambda}\,a & \cos\sqrt{\lambda}\,a \\ \sin\sqrt{\lambda}\,b & \cos\sqrt{\lambda}\,b\end{vmatrix} = \sin\sqrt{\lambda}(a - b) = 0.$$
Therefore $\sqrt{\lambda}(b - a) = k\pi$, $k = 1, 2, \dots$, which gives $\lambda_k = \frac{k^2\pi^2}{(b - a)^2}$, $k = 1, 2, 3, \dots$ In the case $a = 0$, $b = \pi$, the eigenvalues are $\lambda_k = k^2$, $k = 1, 2, \dots$, and the eigenfunctions are $x_k(t) = C\sin kt$, $C \neq 0$ a constant.
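A quick numerical sanity check of this example (a sketch assuming NumPy; the finite-difference discretization is the standard one, not part of the notes): discretizing $-x'' = \lambda x$ on $(0, \pi)$ with Dirichlet conditions, the smallest eigenvalues of the resulting tridiagonal matrix approach $\lambda_k = k^2$.

```python
import numpy as np

# Central-difference discretization of -x'' = lambda x, x(0) = x(pi) = 0.
a, b, N = 0.0, np.pi, 400
h = (b - a) / N
main = (2.0 / h**2) * np.ones(N - 1)        # interior nodes t_1, ..., t_{N-1}
off = (-1.0 / h**2) * np.ones(N - 2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.sort(np.linalg.eigvalsh(T))[:4])   # approximately [1, 4, 9, 16] = k^2
```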
Next we will study some qualitative properties of eigenfunctions. First, we will show the
following:
Lemma 4.2.3. If $q(t) > 0$ on $[a, b]$, then every eigenvalue $\lambda$ of (4.1)-(4.2) is positive.

Proof. Multiplying the equation by $x(t)$ and integrating by parts, we find
$$\lambda\int_a^b q(t)x^2(t)\,dt = -\int_a^b x''(t)x(t)\,dt = -x'(b)x(b) + x'(a)x(a) + \int_a^b x'(t)x'(t)\,dt.$$
Since $x(a) = x(b) = 0$, we infer
$$\lambda\int_a^b q(t)x^2(t)\,dt = \int_a^b (x'(t))^2\,dt.$$
Since $q(t) > 0$ and $x(t) \not\equiv 0$, the RHS is strictly positive. Hence $\lambda > 0$.
Lemma 4.2.4. Let $\lambda_1 \neq \lambda_2$ be two different eigenvalues and let $\varphi_1, \varphi_2$ be the corresponding eigenfunctions. Then $\int_a^b \varphi_1(t)\varphi_2(t)\,q(t)\,dt = 0$.

Proof. Multiplying $\varphi_1'' + \lambda_1 q\varphi_1 = 0$ by $\varphi_2$ and integrating by parts, we obtain
$$\int_a^b \varphi_1'(t)\varphi_2'(t)\,dt = \lambda_1\int_a^b \varphi_1(t)\varphi_2(t)q(t)\,dt. \tag{4.3}$$
Similarly, multiplying $\varphi_2'' + \lambda_2 q\varphi_2 = 0$ by $\varphi_1$ and integrating by parts, we get
$$\int_a^b \varphi_1'(t)\varphi_2'(t)\,dt = \lambda_2\int_a^b \varphi_1(t)\varphi_2(t)q(t)\,dt. \tag{4.4}$$
Subtracting (4.4) from (4.3), we get
$$(\lambda_1 - \lambda_2)\int_a^b \varphi_1(t)\varphi_2(t)q(t)\,dt = 0.$$
The claim follows since $\lambda_1 \neq \lambda_2$.
///
Corollary 4.2.5. Eigenfunctions corresponding to different eigenvalues are linearly independent.

Proof. If $\varphi_2 = \alpha\varphi_1$ for some real number $\alpha \neq 0$, then by Lemma 4.2.4 we would have
$$0 = \int_a^b \varphi_1(t)\varphi_2(t)q(t)\,dt = \alpha\int_a^b q(t)\varphi_1^2(t)\,dt > 0,$$
a contradiction.
///
Lemma 4.2.6. Let $q(t)$ be a positive continuous function that satisfies
$$0 < m^2 < q(t) < M^2$$
on a closed interval $[a, b]$. If $x(t)$ is a nontrivial solution of $x'' + \lambda^2 q(t)x = 0$ on this interval, and if $t_1$ and $t_2$ are successive zeros of $x(t)$, then
$$\frac{\pi}{\lambda M} < t_2 - t_1 < \frac{\pi}{\lambda m}. \tag{4.5}$$
Furthermore, if $x(t)$ vanishes at $a$ and $b$, and at $n - 1$ points in $(a, b)$, then
$$\frac{\lambda m(b - a)}{\pi} < n < \frac{\lambda M(b - a)}{\pi}. \tag{4.6}$$
Proof. Let $z(t)$ be a solution of $z'' + m^2\lambda^2 z = 0$. A nontrivial solution of this that vanishes at $t_1$ is $z(t) = \sin m\lambda(t - t_1)$. The next zero of $z(t)$ is $t_1 + \frac{\pi}{\lambda m}$, and Theorem 4.1.4 tells us that $t_2$ must occur before this. So we have $t_2 < t_1 + \frac{\pi}{\lambda m}$, or $t_2 - t_1 < \frac{\pi}{\lambda m}$.
To prove the other side of (4.5), take $w(t)$, a solution of $w'' + M^2\lambda^2 w = 0$. A nontrivial solution of this that vanishes at $t_1$ is $w(t) = \sin M\lambda(t - t_1)$. Its next zero is $t_1 + \frac{\pi}{\lambda M}$, and Theorem 4.1.4 tells us that this zero must lie between $t_1$ and $t_2$. So we have $t_2 > t_1 + \frac{\pi}{\lambda M}$, or $t_2 - t_1 > \frac{\pi}{\lambda M}$.
To prove (4.6), we first observe that there are $n$ subintervals between the $n + 1$ zeros, so by (4.5) we have $b - a = $ the sum of the lengths of the $n$ subintervals $< \frac{n\pi}{\lambda m}$, and therefore $\frac{m\lambda(b - a)}{\pi} < n$. In the same way, we see that $b - a > \frac{n\pi}{M\lambda}$, so $n < \frac{M\lambda(b - a)}{\pi}$.
///
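The estimates (4.5)-(4.6) are easy to test numerically. A sketch (assuming SciPy; the coefficient $q(t) = 2 + \sin t$ and the parameters are arbitrary test choices): with $m^2 = 1 < q(t) < 3 = M^2$, the number of zeros counted on $(a, b)$ should fall between the two bounds in (4.6), up to endpoint effects.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, lam = 0.0, 10.0, 4.0
q = lambda t: 2.0 + np.sin(t)                       # 1 < q(t) < 3 on [a, b]
sol = solve_ivp(lambda t, y: [y[1], -lam**2 * q(t) * y[0]],
                (a, b), [0.0, 1.0], max_step=0.01)  # x(a) = 0, x'(a) = 1
x = sol.y[0]
n_zeros = int(np.sum(x[:-1] * x[1:] < 0))           # sign changes in (a, b)
print(n_zeros)                                      # lies between the two bounds:
print(lam * 1.0 * (b - a) / np.pi, lam * np.sqrt(3) * (b - a) / np.pi)
```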
Finally, we prove the following theorem on the existence of a sequence of eigenvalues.

Theorem 4.2.7. Suppose $q(t) > 0$. Then there exist infinitely many positive eigenvalues $\lambda_k$ of (4.1)-(4.2) such that $0 < \lambda_1 < \lambda_2 < \lambda_3 < \dots < \lambda_k < \lambda_{k+1} < \dots$ Moreover, $\lambda_k \to \infty$.
Proof. For each $\lambda$, let $x_\lambda(t)$ be the unique solution of $x'' + \lambda^2 q(t)x = 0$, $x(a) = 0$, $x'(a) = 1$. It is clear from Theorem 4.1.4 that $x_\lambda$ has no zeros to the right of $a$ when $\lambda \le 0$. Our plan is to watch the oscillation behaviour of $x_\lambda(t)$ as $\lambda$ increases from $0$. By continuity of $q(t)$ on $[a, b]$ there exist $m, M$ such that $0 < m^2 < q(t) < M^2$.
Thus, by Theorem 4.1.4, $x_\lambda(t)$ oscillates more rapidly on $[a, b]$ than solutions of
$$y'' + \lambda^2m^2y = 0,$$
and less rapidly than solutions of
$$z'' + \lambda^2M^2z = 0.$$
By the above lemma, when $\lambda$ is small, say $\frac{\pi}{\lambda M} \ge b - a$, the function $x_\lambda(t)$ has no zeros in $[a, b]$ to the right of $a$; and when $\lambda$ increases so that $\frac{\pi}{\lambda m} \le b - a$, then $x_\lambda(t)$ has at least one zero. Also from (4.5), we see that as $\lambda \to \infty$, $t_2 - t_1 \to 0$ and the number of zeros of $x_\lambda$ increases. Moreover, (4.5) also ensures that $\lambda \mapsto \alpha_k(\lambda)$, where $\alpha_k(\lambda)$ is the $k$-th zero of $x_\lambda(t)$, is continuous. Therefore, as $\lambda$ starts from zero and increases to $\infty$, there are values $\lambda_1, \lambda_2, \dots, \lambda_n, \dots$ for which a zero of $x_\lambda(t)$ reaches $b$ and subsequently enters the interval $(a, b)$, so that $x_{\lambda_n}(t)$ vanishes at $a$ and $b$ and has $n - 1$ zeros in $(a, b)$.
Also, notice that $\lambda_1$ is such that $x_{\lambda_1}$ has no zero in $(a, b)$; i.e., the eigenfunction corresponding to the smallest eigenvalue is positive.
///
The eigenvalue problems in ODEs also have a variational structure. Multiplying the equation with $\varphi$ such that $\varphi(a) = \varphi(b) = 0$ and integrating by parts, we get
$$\int_a^b p(t)y'(t)\varphi'(t)\,dt = \lambda\int_a^b q(t)y(t)\varphi(t)\,dt.$$
If $(\lambda_1, \varphi_1)$ is an eigenpair, then taking $y = \varphi_1$ and $\varphi = \varphi_1$ we get
$$\int_a^b p(t)(\varphi_1'(t))^2\,dt = \lambda_1\int_a^b q(t)\varphi_1^2\,dt.$$
It follows that
$$\lambda_1(q) = \frac{\int_a^b p(t)(\varphi_1')^2(t)\,dt}{\int_a^b q(t)\varphi_1^2(t)\,dt}.$$
Moreover, one can show that for all $\varphi \in C^1(a, b)$ with $\varphi(a) = \varphi(b) = 0$, we have
$$\lambda_1(q) \le \frac{\int_a^b p(t)(\varphi')^2(t)\,dt}{\int_a^b q(t)\varphi^2(t)\,dt}. \tag{4.7}$$
As a consequence of this, we have the following Poincaré inequality:

Theorem 4.2.8. There exists a constant $C > 0$ such that
$$\int_a^b \varphi^2(t)\,dt \le C\int_a^b (\varphi')^2(t)\,dt$$
for all $\varphi \in C_0^1(a, b) = \{\varphi \in C^1(a, b) : \varphi(a) = \varphi(b) = 0\}$.
Proof. Take $p = q = 1$ in (4.7); then $C = 1/\lambda_1$ works. By Example 4.2.2, $\lambda_1 = \pi^2/(b - a)^2$ in this case, so one may take $C = (b - a)^2/\pi^2$.
///

4.3 Method of eigenfunction expansions
As outlined in the introduction, in this section we study a method of solving boundary value problems using the eigenvalues and eigenfunctions of the operator. We start our discussion from (4.7). From this equation, one can show that
$$\lambda_1(q) = \inf_{\varphi \in C_0^1(a, b)}\frac{\int_a^b p(t)(\varphi')^2(t)\,dt}{\int_a^b q(t)\varphi^2(t)\,dt}.$$
The proof of this requires a good amount of functional analysis. We give an idea of the proof of this characterization in the case of a symmetric positive definite matrix. Let $A$ be an $n \times n$ positive definite symmetric matrix. Then the smallest eigenvalue $\lambda_1$ can be found as
$$\lambda_1 = \min_{x \in \mathbb{R}^n}\frac{\langle Ax, x\rangle}{\langle x, x\rangle},$$
where $\langle x, x\rangle = x^Tx$, $x \in \mathbb{R}^n$. Note that $\langle Ax, x\rangle$ is a quadratic polynomial in the $n$ entries of $x$. To see that the minimum is achieved, we consider the equivalent problem
$$\lambda_1 = \min_{x \in \mathbb{R}^n}\{\langle Ax, x\rangle : \langle x, x\rangle = 1\}.$$
Suppose $\{x_k\}$ is a sequence such that $\langle x_k, x_k\rangle = 1$ and $\langle Ax_k, x_k\rangle \to \lambda_1$. Then boundedness of the sequence implies that there is a subsequence which converges to some $x$, and along it $\langle Ax_k, x_k\rangle \to \langle Ax, x\rangle$. Therefore $\langle x, x\rangle = 1$ and $\langle Ax, x\rangle = \lambda_1$. Once this minimizer is achieved, one can use the Lagrange multiplier theorem to conclude $\langle Ax, y\rangle = \lambda_1\langle x, y\rangle$ for all $y$. In this proof we used the fact that every bounded sequence in $\mathbb{R}^n$ has a convergent subsequence. This is not true in infinite dimensional spaces like $C^1(a, b)$.
To obtain the second smallest eigenvalue of $A$, we minimize over the orthogonal complement of the first eigenspace. That is, let $E_1$ be the eigenspace corresponding to $\lambda_1$. Then $\mathbb{R}^n = E_1 \oplus H_2$, where $H_2$ is the orthogonal complement of $E_1$. We consider the minimization problem
$$\lambda_2 = \min_{x \in H_2}\frac{\langle Ax, x\rangle}{\langle x, x\rangle}.$$
Following the above argument, one can show that the minimizer exists. We can follow this procedure to find all eigenvalues.
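A small numerical illustration of this Rayleigh-quotient characterization (a sketch assuming NumPy; the random test matrix is an arbitrary choice): sampling the quotient over many directions never goes below $\lambda_1$, and its minimum is essentially attained at the first eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B.T @ B + 5 * np.eye(5)            # symmetric positive definite test matrix

X = rng.standard_normal((5, 100000))   # many random directions x
rq = np.einsum('ij,ij->j', X, A @ X) / np.einsum('ij,ij->j', X, X)
print(rq.min())                        # always >= lambda_1, and close to it
print(np.linalg.eigvalsh(A).min())     # lambda_1 itself
```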
Similarly, in the case of boundary value problems, we have the following characterization for $k \ge 2$:
$$\lambda_k = \min_{\varphi \in H_k}\frac{\int_a^b p(t)(\varphi')^2(t)\,dt}{\int_a^b q(t)\varphi^2(t)\,dt},$$
where $H_k$ is the orthogonal complement of $\mathrm{span}\{\varphi_1, \varphi_2, \dots, \varphi_{k-1}\}$. The proof of this involves Hilbert space theory and is beyond the scope of this course.
Theorem 4.3.1. The set of all eigenfunctions spans the space $C$. That is, any $\psi \in C$ may be written as
$$\psi(t) = \sum_{i=1}^\infty\frac{\langle\psi, \varphi_i\rangle}{\langle\varphi_i, \varphi_i\rangle}\,\varphi_i(t),$$
and the series converges absolutely and uniformly. Moreover, if $f \in C([a, b])$, then the series converges with respect to the metric $d(f, g) = \left(\int_a^b |f - g|^2\right)^{1/2}$.

The series above is called the Fourier series of $\psi$. The proof of this theorem is again beyond the scope of this course.
Let $L$ be a second order linear differential operator and consider the boundary conditions $B_1 = 0$, $B_2 = 0$. Let $\{\lambda_n\}_{n=1}^\infty$, $\{\varphi_n\}_{n=1}^\infty$ be the sequences of eigenvalues and eigenfunctions; that is,
$$L\varphi_n = \lambda_n\varphi_n \ \text{ in } (a, b), \qquad B_1(\varphi_n) = 0, \quad B_2(\varphi_n) = 0.$$
Now consider the boundary value problem: for $f(t) \in C$,
$$Lu = f(t), \quad t \in (a, b), \qquad B_1(u) = 0, \quad B_2(u) = 0.$$
By the above theorem,
$$f(t) = \sum_{n=1}^\infty f_n\varphi_n(t), \quad \text{where } f_n = \frac{\int_a^b f(t)\varphi_n(t)\,dt}{\int_a^b (\varphi_n)^2(t)\,dt}.$$
Let $u(t) = \sum_{n=1}^\infty c_n\varphi_n$ be the solution. Then, assuming that the series converges absolutely and uniformly, we get
$$Lu = \sum c_nL(\varphi_n) = \sum c_n\lambda_n\varphi_n.$$
Then $u(t)$ solves the problem if
$$\sum c_n\lambda_n\varphi_n = \sum f_n\varphi_n.$$
From this, we obtain the relation
$$c_n\lambda_n = f_n.$$
So from this, we see that:
1. If $\lambda_n \neq 0$ for all $n$, then $c_n = \frac{f_n}{\lambda_n}$ gives the unique solution.
2. If $\lambda_m = 0$ for some $m$, then a solution exists if and only if $\int_a^b f(t)\varphi_m(t)\,dt = 0$.
Problem: Solve the boundary value problem $u'' = t$, $u(0) = 0$, $u(1) = 0$.
Solution: The associated eigenvalue problem $-u'' = \lambda u$, $u(0) = u(1) = 0$ has eigenvalues $\lambda_n = n^2\pi^2$ and eigenfunctions $\varphi_n = \sin(n\pi t)$. Then
$$f(t) = t = \sum_{n=1}^\infty\frac{\int_0^1 t\sin n\pi t\,dt}{\int_0^1\sin^2 n\pi t\,dt}\,\sin n\pi t = \sum_{n=1}^\infty\frac{2(-1)^{n+1}}{n\pi}\sin n\pi t.$$
Let $u(t) = \sum_{n=1}^\infty c_n\sin n\pi t$. Then $u'' = -\sum c_n(n^2\pi^2)\sin n\pi t$. Substituting this in the equation and comparing the coefficients, we get
$$c_n = \frac{2(-1)^n}{n^3\pi^3}.$$
Therefore,
$$u(t) = \sum_{n=1}^\infty\frac{2(-1)^n}{n^3\pi^3}\sin n\pi t.$$
Now it is easy to check that this series converges uniformly, as $\sum|c_n| = \frac{2}{\pi^3}\sum\frac{1}{n^3} < \infty$.
///
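The problem can also be integrated directly, giving $u(t) = (t^3 - t)/6$; a short check (assuming NumPy) that the truncated eigenfunction series agrees with it:

```python
import numpy as np

t = np.linspace(0, 1, 201)
n = np.arange(1, 200)
coeff = 2 * (-1.0)**n / (n**3 * np.pi**3)          # c_n from the computation above
u_series = coeff @ np.sin(np.pi * np.outer(n, t))  # truncated sine series
u_exact = (t**3 - t) / 6                           # integrate u'' = t directly
print(np.max(np.abs(u_series - u_exact)))          # ~1e-6: the two agree
```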
Problem 2: Solve the boundary value problem $u'' = 1$, $t \in (0, 1)$, $u(0) = 0$, $u'(1) = 0$.
Solution: Consider the eigenvalue problem $-u'' = \lambda u$, $u(0) = 0$, $u'(1) = 0$. It is not difficult to show that
$$\lambda_n = \frac{(2n - 1)^2\pi^2}{4}, \qquad \varphi_n(t) = \sin\frac{(2n - 1)\pi t}{2}.$$
The Fourier coefficients of $f(t) = 1$ are
$$f_n = \frac{\int_0^1\sin\frac{(2n - 1)\pi t}{2}\,dt}{\int_0^1\sin^2\frac{(2n - 1)\pi t}{2}\,dt} = \frac{4}{(2n - 1)\pi}.$$
So, assuming that the solution is $u(t) = \sum c_n\sin\frac{(2n - 1)\pi t}{2}$, we get
$$u'' = -\sum c_n\,\frac{(2n - 1)^2\pi^2}{4}\,\sin\frac{(2n - 1)\pi t}{2}.$$
Substituting this in the equation, we get
$$c_n = -\frac{16}{(2n - 1)^3\pi^3}.$$
As in the previous problem, one can show that the corresponding series converges absolutely and uniformly.
///
4.4 Singular Eigenvalue Problems
Consider the Sturm-Liouville eigenvalue problem (SLP)
$$-(p(t)y')' + q(t)y = \lambda w(t)y, \quad t \in [a, b],$$
$$B_1(a) := A_1y(a) + A_2y'(a) = 0, \qquad B_2(b) := C_1y(b) + C_2y'(b) = 0.$$
Definition 4.4.1. The (SLP) is called regular if $(pw)(a) \neq 0$ and $(pw)(b) \neq 0$.
The following cases are called singular eigenvalue problems:
Case 1: $(pw)(a) = 0$. In this case $B_1(a)$ can be dropped.
Case 2: $(pw)(b) = 0$. In this case $B_2(b)$ can be dropped.
Case 3: $(pw)(a) = 0$ and $(pw)(b) = 0$. Then both boundary conditions can be dropped.
In all the above cases, we call $y(t)$ a solution if $y$ and $y'$ are defined at the boundary points.
Case 4: The interval $[a, b]$ may be infinite. In this case we assume that $y(t) \in L^2([a, b])$.
4.4.1 Legendre equation
Consider the Legendre equation
$$(1 - t^2)y'' - 2ty' + \nu(\nu + 1)y = 0.$$
We know that there are two linearly independent power series solutions $P_\nu(t)$ and $Q_\nu(t)$ which converge in $(-1, 1)$. This equation may be written as
$$-\left((1 - t^2)y'\right)' = \nu(\nu + 1)y =: \lambda y.$$
So, comparing with (SLP), we have $p(t) = 1 - t^2$, $q = 0$ and $w(t) = 1$, with $p(\pm 1) = 0$. This is a singular eigenvalue problem and, as we mentioned, both boundary conditions can be dropped.
We will see that if the solution is an infinite series, then it does not converge at $\pm 1$. Indeed, the series solution is determined by the recurrence relation
$$c_{k+2} = \frac{(k - \nu)(\nu + k + 1)}{(k + 2)(k + 1)}\,c_k, \quad k = 0, 1, 2, \dots$$
So the ratio of two consecutive terms in the power series at $t = 1$ is
$$\frac{c_{k+2}}{c_k} = \frac{(k - \nu)(k + \nu + 1)}{(k + 2)(k + 1)} \sim \frac{k}{k + 1} \quad \text{as } k \to \infty,$$
which is the ratio $\frac{a_{k+1}}{a_k} = \frac{k}{k+1}$ of consecutive terms of the divergent series $\sum\frac{1}{n}$. Hence the series solution does not converge at $t = 1$. A similar argument shows that the series solution does not converge at $t = -1$.
Another way of defining a series solution at $t = 1$ is to expand around $t = 1$ with a series of the form $\sum c_k(t - 1)^k$. In this case $t = 1$ is a regular singular point, and so, taking the solution in the form $\sum c_k(t - 1)^{k+r}$, we see that the equation can equivalently be written as
$$(t - 1)^2y'' + (t - 1)\,\frac{2t}{t + 1}\,y' + \frac{1 - t}{1 + t}\,\nu(\nu + 1)y = 0.$$
So the indicial polynomial is $r(r - 1) + \alpha_0 r + \beta_0 = 0$, where $\alpha_0$ is the constant term in the series expansion of $\frac{2t}{t + 1}$ at $t = 1$ and $\beta_0$ is the constant term in the series expansion of $\frac{1 - t}{1 + t}\,\nu(\nu + 1)$. So $\alpha_0 = 1$, $\beta_0 = 0$, and the indicial polynomial is $r^2 = 0$. Therefore one of the solutions has $\log(t - 1)$ as a factor, which is not defined at $t = 1$. The other solution is given by $\sum a_k(t - 1)^k$, where
$$a_{k+1} = \frac{(\nu - k)(\nu + k + 1)}{2(k + 1)^2}\,a_k, \quad k \ge 0.$$
It is easy to see by the ratio test that this series converges in $|t - 1| < 2$. But this solution is not defined at $t = -1$. To see this,
$$\frac{a_{k+1}(-2)^{k+1}}{a_k(-2)^k} = \frac{(\nu - k)(\nu + k + 1)}{2(k + 1)^2}\,(-2) = \frac{(k - \nu)(k + \nu + 1)}{(k + 1)^2}.$$
If $\nu$ is not a non-negative integer, then as before this ratio behaves like the ratio of consecutive terms of $\sum\frac{1}{n}$. Therefore, the infinite series solution that is defined at $t = 1$ does not converge at $t = -1$. Similarly, the infinite series solution that is defined at $t = -1$ is not defined at $t = 1$. Therefore, the only solutions defined at both $t = \pm 1$ are the polynomial solutions, which exist only when $\nu$ is a non-negative integer. Hence we have proved the following:
Theorem 4.4.2. The eigenvalues and eigenfunctions of the singular SLP
$$-\left((1 - t^2)y'\right)' = \lambda y, \quad -1 < t < 1,$$
are $\lambda_n = n(n + 1)$ and $\varphi_n = P_n(t)$, $n = 0, 1, 2, \dots$
4.4.2 Hermite and Laguerre polynomials
The equation $y'' - 2ty' + 2\alpha y = 0$, $t \in \mathbb{R}$, is the Hermite equation. It may be written in self-adjoint form as
$$-(e^{-t^2}y')' = 2\alpha e^{-t^2}y, \quad t \in \mathbb{R}.$$
Here $p(t) = e^{-t^2}$, $q = 0$, $w = e^{-t^2}$, $\lambda = 2\alpha$. This is a singular SLP because the domain is unbounded. We can find two linearly independent power series solutions for each real number $\alpha$. If we write $x(t) = \sum c_kt^k$, then the coefficients are determined by the recurrence relation
$$c_{k+2} = -\frac{2(\alpha - k)}{(k + 1)(k + 2)}\,c_k.$$
So, if $\alpha = m$ is a non-negative integer, then one of the solutions is a polynomial. So for each $m$ we get a polynomial solution $H_m(t)$, which can be defined as
$$H_m(t) = (-1)^me^{t^2}\frac{d^m}{dt^m}e^{-t^2}.$$
In this case, it can be shown that if $\alpha$ is not a non-negative integer, then the infinite series solution $H_\alpha(t)$ is not an eigenfunction, in the sense that it does not belong to $L^2(\mathbb{R}, e^{-t^2})$; i.e.,
$$\int_{\mathbb{R}}H_\alpha^2(t)e^{-t^2}\,dt < \infty \quad \text{if and only if} \quad \alpha = 0, 1, 2, 3, \dots$$
Laguerre equation: The equation $ty'' + (1 - t)y' + \alpha y = 0$, $t \in [0, \infty)$, is the Laguerre equation. In self-adjoint form, this equation is
$$-(te^{-t}y')' = \alpha e^{-t}y, \quad t \in [0, \infty).$$
Here $p(t) = te^{-t}$, and the problem is singular since $p(0) = 0$ and the interval is infinite. Polynomial solutions can be obtained in a fashion similar to the above cases when $\lambda = \alpha = n$ is a non-negative integer. One can show that the only solutions defined at $t = 0$ and lying in $L^2([0, \infty); e^{-t})$ are these polynomial solutions.
4.4.3 Bessel functions and eigenvalue problems
Consider the Bessel equation $t^2y'' + ty' + (t^2 - n^2)y = 0$. Applying the change of variable $t = az$, this equation transforms to
$$-\frac{d}{dz}\left(z\frac{dy}{dz}\right) + \frac{n^2}{z}\,y = a^2zy.$$
This is of the form $-(py')' + qy = \lambda w(z)y$ with $p(z) = z$, $q(z) = \frac{n^2}{z}$, $w(z) = z$ and $\lambda = a^2$. The solution is $J_n(az)$. So if we consider the eigenvalue problem
$$-\frac{d}{dz}\left(z\frac{dy}{dz}\right) + \frac{n^2}{z}\,y = \lambda zy \ \text{ in } [0, c], \qquad y(c) = 0,$$
then the eigenvalues are $\lambda_j = \alpha_j^2$ and the eigenfunctions are $\varphi_j(z) = J_n(\alpha_jz)$, where the $\alpha_j$ are the roots of $J_n(\alpha c) = 0$.
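Numerically, the $\alpha_j$ are available from the tabulated zeros of $J_n$; a brief sketch (assuming SciPy):

```python
import numpy as np
from scipy.special import jn_zeros, jv

n, c = 1, 2.0
alphas = jn_zeros(n, 5) / c        # alpha_j = (j-th positive zero of J_n) / c
print(alphas**2)                   # eigenvalues lambda_j = alpha_j^2
print(jv(n, alphas * c))           # ~0, confirming J_n(alpha_j c) = 0
```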
4.5 Spectral theory of self-adjoint operators
Let $L^2([a, b]; w(x)\,dx)$ be the completion of the inner product space of continuous functions on $[a, b]$, $C([a, b])$, with inner product $\langle f, g\rangle = \int_a^b f(x)g(x)w(x)\,dx$. The function $w(x)$ is called the weight function. Let $H$ be the subspace of functions that satisfy the boundary conditions of the (SLP) problem. Now, a differential operator of the form
$$L = \frac{1}{w(t)}\left(-\frac{d}{dt}\left(p(t)\frac{d}{dt}\right) + q(t)\right),$$
acting on some domain in $H$, is called a Sturm-Liouville operator. The SLP then becomes an eigenvalue equation in the space $H$:
$$Ly = \lambda y.$$
Recall that any second order differential operator $L(y) = py'' + qy' + ry$ with $p > 0$ can be written in self-adjoint form.
Theorem 4.5.1. The Sturm-Liouville operator $L$ is self-adjoint on $H$.
Proof. Note that by integration by parts we get
$$\langle f, Lg\rangle = \int_a^b f(t)\,Lg(t)\,w(t)\,dt = \int_a^b f(t)\left(-\frac{d}{dt}\Big(p(t)\frac{dg}{dt}\Big) + q(t)g(t)\right)dt = -\Big[p(t)f(t)g'(t)\Big]_a^b + \int_a^b\big(f'(t)p(t)g'(t) + f(t)q(t)g(t)\big)\,dt.$$
Similarly,
$$\langle Lf, g\rangle = \int_a^b Lf(t)\,g(t)\,w(t)\,dt = -\Big[p(t)f'(t)g(t)\Big]_a^b + \int_a^b\big(f'(t)p(t)g'(t) + f(t)q(t)g(t)\big)\,dt.$$
Therefore $\langle Lf, g\rangle = \langle f, Lg\rangle$ if and only if $\big[p(t)f(t)g'(t)\big]_a^b - \big[p(t)f'(t)g(t)\big]_a^b = 0$. Now, using the boundary conditions we get $Af(a) + Bf'(a) = 0$ and $Ag(a) + Bg'(a) = 0$. Since $(A, B) \neq (0, 0)$, we get $f'(a)g(a) - f(a)g'(a) = 0$. Similarly, $f'(b)g(b) - f(b)g'(b) = 0$. Now
$$\Big[p(t)f(t)g'(t)\Big]_a^b - \Big[p(t)f'(t)g(t)\Big]_a^b = p(b)\big(f(b)g'(b) - f'(b)g(b)\big) - p(a)\big(f(a)g'(a) - f'(a)g(a)\big) = 0. \tag{4.8}$$
///
Now, this result can easily be extended to periodic and singular SLPs. The two changes in the proof are as follows:
1. The subspace $H$ will be appropriately defined by the boundary conditions.
2. The RHS of equation (4.8) will still be zero if
(a) $p(a) = p(b)$ and the boundary conditions are periodic;
(b) $p(a) = 0$ and the boundary condition at $b$ is homogeneous (singular SLP);
(c) $p(b) = 0$ and the boundary condition at $a$ is homogeneous (singular SLP);
(d) the interval $[a, b]$ is infinite (since the functions vanish at infinity) (singular SLP).
Theorem 4.5.2. Eigenvalues of a SLP are real.

Proof. Let $\lambda$ be an eigenvalue and $\varphi_\lambda$ an eigenfunction. Then $L\varphi_\lambda = \lambda\varphi_\lambda$, and by the self-adjointness of $L$ we have
$$\langle L\varphi_\lambda, \varphi_\lambda\rangle = \langle\varphi_\lambda, L\varphi_\lambda\rangle, \quad \text{i.e.,} \quad \lambda\langle\varphi_\lambda, \varphi_\lambda\rangle = \overline{\lambda}\langle\varphi_\lambda, \varphi_\lambda\rangle.$$
Therefore $\lambda = \overline{\lambda}$.
///
Theorem 4.5.3. If $\lambda_m$ and $\lambda_n$ are two distinct eigenvalues of a SLP, with corresponding eigenfunctions $y_m$ and $y_n$, then $y_m$ and $y_n$ are orthogonal.

Proof. Note that
$$\langle Ly_m, y_n\rangle = \langle y_m, Ly_n\rangle, \quad \text{i.e.,} \quad (\lambda_m - \lambda_n)\langle y_m, y_n\rangle = 0.$$
Since $\lambda_m \neq \lambda_n$, we get $\langle y_m, y_n\rangle = 0$.
///
Theorem 4.5.4. Let $\lambda$ be an eigenvalue of a SLP with at least one separated boundary condition. Then there is a unique eigenfunction (up to a constant) corresponding to $\lambda$; that is, the dimension of the eigenspace is one.

Proof. Let $\varphi, \psi$ be two eigenfunctions corresponding to $\lambda$. Multiplying $L\varphi = \lambda\varphi$ by $\psi$ and $L\psi = \lambda\psi$ by $\varphi$ and subtracting the resulting equations, we get
$$\varphi L\psi - \psi L\varphi = 0.$$
That is,
$$0 = w\big(\varphi L\psi - \psi L\varphi\big) = \psi(p\varphi')' - \varphi(p\psi')' = \frac{d}{dt}\big(p\varphi'\psi - p\psi'\varphi\big) = \frac{d}{dt}\big(pW(\psi, \varphi)(t)\big).$$
Therefore $(pW)(t)$ is constant. If the separated condition $B_1(a) = Ay(a) + By'(a) = 0$ is imposed, then $W(a) = 0$. Hence $pW(t) \equiv 0$, which implies that $\varphi, \psi$ are linearly dependent.
///
Existence of eigenvalues

Definition 4.5.5. An operator $L$ on $H$ is positive definite if
$$f(t) \not\equiv 0 \implies \langle Lf, f\rangle > 0.$$
Definition 4.5.6. The directional derivative of a function $f : \mathbb{R}^n \to \mathbb{R}$ at $a \in \mathbb{R}^n$ along $v$ ($|v| = 1$) is defined as
$$D_vf(a) := \lim_{t\to 0}\frac{f(a + tv) - f(a)}{t}.$$
Let $A$ be an $n \times n$ self-adjoint matrix and let $f(x) = \langle Ax, x\rangle$, where $\langle\cdot, \cdot\rangle$ is the real inner product. Then
$$D_vf(x) = \lim_{t\to 0}\frac{\langle A(x + tv), x + tv\rangle - \langle Ax, x\rangle}{t} = \lim_{t\to 0}\frac{t(\langle Ax, v\rangle + \langle x, Av\rangle) + t^2\langle Av, v\rangle}{t} = 2\langle Ax, v\rangle.$$
This motivates us to define the directional derivative of a functional $\mathcal{L} : H \to \mathbb{R}$ in the direction $v$ as
$$D_v\mathcal{L}(f) = \lim_{t\to 0}\frac{\mathcal{L}(f + tv) - \mathcal{L}(f)}{t}.$$
As above, if we take
$$\mathcal{L}(f) = \langle Lf, f\rangle,$$
where $L$ is a self-adjoint positive definite operator on $H$, then we can follow the above steps to show that
$$D_h\mathcal{L}(f) = 2\langle Lf, h\rangle.$$
If $\lambda$ is an eigenvalue and $\varphi$ is an eigenfunction, then we have
$$\langle L\varphi, \varphi\rangle = \lambda\langle\varphi, \varphi\rangle, \quad \text{or} \quad \lambda = \frac{\langle L\varphi, \varphi\rangle}{\langle\varphi, \varphi\rangle}.$$
The smallest eigenvalue of the operator is
$$\lambda_1 = \min_{f \in H}\frac{\langle Lf, f\rangle}{\langle f, f\rangle}.$$
This is equivalent to the minimization problem
$$\min_H\{\mathcal{L}(f) : T(f) = 1\},$$
where $\mathcal{L}(f) := \langle Lf, f\rangle$ and $T(f) := \langle f, f\rangle$. Then we have the following theorem, which is beyond the scope of a first course.
Theorem 4.5.7. The minimizer for the above minimization problem is achieved in $H$.

Then, by the Lagrange multiplier theorem, the directional derivatives of $\mathcal{L}$ and $T$ are proportional; that is,
$$\langle Lf, h\rangle = \lambda_1\langle f, h\rangle.$$
Once the first eigenvalue is found, we can write $H = E_1 \oplus H_2$, where $E_1$ is the eigenspace and $H_2$ is the orthogonal complement of $E_1$ in $H$. Now it is natural to consider the minimization problem
$$\lambda_2 = \min_{H_2}\{\mathcal{L}(f) : T(f) = 1\}.$$
Then $\lambda_2 \neq \lambda_1$. In this way, one can find the sequence of eigenvalues and eigenfunctions.
We state the following theorem, which is beyond the scope of this course.

Theorem 4.5.8. Let $\{\varphi_n\}$ be the set of normalized eigenfunctions of the (SLP), and let $f(t)$ be a function in $H$. Then
$$\lim_{n\to\infty}\left\langle f - \sum_{k=1}^n c_k\varphi_k,\ f - \sum_{k=1}^n c_k\varphi_k\right\rangle = 0, \quad \text{where } c_k = \int_a^b \varphi_k(t)f(t)w(t)\,dt.$$
As a consequence of this theorem, we have the following method of eigenfunction expansion.

Problem: Let $f(t) \in L^2([-1, 1])$. Find a solution $u \in L^2([-1, 1])$ of $-\left((1 - t^2)u'\right)' = f(t)$.
Solution: First we note that the eigenvalue problem $-\left((1 - t^2)y'\right)' = \lambda y$ is a singular eigenvalue problem with eigenvalues $\lambda_n = n(n + 1)$ and eigenfunctions $P_n(t)$. Since $f \in L^2$, by the above theorem we can write
$$f(t) = \sum_{k=0}^\infty f_kP_k(t), \quad \text{where } f_k = \frac{2k + 1}{2}\int_{-1}^1 f(t)P_k(t)\,dt.$$
Assuming the solution in the form $u(t) = \sum_{k=0}^\infty c_kP_k(t)$, where $c_k = \frac{2k + 1}{2}\int_{-1}^1 u(t)P_k(t)\,dt$, and substituting in the equation, we get
$$-\left((1 - t^2)u'\right)' = -\Big((1 - t^2)\sum c_kP_k'\Big)' = \sum c_k\,k(k + 1)P_k.$$
On the other hand, $f(t) = \sum f_kP_k(t)$. Then $u$ is a solution if $c_k = \frac{f_k}{k(k + 1)}$ for $k \ge 1$. Note that $\lambda_0 = 0$, so (as in the Fredholm alternative) a solution exists only if $f_0 = 0$, in which case $c_0$ is arbitrary.
///
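A sketch of this Legendre expansion in code (assuming NumPy; the right-hand side $f(t) = t = P_1(t)$ is a test choice, for which the exact solution is $u = t/2$ up to the free additive constant $c_0$):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

K = 30
x, w = leggauss(64)                  # Gauss-Legendre nodes and weights on [-1, 1]
f = lambda t: t                      # note f_0 = 0, as required since lambda_0 = 0

c = np.zeros(K)
for k in range(1, K):
    Pk = Legendre.basis(k)(x)                       # P_k at the quadrature nodes
    fk = (2 * k + 1) / 2 * np.sum(w * f(x) * Pk)    # f_k = (2k+1)/2 int f P_k
    c[k] = fk / (k * (k + 1))                       # c_k = f_k / (k(k+1))

t = np.linspace(-1, 1, 5)
u = sum(c[k] * Legendre.basis(k)(t) for k in range(K))
print(u)          # matches t/2 (with the free constant c_0 chosen as 0)
print(t / 2)
```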
Problem 2: Solve the problem $-\frac{1}{t}\frac{d}{dt}\left(t\frac{dy}{dt}\right) = f(t)$, $t \in (0, 1)$, $y(1) = 0$, $\lim_{t\to 0}t\frac{dy}{dt} = 0$.
Solution: Consider the eigenvalue problem $-\frac{1}{t}\frac{d}{dt}\left(t\frac{dy}{dt}\right) = \lambda y$, $y(1) = 0$, $\lim_{t\to 0}ty' = 0$. The general solution of this equation is $y(t) = AJ_0(\sqrt{\lambda}\,t) + BY_0(\sqrt{\lambda}\,t)$, where
$$J_0(t) = \sum_{m=0}^\infty\frac{(-1)^mt^{2m}}{2^{2m}(m!)^2}, \qquad Y_0(t) = (\ln t)\,J_0(t) + \sum_{m=1}^\infty\frac{(-1)^{m+1}H_m\,t^{2m}}{2^{2m}(m!)^2}, \quad H_m = \sum_{j=1}^m\frac{1}{j}.$$
The condition $\lim_{t\to 0}t\frac{dy}{dt} = 0$ forces $B = 0$ (because of the $\ln t$ term), and $y(1) = 0$ gives $J_0(\sqrt{\lambda}) = 0$. Therefore the eigenvalues are $\lambda_n = z_n^2$, where $z_n$ is the $n$-th positive zero of $J_0$, and the eigenfunctions are $J_0(\sqrt{\lambda_n}\,t)$. Now we can write
$$f(t) = \sum_{n=1}^\infty f_nJ_0(\sqrt{\lambda_n}\,t), \qquad f_n = \frac{\int_0^1 f(t)J_0(\sqrt{\lambda_n}\,t)\,t\,dt}{\int_0^1 J_0^2(\sqrt{\lambda_n}\,t)\,t\,dt}.$$
Now, assuming that
$$y(t) = \sum_{n=1}^\infty c_nJ_0(\sqrt{\lambda_n}\,t)$$
and substituting this in the equation, we get $\lambda_nc_n = f_n$. Therefore $c_n = \frac{f_n}{\lambda_n}$, and the solution is determined.
///

4.6 The Green's function
In this section we solve the non-homogeneous boundary value problem using the so-called Green's function. We define the Dirac delta function $\delta_a(t)$ by
1. $\delta_a(t) = 0$ if $t \neq a$;
2. $\int_{\mathbb{R}}\delta_a(t)\,dt = 1.$
This is understood with the help of the approximating sequence $\delta_k$ defined as
$$\delta_k(t - a) = \begin{cases}k & \text{if } |t - a| < \frac{1}{2k}, \\ 0 & \text{if } |t - a| > \frac{1}{2k}.\end{cases}$$
Then
$$\delta_a(t) = \lim_{k\to\infty}\delta_k(t - a).$$
The important property of the Dirac delta is the following.

Theorem 4.6.1 (Filtering property). For a continuous function $f$,
$$\int_{\mathbb{R}}f(s)\delta_t(s)\,ds = f(t).$$
Our aim is to find the solution of the problem
$$L(u)(t) = (pu')' - qu = f(t), \quad B_a(u) = 0, \quad B_b(u) = 0, \tag{4.9}$$
in the form
$$u(t) = \int_a^b G(t, s)f(s)\,ds. \tag{4.10}$$
Definition 4.6.2. The boundary value problem (4.9) is called self-adjoint if
$$\int_a^b w(t)L(v)(t)\,dt = \int_a^b L(w)(t)v(t)\,dt$$
for all functions $w$ and $v$ satisfying the boundary conditions $B_a(w) = B_b(w) = 0$ and $B_a(v) = B_b(v) = 0$.
We have the following motivating example.

Example 4.6.3. Consider the problem $u'' + 4u = -f$, $u(0) = u(1) = 0$. A particular solution of this problem is
$$u_0(t) = -\frac{1}{2}\int_0^t\sin 2(t - s)\,f(s)\,ds.$$
The general solution is
$$u(t) = u_0(t) + c_1\cos 2t + c_2\sin 2t.$$
Substituting the boundary conditions, we get
$$u(t) = -\frac{1}{2}\int_0^t\sin 2(t - s)f(s)\,ds + \frac{\sin 2t}{2\sin 2}\int_0^t\sin 2(1 - s)f(s)\,ds + \frac{\sin 2t}{2\sin 2}\int_t^1\sin 2(1 - s)f(s)\,ds.$$
We can write this in the form (4.10) with
$$G(t, s) = \begin{cases}\dfrac{\sin 2s\,\sin 2(1 - t)}{2\sin 2}, & s \le t, \\[2mm] \dfrac{\sin 2t\,\sin 2(1 - s)}{2\sin 2}, & t \le s.\end{cases}$$
The function $G(t, s)$ is called the Green's function of the problem. We note that $G(t, s)$ satisfies the following properties for fixed $s$:
1. As a function of $t$, $G(t, s)$ satisfies $L(u) = 0$ in $0 < t < s$ and $s < t < 1$.
2. $G(t, s)$ is continuous (in particular at $t = s$).
3. $\dfrac{d}{dt}G(t, s)\Big|_{t=s^+} - \dfrac{d}{dt}G(t, s)\Big|_{t=s^-} = -1.$
4. $G(0, s) = G(1, s) = 0$.
5. $G(t, s) = G(s, t)$.
This motivates the following definition.

Definition 4.6.4. A Green's function of (4.9) is a two-variable function $G(t, s)$ that satisfies the following:
(G1) $L(G) = 0$, $t \neq s$;
(G2) $B_a(G) = B_b(G) = 0$;
(G3) $G(t, s)$ is continuous at $t = s$;
(G4) $\dfrac{d}{dt}G(t, s)\Big|_{t=s^+} - \dfrac{d}{dt}G(t, s)\Big|_{t=s^-} = \dfrac{-1}{p(s)}.$
Theorem 4.6.5. If the BVP $L(u) = 0$, $a < t < b$, $B_a(u) = 0$, $B_b(u) = 0$ has only the trivial solution, then the Green's function $G(t, s)$ exists and is unique.
Proof. Uniqueness: Let $s$ be fixed and let $G_1$ and $G_2$ be two Green's functions. Set $H(t, s) = G_1(t, s) - G_2(t, s)$. From (G3), $H$ is continuous on $[a, b]$. From (G4), the jumps cancel:
$$\frac{dH}{dt}\Big|_{t=s^+} - \frac{dH}{dt}\Big|_{t=s^-} = \left(-\frac{1}{p(s)}\right) - \left(-\frac{1}{p(s)}\right) = 0.$$
Hence $H(\cdot, s) \in C^1(a, b)$. For $t \neq s$, $H'' = -[p'H' - qH]/p$; since $p'$, $q$, $H$, $H'$ are all continuous at $t = s$, $H$ satisfies the homogeneous BVP $L(H) = 0$, $B_a(H) = B_b(H) = 0$. Hence $H \equiv 0$.
Existence: Let $u_1(t)$ be a nontrivial solution of $L(u) = 0$ satisfying $B_a(u_1) = 0$, and let $u_2(t)$ be a nontrivial solution of $L(u) = 0$ with $B_b(u_2) = 0$. We claim that $u_1(t)$ and $u_2(t)$ are linearly independent. Suppose there exists a nonzero constant $C$ such that $u_1(t) = Cu_2(t)$. Then $B_a(u_2) = 0$ as well, so $u_2$ solves the homogeneous BVP, and by hypothesis $u_2 \equiv 0$, a contradiction. Therefore $u_1$ and $u_2$ are linearly independent.
Let $s$ be fixed. Define
$$G(t, s) = \begin{cases}Au_1(t), & t \le s, \\ Bu_2(t), & t \ge s,\end{cases}$$
where $A$ and $B$ are constants to be determined so that $G$ satisfies (G1)-(G4). From (G3) and (G4) we have
$$Bu_2(s) - Au_1(s) = 0 \quad \text{and} \quad Bu_2'(s) - Au_1'(s) = -\frac{1}{p(s)}.$$
Thus $A = -\dfrac{u_2(s)}{p(s)W(s)}$ and $B = -\dfrac{u_1(s)}{p(s)W(s)}$, where $W(s)$ is the Wronskian of $u_1$ and $u_2$ at $s$. Then
$$G(t, s) = \begin{cases}-\dfrac{u_1(t)u_2(s)}{p(s)W(s)}, & t \le s, \\[2mm] -\dfrac{u_1(s)u_2(t)}{p(s)W(s)}, & t \ge s.\end{cases} \tag{4.11}$$
We note that
$$(pW)' = pW' + p'W = p(u_1u_2' - u_2u_1')' + p'(u_1u_2' - u_2u_1') = u_1(pu_2')' - u_2(pu_1')' = q(u_1u_2 - u_2u_1) = 0.$$
Thus $pW$ is constant. It is easy to see that $G(s, t) = G(t, s)$.
///
Now we discuss the first case of the Fredholm theorem, where $Ax = 0$ has only the trivial solution. This is the same as saying that the homogeneous problem has only the trivial solution.
Theorem 4.6.6. If the BVP $L(u) = 0$, $a < t < b$, $B_a(u) = 0$, $B_b(u) = 0$ has only the trivial solution, then the non-homogeneous problem $L(u) = -f$, $a < t < b$, $B_a(u) = 0$, $B_b(u) = 0$ has a solution given by
$$u(t) = \int_a^b G(t, s)f(s)\,ds,$$
where $G(t, s)$ is the Green's function given by (4.11).

Proof. From (4.11), we see that
$$u(t) = -u_1(t)\int_t^b\frac{u_2(s)f(s)}{p(s)W(s)}\,ds - u_2(t)\int_a^t\frac{u_1(s)f(s)}{p(s)W(s)}\,ds.$$
Now, by direct differentiation using the Leibniz rule, we get $L(u) = -f$.
///
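For instance, for $L(u) = u''$ on $(0, 1)$ (so $p \equiv 1$, $q \equiv 0$) with $u(0) = u(1) = 0$, one may take $u_1(t) = t$ and $u_2(t) = 1 - t$, so $pW = -1$ and (4.11) gives $G(t, s) = t(1 - s)$ for $t \le s$ and $s(1 - t)$ for $t \ge s$. A sketch (assuming NumPy/SciPy; $f \equiv 1$ is a test choice) checking that $u(t) = \int_0^1 G(t, s)f(s)\,ds$ solves $u'' = -f$:

```python
import numpy as np
from scipy.integrate import quad

# Green's function of u'' = -f, u(0) = u(1) = 0, built from u1 = t, u2 = 1 - t.
G = lambda t, s: t * (1 - s) if t <= s else s * (1 - t)

f = lambda s: 1.0                                  # test right-hand side
u = lambda t: quad(lambda s: G(t, s) * f(s), 0, 1, points=[t])[0]

ts = np.linspace(0, 1, 5)
print([round(u(t), 6) for t in ts])                # matches t(1 - t)/2,
print(ts * (1 - ts) / 2)                           # the solution of u'' = -1
```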
Now consider the case where the homogeneous problem has a non-trivial solution. Here we expect a result similar to the case where $Ax = 0$ has a nontrivial solution (i.e., when $A$ is a singular matrix).
Theorem 4.6.7. Let $u_0(t)$ be a nontrivial solution of the BVP $L(u) = 0$, $a < t < b$, $B_a(u) = 0$, $B_b(u) = 0$. Then the non-homogeneous problem $L(u) = -f$, $B_a(u) = 0$, $B_b(u) = 0$ has a solution if and only if $f$ is orthogonal to $u_0$, i.e.,
$$\int_a^b f(t)u_0(t)\,dt = 0.$$
Proof. Let $u(t)$ be a solution of the non-homogeneous problem. Then, multiplying the equation by $u_0$ and integrating from $a$ to $b$, we get
$$-\int_a^b f(t)u_0(t)\,dt = \int_a^b u_0L(u)\,dt = \int_a^b uL(u_0)\,dt = 0.$$
Conversely, let $\int_a^b fu_0 = 0$. Let $v(t)$ be such that $L(v) = 0$, $B_a(v) \neq 0$ and $B_b(v) \neq 0$. Then $u_0(t)$ and $v(t)$ are linearly independent, and by the variation of parameters formula a particular solution $u_p(t)$ of $L(u) = -f$ is given by
$$u_p(t) = y_1(t)v(t) + y_2(t)u_0(t),$$
where
$$y_1(t) = \frac{1}{pW}\int_a^t u_0(s)f(s)\,ds \quad \text{and} \quad y_2(t) = -\frac{1}{pW}\int_a^t v(s)f(s)\,ds,$$
$W$ being the Wronskian of $v$ and $u_0$ (recall that $pW$ is constant). Then the general solution of $L(u) = -f$ is given by
$$u(t) = u_p(t) + c_1v(t) + c_2u_0(t).$$
This solution satisfies the boundary conditions $B_a(u) = 0$ and $B_b(u) = 0$ if
$$0 = B_a(u) = B_a(u_p) + c_1B_a(v), \tag{4.12}$$
$$0 = B_b(u) = B_b(u_p) + c_1B_b(v). \tag{4.13}$$
We note that $u_p' = y_1(t)v'(t) + y_2(t)u_0'(t)$. Thus $u_p(a) = u_p'(a) = 0$ (since $y_1(a) = y_2(a) = 0$), and therefore $B_a(u_p) = Au_p(a) + Bu_p'(a) = 0$. Also $u_p(b) = y_2(b)u_0(b)$ and $u_p'(b) = y_2(b)u_0'(b)$, since $y_1(b) = \frac{1}{pW}\int_a^b u_0f = 0$ by hypothesis. Then
$$B_b(u_p) = Cu_p(b) + Du_p'(b) = y_2(b)B_b(u_0) = 0.$$
Hence from (4.12) and (4.13) we get $c_1 = 0$, since $B_a(v) \neq 0$ and $B_b(v) \neq 0$, while $c_2$ is arbitrary. Hence
$$u(t) = \frac{1}{pW}\int_a^t\big[u_0(s)v(t) - v(s)u_0(t)\big]f(s)\,ds + c_2u_0(t)$$
is a solution for any $c_2$.
///

4.7 Finite dimensional approximations
In this section we will study a method of finding approximations through finite dimensional reduction. This is a powerful technique and is the basic idea behind the so-called finite element methods. To represent a solution, or even an approximate solution, we have so far needed either the eigenfunctions or exact linearly independent solutions of the homogeneous equation (in the Green's function method). It is not always easy to find these functions for equations with non-constant coefficients. The finite dimensional approximation does not require eigenfunctions or exact solutions. These remarks apply to equations like
$$(BVP): \quad L(u) := -(p(t)u')' = f(t), \quad 0 < t < l, \qquad u(0) = u(l) = 0.$$
Suppose the above problem admits a solution, say $u(t)$. Then, multiplying the equation with a function $\varphi(t)$ which satisfies the boundary conditions and integrating by parts, we get
$$\int_0^l f(t)\varphi(t)\,dt = \int_0^l p(t)u'(t)\varphi'(t)\,dt - \Big[(pu')\varphi\Big]_{t=0}^{l} = \int_0^l p(t)u'(t)\varphi'(t)\,dt.$$
We define the function space
$$C_0([0, l]) = \{u(t) \in C([0, l]) : u(0) = u(l) = 0\}.$$
Definition 4.7.1. A function $u(t) \in C_0([0, l])$ is called a weak solution of the (BVP) if
$$\int_0^l f(t)\varphi(t)\,dt = \int_0^l p(t)u'(t)\varphi'(t)\,dt \tag{4.14}$$
for all $\varphi \in C_0([0, l])$.
The idea of finite dimensional approximation is to find the weak solution in finite dimensional subspaces $V_n$, and then choose these spaces in such a way that
$$V_n \subset V_{n+1}, \quad \text{and} \quad C_0([0, l]) \subset \overline{\bigcup_{n=1}^\infty V_n}.$$
The Galerkin method: The Galerkin idea is simple: reduce the problem to the space $V_n$. Find $u_n \in V_n$ satisfying
$$\int_0^l f(t)\varphi(t)\,dt = \int_0^l p(t)u_n'(t)\varphi'(t)\,dt \quad \text{for all } \varphi \in V_n.$$
To compute $u_n$, suppose $\{\varphi_1, \varphi_2, \dots, \varphi_n\}$ is a basis for $V_n$. Then the above is equivalent to: find $u_n \in V_n$ satisfying
$$\int_0^l f(t)\varphi_i(t)\,dt = \int_0^l p(t)u_n'(t)\varphi_i'(t)\,dt, \quad i = 1, 2, \dots, n.$$
Since $u_n \in V_n$, it can be written as a linear combination of the basis vectors:
$$u_n = \sum_{j=1}^n c_j\varphi_j.$$
Finding $u_n$ is now equivalent to determining $c_1, c_2, \dots, c_n$. Substituting $u_n$ in the weak formulation, we get
$$\sum_{j=1}^n\left(\int_0^l p\,\varphi_j'\varphi_i'\right)c_j = (f, \varphi_i), \quad i = 1, 2, \dots, n.$$
This is a system of $n$ linear equations for the unknowns $c_1, c_2, \dots, c_n$. We define the matrix $K \in \mathbb{R}^{n\times n}$ and the vector $\mathbf{f} \in \mathbb{R}^n$ by
$$K_{ij} = \int_0^l p\,\varphi_j'\varphi_i', \qquad f_i = (f, \varphi_i), \quad i, j = 1, 2, \dots, n.$$
Finding $u_n$ is then equivalent to solving the linear system
$$KC = \mathbf{f}.$$
Since $\varphi_1, \dots, \varphi_n$ are linearly independent and $p > 0$, the matrix $K$ (a Gram matrix of the derivatives $\varphi_j'$) is symmetric positive definite, hence invertible.
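The whole procedure fits in a few lines of code. A sketch (assuming NumPy/SciPy; the data $p(t) = 1 + t$, $f(t) = t$ and the sine basis are illustrative choices anticipating the examples below):

```python
import numpy as np
from scipy.integrate import quad

n = 8
p = lambda t: 1.0 + t                  # illustrative non-constant coefficient
f = lambda t: t
phi = lambda i, t: np.sin(i * np.pi * t)              # basis of V_n on (0, 1)
dphi = lambda i, t: i * np.pi * np.cos(i * np.pi * t)

# Stiffness matrix K_ij = int_0^1 p phi_j' phi_i' and load vector f_i = (f, phi_i).
K = np.array([[quad(lambda t: p(t) * dphi(j, t) * dphi(i, t), 0, 1)[0]
               for j in range(1, n + 1)] for i in range(1, n + 1)])
F = np.array([quad(lambda t: f(t) * phi(i, t), 0, 1)[0] for i in range(1, n + 1)])
c = np.linalg.solve(K, F)              # solve K C = f
u_n = lambda t: sum(c[i - 1] * phi(i, t) for i in range(1, n + 1))
print(u_n(0.5))                        # value of the Galerkin approximation
```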
Example 4.7.2. Let $V_N$ be the subspace of $V$ spanned by the basis
$$\{\sin(\pi t), \sin(2\pi t), \dots, \sin(N\pi t)\}.$$
Find the weak solution of $-u'' = t$, $0 < t < 1$, $u(0) = u(1) = 0$ in $V_N$. The weak solution is of the form $u_N = \sum_{i=1}^N c_i\sin(i\pi t)$, and $K_{ij}$ satisfies
$$K_{ij} = \int_0^1\varphi_i'\varphi_j' = ij\pi^2\int_0^1\cos(i\pi t)\cos(j\pi t)\,dt = \begin{cases}\dfrac{i^2\pi^2}{2}, & i = j, \\ 0, & i \neq j,\end{cases}$$
and
$$(f, \varphi_i) = \int_0^1 t\sin(i\pi t)\,dt = \frac{(-1)^{i+1}}{i\pi}.$$
Hence the matrix $K$ is diagonal and we can solve the system $KC = \mathbf{f}$ immediately to obtain the solution
$$u_N = \sum_{i=1}^N\frac{2(-1)^{i+1}}{i^3\pi^3}\sin(i\pi t).$$
In the above example, letting $N \to \infty$ we see that the solution is
$$u = \sum_{i=1}^\infty\frac{2(-1)^{i+1}}{i^3\pi^3}\sin(i\pi t),$$
which (up to the sign of the right-hand side) is the same as the solution obtained earlier using the eigenfunction expansion. In the next example we will see that this method can be applied to equations with non-constant coefficients.
Example 4.7.3. Solve the following BVP in the space $V_N$ defined in the above example:
$$-\left((1 + t)u'\right)' = t, \quad 0 < t < 1, \quad u(0) = u(1) = 0.$$
The weak solution is of the form $u_N = \sum_{i=1}^N c_i\sin(i\pi t)$, and $K_{ij}$ satisfies
$$K_{ij} = ij\pi^2\int_0^1(1 + t)\cos(i\pi t)\cos(j\pi t)\,dt = \begin{cases}\dfrac{3i^2\pi^2}{4}, & i = j, \\[2mm] \dfrac{ij\left((-1)^{i+j} - 1\right)(i^2 + j^2)}{(i + j)^2(i - j)^2}, & i \neq j,\end{cases}$$
and
$$(f, \varphi_i) = \int_0^1 t\sin(i\pi t)\,dt = \frac{(-1)^{i+1}}{i\pi}.$$
We can solve the system $KC = \mathbf{f}$ and find the coefficients $c_i$.
Exercise: Solve the above problem in the finite dimensional space
$$V_2 = \mathrm{span}\left\{t(1 - t),\ t\left(\tfrac{1}{2} - t\right)(1 - t)\right\}.$$
Piecewise linear polynomials and FEM
In the previous examples we saw that to get a more accurate solution we need to invert a matrix which is sometimes neither diagonal nor sparse enough. In the finite element method we choose a basis consisting of piecewise linear functions. We construct subspaces of piecewise polynomials. To define a space of piecewise linear functions, we begin with a mesh on the interval $[0, l]$:
$$0 = t_0 < t_1 < \dots < t_n = l.$$
The points $t_0, t_1, \dots, t_n$ are called the nodes of the mesh. Often we choose a regular mesh, with $t_i = ih$, $h = \frac{l}{n}$. A function $p : [0, l] \to \mathbb{R}$ is piecewise linear if, for each $i = 1, 2, \dots, n$, there exist constants $a_i, b_i$ with
$$p(t) = a_it + b_i \quad \text{for all } t \in (t_{i-1}, t_i).$$
We define the space
$$S_n = \{p : [0, l] \to \mathbb{R} : p \text{ is continuous and piecewise linear},\ p(0) = p(l) = 0\}.$$
We can also define the "hat functions" for $i = 1, 2, \dots, n - 1$:
$$\varphi_i(t) = \begin{cases}\frac{1}{h}(t - (i - 1)h), & t_{i-1} < t < t_i, \\ -\frac{1}{h}(t - (i + 1)h), & t_i < t < t_{i+1}, \\ 0 & \text{otherwise}.\end{cases}$$
Then
$$\varphi_i(t_j) = \delta_{ij} = \begin{cases}1, & i = j, \\ 0, & i \neq j.\end{cases}$$
It is not difficult to check that if $p \in S_n$, then
$$p(t) = \sum_{i=1}^{n-1}p(t_i)\varphi_i(t).$$
Example 4.7.4. Solve the problem $-u'' = e^t$, $0 < t < 1$, $u(0) = u(1) = 0$ on $S_n$.
From the above hat functions, we get
$$\frac{d\varphi_i}{dt}(t) = \begin{cases}\frac{1}{h}, & t_{i-1} < t < t_i, \\ -\frac{1}{h}, & t_i < t < t_{i+1}, \\ 0 & \text{otherwise}.\end{cases}$$
Therefore
$$K_{ii} = \int_{(i-1)h}^{(i+1)h}\frac{dt}{h^2} = \frac{2}{h}, \qquad K_{i,i+1} = -\int_{ih}^{(i+1)h}\frac{1}{h^2}\,dt = -\frac{1}{h},$$
and
$$(f, \varphi_i) = \frac{e^{ih}}{h}\left(e^h + e^{-h} - 2\right).$$
In the case $n = 5$ we get the matrix $K$ and the vector $\mathbf{f}$ as
$$K = \begin{pmatrix}10 & -5 & 0 & 0 \\ -5 & 10 & -5 & 0 \\ 0 & -5 & 10 & -5 \\ 0 & 0 & -5 & 10\end{pmatrix}; \qquad \mathbf{f} = \begin{pmatrix}0.2451 \\ 0.2994 \\ 0.3656 \\ 0.4466\end{pmatrix}.$$
The solution is
$$C = \begin{pmatrix}0.1223 \\ 0.1955 \\ 0.2089 \\ 0.1491\end{pmatrix}.$$
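These numbers are easy to reproduce; a sketch (assuming NumPy; the closed-form comparison solution $u(t) = 1 + (e - 1)t - e^t$ follows by integrating $-u'' = e^t$ directly):

```python
import numpy as np

# Assemble and solve the system of Example 4.7.4: -u'' = e^t, u(0) = u(1) = 0,
# with n = 5 (h = 1/5), reproducing the 4x4 matrix K, the vector f and C above.
n = 5
h = 1.0 / n
i = np.arange(1, n)                    # interior nodes t_i = i h
K = (2 / h) * np.eye(n - 1) - (1 / h) * (np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))
F = np.exp(i * h) / h * (np.exp(h) + np.exp(-h) - 2)
C = np.linalg.solve(K, F)
print(np.round(C, 4))                  # [0.1223 0.1955 0.2089 0.1491]

# For this problem the FEM values coincide with the exact solution at the nodes:
print(np.round(1 + (np.e - 1) * i * h - np.exp(i * h), 4))
```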
We have the following example with non-constant coefficients.
Example 4.7.5. Solve the BVP $-((1 + t)u')' = 1$, $0 < t < 1$, $u(0) = u(1) = 0$ on $S_n$.
Here
$$K_{ii} = \int_{(i-1)h}^{(i+1)h}(1 + t)\,\frac{dt}{h^2} = \frac{2(ih + 1)}{h}, \qquad K_{i,i+1} = -\int_{ih}^{(i+1)h}(1 + t)\,\frac{dt}{h^2} = -\frac{h + 2ih + 2}{2h}.$$
As in the previous example, these are enough to compute the matrix $K$. We also have
$$(f, \varphi_i) = \int_{(i-1)h}^{ih}\frac{1}{h}(t - (i - 1)h)\,dt - \int_{ih}^{(i+1)h}\frac{1}{h}(t - (i + 1)h)\,dt = h.$$
In the case $n = 5$, the matrix $K$ is $4\times 4$:
$$K = \begin{pmatrix}12 & -13/2 & 0 & 0 \\ -13/2 & 14 & -15/2 & 0 \\ 0 & -15/2 & 16 & -17/2 \\ 0 & 0 & -17/2 & 18\end{pmatrix}; \qquad \mathbf{f} = \begin{pmatrix}1/5 \\ 1/5 \\ 1/5 \\ 1/5\end{pmatrix}.$$
The solution is
$$C = \begin{pmatrix}0.0628 \\ 0.0851 \\ 0.0778 \\ 0.0479\end{pmatrix}.$$
Fundamental issues for the finite element method are:
1. How fast does the approximate solution converge to the true solution as $n \to \infty$?
2. How can the sparse system $KC = \mathbf{f}$ be solved efficiently?
These questions are beyond the level and scope of this course.
References:
1. S. Ahmad and A. Ambrosetti, A Textbook on Ordinary Differential Equations, Springer.
2. C. Y. Lo, Boundary Value Problems, World Scientific.
3. M. S. Gockenbach, Partial Differential Equations, SIAM.