A Survey on Solution Methods for Integral Equations∗
Ilias S. Kotsireas†
June 2008
1 Introduction
Integral Equations arise naturally in applications, in many areas of Mathematics, Science and
Technology and have been studied extensively both at the theoretical and practical level. It
is noteworthy that a MathSciNet keyword search on Integral Equations returns more than
eleven thousand items. In this survey we plan to describe several solution methods for Integral
Equations, illustrated with a number of fully worked out examples. In addition, we provide a
bibliography, for the reader who would be interested in learning more about various theoretical
and computational aspects of Integral Equations.
Our interest in Integral Equations stems from the fact that understanding and implementing
solution methods for Integral Equations is of vital importance in designing efficient parameterization algorithms for algebraic curves, surfaces and hypersurfaces. This is because the
implicitization algorithm for algebraic curves, surfaces and hypersurfaces based on a nullvector
computation, can be inverted to yield a parameterization algorithm.
Integral Equations are inextricably related with other areas of Mathematics, such as Integral
Transforms, Functional Analysis and so forth. In view of this fact, and because we made a
conscious effort to limit the length to under 50 pages, it was inevitable that the present work
is not self-contained.
∗ We thank ENTER. This project is implemented in the framework of Measure 8.3 of the programme "Competitiveness", 3rd European Union Support Framework, and is funded as follows: 75% of public funding from the European Union, Social Fund, 25% of public funding from the Greek State, Ministry of Development, General Secretariat of Research and Technology, and from the private sector.
† We thank Professor Ioannis Z. Emiris, University of Athens, Athens, Greece, for the warm hospitality in his Laboratory of Geometric & Algebraic Algorithms, ERGA.
2 Linear Integral Equations

2.1 General Form
The most general form of a linear integral equation is

    h(x)u(x) = f(x) + ∫_a^{b(x)} K(x,t)u(t) dt        (1)
The type of an integral equation can be determined via the following conditions:

• f(x) = 0 . . . homogeneous
• f(x) ≠ 0 . . . nonhomogeneous
• b(x) = x . . . Volterra integral equation
• b(x) = b . . . Fredholm integral equation
• h(x) = 0 . . . of the 1st kind
• h(x) = 1 . . . of the 2nd kind
For example, Abel's problem:

    −√(2g) f(x) = ∫_0^x φ(t)/√(x − t) dt        (2)
is a nonhomogeneous Volterra equation of the 1st kind.
2.2 Linearity of Solutions
If u_1(x) and u_2(x) are both solutions to a homogeneous linear integral equation, then any linear combination c_1 u_1(x) + c_2 u_2(x) is also a solution.
2.3 The Kernel
K(x, t) is called the kernel of the integral equation. The equation is called singular if:
• the range of integration is infinite
• the kernel becomes infinite in the range of integration
2.3.1 Difference Kernels
If K(x, t) = K(x−t) (i.e. the kernel depends only on x−t), then the kernel is called a difference
kernel. Volterra equations with difference kernels may be solved using either Laplace or Fourier
transforms, depending on the limits of integration.
2.3.2 Resolvent Kernels

2.4 Relation to Differential Equations
Differential equations and integral equations are equivalent and may be converted back and forth:

    d^2y/dx^2 = F(x)  ⇒  y(x) = ∫_a^x ∫_a^s F(t) dt ds + c_1 x + c_2
2.4.1 Reduction to One Integration

In most cases, the resulting integral may be reduced to an integration in one variable:

    ∫_a^x ∫_a^s F(t) dt ds = ∫_a^x F(t) ∫_t^x ds dt = ∫_a^x (x − t)F(t) dt
Or, in general:

    ∫_a^x ∫_a^{x_1} · · · ∫_a^{x_{n−1}} F(x_n) dx_n dx_{n−1} . . . dx_1 = (1/(n−1)!) ∫_a^x (x − x_1)^{n−1} F(x_1) dx_1
2.4.2 Generalized Leibnitz Formula

The Leibnitz formula may be useful in converting and solving some types of equations.

    d/dx ∫_{α(x)}^{β(x)} F(x,y) dy = ∫_{α(x)}^{β(x)} (∂F/∂x)(x,y) dy + F(x, β(x)) dβ/dx − F(x, α(x)) dα/dx

3 Transforms

3.1 Laplace Transform
The Laplace transform is an integral transform of a function f (x) defined on (0, ∞). The
general formula is:
    F(s) ≡ L{f} = ∫_0^∞ e^{−sx} f(x) dx        (3)
The Laplace transform happens to be a Fredholm integral equation of the 1st kind with kernel K(s,x) = e^{−sx}.
3.1.1 Inverse
The inverse Laplace transform involves complex integration, so tables of transform pairs are
normally used to find both the Laplace transform of a function and its inverse.
3.1.2 Convolution Product

When F_1(s) ≡ L{f_1} and F_2(s) ≡ L{f_2}, the convolution product satisfies L{f_1 ∗ f_2} = F_1(s)F_2(s), where

    (f_1 ∗ f_2)(x) = ∫_0^x f_1(x − t) f_2(t) dt.

Here f_1(x − t) plays the role of a difference kernel and f_2(t) that of a solution to the integral equation. Volterra integral equations with difference kernels, where the functions are defined on (0, ∞), may be solved using this method.
3.1.3 Commutativity

The convolution product is commutative. That is:

    f_1 ∗ f_2 = ∫_0^x f_1(x − t) f_2(t) dt = ∫_0^x f_2(x − t) f_1(t) dt = f_2 ∗ f_1
3.1.4 Example

Find the inverse Laplace transform of F(s) = 1/(s^2 + 9) + 1/(s(s + 1)) using a table of transform pairs:

    f(x) = L^{−1}{ 1/(s^2 + 9) + 1/(s(s + 1)) }
         = L^{−1}{ 1/(s^2 + 9) } + L^{−1}{ 1/(s(s + 1)) }
         = (1/3) sin 3x + L^{−1}{ 1/s − 1/(s + 1) }
         = (1/3) sin 3x + L^{−1}{ 1/s } − L^{−1}{ 1/(s + 1) }
         = (1/3) sin 3x + 1 − e^{−x}
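A computer algebra system can confirm this kind of table lookup. The sketch below (SymPy, symbol names ours) verifies the result by transforming the claimed inverse forward again:

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)

# Claimed inverse from the worked example: f(x) = (1/3) sin 3x + 1 - e^{-x}
f = sp.sin(3*x)/3 + 1 - sp.exp(-x)

# The forward Laplace transform of f should reproduce F(s)
F_computed = sp.laplace_transform(f, x, s)[0]
F_target = 1/(s**2 + 9) + 1/(s*(s + 1))

assert sp.simplify(F_computed - F_target) == 0
print("inverse verified")
```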
3.2 Fourier Transform

The Fourier exponential transform is an integral transform of a function f(x) defined on (−∞, ∞). The general formula is:

    F(λ) ≡ F{f} = ∫_{−∞}^{∞} e^{−iλx} f(x) dx        (4)

The Fourier transform happens to be a Fredholm equation of the 1st kind with kernel K(λ,x) = e^{−iλx}.
3.2.1 Inverse

The inverse Fourier transform is given by:

    f(x) ≡ F^{−1}{F} = (1/2π) ∫_{−∞}^{∞} e^{iλx} F(λ) dλ        (5)
It is sometimes difficult to determine the inverse, so tables of transform pairs are normally used
to find both the Fourier transform of a function and its inverse.
3.2.2 Convolution Product

When F_1(λ) ≡ F{f_1} and F_2(λ) ≡ F{f_2}, the convolution product satisfies F{f_1 ∗ f_2} = F_1(λ)F_2(λ), where

    (f_1 ∗ f_2)(x) = ∫_{−∞}^{∞} f_1(x − t) f_2(t) dt.

Here f_1(x − t) plays the role of a difference kernel and f_2(t) that of a solution to the integral equation. Volterra integral equations with difference kernels, where the functions are defined on (−∞, ∞), may be solved using this method.
3.2.3 Commutativity

The convolution product is commutative. That is:

    f_1 ∗ f_2 = ∫_{−∞}^{∞} f_1(x − t) f_2(t) dt = ∫_{−∞}^{∞} f_2(x − t) f_1(t) dt = f_2 ∗ f_1
3.2.4 Example

Find the Fourier transform of g(x) = sin(ax)/x using a table of integral transforms. The inverse transform of sin(aλ)/λ is

    f(x) = { 1/2,  |x| ≤ a
           { 0,    |x| > a

but since the Fourier transform is symmetrical, we know that

    F{F(x)} = 2π f(−λ).

Therefore, let

    F(x) = sin(ax)/x

and

    f(λ) = { 1/2,  |λ| ≤ a
           { 0,    |λ| > a

so that

    F{ sin(ax)/x } = 2π f(−λ) = { π,  |λ| ≤ a
                                 { 0,  |λ| > a.
3.3 Mellin Transform

The Mellin transform is an integral transform of a function f(x) defined on (0, ∞). The general formula is:

    F(λ) ≡ M{f} = ∫_0^∞ x^{λ−1} f(x) dx        (6)

If x = e^{−t} and f(x) ≡ 0 for x < 0, then the Mellin transform reduces to a Laplace transform:

    ∫_0^∞ (e^{−t})^{λ−1} f(e^{−t})(−e^{−t}) dt = −∫_0^∞ e^{−λt} f(e^{−t}) dt.
3.3.1 Inverse

Like the inverse Laplace transform, the inverse Mellin transform involves complex integration, so tables of transform pairs are normally used to find both the Mellin transform of a function and its inverse.
3.3.2 Convolution Product

When F_1(λ) ≡ M{f_1} and F_2(λ) ≡ M{f_2}, the convolution product satisfies M{f_1 ∗ f_2} = F_1(λ)F_2(λ), where

    (f_1 ∗ f_2)(x) = ∫_0^∞ f_1(t) f_2(x/t) dt/t.

Volterra integral equations with kernels of the form K(x/t), where the integration is performed on the interval (0, ∞), may be solved using this method.
3.3.3 Example

Find the Mellin transform of e^{−ax}, a > 0. By definition:

    F(λ) = ∫_0^∞ x^{λ−1} e^{−ax} dx.

Let ax = z, so that:

    F(λ) = a^{−λ} ∫_0^∞ e^{−z} z^{λ−1} dz.

From the definition of the gamma function Γ(ν),

    Γ(ν) = ∫_0^∞ x^{ν−1} e^{−x} dx,

the result is

    F(λ) = Γ(λ)/a^λ.

4 Numerical Integration
General Equation:

    u(x) = f(x) + ∫_a^b K(x,t)u(t) dt

Let S_n(x) = Σ_{k=0}^n K(x,t_k)u(t_k) Δ_k t.
Let u(x_i) = f(x_i) + Σ_{k=0}^n K(x_i,t_k)u(t_k) Δ_k t,  i = 0, 1, 2, . . . , n.
If equal increments are used, ∴ Δx = (b − a)/n.
4.1 Midpoint Rule

    ∫_a^b f(x) dx ≈ ((b − a)/n) Σ_{i=1}^n f( (x_{i−1} + x_i)/2 )

4.2 Trapezoid Rule

    ∫_a^b f(x) dx ≈ ((b − a)/n) [ (1/2)f(x_0) + Σ_{i=1}^{n−1} f(x_i) + (1/2)f(x_n) ]
4.3 Simpson's Rule

    ∫_a^b f(x) dx ≈ ((b − a)/(3n)) [ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + · · · + 4f(x_{n−1}) + f(x_n) ]
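These three rules translate directly into code. A minimal sketch in Python (function names are our own; Simpson's rule assumes an even number of subintervals):

```python
def midpoint(f, a, b, n):
    # (b-a)/n times the sum of f at subinterval midpoints
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    # endpoints weighted 1/2, interior points weighted 1
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    # weights 1, 4, 2, 4, ..., 4, 1 times (b-a)/(3n); n must be even
    assert n % 2 == 0
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

# Example: ∫_0^1 x^2 dx = 1/3 (Simpson's rule is exact for cubics and below)
print(midpoint(lambda x: x**2, 0, 1, 100))
print(simpson(lambda x: x**2, 0, 1, 100))
```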
5 Volterra Equations

Volterra integral equations are a type of linear integral equation (1) where b(x) = x:

    h(x)u(x) = f(x) + ∫_a^x K(x,t)u(t) dt        (7)

The equation is said to be of the first kind when h(x) = 0,

    −f(x) = ∫_a^x K(x,t)u(t) dt        (8)

and of the second kind when h(x) = 1,

    u(x) = f(x) + ∫_a^x K(x,t)u(t) dt        (9)
5.1 Relation to Initial Value Problems

A general initial value problem is given by:

    d^2y/dx^2 + A(x) dy/dx + B(x)y(x) = g(x)        (10)
    y(a) = c_1,  y′(a) = c_2

    ∴ y(x) = f(x) + ∫_a^x K(x,t)y(t) dt

where

    f(x) = ∫_a^x (x − t)g(t) dt + (x − a)[c_1 A(a) + c_2] + c_1

and

    K(x,t) = (t − x)[B(t) − A′(t)] − A(t)
5.2 Solving 2nd-Kind Volterra Equations

5.2.1 Neumann Series

The resolvent kernel for this method is:

    Γ(x,t;λ) = Σ_{n=0}^∞ λ^n K_{n+1}(x,t)

where

    K_{n+1}(x,t) = ∫_t^x K(x,y) K_n(y,t) dy

and

    K_1(x,t) = K(x,t)

The series form of u(x) is

    u(x) = u_0(x) + λ u_1(x) + λ^2 u_2(x) + · · ·

and since u(x) is a Volterra equation,

    u(x) = f(x) + λ ∫_a^x K(x,t)u(t) dt

so

    u_0(x) + λ u_1(x) + λ^2 u_2(x) + · · · = f(x) + λ ∫_a^x K(x,t)u_0(t) dt + λ^2 ∫_a^x K(x,t)u_1(t) dt + · · ·

Now, by equating like coefficients:

    u_0(x) = f(x)
    u_1(x) = ∫_a^x K(x,t)u_0(t) dt
    u_2(x) = ∫_a^x K(x,t)u_1(t) dt
    ...
    u_n(x) = ∫_a^x K(x,t)u_{n−1}(t) dt
Substituting u_0(x) = f(x):

    u_1(x) = ∫_a^x K(x,t) f(t) dt

Substituting this for u_1(x):

    u_2(x) = ∫_a^x K(x,s) ∫_a^s K(s,t) f(t) dt ds
           = ∫_a^x f(t) [ ∫_t^x K(x,s) K(s,t) ds ] dt
           = ∫_a^x f(t) K_2(x,t) dt
Continue substituting recursively for u3 , u4 , . . . to determine the resolvent kernel. Sometimes
the resolvent kernel becomes recognizable after a few iterations as a Maclaurin or Taylor series
and can be replaced by the function represented by that series. Otherwise numerical methods
must be used to solve the equation.
Example: Use the Neumann series method to solve the Volterra integral equation of the 2nd kind:

    u(x) = f(x) + λ ∫_0^x e^{x−t} u(t) dt        (11)
In this example, K_1(x,t) ≡ K(x,t) = e^{x−t}, so K_2(x,t) is found by:

    K_2(x,t) = ∫_t^x K(x,s) K_1(s,t) ds
             = ∫_t^x e^{x−s} e^{s−t} ds
             = ∫_t^x e^{x−t} ds
             = (x − t) e^{x−t}

and K_3(x,t) is found by:

    K_3(x,t) = ∫_t^x K(x,s) K_2(s,t) ds
             = ∫_t^x e^{x−s} (s − t) e^{s−t} ds
             = ∫_t^x (s − t) e^{x−t} ds
             = e^{x−t} ∫_t^x (s − t) ds
             = e^{x−t} [ s^2/2 − ts ]_t^x
             = e^{x−t} [ (x^2 − t^2)/2 − t(x − t) ]
             = ((x − t)^2/2) e^{x−t}
so by repeating this process the general equation for K_{n+1}(x,t) is

    K_{n+1}(x,t) = ((x − t)^n / n!) e^{x−t}.

Therefore, the resolvent kernel for eq. (11) is

    Γ(x,t;λ) = K_1(x,t) + λ K_2(x,t) + λ^2 K_3(x,t) + · · ·
             = e^{x−t} + λ(x − t)e^{x−t} + λ^2 ((x − t)^2/2) e^{x−t} + · · ·
             = e^{x−t} [ 1 + λ(x − t) + λ^2 (x − t)^2/2 + · · · ].

The series in brackets, 1 + λ(x − t) + λ^2 (x − t)^2/2 + · · ·, is the Maclaurin series of e^{λ(x−t)}, so the resolvent kernel is

    Γ(x,t;λ) = e^{x−t} e^{λ(x−t)} = e^{(λ+1)(x−t)}

and the solution to eq. (11) is

    u(x) = f(x) + λ ∫_0^x e^{(λ+1)(x−t)} f(t) dt.
5.2.2 Successive Approximations

1. Make a reasonable 0th approximation, u_0(x).
2. Let i = 1.
3. u_i(x) = f(x) + ∫_0^x K(x,t) u_{i−1}(t) dt.
4. Increment i and repeat the previous step until a desired level of accuracy is reached.

If f(x) is continuous on 0 ≤ x ≤ a and K(x,t) is continuous on both 0 ≤ x ≤ a and 0 ≤ t ≤ x, then the sequence u_n(x) will converge to u(x). The sequence may occasionally be recognized as a Maclaurin or Taylor series, otherwise numerical methods must be used to solve the equation.
Example: Use the successive approximations method to solve the Volterra integral equation of the 2nd kind:

    u(x) = x − ∫_0^x (x − t)u(t) dt        (12)

In this example, start with the 0th approximation u_0(t) = 0. This gives u_1(x) = x,

    u_2(x) = x − ∫_0^x (x − t)u_1(t) dt
           = x − ∫_0^x (x − t)t dt
           = x − [ xt^2/2 − t^3/3 ]_0^x
           = x − x^3/3!
and

    u_3(x) = x − ∫_0^x (x − t)u_2(t) dt
           = x − ∫_0^x (x − t)( t − t^3/6 ) dt
           = x − x[ t^2/2 − t^4/24 ]_0^x + [ t^3/3 − t^5/30 ]_0^x
           = x − x( x^2/2 − x^4/24 ) + ( x^3/3 − x^5/30 )
           = x − x^3/2 + x^5/24 + x^3/3 − x^5/30
           = x − x^3/3! + x^5/5!
Continuing this process, the nth approximation is

    u_n(x) = x − x^3/3! + x^5/5! − · · · + (−1)^n x^{2n+1}/(2n+1)!

which is clearly the Maclaurin series of sin x. Therefore the solution to eq. (12) is

    u(x) = sin x.
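The iteration is easy to carry out with a computer algebra system; a SymPy sketch (symbol names ours):

```python
import sympy as sp

x, t = sp.symbols('x t')

# u_i(x) = x - ∫_0^x (x - t) u_{i-1}(t) dt, starting from u_0 = 0
u = sp.Integer(0)
for _ in range(3):
    u = x - sp.integrate((x - t) * u.subs(x, t), (t, 0, x))

# After three iterations: x - x^3/3! + x^5/5!, the start of the sin x series
assert sp.expand(u - (x - x**3/6 + x**5/120)) == 0
print(u)
```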
5.2.3 Laplace Transform

If the kernel depends only on x − t, then it is called a difference kernel and denoted by K(x − t). Now the equation becomes:

    u(x) = f(x) + λ ∫_0^x K(x − t)u(t) dt,

where

    ∫_0^x K(x − t)u(t) dt = K ∗ u

is the convolution product. Define the Laplace transform pairs U(s) ∼ u(x), F(s) ∼ f(x), and K(s) ∼ K(x). Now, since

    L{K ∗ u} = K(s)U(s),

applying the Laplace transform to the integral equation results in

    U(s) = F(s) + λK(s)U(s)
    U(s) = F(s)/(1 − λK(s)),  λK(s) ≠ 1,

and applying the inverse Laplace transform:

    u(x) = L^{−1}{ F(s)/(1 − λK(s)) },  λK(s) ≠ 1.
Example: Use the Laplace transform method to solve eq. (11):

    u(x) = f(x) + λ ∫_0^x e^{x−t} u(t) dt.

Let K(s) = L{e^x} = 1/(s − 1), and

    U(s) = F(s) / (1 − λ/(s − 1))
         = F(s) (s − 1)/(s − 1 − λ)
         = F(s) (s − 1 − λ + λ)/(s − 1 − λ)
         = F(s) + λ F(s)/(s − 1 − λ)
         = F(s) + λ F(s)/(s − (λ + 1)).

The solution u(x) is the inverse Laplace transform of U(s):

    u(x) = L^{−1}{ F(s) + λ F(s)/(s − (λ + 1)) }
         = L^{−1}{F(s)} + λ L^{−1}{ F(s)/(s − (λ + 1)) }
         = f(x) + λ L^{−1}{ (1/(s − (λ + 1))) F(s) }
         = f(x) + λ e^{(λ+1)x} ∗ f(x)
         = f(x) + λ ∫_0^x e^{(λ+1)(x−t)} f(t) dt

which is the same result obtained using the Neumann series method.
Example: Use the Laplace transform method to solve eq. (12):

    u(x) = x − ∫_0^x (x − t)u(t) dt.

Let K(s) = L{x} = 1/s^2, and

    U(s) = 1/s^2 − (1/s^2) U(s)

so that

    U(s) = (1/s^2) / (1 + 1/s^2) = 1/(s^2 + 1).

The solution u(x) is the inverse Laplace transform of U(s):

    u(x) = L^{−1}{ 1/(s^2 + 1) } = sin x

which is the same result obtained using the successive approximations method.
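The transform algebra above can be double-checked mechanically; a SymPy sketch (symbol names ours):

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)

# For eq. (12): K(s) = L{x} = 1/s^2, so U = (1/s^2)/(1 + 1/s^2)
U = (1/s**2) / (1 + 1/s**2)
assert sp.simplify(U - 1/(s**2 + 1)) == 0

# The forward transform confirms that u(x) = sin x has transform 1/(s^2 + 1)
F_sin = sp.laplace_transform(sp.sin(x), x, s)[0]
assert sp.simplify(F_sin - 1/(s**2 + 1)) == 0
print("u(x) = sin x")
```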
5.2.4 Numerical Solutions

Given a 2nd kind Volterra equation

    u(x) = f(x) + ∫_a^x K(x,t)u(t) dt

divide the interval of integration (a, x) into n equal subintervals, Δt = (x_n − a)/n, n ≥ 1, where x_n = x. Let t_0 = a, x_0 = t_0 = a, x_n = t_n = x, t_j = a + jΔt = t_0 + jΔt, x_i = x_0 + iΔt = a + iΔt = t_i.

Using the trapezoid rule,

    ∫_a^x K(x,t)u(t) dt ≈ Δt [ (1/2)K(x,t_0)u(t_0) + K(x,t_1)u(t_1) + · · · + K(x,t_{n−1})u(t_{n−1}) + (1/2)K(x,t_n)u(t_n) ]

where Δt = (t_j − a)/j = (x − a)/n, t_j ≤ x, j ≥ 1, x = x_n = t_n.

Now the equation becomes

    u(x) = f(x) + Δt [ (1/2)K(x,t_0)u(t_0) + K(x,t_1)u(t_1) + · · · + K(x,t_{n−1})u(t_{n−1}) + (1/2)K(x,t_n)u(t_n) ],  t_j ≤ x, j ≥ 1, x = x_n = t_n.

∵ K(x,t) ≡ 0 when t > x (the integration ends at t = x), ∴ K(x_i,t_j) = 0 for t_j > x_i. Numerically, the equation becomes

    u(x_i) = f(x_i) + Δt [ (1/2)K(x_i,t_0)u(t_0) + K(x_i,t_1)u(t_1) + · · · + K(x_i,t_{j−1})u(t_{j−1}) + (1/2)K(x_i,t_j)u(t_j) ],  i = 1, 2, . . . , n,  t_j ≤ x_i

where u(x_0) = f(x_0).
Denote u_i ≡ u(t_i), f_i ≡ f(x_i), K_{ij} ≡ K(x_i,t_j), so the numeric equation can be written in condensed form as

    u_0 = f_0,  u_i = f_i + Δt [ (1/2)K_{i0}u_0 + K_{i1}u_1 + · · · + K_{i(j−1)}u_{j−1} + (1/2)K_{ij}u_j ],  i = 1, 2, . . . , n,  j ≤ i

∴ there are n + 1 equations

    u_0 = f_0
    u_1 = f_1 + Δt [ (1/2)K_{10}u_0 + (1/2)K_{11}u_1 ]
    u_2 = f_2 + Δt [ (1/2)K_{20}u_0 + K_{21}u_1 + (1/2)K_{22}u_2 ]
    ...
    u_n = f_n + Δt [ (1/2)K_{n0}u_0 + K_{n1}u_1 + · · · + K_{n(n−1)}u_{n−1} + (1/2)K_{nn}u_n ]

where a general relation can be rewritten as

    u_i = ( f_i + Δt [ (1/2)K_{i0}u_0 + K_{i1}u_1 + · · · + K_{i(i−1)}u_{i−1} ] ) / ( 1 − (Δt/2) K_{ii} )

and can be evaluated by substituting u_0, u_1, . . . , u_{i−1} recursively from previous calculations.
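The recursion translates directly into code. A Python sketch applied to eq. (12), where f(x) = x and the kernel, written in the form u = f + ∫ K u, is K(x,t) = −(x − t) (function names are our own):

```python
import math

def solve_volterra2(f, K, a, x_end, n):
    # Trapezoid-rule recursion:
    # u_i = (f_i + Δt[(1/2)K_i0 u_0 + Σ_{j=1}^{i-1} K_ij u_j]) / (1 - Δt/2 K_ii)
    dt = (x_end - a) / n
    xs = [a + i * dt for i in range(n + 1)]
    u = [f(xs[0])]
    for i in range(1, n + 1):
        acc = 0.5 * K(xs[i], xs[0]) * u[0]
        acc += sum(K(xs[i], xs[j]) * u[j] for j in range(1, i))
        u.append((f(xs[i]) + dt * acc) / (1 - dt / 2 * K(xs[i], xs[i])))
    return xs, u

# Eq. (12): u(x) = x - ∫_0^x (x - t) u(t) dt, exact solution sin x
xs, u = solve_volterra2(lambda x: x, lambda x, t: -(x - t), 0.0, 2.0, 200)
err = max(abs(ui - math.sin(xi)) for xi, ui in zip(xs, u))
print(f"max error vs sin x: {err:.2e}")
```

The error decreases like Δt^2, as expected from the trapezoid rule.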
5.3 Solving 1st-Kind Volterra Equations

5.3.1 Reduction to 2nd-Kind

    f(x) = λ ∫_0^x K(x,t)u(t) dt

If K(x,x) ≠ 0, then differentiating both sides with respect to x gives:

    df/dx = λ ∫_0^x (∂K(x,t)/∂x) u(t) dt + λK(x,x)u(x)

    ⇒ u(x) = (1/(λK(x,x))) df/dx − ∫_0^x (1/K(x,x)) (∂K(x,t)/∂x) u(t) dt

Let g(x) = (1/(λK(x,x))) df/dx and H(x,t) = −(1/K(x,x)) ∂K(x,t)/∂x.

    ∴ u(x) = g(x) + ∫_0^x H(x,t)u(t) dt

This is a Volterra equation of the 2nd kind.
Example: Reduce the following Volterra equation of the 1st kind to an equation of the 2nd kind and solve it:

    sin x = ∫_0^x e^{x−t} u(t) dt.        (13)

In this case, λ = 1, K(x,t) = e^{x−t}, and f(x) = sin x. Since K(x,x) = 1 ≠ 0, eq. (13) becomes:

    u(x) = cos x − ∫_0^x e^{x−t} u(t) dt.

This is a special case of eq. (11) with f(x) = cos x and λ = −1:

    u(x) = cos x − ∫_0^x cos t dt
         = cos x − [sin t]_0^x
         = cos x − sin x.
5.3.2 Laplace Transform

If K(x,t) = K(x − t), then a Laplace transform may be used to solve the equation. Define the Laplace transform pairs U(s) ∼ u(x), F(s) ∼ f(x), and K(s) ∼ K(x). Now, since

    L{K ∗ u} = K(s)U(s),

applying the Laplace transform to the integral equation results in

    F(s) = λK(s)U(s)
    U(s) = F(s)/(λK(s)),  λK(s) ≠ 0,

and applying the inverse Laplace transform:

    u(x) = (1/λ) L^{−1}{ F(s)/K(s) },  λK(s) ≠ 0.
Example: Use the Laplace transform method to solve eq. (13):

    sin x = λ ∫_0^x e^{x−t} u(t) dt.

Let K(s) = L{e^x} = 1/(s − 1) and substitute this and L{sin x} = 1/(s^2 + 1) to get

    1/(s^2 + 1) = λ (1/(s − 1)) U(s).

Solve for U(s):

    U(s) = (1/λ) (1/(s^2 + 1)) / (1/(s − 1))
         = (1/λ) (s − 1)/(s^2 + 1)
         = (1/λ) ( s/(s^2 + 1) − 1/(s^2 + 1) ).

Use the inverse transform to find u(x):

    u(x) = (1/λ) L^{−1}{ s/(s^2 + 1) − 1/(s^2 + 1) }
         = (1/λ) (cos x − sin x)

which is the same result found using the reduction to 2nd kind method when λ = 1.
Example: Abel's equation (2) formulated as an integral equation is

    −√(2g) f(x) = ∫_0^x φ(t)/√(x − t) dt.

This describes the shape φ(y) of a wire in a vertical plane along which a bead must descend under the influence of gravity a distance y_0 in a predetermined time f(y_0).

To solve Abel's equation, let F(s) be the Laplace transform of f(x) and Φ(s) be the Laplace transform of φ(x). Also let K(s) = L{1/√x} = √(π/s) (∵ Γ(1/2) = √π). The Laplace transform of eq. (2) is

    −√(2g) F(s) = √(π/s) Φ(s)
or, solving for Φ(s):

    Φ(s) = −√(2g/π) √s F(s).

So φ(x) is given by:

    φ(x) = −√(2g/π) L^{−1}{ √s F(s) }.

Since L^{−1}{√s} does not exist, the convolution theorem cannot be used directly to solve for φ(x). However, by introducing H(s) ≡ F(s)/√s the equation may be rewritten as:

    φ(x) = −√(2g/π) L^{−1}{ s H(s) }.

Now

    h(x) = L^{−1}{H(s)}
         = L^{−1}{ (1/√s) F(s) }
         = (1/√π) ∫_0^x f(t)/√(x − t) dt

and since h(0) = 0,

    dh/dx = L^{−1}{ sH(s) − h(0) }
          = L^{−1}{ sH(s) }.

Finally, use dh/dx to find φ(x) using the inverse Laplace transform:

    φ(x) = −√(2g/π) d/dx [ (1/√π) ∫_0^x f(t)/√(x − t) dt ]
         = −(√(2g)/π) d/dx ∫_0^x f(t)/√(x − t) dt

which is the solution to Abel's equation (2).
6 Fredholm Equations

Fredholm integral equations are a type of linear integral equation (1) with b(x) = b:

    h(x)u(x) = f(x) + ∫_a^b K(x,t)u(t) dt        (14)

The equation is said to be of the first kind when h(x) = 0,

    −f(x) = ∫_a^b K(x,t)u(t) dt        (15)

and of the second kind when h(x) = 1,

    u(x) = f(x) + ∫_a^b K(x,t)u(t) dt        (16)
6.1 Relation to Boundary Value Problems

A general boundary value problem is given by:

    d^2y/dx^2 + A(x) dy/dx + B(x)y = g(x)        (17)
    y(a) = c_1,  y(b) = c_2

    ∴ y(x) = f(x) + ∫_a^b K(x,t)y(t) dt

where

    f(x) = c_1 + ∫_a^x (x − t)g(t) dt + ((x − a)/(b − a)) [ c_2 − c_1 − ∫_a^b (b − t)g(t) dt ]

and

    K(x,t) = { ((x − b)/(b − a)) [ A(t) − (a − t)(A′(t) − B(t)) ]   if x > t
             { ((x − a)/(b − a)) [ A(t) − (b − t)(A′(t) − B(t)) ]   if x < t
6.2 Fredholm Equations of the 2nd Kind With Separable Kernels

A separable kernel, also called a degenerate kernel, is a kernel of the form

    K(x,t) = Σ_{k=1}^n a_k(x) b_k(t)        (18)

which is a finite sum of products a_k(x) b_k(t), where a_k(x) is a function of x only and b_k(t) is a function of t only.
6.2.1 Nonhomogeneous Equations

Given the kernel in eq. (18), the general nonhomogeneous second kind Fredholm equation,

    u(x) = f(x) + λ ∫_a^b K(x,t)u(t) dt        (19)

becomes

    u(x) = f(x) + λ ∫_a^b Σ_{k=1}^n a_k(x)b_k(t)u(t) dt
         = f(x) + λ Σ_{k=1}^n a_k(x) ∫_a^b b_k(t)u(t) dt

Define c_k as

    c_k = ∫_a^b b_k(t)u(t) dt

(i.e. the integral portion). Multiply both sides by b_m(x), m = 1, 2, . . . , n:

    b_m(x)u(x) = b_m(x)f(x) + λ Σ_{k=1}^n c_k b_m(x)a_k(x)

then integrate both sides from a to b:

    ∫_a^b b_m(x)u(x) dx = ∫_a^b b_m(x)f(x) dx + λ Σ_{k=1}^n c_k ∫_a^b b_m(x)a_k(x) dx

Define f_m and a_mk as

    f_m = ∫_a^b b_m(x)f(x) dx
    a_mk = ∫_a^b b_m(x)a_k(x) dx

so finally the equation becomes

    c_m = f_m + λ Σ_{k=1}^n a_mk c_k,  m = 1, 2, . . . , n.

That is, it is a nonhomogeneous system of n linear equations in c_1, c_2, . . . , c_n. f_m and a_mk are known since b_m(x), f(x), and a_k(x) are all given.
This system can be solved by finding all c_m, m = 1, 2, . . . , n, and substituting into u(x) = f(x) + λ Σ_{k=1}^n c_k a_k(x) to find u(x). In matrix notation, let

    C = [c_1, c_2, . . . , c_n]^T,  F = [f_1, f_2, . . . , f_n]^T,  A = (a_mk),  m, k = 1, . . . , n

    ∴ C = F + λAC
      C − λAC = F
      (I − λA)C = F

which has a unique solution if the determinant |I − λA| ≠ 0, and either no solution or infinitely many solutions when |I − λA| = 0.
Example: Use the above method to solve the Fredholm integral equation

    u(x) = x + λ ∫_0^1 (xt^2 + x^2 t)u(t) dt.        (20)

The kernel is K(x,t) = xt^2 + x^2 t = Σ_{k=1}^2 a_k(x)b_k(t), where a_1(x) = x, a_2(x) = x^2, b_1(t) = t^2, and b_2(t) = t. Given that f(x) = x, define f_1 and f_2:

    f_1 = ∫_0^1 b_1(t)f(t) dt = ∫_0^1 t^3 dt = 1/4
    f_2 = ∫_0^1 b_2(t)f(t) dt = ∫_0^1 t^2 dt = 1/3

so that the matrix F is

    F = [1/4, 1/3]^T.

Now find the elements of the matrix A:

    a_11 = ∫_0^1 b_1(t)a_1(t) dt = ∫_0^1 t^3 dt = 1/4
    a_12 = ∫_0^1 b_1(t)a_2(t) dt = ∫_0^1 t^4 dt = 1/5
    a_21 = ∫_0^1 b_2(t)a_1(t) dt = ∫_0^1 t^2 dt = 1/3
    a_22 = ∫_0^1 b_2(t)a_2(t) dt = ∫_0^1 t^3 dt = 1/4

so that A is

    A = [ 1/4  1/5 ]
        [ 1/3  1/4 ].

Therefore, C = F + λAC becomes

    [c_1]   [1/4]       [ 1/4  1/5 ] [c_1]
    [c_2] = [1/3] + λ [ 1/3  1/4 ] [c_2],

or:

    [ 1 − λ/4    −λ/5   ] [c_1]   [1/4]
    [ −λ/3     1 − λ/4  ] [c_2] = [1/3].

If the determinant of the leftmost term is not equal to zero, then there is a unique solution for c_1 and c_2 that can be evaluated by finding the inverse of that term. That is, if

    | 1 − λ/4    −λ/5   |
    | −λ/3     1 − λ/4  | ≠ 0

    (1 − λ/4)^2 − λ^2/15 ≠ 0
    (240 − 120λ − λ^2)/240 ≠ 0
    240 − 120λ − λ^2 ≠ 0.

If this is true, then c_1 and c_2 can be found as:

    c_1 = (60 + λ)/(240 − 120λ − λ^2)
    c_2 = 80/(240 − 120λ − λ^2),

then u(x) is found from

    u(x) = f(x) + λ Σ_{k=1}^n c_k a_k(x)
         = x + λ Σ_{k=1}^2 c_k a_k(x)
         = x + λ [ c_1 a_1(x) + c_2 a_2(x) ]
         = x + λ [ (60 + λ)x/(240 − 120λ − λ^2) + 80x^2/(240 − 120λ − λ^2) ]
         = ( (240 − 60λ)x + 80λx^2 )/(240 − 120λ − λ^2),  240 − 120λ − λ^2 ≠ 0.
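The 2×2 system of this example is easy to verify numerically; a NumPy sketch (λ = 1 chosen arbitrarily, names ours):

```python
import numpy as np

lam = 1.0
A = np.array([[1/4, 1/5],
              [1/3, 1/4]])
F = np.array([1/4, 1/3])

# Solve (I - λA)C = F for the coefficients c1, c2
C = np.linalg.solve(np.eye(2) - lam * A, F)
D = 240 - 120 * lam - lam**2            # denominator; 119 for λ = 1
assert np.allclose(C, [(60 + lam) / D, 80 / D])

# u(x) = x + λ(c1 x + c2 x^2); spot-check the fixed point by trapezoid quadrature
u = lambda x: x + lam * (C[0] * x + C[1] * x**2)
t = np.linspace(0, 1, 2001)
dx = t[1] - t[0]
for xv in (0.3, 0.7):
    integrand = (xv * t**2 + xv**2 * t) * u(t)
    integral = dx * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
    assert abs(u(xv) - (xv + lam * integral)) < 1e-6
print("C =", C)
```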
6.2.2 Homogeneous Equations

Given the kernel defined in eq. (18), the general homogeneous second kind Fredholm equation,

    u(x) = λ ∫_a^b K(x,t)u(t) dt        (21)

becomes

    u(x) = λ ∫_a^b Σ_{k=1}^n a_k(x)b_k(t)u(t) dt
         = λ Σ_{k=1}^n a_k(x) ∫_a^b b_k(t)u(t) dt

Define c_k, multiply and integrate, then define a_mk as in the previous section, so finally the equation becomes

    c_m = λ Σ_{k=1}^n a_mk c_k,  m = 1, 2, . . . , n.

That is, it is a homogeneous system of n linear equations in c_1, c_2, . . . , c_n. Again, the system can be solved by finding all c_m, m = 1, 2, . . . , n, and substituting into u(x) = λ Σ_{k=1}^n c_k a_k(x) to find u(x). Using the same matrices defined earlier, the equation can be written as

    C = λAC
    C − λAC = 0
    (I − λA)C = 0.

Because this is a homogeneous system of equations, when the determinant |I − λA| ≠ 0 the trivial solution C = 0 is the only solution, and the solution to the integral equation is u(x) = 0. Otherwise, if |I − λA| = 0, there are infinitely many solutions C; the values of λ for which this happens are the eigenvalues, and the corresponding eigenfunctions u_1(x), u_2(x), . . . , u_n(x) are nontrivial solutions of the Fredholm equation.
Example: Use the above method to solve the Fredholm integral equation

    u(x) = λ ∫_0^π (cos^2 x cos 2t + cos 3x cos^3 t)u(t) dt.        (22)

The kernel is K(x,t) = cos^2 x cos 2t + cos 3x cos^3 t = Σ_{k=1}^2 a_k(x)b_k(t), where a_1(x) = cos^2 x, a_2(x) = cos 3x, b_1(t) = cos 2t, and b_2(t) = cos^3 t. Now find the elements of the matrix A:

    a_11 = ∫_0^π b_1(t)a_1(t) dt = ∫_0^π cos 2t cos^2 t dt = π/4
    a_12 = ∫_0^π b_1(t)a_2(t) dt = ∫_0^π cos 2t cos 3t dt = 0
    a_21 = ∫_0^π b_2(t)a_1(t) dt = ∫_0^π cos^3 t cos^2 t dt = 0
    a_22 = ∫_0^π b_2(t)a_2(t) dt = ∫_0^π cos^3 t cos 3t dt = π/8

and follow the method of the previous example to find c_1 and c_2:

    [ 1 − λπ/4      0      ] [c_1]   [0]
    [    0      1 − λπ/8   ] [c_2] = [0]

that is,

    (1 − λπ/4) c_1 = 0
    (1 − λπ/8) c_2 = 0

For c_1 and c_2 to not be the trivial solution, the determinant must be zero. That is,

    | 1 − λπ/4      0      |
    |    0      1 − λπ/8   | = (1 − λπ/4)(1 − λπ/8) = 0,

which has solutions λ_1 = 4/π and λ_2 = 8/π; these are the eigenvalues of eq. (22). There are two corresponding eigenfunctions u_1(x) and u_2(x) that are solutions to eq. (22). For λ_1 = 4/π, c_1 = c_1 (i.e. it is a parameter), and c_2 = 0. Therefore u_1(x) is

    u_1(x) = (4/π) c_1 cos^2 x.

To form a specific solution (which can be used to find a general solution, since eq. (22) is linear), let c_1 = π/4, so that

    u_1(x) = cos^2 x.

Similarly, for λ_2 = 8/π, c_1 = 0 and c_2 = c_2, so u_2(x) is

    u_2(x) = (8/π) c_2 cos 3x.

To form a specific solution, let c_2 = π/8, so that

    u_2(x) = cos 3x.
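Both eigenpairs can be verified symbolically; a SymPy sketch (symbol names ours):

```python
import sympy as sp

x, t = sp.symbols('x t')
K = sp.cos(x)**2 * sp.cos(2*t) + sp.cos(3*x) * sp.cos(t)**3

# u1(x) = cos^2 x should satisfy u = λ ∫_0^π K u dt with λ1 = 4/π
u1 = sp.cos(x)**2
lhs1 = 4 / sp.pi * sp.integrate(K * u1.subs(x, t), (t, 0, sp.pi))
assert sp.simplify(lhs1 - u1) == 0

# u2(x) = cos 3x should satisfy it with λ2 = 8/π
u2 = sp.cos(3*x)
lhs2 = 8 / sp.pi * sp.integrate(K * u2.subs(x, t), (t, 0, sp.pi))
assert sp.simplify(lhs2 - u2) == 0
print("eigenfunctions verified")
```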
6.2.3 Fredholm Alternative

The homogeneous Fredholm equation (21),

    u(x) = λ ∫_a^b K(x,t)u(t) dt

has the corresponding nonhomogeneous equation (19),

    u(x) = f(x) + λ ∫_a^b K(x,t)u(t) dt.

If the homogeneous equation has only the trivial solution u(x) = 0, then the nonhomogeneous equation has exactly one solution. Otherwise, if the homogeneous equation has nontrivial solutions, then the nonhomogeneous equation has either no solutions or infinitely many solutions, depending on the term f(x). If f(x) is orthogonal to every solution u_i(x) of the homogeneous equation, then the associated nonhomogeneous equation will have a solution. The two functions f(x) and u_i(x) are orthogonal if ∫_a^b f(x)u_i(x) dx = 0.
6.2.4 Approximate Kernels

Often a nonseparable kernel may have a Taylor or other series expansion that is separable. This approximate kernel is denoted by M(x,t) ≈ K(x,t) and the corresponding approximate solution for u(x) is v(x). So the approximate solution to eq. (19) is given by

    v(x) = f(x) + λ ∫_a^b M(x,t)v(t) dt.

The error involved is ε = |u(x) − v(x)| and can be estimated in some cases.

Example: Use an approximating kernel to find an approximate solution to

    u(x) = sin x + ∫_0^1 (1 − x cos xt)u(t) dt.        (23)

The kernel K(x,t) = 1 − x cos xt is not separable, but a finite number of terms from its Maclaurin series expansion

    1 − x( 1 − x^2 t^2/2! + x^4 t^4/4! − · · · ) = 1 − x + x^3 t^2/2! − x^5 t^4/4! + · · ·

is separable in x and t. Therefore consider the first three terms of this series as the approximate kernel

    M(x,t) = 1 − x + x^3 t^2/2!.
The associated approximate integral equation in v(x) is

    v(x) = sin x + ∫_0^1 ( 1 − x + x^3 t^2/2! ) v(t) dt.        (24)

Now use the previous method for solving Fredholm equations with separable kernels to solve eq. (24):

    M(x,t) = Σ_{k=1}^2 a_k(x)b_k(t)

where a_1(x) = 1 − x, a_2(x) = x^3, b_1(t) = 1, b_2(t) = t^2/2. Given that f(x) = sin x, find f_1 and f_2:

    f_1 = ∫_0^1 b_1(t)f(t) dt = ∫_0^1 sin t dt = 1 − cos 1
    f_2 = ∫_0^1 b_2(t)f(t) dt = ∫_0^1 (t^2/2) sin t dt = (1/2) cos 1 + sin 1 − 1.

Now find the elements of the matrix A:

    a_11 = ∫_0^1 b_1(t)a_1(t) dt = ∫_0^1 (1 − t) dt = 1/2
    a_12 = ∫_0^1 b_1(t)a_2(t) dt = ∫_0^1 t^3 dt = 1/4
    a_21 = ∫_0^1 b_2(t)a_1(t) dt = ∫_0^1 (t^2/2)(1 − t) dt = 1/24
    a_22 = ∫_0^1 b_2(t)a_2(t) dt = ∫_0^1 t^5/2 dt = 1/12.
Therefore, C = F + λAC (here λ = 1) becomes

    [c_1]   [       1 − cos 1        ]   [ 1/2   1/4  ] [c_1]
    [c_2] = [ (1/2)cos 1 + sin 1 − 1 ] + [ 1/24  1/12 ] [c_2]

or

    [ 1 − 1/2     −1/4    ] [c_1]   [       1 − cos 1        ]
    [ −1/24     1 − 1/12  ] [c_2] = [ (1/2)cos 1 + sin 1 − 1 ].
The determinant is

    | 1 − 1/2     −1/4    |
    | −1/24     1 − 1/12  | = (1/2)(11/12) − (1/4)(1/24) = 43/96

which is non-zero, so there is a unique solution for C:

    c_1 = −(76/43) cos 1 + 64/43 + (24/43) sin 1 ≈ 1.0031
    c_2 = (20/43) cos 1 − 44/43 + (48/43) sin 1 ≈ 0.1674.

Therefore, the solution to eq. (24) is

    v(x) = sin x + c_1 a_1(x) + c_2 a_2(x)
         ≈ sin x + 1.0031(1 − x) + 0.1674 x^3

which is an approximate solution to eq. (23). The exact solution is known to be u(x) = 1, so comparing some values of v(x):

    x      0       0.25    0.5     0.75    1
    u(x)   1       1       1       1       1
    v(x)   1.0031  1.0023  1.0019  1.0030  1.0088
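The numbers in this comparison can be reproduced with a short script; a sketch (names ours):

```python
import math
import numpy as np

# Data from the separable approximation M(x,t) = 1 - x + x^3 t^2/2
A = np.array([[1/2,  1/4],
              [1/24, 1/12]])
F = np.array([1 - math.cos(1), 0.5 * math.cos(1) + math.sin(1) - 1])

# Solve (I - A)C = F (λ = 1 here)
c1, c2 = np.linalg.solve(np.eye(2) - A, F)
v = lambda x: math.sin(x) + c1 * (1 - x) + c2 * x**3

for xv in (0, 0.25, 0.5, 0.75, 1):
    print(f"v({xv}) = {v(xv):.4f}")   # stays within about 1% of the exact u(x) = 1
```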
6.3 Fredholm Equations of the 2nd Kind With Symmetric Kernels

A symmetric kernel is a kernel that satisfies

    K(x,t) = K(t,x).

If the kernel is a complex-valued function, then to be symmetric it must satisfy K(x,t) = K̄(t,x), where K̄ denotes the complex conjugate of K.

These types of integral equations may be solved by using a resolvent kernel Γ(x,t;λ) that can be found from the orthonormal eigenfunctions of the homogeneous form of the equation with a symmetric kernel.

6.3.1 Homogeneous Equations

The eigenfunctions of a homogeneous Fredholm equation with a symmetric kernel are functions u_n(x) that satisfy

    u_n(x) = λ_n ∫_a^b K(x,t)u_n(t) dt.

It can be shown that the eigenvalues of a symmetric kernel are real, and that the eigenfunctions u_n(x) and u_m(x) corresponding to two of the eigenvalues λ_n and λ_m, where λ_n ≠ λ_m, are orthogonal (see the earlier section on the Fredholm Alternative). It can also be shown that there is a finite number of eigenfunctions corresponding to each eigenvalue of a kernel if the kernel is square integrable on {(x,t) | a ≤ x ≤ b, a ≤ t ≤ b}, that is:

    ∫_a^b ∫_a^b K^2(x,t) dx dt = B^2 < ∞.
Once the orthonormal eigenfunctions φ_1(x), φ_2(x), . . . of the homogeneous form of the equation are found, the resolvent kernel can be expressed as an infinite series:

    Γ(x,t;λ) = Σ_{k=1}^∞ φ_k(x)φ_k(t)/(λ_k − λ),  λ ≠ λ_k

so the solution of the 2nd-kind Fredholm equation is

    u(x) = f(x) + λ Σ_{k=1}^∞ a_k φ_k(x)

where

    a_k = (1/(λ_k − λ)) ∫_a^b f(x)φ_k(x) dx.
6.4 Fredholm Equations of the 2nd Kind With General Kernels

6.4.1 Fredholm Resolvent Kernel

Evaluate the Fredholm resolvent kernel Γ(x,t;λ) in

    u(x) = f(x) + λ ∫_a^b Γ(x,t;λ)f(t) dt        (25)

    Γ(x,t;λ) = D(x,t;λ)/D(λ)
where D(x,t;λ) is called the Fredholm minor and D(λ) is called the Fredholm determinant. The minor is defined as

    D(x,t;λ) = K(x,t) + Σ_{n=1}^∞ ((−λ)^n / n!) B_n(x,t)

where

    B_n(x,t) = C_n K(x,t) − n ∫_a^b K(x,s)B_{n−1}(s,t) ds,  B_0(x,t) = K(x,t)

and

    C_n = ∫_a^b B_{n−1}(t,t) dt,  n = 1, 2, . . . ,  C_0 = 1,

and the determinant is defined as

    D(λ) = Σ_{n=0}^∞ ((−λ)^n / n!) C_n,  C_0 = 1.

B_n(x,t) and C_n may each be expressed as a repeated integral where the integrand is a determinant:

    B_n(x,t) = ∫_a^b ∫_a^b · · · ∫_a^b | K(x,t)     K(x,t_1)    · · ·  K(x,t_n)   |
                                       | K(t_1,t)   K(t_1,t_1)  · · ·  K(t_1,t_n) |  dt_1 dt_2 . . . dt_n
                                       |    ...        ...      . . .     ...     |
                                       | K(t_n,t)   K(t_n,t_1)  · · ·  K(t_n,t_n) |

    C_n = ∫_a^b ∫_a^b · · · ∫_a^b | K(t_1,t_1)  K(t_1,t_2)  · · ·  K(t_1,t_n) |
                                  | K(t_2,t_1)  K(t_2,t_2)  · · ·  K(t_2,t_n) |  dt_1 dt_2 . . . dt_n
                                  |    ...         ...      . . .     ...     |
                                  | K(t_n,t_1)  K(t_n,t_2)  · · ·  K(t_n,t_n) |
6.4.2 Iterated Kernels

Define the iterated kernel K_i(x,t) as

    K_i(x,y) ≡ ∫_a^b K(x,t)K_{i−1}(t,y) dt,  K_1(x,t) ≡ K(x,t)

so

    φ_i(x) ≡ ∫_a^b K_i(x,y)f(y) dy

and

    u_n(x) = f(x) + λφ_1(x) + λ^2 φ_2(x) + · · · + λ^n φ_n(x)
           = f(x) + Σ_{i=1}^n λ^i φ_i(x)

u_n(x) converges to u(x) when |λB| < 1, where

    B = √( ∫_a^b ∫_a^b K^2(x,t) dx dt ).
6.4.3 Neumann Series

The Neumann series

    u(x) = f(x) + Σ_{i=1}^∞ λ^i φ_i(x)

is convergent, and substituting for φ_i(x) as in the previous section it can be rewritten as

    u(x) = f(x) + Σ_{i=1}^∞ λ^i ∫_a^b K_i(x,t)f(t) dt
         = f(x) + ∫_a^b [ Σ_{i=1}^∞ λ^i K_i(x,t) ] f(t) dt
         = f(x) + λ ∫_a^b Γ(x,t;λ)f(t) dt.

Therefore the Neumann resolvent kernel for the general nonhomogeneous Fredholm integral equation of the second kind is

    Γ(x,t;λ) = Σ_{i=1}^∞ λ^{i−1} K_i(x,t).
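For small |λ| the Neumann iteration can also be carried out numerically on a grid. A sketch for the separable test problem u(x) = x + λ∫_0^1 xt u(t) dt (our own illustrative example, not from the text), whose exact solution for λ = 1/2 is u(x) = 1.2x by the separable-kernel method:

```python
import numpy as np

lam = 0.5
n = 400
xs = np.linspace(0, 1, n + 1)
w = np.full(n + 1, 1.0 / n); w[0] /= 2; w[-1] /= 2   # trapezoid weights

Kmat = np.outer(xs, xs)        # K(x,t) = x t
f = xs.copy()

# Neumann iteration u_{m+1} = f + λ ∫ K u_m; here λB = 1/6 < 1, so it converges
u = f.copy()
for _ in range(50):
    u = f + lam * Kmat @ (w * u)

err = np.max(np.abs(u - 1.2 * xs))
print(f"max deviation from exact 1.2x: {err:.2e}")
```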
6.5
Approximate Solutions To Fredholm Equations of the 2nd Kind
The solution of the general nonhomogeneous Fredholm integral equation of the second kind eq.
(19), u(x), can be approximated by the partial sum
SN (x) =
N
X
k=1
31
ck φk (x)
(26)
of N linearly independent functions φ1 , φ2 , . . . , φN on the interval (a, b). The associated error
ε(x, c1 , c2 , . . . , cN ) depends on x and the choice of the coefficients c1 , c2 , . . . , cN . Therefore, when
substituting the approximate solution for u(x), the equation becomes
S_N(x) = f(x) + \int_a^b K(x, t) S_N(t)\, dt + ε(x, c_1, c_2, \ldots, c_N).    (27)
Now N conditions must be found to give the N equations that will determine the coefficients
c_1, c_2, \ldots, c_N.
6.5.1
Collocation Method
Assume that the error term ε vanishes at the N points x_1, x_2, \ldots, x_N. This reduces the equation to the N equations:
S_N(x_i) = f(x_i) + \int_a^b K(x_i, t) S_N(t)\, dt, \quad i = 1, 2, \ldots, N,
where the coefficients of S_N are found by substituting the N linearly independent functions φ_1, φ_2, \ldots, φ_N and the points x_1, x_2, \ldots, x_N at which the error vanishes, performing the integration, and solving the resulting system for the coefficients.
Example: Use the Collocation method to solve the Fredholm integral equation of the
second kind
u(x) = x + \int_{−1}^{1} x t\, u(t)\, dt.    (28)
Set N = 3 and choose the linearly independent functions φ_1(x) = 1, φ_2(x) = x, and φ_3(x) = x^2.
Therefore the approximate solution is
S_3(x) = \sum_{k=1}^{3} c_k φ_k(x) = c_1 + c_2 x + c_3 x^2.
Substituting into eq. (27) gives

S_3(x) = x + \int_{−1}^{1} x t (c_1 + c_2 t + c_3 t^2)\, dt + ε(x, c_1, c_2, c_3)
       = x + x \int_{−1}^{1} (c_1 t + c_2 t^2 + c_3 t^3)\, dt + ε(x, c_1, c_2, c_3)
and performing the integration results in

\int_{−1}^{1} (c_1 t + c_2 t^2 + c_3 t^3)\, dt = \left[ \frac{c_1 t^2}{2} + \frac{c_2 t^3}{3} + \frac{c_3 t^4}{4} \right]_{−1}^{1}
= \left( \frac{c_1}{2} + \frac{c_2}{3} + \frac{c_3}{4} \right) − \left( \frac{c_1}{2} − \frac{c_2}{3} + \frac{c_3}{4} \right)
= \frac{c_1 − c_1}{2} + \frac{c_2 + c_2}{3} + \frac{c_3 − c_3}{4}
= \frac{2}{3} c_2,
so eq. (27) becomes

c_1 + c_2 x + c_3 x^2 = x + \frac{2}{3} c_2 x + ε(x, c_1, c_2, c_3) = x \left( 1 + \frac{2}{3} c_2 \right) + ε(x, c_1, c_2, c_3).
Now, three equations are needed to find c_1, c_2, and c_3. Require that the error term vanish at three points, arbitrarily chosen to be x_1 = 1, x_2 = 0, and x_3 = −1. This gives

c_1 + c_2 + c_3 = 1 + \frac{2}{3} c_2  ⟹  c_1 + \frac{1}{3} c_2 + c_3 = 1,
c_1 + 0 + 0 = 0  ⟹  c_1 = 0,
c_1 − c_2 + c_3 = −1 − \frac{2}{3} c_2  ⟹  c_1 − \frac{1}{3} c_2 + c_3 = −1.
Solving for c_1, c_2, and c_3 gives c_1 = 0, c_2 = 3, and c_3 = 0. Therefore the approximate solution to eq. (28) is S_3(x) = 3x. In this case the exact solution is known to be u(x) = 3x, so the approximate solution coincides with the exact solution, thanks to the choice of the linearly independent functions.
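The steps above reduce to a 3 × 3 linear system; the following sketch (our own illustration of the method, using the exact moments ∫_{−1}^{1} t · t^k dt) reproduces the collocation solution:

```python
import numpy as np

# Collocation for u(x) = x + int_{-1}^{1} x*t*u(t) dt with basis {1, x, x^2}.
# The moments m_k = int_{-1}^{1} t * t^k dt are 0, 2/3, 0.
m = np.array([0.0, 2.0 / 3.0, 0.0])
xs = np.array([1.0, 0.0, -1.0])              # points where the error vanishes
# Row i encodes S3(x_i) - x_i * sum_k c_k m_k = x_i.
A = np.array([[xi**k - xi * m[k] for k in range(3)] for xi in xs])
c = np.linalg.solve(A, xs)
print(c)                                     # approximately [0, 3, 0]: S3(x) = 3x
```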
33
6.5.2
Galerkin Method
Assume that the error term ε is orthogonal to N given linearly independent functions ψ1 , ψ2 , . . . , ψN
of x on the interval (a, b). The N conditions therefore become
\int_a^b ψ_j(x)\, ε(x, c_1, c_2, \ldots, c_N)\, dx = \int_a^b ψ_j(x) \left[ S_N(x) − f(x) − \int_a^b K(x, t) S_N(t)\, dt \right] dx = 0, \quad j = 1, 2, \ldots, N
which can be rewritten as either

\int_a^b ψ_j(x) \left[ S_N(x) − \int_a^b K(x, t) S_N(t)\, dt \right] dx = \int_a^b ψ_j(x) f(x)\, dx, \quad j = 1, 2, \ldots, N

or

\int_a^b ψ_j(x) \left\{ \sum_{k=1}^{N} c_k φ_k(x) − \int_a^b K(x, t) \left[ \sum_{k=1}^{N} c_k φ_k(t) \right] dt \right\} dx = \int_a^b ψ_j(x) f(x)\, dx, \quad j = 1, 2, \ldots, N
after substituting for S_N(x) as before. In general the first set of linearly independent functions φ_j(x) is different from the second set ψ_j(x), but the same functions may be used for convenience.
Example: Use the Galerkin method to solve the Fredholm integral equation in eq. (28),
u(x) = x + \int_{−1}^{1} x t\, u(t)\, dt
using the same linearly independent functions φ_1(x) = 1, φ_2(x) = x, and φ_3(x) = x^2, giving the approximate solution

S_3(x) = c_1 + c_2 x + c_3 x^2.
Substituting into eq. (27) results in the error term

ε(x, c_1, c_2, c_3) = c_1 + c_2 x + c_3 x^2 − x − \int_{−1}^{1} x t (c_1 + c_2 t + c_3 t^2)\, dt.
The error must be orthogonal to three linearly independent functions, chosen to be ψ_1(x) = 1, ψ_2(x) = x, and ψ_3(x) = x^2:
Using the integral already computed in the collocation example,

\int_{−1}^{1} x t (c_1 + c_2 t + c_3 t^2)\, dt = \frac{2}{3} c_2 x,

the three orthogonality conditions become, for ψ_1(x) = 1,

\int_{−1}^{1} \left( c_1 + c_2 x + c_3 x^2 − x − \frac{2}{3} c_2 x \right) dx = 2 c_1 + \frac{2}{3} c_3 = 0,

for ψ_2(x) = x,

\int_{−1}^{1} x \left( c_1 + c_2 x + c_3 x^2 − x − \frac{2}{3} c_2 x \right) dx = \frac{2}{3} c_2 − \frac{2}{3} − \frac{4}{9} c_2 = \frac{2}{9} c_2 − \frac{2}{3} = 0,  so that c_2 = 3,

and for ψ_3(x) = x^2,

\int_{−1}^{1} x^2 \left( c_1 + c_2 x + c_3 x^2 − x − \frac{2}{3} c_2 x \right) dx = \frac{2}{3} c_1 + \frac{2}{5} c_3 = 0.
Solving for c_1, c_2, and c_3 gives c_1 = 0, c_2 = 3, and c_3 = 0, resulting in the approximate solution S_3(x) = 3x. This is the same result obtained using the collocation method, and it again happens to be the exact solution of eq. (28), thanks to the choice of the linearly independent functions.
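The same computation can be checked symbolically; the sketch below (our own illustration, using SymPy) forms the residual of eq. (28) for the trial solution and imposes the three orthogonality conditions:

```python
import sympy as sp

x, t = sp.symbols('x t')
c1, c2, c3 = sp.symbols('c1 c2 c3')
S = c1 + c2 * x + c3 * x**2                  # trial solution S3
# Residual of u(x) = x + int_{-1}^{1} x*t*u(t) dt for u = S3.
residual = S - x - sp.integrate(x * t * S.subs(x, t), (t, -1, 1))
# Galerkin: residual orthogonal to 1, x, x^2 on (-1, 1).
eqs = [sp.integrate(x**j * residual, (x, -1, 1)) for j in range(3)]
sol = sp.solve(eqs, [c1, c2, c3])
print(sol)                                   # c1 = 0, c2 = 3, c3 = 0
```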
6.6
Numerical Solutions To Fredholm Equations
A numerical solution to a general Fredholm integral equation can be found by approximating
the integral by a finite sum (using the trapezoid rule) and solving the resulting simultaneous
equations.
6.6.1
Nonhomogeneous Equations of the 2nd Kind
In the general Fredholm equation of the second kind in eq. (19),

u(x) = f(x) + \int_a^b K(x, t) u(t)\, dt,

the interval (a, b) can be subdivided into n equal subintervals of width ∆t = (b − a)/n. Let t_0 = a, t_j = a + j∆t = t_0 + j∆t, and, since the variable is either t or x, let x_0 = t_0 = a, x_n = t_n = b, and x_i = x_0 + i∆t (i.e. x_i = t_i). Also denote u(x_i) by u_i, f(x_i) by f_i, and K(x_i, t_j) by K_{ij}. If the trapezoid rule is used to approximate the given equation, then:
u(x) = f(x) + \int_a^b K(x, t) u(t)\, dt ≈ f(x) + ∆t \left[ \frac{1}{2} K(x, t_0) u(t_0) + K(x, t_1) u(t_1) + \cdots + K(x, t_{n−1}) u(t_{n−1}) + \frac{1}{2} K(x, t_n) u(t_n) \right]
or, more tersely:

u(x) ≈ f(x) + ∆t \left[ \frac{1}{2} K(x, t_0) u_0 + K(x, t_1) u_1 + \cdots + \frac{1}{2} K(x, t_n) u_n \right].
There are n + 1 values of u_i, as i = 0, 1, 2, \ldots, n. Therefore the equation becomes a set of n + 1 equations in u_i:

u_i = f_i + ∆t \left[ \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \cdots + K_{i(n−1)} u_{n−1} + \frac{1}{2} K_{in} u_n \right], \quad i = 0, 1, 2, \ldots, n
that give the approximate solution to u(x) at x = xi . The terms involving u may be moved to
the left side of the equations, resulting in n + 1 equations in u0 , u1 , u2 , . . . , un :
\left( 1 − \frac{∆t}{2} K_{00} \right) u_0 − ∆t K_{01} u_1 − ∆t K_{02} u_2 − \cdots − \frac{∆t}{2} K_{0n} u_n = f_0
−\frac{∆t}{2} K_{10} u_0 + (1 − ∆t K_{11}) u_1 − ∆t K_{12} u_2 − \cdots − \frac{∆t}{2} K_{1n} u_n = f_1
\vdots
−\frac{∆t}{2} K_{n0} u_0 − ∆t K_{n1} u_1 − ∆t K_{n2} u_2 − \cdots + \left( 1 − \frac{∆t}{2} K_{nn} \right) u_n = f_n.
This may also be written in matrix form:

K U = F,

where K is the matrix of coefficients

K = \begin{pmatrix}
1 − \frac{∆t}{2} K_{00} & −∆t K_{01} & \cdots & −\frac{∆t}{2} K_{0n} \\
−\frac{∆t}{2} K_{10} & 1 − ∆t K_{11} & \cdots & −\frac{∆t}{2} K_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
−\frac{∆t}{2} K_{n0} & −∆t K_{n1} & \cdots & 1 − \frac{∆t}{2} K_{nn}
\end{pmatrix},

U is the vector of solutions

U = \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_n \end{pmatrix},

and F is the vector of the nonhomogeneous part

F = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}.
Clearly there is a unique solution to this system of linear equations when |K| ≠ 0, and either infinitely many solutions or none when |K| = 0.
Example: Use the trapezoid rule to find a numerical solution to the integral equation in eq. (23):

u(x) = \sin x + \int_0^1 (1 − x \cos xt) u(t)\, dt

at x = 0, \frac{1}{2}, and 1. Choose n = 2 and ∆t = \frac{1 − 0}{2} = \frac{1}{2}, so t_i = i∆t = \frac{i}{2} = x_i. Using the trapezoid rule results in

u_i = f_i + \frac{1}{2} \left( \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \frac{1}{2} K_{i2} u_2 \right), \quad i = 0, 1, 2,

or in matrix form:

\begin{pmatrix}
1 − \frac{1}{4} K_{00} & −\frac{1}{2} K_{01} & −\frac{1}{4} K_{02} \\
−\frac{1}{4} K_{10} & 1 − \frac{1}{2} K_{11} & −\frac{1}{4} K_{12} \\
−\frac{1}{4} K_{20} & −\frac{1}{2} K_{21} & 1 − \frac{1}{4} K_{22}
\end{pmatrix}
\begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix}
=
\begin{pmatrix} \sin 0 \\ \sin \frac{1}{2} \\ \sin 1 \end{pmatrix}.

Substituting f_i = f(x_i) = \sin \frac{i}{2} and K_{ij} = K(x_i, t_j) = 1 − \frac{i}{2} \cos \frac{ij}{4} into the matrix gives

\begin{pmatrix}
1 − \frac{1}{4}(1 − 0) & −\frac{1}{2}(1 − 0) & −\frac{1}{4}(1 − 0) \\
−\frac{1}{4}\left(1 − \frac{1}{2}\right) & 1 − \frac{1}{2}\left(1 − \frac{1}{2} \cos \frac{1}{4}\right) & −\frac{1}{4}\left(1 − \frac{1}{2} \cos \frac{1}{2}\right) \\
−\frac{1}{4}(1 − 1) & −\frac{1}{2}\left(1 − \cos \frac{1}{2}\right) & 1 − \frac{1}{4}(1 − \cos 1)
\end{pmatrix}
\begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix}
=
\begin{pmatrix} \sin 0 \\ \sin \frac{1}{2} \\ \sin 1 \end{pmatrix}

or, numerically,

\begin{pmatrix}
0.75 & −0.5 & −0.25 \\
−0.125 & 0.7422 & −0.1403 \\
0 & −0.0612 & 0.8851
\end{pmatrix}
\begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix}
≈
\begin{pmatrix} 0 \\ 0.4794 \\ 0.8415 \end{pmatrix},
which can be solved to give u0 ≈ 1.0132, u1 ≈ 1.0095, and u2 ≈ 1.0205. The exact solution of
eq. (23) is known to be u(x) = 1, which compares very well with these approximate values.
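On a finer grid the same discretization becomes a one-line linear solve; a sketch (our own, using n = 40 instead of the n = 2 worked above):

```python
import numpy as np

# Trapezoid-rule (Nystrom) discretization of
# u(x) = sin(x) + int_0^1 (1 - x*cos(x*t)) u(t) dt, whose exact solution is u = 1.
n = 40
x = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n                        # trapezoid weights
K = 1.0 - x[:, None] * np.cos(x[:, None] * x[None, :])
A = np.eye(n + 1) - K * w[None, :]            # row i: u_i - sum_j w_j K_ij u_j
u = np.linalg.solve(A, np.sin(x))
print(np.max(np.abs(u - 1.0)))                # small discretization error
```

The error decreases like ∆t², consistent with the trapezoid rule's accuracy.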
6.6.2
Homogeneous Equations
The general homogeneous Fredholm equation in eq. (21),

u(x) = λ \int_a^b K(x, t) u(t)\, dt,
is solved in a similar manner to the nonhomogeneous Fredholm equation, using the trapezoid rule. With the same notation as in the previous section, the interval is divided into n equal subintervals of width ∆t, resulting in the n + 1 linear homogeneous equations
u_i = λ∆t \left[ \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \cdots + K_{i(n−1)} u_{n−1} + \frac{1}{2} K_{in} u_n \right], \quad i = 0, 1, \ldots, n.
Bringing all of the terms to the left side of the equations results in

\left( 1 − \frac{λ∆t}{2} K_{00} \right) u_0 − λ∆t K_{01} u_1 − \cdots − \frac{λ∆t}{2} K_{0n} u_n = 0
−\frac{λ∆t}{2} K_{10} u_0 + (1 − λ∆t K_{11}) u_1 − \cdots − \frac{λ∆t}{2} K_{1n} u_n = 0
\vdots
−\frac{λ∆t}{2} K_{n0} u_0 − λ∆t K_{n1} u_1 − \cdots + \left( 1 − \frac{λ∆t}{2} K_{nn} \right) u_n = 0.
Letting λ = \frac{1}{μ} and multiplying each equation by μ simplifies this system by causing μ to appear in only one term of each equation:

\left( μ − \frac{∆t}{2} K_{00} \right) u_0 − ∆t K_{01} u_1 − ∆t K_{02} u_2 − \cdots − \frac{∆t}{2} K_{0n} u_n = 0
−\frac{∆t}{2} K_{10} u_0 + (μ − ∆t K_{11}) u_1 − ∆t K_{12} u_2 − \cdots − \frac{∆t}{2} K_{1n} u_n = 0
\vdots
−\frac{∆t}{2} K_{n0} u_0 − ∆t K_{n1} u_1 − \cdots + \left( μ − \frac{∆t}{2} K_{nn} \right) u_n = 0.

Again, the equations may be written in matrix notation as

K_H U = 0,

where 0 is the zero vector (0, 0, \ldots, 0)^T,
U is the same as in the nonhomogeneous case:

U = \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_n \end{pmatrix},

and K_H is the coefficient matrix for the homogeneous equations

K_H = \begin{pmatrix}
μ − \frac{∆t}{2} K_{00} & −∆t K_{01} & \cdots & −\frac{∆t}{2} K_{0n} \\
−\frac{∆t}{2} K_{10} & μ − ∆t K_{11} & \cdots & −\frac{∆t}{2} K_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
−\frac{∆t}{2} K_{n0} & −∆t K_{n1} & \cdots & μ − \frac{∆t}{2} K_{nn}
\end{pmatrix}.
An infinite number of nontrivial solutions exists if and only if |K_H| = 0. This condition can be used to find the eigenvalues λ of the system, by finding the values μ = 1/λ that are zeros of |K_H|.
Example: Use the trapezoid rule to find a numerical solution to the homogeneous Fredholm equation

u(x) = λ \int_0^1 K(x, t) u(t)\, dt    (29)

with the symmetric kernel

K(x, t) = \begin{cases} x(1 − t), & 0 ≤ x ≤ t \\ t(1 − x), & t ≤ x ≤ 1 \end{cases}

at x = 0, x = \frac{1}{2}, and x = 1. Therefore n = 2 and ∆t = \frac{1}{2}, and

K_{ij} = K\left( \frac{i}{2}, \frac{j}{2} \right), \quad i, j = 0, 1, 2.
The resulting matrix is

K_H = \begin{pmatrix} μ & 0 & 0 \\ 0 & μ − \frac{1}{8} & 0 \\ 0 & 0 & μ \end{pmatrix}.
For the system of homogeneous equations obtained to have a nontrivial solution, the determinant must be equal to zero:
\begin{vmatrix} μ & 0 & 0 \\ 0 & μ − \frac{1}{8} & 0 \\ 0 & 0 & μ \end{vmatrix} = μ^2 \left( μ − \frac{1}{8} \right) = 0,
so μ = 0 or μ = \frac{1}{8}.
For μ = \frac{1}{8}, λ = \frac{1}{μ} = 8, and substituting into the system of equations gives u_0 = 0, u_2 = 0, and u_1 an arbitrary constant. Therefore there are two zeros, at x = 0 and x = 1, but an arbitrary value at x = \frac{1}{2}. It can be found by approximating the function u(x) as a triangular function connecting the three points (0, 0), (\frac{1}{2}, u_1), and (1, 0):

u(x) = \begin{cases} 2 u_1 x, & 0 ≤ x ≤ \frac{1}{2} \\ −2 u_1 (x − 1), & \frac{1}{2} ≤ x ≤ 1 \end{cases}
and requiring its norm to be one:

\int_0^1 u^2(x)\, dx = 1.
Substituting results in

4 u_1^2 \int_0^{1/2} x^2\, dx + 4 u_1^2 \int_{1/2}^{1} (x − 1)^2\, dx = 1
\frac{u_1^2}{6} + \frac{u_1^2}{6} = 1
\frac{u_1^2}{3} = 1
u_1 = \sqrt{3}.

So the approximate numerical values are u(0) = 0, u(\frac{1}{2}) = \sqrt{3} ≈ 1.7321, and u(1) = 0. For comparison with an exact solution, the orthonormal eigenfunction is chosen to be u_1(x) = \sqrt{2} \sin πx, which corresponds to the eigenvalue λ_1 = π^2, close to the approximate eigenvalue λ = 8. The exact solution gives u_1(0) = \sqrt{2} \sin 0 = 0, u_1(\frac{1}{2}) = \sqrt{2} \sin \frac{π}{2} ≈ 1.4142, and u_1(1) = \sqrt{2} \sin π = 0.
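The eigenvalues can also be read off from the discretized system at higher resolution; a sketch (our own illustration) that recovers λ_1 ≈ π²:

```python
import numpy as np

# Nystrom discretization of u(x) = lam * int_0^1 K(x,t) u(t) dt with the
# triangular kernel of eq. (29); mu = 1/lam are eigenvalues of K_ij * w_j.
n = 200
x = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n                        # trapezoid weights
X, T = np.meshgrid(x, x, indexing='ij')
K = np.where(X <= T, X * (1.0 - T), T * (1.0 - X))
mu = np.linalg.eigvals(K * w[None, :]).real
lam1 = 1.0 / mu.max()                         # largest mu <-> smallest lam
print(lam1)                                   # close to pi**2 = 9.8696...
```

With n = 2 the same computation gives μ = 1/8 and λ = 8, as in the worked example.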
7
An example of a separable kernel equation
Solve the integral equation

φ(x) − λ \int_0^π \sin(x + t) φ(t)\, dt = 1    (30)

where λ ≠ ±\frac{2}{π}.
The kernel is K(x, t) = \sin x \cos t + \sin t \cos x, where

a_1(x) = \sin x,  a_2(x) = \cos x,  b_1(t) = \cos t,  b_2(t) = \sin t,

and we have used the addition identity \sin(x + t) = \sin x \cos t + \sin t \cos x.
Choosing f(x) = 1 we obtain

f_1 = \int_0^π b_1(t) f(t)\, dt = \int_0^π \cos t\, dt = 0
f_2 = \int_0^π b_2(t) f(t)\, dt = \int_0^π \sin t\, dt = 2

and so

F = \begin{pmatrix} 0 \\ 2 \end{pmatrix}.    (31)

Similarly,

a_{11} = \int_0^π b_1(t) a_1(t)\, dt = \int_0^π \sin t \cos t\, dt = 0
a_{12} = \int_0^π b_1(t) a_2(t)\, dt = \int_0^π \cos^2 t\, dt = \frac{π}{2}
a_{21} = \int_0^π b_2(t) a_1(t)\, dt = \int_0^π \sin^2 t\, dt = \frac{π}{2}
a_{22} = \int_0^π b_2(t) a_2(t)\, dt = \int_0^π \sin t \cos t\, dt = 0,

which yields

A = \begin{pmatrix} 0 & \frac{π}{2} \\ \frac{π}{2} & 0 \end{pmatrix}.    (32)
Now we can rewrite the original integral equation as C = F + λAC:

\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix} + λ \begin{pmatrix} 0 & \frac{π}{2} \\ \frac{π}{2} & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}    (33)

or

\begin{pmatrix} 1 & −\frac{λπ}{2} \\ −\frac{λπ}{2} & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix},    (34)

that is,

c_1 − \frac{λπ}{2} c_2 = 0
c_2 − \frac{λπ}{2} c_1 = 2
with solutions c_1 = 4λπ/(4 − λ^2 π^2) and c_2 = 8/(4 − λ^2 π^2).
The unique solution of the integral equation is then

φ(x) = 1 + λ c_1 a_1(x) + λ c_2 a_2(x) = 1 + \frac{8λ}{4 − λ^2 π^2} \cos x + \frac{4λ^2 π}{4 − λ^2 π^2} \sin x.
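The closed-form solution can be verified against a direct discretization of eq. (30); a sketch (our own check, with the arbitrary choice λ = 0.1):

```python
import numpy as np

# Solve phi(x) - lam * int_0^pi sin(x+t) phi(t) dt = 1 numerically and compare
# with the closed-form separable-kernel solution (lam = 0.1 chosen freely).
lam, n = 0.1, 400
x = np.linspace(0.0, np.pi, n + 1)
w = np.full(n + 1, np.pi / n)
w[0] = w[-1] = 0.5 * np.pi / n                # trapezoid weights
A = np.eye(n + 1) - lam * np.sin(x[:, None] + x[None, :]) * w[None, :]
phi = np.linalg.solve(A, np.ones(n + 1))
d = 4.0 - lam**2 * np.pi**2
exact = 1.0 + (8 * lam / d) * np.cos(x) + (4 * lam**2 * np.pi / d) * np.sin(x)
print(np.max(np.abs(phi - exact)))            # agreement up to quadrature error
```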
IVP1

φ′(x) = F(x, φ(x)),  0 ≤ x ≤ 1,  φ(0) = φ_0

⇒ φ(x) = \int_0^x F(t, φ(t))\, dt + φ_0

IVP2

φ″(x) = F(x, φ(x)),  0 ≤ x ≤ 1,  φ(0) = φ_0,  φ′(0) = φ′_0

⇒ φ′(x) = \int_0^x F(t, φ(t))\, dt + φ′_0
⇒ φ(x) = \int_0^x ds \int_0^s F(t, φ(t))\, dt + φ′_0 x + φ_0

We know that

\int_0^x ds \int_0^s G(s, t)\, dt = \int_0^x dt \int_t^x G(s, t)\, ds

so that

\int_0^x ds \int_0^s F(t, φ(t))\, dt = \int_0^x (x − t) F(t, φ(t))\, dt.

Therefore,

φ(x) = \int_0^x (x − t) F(t, φ(t))\, dt + φ′_0 x + φ_0.
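A quick numerical sanity check of this reduction (our own illustrative choice F(x, φ) = −φ, φ_0 = 1, φ′_0 = 0, so that φ″ = −φ and φ(x) = cos x):

```python
import numpy as np

# Verify phi(x) = int_0^x (x - t) F(t, phi(t)) dt + phi0p * x + phi0 at x = 1
# for F = -phi, phi0 = 1, phi0p = 0, whose solution is phi(x) = cos(x).
xval, phi0, phi0p = 1.0, 1.0, 0.0
t = np.linspace(0.0, xval, 100001)
integrand = (xval - t) * (-np.cos(t))          # (x - t) * F(t, cos t)
h = t[1] - t[0]
integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
rhs = integral + phi0p * xval + phi0
print(abs(rhs - np.cos(xval)))                 # ~ 0: the identity holds
```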
Theorem 1 (Degenerate Kernels) If k(x, t) is a degenerate kernel on [a, b] × [a, b] which can be written in the form k(x, t) = \sum_{i=1}^{n} a_i(x) b_i(t), where a_1, \ldots, a_n and b_1, \ldots, b_n are continuous, then either

1. for every continuous function f on [a, b] the integral equation

φ(x) − λ \int_a^b k(x, t) φ(t)\, dt = f(x)  (a ≤ x ≤ b)    (35)

possesses a unique continuous solution φ, or

2. the homogeneous equation

φ(x) − λ \int_a^b k(x, t) φ(t)\, dt = 0  (a ≤ x ≤ b)    (36)

has a non-trivial solution φ, in which case eq. (35) has non-unique solutions if and only if \int_a^b f(t) ψ(t)\, dt = 0 for every continuous solution ψ of the equation

ψ(x) − λ̄ \int_a^b k(t, x) ψ(t)\, dt = 0  (a ≤ x ≤ b).    (37)
Theorem 2 Let K be a bounded linear map from L^2(a, b) to itself with the property that {Kφ : φ ∈ L^2(a, b)} has finite dimension. Then either

1. the linear map I − λK has an inverse and (I − λK)^{−1} is a bounded linear map, or

2. the equation ψ − λKψ = 0 has a non-trivial solution ψ ∈ L^2(a, b).
In the specific case where the kernel k generates a compact operator on L^2(a, b), the integral equation

φ(x) − λ \int_a^b k(x, t) φ(t)\, dt = f(x)  (a ≤ x ≤ b)

will have a unique solution, and the solution operator (I − λK)^{−1} will be bounded, provided that the corresponding homogeneous equation

φ(x) − λ \int_a^b k(x, t) φ(t)\, dt = 0  (a ≤ x ≤ b)

has only the trivial solution in L^2(a, b).
8
Bibliographical Comments
In this last section we provide a list of bibliographical references that we found useful in writing this survey and that can also be used for further study of Integral Equations. The list contains mainly textbooks and research monographs, and we do not claim by any means that it is complete or exhaustive. It merely reflects our own personal interests in the vast subject of Integral Equations. The books are presented in alphabetical order of the first author.
The book [Cor91] presents the theory of Volterra integral equations and their applications,
convolution equations, a historical account of the theory of integral equations, fixed point theorems, operators on function spaces, Hammerstein equations and Volterra equations in abstract
Banach spaces, applications in coagulation processes, optimal control of processes governed by
Volterra equations, and stability of nuclear reactors.
The book [Hac95] starts with an introductory chapter gathering basic facts from analysis,
functional analysis, and numerical mathematics. Then the book discusses Volterra integral
equations, Fredholm integral equations of the second kind, discretizations, kernel approximation, the collocation method, the Galerkin method, the Nyström method, multi-grid methods
for solving systems arising from integral equations of the second kind, Abel’s integral equation,
Cauchy’s integral equation, analytical properties of the boundary integral equation method,
numerical integration and the panel clustering method and the boundary element method.
The book [Hoc89] discusses the contraction mapping principle, the theory of compact operators,
the Fredholm alternative, ordinary differential operators and their study via compact integral
operators, as well as the necessary background in linear algebra and Hilbert space theory.
Some applications to boundary value problems are also discussed. It also contains a complete
treatment of numerous transform techniques such as Fourier, Laplace, Mellin and Hankel, a
discussion of the projection method, the classical Fredholm techniques, integral operators with
positive kernels and the Schauder fixed-point theorem.
The book [Jer99] contains a large number of worked out examples, which include the approximate numerical solution of Volterra and Fredholm integral equations of the first and second kinds, population dynamics, mortality of equipment, the tautochrone problem, the reduction of initial value problems to Volterra equations, and the reduction of boundary value problems for ordinary differential equations and the Schrödinger equation in three dimensions to Fredholm equations. It also deals with the solution of Volterra equations by using the resolvent
kernel, by iteration and by Laplace transforms, Green’s function in one and two dimensions and
with Sturm-Liouville operators and their role in formulating integral equations, the Fredholm
equation, linear integral equations with a degenerate kernel, equations with symmetric kernels
and the Hilbert-Schmidt theorem. The book ends with the Banach fixed point theorem in a metric
space and its application to the solution of linear and nonlinear integral equations and the use
of higher quadrature rules for the approximate solution of linear integral equations.
The book [KK74] discusses the numerical solution of Integral Equations via the method of
invariant imbedding. The integral equation is converted into a system of differential equations.
For general kernels, resolvents and numerical quadratures are employed.
The book [Kon91] presents a unified general theory of integral equations, attempting to unify
the theories of Picard, Volterra, Fredholm, and Hilbert-Schmidt. It also features applications
from a wide variety of different areas. It contains a classification of integral equations, a description of methods of solution of integral equations, theories of special integral equations,
symmetric kernels, singular integral equations, singular kernels and nonlinear integral equations.
The book [Moi77] is an introductory text on integral equations. It describes the classification of
integral equations, the connection with differential equations, integral equations of convolution
type, the method of successive approximation, singular kernels, the resolvent, Fredholm theory,
Hilbert-Schmidt theory and the necessary background from Hilbert spaces.
The handbook [PM08] contains over 2500 tabulated linear and nonlinear integral equations and
their exact solutions, outlines exact, approximate analytical, and numerical methods for solving
integral equations and illustrates the application of the methods with numerous examples. It
discusses equations that arise in elasticity, plasticity, heat and mass transfer, hydrodynamics,
chemical engineering, and other areas. This is the second edition of the original handbook
published in 1998.
The book [PS90] contains a treatment of a classification and examples of integral equations,
second order ordinary differential equations and integral equations, integral equations of the
second kind, compact operators, spectra of compact self-adjoint operators, positive operators,
approximation methods for eigenvalues and eigenvectors of self-adjoint operators, approximation methods for inhomogeneous integral equations, and singular integral equations.
The book [Smi58] features a carefully chosen selection of topics on Integral Equations using complex-valued functions of a real variable and notations of operator theory. The usual
treatment of Neumann series and the analogue of the Hilbert-Schmidt expansion theorem are
generalized. Kernels of finite rank, or degenerate kernels, are discussed. Using the method of
polynomial approximation for two variables, the author writes the general kernel as a sum of a
kernel of finite rank and a kernel of small norm. The usual results of orthogonalization of the
square integrable functions of one and two variables are given, with a treatment of the Riesz-Fischer theorem, as a preparation for the classical theory and the application to the Hermitian
kernels.
The book [Tri85] contains a study of Volterra equations, with applications to differential equations, an exposition of classical Fredholm and Hilbert-Schmidt theory, a discussion of orthonormal sequences and the Hilbert-Schmidt theory for symmetric kernels, a description of the Ritz
method, a discussion of positive kernels and Mercer’s theorem, the application of integral equations to Sturm-Liouville theory, a presentation of singular integral equations and non-linear
integral equations.
References
[Cor91] C. Corduneanu. Integral equations and applications. Cambridge University Press,
Cambridge, 1991.
[Hac95] W. Hackbusch. Integral equations, volume 120 of International Series of Numerical Mathematics. Birkhäuser Verlag, Basel, 1995. Theory and numerical treatment,
Translated and revised by the author from the 1989 German original.
[Hoc89] H. Hochstadt. Integral equations. Wiley Classics Library. John Wiley & Sons Inc.,
New York, 1989. Reprint of the 1973 original, A Wiley-Interscience Publication.
[Jer99] A. J. Jerri. Introduction to integral equations with applications. Wiley-Interscience,
New York, second edition, 1999.
[KK74] H. H. Kagiwada and R. Kalaba. Integral equations via imbedding methods. Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1974. Applied Mathematics and Computation, No. 6.
[Kon91] J. Kondo. Integral equations. Oxford Applied Mathematics and Computing Science
Series. The Clarendon Press Oxford University Press, New York, 1991.
[Moi77] B. L. Moiseiwitsch. Integral equations. Longman, London, 1977. Longman Mathematical Texts.
[PM08] A. D. Polyanin and A. V. Manzhirov. Handbook of integral equations. Chapman &
Hall/CRC, Boca Raton, FL, second edition, 2008.
[PS90]
D. Porter and D. S. G. Stirling. Integral equations. Cambridge Texts in Applied
Mathematics. Cambridge University Press, Cambridge, 1990. A practical treatment,
from spectral theory to applications.
[Smi58] F. Smithies. Integral equations. Cambridge Tracts in Mathematics and Mathematical
Physics, no. 49. Cambridge University Press, New York, 1958.
[Tri85]
F. G. Tricomi. Integral equations. Dover Publications Inc., New York, 1985. Reprint
of the 1957 original.