SOOCHOW JOURNAL OF MATHEMATICS
Volume 31, No. 3, pp. 449-466, July 2005
AN ALGORITHM FOR COMPUTING INVERSES
OF TRIDIAGONAL MATRICES WITH APPLICATIONS
BY
QASSEM AL-HASSAN
Abstract. An algorithm for computing inverses of nonsingular tridiagonal matrices
is introduced. This algorithm is obtained via computing the LU decomposition of
the matrix followed by inversion of L and U. This process yields a simple recurrence
relation which is second order, homogeneous, and linear. This relation is used to
generate the entries of the required inverse. Some applications of this algorithm
are also included.
1. Introduction
Tridiagonal matrices play a central role in the solution of a large spectrum
of problems in different disciplines of sciences and engineering, such as spline
approximation, numerical solution of ordinary and partial differential equations,
solution of linear systems of equations, among many other applications. The
inverse of tridiagonal matrices is needed to solve these problems.
This work is devoted to the construction of a simple and robust algorithm
that computes the inverse of a tridiagonal matrix under the condition that all
the entries of the superdiagonal are nonzero.
Section 2 contains the theoretical justification and derivation of the algorithm
for a special tridiagonal matrix, section 3 contains the algorithm for a general
tridiagonal matrix together with two illustrative examples, while section 4 contains
some applications of the use of this algorithm to solve some important problems,
such as the computation of the determinant, numerical solution of boundary value
problems, second order linear nonhomogeneous difference equations posed as
boundary value problems, and computation of different types of orthogonal
polynomials. Finally, section 5 is devoted to some concluding remarks.

Received June 7, 2004; revised October 7, 2004.
AMS Subject Classification. 65Q05, 39A05.
Key words. tridiagonal matrices, Crout's LU factorization, recurrence relation.
Throughout this work, we will adopt the conventions:

    ∏_{i=m}^{n} η_i = 1   and   ∑_{i=m}^{n} η_i = 0,   whenever n < m.
2. Theoretical Justification
Given the nonsingular k × k tridiagonal matrix

    A = [ α_1   1                      ]
        [ β_2  α_2   1                 ]
        [      β_3  α_3   ⋱            ]                (2.1)
        [            ⋱     ⋱     1     ]
        [                  β_k   α_k   ]

(all unmarked entries are zero).
We want to compute the inverse of this special tridiagonal matrix. To achieve
this, we start with the following lemmas.
Lemma 2.1. The second-order homogeneous difference equation:

    γ_i = α_i γ_{i−1} − β_i γ_{i−2},   for i ≥ 2,          (2.2)

subject to the initial conditions γ_0 = 1, γ_1 = α_1, has the solution:

    γ_j = D_j = det(principal j × j submatrix of A),   j = 2, 3, … .
Proof. The result is obtained by computing the determinant by expansion
about the last column.
Lemma 2.2. Let the sequence {φ_i}_{i=1}^{k} be defined recursively by:

    φ_k = 1/(γ_{k−1} γ_k)

and

    φ_i = β_{i+1} φ_{i+1} + 1/(γ_{i−1} γ_i),   i = k−1, k−2, …, 1.

Then φ_i has the form:

    φ_i = ∑_{m=i}^{k} ( ∏_{j=i+1}^{m} β_j ) · 1/(γ_{m−1} γ_m),   i = 1, …, k.
Proof. The result is obtained by direct computation of φ_i using the recursive definition.
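As a quick sanity check, the recursive definition and the closed form of Lemma 2.2 can be compared in exact rational arithmetic; in the sketch below the sample values for α_i and β_i are arbitrary test data, not taken from the paper:

```python
from fractions import Fraction as F

# arbitrary sample data for a 5 x 5 matrix A (1-based lists, index 0 unused)
k = 5
alpha = [None, F(3), F(4), F(2), F(5), F(3)]
beta = [None, None, F(1), F(2), F(1), F(3)]

# Lemma 2.1: gamma_0 = 1, gamma_1 = alpha_1, gamma_i = alpha_i*gamma_{i-1} - beta_i*gamma_{i-2}
gamma = [F(1), alpha[1]]
for i in range(2, k + 1):
    gamma.append(alpha[i] * gamma[i - 1] - beta[i] * gamma[i - 2])

# the recursive definition of phi
phi = [None] * (k + 1)
phi[k] = 1 / (gamma[k - 1] * gamma[k])
for i in range(k - 1, 0, -1):
    phi[i] = beta[i + 1] * phi[i + 1] + 1 / (gamma[i - 1] * gamma[i])

# the closed form of Lemma 2.2
def phi_closed(i):
    total = F(0)
    for m in range(i, k + 1):
        prod = F(1)
        for j in range(i + 1, m + 1):
            prod *= beta[j]
        total += prod / (gamma[m - 1] * gamma[m])
    return total

assert all(phi[i] == phi_closed(i) for i in range(1, k + 1))
```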
Now, it is known that Crout's LU factorization of A gives a lower bidiagonal
matrix L and a unit upper bidiagonal matrix U (see [1]), where

    L = [ γ_1/γ_0                               ]
        [ β_2      γ_2/γ_1                      ]
        [          β_3       γ_3/γ_2            ]
        [                    ⋱        ⋱         ]
        [                    β_k      γ_k/γ_{k−1} ]

and

    U = [ 1   γ_0/γ_1                            ]
        [     1        γ_1/γ_2                   ]
        [              ⋱        ⋱                ]
        [                   1    γ_{k−2}/γ_{k−1} ]
        [                             1          ]

(all other entries zero). This comes from the following observation: the
subdiagonal entries of the product LU are just the β_j's, the superdiagonal
entries of LU are of the form

    (γ_j/γ_{j−1}) · (γ_{j−1}/γ_j) = 1,

and the diagonal entries are of the form

    β_j (γ_{j−2}/γ_{j−1}) + γ_j/γ_{j−1} = α_j,

by the recurrence (2.2).
Lemma 2.3. The entries of the lower triangular matrix L^{−1} = (l_ij) are given by:

    l_ij = 0,                                             when i < j,
    l_ij = γ_{i−1}/γ_i,                                   when i = j,
    l_ij = (−1)^{i+j} (γ_{j−1}/γ_i) ∏_{m=j+1}^{i} β_m,    when i > j.

Proof. The nondiagonal entries of the product L L^{−1} have the form:

    β_j l_{j−1,j−n} + (γ_j/γ_{j−1}) l_{j,j−n}
        = β_j (−1)^{2j−n−1} (γ_{j−n−1}/γ_{j−1}) ∏_{m=j−n+1}^{j−1} β_m
          + (−1)^{2j−n} (γ_j/γ_{j−1}) (γ_{j−n−1}/γ_j) ∏_{m=j−n+1}^{j} β_m
        = 0,

and the diagonal entries have the form:

    (γ_j/γ_{j−1}) (γ_{j−1}/γ_j) = 1.
Lemma 2.4. The entries of the upper triangular matrix U^{−1} = (u_ij) are given by:

    u_ij = (−1)^{i+j} γ_{i−1}/γ_{j−1},   when i < j,
    u_ij = 1,                            when i = j,
    u_ij = 0,                            when i > j.

Proof. The nondiagonal entries of the product U U^{−1} are of the form:

    (−1)^n (γ_{i−1}/γ_{i+n−1}) + (−1)^{n+1} (γ_{i−1}/γ_i)(γ_i/γ_{i+n−1}) = 0,   n = 1, …, k − i.
Theorem 2.5. The entries of A^{−1} = (a_ij) are of the form:

    a_ii = (γ_{i−1})² φ_i,   i = 1, …, k,

    a_ij = (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j,   i = 1, …, k−1,  j = i+1, …, k,

and

    a_ji = ( ∏_{n=i+1}^{j} β_n ) a_ij.
Proof. First, the diagonal elements of A^{−1} = U^{−1} L^{−1} have the form:

    a_ii = ∑_{m=i}^{k} u_im l_mi
         = ∑_{m=i}^{k} (−1)^{i+m} (γ_{i−1}/γ_{m−1}) · (−1)^{m+i} (γ_{i−1}/γ_m) ∏_{n=i+1}^{m} β_n
         = (γ_{i−1})² ∑_{m=i}^{k} ( ∏_{n=i+1}^{m} β_n ) · 1/(γ_{m−1} γ_m)
         = (γ_{i−1})² φ_i.

Second, for i < j, we have

    a_ij = ∑_{m=j}^{k} u_im l_mj
         = ∑_{m=j}^{k} (−1)^{i+m} (γ_{i−1}/γ_{m−1}) · (−1)^{m+j} (γ_{j−1}/γ_m) ∏_{n=j+1}^{m} β_n
         = (−1)^{i+j} γ_{i−1} γ_{j−1} ∑_{m=j}^{k} ( ∏_{n=j+1}^{m} β_n ) · 1/(γ_{m−1} γ_m)
         = (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j,
and

    a_ji = ∑_{m=j}^{k} (−1)^{i+j} (γ_{j−1} γ_{i−1}/(γ_{m−1} γ_m)) ∏_{n=i+1}^{m} β_n
         = (−1)^{i+j} γ_{j−1} γ_{i−1} ∑_{m=j}^{k} ( ∏_{n=i+1}^{m} β_n ) · 1/(γ_{m−1} γ_m)
         = (−1)^{i+j} γ_{i−1} γ_{j−1} ( ∏_{l=i+1}^{j} β_l ) ∑_{m=j}^{k} ( ∏_{n=j+1}^{m} β_n ) · 1/(γ_{m−1} γ_m)
         = ( ∏_{l=i+1}^{j} β_l ) (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j
         = ( ∏_{l=i+1}^{j} β_l ) a_ij.
Finally, to complete the argument, we show that the product A A^{−1} = I, where
I is the identity matrix. We do that as follows:

1. The diagonal elements of this product are of the form:

    β_i a_{i−1,i} + α_i a_ii + a_{i+1,i}
        = β_i (−γ_{i−2} γ_{i−1}) φ_i + α_i (γ_{i−1})² φ_i − β_{i+1} (γ_{i−1} γ_i) φ_{i+1}
        = −β_i (γ_{i−2} γ_{i−1}) φ_i + α_i (γ_{i−1})² φ_i − γ_{i−1} γ_i ( φ_i − 1/(γ_{i−1} γ_i) )
        = −β_i (γ_{i−2} γ_{i−1}) φ_i + α_i (γ_{i−1})² φ_i − γ_{i−1} γ_i φ_i + 1
        = 1 + γ_{i−1} φ_i [ α_i γ_{i−1} − β_i γ_{i−2} − γ_i ]
        = 1.

2. The nondiagonal entries of A A^{−1} are of the form:

    β_i a_{i−1,j} + α_i a_ij + a_{i+1,j}.
Case 1: If i < j and i + 1 ≤ j, then

    β_i a_{i−1,j} + α_i a_ij + a_{i+1,j}
        = β_i [ (−1)^{i+j−1} γ_{i−2} γ_{j−1} φ_j ] + α_i [ (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j ]
          + (−1)^{i+j+1} γ_i γ_{j−1} φ_j
        = (−1)^{i+j−1} γ_{j−1} φ_j [ β_i γ_{i−2} − α_i γ_{i−1} + γ_i ]
        = 0.
Case 2: If i > j and i − 1 ≥ j, then

    β_i a_{i−1,j} + α_i a_ij + a_{i+1,j}
        = β_i ( ∏_{n=j+1}^{i−1} β_n ) a_{j,i−1} + α_i ( ∏_{n=j+1}^{i} β_n ) a_{j,i}
          + ( ∏_{n=j+1}^{i+1} β_n ) a_{j,i+1}
        = ( ∏_{n=j+1}^{i} β_n ) (−1)^{i+j−1} γ_{j−1} γ_{i−2} φ_{i−1}
          + α_i ( ∏_{n=j+1}^{i} β_n ) (−1)^{i+j} γ_{i−1} γ_{j−1} φ_i
          + ( ∏_{n=j+1}^{i+1} β_n ) (−1)^{i+j+1} γ_i γ_{j−1} φ_{i+1}
        = ( ∏_{n=j+1}^{i} β_n ) (−1)^{i+j−1} γ_{j−1} [ γ_{i−2} φ_{i−1} − α_i γ_{i−1} φ_i + γ_i φ_i − 1/γ_{i−1} ]
        = ( ∏_{n=j+1}^{i} β_n ) (−1)^{i+j−1} γ_{j−1} [ γ_{i−2} β_i φ_i + 1/γ_{i−1} − α_i γ_{i−1} φ_i + γ_i φ_i − 1/γ_{i−1} ]
        = ( ∏_{n=j+1}^{i} β_n ) (−1)^{i+j−1} γ_{j−1} φ_i [ β_i γ_{i−2} − α_i γ_{i−1} + γ_i ]
        = 0.

(Here we used β_{i+1} φ_{i+1} = φ_i − 1/(γ_{i−1} γ_i) and φ_{i−1} = β_i φ_i + 1/(γ_{i−2} γ_{i−1}).)
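Theorem 2.5 lends itself to direct verification: build A from sample α_i and β_i, assemble A^{−1} entrywise from the formulas above, and confirm A · A^{−1} = I in exact arithmetic. A minimal sketch (the helper names and the sample numbers are ours):

```python
from fractions import Fraction as F

k = 4
alpha = [None, F(2), F(3), F(4), F(5)]   # arbitrary sample data, 1-based
beta = [None, None, F(1), F(2), F(3)]

# gamma and phi as in Lemmas 2.1 and 2.2
gamma = [F(1), alpha[1]]
for i in range(2, k + 1):
    gamma.append(alpha[i] * gamma[i - 1] - beta[i] * gamma[i - 2])
phi = [None] * (k + 1)
phi[k] = 1 / (gamma[k - 1] * gamma[k])
for i in range(k - 1, 0, -1):
    phi[i] = beta[i + 1] * phi[i + 1] + 1 / (gamma[i - 1] * gamma[i])

# entries of A^{-1} per Theorem 2.5 (1-based i, j)
def a_inv(i, j):
    if i == j:
        return gamma[i - 1] ** 2 * phi[i]
    if i < j:
        return (-1) ** (i + j) * gamma[i - 1] * gamma[j - 1] * phi[j]
    prod = F(1)                     # i > j: multiply the i < j entry by prod of betas
    for n in range(j + 1, i + 1):
        prod *= beta[n]
    return prod * a_inv(j, i)

# A itself: diagonal alpha_i, superdiagonal 1, subdiagonal beta_i
def a(i, j):
    if i == j: return alpha[i]
    if j == i + 1: return F(1)
    if j == i - 1: return beta[i]
    return F(0)

for i in range(1, k + 1):
    for j in range(1, k + 1):
        s = sum(a(i, m) * a_inv(m, j) for m in range(1, k + 1))
        assert s == (1 if i == j else 0)
```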
3. The General Tridiagonal Matrix
Let T be a general tridiagonal matrix of the form:

    T = [ λ_1   μ_1                         ]
        [ τ_2   λ_2   μ_2                   ]
        [       τ_3   λ_3    ⋱              ]                (3.1)
        [              ⋱      ⋱    μ_{k−1}  ]
        [                    τ_k   λ_k      ]
where the μ_i are all nonzero. It is known (see [2]) that if we set:

    μ_k = β_1 = 1,
    α_i = λ_i/μ_i,   i = 1, …, k,
    β_i = τ_i/μ_i,   i = 2, …, k,

then T = diag(μ_1, μ_2, …, μ_k) A. Therefore, we have

    T^{−1} = A^{−1} diag(1/μ_1, 1/μ_2, …, 1/μ_k).
An Algorithm
Given a k × k tridiagonal matrix T as in (3.1), the following algorithm
computes T^{−1} = (t_ij).

Step 1: Set γ_0 = 1, γ_1 = λ_1/μ_1, μ_k = 1, and

    γ_i = (λ_i/μ_i) γ_{i−1} − (τ_i/μ_i) γ_{i−2},   i = 2, …, k.

Step 2: Set φ_k = 1/(γ_{k−1} γ_k), and

    φ_i = (τ_{i+1}/μ_{i+1}) φ_{i+1} + 1/(γ_{i−1} γ_i),   i = k−1, k−2, …, 1.

Step 3: Set t_ii = (γ_{i−1})² φ_i/μ_i,   i = 1, …, k.

Step 4: For i = 1, …, k−1 and j = i+1, i+2, …, k, set

    t_ij = (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j/μ_j   and   t_ji = ( ∏_{n=i+1}^{j} τ_n/μ_n ) (μ_j/μ_i) t_ij.
Remark. It is worth mentioning at this point that Step 4 makes the complexity of the algorithm O(k²).
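Steps 1 through 4 translate almost line by line into code. The sketch below (the function name and 1-based list conventions are ours) implements the algorithm in exact rational arithmetic and checks it on a 2 × 2 instance against the standard closed-form inverse of a 2 × 2 matrix:

```python
from fractions import Fraction as F

def tridiag_inverse(lam, mu, tau):
    """Inverse of the k x k tridiagonal T with diagonal lam[1..k],
    superdiagonal mu[1..k-1] (all nonzero) and subdiagonal tau[2..k].
    Lists are 1-based: index 0 (and tau[1]) are unused."""
    k = len(lam) - 1
    mu = mu[:] + [F(1)]                        # Step 1 convention: mu_k = 1
    # Step 1: the gamma recurrence
    gamma = [F(1), F(lam[1]) / mu[1]]
    for i in range(2, k + 1):
        gamma.append((lam[i] * gamma[i - 1] - tau[i] * gamma[i - 2]) / mu[i])
    # Step 2: the phi recurrence
    phi = [None] * (k + 1)
    phi[k] = 1 / (gamma[k - 1] * gamma[k])
    for i in range(k - 1, 0, -1):
        phi[i] = F(tau[i + 1]) / mu[i + 1] * phi[i + 1] + 1 / (gamma[i - 1] * gamma[i])
    # Steps 3 and 4: assemble the entries
    t = [[F(0)] * (k + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        t[i][i] = gamma[i - 1] ** 2 * phi[i] / mu[i]
    for i in range(1, k):
        for j in range(i + 1, k + 1):
            t[i][j] = (-1) ** (i + j) * gamma[i - 1] * gamma[j - 1] * phi[j] / mu[j]
            prod = F(1)
            for n in range(i + 1, j + 1):
                prod *= F(tau[n]) / mu[n]
            t[j][i] = prod * F(mu[j]) / mu[i] * t[i][j]
    return [row[1:] for row in t[1:]]          # plain 0-based k x k matrix

# check on a 2 x 2 instance T = [[l1, m1], [t2, l2]]
l1, m1, t2, l2 = 3, 2, 1, 4
Tinv = tridiag_inverse([None, l1, l2], [None, m1], [None, None, t2])
d = l2 * l1 - m1 * t2                          # the determinant, here 10
assert Tinv == [[F(l2, d), F(-m1, d)], [F(-t2, d), F(l1, d)]]
```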
Illustrative Examples
(1) When k = 2.

Consider the matrix

    T = [ λ_1  μ_1 ]
        [ τ_2  λ_2 ].
Applying the steps of the above algorithm, we have:

Step 1: γ_0 = 1, γ_1 = λ_1/μ_1, τ_1 = μ_2 = 1 and

    γ_2 = λ_2 γ_1 − τ_2 γ_0 = λ_2 (λ_1/μ_1) − τ_2 = (λ_2 λ_1 − μ_1 τ_2)/μ_1.

Step 2: φ_2 = 1/(γ_1 γ_2) = μ_1²/(λ_1 (λ_2 λ_1 − μ_1 τ_2)) and

    φ_1 = (τ_2/μ_2) φ_2 + 1/(γ_1 γ_0) = μ_1 λ_2/(λ_2 λ_1 − μ_1 τ_2).

Step 3: t_11 = γ_0² φ_1 /μ_1 = λ_2/(λ_2 λ_1 − μ_1 τ_2) and

    t_22 = γ_1² φ_2 /μ_2 = λ_1/(λ_2 λ_1 − μ_1 τ_2).

Step 4: t_12 = −γ_0 γ_1 φ_2 /μ_2 = −μ_1/(λ_2 λ_1 − μ_1 τ_2) and

    t_21 = (τ_2/μ_2)(μ_2/μ_1) t_12 = −τ_2/(λ_2 λ_1 − μ_1 τ_2).

So, the required inverse is

    T^{−1} = 1/(λ_2 λ_1 − μ_1 τ_2) · [  λ_2  −μ_1 ]
                                     [ −τ_2   λ_1 ].
Compared with the closed-form result of R. K. Mallik in [2], and using the
same notation, the calculations in [2] are done as follows:

    r_1 = E_0(1) = 1,
    r_2 = E_1(1) = −λ_1/μ_1,
    σ_2 = −μ_1 τ_2/(λ_1 λ_2),
    s_1 = −E_2(2)/(μ_1 E_2(1)) = −λ_2/(λ_2 λ_1 − μ_1 τ_2),
    s_2 = −E_2(3)/E_2(1) = −μ_1/(λ_2 λ_1 − μ_1 τ_2),
    u_1 = −s_1 = λ_2/(λ_2 λ_1 − μ_1 τ_2),
    u_2 = −F_2(3)/F_2(1) = −τ_2/(λ_2 λ_1 − μ_1 τ_2),
    v_1 = 1,   v_2 = F_1(1) = −λ_1/τ_2,
    t_11 = u_1 v_1 = λ_2/(λ_2 λ_1 − μ_1 τ_2),
    t_22 = u_2 v_2 = λ_1/(λ_2 λ_1 − μ_1 τ_2),
    t_12 = r_1 s_2 = −μ_1/(λ_2 λ_1 − μ_1 τ_2),
    t_21 = u_2 v_1 = −τ_2/(λ_2 λ_1 − μ_1 τ_2).

Thus we get

    T^{−1} = 1/(λ_2 λ_1 − μ_1 τ_2) · [  λ_2  −μ_1 ]
                                     [ −τ_2   λ_1 ].
(2) When k = 3.

Consider the matrix

    T = [ λ_1  μ_1   0  ]
        [ τ_2  λ_2  μ_2 ]
        [  0   τ_3  λ_3 ].
Applying the steps of the above algorithm, we have:

Step 1: γ_0 = 1, γ_1 = λ_1/μ_1, τ_1 = μ_3 = 1,

    γ_2 = (λ_2/μ_2) γ_1 − (τ_2/μ_2) γ_0 = (λ_2 λ_1 − μ_1 τ_2)/(μ_1 μ_2)   and
    γ_3 = (λ_3/μ_3) γ_2 − (τ_3/μ_3) γ_1 = (λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3)/(μ_1 μ_2 μ_3).

Step 2:

    φ_3 = 1/(γ_2 γ_3) = μ_1² μ_2² μ_3/((λ_2 λ_1 − μ_1 τ_2)(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3)),
    φ_2 = (τ_3/μ_3) φ_3 + 1/(γ_2 γ_1) = λ_3 μ_1² μ_2/(λ_1 (λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3))   and
    φ_1 = (τ_2/μ_2) φ_2 + 1/(γ_1 γ_0) = μ_1 (λ_2 λ_3 − μ_2 τ_3)/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3).

Step 3:

    t_11 = (γ_0² φ_1)/μ_1 = (λ_2 λ_3 − μ_2 τ_3)/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3),
    t_22 = (γ_1² φ_2)/μ_2 = λ_1 λ_3/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3)   and
    t_33 = (γ_2² φ_3)/μ_3 = (λ_2 λ_1 − μ_1 τ_2)/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3).

Step 4:

    t_12 = (−γ_0 γ_1 φ_2)/μ_2 = −λ_3 μ_1/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3),
    t_21 = (τ_2/μ_2)(μ_2/μ_1) t_12 = −λ_3 τ_2/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3),
    t_13 = (γ_0 γ_2 φ_3)/μ_3 = μ_1 μ_2/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3),
    t_31 = (τ_2 τ_3/(μ_2 μ_3))(μ_3/μ_1) t_13 = τ_2 τ_3/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3),
    t_23 = (−γ_1 γ_2 φ_3)/μ_3 = −λ_1 μ_2/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3)   and
    t_32 = (τ_3/μ_3)(μ_3/μ_2) t_23 = −λ_1 τ_3/(λ_3 λ_2 λ_1 − λ_3 μ_1 τ_2 − λ_1 μ_2 τ_3).

These results are the same ones Mallik obtained in [2], Example 4.2, page 125.
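Plugging concrete numbers into the k = 3 entries above gives a quick independent check; the values below are arbitrary sample data (ours), and the claimed inverse is verified by multiplying back against T:

```python
from fractions import Fraction as F

l1, l2, l3 = F(2), F(3), F(4)     # arbitrary sample diagonal
m1, m2 = F(1), F(2)               # superdiagonal (nonzero)
t2, t3 = F(1), F(1)               # subdiagonal

D = l3*l2*l1 - l3*m1*t2 - l1*m2*t3   # the common denominator in the formulas
Tinv = [[(l2*l3 - m2*t3)/D, -l3*m1/D,  m1*m2/D],
        [-l3*t2/D,          l1*l3/D,   -l1*m2/D],
        [t2*t3/D,           -l1*t3/D,  (l2*l1 - m1*t2)/D]]
T = [[l1, m1, 0], [t2, l2, m2], [0, t3, l3]]

for i in range(3):
    for j in range(3):
        s = sum(T[i][m] * Tinv[m][j] for m in range(3))
        assert s == (1 if i == j else 0)
```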
4. Applications
4.1. The determinant
The difference equation in (2.2) gives a simple and robust method for the
computation of the determinant of any tridiagonal matrix of the form A above.
As was proved in Lemma 2.1, this determinant is just γ_k. As a byproduct of the use
of this difference equation, the determinant of the j × j principal submatrix of A
is obtained at the j-th step, j = 1, …, k − 1. Furthermore, this computation of the
determinant of A is done in 2k − 2 multiplications and k − 1 subtractions, which
totals 3k − 3 operations, a quite low cost compared to the O[k(k − 1)] operations
of the regular method for computing the determinant.

The computation of the determinant of a general tridiagonal matrix T as in
(3.1) adds only k − 1 multiplications, because

    det(T) = ( ∏_{i=1}^{k−1} μ_i ) det(A),

which totals 4k − 4 operations, still much cheaper than O[k(k − 1)].
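The determinant recurrence is easy to exercise on a family with a known answer: the k × k second-difference matrix (diagonal 2, off-diagonals −1) has determinant k + 1. A minimal sketch, assuming the Step 1 conventions (the function name is ours):

```python
from fractions import Fraction as F

def tridiag_det(lam, mu, tau):
    """det of a tridiagonal T via det(T) = (prod_{i<k} mu_i) * gamma_k,
    where gamma follows the Step 1 recurrence. 1-based lists, index 0 unused."""
    k = len(lam) - 1
    mu = mu[:] + [F(1)]                      # mu_k = 1 convention
    g_prev, g = F(1), F(lam[1]) / mu[1]      # gamma_0, gamma_1
    for i in range(2, k + 1):
        g_prev, g = g, (lam[i] * g - tau[i] * g_prev) / mu[i]
    prod = F(1)
    for i in range(1, k):
        prod *= mu[i]
    return prod * g

# second-difference matrix: det = k + 1
for k in range(2, 8):
    lam = [None] + [F(2)] * k
    mu = [None] + [F(-1)] * (k - 1)
    tau = [None, None] + [F(-1)] * (k - 1)
    assert tridiag_det(lam, mu, tau) == k + 1
```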
4.2. Numerical solution of boundary value problems
Consider the two-point boundary-value problem which describes the steady-state
phenomenon of temperature distribution in a rod with endpoints fixed at
0° and with a distributed heat source f(x) (see [3]). The governing equation of
this is the boundary value ordinary differential equation:

    −d²u/dx² = f(x),   0 ≤ x ≤ 1,   u(0) = u(1) = 0.

The discretization of this differential equation using central differences with
mesh points x = jh, 0 ≤ j ≤ n + 1, yields the system:

    −u_{j+1} + 2u_j − u_{j−1} = h² f(jh),   j = 1, …, n,

and this in turn yields the system (for h = 1/6, n = 5)

    [  2  −1   0   0   0 ] [u_1]        [f(h) ]
    [ −1   2  −1   0   0 ] [u_2]        [f(2h)]
    [  0  −1   2  −1   0 ] [u_3]  = h²  [f(3h)]
    [  0   0  −1   2  −1 ] [u_4]        [f(4h)]
    [  0   0   0  −1   2 ] [u_5]        [f(5h)].
Setting γ_0 = 1, γ_1 = −2, and γ_i = −2γ_{i−1} − γ_{i−2}, i ≥ 2 (taking μ_i = −1
for all i, so that λ_i/μ_i = −2 and β_i = τ_i/μ_i = 1), the solution of the resulting
initial value problem is given by:

    γ_i = (−1)^i (1 + i),   i = 2, …, 5.

So,

    γ_2 = 3,   γ_3 = −4,   γ_4 = 5,   γ_5 = −6,

and

    φ_5 = −1/30,   φ_4 = −1/12,   φ_3 = −1/6,   φ_2 = −1/3,   φ_1 = −5/6.

But a_ii = (γ_{i−1})² φ_i implies that

    a_11 = −5/6,   a_22 = −4/3,   a_33 = −3/2,   a_44 = −4/3,   a_55 = −5/6,
and a_ij = a_ji = (−1)^{i+j} γ_{i−1} γ_{j−1} φ_j (for i < j) implies that:

    a_12 = a_21 = −2/3,   a_13 = a_31 = −1/2,   a_14 = a_41 = −1/3,
    a_15 = a_51 = −1/6,   a_23 = a_32 = −1,     a_24 = a_42 = −2/3,
    a_25 = a_52 = −1/3,   a_34 = a_43 = −1,     a_35 = a_53 = −1/2,
    a_45 = a_54 = −2/3,

and t_ij = a_ij/μ_j = −a_ij implies that:
    T^{−1} = (1/6) [ 5  4  3  2  1 ]
                   [ 4  8  6  4  2 ]
                   [ 3  6  9  6  3 ]
                   [ 2  4  6  8  4 ]
                   [ 1  2  3  4  5 ].
Finally,

    u = h² T^{−1} [f(h), f(2h), f(3h), f(4h), f(5h)]^T

gives the required result.
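The 5 × 5 inverse above can be confirmed by direct multiplication; the sketch below rebuilds both matrices in exact arithmetic and checks T · T^{−1} = I:

```python
from fractions import Fraction as F

T = [[2, -1, 0, 0, 0],
     [-1, 2, -1, 0, 0],
     [0, -1, 2, -1, 0],
     [0, 0, -1, 2, -1],
     [0, 0, 0, -1, 2]]

# the inverse found above: (1/6) times an integer matrix
N = [[5, 4, 3, 2, 1],
     [4, 8, 6, 4, 2],
     [3, 6, 9, 6, 3],
     [2, 4, 6, 8, 4],
     [1, 2, 3, 4, 5]]
Tinv = [[F(x, 6) for x in row] for row in N]

for i in range(5):
    for j in range(5):
        s = sum(T[i][m] * Tinv[m][j] for m in range(5))
        assert s == (1 if i == j else 0)
```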
4.3. Difference equations of the second order posed as boundary value problems
4.3.1. The case of constant coefficients
The difference equation

    y_{n+2} + (1/2) y_{n+1} − (1/2) y_n = n,   1 ≤ n ≤ 4,   y_1 = y_6 = 0,

has the general solution

    y_n = −(128/33)(1/2)^n − (227/66)(−1)^n + n − 5/2.
The solution of this boundary value difference equation is done by solving the
system:

    [  1/2    1     0     0  ] [y_2]   [1]
    [ −1/2   1/2    1     0  ] [y_3]   [2]
    [   0   −1/2   1/2    1  ] [y_4] = [3]
    [   0     0   −1/2   1/2 ] [y_5]   [4].
Using the above algorithm, we get:

Step 1: γ_0 = 1, γ_1 = 1/2, and γ_i = (1/2) γ_{i−1} + (1/2) γ_{i−2}, i ≥ 2, which has
the solution:

    γ_i = (1/3)(−1/2)^i + 2/3,   i ≥ 2.

Thus,

    γ_2 = 3/4,   γ_3 = 5/8,   γ_4 = 11/16.

Step 2: φ_4 = 1/(γ_3 γ_4) = 128/55,   φ_3 = β_4 φ_4 + 1/(γ_2 γ_3) = 32/33,
        φ_2 = β_3 φ_3 + 1/(γ_1 γ_2) = 24/11,   and   φ_1 = β_2 φ_2 + 1/(γ_0 γ_1) = 10/11
(here β_i = τ_i/μ_i = −1/2).

Step 3: a_11 = (γ_0)² φ_1 = 10/11,   a_22 = (γ_1)² φ_2 = 6/11,
        a_33 = (γ_2)² φ_3 = 6/11,   and   a_44 = (γ_3)² φ_4 = 10/11.

Step 4: a_12 = −γ_0 γ_1 φ_2 = −12/11,   a_21 = β_2 a_12 = 6/11,
        a_13 = γ_0 γ_2 φ_3 = 8/11,      a_31 = β_2 β_3 a_13 = 2/11,
        a_14 = −γ_0 γ_3 φ_4 = −16/11,   a_41 = β_2 β_3 β_4 a_14 = 2/11,
        a_23 = −γ_1 γ_2 φ_3 = −4/11,    a_32 = β_3 a_23 = 2/11,
        a_24 = γ_1 γ_3 φ_4 = 8/11,      a_42 = β_3 β_4 a_24 = 2/11,
        a_34 = −γ_2 γ_3 φ_4 = −12/11,   a_43 = β_4 a_34 = 6/11.
But,

    [y_2]   [ 10/11  −12/11    8/11  −16/11 ] [1]
    [y_3] = [  6/11    6/11   −4/11    8/11 ] [2]
    [y_4]   [  2/11    2/11    6/11  −12/11 ] [3]
    [y_5]   [  2/11    2/11    6/11   10/11 ] [4].
Therefore, we have:

    y_2 = −54/11,   y_3 = 38/11,   y_4 = −24/11,   y_5 = 64/11.
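Both the solved values and the general solution can be cross-checked in exact arithmetic; the sketch below evaluates the closed form and confirms the boundary conditions, the difference equation, and the interior values found above:

```python
from fractions import Fraction as F

def y(n):
    # the general solution y_n = -(128/33)(1/2)^n - (227/66)(-1)^n + n - 5/2
    return -F(128, 33) * F(1, 2) ** n - F(227, 66) * (-1) ** n + n - F(5, 2)

# boundary conditions
assert y(1) == 0 and y(6) == 0
# the difference equation y_{n+2} + y_{n+1}/2 - y_n/2 = n for 1 <= n <= 4
assert all(y(n + 2) + y(n + 1) / 2 - y(n) / 2 == n for n in range(1, 5))
# interior values found by the algorithm
assert [y(2), y(3), y(4), y(5)] == [F(-54, 11), F(38, 11), F(-24, 11), F(64, 11)]
```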
4.3.2. The case of variable coefficients
Consider the difference equation:

    y_{n+2} + (3n − 2) y_{n+1} − (2n) y_n = n²,   1 ≤ n ≤ 4,   y_1 = y_6 = 1.

The solution of this difference equation involves the solution of the system:

    [  1   1   0   0 ] [y_2]   [ 3]
    [ −4   4   1   0 ] [y_3]   [ 4]
    [  0  −6   7   1 ] [y_4] = [ 9]
    [  0   0  −8  10 ] [y_5]   [15].
Using the above algorithm, we obtain the following:

Step 1: γ_0 = 1, γ_1 = 1, γ_2 = 8, γ_3 = 62, γ_4 = 684.

Step 2: φ_4 = 1/42408,   φ_3 = 155/((171)(16)(31)),   φ_2 = 9672/((171)(16)(31)),
        φ_1 = 46128/((171)(16)(31)).

Step 3: a_11 = (γ_0)² φ_1 = 2883/5301,   a_22 = (γ_1)² φ_2 = 1209/10602,
        a_33 = (γ_2)² φ_3 = 620/5301,    a_44 = (γ_3)² φ_4 = 31/342.

Step 4: a_12 = −γ_0 γ_1 φ_2 = −1209/10602,   a_21 = β_2 a_12 = 2418/5301,
        a_13 = γ_0 γ_2 φ_3 = 155/10602,      a_31 = β_2 β_3 a_13 = 1860/5301,
        a_14 = −γ_0 γ_3 φ_4 = −1/684,        a_41 = β_2 β_3 β_4 a_14 = 48/171,
        a_23 = −γ_1 γ_2 φ_3 = −5/342,        a_32 = β_3 a_23 = 15/171,
        a_24 = γ_1 γ_3 φ_4 = 1/684,          a_42 = β_3 β_4 a_24 = 4/57,
        a_34 = −γ_2 γ_3 φ_4 = −2/171,        a_43 = β_4 a_34 = 16/171
(here β_2 = −4, β_3 = −6, β_4 = −8).
Hence,

    [y_2]   [ 2883/5301  −1209/10602  155/10602   −1/684 ] [ 3]   [ 879/684]
    [y_3] = [ 2418/5301   1209/10602   −5/342      1/684 ] [ 4] = [1173/684]
    [y_4]   [ 1860/5301    15/171     620/5301    −2/171 ] [ 9]   [1560/684]
    [y_5]   [   48/171      4/57       16/171     31/342 ] [15]   [2274/684].
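Since the coefficients vary with n, the final values deserve an independent check; the sketch below re-solves the same 4 × 4 system by exact Gaussian elimination (the elimination code is ours) and confirms the solution satisfies every equation:

```python
from fractions import Fraction as F

# the 4 x 4 system from the text, solved exactly
A = [[1, 1, 0, 0], [-4, 4, 1, 0], [0, -6, 7, 1], [0, 0, -8, 10]]
b = [3, 4, 9, 15]
M = [[F(x) for x in row] + [F(c)] for row, c in zip(A, b)]

n = 4
for p in range(n):                       # forward elimination (no pivoting needed here)
    for r in range(p + 1, n):
        f = M[r][p] / M[p][p]
        M[r] = [u - f * v for u, v in zip(M[r], M[p])]
y = [F(0)] * n
for r in range(n - 1, -1, -1):           # back substitution
    y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]

# y = [y2, y3, y4, y5] must satisfy every equation of the system exactly
assert all(sum(F(A[i][j]) * y[j] for j in range(n)) == b[i] for i in range(n))
```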
4.4. Orthogonal polynomials
Consider the set {p_n(x)}_{n≥0} of orthogonal polynomials on an interval [a, b]
with respect to a nonnegative weight function w(t). It is known that this set
satisfies the second-order homogeneous difference equation ([4]):

    p_n(x) = ((x − a_n)/b_n) p_{n−1}(x) − (c_n/b_n) p_{n−2}(x),   n ≥ 2,          (3.2)

where b_n ≠ 0, and p_0(x) and p_1(x) are given initial conditions.

Comparing (3.2) with (2.2) gives:

    α_n = (x − a_n)/b_n   and   β_n = c_n/b_n,   n ≥ 2.
Thus,

    p_j(x) = det(principal j × j submatrix of A),

where

    A = [ (x − a_1)/b_1       1                                      ]
        [    c_2/b_2     (x − a_2)/b_2       1                       ]
        [                   c_3/b_3     (x − a_3)/b_3   ⋱            ]
        [                                     ⋱          ⋱      1    ]
        [                                          c_k/b_k  (x − a_k)/b_k ],

with p_0(x) = 1 and p_1(x) = (x − a_1)/b_1.
4.4.1. Legendre polynomials
Legendre polynomials are orthogonal on [−1, 1] with respect to the weight
function w(x) = 1. They satisfy the recursion relation:

    p_n(x) = ((2n − 1)/n) x p_{n−1}(x) − ((n − 1)/n) p_{n−2}(x),   n ≥ 2,

with p_0(x) = 1, p_1(x) = x. Thus,

    a_n = 0,   b_n = n/(2n − 1),   c_n = (n − 1)/(2n − 1).

So,

    α_n = ((2n − 1)/n) x,   β_n = (n − 1)/n,

and

    p_n(x) = det [  x      1                                 ]
                 [ 1/2  (3/2)x      1                        ]
                 [       2/3     (5/3)x    ⋱                 ]
                 [                 ⋱        ⋱         1      ]
                 [                     (n−1)/n  ((2n−1)/n)x  ],   n ≥ 2.
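By Lemma 2.1, running the γ recurrence with these α_n and β_n evaluates the determinant above, that is, p_n(x) itself; the sketch below checks it against the familiar closed forms p_2(x) = (3x² − 1)/2 and p_3(x) = (5x³ − 3x)/2 at sample rational points:

```python
from fractions import Fraction as F

def legendre(n, x):
    """p_n(x) via gamma_i = alpha_i*gamma_{i-1} - beta_i*gamma_{i-2}
    with alpha_i = (2i-1)x/i and beta_i = (i-1)/i."""
    g_prev, g = F(1), x               # p_0, p_1
    for i in range(2, n + 1):
        g_prev, g = g, F(2 * i - 1, i) * x * g - F(i - 1, i) * g_prev
    return g if n >= 1 else F(1)

for x in (F(-1), F(0), F(1, 3), F(1, 2), F(1)):
    assert legendre(2, x) == (3 * x ** 2 - 1) / 2
    assert legendre(3, x) == (5 * x ** 3 - 3 * x) / 2
```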
4.4.2. Laguerre polynomials
Laguerre polynomials {L_n(x)}_{n≥0} are orthogonal on (0, ∞) with respect to
the weight function w(x) = e^{−x}. They satisfy the recursion relation

    L_n(x) = (2n − 1 − x) L_{n−1}(x) − (n − 1)² L_{n−2}(x),   n ≥ 2,

with L_0(x) = 1, L_1(x) = 1 − x. Thus,

    a_n = 2n − 1,   b_n = −1,   c_n = −(n − 1)².

So,

    α_n = (x − (2n − 1))/(−1) = 2n − 1 − x,   β_n = (n − 1)²,

and

    L_n(x) = det [ 1−x    1                              ]
                 [  1    3−x     1                       ]
                 [        4     5−x     ⋱                ]
                 [               ⋱       ⋱        1      ]
                 [                   (n−1)²   2n−1−x     ],   n ≥ 2.
4.4.3. Hermite polynomials
Hermite polynomials are orthogonal on the whole real line with respect to
the weight function w(x) = e^{−x²}. They satisfy the recursion relation:

    H_n(x) = 2x H_{n−1}(x) − 2(n − 1) H_{n−2}(x),   n ≥ 2,

with H_0(x) = 1, H_1(x) = 2x. Thus,

    a_n = 0,   b_n = 1/2,   c_n = n − 1,

and

    α_n = 2x,   β_n = 2(n − 1).

So,

    H_n(x) = det [ 2x   1                        ]
                 [  2  2x    1                   ]
                 [      4   2x    ⋱              ]
                 [           ⋱     ⋱       1     ]
                 [              2(n−1)    2x     ],   n ≥ 2.
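The same recurrence-as-determinant check works for the Laguerre and Hermite families; the sketch below (helper names are ours) runs both recursions and compares against the degree-2 and degree-3 polynomials they generate. Note that the Laguerre recursion above produces n!·L_n for the conventionally normalized L_n:

```python
from fractions import Fraction as F

def by_recurrence(p1, alpha, beta, n, x):
    """Run gamma_i = alpha(i, x)*gamma_{i-1} - beta(i)*gamma_{i-2}
    from gamma_0 = 1 and gamma_1 = p1(x)."""
    g_prev, g = F(1), p1(x)
    for i in range(2, n + 1):
        g_prev, g = g, alpha(i, x) * g - beta(i) * g_prev
    return g

laguerre = lambda n, x: by_recurrence(lambda x: 1 - x,
                                      lambda i, x: 2 * i - 1 - x,
                                      lambda i: (i - 1) ** 2, n, x)
hermite = lambda n, x: by_recurrence(lambda x: 2 * x,
                                     lambda i, x: 2 * x,
                                     lambda i: 2 * (i - 1), n, x)

for x in (F(0), F(1), F(5, 2), F(-3)):
    assert laguerre(2, x) == x ** 2 - 4 * x + 2          # = 2! * L_2(x)
    assert laguerre(3, x) == -x ** 3 + 9 * x ** 2 - 18 * x + 6
    assert hermite(2, x) == 4 * x ** 2 - 2
    assert hermite(3, x) == 8 * x ** 3 - 12 * x
```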
4.4.4. Chebyshev polynomials
Chebyshev polynomials are orthogonal on (−1, 1) with respect to the weight
function w(x) = 1/√(1 − x²). They satisfy the recursion relation:

    T_n(x) = 2x T_{n−1}(x) − T_{n−2}(x),   n ≥ 2,

with T_0(x) = 1, T_1(x) = x. Thus,

    a_n = 0,   b_n = c_n = 1/2,

and

    α_n = 2x,   β_n = 1.
So,

    T_n(x) = det [ x   1                   ]
                 [ 1  2x   1               ]
                 [     1  2x   ⋱           ]
                 [         ⋱    ⋱     1    ]
                 [              1    2x    ],   n ≥ 2.
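For the Chebyshev family there is an independent identity, T_n(cos θ) = cos(nθ), which exercises the recurrence (and hence the determinant characterization) in floating point:

```python
import math

def chebyshev(n, x):
    # T_n via T_i = 2x*T_{i-1} - T_{i-2}, with T_0 = 1, T_1 = x
    t_prev, t = 1.0, x
    for _ in range(2, n + 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t if n >= 1 else 1.0

for n in range(2, 9):
    for theta in (0.3, 1.1, 2.5):
        assert abs(chebyshev(n, math.cos(theta)) - math.cos(n * theta)) < 1e-9
```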
5. Conclusion
By using an idea as simple as the LU decomposition of a tridiagonal matrix,
we have constructed an algorithm to calculate its inverse. The algorithm is
robust and easy to implement. The cornerstone of this construction is the simple
second-order linear homogeneous difference equation (2.2). Moreover, the relation
between the quantities {γ_i}_{i=0}^{k} and the determinants of the principal
submatrices makes the algorithm applicable to a number of problems, such as:
numerical solution of special types of boundary value problems, second-order
difference equations posed as boundary value problems (with constant and
variable coefficients), and computing various types of orthogonal polynomials.
References
[1] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, 2nd edition, Dover Publications, 1994.
[2] R. K. Mallik, The inverse of a tridiagonal matrix, Linear Algebra and Its Applications, 325 (2001), 109-139.
[3] G. Strang, Linear Algebra and Its Applications, 2nd edition, Academic Press, New York,
1980.
[4] G. Szego, Orthogonal Polynomials, 4th edition, AMS, Providence, RI, 1981.
Department of Basic Sciences, University of Sharjah, Sharjah, United Arab Emirates.
E-mail: [email protected]