Applied Mathematics and Computation 197 (2008) 345–357
Explicit formula for the inverse of a tridiagonal matrix
by backward continued fractions
Emrah Kılıç
TOBB University of Economics and Technology, Mathematics Department, 06560 Ankara, Turkey
Abstract
In this paper, we consider a general tridiagonal matrix and give an explicit formula for the elements of its inverse. For this purpose, starting from the usual continued fraction, we define the backward continued fraction of a real number and give some basic results on backward continued fractions. We give the relationships between the usual and backward continued fractions. Then we reobtain the LU factorization and determinant of a tridiagonal matrix. Furthermore, we give an efficient and fast computing method for the elements of the inverse of a tridiagonal matrix by backward continued fractions. Comparing an earlier result with ours on the elements of the inverse of a tridiagonal matrix, one sees that our method is more convenient, efficient and fast.
© 2007 Published by Elsevier Inc.
Keywords: Tridiagonal matrix; Factorization; Inverse; Backward continued fraction
1. Introduction
There has been increasing interest in tridiagonal matrices in many different theoretical fields, especially in
applicative fields such as numerical analysis, orthogonal polynomials, engineering, telecommunication system
analysis, system identification, signal processing (e.g., speech decoding, deconvolution), special functions, partial differential equations and naturally linear algebra (see [3,5,10,13,14,18]). In many of these areas, inversions
of tridiagonal matrices are necessary. Parallel computations and efficient algorithms [7,15], indirect formulas [1,2,14,17,19], and direct formulas for certain special cases [5,13] are known for such inversions. Some authors consider a general tridiagonal matrix of finite order and then describe its LU factorizations and determine the determinant and inverse of a tridiagonal matrix under certain conditions (see [4,6,8,11,16]).
Recently, an explicit formula for the elements of the inverse of a general tridiagonal matrix was given by Mallik [12]. The author's approach is based on linear difference equations: the author extends results on the explicit solution of a second-order linear homogeneous difference equation with variable coefficients to the nonhomogeneous case, and then obtains the explicit formula by applying these extended results to a boundary value problem.
E-mail address: [email protected]
doi:10.1016/j.amc.2007.07.046
In [4], the authors give two types of LU factorization of the n × n tridiagonal matrix A,
$$A = \begin{bmatrix} d_1 & a_1 & & \\ b_2 & d_2 & \ddots & \\ & \ddots & \ddots & a_{n-1}\\ & & b_n & d_n \end{bmatrix},$$
by setting
$$\beta_i = \begin{cases} d_1 & \text{if } i = 1,\\ d_i - \dfrac{b_i}{\beta_{i-1}}\, a_{i-1} & \text{if } i = 2, \ldots, n, \end{cases}$$
as $A = L_1 U_1$ and $A = L_2 U_2$, where
$$L_1 = \begin{bmatrix} 1 & & & \\ \frac{b_2}{\beta_1} & 1 & & \\ & \ddots & \ddots & \\ & & \frac{b_n}{\beta_{n-1}} & 1 \end{bmatrix} \quad \text{and} \quad U_1 = \begin{bmatrix} \beta_1 & a_1 & & \\ & \beta_2 & \ddots & \\ & & \ddots & a_{n-1}\\ & & & \beta_n \end{bmatrix},$$
and
$$L_2 = \begin{bmatrix} \beta_1 & & & \\ b_2 & \beta_2 & & \\ & \ddots & \ddots & \\ & & b_n & \beta_n \end{bmatrix} \quad \text{and} \quad U_2 = \begin{bmatrix} 1 & \frac{a_1}{\beta_1} & & \\ & 1 & \ddots & \\ & & \ddots & \frac{a_{n-1}}{\beta_{n-1}}\\ & & & 1 \end{bmatrix},$$
called the Doolittle and Crout factorizations, respectively.
A general continued fraction representation of a real number x is one of the form
$$x = a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \cfrac{b_3}{\ddots + \cfrac{b_n}{a_n}}}}, \quad (1)$$
where $a_0, a_1, \ldots$ and $b_1, b_2, \ldots$ are integers. Such representations are sometimes written in the form $x = a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}$ for compactness.

Definition 1. The fraction
$$C_k = a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_k}{a_k}$$
with $k = 0, 1, \ldots$, where $k \le n$, is called the kth forward convergent of the continued fraction (1).

Convergents are usually denoted $C_n = \frac{P_n}{Q_n}$, where the sequences $\{P_n\}$ and $\{Q_n\}$ are defined by the recurrence relations, for $n > 0$ [9],
$$P_n = a_n P_{n-1} + b_n P_{n-2}, \qquad P_{-1} = 1, \quad P_0 = a_0,$$
$$Q_n = a_n Q_{n-1} + b_n Q_{n-2}, \qquad Q_{-1} = 0, \quad Q_0 = 1. \quad (2)$$
In this paper, we define the backward continued fraction (BCF) of a real number and derive some elementary properties of its convergents. Using backward continued fractions, we also give some results on the LU factorization and determinant of a tridiagonal matrix. The notion of a backward continued fraction was first used in [20] for vector-valued continued fractions. Further, we present an explicit formula for the elements of the inverse of a general tridiagonal matrix; our approach is based on backward continued fractions. We note that in subsequent studies we will again use backward continued fractions for LU factorizations, determinants and inverses of certain Toeplitz matrices.
2. Backward continued fraction
In this section, we give some new definitions and results. Let $a_0, a_1, \ldots, a_n$ be real numbers. We define the simple backward continued fraction in the following inductive manner.

Definition 2 (Backward continued fraction).
$$[a_0]_b = a_0 \quad \text{and} \quad [a_0, a_1, \ldots, a_n]_b = a_n + \frac{1}{[a_0, a_1, \ldots, a_{n-1}]_b} \quad \text{for } n \ge 1.$$
For example,
$$[2]_b = 2,$$
$$[2, 3]_b = 3 + \frac{1}{[2]_b} = 3 + \frac{1}{2} = \frac{7}{2},$$
$$[2, 3, 8]_b = 8 + \frac{1}{[2, 3]_b} = 8 + \frac{1}{\frac{7}{2}} = 8 + \frac{2}{7} = \frac{58}{7}.$$
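The inductive definition above translates directly into a short routine. The following is a minimal sketch (the function name is ours, not the paper's) that evaluates $[a_0, \ldots, a_n]_b$ in one left-to-right pass, using exact rational arithmetic:

```python
from fractions import Fraction

def bcf(terms):
    """Evaluate the simple backward continued fraction [a0, a1, ..., an]_b.

    Definition 2: [a0]_b = a0 and [a0,...,an]_b = an + 1/[a0,...,a_{n-1}]_b.
    """
    value = Fraction(terms[0])
    for a in terms[1:]:
        value = Fraction(a) + 1 / value
    return value

print(bcf([2]))        # 2
print(bcf([2, 3]))     # 7/2
print(bcf([2, 3, 8]))  # 58/7
```

The printed values reproduce the three worked examples above.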
Note that, in general,
$$[a_0, a_1]_b = a_1 + \frac{1}{[a_0]_b} = a_1 + \frac{1}{a_0},$$
$$[a_0, a_1, a_2]_b = a_2 + \frac{1}{[a_0, a_1]_b} = a_2 + \cfrac{1}{a_1 + \cfrac{1}{a_0}},$$
and
$$[a_0, a_1, \ldots, a_n]_b = [a_n, a_{n-1}, \ldots, a_0],$$
where the right-hand side is a usual continued fraction.
The following theorem shows that a given backward continued fraction can be represented by another backward continued fraction with one fewer term.

Theorem 3. Let the $a_i$ be real numbers. Then $[a_0, a_1, \ldots, a_n]_b = \big[a_1 + \frac{1}{a_0},\, a_2, \ldots, a_n\big]_b$.

Proof. The proof is easy. □

For example, $[1, 3, 8, 2]_b = [4, 8, 2]_b$.

In general, the $a_i$ are called the terms, or partial quotients, of the backward continued fraction. Clearly, if all the $a_i$ are rational numbers (or integers), then $[a_0, a_1, \ldots, a_n]_b$ is rational.
Definition 4. Let $A = [a_0, a_1, \ldots, a_n]_b$ be a backward continued fraction and let $C_k^b = [a_0, a_1, \ldots, a_k]_b$. $C_k^b$ is called the kth convergent of the backward continued fraction $A$.
For example, let $A = [8, 4, 2, 3]_b$. Then
$$C_0^b = [8]_b = 8,$$
$$C_1^b = [8, 4]_b = 4 + \frac{1}{8} = \frac{33}{8},$$
$$C_2^b = [8, 4, 2]_b = 2 + \cfrac{1}{4 + \cfrac{1}{8}} = 2 + \frac{8}{33} = \frac{74}{33},$$
$$C_3^b = A = [8, 4, 2, 3]_b = 3 + \cfrac{1}{2 + \cfrac{1}{4 + \cfrac{1}{8}}} = 3 + \frac{33}{74} = \frac{255}{74}.$$
Next we develop an iterative method to generate the convergents of a backward continued fraction, assuming that we have the terms $a_k$. Define the sequence $\{p_n\}$ by the recurrence relation
$$p_n = a_n p_{n-1} + p_{n-2} \quad \text{for } n \ge 1,$$
where $p_{-1} = 1$, $p_0 = a_0$. The sequence $\{p_n\}$ is obtained from the sequence $\{P_n\}$ by taking $b_n = 1$ for all $n$ in (2).
Theorem 5. Given a backward continued fraction $A = [a_0, a_1, \ldots, a_n]_b$. If $0 \le k \le n$ and $C_k^b$ is the kth backward convergent to $A$, $C_k^b = [a_0, a_1, \ldots, a_k]_b$, then $C_k^b = \frac{p_k}{p_{k-1}}$.

Proof (Induction on k). If $k = 0$, then by the definition of the sequence $\{p_n\}$, $\frac{p_0}{p_{-1}} = a_0 = [a_0]_b = C_0^b$. If $k = 1$, then
$$\frac{p_1}{p_0} = \frac{a_1 a_0 + 1}{a_0} = a_1 + \frac{1}{a_0} = [a_0, a_1]_b = C_1^b.$$
Suppose that the claim is true for $k - 1$; we show that it is true for $k$. By the induction hypothesis,
$$\frac{p_k}{p_{k-1}} = \frac{a_k p_{k-1} + p_{k-2}}{p_{k-1}} = a_k + \frac{1}{\frac{p_{k-1}}{p_{k-2}}} = a_k + \frac{1}{[a_0, a_1, \ldots, a_{k-1}]_b}.$$
Therefore, by the definition of the backward continued fraction,
$$\frac{p_k}{p_{k-1}} = [a_0, a_1, \ldots, a_{k-1}, a_k]_b = C_k^b.$$
So the proof is complete. □
For example, we compute $[8, 4, 2, 3]_b$ by Theorem 5. For $p_{-1} = 1$, $p_0 = 8$, we have $p_1 = 33$, $p_2 = 74$, $p_3 = 255$, and so $[8, 4, 2, 3]_b = \frac{p_3}{p_2} = \frac{255}{74}$.
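Theorem 5 gives a linear-time way to produce every convergent at once from the recurrence for $\{p_n\}$. A minimal sketch (the function name is ours):

```python
from fractions import Fraction

def bcf_convergents(terms):
    """Yield the convergents C_k^b = p_k / p_{k-1}, where p_n = a_n p_{n-1} + p_{n-2}."""
    p_prev, p = 1, terms[0]            # p_{-1} = 1, p_0 = a_0
    yield Fraction(p, p_prev)
    for a in terms[1:]:
        p_prev, p = p, a * p + p_prev  # three-term recurrence of Theorem 5
        yield Fraction(p, p_prev)

for c in bcf_convergents([8, 4, 2, 3]):
    print(c)   # 8, 33/8, 74/33, 255/74
```

The output matches the convergents $C_0^b, \ldots, C_3^b$ computed for $[8, 4, 2, 3]_b$ above.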
From the above results, we have the following: let $A = [a_0, a_1, \ldots, a_n]$ be a finite continued fraction, in the usual manner, with kth convergent $C_k$, and let $B = [a_0, a_1, \ldots, a_n]_b$ be a backward continued fraction with kth convergent $C_k^b$. Then the numerators of $C_k^b$ and $C_k$ are the same.
Let $a_0, a_1, \ldots$ and $b_1, b_2, \ldots, b_n$ be integers. Normally, a general continued fraction representation of a real number x is one of the form
$$x = a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \cfrac{b_3}{\ddots + \cfrac{b_n}{a_n}}}}.$$
Now we define a general backward continued fraction. Let $a_0, a_1, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$ be integers.

Definition 6. The general backward continued fraction has the form
$$C = a_n + \cfrac{b_n}{a_{n-1} + \cfrac{b_{n-1}}{a_{n-2} + \cfrac{b_{n-2}}{\ddots + \cfrac{b_1}{a_0}}}}.$$
We denote such representations in the form
$$C = \Big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}\Big]_b$$
for compactness. We denote the kth convergent of the general backward fraction by
$$C_k^b = \Big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_k}{a_k}\Big]_b.$$
For example, we compute $\big[3 + \frac{2}{5+}\,\frac{7}{4}\big]_b$. Thus $a_0 = 3$, $a_1 = 5$, $a_2 = 4$ and $b_1 = 2$, $b_2 = 7$, and
$$\Big[3 + \frac{2}{5+}\,\frac{7}{4}\Big]_b = a_2 + \cfrac{b_2}{a_1 + \cfrac{b_1}{a_0}} = 4 + \cfrac{7}{5 + \cfrac{2}{3}} = \frac{89}{17}.$$
The backward continued fraction $\big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}\big]_b$ is equal to the usual continued fraction $a_n + \frac{b_n}{a_{n-1}+}\,\frac{b_{n-1}}{a_{n-2}+}\cdots\frac{b_1}{a_0}$. Indeed, $\big[3 + \frac{2}{5+}\,\frac{7}{4}\big]_b = 4 + \frac{7}{5+}\,\frac{2}{3}$.
We give a ratio form for a general backward fraction. Recall that the sequence $\{P_n\}$ is as in (2); that is, for $n > 0$,
$$P_n = a_n P_{n-1} + b_n P_{n-2},$$
where $P_{-1} = 1$, $P_0 = a_0$. Then we have the following theorem as a generalization of Theorem 5.
Theorem 7. Given a general backward continued fraction $A = \big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}\big]_b$. If $0 \le k \le n$ and $C_k^b$ is the kth backward convergent to $A$, that is, $C_k^b = \big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_k}{a_k}\big]_b$, then $C_k^b = \frac{P_k}{P_{k-1}}$.

Proof (Induction on k). If $k = 0$, then $C_0^b = [a_0]_b = a_0 = \frac{P_0}{P_{-1}}$. If $k = 1$, then
$$C_1^b = \Big[a_0 + \frac{b_1}{a_1}\Big]_b = a_1 + \frac{b_1}{a_0} = \frac{a_1 a_0 + b_1}{a_0},$$
and by the definition of the sequence $\{P_n\}$, $\frac{P_1}{P_0} = \frac{a_1 a_0 + b_1}{a_0}$. Suppose that the equation holds for $k$; we show that it holds for $k + 1$. By the definition of the fraction and the inductive hypothesis,
$$C_{k+1}^b = a_{k+1} + \frac{b_{k+1}}{C_k^b} = a_{k+1} + \frac{b_{k+1}}{\frac{P_k}{P_{k-1}}} = \frac{a_{k+1} P_k + b_{k+1} P_{k-1}}{P_k} = \frac{P_{k+1}}{P_k}.$$
So the proof is complete. □
By the well-known results and the results above, we have the following corollary.

Corollary 8. Given a usual continued fraction $a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}$ and a backward continued fraction $\big[a_0 + \frac{b_1}{a_1+}\,\frac{b_2}{a_2+}\cdots\frac{b_n}{a_n}\big]_b$. Then the numerators of these fractions are the same, $P_n$.
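Theorem 7 and Corollary 8 can be exercised numerically. The sketch below (the function name is ours) runs the recurrence $P_n = a_n P_{n-1} + b_n P_{n-2}$ once and reads off both the common numerator $P_n$ and the backward convergent $P_n / P_{n-1}$:

```python
from fractions import Fraction

def general_bcf(a, b):
    """a = [a0,...,an], b = [b1,...,bn]; return (P_n, C_n^b = P_n / P_{n-1})."""
    P_prev, P = 1, Fraction(a[0])      # P_{-1} = 1, P_0 = a_0
    for a_k, b_k in zip(a[1:], b):
        P_prev, P = P, a_k * P + b_k * P_prev
    return P, Fraction(P, P_prev)

# Worked example from the text: a0=3, a1=5, a2=4 and b1=2, b2=7.
P, C = general_bcf([3, 5, 4], [2, 7])
print(P, C)   # 89 89/17
```

The numerator 89 is shared with the forward continued fraction $4 + \frac{7}{5+}\,\frac{2}{3}$, as Corollary 8 asserts.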
Now, using backward continued fractions, we reobtain the LU factorization of a tridiagonal matrix.
3. LU Factorization of a tridiagonal matrix
In this section, we give the LU decomposition of a general tridiagonal matrix by backward continued
fractions.
Let $G_n = [g_{ij}]$ be an n × n tridiagonal matrix with $g_{k,k} = x_k$ for $1 \le k \le n$, and $g_{k+1,k} = z_k$ and $g_{k,k+1} = y_k$ for $1 \le k \le n - 1$. That is,
$$G_n = \begin{bmatrix} x_1 & y_1 & & \\ z_1 & x_2 & \ddots & \\ & \ddots & \ddots & y_{n-1}\\ & & z_{n-1} & x_n \end{bmatrix}. \quad (3)$$
Suppose that the entries of $G_n$ are given; that is, we have $x_1, \ldots, x_n$, $y_1, \ldots, y_{n-1}$ and $z_1, \ldots, z_{n-1}$. We define a general backward continued fraction $C_n^b$ by the entries of the matrix $G_n$ as follows:
$$C_n^b = \Big[x_1 + \frac{-y_1 z_1}{x_2+}\,\frac{-y_2 z_2}{x_3+}\cdots\frac{-y_{n-1} z_{n-1}}{x_n}\Big]_b = x_n - \cfrac{y_{n-1} z_{n-1}}{x_{n-1} - \cfrac{y_{n-2} z_{n-2}}{\ddots - \cfrac{y_1 z_1}{x_1}}}. \quad (4)$$
If we rearrange the sequence $\{P_n\}$ with $a_i = x_{i+1}$ for $0 \le i \le n-1$ and $b_i = -y_i z_i$ for $1 \le i \le n-1$, then
$$P_n = x_{n+1} P_{n-1} - y_n z_n P_{n-2}, \quad (5)$$
where $P_{-1} = 1$, $P_0 = x_1$.
Thus the first few convergents of $C_n^b$ are
$$C_1^b = [x_1]_b = \frac{P_0}{P_{-1}} = x_1,$$
$$C_2^b = \Big[x_1 + \frac{-y_1 z_1}{x_2}\Big]_b = \frac{P_1}{P_0} = \frac{x_1 x_2 - y_1 z_1}{x_1},$$
$$C_3^b = \Big[x_1 + \frac{-y_1 z_1}{x_2+}\,\frac{-y_2 z_2}{x_3}\Big]_b = \frac{P_2}{P_1} = \frac{x_1 x_2 x_3 - y_1 z_1 x_3 - y_2 z_2 x_1}{x_1 x_2 - y_1 z_1}.$$
We define two n × n matrices as follows:
$$L_1 = [l_{ij}] = \begin{bmatrix} 1 & & & \\ \frac{z_1}{C_1^b} & 1 & & \\ & \frac{z_2}{C_2^b} & \ddots & \\ & & \frac{z_{n-1}}{C_{n-1}^b} & 1 \end{bmatrix} \quad (6)$$
and
$$U_1 = [u_{ij}] = \begin{bmatrix} C_1^b & y_1 & & \\ & C_2^b & y_2 & \\ & & \ddots & y_{n-1}\\ & & & C_n^b \end{bmatrix}, \quad (7)$$
where $C_i^b$ is the $i$th convergent of the backward continued fraction given by (4) for $1 \le i \le n$.
Now we give the Doolittle type LU decomposition of the matrix $G_n$ with the following theorem.

Theorem 9. Let the matrix $G_n$ and the general backward continued fraction $C_n^b$ be as in (3) and (4), respectively. Then the LU decomposition of the matrix $G_n$ has the form
$$G_n = L_1 U_1,$$
where $L_1$ and $U_1$ are given by (6) and (7), respectively.

Proof. We consider three cases. First, we consider the case $i = j$. By the definitions of the matrices $U_1$ and $L_1$, we have $l_{ij} = 0$ for $i > j+1$ and $j > i$, $l_{ii} = 1$ for $1 \le i \le n$, and $u_{ij} = 0$ for $i > j$ and $j > i+1$, $u_{i,i+1} = y_i$ for $1 \le i \le n-1$. Then, by matrix multiplication, we have for $1 \le i \le n$
$$g_{ii} = \sum_{k=1}^{n} l_{i,k} u_{k,i} = l_{i,i-1} u_{i-1,i} + l_{i,i} u_{i,i} = y_{i-1}\,\frac{z_{i-1}}{C_{i-1}^b} + C_i^b.$$
Since, by (4), $C_i^b = x_i - \frac{y_{i-1} z_{i-1}}{C_{i-1}^b}$, the last equation gives, for $1 \le i \le n$,
$$g_{ii} = x_i.$$
Now we consider the case $i + 1 = j$. Thus, for $1 \le i \le n-1$,
$$g_{i,i+1} = \sum_{k=1}^{n} l_{i,k} u_{k,i+1} = l_{i,i} u_{i,i+1} = u_{i,i+1} = y_i.$$
Finally we consider the case $i = j + 1$. Then we have, for $1 \le i \le n-1$,
$$g_{i+1,i} = \sum_{k=1}^{n} l_{i+1,k} u_{k,i} = l_{i+1,i} u_{i,i} = \frac{z_i}{C_i^b}\, C_i^b = z_i.$$
Thus the proof is complete for all cases. □
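Theorem 9 is easy to exercise numerically. The sketch below (helper names are ours) computes the convergents $C_i^b$ through the BCF recurrence $C_i^b = x_i - y_{i-1} z_{i-1} / C_{i-1}^b$, assembles $L_1$ and $U_1$ as in (6) and (7), and multiplies them back; the data are taken, as an assumption for illustration, from the 4 × 4 matrix used later in Section 4:

```python
from fractions import Fraction

def bcf_diagonal(x, y, z):
    """Convergents C_i^b of (4): C_1^b = x_1, C_i^b = x_i - y_{i-1} z_{i-1} / C_{i-1}^b."""
    C = [Fraction(x[0])]
    for i in range(1, len(x)):
        C.append(Fraction(x[i]) - Fraction(y[i-1] * z[i-1]) / C[-1])
    return C

def doolittle(x, y, z):
    n, C = len(x), bcf_diagonal(x, y, z)
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    U = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = C[i]
        if i < n - 1:
            U[i][i+1] = Fraction(y[i])         # superdiagonal of U_1
            L[i+1][i] = Fraction(z[i]) / C[i]  # subdiagonal of L_1
    return L, U

x, y, z = [2, 1, 2, Fraction(1999, 1000)], [1, -2, -3], [-1, 3, -4]
L, U = doolittle(x, y, z)
prod = [[sum(L[i][k] * U[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
# prod recovers the tridiagonal matrix with diagonal x, superdiagonal y, subdiagonal z
```

Every entry of `prod` agrees exactly with the tridiagonal matrix $G_4$, since the arithmetic is done over rationals.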
Therefore we have the following corollary.

Corollary 10. Let the matrix $G_n$ and the general backward continued fraction $C_n^b$ be as in (3) and (4), respectively. Then
$$\det G_n = \prod_{i=1}^{n} C_i^b,$$
where $C_i^b$ is the $i$th convergent of (4).

Proof. From Theorem 9, we have $G_n = L_1 U_1$, so $\det G_n = \det(L_1 U_1)$. Since $L_1$ is a unit lower bidiagonal matrix and $U_1$ is an upper bidiagonal matrix whose diagonal entries are the convergents of (4), the conclusion is immediately seen. □
We can also express $\det G_n$ in terms of the sequence $\{P_n\}$.

Corollary 11. Let the matrix $G_n$ be as in (3). Then for $n > 1$
$$\det G_n = P_{n-1},$$
where $P_n$ is given by (5).

Proof. From Theorem 9 and Corollary 10, we have $\det G_n = \prod_{i=1}^{n} C_i^b$ and $C_i^b = \frac{P_{i-1}}{P_{i-2}}$. Then we write
$$\det G_n = \frac{P_0}{P_{-1}}\,\frac{P_1}{P_0}\cdots\frac{P_{n-2}}{P_{n-3}}\,\frac{P_{n-1}}{P_{n-2}} = \frac{P_{n-1}}{P_{-1}},$$
and since $P_{-1} = 1$, $\det G_n = P_{n-1}$. So the proof is complete. □
Now we consider the second kind of LU factorization of the matrix $G_n$, called the Crout factorization. Define two n × n matrices as follows:
$$L_2 = [t_{ij}] = \begin{bmatrix} C_1^b & & & \\ z_1 & C_2^b & & \\ & z_2 & \ddots & \\ & & z_{n-1} & C_n^b \end{bmatrix} \quad (8)$$
and
$$U_2 = [h_{ij}] = \begin{bmatrix} 1 & \frac{y_1}{C_1^b} & & \\ & 1 & \frac{y_2}{C_2^b} & \\ & & \ddots & \frac{y_{n-1}}{C_{n-1}^b}\\ & & & 1 \end{bmatrix}, \quad (9)$$
where $C_i^b$ is the $i$th convergent of the backward continued fraction given by (4) for $1 \le i \le n$.
Then we have the following theorem.

Theorem 12. Let the matrix $G_n$ and the general backward continued fraction $C_n^b$ be as in (3) and (4), respectively. Then the Crout type LU decomposition of the matrix $G_n$ has the form
$$G_n = L_2 U_2,$$
where $L_2$ and $U_2$ are given by (8) and (9), respectively.

Proof. By the definitions of the matrices $L_2$ and $U_2$, $t_{ii} = C_i^b$ for all $i$, $t_{i+1,i} = z_i$ for $1 \le i \le n-1$, $h_{ii} = 1$ for all $i$, $h_{i,i+1} = \frac{y_i}{C_i^b}$ for $1 \le i \le n-1$, and all other entries are 0. We start with the case $i = j$. By the definition of the $C_i^b$'s, for $1 \le i \le n$,
$$g_{ii} = \sum_{k=1}^{n} t_{i,k} h_{k,i} = t_{ii} h_{ii} + t_{i,i-1} h_{i-1,i} = C_i^b + z_{i-1}\Big(\frac{y_{i-1}}{C_{i-1}^b}\Big) = x_i.$$
Consider the case $i + 1 = j$. Thus for $1 \le i \le n-1$,
$$g_{i,i+1} = \sum_{k=1}^{n} t_{i,k} h_{k,i+1} = t_{ii} h_{i,i+1} = C_i^b\Big(\frac{y_i}{C_i^b}\Big) = y_i.$$
Finally consider the case $i = j + 1$. Thus for $1 \le i \le n-1$,
$$g_{i+1,i} = \sum_{k=1}^{n} t_{i+1,k} h_{k,i} = t_{i+1,i} h_{i,i} = z_i.$$
So the proof is complete. □
By Theorem 12, we have $\det G_n = \det(L_2 U_2)$. Since $\det L_2 = C_1^b C_2^b \cdots C_n^b$ and $\det U_2 = 1$, we get $\det G_n = C_1^b C_2^b \cdots C_n^b$. By the definition of the $C_i^b$'s, we reobtain the result of Corollary 11.
4. Inversion of a tridiagonal matrix by BCF
In this section, we use the advantage of backward continued fractions and give inversion of a tridiagonal
matrix. Before this, we give some results about the inversion of matrices L1 and U1. Then we have the following Lemma.
Lemma 13. Let the n × n unit lower bidiagonal matrix $L_1$ have the form (6) and let $Q_1 = (q_{ij}) = L_1^{-1}$. Then
$$q_{ij} = \begin{cases} (-1)^{i+j} \prod\limits_{k=j}^{i-1} \dfrac{z_k}{C_k^b} & \text{if } i > j,\\[1mm] 1 & \text{if } i = j,\\[1mm] 0 & \text{if } i < j. \end{cases}$$
Proof. Denote $L_1 Q_1$ by $R = (r_{ij})$. If $i = j$, then $r_{ii} = \sum_{k=1}^{n} l_{ik} q_{ki} = l_{ii} q_{ii} = 1$. If $i > j \ge 1$, then by the definitions of $L_1$ and $Q_1$, for $1 < i \le n$,
$$r_{ij} = \sum_{k=1}^{n} l_{ik} q_{kj} = l_{i,i-1} q_{i-1,j} + l_{ii} q_{ij} = \frac{z_{i-1}}{C_{i-1}^b}\,(-1)^{i+j-1} \prod_{k=j}^{i-2} \frac{z_k}{C_k^b} + (-1)^{i+j} \prod_{k=j}^{i-1} \frac{z_k}{C_k^b} = (-1)^{i+j-1} \prod_{k=j}^{i-1} \frac{z_k}{C_k^b} + (-1)^{i+j} \prod_{k=j}^{i-1} \frac{z_k}{C_k^b} = 0.$$
If $i + 1 = j$, then we immediately obtain $r_{i,i+1} = 0$, and it is clear that $r_{ij} = 0$ for the case $i < j$. So we obtain $R = I_n$, where $I_n$ is the n × n unit matrix. Similarly it can be shown that $Q_1 L_1 = I_n$. Thus the proof is complete. □
Lemma 14. Let the n × n upper bidiagonal matrix $U_1$ have the form (7) and let $H_1 = (h_{ij}) = U_1^{-1}$. Then
$$h_{ij} = \begin{cases} \dfrac{(-1)^{i+j}}{C_j^b} \prod\limits_{k=i}^{j-1} \dfrac{y_k}{C_k^b} & \text{if } i < j,\\[1mm] \dfrac{1}{C_i^b} & \text{if } i = j,\\[1mm] 0 & \text{if } i > j. \end{cases}$$
Proof. Denote $U_1 H_1$ by $E = (e_{ij})$. If $i = j$, then, by the definitions of $U_1$ and $H_1$, we obtain $e_{ii} = \sum_{k=1}^{n} u_{ik} h_{ki} = u_{ii} h_{ii} = C_i^b\,\frac{1}{C_i^b} = 1$. It is clear that $e_{ij} = 0$ for the case $i > j$. Finally we consider the case $i < j$. Thus, by the definitions of $U_1$ and $H_1$,
$$e_{ij} = \sum_{k=1}^{n} u_{ik} h_{kj} = u_{ii} h_{ij} + u_{i,i+1} h_{i+1,j} = (-1)^{i+j}\, C_i^b\, \frac{\prod_{k=i}^{j-1} y_k}{\prod_{k=i}^{j} C_k^b} + (-1)^{i+j+1}\, y_i\, \frac{\prod_{k=i+1}^{j-1} y_k}{\prod_{k=i+1}^{j} C_k^b} = (-1)^{i+j} \frac{\prod_{k=i}^{j-1} y_k}{\prod_{k=i+1}^{j} C_k^b} + (-1)^{i+j+1} \frac{\prod_{k=i}^{j-1} y_k}{\prod_{k=i+1}^{j} C_k^b} = 0.$$
Thus we see that $E = I_n$ (the n × n unit matrix). Also it can be shown that $H_1 U_1 = I_n$. So we have the conclusion. □
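Lemmas 13 and 14 can be spot-checked by building $Q_1$ and $H_1$ directly from the closed forms and multiplying against $L_1$ and $U_1$. A minimal sketch (names ours); the numerical data are assumed, for illustration, from the 4 × 4 example of this section:

```python
from fractions import Fraction
from math import prod

y, z = [1, -2, -3], [-1, 3, -4]
C = [Fraction(2), Fraction(3, 2), Fraction(6), Fraction(-1, 1000)]  # convergents of (4)
n = len(C)

Q = [[Fraction(0)] * n for _ in range(n)]  # Q_1 = L_1^{-1} per Lemma 13
H = [[Fraction(0)] * n for _ in range(n)]  # H_1 = U_1^{-1} per Lemma 14
for i in range(1, n + 1):                  # 1-based indices, as in the lemmas
    for j in range(1, n + 1):
        if i == j:
            Q[i-1][j-1] = Fraction(1)
            H[i-1][j-1] = 1 / C[i-1]
        elif i > j:   # q_ij = (-1)^{i+j} prod_{k=j}^{i-1} z_k / C_k^b
            Q[i-1][j-1] = (-1) ** (i + j) * prod(Fraction(z[k-1]) / C[k-1] for k in range(j, i))
        else:         # h_ij = (-1)^{i+j} (1/C_j^b) prod_{k=i}^{j-1} y_k / C_k^b
            H[i-1][j-1] = (-1) ** (i + j) / C[j-1] * prod(Fraction(y[k-1]) / C[k-1] for k in range(i, j))

# Rebuild L_1 and U_1 from (6) and (7) and confirm both inverses.
L1 = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
U1 = [[Fraction(0)] * n for _ in range(n)]
for i in range(n):
    U1[i][i] = C[i]
    if i < n - 1:
        L1[i+1][i] = Fraction(z[i]) / C[i]
        U1[i][i+1] = Fraction(y[i])
assert all(sum(L1[i][k] * Q[k][j] for k in range(n)) == (i == j) for i in range(n) for j in range(n))
assert all(sum(U1[i][k] * H[k][j] for k in range(n)) == (i == j) for i in range(n) for j in range(n))
```

Both assertions hold exactly, since every entry is a rational number.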
Suppose that $y_i z_i \ne 0$ for $1 \le i \le n-1$. Then we have the following theorem for the inversion of a tridiagonal matrix.
Theorem 15. Let the n × n tridiagonal matrix $G_n$ have the form (3) and let $G_n^{-1} = [w_{ij}]$ denote the inverse of $G_n$. Then
$$w_{ij} = \begin{cases} \dfrac{1}{C_i^b} + \sum\limits_{k=i+1}^{n} \dfrac{1}{C_k^b} \prod\limits_{t=i}^{k-1} \dfrac{y_t z_t}{(C_t^b)^2} & \text{if } i = j,\\[2mm] (-1)^{i+j} \Bigg(\prod\limits_{t=i}^{j-1} \dfrac{y_t}{C_t^b}\Bigg) \Bigg(\dfrac{1}{C_j^b} + \sum\limits_{k=j+1}^{n} \dfrac{1}{C_k^b} \prod\limits_{t=j}^{k-1} \dfrac{y_t z_t}{(C_t^b)^2}\Bigg) & \text{if } i < j,\\[2mm] (-1)^{i+j} \Bigg(\prod\limits_{t=j}^{i-1} \dfrac{z_t}{C_t^b}\Bigg) \Bigg(\dfrac{1}{C_i^b} + \sum\limits_{k=i+1}^{n} \dfrac{1}{C_k^b} \prod\limits_{t=i}^{k-1} \dfrac{y_t z_t}{(C_t^b)^2}\Bigg) & \text{if } i > j. \end{cases}$$
Proof. We first consider the case $i = j$. Since $h_{ij} = 0$ for $i > j$ and $G_n^{-1} = (L_1 U_1)^{-1} = H_1 Q_1$, matrix multiplication gives
$$w_{ii} = \sum_{k=1}^{n} h_{ik} q_{ki} = \sum_{k=i}^{n} h_{ik} q_{ki} = h_{ii} q_{ii} + \sum_{k=i+1}^{n} h_{ik} q_{ki} = \frac{1}{C_i^b} + \sum_{k=i+1}^{n} \Bigg[(-1)^{i+k} \Bigg(\prod_{t=i}^{k-1} y_t\Bigg) \Bigg(\prod_{t=i}^{k} C_t^b\Bigg)^{-1}\Bigg] \Bigg[(-1)^{i+k} \prod_{t=i}^{k-1} \frac{z_t}{C_t^b}\Bigg] = \frac{1}{C_i^b} + \sum_{k=i+1}^{n} \frac{1}{C_k^b} \prod_{t=i}^{k-1} \frac{y_t z_t}{(C_t^b)^2}.$$
If $i < j$, then by the definition of $Q_1$ we write
$$w_{ij} = \sum_{k=1}^{n} h_{ik} q_{kj} = \sum_{k=j}^{n} h_{ik} q_{kj} = h_{ij} q_{jj} + \sum_{k=j+1}^{n} h_{ik} q_{kj} = (-1)^{i+j} \Bigg(\prod_{k=i}^{j-1} y_k\Bigg) \Bigg(\prod_{k=i}^{j} C_k^b\Bigg)^{-1} + \sum_{k=j+1}^{n} \Bigg[(-1)^{i+k} \Bigg(\prod_{t=i}^{k-1} y_t\Bigg) \Bigg(\prod_{t=i}^{k} C_t^b\Bigg)^{-1}\Bigg] \Bigg[(-1)^{k+j} \prod_{t=j}^{k-1} \frac{z_t}{C_t^b}\Bigg].$$
Factoring $(-1)^{i+j} \prod_{t=i}^{j-1} \frac{y_t}{C_t^b}$ out of both terms yields
$$w_{ij} = (-1)^{i+j} \Bigg(\prod_{t=i}^{j-1} \frac{y_t}{C_t^b}\Bigg) \Bigg(\frac{1}{C_j^b} + \sum_{k=j+1}^{n} \frac{1}{C_k^b} \prod_{t=j}^{k-1} \frac{y_t z_t}{(C_t^b)^2}\Bigg).$$
Finally, we consider the last case $i > j$. Thus, by the definition of $H_1$,
$$w_{ij} = \sum_{k=1}^{n} h_{ik} q_{kj} = \sum_{k=i}^{n} h_{ik} q_{kj} = h_{ii} q_{ij} + \sum_{k=i+1}^{n} h_{ik} q_{kj} = \frac{1}{C_i^b}\,(-1)^{i+j} \prod_{t=j}^{i-1} \frac{z_t}{C_t^b} + \sum_{k=i+1}^{n} \Bigg[(-1)^{i+k} \Bigg(\prod_{t=i}^{k-1} y_t\Bigg) \Bigg(\prod_{t=i}^{k} C_t^b\Bigg)^{-1}\Bigg] \Bigg[(-1)^{k+j} \prod_{t=j}^{k-1} \frac{z_t}{C_t^b}\Bigg].$$
Factoring out $(-1)^{i+j} \prod_{t=j}^{i-1} \frac{z_t}{C_t^b}$ yields
$$w_{ij} = (-1)^{i+j} \Bigg(\prod_{t=j}^{i-1} \frac{z_t}{C_t^b}\Bigg) \Bigg(\frac{1}{C_i^b} + \sum_{k=i+1}^{n} \frac{1}{C_k^b} \prod_{t=i}^{k-1} \frac{y_t z_t}{(C_t^b)^2}\Bigg).$$
We have the conclusion for all cases. □
Considering the statement of Theorem 15, we can give the following corollary for fast computation of the elements of the inverse of a tridiagonal matrix.

Corollary 16. Let $w_{ij}$ denote the $(i,j)$th element of the inverse of the matrix $G_n$ given by (3). Then
$$w_{ij} = \begin{cases} \dfrac{1}{C_i^b} + \sum\limits_{k=i+1}^{n} \dfrac{1}{C_k^b} \prod\limits_{t=i}^{k-1} \dfrac{y_t z_t}{(C_t^b)^2} & \text{if } i = j,\\[2mm] (-1)^{i+j}\, w_{jj} \prod\limits_{t=i}^{j-1} \dfrac{y_t}{C_t^b} & \text{if } i < j,\\[2mm] (-1)^{i+j}\, w_{ii} \prod\limits_{t=j}^{i-1} \dfrac{z_t}{C_t^b} & \text{if } i > j. \end{cases} \quad (10)$$
Consequently, while computing the off-diagonal elements of the inverse of a tridiagonal matrix, we reduce the required computation by reusing the main diagonal elements of the inverse.
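Corollary 16 leads to a short inversion routine: compute the $n$ convergents, fill the diagonal by the sum in (10), then obtain each off-diagonal entry with one multiplication from its neighbour. A minimal sketch (names ours), using exact rational arithmetic:

```python
from fractions import Fraction

def tridiag_inverse(x, y, z):
    """Inverse of the tridiagonal matrix (3) via Corollary 16; entries are exact Fractions."""
    n = len(x)
    C = [Fraction(x[0])]                      # convergents C_i^b of (4)
    for i in range(1, n):
        C.append(Fraction(x[i]) - Fraction(y[i-1] * z[i-1]) / C[-1])
    W = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):                        # diagonal entries, first case of (10)
        s, fac = 1 / C[i], Fraction(1)
        for k in range(i + 1, n):
            fac *= Fraction(y[k-1] * z[k-1]) / C[k-1] ** 2
            s += fac / C[k]
        W[i][i] = s
    for j in range(n):                        # above the diagonal: w_ij = -(y_i/C_i) w_{i+1,j}
        for i in range(j - 1, -1, -1):
            W[i][j] = -Fraction(y[i]) / C[i] * W[i+1][j]
    for i in range(n):                        # below the diagonal: w_ij = -(z_j/C_j) w_{i,j+1}
        for j in range(i - 1, -1, -1):
            W[i][j] = -Fraction(z[j]) / C[j] * W[i][j+1]
    return W

# Sanity check on a symmetric matrix whose inverse is known:
# diag (2, 2, 2), off-diagonals 1 inverts to (1/4) [[3, -2, 1], [-2, 4, -2], [1, -2, 3]].
W = tridiag_inverse([2, 2, 2], [1, 1], [1, 1])
```

The two off-diagonal loops are the ratio form of the second and third cases of (10); each entry costs one division and one multiplication, so the whole inverse is obtained in $O(n^2)$ operations after the $O(n)$ convergent pass.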
If we compare earlier methods and our results for computing the elements of the inverse of a tridiagonal matrix, we see that our process is more convenient, efficient and fast.
As an example of Theorem 15 and Corollary 16, we now consider a tridiagonal matrix T of order 3 as shown:
$$T = [t_{ij}] = \begin{bmatrix} a_1 & b_1 & 0\\ d_1 & a_2 & b_2\\ 0 & d_2 & a_3 \end{bmatrix}.$$
Thus, according to our process, we construct the following backward continued fraction $C_3^b$:
$$C_3^b = a_3 - \cfrac{b_2 d_2}{a_2 - \cfrac{b_1 d_1}{a_1}}.$$
Then its convergents are as follows:
$$C_1^b = a_1, \qquad C_2^b = a_2 - \frac{b_1 d_1}{a_1} = \frac{a_1 a_2 - b_1 d_1}{a_1}$$
and
$$C_3^b = a_3 - \cfrac{b_2 d_2}{a_2 - \cfrac{b_1 d_1}{a_1}} = \frac{a_1 a_2 a_3 - b_1 d_1 a_3 - b_2 d_2 a_1}{a_1 a_2 - b_1 d_1}.$$
For $i = j$, by Theorem 15, the main diagonal elements of the inverse of the matrix $T = (t_{ij})$ are given by
$$t^{-1}_{11} = \frac{1}{C_1^b} + \frac{b_1 d_1}{(C_1^b)^2 C_2^b} + \frac{b_1 d_1 b_2 d_2}{(C_1^b)^2 (C_2^b)^2 C_3^b} = \frac{a_2 a_3 - b_2 d_2}{\Delta},$$
$$t^{-1}_{22} = \frac{1}{C_2^b} + \frac{b_2 d_2}{(C_2^b)^2 C_3^b} = \frac{a_1 a_3}{\Delta},$$
$$t^{-1}_{33} = \frac{1}{C_3^b} = \frac{a_1 a_2 - b_1 d_1}{\Delta},$$
where $\Delta = \det T = a_1 a_2 a_3 - a_1 b_2 d_2 - a_3 b_1 d_1$.
The upper diagonal elements of $T^{-1}$ are then given by
$$t^{-1}_{12} = -\frac{b_1}{C_1^b}\, t^{-1}_{22} = -\frac{b_1 a_3}{\Delta}, \qquad t^{-1}_{13} = \Bigg(\prod_{t=1}^{2} \frac{b_t}{C_t^b}\Bigg) t^{-1}_{33} = \frac{b_1 b_2}{\Delta}, \qquad t^{-1}_{23} = -\frac{b_2}{C_2^b}\, t^{-1}_{33} = -\frac{a_1 b_2}{\Delta}.$$
The lower diagonal elements of $T^{-1}$ are given by
$$t^{-1}_{21} = -\frac{d_1}{C_1^b}\, t^{-1}_{22} = -\frac{a_3 d_1}{\Delta}, \qquad t^{-1}_{31} = \frac{d_1 d_2}{C_1^b C_2^b}\, t^{-1}_{33} = \frac{d_1 d_2}{\Delta}, \qquad t^{-1}_{32} = -\frac{d_2}{C_2^b}\, t^{-1}_{33} = -\frac{a_1 d_2}{\Delta}.$$
From (10), it is clear that if the matrix $G_n$ is a symmetric tridiagonal matrix, that is,
$$G_n = \begin{bmatrix} x_1 & y_1 & & \\ y_1 & x_2 & \ddots & \\ & \ddots & \ddots & y_{n-1}\\ & & y_{n-1} & x_n \end{bmatrix},$$
then the inverse of the matrix $G_n$, $G_n^{-1} = [w_{ij}]$, is given by
$$w_{ij} = \begin{cases} \dfrac{1}{C_i^b} + \sum\limits_{k=i+1}^{n} \dfrac{1}{C_k^b} \prod\limits_{t=i}^{k-1} \dfrac{y_t^2}{(C_t^b)^2} & \text{if } i = j,\\[2mm] (-1)^{i+j}\, w_{ii} \prod\limits_{t=j}^{i-1} \dfrac{y_t}{C_t^b} & \text{if } i > j, \end{cases}$$
with $w_{ij} = w_{ji}$ for $i < j$ by symmetry.
Finally, in order to see how susceptible the process is to loss of accuracy through subtraction, we give a numerical example where the matrix is nearly singular. Let
$$A = \begin{bmatrix} 2 & 1 & 0 & 0\\ -1 & 1 & -2 & 0\\ 0 & 3 & 2 & -3\\ 0 & 0 & -4 & 1.999 \end{bmatrix};$$
then $\det A = -0.018$. Thus, according to our method, for the matrix $A$, $y_1 = 1$, $y_2 = -2$, $y_3 = -3$, $z_1 = -1$, $z_2 = 3$, $z_3 = -4$, and so we easily obtain
$$C_1^b = 2, \qquad C_2^b = \frac{3}{2}, \qquad C_3^b = 6, \qquad C_4^b = -0.001.$$
From Corollary 16, we have the diagonal entries of the matrix $A^{-1}$:
$$w_{11} = \frac{1}{C_1^b} + \sum_{k=2}^{4} \frac{1}{C_k^b} \prod_{t=1}^{k-1} \frac{y_t z_t}{(C_t^b)^2}, \qquad w_{22} = \frac{1}{C_2^b} + \sum_{k=3}^{4} \frac{1}{C_k^b} \prod_{t=2}^{k-1} \frac{y_t z_t}{(C_t^b)^2},$$
$$w_{33} = \frac{1}{C_3^b} + \frac{y_3 z_3}{C_4^b (C_3^b)^2}, \qquad w_{44} = \frac{1}{C_4^b},$$
and so
$$w_{11} = -\frac{1996}{9}, \qquad w_{22} = \frac{8002}{9}, \qquad w_{33} = -\frac{1999}{6}, \qquad w_{44} = -1000.$$
Since the off-diagonal entries of $A^{-1}$ can be computed from its diagonal entries, we note that
$$w_{12} = -\frac{y_1 w_{22}}{C_1^b} = -\frac{4001}{9}, \qquad w_{13} = \frac{y_1 y_2 w_{33}}{C_1^b C_2^b} = \frac{1999}{9}, \qquad w_{14} = -\frac{y_1 y_2 y_3 w_{44}}{C_1^b C_2^b C_3^b} = \frac{1000}{3},$$
$$w_{21} = -\frac{z_1 w_{22}}{C_1^b} = \frac{4001}{9}, \qquad w_{23} = -\frac{y_2 w_{33}}{C_2^b} = -\frac{3998}{9}, \qquad w_{24} = \frac{y_2 y_3 w_{44}}{C_2^b C_3^b} = -\frac{2000}{3},$$
$$w_{31} = \frac{z_1 z_2 w_{33}}{C_1^b C_2^b} = \frac{1999}{6}, \qquad w_{32} = -\frac{z_2 w_{33}}{C_2^b} = \frac{1999}{3}, \qquad w_{34} = -\frac{y_3 w_{44}}{C_3^b} = -500,$$
$$w_{41} = -\frac{z_1 z_2 z_3 w_{44}}{C_1^b C_2^b C_3^b} = \frac{2000}{3}, \qquad w_{42} = \frac{z_2 z_3 w_{44}}{C_2^b C_3^b} = \frac{4000}{3}, \qquad w_{43} = -\frac{z_3 w_{44}}{C_3^b} = -\frac{2000}{3}.$$
Clearly the matrix $A^{-1}$ takes the form
$$A^{-1} = \begin{bmatrix} -\frac{1996}{9} & -\frac{4001}{9} & \frac{1999}{9} & \frac{1000}{3}\\[1mm] \frac{4001}{9} & \frac{8002}{9} & -\frac{3998}{9} & -\frac{2000}{3}\\[1mm] \frac{1999}{6} & \frac{1999}{3} & -\frac{1999}{6} & -500\\[1mm] \frac{2000}{3} & \frac{4000}{3} & -\frac{2000}{3} & -1000 \end{bmatrix}.$$
5. Remarks and conclusions
In this paper, we consider the Doolittle type LU factorization of a tridiagonal matrix by BCF. From this, we easily obtain the inverses of the factor matrices L and U, and thus an explicit formula for the elements of the inverse of a general tridiagonal matrix. Comparing some earlier results with ours on computing the elements of the inverse of a tridiagonal matrix, one can see that the number of computations required by our method is less than that of the earlier methods. The same result on the inverse of a tridiagonal matrix can also easily be obtained from the Crout type LU factorization of a tridiagonal matrix by BCF.
References
[1] W.W. Barrett, A theorem on inverses of tridiagonal matrices, Linear Algebra Appl. 27 (1979) 211–217.
[2] W.W. Barrett, P.J. Feinsilver, Inverses of banded matrices, Linear Algebra Appl. 41 (1981) 111–130.
[3] A. Bunse-Gerstner, R. Byers, V. Mehrmann, A chart of numerical methods for structured eigenvalue problems, SIAM J. Matrix Anal.
Appl. 13 (2) (1992) 419–453.
[4] R.L. Burden, J.D. Faires, A.C. Reynolds, Numerical Analysis, second ed., Prindle, Weber & Schmidt, Boston, MA, 1981.
[5] C.F. Fischer, R.A. Usmani, Properties of some tridiagonal matrices and their application to boundary value problems, SIAM J.
Numer. Anal. 6 (1) (1969) 127–142.
[6] C.M. da Fonseca, On the eigenvalues of some tridiagonal matrices, J. Comput. Appl. Math. 200 (1) (2007) 283–286.
[7] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[8] W.W. Hager, Applied Numerical Linear Algebra, Prentice-Hall International Editions, Englewood Cliffs, NJ, 1988.
[9] W.B. Jones, W.J. Thron, Continued Fractions: Analytic Theory and Applications, Addison-Wesley, Reading, MA, 1980.
[10] E. Kilic, D. Tasci, Factorizations and representations of the backward second-order linear recurrences, J. Comput. Appl. Math. 201
(1) (2007) 182–197.
[11] D.H. Lehmer, Fibonacci and related sequences in periodic tridiagonal matrices, Fibonacci Quart. 13 (1975) 150–158.
[12] R.K. Mallik, The inverse of a tridiagonal matrix, Linear Algebra Appl. 325 (1–3) (2001) 109–139.
[13] R.M.M. Mattheij, M.D. Smooke, Estimates for the inverse of tridiagonal matrices arising in boundary-value problems, Linear
Algebra Appl. 73 (1986) 33–57.
[14] G. Meurant, A review on the inverse of symmetric tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl. 13 (3) (1992) 707–728.
[15] M. El-Mikkawy, A fast algorithm for evaluating nth order tridiagonal determinants, J. Comput. Appl. Math. 166 (2) (2004) 581–584.
[16] M. El-Mikkawy, A. Karawia, Inversion of general tridiagonal matrices, Appl. Math. Lett. 19 (8) (2006) 712–720.
[17] F. Romani, On the additive structure of the inverses of banded matrices, Linear Algebra Appl. 80 (1986) 131–140.
[18] S.-F. Xu, On the Jacobi matrix inverse eigenvalue problem with mixed given data, SIAM J. Matrix Anal. Appl. 17 (3) (1996) 632–639.
[19] T. Yamamoto, Y. Ikebe, Inversion of band matrices, Linear Algebra Appl. 24 (1979) 105–111.
[20] H. Zhao, G. Zhu, P. Xiao, A backward three-term recurrence relation for vector valued continued fractions and its applications, J.
Comput. Appl. Math. 142 (2) (2002) 389–400.