
International Journal of Computer & Mathematical Sciences (IJCMS), ISSN 2347-8527, Volume 5, Issue 6, June 2016
A Note on the Jordan-like Decomposition with Applications
Yeong-Jeu Sun
Department of Electrical Engineering,
I-Shou University, Kaohsiung, Taiwan, R.O.C.
ABSTRACT
In this paper, the Jordan-like decomposition problem for a square matrix is first introduced. Based on the Jordan decomposition, a simple proof is provided to guarantee the existence of a solution to this problem. This is done by explicitly constructing the required delta functions. Such a decomposition can be used to evaluate the infimum and supremum of certain matrix functions. Finally, a numerical example is provided to illustrate the main results.
KEY WORDS: Matrix Factorization, Matrix Measure, Matrix Theory.
1. INTRODUCTION
In the past decades, various decompositions (or factorizations) in linear algebra have been proposed, such as LU factorization [1-5], Cartesian decomposition [6], QR factorization [7, 8], Schur decomposition [7, 8], spectral decomposition [9, 10], QR-like decomposition [11], singular value decomposition [12, 13], spectral factorization [14-17], polar decomposition [7], and others; see, for instance, [18, 19] and the references therein. Such decompositions lead to very efficient methods for solving linear systems. In this paper, we introduce a new decomposition problem for a square matrix A, called the Jordan-like decomposition of A. We show that such a decomposition can be derived in an easy way. Moreover, this type of decomposition can be used to evaluate the infimum and supremum of certain matrix functions.
2. PRELIMINARY AND MAIN RESULTS
Nomenclature
C mn : the set of all complex m by n matrices,
NC : the set of all complex and nonsingular n by n matrices,
z : the modulus of a complex number z,
I n : identity matrix of dimension n  n ,
Re : the real part of the complex number  ,
A 1 : the inverse of the matrix A,
i (A) : the i-th eigenvalue of the matrix A,
p : 1,2, p,
det A : the determinant of the matrix A,
diag 1  2   n : the diagonal matrix with diagonal elements  i ,

A : the induced Euclidean norm of A  C nn ; A  max i ( A* A)
i

12
,
c A : the condition number of the nonsingular matrix A  C nn ; c A  A  A 1 ,
1
 (A) : the matrix measure of A  C nn ;  ( A)  max A*  A,
2
  A : the largest singular value of the matrix A  C nn ;   A  A ,
25
Yeong-Jeu Sun
International Journal of Computer & Mathematical Sciences
IJCMS
26
ISSN 2347 – 8527
Volume 5, Issue 6
June 2016
  A : the least singular value of the matrix A  C
nn
1

 A1 , if det A  0;
;   A  

0, otherwise.
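The quantities above are easy to evaluate numerically. The following minimal sketch (an illustration assuming Python with NumPy; the 2-by-2 matrix is an arbitrary example, not taken from the paper) computes $\|A\|$, $c(A)$, $\mu(A)$, $\overline{\sigma}(A)$, and $\underline{\sigma}(A)$ directly from the definitions in the nomenclature.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [-1.0, 2.0]])   # arbitrary example matrix (assumption, not from the paper)

    # ||A|| = [max_i lambda_i(A* A)]^(1/2), the induced Euclidean (spectral) norm
    norm_A = np.sqrt(np.max(np.linalg.eigvalsh(A.conj().T @ A)))
    # c(A) = ||A|| * ||A^{-1}||, defined for nonsingular A
    cond_A = norm_A * np.linalg.norm(np.linalg.inv(A), 2)
    # mu(A) = (1/2) * lambda_max(A* + A), the matrix measure
    mu_A = 0.5 * np.max(np.linalg.eigvalsh(A.conj().T + A))
    # largest and least singular values
    sigma_bar = np.linalg.norm(A, 2)
    sigma_under = (1.0 / np.linalg.norm(np.linalg.inv(A), 2)
                   if np.linalg.det(A) != 0 else 0.0)

    print(norm_A, cond_A, mu_A, sigma_bar, sigma_under)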
Lemma 1. Let $A \in C^{m \times n}$ and $B \in C^{n \times m}$. Suppose the matrix $I_m + AB$ is invertible. Then we have
(i) $I_n + BA$ is invertible;
(ii) $(I_n + BA)^{-1} = I_n - B(I_m + AB)^{-1}A$.
Proof.
(i) It can be readily obtained that
$$\begin{pmatrix} I_m & A \\ 0 & I_n \end{pmatrix}\begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix} = \begin{pmatrix} I_m + AB & 0 \\ B & I_n \end{pmatrix}, \qquad \begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix}\begin{pmatrix} I_m & A \\ 0 & I_n \end{pmatrix} = \begin{pmatrix} I_m & 0 \\ B & I_n + BA \end{pmatrix}.$$
Taking determinants on both identities and noting that $\det\begin{pmatrix} I_m & A \\ 0 & I_n \end{pmatrix} = 1$, we obtain
$$\det(I_m + AB)\,\det(I_n) = \det\begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix} = \det(I_m)\,\det(I_n + BA),$$
which implies that $\det(I_m + AB) = \det(I_n + BA)$. Hence $I_n + BA$ is invertible, and the proof is completed.
(ii) It can be shown that
$$(I_n + BA)\left[I_n - B(I_m + AB)^{-1}A\right] = I_n + BA - B(I_m + AB)^{-1}A - BAB(I_m + AB)^{-1}A$$
$$= I_n + BA - B(I_m + AB)(I_m + AB)^{-1}A = I_n + BA - BA = I_n,$$
and the proof is completed. □
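A quick numerical sanity check of Lemma 1 (a sketch assuming NumPy; the random rectangular matrices are illustrative, and with probability one they make $I_m + AB$ invertible):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 5
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, m))

    # Lemma 1(ii): (I_n + BA)^{-1} = I_n - B (I_m + AB)^{-1} A
    lhs = np.linalg.inv(np.eye(n) + B @ A)
    rhs = np.eye(n) - B @ np.linalg.inv(np.eye(m) + A @ B) @ A
    print(np.allclose(lhs, rhs))                              # True

    # determinant identity behind Lemma 1(i): det(I_m + AB) = det(I_n + BA)
    print(np.isclose(np.linalg.det(np.eye(m) + A @ B),
                     np.linalg.det(np.eye(n) + B @ A)))       # True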
Lemma 2. Let $D \in C^{n \times n}$, $N \in C^{n \times n}$, and $\delta > 0$. Assume the matrices $D + \delta N$ and $D$ are invertible. Then we have
$$(D + \delta N)^{-1} = D^{-1} - \delta R D^{-1}, \quad \text{with } R = D^{-1}\left(I_n + \delta N D^{-1}\right)^{-1} N.$$
Proof. By Lemma 1 (applied with $B = D^{-1}$ and $A$ replaced by $\delta N$), one has
$$(D + \delta N)^{-1} = \left[D\left(I_n + \delta D^{-1}N\right)\right]^{-1} = \left(I_n + \delta D^{-1}N\right)^{-1} D^{-1} = \left[I_n - D^{-1}\left(I_n + \delta N D^{-1}\right)^{-1}\delta N\right] D^{-1} = \left(I_n - \delta R\right) D^{-1} = D^{-1} - \delta R D^{-1}.$$
This completes the proof. □
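Lemma 2 can be verified in the same spirit; the sketch below (assuming NumPy, with an arbitrarily chosen invertible diagonal $D$, the superdiagonal $N$, and one value of $\delta$) confirms the identity numerically.

    import numpy as np

    n, delta = 4, 0.1
    D = np.diag([3.0, 1.0, 3.0, 3.0])      # invertible diagonal matrix (arbitrary choice)
    N = np.diag(np.ones(n - 1), k=1)       # ones on the superdiagonal
    D_inv = np.linalg.inv(D)

    # R = D^{-1} (I_n + delta * N D^{-1})^{-1} N
    R = D_inv @ np.linalg.inv(np.eye(n) + delta * N @ D_inv) @ N
    # (D + delta N)^{-1} = D^{-1} - delta * R D^{-1}
    print(np.allclose(np.linalg.inv(D + delta * N),
                      D_inv - delta * R @ D_inv))             # True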
Lemma 3. [20] Let $A \in C^{n \times n}$. Then we have $\underline{\sigma}(A) \le |\lambda_i(A)| \le \overline{\sigma}(A)$, $\forall i \in \bar{n}$.
Lemma 4. [7] Let $A \in C^{n \times n}$ be an invertible matrix. Then we have
$$c(A) \ge \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1}.$$
Lemma 5. [20] Let $A \in C^{n \times n}$. Then we have
$$-\mu(-A) \le \mathrm{Re}\,\lambda_i(A) \le \mu(A), \quad \forall i \in \bar{n}.$$
Based on the Jordan canonical form, we introduce the Jordan-like decomposition of the square matrix A as
follows.
Lemma 6. Given any $A \in C^{n \times n}$, there exists a nonsingular matrix-valued function $P(\delta) \in C^{n \times n}$, with $\delta > 0$, such that
$$P^{-1}(\delta)\, A\, P(\delta) = D + \delta N, \qquad (1)$$
where $D$ is diagonal with the eigenvalues of $A$ on its main diagonal, and all elements of $N$ are either 0 or 1, with all nonzero elements located on the diagonal above the main diagonal. In this case, the nonsingular matrix-valued function $P(\delta)$ is called a delta function of $A$. Some suitable delta functions of $A$ are given by
$$P(\delta) := M K(a, \delta), \qquad K(a, \delta) := \mathrm{diag}\left(a,\ a\delta,\ a\delta^{2},\ \ldots,\ a\delta^{n-1}\right) \in C^{n \times n}, \ \text{with } a \neq 0,$$
where $M \in C^{n \times n}$ is the modal matrix of the matrix $A$.
Proof. Let $J$ be the Jordan form of $A \in C^{n \times n}$; then $J = D + N$. Since $M$ is the modal matrix of the matrix $A$, one has $J = M^{-1} A M$. Thus it can be shown that
$$P^{-1}(\delta)\, A\, P(\delta) = K^{-1}(a, \delta)\, M^{-1} A M\, K(a, \delta) = K^{-1}(a, \delta)\, J\, K(a, \delta)$$
$$= K^{-1}(a, \delta)\, D\, K(a, \delta) + K^{-1}(a, \delta)\, N\, K(a, \delta) = D + \delta N,$$
since the diagonal matrices $D$ and $K(a, \delta)$ commute, while conjugating $N$ by $K(a, \delta)$ multiplies each superdiagonal entry by $\delta$. This completes the proof. □
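The construction in the proof can also be reproduced symbolically. The sketch below (assuming Python with SymPy; the 3-by-3 matrix is an arbitrary defective example, not the matrix of Section 3) builds $K(a, \delta)$, forms the delta function $P(\delta) = M K(a, \delta)$ from the modal matrix returned by SymPy's Jordan decomposition, and confirms relation (1).

    from sympy import Matrix, diag, symbols, simplify

    delta, a = symbols('delta a', positive=True)
    A = Matrix([[3, 1, 0],
                [0, 3, 0],
                [0, 0, 1]])                      # arbitrary matrix with a 2x2 Jordan block

    M, J = A.jordan_form()                       # A = M J M^{-1}, i.e. J = M^{-1} A M = D + N
    n = A.shape[0]
    K = diag(*[a * delta**i for i in range(n)])  # K(a, delta) = diag(a, a*delta, ..., a*delta^{n-1})
    P = M * K                                    # a delta function of A

    D = diag(*[J[i, i] for i in range(n)])       # eigenvalues of A on the main diagonal
    N = J - D                                    # 0/1 entries on the superdiagonal
    residual = P.inv() * A * P - (D + delta * N) # should vanish identically by Lemma 6
    print(residual.applyfunc(simplify))          # zero matrix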
Using the Jordan-like decomposition, we may easily obtain the following theorems.
Theorem 1. Let $A \in C^{n \times n}$. Then we have
(i) $\inf_{P \in NC} \left\| P^{-1} A P \right\| = \max_i |\lambda_i(A)|$;
(ii) $\sup_{P \in NC} \underline{\sigma}\left(P^{-1} A P\right) = \min_i |\lambda_i(A)|$.
Proof.
(i) By Lemma 3, one has
$$\left\| P^{-1} A P \right\| \ge \max_i \left|\lambda_i\left(P^{-1} A P\right)\right| = \max_i |\lambda_i(A)|, \quad \forall P \in NC. \qquad (2)$$
By Lemma 6, it is easy to see that
$$\lim_{\delta \to 0} \left\| P^{-1}(\delta)\, A\, P(\delta) \right\| = \left\| \lim_{\delta \to 0} P^{-1}(\delta)\, A\, P(\delta) \right\| = \left\| \lim_{\delta \to 0} (D + \delta N) \right\| = \|D\| = \max_i |\lambda_i(A)|. \qquad (3)$$
From (2) and (3), we conclude $\inf_{P \in NC} \left\| P^{-1} A P \right\| = \max_i |\lambda_i(A)|$.
(ii) If $\det A = 0$ and $P \in NC$, one has $\underline{\sigma}\left(P^{-1} A P\right) = 0 = \min_i |\lambda_i(A)|$. If $\det A \neq 0$ and $P \in NC$, by Lemma 3, one has
$$\underline{\sigma}\left(P^{-1} A P\right) \le \min_i \left|\lambda_i\left(P^{-1} A P\right)\right| = \min_i |\lambda_i(A)|. \qquad (4)$$
By Lemma 6, one has
$$\lim_{\delta \to 0} \underline{\sigma}\left(P^{-1}(\delta)\, A\, P(\delta)\right) = \lim_{\delta \to 0} \left\| \left(P^{-1}(\delta)\, A\, P(\delta)\right)^{-1} \right\|^{-1} = \lim_{\delta \to 0} \left\| (D + \delta N)^{-1} \right\|^{-1}$$
$$= \left\| D^{-1} \right\|^{-1} = \left[\left(\min_i |\lambda_i(A)|\right)^{-1}\right]^{-1} = \min_i |\lambda_i(A)|. \qquad (5)$$
Thus we have shown that $\sup_{P \in NC} \underline{\sigma}\left(P^{-1} A P\right) = \min_i |\lambda_i(A)|$, in view of (4) and (5), and the proof is completed. □
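Theorem 1 states that the two bounds of Lemma 3 are attained in the limit by suitable similarity transformations. A minimal NumPy sketch (using an assumed pair $D$, $N$ from a Jordan-like decomposition, rather than an explicit $P(\delta)$) shows $\|D + \delta N\|$ and $\underline{\sigma}(D + \delta N)$ tending to $\max_i |\lambda_i(A)|$ and $\min_i |\lambda_i(A)|$ as $\delta \to 0$:

    import numpy as np

    # assumed Jordan-like data of some A with eigenvalues 1, 3, 3, 3 (3 in a Jordan block of size 3)
    D = np.diag([1.0, 3.0, 3.0, 3.0])
    N = np.zeros((4, 4)); N[1, 2] = N[2, 3] = 1.0

    for delta in [1.0, 0.1, 0.01, 0.001]:
        sv = np.linalg.svd(D + delta * N, compute_uv=False)
        print(delta, sv.max(), sv.min())   # columns approach max|lambda_i| = 3 and min|lambda_i| = 1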
Theorem 2. Let $A \in C^{n \times n}$ be an invertible matrix. Then we have
$$\inf_{P \in NC} c\left(P^{-1} A P\right) = \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1}.$$
Proof. For any $P \in NC$, one has
$$c\left(P^{-1} A P\right) = \left\| P^{-1} A P \right\| \cdot \left\| \left(P^{-1} A P\right)^{-1} \right\| \ge \max_i \left|\lambda_i\left(P^{-1} A P\right)\right| \cdot \left[\min_j \left|\lambda_j\left(P^{-1} A P\right)\right|\right]^{-1}$$
$$= \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1}, \qquad (6)$$
in view of Lemma 4. By Lemma 6 and Lemma 2, it can be deduced that
$$\lim_{\delta \to 0} c\left(P^{-1}(\delta)\, A\, P(\delta)\right) = \lim_{\delta \to 0} \left\| P^{-1}(\delta)\, A\, P(\delta) \right\| \cdot \left\| \left(P^{-1}(\delta)\, A\, P(\delta)\right)^{-1} \right\|$$
$$= \lim_{\delta \to 0} \left\| D + \delta N \right\| \cdot \left\| (D + \delta N)^{-1} \right\| = \lim_{\delta \to 0} \left\| D + \delta N \right\| \cdot \left\| D^{-1} - \delta R D^{-1} \right\|$$
$$= \|D\| \cdot \left\| D^{-1} \right\| = \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1}, \qquad (7)$$
with $R = D^{-1}\left(I_n + \delta N D^{-1}\right)^{-1} N$. Hence we have
$$\inf_{P \in NC} c\left(P^{-1} A P\right) = \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1},$$
in view of (6) and (7), and the proof is completed. □
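The same experiment illustrates Theorem 2: the spectral condition number of $D + \delta N$ converges to the eigenvalue ratio as $\delta \to 0$ (a NumPy sketch reusing the assumed $D$ and $N$ of the previous sketch).

    import numpy as np

    D = np.diag([1.0, 3.0, 3.0, 3.0])
    N = np.zeros((4, 4)); N[1, 2] = N[2, 3] = 1.0

    for delta in [1.0, 0.1, 0.01, 0.001]:
        print(delta, np.linalg.cond(D + delta * N, 2))   # tends to max|lambda_i| / min|lambda_i| = 3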
Theorem 3. Let $A \in C^{n \times n}$. Then we have
(i) $\inf_{P \in NC} \mu\left(P^{-1} A P\right) = \max_i \mathrm{Re}\,\lambda_i(A)$;
(ii) $\sup_{P \in NC} \left[-\mu\left(-P^{-1} A P\right)\right] = \min_i \mathrm{Re}\,\lambda_i(A)$.
Proof.
(i) For any $P \in NC$, one has
$$\mu\left(P^{-1} A P\right) \ge \max_i \mathrm{Re}\,\lambda_i\left(P^{-1} A P\right) = \max_i \mathrm{Re}\,\lambda_i(A), \qquad (8)$$
in view of Lemma 5. By Lemma 6, it can be obtained that
$$\lim_{\delta \to 0} \mu\left(P^{-1}(\delta)\, A\, P(\delta)\right) = \lim_{\delta \to 0} \mu\left(D + \delta N\right) = \mu(D) = \max_i \mathrm{Re}\,\lambda_i(A). \qquad (9)$$
From (8) and (9), we conclude $\inf_{P \in NC} \mu\left(P^{-1} A P\right) = \max_i \mathrm{Re}\,\lambda_i(A)$.
(ii) By Theorem 3(i), one has
$$\inf_{P \in NC} \mu\left(-P^{-1} A P\right) = \max_i \mathrm{Re}\,\lambda_i(-A) = -\min_i \mathrm{Re}\,\lambda_i(A),$$
which implies that $\min_i \mathrm{Re}\,\lambda_i(A) = -\inf_{P \in NC} \mu\left(-P^{-1} A P\right) = \sup_{P \in NC} \left[-\mu\left(-P^{-1} A P\right)\right]$. This completes the proof. □
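Theorem 3 can be illustrated in the same way: the matrix measure of $D + \delta N$ converges to the largest real part of the eigenvalues, and $-\mu(-(D + \delta N))$ to the smallest (a NumPy sketch with the same assumed $D$ and $N$; the helper mu implements $\mu(X) = \frac{1}{2}\lambda_{\max}(X^* + X)$ from the nomenclature).

    import numpy as np

    def mu(X):
        # matrix measure induced by the Euclidean norm: (1/2) * lambda_max(X* + X)
        return 0.5 * np.max(np.linalg.eigvalsh(X.conj().T + X))

    D = np.diag([1.0, 3.0, 3.0, 3.0])
    N = np.zeros((4, 4)); N[1, 2] = N[2, 3] = 1.0

    for delta in [1.0, 0.1, 0.01, 0.001]:
        B = D + delta * N
        print(delta, mu(B), -mu(-B))   # approach max Re(lambda_i) = 3 and min Re(lambda_i) = 1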
Based on Theorem 1 and Theorem 3, we may obtain the following corollary.
Corollary 1. Suppose all eigenvalues of the matrix $A \in C^{n \times n}$ are positive. Then we have
$$\inf_{P \in NC} \mu\left(P^{-1} A P\right) = \inf_{P \in NC} \overline{\sigma}\left(P^{-1} A P\right) = \inf_{P \in NC} \left\| P^{-1} A P \right\| = \max_i \lambda_i(A).$$
3. ILLUSTRATIVE EXAMPLE
In this section, we provide an example to illustrate the main results. Consider the square matrix
$$A = \begin{bmatrix} 4 & 1 & 1 & 2 & 2 \\ -1 & 2 & 1 & 3 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 1 & 2 \end{bmatrix}.$$
Clearly, a modal matrix of $A$ is
$$M = \begin{bmatrix} 0 & -3 & 2 & 1 & 0 \\ 1 & 9 & -2 & 1 & 0 \\ 7 & 0 & 0 & 0 & 1 \\ -2 & -4 & 0 & 0 & 0 \\ -2 & 4 & 0 & 0 & 0 \end{bmatrix},$$
and $\lambda_i(A) \in \{1, 3\}$, $\forall i \in \bar{5}$. In addition, one has
3
0

M 1 AM  0

0
0
29
0
1
0
0
0
0
0
3
0
0
0
0
1
3
0
Yeong-Jeu Sun
0
3 0

0 1
0

0   D  N  0 0


1
0 0
0 0
3
0
0
3
0
0
0
0
0
3
0
0  0
0 0
0   0
 
0  0
3 0
0
0
1

0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0 .

1
0
International Journal of Computer & Mathematical Sciences
IJCMS
30
ISSN 2347 – 8527
Volume 5, Issue 6
June 2016
By Lemma 6 with $a = 2$, a suitable delta function of the matrix $A$ is given by
$$P(\delta) = M K(2, \delta) = \begin{bmatrix} 0 & -6\delta & 4\delta^{2} & 2\delta^{3} & 0 \\ 2 & 18\delta & -4\delta^{2} & 2\delta^{3} & 0 \\ 14 & 0 & 0 & 0 & 2\delta^{4} \\ -4 & -8\delta & 0 & 0 & 0 \\ -4 & 8\delta & 0 & 0 & 0 \end{bmatrix}.$$
Finally, by Lemma 6, Theorems 1, 2, and 3, it can be verified that
$$P^{-1}(\delta)\, A\, P(\delta) = D + \delta N = \begin{bmatrix} 3 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{bmatrix} + \delta \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix};$$
$$\inf_{P \in NC} \left\| P^{-1} A P \right\| = \lim_{\delta \to 0} \left\| P^{-1}(\delta)\, A\, P(\delta) \right\| = 3 = \max_i |\lambda_i(A)|;$$
$$\sup_{P \in NC} \underline{\sigma}\left(P^{-1} A P\right) = \lim_{\delta \to 0} \underline{\sigma}\left(P^{-1}(\delta)\, A\, P(\delta)\right) = 1 = \min_i |\lambda_i(A)|;$$
$$\inf_{P \in NC} c\left(P^{-1} A P\right) = \lim_{\delta \to 0} c\left(P^{-1}(\delta)\, A\, P(\delta)\right) = 3 = \max_i |\lambda_i(A)| \cdot \left[\min_j |\lambda_j(A)|\right]^{-1};$$
$$\inf_{P \in NC} \mu\left(P^{-1} A P\right) = \lim_{\delta \to 0} \mu\left(P^{-1}(\delta)\, A\, P(\delta)\right) = 3 = \max_i \mathrm{Re}\,\lambda_i(A);$$
$$\sup_{P \in NC} \left[-\mu\left(-P^{-1} A P\right)\right] = \lim_{\delta \to 0} \left[-\mu\left(-P^{-1}(\delta)\, A\, P(\delta)\right)\right] = 1 = \min_i \mathrm{Re}\,\lambda_i(A).$$
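The numbers above are easy to reproduce numerically. The following sketch (assuming NumPy, with the matrices entered exactly as displayed in this section) checks the eigenvalues, the decomposition $P^{-1}(\delta) A P(\delta) = D + \delta N$, and the limiting values for a small $\delta$.

    import numpy as np

    A = np.array([[ 4, 1, 1, 2, 2],
                  [-1, 2, 1, 3, 0],
                  [ 0, 0, 3, 0, 0],
                  [ 0, 0, 0, 2, 1],
                  [ 0, 0, 0, 1, 2]], dtype=float)
    M = np.array([[ 0, -3,  2, 1, 0],
                  [ 1,  9, -2, 1, 0],
                  [ 7,  0,  0, 0, 1],
                  [-2, -4,  0, 0, 0],
                  [-2,  4,  0, 0, 0]], dtype=float)
    D = np.diag([3.0, 1.0, 3.0, 3.0, 3.0])
    N = np.zeros((5, 5)); N[2, 3] = N[3, 4] = 1.0

    def mu(X):
        # matrix measure: (1/2) * lambda_max(X* + X)
        return 0.5 * np.max(np.linalg.eigvalsh(X.conj().T + X))

    print(np.sort(np.linalg.eigvals(A).real))            # approximately [1. 3. 3. 3. 3.]
    print(np.allclose(np.linalg.inv(M) @ A @ M, D + N))  # True: M^{-1} A M = D + N

    delta = 0.1
    K = np.diag([2 * delta**i for i in range(5)])        # K(2, delta)
    P = M @ K                                            # delta function P(delta) = M K(2, delta)
    B = np.linalg.inv(P) @ A @ P
    print(np.allclose(B, D + delta * N))                 # True: P^{-1}(delta) A P(delta) = D + delta N

    sv = np.linalg.svd(B, compute_uv=False)
    print(sv.max(), sv.min(), np.linalg.cond(B, 2), mu(B), -mu(-B))
    # approx 3.07, 1.0, 3.07, 3.07, 1.0; they tend to 3, 1, 3, 3, 1 as delta -> 0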
4. CONCLUSION
In this paper, the Jordan-like decomposition problem for a square matrix has been introduced. Based on the Jordan decomposition, a simple proof has been offered to guarantee the existence of a solution to this problem. This has been done by explicitly constructing the required delta functions. Such a decomposition can be used to evaluate the infimum and supremum of certain matrix functions. Finally, a numerical example has been provided to illustrate the main results.
ACKNOWLEDGEMENTS
The author thanks the Ministry of Science and Technology of the Republic of China for supporting this work under grant MOST-104-2221-E-214-030.
REFERENCES
[1] O. Bretscher, Linear Algebra with Applications, Prentice-Hall, New Jersey, 2001.
[2] R.E. Larson and B.H. Edwards, Elementary Linear Algebra, Houghton Mifflin, Boston, 2000.
[3] D.C. Lay, Linear Algebra and its Applications, Addison-Wesley, Massachusetts, 2000.
[4] C.T. Pan, On the existence and computation of rank-revealing LU factorizations, Linear Algebra and its Applications 316 (2000), 199-222.
[5] D.J. Wright, Introduction to Linear Algebra, McGraw-Hill, Boston, 1999.
[6] X. Zhan, Norm inequalities for Cartesian decompositions, Linear Algebra and its Applications 286 (1999), 297-301.
[7] R. Bronson, Matrix Operations, McGraw-Hill, New York, 1989.
[8] L. Dieci and T. Eirola, On smooth decompositions of matrices, SIAM Journal on Matrix Analysis and Applications 20 (1999), 800-819.
[9] K. Akataki, K. Mita and Y. Itoh, Relationship between mechanomyogram and force during voluntary contraction reinvestigated using spectral decomposition, European Journal of Applied Physiology 80 (1999), 173-179.
[10] P. Praus and F. Sureau, Spectral decomposition of intracellular complex fluorescent signals using multiwavelength phase modulation lifetime determination, Journal of Fluorescence 10 (2000), 361-364.
[11] H. Dai, An algorithm for symmetric generalized inverse eigenvalue problem, Linear Algebra and its Applications 296 (1999), 79-98.
[12] J. Kamm and J.G. Nagy, Kronecker product and SVD approximations in image restoration, Linear Algebra and its Applications 284 (1998), 177-192.
[13] B.C. Levy, A note on the hyperbolic singular value decomposition, Linear Algebra and its Applications 277 (1998), 135-142.
[14] D.A. Bini, L. Gemignani and B. Meini, Computations with infinite Toeplitz matrices and polynomials, Linear Algebra and its Applications 343 (2001), 21-61.
[15] D.J. Clements, B.D.O. Anderson, A.J. Laub and J.B. Matson, Spectral factorization with imaginary axis zeros, Linear Algebra and its Applications 250 (1997), 225-252.
[16] D.J. Clements and K. Glover, Spectral factorization via Hermitian pencils, Linear Algebra and its Applications 122 (1989), 797-846.
[17] R.F. Curtain, Linear operator inequalities for strongly stable weakly regular linear systems, Mathematics of Control, Signals and Systems 14 (2001), 299-337.
[18] D. Chu, L.D. Lathauwer and B.D. Moor, On the computation of the restricted singular value decomposition via the cosine-sine decomposition, SIAM Journal on Matrix Analysis and Applications 22 (2001), 580-601.
[19] H. Yacov and P.C. Teo, Canonical decomposition of steerable functions, Journal of Mathematical Imaging and Vision 9 (1998), 83-95.
[20] A. Weinmann, Uncertain Models and Robust Control, Springer-Verlag, New York, 1991.