Chapter 3 OPTIMAL DECORRELATION AND THE KLT
3.1 Introduction
The Karhunen-Loeve transform (KLT) is a unitary transform that diagonalizes the covariance
or the correlation matrix of a discrete random sequence [3.5]. This decorrelation property is
desirable because processing (quantization, coding, etc.) of any one coefficient in the KLT
domain has no direct bearing on the others. Also, as will be shown later, the KLT is optimal
among all discrete transforms under a number of criteria. It is, however, used infrequently
because it depends on the statistics of the sequence, i.e., when the statistics change, so does
the KLT. Because of this signal dependence, it generally has no fast algorithm. Other
discrete transforms, such as the discrete cosine transform (DCT) (see Chapter 5), even
though suboptimal, have been extremely popular in video coding. The principal reasons for
the heavy usage of the DCT are: 1) it is signal independent, 2) it has fast algorithms resulting
in efficient implementation, and 3) its performance approaches that of the KLT for a Markov-1
signal with a large adjacent correlation coefficient. In spite of this, the KLT has been used as a
benchmark in evaluating the performance of other transforms. It has also provided an
incentive for researchers to develop signal-independent (fixed) transforms that not only
have fast algorithms but also approach the KLT in performance.
This chapter defines and develops the KLT and lists the relevant performance criteria. The
transform is then extended to two-dimensional random signals. The chapter concludes with
applications that illustrate the decorrelation property and its significance in image compression.
3.2 Karhunen-Loeve Transform
Let R be the N  N  correlation matrix of a random complex sequence
x =  x1 , x2 , x3 ,, x N T given by
 x1

 x 2
 x3

R = E [xxH ] =E  .
 .

 .
 x
 N
 x1 x1*

*
 x 2 x1
R = E  x3 x1*

 ....
 x x*
 N 1




 * * *
 x1 x 2 x3  x *N






x1 x 2*
x1 x3*
.... .... ....
x 2 x 2*
x3 x 2*
x 2 x3*
x3 x3*
.... .... ....
....
....
.... .... ....
x N x 2*
x N x3*
.... .... ....
.... .... ....












  

*
*
x1 x *N   E x1 x1  E x1 x N 

 
x 2 x *N  

*  =            
x3 x N

 

....  


x N x N *   E x N x1*  E x N x *N 

1
 



 
 
where $E$ is the expectation operator, $E[x_j x_j^*]$ is the autocorrelation of $x_j$, and $E[x_j x_k^*]$, $j \neq k$, is the cross-correlation between $x_j$ and $x_k$. Note that $\mathbf{R}$ is Hermitian. Let the unitary matrix that diagonalizes $\mathbf{R}$ be defined as $\Phi$, such that

$$\Phi^{-1} = \Phi^H, \qquad \Phi\Phi^H = \mathbf{I}, \qquad \Phi^H \mathbf{R}\,\Phi = \Phi^{-1}\mathbf{R}\,\Phi = \Lambda, \qquad (3.1)$$

$$\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k, \ldots, \lambda_N).$$
Here $\lambda_i$, $i = 1, 2, 3, \ldots, N$, are the eigenvalues of $\mathbf{R}$. $\Phi$ is called the KLT matrix, and it decorrelates the random sequence $\mathbf{x}$. This can be seen when the forward and inverse KLT are considered. Let $\mathbf{y}$ be the forward transform of $\mathbf{x}$,

$$\mathbf{y} = \Phi^{-1}\mathbf{x} = \Phi^H\mathbf{x},$$

and let the inverse transform of $\mathbf{y}$ be

$$\mathbf{x} = \Phi\mathbf{y} \qquad (3.2)$$

where $\mathbf{y} = [y_1, y_2, \ldots, y_N]^T$ represents the random sequence in the transform domain.
The correlation matrix for $\mathbf{y}$ is then

$$E[\mathbf{y}\mathbf{y}^H] = E[\Phi^H\mathbf{x}\mathbf{x}^H\Phi] = \Phi^H E[\mathbf{x}\mathbf{x}^H]\,\Phi = \Phi^H\mathbf{R}\,\Phi = \Lambda. \qquad (3.3)$$

It is clear from (3.3) that the random sequence $\mathbf{y}$ has no cross-correlation. In other words, $\mathbf{x}$ has been decorrelated by the KLT matrix $\Phi$. Such a transform is also called the principal component or Hotelling transform. It is a statistically optimal transform, and all other (suboptimal) transforms are evaluated against this benchmark.
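This decorrelation is easy to verify numerically. Below is a minimal MATLAB sketch (the Markov-1 data model, the sample sizes and all variable names are illustrative assumptions, not taken from the text); the estimated correlation matrix of the transform coefficients should come out nearly diagonal.

% Sketch: KLT decorrelation of a Markov-1 (AR(1)) sequence
N = 8; rho = 0.95; M = 50000;                  % block size, correlation, number of blocks
x = filter(1, [1 -rho], sqrt(1-rho^2)*randn(N*M,1)); % unit-variance AR(1) samples
X = reshape(x, N, M);                          % N-point blocks as columns
R = (X*X')/M;                                  % estimated correlation matrix
[Phi, Lambda] = eig(R);                        % columns of Phi: eigenvectors of R
Y = Phi'*X;                                    % forward KLT, y = Phi^H x
Ry = (Y*Y')/M;                                 % correlation matrix in the KLT domain
norm(Ry - diag(diag(Ry)),'fro')/norm(Ry,'fro') % relative off-diagonal energy, close to 0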
Eigenvalues and eigenvectors of R

To show the signal dependence of the KLT, we consider the eigenvalues and eigenvectors of $\mathbf{R}$. From (3.1),

$$\Phi^H\mathbf{R}\,\Phi = \Lambda$$

or

$$\mathbf{R}\,\Phi = \Phi\,\Lambda \qquad (3.4)$$

where $\Phi = [\boldsymbol{\phi}_1\ \boldsymbol{\phi}_2\ \boldsymbol{\phi}_3\ \cdots\ \boldsymbol{\phi}_N]$, with $\boldsymbol{\phi}_i = [\phi_{i1}, \phi_{i2}, \phi_{i3}, \ldots, \phi_{iN}]^T$, $i = 1, 2, 3, \ldots, N$, being the $i$th column of $\Phi$. Writing the right-hand side of (3.4) in full, we have

$$\begin{bmatrix} \phi_{11} & \phi_{21} & \cdots & \phi_{N1} \\ \phi_{12} & \phi_{22} & \cdots & \phi_{N2} \\ \vdots & \vdots & & \vdots \\ \phi_{1N} & \phi_{2N} & \cdots & \phi_{NN} \end{bmatrix}\begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N \end{bmatrix} = \left[\lambda_1\begin{bmatrix} \phi_{11} \\ \phi_{12} \\ \vdots \\ \phi_{1N} \end{bmatrix}\ \ \lambda_2\begin{bmatrix} \phi_{21} \\ \phi_{22} \\ \vdots \\ \phi_{2N} \end{bmatrix}\ \ \cdots\ \ \lambda_N\begin{bmatrix} \phi_{N1} \\ \phi_{N2} \\ \vdots \\ \phi_{NN} \end{bmatrix}\right] \qquad (3.5)$$

or

$$\mathbf{R}\,\boldsymbol{\phi}_i = \lambda_i\,\boldsymbol{\phi}_i, \qquad i = 1, 2, 3, \ldots, N. \qquad (3.6)$$
It is clear from (3.6) that the $\lambda_i$ (real and positive) are the eigenvalues of $\mathbf{R}$ and the $\boldsymbol{\phi}_i$ are the corresponding eigenvectors. When the eigenvalues $\lambda_i$ are arranged in descending order, so that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$, the autocorrelations (variances) of the transformed signal vector are likewise arranged in descending order.
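Note that numerical eigensolvers do not guarantee this ordering; in MATLAB, for example, it must be imposed explicitly (a small illustrative sketch, with R as in the sketch above):

[Phi, Lambda] = eig(R);                     % unordered eigen-decomposition of R
[lam, idx] = sort(diag(Lambda), 'descend'); % lambda_1 >= lambda_2 >= ... >= lambda_N
Phi = Phi(:, idx);                          % rearrange the eigenvectors to match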
Similar considerations based on the diagonalization of the covariance matrix of a random
sequence x produce the KLT based on the covariance matrix.
Let the covariance matrix in the KLT domain be

$$E\left[(\mathbf{y} - \boldsymbol{\mu}_y)(\mathbf{y} - \boldsymbol{\mu}_y)^H\right] = E\left\{\begin{bmatrix} y_1 - \mu_{y_1} \\ y_2 - \mu_{y_2} \\ \vdots \\ y_N - \mu_{y_N} \end{bmatrix}\begin{bmatrix} (y_1 - \mu_{y_1})^* & (y_2 - \mu_{y_2})^* & \cdots & (y_N - \mu_{y_N})^* \end{bmatrix}\right\}. \qquad (3.7)$$
It is desired that this covariance matrix be diagonal, i.e.,

$$E\left[(\mathbf{y} - \boldsymbol{\mu}_y)(\mathbf{y} - \boldsymbol{\mu}_y)^H\right] = \Lambda' = \mathrm{diag}(\lambda'_1, \lambda'_2, \ldots, \lambda'_N).$$

Here $\boldsymbol{\mu}_y = E[\mathbf{y}] = [E[y_1], E[y_2], \ldots, E[y_N]]^T = [\mu_{y_1}, \mu_{y_2}, \ldots, \mu_{y_N}]^T$ is the mean of the random vector $\mathbf{y}$. This diagonalization can be achieved if the vector $\mathbf{y} - \boldsymbol{\mu}_y$ is related to the signal vector $\mathbf{x} - \boldsymbol{\mu}_x$ by a unitary transformation $\Psi$ such that

$$\mathbf{y} - \boldsymbol{\mu}_y = \Psi^{-1}(\mathbf{x} - \boldsymbol{\mu}_x)$$
where $\boldsymbol{\mu}_x = E[\mathbf{x}]$ and $\Psi$ is the unitary transform matrix. It can be seen that if $\Psi$ is made up of the eigenvectors of the data covariance matrix $E[(\mathbf{x} - \boldsymbol{\mu}_x)(\mathbf{x} - \boldsymbol{\mu}_x)^H] = \mathbf{R}'$, so that

$$\mathbf{R}'\Psi = \Psi\Lambda' \quad \text{or} \quad \mathbf{R}'\boldsymbol{\psi}_i = \lambda'_i\,\boldsymbol{\psi}_i, \qquad i = 1, 2, 3, \ldots, N,$$

where $\Lambda' = \Psi^{-1}\mathbf{R}'\Psi$, then

$$E\left[(\mathbf{y} - \boldsymbol{\mu}_y)(\mathbf{y} - \boldsymbol{\mu}_y)^H\right] = E\left[\Psi^{-1}(\mathbf{x} - \boldsymbol{\mu}_x)(\mathbf{x} - \boldsymbol{\mu}_x)^H\Psi\right] = \Psi^{-1}E\left[(\mathbf{x} - \boldsymbol{\mu}_x)(\mathbf{x} - \boldsymbol{\mu}_x)^H\right]\Psi = \Psi^{-1}\mathbf{R}'\Psi = \Lambda'. \qquad (3.8)$$
It is noted that although $\Phi$ and $\Psi$ play the same role of diagonalization, they are in general different unless $\mathbf{x}$ is a zero-mean random vector.
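This distinction can be checked numerically. The following MATLAB sketch (the Markov-1 model, the added mean and the variable names are illustrative assumptions) compares the eigenvectors of the correlation and covariance matrices of a nonzero-mean sequence:

% Sketch: correlation-based Phi versus covariance-based Psi for nonzero-mean data
N = 8; rho = 0.95; M = 50000; mu = 3;
x = filter(1, [1 -rho], sqrt(1-rho^2)*randn(N*M,1)) + mu; % AR(1) plus a constant mean
X = reshape(x, N, M);
R = (X*X')/M;                      % correlation matrix (mean included)
mx = mean(X, 2);                   % sample mean vector
C = R - mx*mx';                    % covariance matrix, R' = R - mu_x*mu_x^H
[Phi, D1] = eig(R);                % Phi diagonalizes the correlation matrix
[Psi, D2] = eig(C);                % Psi diagonalizes the covariance matrix
norm(abs(Phi) - abs(Psi), 'fro')   % clearly nonzero here; ~0 when mu = 0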
3.3 Application of KLT in data compression
As the KLT diagonalizes a correlation or covariance matrix, it is possible to represent
an N-dimensional random vector x by only some of its coefficients in the KLT domain with
negligible error. It is only logical to select the m out of N KLT coefficients that correspond
to the m largest eigenvalues. By quantizing and coding these m coefficients, x can be
reconstructed with minimal error. This is the essence of data compression or bandwidth
reduction, i.e., the m KLT coefficients require fewer bits than x. This role of the KLT in
data compression can be illustrated by the general maximum variance zonal filter shown in
Fig. 3.1.
Fig. 3.1 Maximum variance zonal filter (block diagram: $\mathbf{x} \rightarrow A \rightarrow \mathbf{y} \rightarrow \hat{I}_m \rightarrow \hat{\mathbf{y}}_m \rightarrow B \rightarrow \hat{\mathbf{x}}$)
The data vector $\mathbf{x}$ is transformed by the operator block A into the transform-domain vector $\mathbf{y}$, which undergoes compression by the operator block $\hat{I}_m$ and is then transformed by B back into the data domain as $\hat{\mathbf{x}}$. The operators A, $\hat{I}_m$ and B are selected to minimize the mean square error (mse) of $\mathbf{x} - \hat{\mathbf{x}}$. In the case of simple compression, $\hat{I}_m$ is an $(N \times N)$ diagonal matrix $(1 \leq m \leq N)$ with the first m diagonal elements equal to one and the remaining diagonal elements equal to zero, i.e.,

$$\hat{I}_m = \mathrm{diag}(1, 1, \ldots, 1, 0, 0, \ldots, 0).$$

The random vector $\mathbf{x} = [x_1, x_2, x_3, \ldots, x_N]^T$ is mapped into $\mathbf{y}$ by the orthogonal matrix A, i.e.,

$$\mathbf{y} = A\mathbf{x} = [y_1, y_2, y_3, \ldots, y_N]^T \qquad (3.9a)$$

$$\hat{\mathbf{y}}_m = \hat{I}_m\mathbf{y} = [y_1, y_2, y_3, \ldots, y_m, 0, 0, \ldots, 0]^T \qquad (3.9b)$$

$$\hat{\mathbf{x}} = B\hat{\mathbf{y}}_m = [\hat{x}_1, \hat{x}_2, \hat{x}_3, \ldots, \hat{x}_N]^T \qquad (3.9c)$$
where B is another orthogonal matrix. It can be shown that the mse between $\mathbf{x}$ and $\hat{\mathbf{x}}$ is minimized for a given m when $A = \Phi^H$ and $B = A^{-1} = \Phi$, provided the columns (eigenvectors) of $\Phi$ are arranged so that the corresponding eigenvalues are in descending order $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$. This implies that A and B correspond to the KLT and inverse KLT respectively. Hence the mean square error (mse) is given by

$$\text{mse} = \frac{1}{N}E\left[\sum_{k=1}^{N}(x_k - \hat{x}_k)^2\right] = \frac{1}{N}E\left[\sum_{k=m+1}^{N}y_k^2\right] = \frac{1}{N}\sum_{k=m+1}^{N}\lambda_k \qquad (3.10)$$
since the mse is invariant under a unitary transformation. This implies that of all the discrete
orthogonal transforms, the KLT achieves the minimum mse when only a subset of m KLT
coefficients is retained. The remaining N - m coefficients, representing the small eigenvalues,
are set to zero. This is the key to data compression (a short numerical sketch follows the list
below). In general, the KLT is considered to:
1. pack the most energy in the least number of KLT coefficients;
2. minimize the mse between the original and reconstructed signal for a given
number of coefficients;
3. achieve the minimum rate of the rate-distortion function among all unitary
transforms;
4. decorrelate the signal in the transform domain.
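A short MATLAB sketch of the zonal filter of Fig. 3.1 (the Markov-1 data model and the chosen N and m are illustrative assumptions): retaining the m leading KLT coefficients and reconstructing gives an empirical mse that matches $(1/N)\sum_{k=m+1}^{N}\lambda_k$ of (3.10).

% Sketch: maximum variance zonal filter with A = Phi^H and B = Phi
N = 16; m = 4; rho = 0.95; M = 50000;
x = filter(1, [1 -rho], sqrt(1-rho^2)*randn(N*M,1));  % Markov-1 sequence
X = reshape(x, N, M);                     % N-point blocks as columns
R = (X*X')/M;                             % estimated correlation matrix
[Phi, Lambda] = eig(R);
[lam, idx] = sort(diag(Lambda), 'descend');
Phi = Phi(:, idx);                        % eigenvalues in descending order
Y = Phi'*X;                               % forward KLT (operator A)
Y(m+1:N, :) = 0;                          % zonal filter I_m: keep the first m
Xhat = Phi*Y;                             % inverse KLT (operator B)
mse_emp = mean((X(:) - Xhat(:)).^2);      % empirical per-sample mse
mse_thy = sum(lam(m+1:N))/N;              % (1/N)*(sum of discarded eigenvalues)
[mse_emp, mse_thy]                        % the two agree closely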
In view of these properties, the KLT is applicable to pattern recognition, classification and
bit-rate reduction (compression) by retaining only the first m KLT coefficients, i.e.,
$y_1, y_2, \ldots, y_m$, that correspond to the m largest of the N eigenvalues. The KLT basis
functions (eigenvectors) for a Markov-1 process with $\rho = 0.95$ and N = 16 are shown in
Fig. 3.2 (see Prob. 3.5).
Fig. 3.2 KLT basis functions for N = 16 and $\rho = 0.95$ for a Markov-1 signal
3.4 KLT for a 2D random field
The KLT developed for a 1D random field can be extended to the 2D case. This has
applications in the processing of 2D signals such as multispectral imagery. For simplicity,
assume the square random field to be real and represented by

$$\mathbf{X} = \begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1N} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2N} \\ x_{31} & x_{32} & x_{33} & \cdots & x_{3N} \\ \vdots & \vdots & \vdots & & \vdots \\ x_{N1} & x_{N2} & x_{N3} & \cdots & x_{NN} \end{bmatrix}. \qquad (3.11)$$
Define the $(N^2 \times N^2)$ correlation matrix $\mathcal{R}$ as follows:

$$\mathcal{R} = E[\mathbf{u}\mathbf{u}^T], \qquad (3.12)$$

where $\mathbf{u}$ is the $(N^2 \times 1)$ column vector obtained from the lexicographic ordering of $\mathbf{X}$, i.e.,

$$\mathbf{u} = [x_{11}\ x_{12}\ \cdots\ x_{1N}\ \ x_{21}\ x_{22}\ \cdots\ x_{2N}\ \ \cdots\ \ x_{N1}\ x_{N2}\ \cdots\ x_{NN}]^T.$$
By using $\mathbf{u}$, a one-dimensional random field, to represent the $(N \times N)$ image, the diagonalization of its correlation matrix defined in (3.12) can be carried out as in Section 3.2. The problem, however, quickly becomes formidable for N of even moderate size, since the diagonalization problem that produces the eigenvectors for the KLT is of dimension $(N^2 \times N^2)$. The relevant equation is

$$\mathcal{R}\,\boldsymbol{\phi}_k = \lambda_k\,\boldsymbol{\phi}_k, \qquad k = 1, 2, \ldots, N^2 \qquad (3.13)$$

where $\lambda_k$ and $\boldsymbol{\phi}_k$ are respectively the eigenvalues and eigenvectors. The $\boldsymbol{\phi}_k$ form the columns of the KLT matrix $\Phi$ for this two-dimensional random field. In terms of the pixels $x_{m,n}$ of the two-dimensional image, (3.13) becomes

$$\sum_{m=1}^{N}\sum_{n=1}^{N} E[x_{m,n}\,x_{m',n'}]\;\phi_{k,l}(m,n) = \lambda_{k,l}\;\phi_{k,l}(m',n'), \qquad k, l, m', n' = 1, 2, \ldots, N. \qquad (3.14)$$
By assuming separable statistics, the development of the 2D KLT can be considerably simplified. Under this assumption, the row and column statistics are considered completely independent and identically distributed. Hence the correlation matrix and the eigenfunctions become separable, as shown in the following:

$$E[x_{m,n}\,x_{m',n'}] = E[x_{m,n}\,x_{m',n}]\;E[x_{m,n}\,x_{m,n'}] \qquad (3.15)$$

and

$$\phi_{k,l}(m,n) = \phi_1(m,k)\,\phi_2(n,l), \qquad m, m', n, n', k, l = 1, 2, \ldots, N.$$

$\Phi_1$ and $\Phi_2$ are factor matrices of the KLT matrix $\Phi$, arising from the separable statistics of the rows and columns. In fact, the diagonalization problem of the $(N^2 \times N^2)$ correlation matrix $\mathcal{R}$ can now be separated so that we have
$$\mathcal{R} = \mathbf{R}_1 \otimes \mathbf{R}_2 \qquad (3.16a)$$

and

$$\Phi = \Phi_1 \otimes \Phi_2, \qquad (3.16b)$$

where

$$\Phi_1^H\mathbf{R}_1\Phi_1 = \Lambda_1 \qquad (3.17a)$$

and

$$\Phi_2^H\mathbf{R}_2\Phi_2 = \Lambda_2. \qquad (3.17b)$$

Equations (3.16a) through (3.17b) represent two $(N \times N)$ diagonalization problems. $\Lambda_1$ and $\Lambda_2$ are $(N \times N)$ diagonal matrices whose diagonal elements are the eigenvalues of the $(N \times N)$ correlation matrices $\mathbf{R}_1$ (based on row statistics) and $\mathbf{R}_2$ (based on column statistics) respectively. The symbol $\otimes$ stands for the Kronecker product of two matrices, i.e.,

$$\mathbf{A} \otimes \mathbf{B} = \mathbf{C} = \begin{bmatrix} a_{11}\mathbf{B} & a_{12}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ a_{21}\mathbf{B} & a_{22}\mathbf{B} & \cdots & a_{2n}\mathbf{B} \\ \vdots & \vdots & & \vdots \\ a_{m1}\mathbf{B} & a_{m2}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{bmatrix} \qquad (3.18)$$

$\mathbf{C}$ is a matrix of size $(mp \times nq)$ when the matrices $\mathbf{A}$ and $\mathbf{B}$ are of sizes $(m \times n)$ and $(p \times q)$ respectively. The KLT of $\mathbf{u}$ is then

$$\mathbf{v} = \Phi^H\mathbf{u} = (\Phi_1^H \otimes \Phi_2^H)\,\mathbf{u}. \qquad (3.19)$$
In conclusion, by modeling the image autocorrelation with a separable function (independent row and column statistics), the diagonalization of an $(N^2 \times N^2)$ matrix $\mathcal{R}$ is considerably simplified into the diagonalization of two $(N \times N)$ matrices $\mathbf{R}_1$ and $\mathbf{R}_2$.
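Equations (3.16a) through (3.19) can be verified directly. In this MATLAB sketch, a Markov-1 model with an assumed correlation p is used for both the row and column statistics (all names are illustrative):

% Sketch: separable 2-D KLT as the Kronecker product of two 1-D KLTs
N = 8; p = 0.95;
R1 = toeplitz(p.^(0:N-1));       % row correlation matrix (Markov-1)
R2 = R1;                         % identical column statistics
[Phi1, L1] = eig(R1);            % first N x N diagonalization, (3.17a)
[Phi2, L2] = eig(R2);            % second N x N diagonalization, (3.17b)
Rbig = kron(R1, R2);             % N^2 x N^2 correlation matrix, (3.16a)
Phi = kron(Phi1, Phi2);          % N^2 x N^2 KLT matrix, (3.16b)
D = Phi' * Rbig * Phi;           % equals kron(L1, L2), a diagonal matrix
norm(D - diag(diag(D)), 'fro')   % ~0: two N x N problems replace one N^2 x N^2 problem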
Example: Obtain the basis images for the KLT with p = 0.9543 and N = 8, assuming that the statistical properties along rows and along columns are independent.
A first-order Markov process is defined by

$$\mathbf{R} = \begin{bmatrix} 1 & p & p^2 & \cdots & p^{N-1} \\ p & 1 & p & \cdots & p^{N-2} \\ p^2 & p & 1 & \cdots & p^{N-3} \\ \vdots & \vdots & \vdots & & \vdots \\ p^{N-1} & p^{N-2} & p^{N-3} & \cdots & 1 \end{bmatrix}$$

Obtain the eigenvectors of the correlation matrix $\mathbf{R}$ and sketch the basis images based on the eigenvectors.
Results:
KLT basis images when p = 0.9543
% MATLAB sketch used to generate the basis images above
p = 0.9543;                     % adjacent correlation coefficient (as stated in the example)
N = 8;                          % block size
R = zeros(N);
for i = 1:N
    for j = 1:N
        R(i,j) = p^abs(j-i);    % Markov-1 correlation matrix
    end
end
[V, D] = eig(R);                % columns of V are the eigenvectors of R
for i = 1:N
    for j = 1:N
        B = V(:,i)*V(:,j)';     % (i,j)th basis image: outer product of eigenvectors
        subplot(N, N, (j-1)*N + i);
        imshow(B, []);          % scale each basis image into the display range
    end
end
3.5 Applications
Although the generation of the KLT involves estimating correlation/covariance matrices
and diagonalizing them to obtain eigenvalues and eigenvectors, a perusal of the
references under KLT indicates that it has found applications in image compression [3.10,
3.19], multispectral image compression [3.26, 3.27, 3.28], image segmentation and indexing
[3.46], recursive filtering [3.12, 3.30], image restoration [3.24], image representation,
recovery and analysis [3.42], multilayer image coding [3.28], neural clustering [3.3], speech
recognition [3.41], speaker recognition [3.47], speaker verification [3.4], feature selection
[3.45], texture classification [3.60], image and video retrieval [3.50, 3.59, 3.61, 3.62, 3.63,
3.64], etc. Of particular significance is the paper by Saghri, Tescher and Reagan [3.26],
wherein the KLT is applied to decorrelate across spectral bands, followed by the JPEG (Joint
Photographic Experts Group) algorithm. They were able to produce a range of compression
ratios (CR), from near-lossless results at 5:1 CR to visually lossy results beginning at
50:1 CR. An adaptive approach, in which the covariance matrix is periodically updated based
on the terrain (water, forest, cloud, ice, desert, etc.), is utilized in achieving these high
compression ratios. It is only appropriate at this stage to describe this application in detail.
Multispectral images (both satellite and airborne) exhibit a high degree of spatial and
spectral correlation. The proposed scheme (Fig. 3.3) involves a 1D KLT to decorrelate across
spectral bands, followed by the JPEG algorithm (which applies a 2D DCT to the spectrally
decorrelated images for spatial decorrelation).
Fig. 3.3 Terrain-adaptive compression block diagram [3.26] © IEEE 1995.
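The spectral stage of this scheme can be sketched in a few lines of MATLAB. In the sketch below, a synthetic set of correlated bands stands in for the M7 data, the spatial JPEG stage is omitted, and the band model and all names are illustrative assumptions, not taken from [3.26]:

% Sketch: 1-D KLT across the spectral bands of a synthetic multiband image
B = 16; Npix = 4096;                        % number of bands, pixels per band
S = filter(1, [1 -0.9], randn(B, Npix), [], 1); % correlate along the band axis
C = cov(S');                                % B x B interband covariance matrix
[V, D] = eig(C);
[d, idx] = sort(diag(D), 'descend');
V = V(:, idx);                              % basis functions (eigenvectors)
E = V'*(S - repmat(mean(S,2), 1, Npix));    % eigen-planes (decorrelated bands)
energy = cumsum(d)/sum(d);                  % energy captured by the leading planes
energy(1:3)'                                % the first few planes dominate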
In this experiment, 16 unequal multispectral bands (images) covering the visible through
infrared regions (0.36 to 12.11 micron wavelength), acquired by a multispectral scanner (M7
sensor; the 16-band Airfield test image set), are partitioned into sets of nonoverlapping
images, i.e., sub-block sets. The multispectral sub-block sets are used to obtain the covariance
matrices (Fig. 3.4), eigenvalues and eigenvectors. The eigenplanes (spectrally decorrelated
images) are formed by matrix multiplication of the sub-block set and the basis functions
(Fig. 3.5).
Fig. 3.4 Correlation coefficient matrix [3.26] © IEEE 1995
Fig. 3.5 Removing spectral correlation via a Karhunen-Loeve transformation [3.26] © IEEE 1995
The effectiveness of the KLT can be observed in Fig. 3.6, wherein the first nine spectrally
decorrelated eigen-planes of the 16 bands are shown. The first two to three eigen-planes
contain more than 80% of the energy of the test set. Hence the remaining eigen-planes can be
coarsely quantized (some of them can even be dropped), resulting in bit-rate reduction.
Another measure of compression capability is the rapid decrease of the eigenvalues
(the variances of the eigen-planes). This is evident from the variance distribution of the test
images (Fig. 3.7) [3.26]. In fact, the superiority of the KLT can be further observed in Fig. 3.8,
wherein the variance distributions for the KLT and DCT are compared. The larger variances
for the DCT imply that more bits are needed to code those images. Further gains in spectral
decorrelation via the KLT have been achieved by using a terrain-adaptive approach. As the
multispectral images exhibit a number of different terrains (water, forest, cloud, ice, desert,
etc.), the covariance matrix and eigen-planes (hence the KLT) are updated frequently. This, of
course, involves additional complexity and increased overhead. The overall bit-rate reduction
is the cumulative result of decorrelation of the spectral bands by the KLT followed by the
JPEG algorithm [3.54] applied to the spectrally decorrelated eigen-images.
Fig. 3.6 Eigen images of the test image set (first nine of the total 16) [3.26] © IEEE 1995
Fig. 3.7 Ordered variances of the eigen images [3.26] © IEEE 1995
Fig. 3.8 KLT versus DCT for spectral decorrelation [3.26] © IEEE 1995
Another application of the KLT is in video segmentation, classification and indexing
[3.46], which is useful for random retrieval of video clips from large databases. This approach
to automatic video scene segmentation and content-based indexing is very robust and offers
the potential for fast scene change detection. The principal component analysis extracts
effective discriminating features from the reduced data set that can be reliably used in video
scene change detection, segmentation and indexing tasks.
Summary
Both 1D and 2D Karhunen-Loeve transforms (KLT) are defined and developed, and their
properties are outlined. By assuming independent row and column statistics, the generation
and implementation of the 2D KLT are simplified into two 1D KLTs. In spite of its
computational complexity, the KLT is utilized in specific fields such as multispectral
imaging. The KLT also serves as a benchmark in evaluating other discrete transforms. A
specific application, wherein multispectral imagery is decorrelated, is illustrated.
Exercises
3.1 Show that when the eigenvectors (columns) of $\Phi$ are rearranged, the corresponding
eigenvalues rearrange accordingly.
3.2 See Fig. 3.1. It is stated that the mse between $\mathbf{x}$ and $\hat{\mathbf{x}}$ is minimized when A and B correspond to the KLT. Prove this. Show that this mse is $\frac{1}{N}\sum_{k=m+1}^{N}\lambda_k$.
3.3 Show that the sum of the variances of an N-point signal under an orthogonal transformation is invariant.
Hint: Given $\mathbf{y} = \mathbf{A}\mathbf{x}$, where $\mathbf{A}^{-1} = \mathbf{A}^H$, $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$ is the data vector and $\mathbf{y} = [y_1, y_2, \ldots, y_N]^T$ is the transform vector, show that

$$\sum_{k=1}^{N}\sigma_{x_k}^2 = \sum_{k=1}^{N}\sigma_{y_k}^2$$

where $\sigma_{x_k}^2$ and $\sigma_{y_k}^2$ are the variances of $x_k$ and $y_k$ respectively.
3.4 The rate-distortion function $R_D$ in bits/sample for a specified distortion $D$ is defined as

$$R_D = \frac{1}{N}\sum_{k=1}^{N}\max\left(0,\ \frac{1}{2}\log_2\frac{\tilde{\sigma}_{kk}^2}{\theta}\right),$$

$$D = \frac{1}{N}\sum_{k=1}^{N}\min\left(\theta,\ \tilde{\sigma}_{kk}^2\right).$$

The parameter $\theta$ is determined from $D$; $\tilde{\sigma}_{kk}^2$ is the variance of the $k$th coefficient in any orthonormal transform domain. Show that among all the unitary transforms, the KLT yields the minimum rate $R_D$.
3.5 For a first-order Markov process defined by

$$\mathbf{R} = \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{N-1} \\ \rho & 1 & \rho & \cdots & \rho^{N-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{N-3} \\ \vdots & \vdots & \vdots & & \vdots \\ \rho^{N-1} & \rho^{N-2} & \rho^{N-3} & \cdots & 1 \end{bmatrix}$$

where $\rho$ is the adjacent correlation coefficient, show that the eigenvalues are

$$\lambda_k = \frac{1 - \rho^2}{1 - 2\rho\cos\omega_k + \rho^2}, \qquad k = 1, 2, \ldots, N$$

where the $\omega_k$ are the real positive roots of the transcendental equation

$$\tan(N\omega) = \frac{-(1 - \rho^2)\sin\omega}{\cos\omega - 2\rho + \rho^2\cos\omega}$$

for N even. (A similar result is valid for N odd.) Show that the $(m, k)$th element of the KLT matrix $\Phi$ is

$$\phi_{mk} = \left(\frac{2}{N + \lambda_k}\right)^{1/2}\sin\left(\omega_k\left(m - \frac{N+1}{2}\right) + \frac{k\pi}{2}\right), \qquad m, k = 1, 2, \ldots, N.$$
3.6 Obtain the eigenvalues and sketch the eigenvectors for an order-1 Markov process
with $\rho = 0.9$ and N = 16.
3.7 Repeat Prob. 3.6 for $\rho = 0.85$.
3.8 Assuming separable statistics, obtain and sketch the eigenimages based on Probs. 3.6 and
3.7.
3.9 Derive (3.19).
3.10 Simulate the automatic video scene segmentation, classification, indexing and
retrieval schemes based on the techniques presented in [3.46], using some test
sequences.
References on KLT
3.1. D.B. Ponceleon et al, “Transform coding for low bit rate applications,” IS&T/SPIE
Symposium on Electronic Imaging: Science & Technology, Vol. 2187, San Jose,
CA, Feb. 1994.
3.2. I.S. Reed and L.S. Lan, “A fast KLT for data compression,” IEEE Trans. SP (under
review). Also published in SPIE/VCIP, Vol. 2094, Cambridge, MA, Nov. 1993.
3.3. G. Martinelli, L.P. Ricolti and G. Marcone, “Neural clustering for optimal KLT
image compression,” IEEE Trans. SP, Vol. 41, pp. 1737-1739, April 1993.
3.4. L. Netsch, “A robust telephone-based speaker verification system,” Ph.D.
Dissertation proposal, Univ. of Texas at Arlington, Arlington, TX, 1992.
3.5. A.K. Jain, “Fundamentals of digital image processing,” Chapters 2, 5 and 11,
Englewood Cliffs, NJ, Prentice Hall, 1989.(See also several references listed in section
5.11 on page 187)
3.6. A.K. Jain, “A fast Karhunen-Loeve transform for a class of random processes,”
IEEE Trans. Commun. Vol. COM-24, pp. 1023-1029, Sept. 1976
3.7. V.R. Algazi and D.J. Sakrison, “On the optimality of Karhunen-Loeve expansion,”
IEEE Trans. IT, Vol. IT-15, pp.319-321, March 1969.
3.8. S. Bhama, H. Singh and N.D. Phadke, “Parallelism for the faster implementation of
the K-L transform for image compression,” Pattern recognition letters, Vol. 14, pp.
651-660, Aug. 1993.
3.9. J.B Burl, “Estimating the basis functions of the Karhunen-Loeve transform,” IEEE
Trans. ASSP, Vol. 37, pp.99-105, Jan. 1989.
3.10. H.M. Abbas and M.M. Fahmy, “Neural model for Karhunen-Loeve transform with
applications to adaptive image compression,” IEE Proc.-I, Vol. 140, pp. 135-143,
April 1993.
3.11. M. Nakagawa and M. Miyahara, “Generalized Karhunen-Loeve transformation- I:
Theoretical Consideration,” IEEE Trans. Commun. Vol. C-35, pp. 215-223, Feb.
1987
3.12. A.K. Jain, “A fast Karhunen-Loeve transform for recursive filtering of images
corrupted by white and colored noise,” IEEE Trans. Comput., Vol. C-26, pp. 560-571, June 1977.
3.13. I. Selin, “Detection theory,” Princeton, NJ: Princeton University Press, 1965.
3.14. R.J. Clarke, “Transform coding of images,” Orlando, FL: Academic Press, 1985.
3.15 H.W. Jones, D.N. Hein and S.C. Knauer, “The Karhunen-Loeve, discrete cosine
and related transforms via the Hadamard transform,” Proc. Intl. Telemeter. Conf.,
pp. 87-98, Los Angeles, CA, Nov. 1978.
3.16 R.J. Clarke, “Relation between the Karhunen-Loeve and sine transforms,” Electron.
Lett., Vol. 20, pp. 12-13, Jan. 1984.
3.17 P.S. Kumar and K.M.M. Prabhu, “A special case for the KLT of Markov-1
Process,” IEEE Trans. SP (Under review).
3.18. Y.H. Chan, “On the substitution of the Karhunen-Loeve transform,” IEEE Trans. SP
(Under review).
3.19. L.S. Lan and I.S. Reed, “Image compression with the adaptive approximate
Karhunen-Loeve transform,” SPIE/VCIP, Vol. 2308, Chicago, IL, Sept. 1994.
3.20. J. Zhang and G. Walter, “A wavelet based KL-like expansion for wide sense
stationary random processes,” IEEE Trans. SP, Vol. 42, pp. 1737-1745, July 1994.
3.21. S.C. Huang and Y.F. Huang, “Principal component vector quantization,” J VCIR,
Vol. 4, pp. 112-120, March 1993.
3.22. S.C. Huang and Y.F. Huang, “A constrained vector quantization scheme for real-time codebook retransmission,” IEEE Trans. CSVT, Vol. 4, pp. 1-7, Feb. 1994.
3.23. H. Kitajima and T. Shimone, “Some aspects of the fast Karhunen-Loeve
transform,” IEEE Trans. Commun., Vol. 28, pp. 1773-1776, Sept. 1980.
3.24. B.R. Hunt and O. Kubler, “Karhunen-Loeve multispectral image restoration, part 1:
Theory,” IEEE Trans. ASSP, Vol. ASSP-32, pp. 592-600, June 1984.
3.25. G.W. Wornell, “A Karhunen-Loeve-like expansion for 1/f processes via wavelets,”
IEEE Trans. IT, Vol. 36, pp. 859-861, July 1990.
3.26. J.A. Saghri, A.G. Tescher and J.T. Reagan, “Practical transform coding of
multispectral imagery,” IEEE SP Magazine, Vol. 12, pp. 32-43, Jan. 1995.
3.27. V.D. Vaughn and T.S. Wilkinson, “System considerations for multispectral image
compression designs,” IEEE SP Magazine, Vol. 12, pp. 19-31, Jan. 1995.
3.28. D. Tretter and C.A. Bouman, “Optimal transforms for multispectral and multilayer
image coding,” IEEE Trans. IP, Vol. 4, pp. 296-308, March 1995.
3.29. D.J. Percival, “Compressed representation of a backscatter ionogram data base
using Karhunen-Loeve techniques,” ICIP, Edinburgh, U.K., July 1995.
3.30. N.B. Chakrabarti, T.V.K.H. Rao and N.R. Krishna, “A recursive filter
implementation of KL transform,” SP, Vol. 44, pp. 269-284, July 1995.
3.31. T. Sikora, S. Bauer and B. Makai, “Efficiency of shape-adaptive 2-D transforms for
coding of arbitrarily shaped image segments,” IEEE Trans. CSVT, Vol. 5, pp. 254-258, June 1995.
3.32. I.S. Reed and L.S. Lan, “A fast Karhunen-Loeve transform (KLT) for data
compression,” J VCIR, Vol. 5, pp. 304-316, Dec. 1994.
3.33. X.G. Xia and B.W. Suter, “On vector Karhunen-Loeve transform and optimal
vector transforms,” IEEE Trans. CSVT, Vol. 5, pp. 372-374, Aug. 1995.
3.34. W. Ding, “Optimal vector transform for vector quantization,” IEEE SP Letters,
Vol. 1, pp. 110-113, July 1994.
3.35. R.D. Dony and S. Haykin, “Optimally adaptive transform coding,” IEEE Trans.
IP, Vol. 4, pp. 1358-1370, Oct. 1995.
3.36. V.N. Kurashov and J.S. Musatenko, “Approximate Karhunen-Loeve transform for
image processing,” Photonics West, IS&T/SPIE Symp. on Electronic Imaging:
Science & Technology, Vol. 2666, San Jose, CA, Feb. 1996.
3.37. F. Claveau and M. Poirier, “Real time FFT based cross-covariance method for
vehicle speed and length measurement using an optical sensor,” ICSPAT 96,
pp.1831-1835, Boston, MA, Oct.1996.
3.38. Y.S. Musatenko and V.N. Kurashov, “Nonlinear improving of Karhunen-Loeve
bases obtained by approximate 2D procedures,” IS&T/SPIE’s 9th Annual Symp.,
Electronic Imaging, Vol. 3026, San Jose, CA, Feb. 1997.
3.39. P. Waldemar and T. Ramstad, “Hybrid KLT-SVD image compression,” ICASSP
97, Vol. 4, pp. 2713-2716, Munich, Germany, April 1997.
3.40. O.G. Guleryuz and M.T. Orchard, “Optimized nonorthogonal transforms for image
compression,” IEEE Trans. IP, Vol. 6, pp. 507-522, April 1997.
3.41. A. Herrara, M. Martinez and O. Sanchez, “An acoustic isolated speech recognition
approach using KLT and VQ,” ICSPAT 97, San Diego, CA, Sept. 1997.
3.42. N.A. Ziyal et al, “Image representation, recovery and analysis using principal
component analysis,” ICSPAT 97, San Diego, CA, Sept. 1997.
3.43. D.J. Hamilton, W.A. Sandham and A. Blanco, “Electrocardiogram data compression
using non-linear principal component analysis,” IEEE SP Letters (in print).
3.44. J. Hall and J. Crowe, “Ambulatory electrocardiogram compression using wavelet
packets to approximate Karhunen-Loeve transform,” Int. J. Applied SP, Vol. 3, pp.
25-36, 1996.
3.45. J. Kittler and P.C. Young, “A new approach to feature selection based on the
Karhunen-Loeve expansion,” Pattern Recognition, Vol. 5, pp. 335-352, 1973.
3.46. K.J. Han and A.H. Tewfik, “Eigen-image based video segmentation and indexing,”
ICIP 97, Vol. II, pp. 538-541, Santa Barbara, CA, Oct. 1997.
3.47. C.C.T. Chen, C.T. Chen and C.M. Tsai, “Karhunen-Loeve transform for text
independent speaker recognition,” 1997 Intl. Symp. on Communications, Hsinchu,
Taiwan, Dec. 1997.
3.48. Z. She, R.E. Bogner and D.A. Gray, “An eigenvector approach for inverse synthetic
aperture radar (SAR) motion compensation and imaging,” TENCON 97, IEEE
Region 10 Annual Conf., Brisbane, Australia, Dec. 1997.
3.49. B.R. Epstein et al, “Multispectral KLT-wavelet data compression for LANDSAT
thematic mapper images,” Proc. DCC, Mar. 1992.
3.50. A.P. Pentland, R.W. Pickard and S. Scarloff, “Photobook: tools for content based
manipulation of image databases,” SPIE, Vol. 2185, pp. 34-47, San Jose, CA, 1994
(storage and retrieval for image and video databases)
3.51 M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Of Cognitive
Neuroscience, Vol. 3, pp.73-86, 1991.
3.52. C. S. Chen and K. S. Huo, “Karhunen-Loeve method for data compression and
speech synthesis,” IEE Proc., Vol. 138, pp. 377-380, Oct. 1991.
3.53. D. J. Mudugamuwa and A. B. Bradley, “Optimal transform for segmented
parametric speech coding,” ICASSP 98, pp. 53-56, Seattle, WA, May 1998.
3.54 W.B. Pennebaker and J.L. Mitchell, “JPEG still image data compression standard,”
New York, NY: Van Nostrand Reinhold, 1993.
3.55. M.F. Chouikha, E.T. Gilmore and N. Ziyad. “Adaptive principal component
extraction (APEX) for image compression,” ICSPAT 98, Toronto, Canada, Sept.
1998.
3.56. M. Unser, “Wavelets, filterbanks, and the Karhunen-Loeve transform,” EUSIPCO-98, Vol. 3, pp. 1737-1741, Island of Rhodes, Greece, Sept. 1998.
3.57. N. Tsapatsoulis V. Alexopoulos and S. Kollias, “A vector based approximation of
KLT and its application to face recognition,” EUSIPCO-98, vol. 3, pp. 1581-1585,
Island of Rhodes, Greece, Sept.1998.
3.58. N. Ziyad, E.T. Gilmore and M.F. Chouikha, “Improvements for image compression
using adaptive principal component extraction,” 32nd Asilomar Conf. on Signals,
Systems, and Computers, Pacific Grove, CA, Nov. 1998.
3.59. D.L. Swets and J. Weng, “Using discriminant eigenfeatures for image retrieval,” IEEE
Trans. PAMI, Vol. 18, pp. 831-836, Aug. 1996.
3.60. X. Tang and W.K. Stewart, “Texture classification using principal component analysis
techniques,” Proc. SPIE, Vol. 2315, pp. 22-35, 1994.
3.61. C. Faloutsos and K-I. Lin, “Fastmap: A fast algorithm for indexing, data-mining and
visualization of traditional and multimedia datasets,” Proc. SIGMOD, pp. 163-174, 1995.
3.62. R. Ng and A. Sedighian, “Evaluating multi-dimensional indexing structures for
images transformed by principal component analysis,” Proc. SPIE Storage and
Retrieval for Image and Video Databases, 1996.
3.63. S. Chandrasekaran et al, “An eigenspace update algorithm for image analysis,”
CVGIP: Graphical Models and Image Processing, 1997.
3.64. D. White and R. Jain, “Similarity indexing: Algorithm and performance,” Proc.
SPIE Storage and Retrieval for Image and Video Databases, 1996.
3.65. Z. Wang and J. B-Arie, “3D motion estimation using expansion matching and KL
based canonical images,” IEEE ICIP, pp. MP11-7, Chicago, IL, Oct. 1998.
3.66. A. Hjorungnes and T.A. Ramstad, “Minimum mean square error transform coders,”
IEEE ISPACS'98, pp. 738-742, Melbourne, Australia, Nov. 1998.
3.67. A.M. Krot and V.O. Kudryavtsev, “Eigen transforms over finite rings in filter bank
structures,” IEEE ISPACS'98, pp. 738-742, Melbourne, Australia, Nov. 1998.
3.68. M. Kirby and L. Sirovich, “Application of the Karhunen-Loeve procedure for the
characterization of human faces,” IEEE Trans. PAMI, Vol. 12, pp. 103-108, 1990.
3.69. L.S. Shapiro and J.M. Brady, “Feature-based correspondence: An eigenvector
approach,” Image Vision Computing, Vol. 10, pp. 283-288, 1992.
3.70. A.V. Nefian, “Face detection and recognition using hidden Markov models,” IEEE
ICIP, pp. MA5.05, Chicago, IL, Oct. 1998.
3.71. Y. Yan and J. Zhang, “Rotation invariant 3D reconstruction for face recognition,”
IEEE ICIP, pp. MA5.08, Chicago, IL, Oct. 1998.
3.72. H. Celebi and H.J. Trusset, “Colorimetric restoration of digital images,” IEEE ICIP,
pp. MA6.10, Chicago, IL, Oct. 1998.
3.73. D. Nandy and J. B-Arie, "EXM eigen templates detecting and classifying arbitrary
junctions", IEEE ICIP, pp. MA7.01, Chicago, IL, Oct. 1998.
3.74. A. Kirac and P. P. Vaidyanathan, "Optimal nonuniform orthogonal filter banks for
subband coding and signal representation," IEEE ICIP, pp.WP5.06, Chicago, IL, Oct.
1998.
3.75. E. Sahouria and A. Zakhor, “Content analysis of video using principal components,”
IEEE ICIP, pp. WP1.06, Chicago, IL, Oct. 1998.
3.76. C. Chang, A.A. Maciejewski and V. Balakrishnan, “Eigen-decomposition-based
analysis of video images, “Photonics West, SPIE, vol. 3656, San Jose, CA, Jan. 1999.
3.77. L.A. Chan and N. M. Nasrabadi, “Wavelet-eigen transformation for automatic target
recognition,” Photonics West, SPIE, vol. 3647, San Jose, CA, Jan. 1999.
3.78. H. Liu,” Real-time human face recognition using eigenface-based optical filtering,”
Photonics West, SPIE, vol. 3645, San Jose, CA, Jan. 1999.
3.79. S.M. Phoong and Y.P. Lin, “PLT versus KLT,” ISCAS 99, Orlando, FL, May-June 1999.
3.80. J. Lee, “ Optimized quadtree for Karhunen-Loeve transform in multispectral image
coding,” IEEE Trans. IP, vol. 8, pp. 453-461, April 1999.
3.81. R. D. Dony, and S. Haykin, “ Optimally adaptive transform coding” IEEE Trans. IP,
Vol. 4, pp. 1358-1370, 1995.
3.82. B.R. Epstein et al, “Multispectral KLT-wavelet data compression for Landsat thematic
mapper images,” DCC, pp. 200-208, March 1992.
3.83. R. Kongkchandra, K. Tamee and C. Kimpan, “Improving Thai isolated word
recognition by using Karhunen-Loeve transformation and learning vector
quantization,” IEEE ISPACS'99, Phuket, Thailand, Dec. 1999.
3.84. R. Kongkchandra, K. Tamee and C. Kimpan, “Using Karhunen-Loeve transformation
for feature reduction and tones analysis in Thai harmonic frequency speech,” IEEE
ISPACS'99, Phuket, Thailand, Dec. 1999.
3.85. B. Lahme and R. Miranda, “ Karhunen-Loeve decomposition in the presence of
symmetry-part I ,” IEEE Trans IP, vol. 8, pp. 1183-1190, Sept. 1999.
3.86. T. Tanaka and Y. Yamashita, “Image coding using vector embedded Karhunen-Loeve
transform,” IEEE ICIP, Kobe, Japan, Oct. 1999.
3.87. T.K. Moon and W.C. Stirling, “ Mathematical methods and algorithms,” Upper Saddle
River, NJ: Prentice Hall, 2000.
3.88. E. Kreyszig, “Advanced engineering mathematics,” 7th Edition, New York, NY: John
Wiley, 1993.
3.89. S. Chitwong et al, “ Enhancement of color image obtained from principal component
analysis using local area histogram equalization,” IEEE ISPACS, Honolulu, HI, Nov.
2000.
3.90. S. Bharitkar and C. Kyriakakis, “ Eigenfilters for signal cancellation,” IEEE ISPACS,
Honolulu, HI, Nov. 2000.
3.91. M. Turk and A. Pentland, “ Face processing: models for recognition,” Intelligent
Robots and Computer Vision VIII, SPIE, Philadelphia, PA, 1989.
3.92. B. Moghaddam and A. Pentland, “ Face recognition using view-based and modular
eigenspaces,” Automatic Systems for Identification and Inspection of Humans, SPIE,
vol. 2277, July 1994.
3.93. C.E. Davila, “ Blind KLT coding and vector quantization,” IEEE 9th DSP Workshop,
Hunt, TX, Oct. 2000.
3.94. R.D. Dony, “ Karhunen-Loeve transform,” Ch. 1 in the transform and data
compression handbook (Eds. K.R. Rao and P.C. Yip), Boca Raton, FL: CRC Press,
2001.
3.95. R. Kongkchandra, K. Tamee and C. Kimpan, “Improving Thai isolated word
recognition by using Karhunen-Loeve transformation and learning vector
quantization,” IEEE ISPACS'99, Phuket, Thailand, Dec. 1999.
3.96. R. Kongkchandra, K. Tamee and C. Kimpan, “Using Karhunen-Loeve transformation
for feature reduction and tones analysis in Thai harmonic frequency speech,” IEEE
ISPACS'99, Phuket, Thailand, Dec. 1999.
3.97. W.D. Ray and R.M. Driver, “Further decomposition of the Karhunen-Loeve series
representation of a stationary random process,” IEEE Trans. IT, Vol. IT-16, pp. 663-668, Nov. 1970.
3.98. MathWorks, MATLAB, http://www.mathworks.com
3.99. Netlib Repository, EISPACK, http://www.netlib.org/eispack
3.100. Netlib Repository, LAPACK, http://www.netlib.org/lapack
3.101. Netlib Repository, LINPACK, http://www.netlib.org/linpack
3.102. GNU Octave, http://www.che.wisc.edu/octave
3.103. Research Systems, http://www.rsinc.com
3.104. R. Cendrillon and B. Lovell, “Real-time face recognition using eigenfaces,”
SPIE/VCIP 2000, Vol. 4067, Perth, Australia, June 2000.
3.105. S.J. Akkarakaran and P.P. Vaidyanathan, “Existence and optimality of nonuniform
principal component filter banks,” EUSIPCO 2000, Tampere, Finland, Sept. 2000.
http://eusipco2000.cs.tut.fi
3.106. J.J. Eggers, J.K. Su and B. Girod, “Public key watermarking by eigenvectors of
linear transforms,” EUSIPCO 2000, Tampere, Finland, Sept. 2000.
http://eusipco2000.cs.tut.fi
3.107. A. Quddus and M. Gabbouj, “Selection of natural scale in discrete wavelet domain
using eigenvalues,” EUSIPCO 2000, Tampere, Finland, Sept. 2000.
http://eusipco2000.cs.tut.fi
3.108. Yang, “Face recognition using kernel eigenfaces,” IEEE ICIP, Vancouver, Canada,
Sept. 2000.
3.109. Rizvi, “A modular clutter rejection technique for FLIR imagery using region-based
principal component analysis,” IEEE ICIP, Vancouver, Canada, Sept. 2000.
3.110 J. K. Han and A. H. Tewfik, “Eigen-image based video segmentation and indexing,”
IEEE ICIP97, vol. 2, pp.538-541, Santa Barbara, CA, Oct. 1997.
3.111 C-Y. Chang et al, “Fast eigenspace decomposition of correlated images,” Proc. Intl.
Conf. on Intelligent Robots and Systems (IROS), vol. 1, pp. 7-12, Victoria, Canada. Oct.
1998.
3.112. D. Stefanoiu and I. Tabus, “Degenerate eigenvalues - a method to design adaptive
discrete time wavelets,” EUSIPCO 2000, Tampere, Finland, Sept. 2000.
http://eusipco2000.cs.tut.fi
3.113 S.J. Akkarakaran and P. P. Vaidyanathan, “ Principal component filter banks:
existence issues and application to modulated filter banks,” IEEE ISCAS 2000,
Geneva, Switzerland, May 2000.
3.114 K.I. Kim, “ Kernel principal component analysis for texture classification,”
IEEE TENCON, Kuala Lumpur, Malaysia, Sept. 2000.
www.cairo.utm.my/TENCON2000
3.115 C.E. Davila, “ Blind KLT coding and vector quantization,” IEEE DSP
Workshop, Hunt, TX, Oct. 2000.
3.116. C. Bregler and Y. Konig, “Eigenlips for robust speech recognition,” IEEE ICASSP,
pp. 669-672, Adelaide, Australia, 1994.
3.117. P.N. Belhumeur, J.P. Hespanha and D.J. Kriegman, “Eigenfaces vs. fisherfaces:
recognition using class specific linear projection,” IEEE Trans. PAMI, Vol. 19,
pp. 711-720, July 1997.
3.118. K. Mase and A. Pentland, “Automatic lip reading by optical-flow analysis,”
Syst. Compt. Japan, Vol. 2, pp. 67-76, Jan. 1991.
3.119. M. Turk and A. Pentland, “Face recognition,” J. Cognitive Neuroscience, Vol. 3,
pp. 71-86, Jan. 1991.
3.120. E. Sahouria and A. Zakhor, “Content analysis of video using principal
components,” IEEE Trans. CSVT, Vol. 9, pp. 1290-1298, Dec. 1999.
3.121. W. Xiangdong, “Image segmentation using eigencluster extraction,” IEEE ICASSP,
Salt Lake City, May 2001.
3.122. M. Hasan and A. Hasan, “Parallelizable eigenvalue decomposition techniques via
the matrix sector function,” IEEE ICASSP, Salt Lake City, May 2001.
3.123. T. Chen, “Principal component analysis for facial animation,” IEEE ICASSP, Salt
Lake City, May 2001.
3.124. J-Y. Gan, Y-W. Zhang and S-Y. Mao, “Application of adaptive principal
components extraction algorithm in the feature extraction of human face,” IEEE
ISIMP 2001, Hong Kong, May 2001.
3.125. M. Flierl and B. Girod, “Video coding with motion compensation for groups of
pictures,” IEEE ICIP 2002, pp. I-69 - I-72, Rochester, NY, Sept. 2002.
3.126. S. Ouyang and Z. Bao, “Fast principal component extraction by a weighted
information criterion,” IEEE Trans. SP, Vol. 50, pp. , Aug. 2002.
3.127. K. Chung, S.C. Kee and S.R. Kim, “Face recognition using principal component
analysis of Gabor filter responses,” Proc. 1999 Intl. Workshop on Recognition,
Analysis and Tracking of Faces and Gestures in Real-Time Systems, pp. 53-57, 1999.
3.128. M.M. Rahman and S. Ishikawa, “Eigenspace tuning for human standing pose
detection,” IS&T/SPIE's 15th Annual Symp., Vol. 5014, Santa Clara, CA, Jan. 2003.
3.129. B. Li and J. Wei, “Remote sensing image fusion on PCA and WT,” IS&T/SPIE's
15th Annual Symp., Vol. 5014, Santa Clara, CA, Jan. 2003.
3.130. M.E. Tipping and C.M. Bishop, “Probabilistic principal component analysis,”
Journal of the Royal Statistical Society, Vol. 61, pp. 611-622, 1999.
3.131. P.S. Chavez and J.A. Bowell, “Comparison of the spectral information content of
LANDSAT thematic mapper and SPOT for three different sites in Phoenix,
Arizona,” Photogrammetric Engineering and Remote Sensing, Vol. 54, pp. 1699-1708, 1988.
3.132 D.D. Muresan and T.W. Parks, “Adaptive Principal Components and Image
Denoising,” IEEE ICIP, Barcelona, Spain, 2003.
3.133. P. Hao and Q. Shi, “Reversible integer KLT for progressive-to-lossless
compression of multiple component images,” IEEE ICIP, Barcelona, Spain, 2003.
3.134. N. Le Bihan and S.J. Sangwine, “Quaternion principal component analysis of
color images,” IEEE ICIP, Barcelona, Spain, 2003.
3.135. Q. Zhou et al, “Natural scene synthesis using multiple eigenspaces,” IEEE ICIP,
Barcelona, Spain, 2003.
3.137. Y. Mami and D. Charlet, “Speaker identification by anchor models with
PCA/LDA post-processing,” IEEE ICASSP, Vol. I, pp. 180-183, 2003.
3.138. Y. Onishi and K. Iso, “Speaker adaptation by hierarchical eigenvoice,” IEEE
ICASSP, Vol. I, pp. 576-579, 2003.
3.139. B. Milner and X. Shao, “Low bit-rate feature vector compression using transform
coding and non-uniform bit allocation,” IEEE ICASSP, Vol. II, pp. 129-132, 2003.
3.140. F. Valente and C. Wellekens, “Minimum classification error/eigenvoices training
for speaker identification,” IEEE ICASSP, Vol. II, pp. 213-216, 2003.
3.141. D. Erdogmus et al, “On the convergence of SIPEX: a simultaneous principal
components extraction algorithm,” IEEE ICASSP, Vol. II, pp. 697-700, 2003.
3.136. K.L. Diamantaras and Th. Papadimitriou, “Blind signal separation using oriented
PCA neural models,” IEEE ICASSP, Vol. II, pp. 733-736, 2003.
3.137. K.P. Lam and S.T. Mak, “An FPGA-based eigenfilter using fast Hebbian
learning,” IEEE ICASSP, Vol. II, pp. 765-768, 2003.
3.138. S. Winter, H. Sawada and S. Makino, “Geometrical understanding of the PCA
subspace method for overdetermined blind source separation,” IEEE ICASSP,
Vol. II, pp. 769-772, 2003.
3.139. X. Wang and X. Tang, “An improved Bayesian face recognition algorithm in
PCA subspace,” IEEE ICASSP, Vol. III, pp. 129-132, 2003.
3.140. R. Pique and L. Torres, “Efficient face coding in video sequences combining
adaptive principal component analysis and a hybrid codec approach,” IEEE
ICASSP, Vol. III, pp. 629-632, 2003.
3.141. A. Kalivas, A. Tefas and I. Pitas, “Watermarking of 3D models using principal
component analysis,” IEEE ICASSP, Vol. V, pp. 676-679, 2003.
3.142. L. Congde et al, “Adaptive robust kernel PCA algorithm,” IEEE ICASSP, Vol. VI,
pp. 621-624, 2003.
3.143. P.J. Schreier and L.L. Scharf, “The Karhunen-Loeve expansion of improper
complex random signals with applications in detection,” IEEE ICASSP, Vol. VI,
pp. 717-720, 2003.
3.144. B. Lahme and R. Miranda, “Karhunen-Loeve decomposition in the presence of
symmetry - Part I,” IEEE Trans. IP, Vol. 8, pp. 1183-1190, Sept. 1999.
3.145. G.S. Koutsogiannis and J. Soraghan, “Classification and de-noising of
communication signals using kernel principal component analysis (KPCA),”
IEEE ICASSP 2002, Vol. 2, pp. 1677-1680, 2002.
3.146. H-C. Kim, D. Kim and S.Y. Bang, “Face retrieval using 1st- and 2nd-order PCA
mixture model,” IEEE ICIP 2002, Vol. , pp. , 2002.
3.147. L. Wang and T.K. Tan, “Experimental results of face description based on the
2nd-order eigenface method,” ISO/MPEG m6001, Geneva, May 2000.
3.148. L. Wang and T.K. Tan, “A new proposal for face feature description,” ISO/MPEG
m5750, Noordwijkerhout, March 2000.
3.149. I.T. Jolliffe, “Principal component analysis,” New York, NY: Springer-Verlag, 1986.
3.150. W. Zhao et al, “Discriminant analysis of principal components for face
recognition,” in Face Recognition: From Theory to Applications, Springer-Verlag,
pp. 73-85, 1998.
3.151. M. Singh, M.K. Mandal and A. Basu, “Robust KLT tracking with Gaussian
weighting functions,” IEEE Trans. PAMI (under review).
3.152. S. Chitroub, “PCA-ICA neural network model for POLSAR images analysis,”
IEEE ICASSP 2004, Montreal, Canada, May 2004.
3.153. M.Y. Kim and W.B. Kleijn, “Classified VQ of the speech signal,” IEEE Trans.
Speech and Audio Processing, Vol. 12, pp. 277-289, May 2004.
3.154. D. Carevic and T. Caelli, “Region-based coding of color images using
Karhunen-Loeve transform,” Graphical Models and Image Processing, Vol. 59,
pp. 27-38, Jan. 1997.
3.155. A. Levey and M. Lindenbaum, “Sequential Karhunen-Loeve basis extraction and
its application to images,” IEEE Trans. Image Processing, Vol. 9, pp. 1371-1374,
Aug. 2000.
3.156. M.Y. Kim and W.B. Kleijn, “KLT-based adaptive classified VQ of the speech
signal,” IEEE Trans. Speech and Audio Processing, Vol. 12, pp. 277-289, May 2004.
3.157. T. Tanaka, “Generalized subspace rules for on-line PCA and their application in
signal and image compression,” IEEE ICIP 2004, Singapore, Oct. 2004.
3.158. L. Torres and D. Prado, “A proposal for high compression of faces in video
sequences using adaptive eigenfaces,” IEEE ICIP, Rochester, NY, 2002.
3.159. R. Pique and L. Torres, “Efficient face coding in video sequences combining
adaptive principal component analysis and a hybrid codec approach,” IEEE
ICASSP, Vol. III, pp. 629-632, Hong Kong, 2003.
3.160. S. Chandrasekaran et al, “An eigenspace update algorithm for image analysis,”
Graphical Models and Image Processing, Vol. 59, pp. 321-332, Sept. 1997.
3.161. D. Carevic and T. Caelli, “Region based coding of color images using KLT,”
Graphical Models and Image Processing, Vol. 59, pp. 27-38, 1997.
3.162. D. Yang, H. Ai, C. Kyriakakis and C.-C.J. Kuo, “High-fidelity multichannel audio
coding with Karhunen-Loeve transform,” IEEE Trans. Speech and Audio
Processing, Vol. 11, pp. 365-380, July 2003.
3.163. J. Lee, “Optimized quadtree for Karhunen-Loeve transform in multispectral image
coding,” IEEE Trans. IP, Vol. 8, pp. 453-461, April 1999.
3.164. D. Yang, H. Ai, C. Kyriakakis and C.-C.J. Kuo, “An exploration of
Karhunen-Loeve transform for multichannel audio coding,” Proc. SPIE, Vol. 4207,
pp. 89-100, 2000.
3.165. D. Yang, H. Ai, C. Kyriakakis and C.-C.J. Kuo, “Adaptive Karhunen-Loeve
transform for enhanced multichannel audio coding,” Proc. SPIE, Vol. 4475,
pp. 43-54, 2001.
3.167. D.B. Graham and N.M. Allinson, “Characterizing virtual eigensignatures for
general purpose face recognition,” in Face Recognition: From Theory to
Applications, NATO ASI Series F, Computer and Systems Sciences, Vol. 163,
H. Wechsler, P.J. Phillips, V. Bruce, F. Fogelman-Soulie and T.S. Huang (eds.),
pp. 446-456, 1998.
3.168. V. Vranic, D. Saupe and J. Richter, “Tools for 3D-object retrieval:
Karhunen-Loeve transform and spherical harmonics,” Proc. IEEE 2001 Workshop
on Multimedia Signal Processing, pp. 293-298, Cannes, France, Oct. 2001.
3.169. G. Karabulut, D. Panario and A. Yongacoglu, “Integer to integer Karhunen-Loeve
transform over finite field,” IEEE ICASSP 2004, Montreal, Canada, May 2004.
3.170. P-M. Ho, T-T. Wong and C-S. Leung, “Compressing the illumination-adjustable
images with principal component analysis,” IEEE Trans. CSVT, Vol. 15,
pp. 355-364, March 2005.
3.171. I. Elishakoff, “Eigenvalues of inhomogeneous structures,” Boca Raton, FL: CRC
Press, 2005.
3.172. X. Kang et al, “Digital watermarking based on multi-band wavelet and principal
component analysis,” SPIE-VCIP 2005, Vol. 5960, pp. 1112-1118, Beijing, China,
July 2005.
3.173. D. Erdogmus et al, “Recursive principal components analysis using eigenvector
matrix perturbation,” EURASIP J. on Applied Signal Processing, Vol. 2004,
Oct. 2004.
3.174. M. Sharkas, “Application of DCT blocks with principal component analysis for
face recognition,” WSEAS Intl. Conf. on SSIP, Corfu, Greece, Aug. 2005.