Blind Decorrelation and Deconvolution Algorithm For
Multiple-Input Multiple-Output System I: theorem derivation
Tommy Yu, Daching Chen, Greg Pottie, and Kung Yao
Electrical Engineering Department
University of California, Los Angeles
Los Angeles, CA 90095-1594
ABSTRACT
The problems of blind decorrelation and blind deconvolution have attracted considerable interest recently. These two problems have traditionally been studied as two different subjects, and a variety of algorithms have been proposed to solve them. In this paper, we consider these two problems jointly in the application of a multi-sensor network and propose a new algorithm for them. In our model, the system is a MIMO (multiple-input multiple-output) system consisting of linearly independent FIR channels. The unknown inputs are assumed to be uncorrelated and persistently exciting [11]. Furthermore, the inputs can be colored sources and their distributions can be unknown. The new algorithm is capable of separating multiple input sources passing through dispersive channels. Our algorithm is a generalization of Moulines' algorithm [1] from single input to multiple inputs. The new algorithm is based on second-order statistics, which require a shorter data length than higher-order statistics algorithms for the same estimation accuracy.
Keywords: blind deconvolution, blind decorrelation
1. INTRODUCTION
There is growing interest in deploying a large number of inexpensive micro-sensor nodes with signal processing capability and using this sensor network to understand the environment. In the application of battlefield awareness, wide-area distribution of micro-sensors provides seismic, acoustic, magnetic, and imaging tactical information. These sensor measurements will be used for target detection and identification. In particular, we consider the system model for target identification using a seismic sensor network. Seismic sensor measurements can be modeled as the output of an unknown system driven by an unknown source. The unknown system characterizes the propagation medium for the seismic waves launched by the target. The target motion is modeled by unknown sources. In the scenario of multiple targets, we have an unknown MIMO system. We want to estimate both the system transfer function and the input source using only the noisy sensor measurements. The estimated system transfer function and input source will then be used as signatures for target identification.
The problem of separating multiple sources using output data only has been studied as a blind decorrelation problem. In the area of blind decorrelation algorithms, it is traditionally assumed that there is at least one non-dispersive path between each input source and the outputs. A variety of algorithms have been proposed for solving this multiple-source separation problem. Weinstein et al. [5] investigated a 2 x 2 linear time-invariant system. The two unknown inputs are assumed to be statistically uncorrelated. The algorithm further assumes the diagonal elements of the transition matrix to be 1; that is, each signal can be viewed as the interference for the other. Weinstein [5] developed an iterative algorithm for estimating the unknown channel coefficients based on cross-correlation. However, the convergence of the algorithm is not guaranteed if both channel coefficients are unknown, and it depends on the initial starting point of the algorithm. Furthermore, the algorithm only works for a 2 x 2 system, and the generalization of this algorithm to more sources can be very difficult.
Yellin [6,7] developed an algorithm for multiple-source separation based on higher-order statistics, similar to Weinstein's algorithm. However, this algorithm suffers the same drawbacks as Weinstein's method. Furthermore, since the algorithm is based on higher-order statistics, the input has to be non-Gaussian. This assumption limits the application of this algorithm in communication systems, since a high-density QAM signal is very close to a Gaussian source.
(Further author information: send correspondence to T. Yu, E-mail: [email protected])
Moulines et al. [1] proposed a blind identification algorithm for single-input multiple-output systems. The algorithm is based on second-order statistics, and it estimates the system transfer function by exploiting the subspace structure of the received data. In this model, the unknown system can be characterized by an FIR channel. The input is assumed to be persistently exciting [11] but otherwise unknown. The persistent-excitation requirement is a very loose constraint, and most broadband sources have this property. Moulines' algorithm is a type of blind deconvolution algorithm. In the context of blind deconvolution algorithms, it is traditionally assumed that there is only one input source for the system. In this paper, we generalize Moulines' method from single input to multiple inputs by exploiting the linear independence property between channels and the uncorrelatedness between input sources.
The paper is organized as follows. In Section 2, the multiple-input multiple-output problem is formulated in matrix form; in addition, the correlation matrix of the output data is formed and the subspace structure of this matrix is investigated. A blind decorrelation and deconvolution algorithm, based on the concept of subspaces, is proposed in Section 3. Conclusions are drawn and further research directions are addressed in Section 4.
2. PROBLEM FORMULATION
Consider a sensor network of N_s sensors and N_t sources. Let d_j(n) denote the signal from source j, j = 1, ..., N_t. The received signal at sensor i is given by:

    x_i(n) = \sum_{m=-\infty}^{\infty} \sum_{j=1}^{N_t} d_j(m) h_{ij}(n - m) + b_i(n)    (1)

where b_i(n) is the measurement noise at sensor i, assumed to be independent of the source signals, and h_{ij} (i = 1, ..., N_s; j = 1, ..., N_t) is the transfer function from source j to sensor i. In the multi-sensor network, h_{ij} characterizes the propagation medium from target j to sensor i.
Assume each channel can be modeled by an FIR filter of order M. Stacking N successive samples of the received signal sequence, i.e., X_i = [x_i(n), ..., x_i(n - N + 1)]^T (dim. N x 1), we obtain:

    X_i(n) = H_i D + B_i    (2)

where B_i = [b_i(n), ..., b_i(n - N + 1)]^T (dim. N x 1) and D = [D_1^T, ..., D_{N_t}^T]^T (dim. N_t(M + N) x 1). D_i is a vector of (M + N) successive samples of the i-th source signal sequence, i.e., D_i = [d_i(n), ..., d_i(n - N - M + 1)]^T. H_i is the N x N_t(M + N) filtering matrix from the multiple sources to sensor i, i.e., H_i = [H_{i1}, ..., H_{iN_t}], where H_{ij} is an N x (M + N) matrix defined as:
    H_{ij} \stackrel{\mathrm{def}}{=} \begin{pmatrix}
    h_{ij}(0) & \cdots & h_{ij}(M) &           &        & 0 \\
              & \ddots &           & \ddots    &        &   \\
    0         &        &           & h_{ij}(0) & \cdots & h_{ij}(M)
    \end{pmatrix}    (3)

where h_{ij}(k) denotes the k-th tap of h_{ij}.
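As an illustrative sketch of this construction (the function name and tap values below are my own, not from the paper), the N x (M + N) filtering matrix of Equation 3 can be built from the M + 1 taps as follows:

```python
import numpy as np

def filtering_matrix(h, N):
    """Build the N x (M+N) filtering matrix of Eq. 3.

    h : 1-D array of the M+1 taps h_ij(0), ..., h_ij(M).
    N : number of stacked output samples.
    Row r carries the taps shifted r columns to the right.
    """
    h = np.asarray(h, dtype=float)
    M = len(h) - 1
    H = np.zeros((N, M + N))
    for r in range(N):
        H[r, r:r + M + 1] = h
    return H

# Example: 3 taps (M = 2), stack N = 4 samples -> a 4 x 6 matrix.
H = filtering_matrix([1.0, 0.5, 0.25], 4)
```

Each row is a shifted copy of the taps, so multiplying this matrix by a stacked source vector performs the convolution of Equation 1 in block form.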
Combining all sensor measurements into one vector X = [X_1^T, ..., X_{N_s}^T]^T (dim. N_s N x 1), we obtain:

    X = H D + B    (4)

where B = [B_1^T, ..., B_{N_s}^T]^T (dim. N_s N x 1) and H = [H_1^T, ..., H_{N_s}^T]^T (dim. N_s N x N_t(M + N)).
As in the method of Moulines [1,4], the identification is based on the N_s N x N_s N auto-correlation matrix R_x = E{X X^T} of the measurement vector X. Since the additive measurement noise is assumed to be independent of the transmitted sequence, R_x can be expressed as:

    R_x = H R_d H^T + R_b    (5)

where R_d = E{D D^T} and R_b = E{B B^T} respectively denote the auto-correlation matrices of the transmitted symbol vector D and of the measurement noise vector B. Assume all input sources are statistically uncorrelated, that is:

    E\{d_i(t) d_j(t - \tau)\} = 0,   \forall i \neq j, \forall \tau    (6)
Then the source correlation matrix R_d (dim. N_t(M + N) x N_t(M + N)) has the following block-diagonal structure:

    R_d = \begin{pmatrix}
    R_{d_1} & 0       & \cdots & 0 \\
    0       & R_{d_2} &        & \vdots \\
    \vdots  &         & \ddots & 0 \\
    0       & \cdots  & 0      & R_{d_{N_t}}
    \end{pmatrix}    (7)

where R_{d_i} denotes the correlation matrix of source i and is assumed to be full-rank but otherwise unknown. The rank of R_d is N_t(M + N).
Before proceeding, note that matrix H must be full column rank, i.e., rank(H) = N_t(M + N). The requirements for this condition are:

1) the polynomials H^{(ij)}(z) have no common zero, where H^{(ij)}(z) is defined as:

    H^{(ij)}(z) \stackrel{\mathrm{def}}{=} \sum_{k=0}^{M} h_{ij}(k) z^k    (8)

2) N is greater than or equal to the maximum degree M of the polynomials H^{(ij)}(z), i.e., N \geq M;
3) N_s N \geq N_t(M + N);
4) at least one polynomial H^{(ij)}(z) has degree M for each j.
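As an illustrative numerical check of the full-column-rank condition (the channel taps below are hypothetical, chosen so the polynomials share no common zero), one can stack the per-sensor filtering matrices for a single source and verify the rank:

```python
import numpy as np

def filtering_matrix(h, N):
    """N x (M+N) filtering matrix of Eq. 3 for one channel with taps h."""
    h = np.asarray(h, dtype=float)
    M = len(h) - 1
    H = np.zeros((N, M + N))
    for r in range(N):
        H[r, r:r + M + 1] = h
    return H

# Hypothetical setup: Ns = 3 sensors, Nt = 1 source, M = 2, N = 4,
# so N >= M and Ns*N = 12 >= Nt*(M+N) = 6 both hold.
taps = [[1.0, 0.4, 0.2],
        [0.9, -0.3, 0.1],
        [0.5, 0.7, -0.2]]   # channel polynomials with no common zero
H = np.vstack([filtering_matrix(h, 4) for h in taps])

# Full column rank: rank(H) should equal Nt*(M+N) = 6.
full_col_rank = (np.linalg.matrix_rank(H) == H.shape[1])
```

If two channels did share a zero, the stacked matrix would lose rank and the subspace identification below would fail.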
2.1. Subspace Decomposition
In order to preserve the subspace structure of R_d, the dimension of R_x should be chosen greater than that of R_d, i.e., N_s N > N_t(M + N). Let \lambda_0 \geq \lambda_1 \geq \cdots \geq \lambda_{N_s N - 1} denote the eigenvalues of R_x. Assume the noise at all sensors is identical with power \sigma^2. Since R_d is full rank, the signal part of the correlation matrix R_x, i.e., H R_d H^T, has rank N_t(M + N), hence:

    \lambda_i > \sigma^2,   i = 0, ..., N_t(M + N) - 1
    \lambda_i = \sigma^2,   i = N_t(M + N), ..., N_s N - 1    (9)

Denote the unit-norm eigenvectors associated with the eigenvalues \lambda_0, ..., \lambda_{N_t(M+N)-1} by S_0, ..., S_{N_t(M+N)-1}, and those corresponding to \lambda_{N_t(M+N)}, ..., \lambda_{N_s N - 1} by G_0, ..., G_{N_s N - N_t(M+N) - 1}. Also define S (dim. N_s N x N_t(M + N)) and G (dim. N_s N x (N_s N - N_t(M + N))) as follows:

    S = [S_0, ..., S_{N_t(M+N)-1}],   G = [G_0, ..., G_{N_s N - N_t(M+N) - 1}]    (10)

The correlation matrix R_x can thus also be expressed as:

    R_x = S diag(\lambda_0, ..., \lambda_{N_t(M+N)-1}) S^T + \sigma^2 G G^T    (11)
The columns of matrix S span the so-called signal subspace (dim. N_t(M + N)), while the columns of G span its orthogonal complement, the noise subspace. Since H is a full column rank matrix, the signal subspace is also the linear space spanned by the columns of H. By the orthogonality between the noise and signal subspaces, the columns of H are orthogonal to any vector in the noise subspace. That is,

    G_i^T H = 0,   0 \leq i \leq N_s N - N_t(M + N) - 1    (12)
Furthermore, letting H = [K_1, ..., K_{N_t}] where K_i = [H_{1i}^T, ..., H_{N_s i}^T]^T, the following holds as well:

    G_i^T K_j = 0,   0 \leq i \leq N_s N - N_t(M + N) - 1,   1 \leq j \leq N_t    (13)

This relation is the basis for the new decorrelation and deconvolution algorithm. Note that the relation is linear in the unknown coefficients. This opens the possibility of simple identification procedures, provided this equation actually characterizes the channel coefficients.
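A minimal numerical sketch of the eigendecomposition split in Equations 9-12 (the dimensions and the simulated R_x below are my own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: Ns*N = 12 observations, signal rank Nt*(M+N) = 6.
dim, sig_rank, sigma2 = 12, 6, 0.1

# Simulate Rx = H Rd H^T + sigma^2 I (Eq. 5) with a random full-column-rank H.
H = rng.standard_normal((dim, sig_rank))
Rd = np.eye(sig_rank)                  # uncorrelated unit-power sources
Rx = H @ Rd @ H.T + sigma2 * np.eye(dim)

# eigh returns eigenvalues in ascending order, so the split of Eq. 9 is:
w, V = np.linalg.eigh(Rx)
S = V[:, -sig_rank:]        # signal subspace: largest eigenvalues (Eq. 10)
G = V[:, :dim - sig_rank]   # noise subspace: eigenvalues equal to sigma^2

# Orthogonality of Eq. 12: the columns of H lie in the signal subspace.
residual = np.max(np.abs(G.T @ H))
```

The noise eigenvalues come out exactly equal to the noise power, and the noise eigenvectors annihilate the columns of H, which is the relation the identification algorithm exploits.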
Let G_i = [G_{i1}^T, ..., G_{iN_s}^T]^T, where G_{ik} is an N x 1 vector, and define \mathcal{G}_i = [\mathcal{G}_{i1}^T, ..., \mathcal{G}_{iN_s}^T]^T (dim. N_s(M + 1) x (M + N)), where \mathcal{G}_{ik} (dim. (M + 1) x (M + N)) has the following structure:

    \mathcal{G}_{ik} = \begin{pmatrix}
    G_{ik}(1) & G_{ik}(2) & \cdots & G_{ik}(N) &           & 0 \\
              & \ddots    &        &           & \ddots    &   \\
    0         &           &        & G_{ik}(1) & \cdots & G_{ik}(N)
    \end{pmatrix}    (14)

where G_{ik}(l) denotes the l-th coefficient of vector G_{ik}.

From Lemma 1 in Moulines [1], Equation 13 can be rewritten in the following form:

    h_j^T \mathcal{G}_i = 0,   0 \leq i \leq N_s N - N_t(M + N) - 1,   1 \leq j \leq N_t    (15)
Before proceeding, we state the following lemma; a proof is given in Appendix A.

Lemma 1: If H is a full column rank matrix, any matrix F which spans the same column space and has the same filtering structure as H must have the following property: any filtering matrix F_k (1 \leq k \leq N_t) in F is a linear combination of all the K_j's (1 \leq j \leq N_t) in H, and vice versa.

We show in the next section how the above relations can be used to estimate the channel coefficients even when R_d is unknown. Similarly to Moulines' algorithm, this method relies on the specific structure of H, the filtering matrix.
3. ALGORITHM FORMULATION
A blind identification procedure consists of estimating the N_s N_t(M + 1) x 1 vector h of channel coefficients, i.e., h = [h_1^T, ..., h_{N_t}^T]^T with h_j = [h_{1j}(0), ..., h_{1j}(M), ..., h_{N_s j}(0), ..., h_{N_s j}(M)]^T (dim. N_s(M + 1) x 1), solely from the observations of X.
3.1. Subspace-Based Parameter Estimation Scheme
The orthogonality condition is linear in the channel coefficients. In practice, only sample estimates of the noise eigenvector matrices \mathcal{G}_i are available, and h_j is solved for in the least-squares sense. Furthermore, it can be shown that the h_j (j = 1, ..., N_t) are linearly independent vectors. The system identification problem can be restated as solving for N_t linearly independent vectors h_j (j = 1, ..., N_t) that minimize the following quadratic form:

    q(h_j) = h_j^T Q h_j,   where   Q = \sum_{i=1}^{N_s N - N_t(M + N)} \mathcal{G}_i \mathcal{G}_i^T    (16)
Estimates of h_j can be obtained by minimizing q(h_j) subject to a properly chosen constraint that avoids the trivial solution h_j = 0. In this new algorithm, a quadratic constraint is chosen: minimize q(h_j) subject to ||h_j|| = 1. If there is only one source (N_t = 1), the solution is the unit-norm eigenvector associated with the smallest eigenvalue of matrix Q; this is the result Moulines presented in [1]. However, in the case of multiple sources (N_t > 1), there are N_t vectors, and we need to compute the N_t eigenvectors associated with the N_t smallest eigenvalues of Q.
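The constrained minimization reduces to an ordinary symmetric eigenproblem; a minimal sketch (the function name and the toy Q are my own):

```python
import numpy as np

def smallest_eigenvectors(Q, Nt):
    """Return the Nt unit-norm eigenvectors of symmetric Q with the
    smallest eigenvalues, i.e., the minimizers of h^T Q h with ||h|| = 1."""
    w, V = np.linalg.eigh(Q)   # eigenvalues in ascending order
    return V[:, :Nt]

# Toy example: for a diagonal Q the minimizers are the coordinate axes
# of the two smallest diagonal entries (0.1 and 0.5 below).
Q = np.diag([5.0, 0.5, 3.0, 0.1])
F = smallest_eigenvectors(Q, 2)
```

For N_t = 1 this reproduces the single smallest eigenvector of Moulines' method; for N_t > 1 it returns an orthonormal basis whose relation to the true channel vectors is resolved by the matrix C of Equation 17.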
Denote the N_t unit-norm eigenvectors associated with the N_t smallest eigenvalues of matrix Q by f_i (i = 1, ..., N_t). However, since the true channel coefficient vectors h_i (i = 1, ..., N_t) need not be orthogonal to each other, the eigenvectors f_i only form a basis for the span of the true channel coefficient vectors:

    [h_1, ..., h_{N_t}] = [f_1, ..., f_{N_t}] C    (17)

where C is an N_t x N_t matrix.
Let f_i = [f_{i1}^T, ..., f_{iN_s}^T]^T (dim. N_s(M + 1) x 1), where f_{ij} is an (M + 1) x 1 vector. Define F = [F_1, ..., F_{N_t}] (dim. N_s N x N_t(M + N)) and F_i = [F_{i1}^T, ..., F_{iN_s}^T]^T (dim. N_s N x (M + N)), where F_{ij} (dim. N x (M + N)) has the following structure:

    F_{ij} = \begin{pmatrix}
    f_{ij}(0) & \cdots & f_{ij}(M) &           &        & 0 \\
              & \ddots &           & \ddots    &        &   \\
    0         &        &           & f_{ij}(0) & \cdots & f_{ij}(M)
    \end{pmatrix}    (18)
Let y_j(n) (j = 1, ..., N_t) be the signals that satisfy:

    y_j(n) = \sum_{k=1}^{N_t} c_{jk} d_k(n)    (19)

Let Y_i be M + N successive samples of y_i(n) and Y = [Y_1^T, ..., Y_{N_t}^T]^T. It can be shown that:

    X = F Y + B    (20)

After F is determined, a linear deconvolution filter E (dim. N_t(M + N) x N_s N) can be found as:

    E = (F^T F)^{-1} F^T    (21)
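Equation 21 is the left pseudoinverse of F; a minimal sketch with a hypothetical full-column-rank F and a noise-free observation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical full-column-rank F (dim. Ns*N x Nt*(M+N) = 12 x 6).
F = rng.standard_normal((12, 6))
Y = rng.standard_normal(6)      # stacked mixed-source vector
X = F @ Y                       # noise-free observation of Eq. 20

# Eq. 21: E = (F^T F)^{-1} F^T, the left pseudoinverse of F.
E = np.linalg.inv(F.T @ F) @ F.T
Y_hat = E @ X                   # recovers Y exactly in the noise-free case
```

With noise present, E X is the least-squares estimate of Y; numerically one would typically use np.linalg.pinv or lstsq rather than forming (F^T F)^{-1} explicitly.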
We can recover y_j(n) (j = 1, ..., N_t) using this deconvolution filter E. However, the objective of the decorrelation and deconvolution algorithm is to recover d_k(n) (k = 1, ..., N_t). From Equation 19, we observe that y_j(n) is a linear combination of the d_k(n). Using the property stated in Equation 6, recovering d_k(n) from y_j(n) becomes a problem of separating a mixture of independent signals. This problem can be solved readily using an algorithm proposed by Molgedey [8], which can be summarized as follows:

1. Once the y_j(n) (j = 1, ..., N_t) have been calculated, generate the matrices M_0 (symmetric correlation matrix) and M_1 (time-delayed correlation matrix):

    (M_0)_{ij} = E\{y_i(n) y_j(n)\},   (M_1)_{ij} = E\{y_i(n) y_j(n + l)\}    (22)

2. C can then be solved for as an eigenvalue problem. That is:

    (M_0 M_1^{-1}) C = C (\Lambda_0 \Lambda_1^{-1})    (23)

where \Lambda_0, \Lambda_1 are diagonal matrices.
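A sketch of the two steps above (the AR(1) sources, mixing matrix, and parameters are hypothetical, my own choices for illustration); the eigenvectors of M_0 M_1^{-1} recover the mixing matrix C up to scaling and permutation:

```python
import numpy as np

rng = np.random.default_rng(2)
T, l = 50_000, 1

def ar1(a, n):
    """Colored (AR(1)) source; distinct spectra give distinct lagged statistics."""
    d = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        d[t] = a * d[t - 1] + e[t]
    return d

D = np.vstack([ar1(0.9, T), ar1(0.2, T)])    # two independent sources d_k(n)
C_true = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix C
Y = C_true @ D                               # mixtures y_j(n) of Eq. 19

# Step 1 (Eq. 22): zero-lag and lag-l sample correlation matrices.
M0 = Y @ Y.T / T
M1 = Y[:, :-l] @ Y[:, l:].T / (T - l)
M1 = (M1 + M1.T) / 2                         # symmetrize the sample estimate

# Step 2 (Eq. 23): eigenvectors of M0 M1^{-1} estimate the columns of C.
_, C_est = np.linalg.eig(M0 @ np.linalg.inv(M1))
C_est = C_est.real                           # eigenvalues are real here

# Unmixing: rows of C_est^{-1} Y match the sources up to scale and order.
D_hat = np.linalg.inv(C_est) @ Y
corr = np.corrcoef(np.vstack([D_hat, D]))[:2, 2:]
```

The method relies on the sources being colored with distinct spectra, so that the lagged correlation ratios (the diagonal of \Lambda_0 \Lambda_1^{-1}) are distinct and the eigendecomposition is unique.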
Once C is determined, the system transfer function coefficients h = [h_1^T, ..., h_{N_t}^T]^T can be solved for using Equation 17. Let C^{-1} be the inverse of C. The input sources D = [D_1^T, ..., D_{N_t}^T]^T can then be recovered as:

    D_i = \sum_{j=1}^{N_t} C^{-1}_{ij} Y_j    (24)
Before proceeding, we address the computational complexity of this new algorithm.

1. Deconvolution: In this operation, we estimate the channel dispersion of the system and equalize it using the deconvolution filter E. As a result, we reduce the MIMO system with FIR channels to an instantaneous mixture system C. In this process, we perform one eigenvalue decomposition of R_x (dim. N N_s x N N_s), one eigenvalue decomposition of Q (dim. N_s(M + 1) x N_s(M + 1)), and one matrix inversion involving F (dim. N_s N x N_t(M + N)). From the conditions stated for N_s, N_t, M, and N, the computation is dominated by the eigenvalue decomposition of R_x, which is an O((N N_s)^3) operation.

2. Decorrelation: In this operation, we separate the mixture of independent sources using Molgedey's algorithm. In this process, we perform one matrix inversion of M_1 (dim. N_t x N_t), one matrix inversion of C (dim. N_t x N_t), and one eigenvalue decomposition of M_0 M_1^{-1} (dim. N_t x N_t). In this case, the computation is dominated by the eigenvalue decomposition, an O(N_t^3) operation. In comparison to the deconvolution operation, the computation of the decorrelation operation is negligible.
3.2. Variations of the Subspace-Based Parameter Estimation Scheme
We have shown the subspace-based parameter estimation scheme using the noise subspace of the sensor data. As with Moulines' algorithm, our algorithm can be modified to use the signal subspace of the sensor data for parameter estimation.

The system identification based on the signal subspace can be stated as solving for N_t linearly independent vectors h_j that maximize the following quadratic form:

    \tilde{q}(h_j) = h_j^T \tilde{Q} h_j,   where   \tilde{Q} = \sum_{i=1}^{N_t(M + N)} \mathcal{S}_i \mathcal{S}_i^T    (25)

where \mathcal{S}_i is the matrix associated with eigenvector S_i and is defined similarly to \mathcal{G}_i.

Under the unit-norm constraint, both the noise- and signal-related quadratic forms give identical solutions. The comparison of computational complexity between the two depends on the values of N_s, N_t, M, and N.
4. CONCLUSION
This paper presents a new algorithm for decorrelation and deconvolution in multiple-input multiple-output systems. The new algorithm improves on previous algorithms [1,5,6,8]. We consider blind decorrelation and blind deconvolution jointly and solve the two problems together. This greatly expands the range of problems to which blind decorrelation and blind deconvolution can be applied: it becomes possible to perform blind deconvolution even when there is more than one input source, and, conversely, blind decorrelation becomes possible for problems in which all the channels are dispersive.

The algorithm operates on block data only. In real-life applications, it is preferable to have the algorithm operate in an adaptive mode; that is, the algorithm should estimate the channel response and the deconvolution filter continuously as new data arrive. Further work is in progress in this area.
5. APPENDIX A
PROOF OF LEMMA 1
Let H be defined as in Equation 4 and have full column rank. Assume F has the same structure as H and spans the same column space as H, which implies F also has full column rank. For any filtering matrix K_i in H, define an N_s x 1 vector h_j(m) as

    h_j(m) = [h_{1j}(m), h_{2j}(m), ..., h_{N_s j}(m)]^T    (26)

If we properly interchange the rows of H_j, we obtain a new N_s N x (M + N) block-Toeplitz matrix H_j^N:

    H_j^N = \begin{bmatrix}
    h_j(0) & h_j(1) & \cdots & h_j(M) &        &        & 0 \\
           & h_j(0) & h_j(1) & \cdots & h_j(M) &        &   \\
           &        & \ddots &        &        & \ddots &   \\
    0      &        &        & h_j(0) & h_j(1) & \cdots & h_j(M)
    \end{bmatrix}    (27)

Since matrix H_j^N is obtained from H_j through an interchange of rows, their column spaces are canonically equivalent. The superscript N emphasizes that the matrix has N "block rows". Similarly, we can define f_j(m) and F_j^N for F. H_j^N can be expressed in the following two forms:

    H_j^N = \begin{bmatrix} h_j(0) & P_j^{N-1} \\ 0_{N_s(N-1)} & H_j^{N-1} \end{bmatrix}
          = \begin{bmatrix} H_j^{N-1} & 0_{N_s(N-1)} \\ Q_j^{N-1} & h_j(M) \end{bmatrix}    (28)

where 0_{N_s(N-1)} (dim. N_s(N - 1) x 1) is an all-zero vector, the matrix P_j^{N-1} is defined as [h_j(1), h_j(2), ..., h_j(M), 0_{N_s x (N-1)}], and the matrix Q_j^{N-1} is defined as [0_{N_s x (N-1)}, h_j(0), h_j(1), ..., h_j(M-1)].
Since H and F span the same column space, so do H^N and F^N. Therefore, there exist a set of scalars \alpha_k(0) (1 \leq k \leq N_t) and a set of vectors x_k(0) (1 \leq k \leq N_t) such that

    \begin{bmatrix} h_j(0) \\ 0_{N_s(N-1)} \end{bmatrix}
    = \sum_{k=1}^{N_t} \begin{bmatrix} f_k(0) & P_k^{N-1} \\ 0_{N_s(N-1)} & F_k^{N-1} \end{bmatrix}
      \begin{bmatrix} \alpha_k(0) \\ x_k(0) \end{bmatrix}    (29)

This linear system can be rewritten as

    h_j(0) = \sum_{k=1}^{N_t} [\alpha_k(0) f_k(0) + P_k^{N-1} x_k(0)]
    0_{N_s(N-1)} = \sum_{k=1}^{N_t} F_k^{N-1} x_k(0)    (30)
F^{N-1} = [F_1^{N-1}, ..., F_{N_t}^{N-1}] is a full column rank matrix. Therefore,

    x_k(0) = 0,   h_j(0) = \sum_{k=1}^{N_t} \alpha_k(0) f_k(0)    (31)
i.e., the vector h_j(0) is a linear combination of all the f_k(0) (1 \leq k \leq N_t). Using the fact that [h_j(1)^T, h_j(0)^T, 0^T_{N_s(N-2)}]^T is in range(F^N) together with Equation 31, it can be shown that there exist (M + N - 1) x 1 vectors x_k(1) satisfying the following:

    h_j(1) = \sum_{k=1}^{N_t} [\alpha_k(1) f_k(0) + P_k^{N-1} x_k(1)]
    \begin{bmatrix} h_j(0) \\ 0_{N_s(N-2)} \end{bmatrix} = \sum_{k=1}^{N_t} F_k^{N-1} x_k(1)    (32)

Using the full-rank property of F^{N-1} and Equation 31, Equation 32 yields x_k(1) = [\alpha_k(0), 0^T_{M+N-2}]^T for all k. Substituting this back into Equation 32,

    h_j(1) = \sum_{k=1}^{N_t} \alpha_k(1) f_k(0) + \sum_{k=1}^{N_t} \alpha_k(0) f_k(1)    (33)
Repeating this process (M + 1) times, we have the following:

    h_j(0) = \sum_{k=1}^{N_t} \alpha_k(0) f_k(0)
    h_j(1) = \sum_{k=1}^{N_t} \alpha_k(1) f_k(0) + \sum_{k=1}^{N_t} \alpha_k(0) f_k(1)
    ...
    h_j(M) = \sum_{k=1}^{N_t} \alpha_k(M) f_k(0) + \sum_{k=1}^{N_t} \alpha_k(M-1) f_k(1) + \cdots + \sum_{k=1}^{N_t} \alpha_k(0) f_k(M)    (34)
On the other hand, we can obtain the following using the second expression in Equation 28:

    h_j(0) = \sum_{k=1}^{N_t} \beta_k(0) f_k(M) + \sum_{k=1}^{N_t} \beta_k(1) f_k(M-1) + \cdots + \sum_{k=1}^{N_t} \beta_k(M) f_k(0)
    h_j(1) = \sum_{k=1}^{N_t} \beta_k(1) f_k(M) + \cdots + \sum_{k=1}^{N_t} \beta_k(M) f_k(1)
    ...
    h_j(M) = \sum_{k=1}^{N_t} \beta_k(M) f_k(M)    (35)
Furthermore, the following relation is true:

    F^{M+1} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_{N_t} \end{bmatrix}
    = F^{M+1} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_{N_t} \end{bmatrix}    (36)

where \alpha_k and \beta_k are defined as

    \alpha_k = [\alpha_k(M), \alpha_k(M-1), ..., \alpha_k(0), 0_M^T]^T
    \beta_k = [0_M^T, \beta_k(M), \beta_k(M-1), ..., \beta_k(0)]^T    (37)

Since F^{M+1} is a full column rank matrix, the only solution to Equation 36 is \alpha_k = \beta_k for all k. This implies (i) \alpha_k(0) = \beta_k(M) for all k, (ii) \alpha_k(m) = 0 for 1 \leq m \leq M, and (iii) \beta_k(m) = 0 for 0 \leq m \leq M - 1. Plugging these results back into Equation 34, we conclude

    H_j^N = \sum_{k=1}^{N_t} \alpha_k(0) F_k^N    (38)

That is, H_j can be expressed as a linear combination of the F_k (1 \leq k \leq N_t) for every 1 \leq j \leq N_t. This completes the proof.
ACKNOWLEDGMENTS
This work was partially supported by DARPA ETO DABT6863-95-C-0050, DARPA TTO F04701-97-C-0010, and a
TRW Ph.D. Fellowship.
REFERENCES
1. E. Moulines, P. Duhamel, J. Cardoso, and S. Mayrargue, "Subspace Methods for the Blind Identification of Multichannel FIR Filters", IEEE Trans. Signal Processing, vol. 43, pp. 516-525, 1995.
2. M. Gurelli and C. Nikias, "EVAM: An Eigenvector-Based Algorithm for Multichannel Blind Deconvolution of Input Colored Signals", IEEE Trans. Signal Processing, vol. 43, pp. 134-149, 1995.
3. M. Kristensson and B. Ottersten, "A Statistical Approach to Subspace Based Blind Identification", IEEE Trans. Signal Processing, vol. 46, pp. 1612-1623, 1998.
4. K. Abed-Meraim, J. Cardoso, A. Gorokhov, P. Loubaton, and E. Moulines, "On Subspace Methods for Blind Identification of Single-Input Multiple-Output FIR Systems", IEEE Trans. Signal Processing, vol. 45, pp. 42-55, 1997.
5. E. Weinstein, M. Feder, and A. Oppenheim, "Multi-Channel Signal Separation by Decorrelation", IEEE Trans. Speech and Audio Processing, vol. 1, pp. 405-413, 1993.
6. D. Yellin and E. Weinstein, "Criteria for Multichannel Signal Separation", IEEE Trans. Signal Processing, vol. 42, pp. 2158-2168, 1994.
7. D. Yellin and E. Weinstein, "Multichannel Signal Separation: Methods and Analysis", IEEE Trans. Signal Processing, vol. 44, pp. 106-118, 1996.
8. L. Molgedey and H. G. Schuster, "Separation of a Mixture of Independent Signals Using Time Delayed Correlations", Physical Review Letters, vol. 72, pp. 3634-3637, 1994.
9. O. Shalvi and E. Weinstein, "Super-Exponential Methods for Blind Deconvolution", IEEE Trans. Inform. Theory, vol. 39, pp. 504-519, 1993.
10. L. Tong, G. Xu, and T. Kailath, "Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach", IEEE Trans. Inform. Theory, vol. 40, pp. 340-349, 1994.
11. L. Ljung, System Identification: Theory for the User, 1st ed., Englewood Cliffs, NJ: Prentice-Hall, 1987.