Time–Frequency Formulation, Design, and
Implementation of Time-Varying Optimal Filters for
Signal Estimation
Franz Hlawatsch, Member, IEEE, Gerald Matz, Student Member, IEEE, Heinrich Kirchauer, and
Werner Kozek, Member, IEEE
Abstract—This paper presents a time–frequency framework
for optimal linear filters (signal estimators) in nonstationary
environments. We develop time–frequency formulations for the
optimal linear filter (time-varying Wiener filter) and the optimal
linear time-varying filter under a projection side constraint. These
time–frequency formulations extend the simple and intuitive
spectral representations that are valid in the stationary case to
the practically important case of underspread nonstationary processes. Furthermore, we propose an approximate time–frequency
design of both optimal filters, and we present bounds that show
that for underspread processes, the time-frequency designed
filters are nearly optimal. We also introduce extended filter design
schemes using a weighted error criterion, and we discuss an
efficient time–frequency implementation of optimal filters using
multiwindow short-time Fourier transforms. Our theoretical
results are illustrated by numerical simulations.
Index Terms—Nonstationary random processes, optimal filters,
signal enhancement, signal estimation, time-frequency analysis,
time-varying systems, Wiener filters.
I. INTRODUCTION
THE enhancement or estimation of signals corrupted by noise or interference is important in many signal processing applications. For stationary random processes, the mean-square error optimal linear estimator is the (time-invariant) Wiener filter [1]–[7], whose transfer function is given by

$H_W(f) = \dfrac{P_s(f)}{P_s(f) + P_n(f)}$  (1)

where $P_s(f)$ and $P_n(f)$ denote the power spectral densities of the signal and noise processes, respectively. The extension of the Wiener filter to nonstationary processes yields a linear, time-varying filter (“time-varying Wiener filter”) [2], [4]–[7] for which the simple frequency-domain formulation in (1) is
Manuscript received June 17, 1998; revised September 20, 1999. This work
was supported by FWF Grants P10012-ÖPH and P11904-TEC. The associate
editor coordinating the review of this paper and approving it for publication was
Prof. P. C. Ching.
F. Hlawatsch and G. Matz are with the Institute of Communications and
Radio-Frequency Engineering, Vienna University of Technology, Vienna,
Austria.
H. Kirchauer is with Intel Corp., Santa Clara CA 95052 USA.
W. Kozek is with Siemens AG, Munich, Germany.
Publisher Item Identifier S 1053-587X(00)03289-X.
no longer valid. We may ask whether a similarly simple formulation can be obtained by introducing an explicit time dependence, i.e., whether in a joint time–frequency (TF) domain the time-varying Wiener filter can be formulated as

$H_W(t,f) = \dfrac{P_s(t,f)}{P_s(t,f) + P_n(t,f)}$  (2)

where $H_W(t,f)$, $P_s(t,f)$, and $P_n(t,f)$ are suitably defined. Such a TF formulation would greatly facilitate the interpretation, analysis, design, and implementation of time-varying Wiener filters.
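The following is a minimal numerical sketch of the stationary Wiener weighting in (1), assuming the PSDs are known on a discrete frequency grid; the toy PSD model, the white-noise level, and all variable names are illustrative assumptions rather than material from the paper.

import numpy as np

def wiener_weight(psd_s, psd_n, eps=1e-12):
    # Frequency-domain Wiener weighting H_W(f) = P_s(f) / (P_s(f) + P_n(f)), cf. (1).
    return psd_s / (psd_s + psd_n + eps)

rng = np.random.default_rng(0)
N = 256
f = np.fft.fftfreq(N)
psd_s = 1.0 / (1.0 + (10.0 * f) ** 2)      # toy lowpass signal PSD
psd_n = 0.1 * np.ones(N)                   # white noise PSD

H = wiener_weight(psd_s, psd_n)
x = rng.standard_normal(N)                 # stand-in for a noisy observation x(t)
s_hat = np.real(np.fft.ifft(H * np.fft.fft(x)))   # estimate via frequency-domain weighting

For stationary processes this per-frequency division is the entire design; the rest of the paper asks when an analogous per-TF-point division, as conjectured in (2), is justified.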
In this paper, we provide an answer to this and several other
questions of theoretical and practical importance.
• We show that for underspread [8]–[12] nonstationary processes, the TF formulation in (2) is approximately valid if $H_W(t,f)$ and $P_s(t,f)$, $P_n(t,f)$ are chosen as the Weyl symbol [13]–[16] and the Wigner–Ville spectrum (WVS) [12], [17]–[19], respectively. We present upper bounds on the associated approximation errors.
• We propose an efficient, intuitive, and nearly optimal TF
design of signal estimators that is easily adapted to modified (weighted) error criteria.
• We discuss an efficient TF implementation of optimal filters using the multiwindow short-time Fourier transform
(STFT) [8], [20].
• We also consider the TF formulation, approximate TF
design, and TF implementation of an optimal projection
filter.
Previous work on the TF formulation of time-varying Wiener filters [21]–[24] has mostly used Zadeh's time-varying transfer function [25], [26] for $H_W(t,f)$ and the evolutionary spectrum [12], [27]–[29] for $P_s(t,f)$ and $P_n(t,f)$. A Wiener filtering procedure using a time-varying spectrum based on subband AR models has been proposed in [30]. A somewhat different
approach is the TF implementation of signal enhancement
by means of a TF weighting applied to a linear TF signal
representation such as the STFT, the wavelet transform, or a
filterbank/subband representation. In [31] and [32], the STFT
is multiplied by a TF weighting function similar to (2), where $P_s(t,f)$ and $P_n(t,f)$ are chosen as the physical spectrum [18], [19], [33]. Wiener filter-type modifications of the STFT and
subband signals have long been used for speech enhancement
[34]–[37]. Methods that perform a Wiener-type weighting of
wavelet transform coefficients are described in [38]–[41].
The TF formulations proposed in this paper differ from the
above-mentioned work on several points:
• While, usually, Zadeh's time-varying transfer function and the evolutionary spectrum or physical spectrum are chosen for $H_W(t,f)$ and $P_s(t,f)$, $P_n(t,f)$, respectively,
our development is based on the Weyl symbol and the
WVS. This has important advantages, especially regarding TF resolution. Specifically, the Weyl symbol and
the WVS are much better suited to systems and processes
with chirp-like characteristics. Furthermore, contrary to
the evolutionary spectrum and physical spectrum, the
WVS is in one-to-one correspondence with the correlation
function.
• Our TF formulations are not merely heuristic but are
shown to yield approximations to the optimal filters
featuring nearly optimal performance in the case of
jointly underspread signal and noise processes. We
present bounds on the associated approximation errors,
and we discuss explicit conditions for the validity of our
TF formulations. In particular, we show that the usual
quasi-stationarity assumption is neither a sufficient nor a
necessary condition.
• Contrary to the conventional heuristic STFT enhancement schemes, the multiwindow STFT implementation
discussed here is based on a theoretical analysis that
shows how to choose the STFT window functions and
how well the multiwindow STFT filter approximates the
optimal filter.
The paper is organized as follows. Section II reviews the
time-varying Wiener filter and the optimal time-varying projection filter. Section III reviews some TF representations and
the concept of underspread systems and processes. In Section
IV, a TF formulation of optimal filters is developed. Based on
this TF formulation, Section V introduces simple and intuitive
TF design methods for optimal filters. In Section VI, a multiwindow STFT implementation of optimal filters is discussed.
Finally, Section VII presents numerical simulations that compare the performance of the various filters and verify the theoretical results.
II. OPTIMAL FILTERS
This section reviews the theory of the time-varying Wiener
filter [2], [4]–[7] and a recently proposed “projection-constrained” optimal filter [42]. For stationary processes, there
exist simple frequency-domain formulations of both optimal
filters. These will be generalized to the nonstationary case in
Section IV.
A. Time-Varying Wiener Filter
Let $s(t)$ be a zero-mean, nonstationary, circular complex or real random process with known correlation function $r_s(t,t') = \mathrm{E}\{s(t)\,s^{*}(t')\}$ or, equivalently, correlation operator¹ $\mathbf{R}_s$. This signal process is contaminated by additive zero-mean, nonstationary, circular complex or real random noise $n(t)$ with known correlation operator $\mathbf{R}_n$. Signal $s(t)$ and noise $n(t)$ are assumed to be uncorrelated, i.e., $\mathrm{E}\{s(t)\,n^{*}(t')\} \equiv 0$. We form an estimate $\hat{s}(t)$ of the signal $s(t)$ from the noisy observation $x(t) = s(t) + n(t)$ using a linear, generally time-varying system (linear operator [43]) $\mathbf{H}$ with impulse response (kernel) $h(t,t')$,²

$\hat{s}(t) = (\mathbf{H}x)(t) = \int h(t,t')\,x(t')\,dt'.$

Our performance index will be the mean-square error (MSE), i.e., the expected energy $\mathrm{E}\{\|\hat{s}-s\|^{2}\}$ of the estimation error $\hat{s}(t)-s(t)$, which can be shown to be given by

$\varepsilon(\mathbf{H}) = \mathrm{E}\{\|\hat{s}-s\|^{2}\} = \mathrm{tr}\{\mathbf{H}\mathbf{R}_n\mathbf{H}^{+}\} + \mathrm{tr}\{(\mathbf{H}-\mathbf{I})\mathbf{R}_s(\mathbf{H}-\mathbf{I})^{+}\}.$  (3)

Here, $\mathrm{tr}\{\cdot\}$ denotes the trace of an operator, $^{+}$ denotes the adjoint of an operator [43], and $\mathrm{tr}\{\mathbf{H}\mathbf{R}_n\mathbf{H}^{+}\}$ and $\mathrm{tr}\{(\mathbf{H}-\mathbf{I})\mathbf{R}_s(\mathbf{H}-\mathbf{I})^{+}\}$ are the expected energies of the residual noise $(\mathbf{H}n)(t)$ and the signal distortion $((\mathbf{H}-\mathbf{I})s)(t)$, respectively. The linear system minimizing the MSE $\varepsilon(\mathbf{H})$ can be shown [2], [4]–[7] to satisfy

$\mathbf{H}_W\,(\mathbf{R}_s + \mathbf{R}_n) = \mathbf{R}_s.$  (4)

The solution of (4) with minimal rank³ is the “time-varying Wiener filter”

$\mathbf{H}_W = \mathbf{R}_s\,(\mathbf{R}_s + \mathbf{R}_n)^{\#}$  (5)

where $(\mathbf{R}_s + \mathbf{R}_n)^{\#}$ denotes the (pseudo-)inverse of $\mathbf{R}_s + \mathbf{R}_n$ on its range. The minimum MSE $\varepsilon_W = \varepsilon(\mathbf{H}_W)$ achieved by the Wiener filter is given by

$\varepsilon_W = \mathrm{tr}\{\mathbf{R}_s\} - \mathrm{tr}\{\mathbf{H}_W\mathbf{R}_s\} = \mathrm{tr}\{\mathbf{H}_W\mathbf{R}_n\}.$  (6)
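In discrete time the correlation operators are matrices (see footnote 1), and (5) and (6) can then be evaluated directly with linear algebra. The sketch below is only an illustration of that computation; the use of a Moore-Penrose pseudo-inverse as the inverse on the range, the toy correlation matrices, and all names are assumptions of this example.

import numpy as np

def tv_wiener_filter(R_s, R_n):
    # Minimal-rank time-varying Wiener filter H_W = R_s (R_s + R_n)^#, cf. (5).
    return R_s @ np.linalg.pinv(R_s + R_n)

def wiener_mse(R_s, R_n):
    # Minimum MSE tr{R_s} - tr{H_W R_s} (= tr{H_W R_n}), cf. (6).
    H_W = tv_wiener_filter(R_s, R_n)
    return float(np.trace(R_s - H_W @ R_s).real)

# toy example with positive semidefinite correlation matrices
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 8)); R_s = A @ A.T / 8            # low-rank signal correlation
B = rng.standard_normal((64, 64)); R_n = 0.1 * (B @ B.T) / 64  # full-rank noise correlation
H_W = tv_wiener_filter(R_s, R_n)
print("minimum MSE:", wiener_mse(R_s, R_n))

The cubic cost of the pseudo-inverse in this direct computation is one of the practical burdens that the TF design of Section V avoids.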
In order to interpret the time-varying Wiener filter $\mathbf{H}_W$, let us define the signal space $\mathcal{S} = \mathrm{range}\{\mathbf{R}_s\}$ and the noise space $\mathcal{N} = \mathrm{range}\{\mathbf{R}_n\}$. Since $x(t) = s(t) + n(t)$, the observation space is given by $\mathcal{X} = \mathcal{S} + \mathcal{N}$. We note that⁴ $s(t) \in \mathcal{S}$, $n(t) \in \mathcal{N}$, and $x(t) \in \mathcal{X}$. It can now be shown that the optimal signal estimate is an element of the signal space $\mathcal{S}$, i.e., $\hat{s}(t) = (\mathbf{H}_W x)(t) \in \mathcal{S}$; indeed, the range of $\mathbf{H}_W$ lies in $\mathcal{S}$, and $\mathbf{H}_W$ vanishes on the orthogonal complement of the observation space $\mathcal{X}$. These results are intuitive since any contribution to $\hat{s}(t)$ from outside $\mathcal{S}$ would unnecessarily increase the resulting error. Furthermore, if signal space $\mathcal{S}$ and noise space $\mathcal{N}$ are disjoint, $\mathcal{S} \cap \mathcal{N} = \{0\}$, then the Wiener filter performs an oblique projection [44] onto $\mathcal{S}$ and obtains perfect reconstruction of the signal, i.e., $\hat{s}(t) = s(t)$.

¹The correlation operator $\mathbf{R}_x$ of a random process $x(t)$ is the self-adjoint and positive (semi-)definite linear operator [43] whose kernel is the correlation function $r_x(t,t') = \mathrm{E}\{x(t)\,x^{*}(t')\}$. In a discrete-time setting, $\mathbf{R}_x$ would be a matrix.
²Integrals are from $-\infty$ to $\infty$ unless stated otherwise.
³The general solution of (4) is $\mathbf{H} = \mathbf{H}_W + \mathbf{A}\,\mathbf{P}_{\mathcal{X}}^{\perp}$, where $\mathbf{H}_W$ denotes the minimal rank solution in (5), $\mathbf{A}$ is an arbitrary operator, and $\mathbf{P}_{\mathcal{X}}^{\perp}$ is the orthogonal projection operator on $\mathcal{X}^{\perp}$, the orthogonal complement space of $\mathcal{X} = \mathrm{range}\{\mathbf{R}_s + \mathbf{R}_n\}$. (For $\mathcal{X} = L_2(\mathbb{R})$ [where $L_2(\mathbb{R})$ denotes the space of square-integrable functions], $\mathbf{H}_W$ becomes the unique solution of (4).)
⁴Since the signals $s(t)$ etc. are random, relations like $s(t) \in \mathcal{S}$ are to be understood to hold with probability 1.
In some applications, e.g., if signal distortions are less acceptable than residual noise, we might consider replacing the MSE in (3) by a “weighted MSE”

$\varepsilon_w(\mathbf{H}) = \lambda_n\,\mathrm{E}\{\|\mathbf{H}n\|^{2}\} + \lambda_s\,\mathrm{E}\{\|(\mathbf{H}-\mathbf{I})s\|^{2}\} = \lambda_n\,\mathrm{tr}\{\mathbf{H}\mathbf{R}_n\mathbf{H}^{+}\} + \lambda_s\,\mathrm{tr}\{(\mathbf{H}-\mathbf{I})\mathbf{R}_s(\mathbf{H}-\mathbf{I})^{+}\}$  (7)

with weights $\lambda_s, \lambda_n > 0$. This amounts to replacing $\mathbf{R}_s$ by $\lambda_s\mathbf{R}_s$ and $\mathbf{R}_n$ by $\lambda_n\mathbf{R}_n$, so that the resulting modified Wiener filter is $\lambda_s\mathbf{R}_s\,(\lambda_s\mathbf{R}_s + \lambda_n\mathbf{R}_n)^{\#}$.
B. Optimal Time-Varying Projection Filter
We next consider the estimation of $s(t)$ using a linear, time-varying filter $\mathbf{H}$ that is an orthogonal projection operator, i.e., it is idempotent, $\mathbf{H}^{2} = \mathbf{H}$, and self-adjoint, $\mathbf{H}^{+} = \mathbf{H}$ [43].
Although the projection constraint generally results in a larger
MSE, it leads to an estimator that is robust in a specific sense
[45]. Further advantages regarding a TF design are discussed in
Subsection V-B.
Let $\lambda_k$ and $u_k(t)$ denote the eigenvalues and normalized eigenfunctions, respectively, of the operator $\mathbf{R}_\Delta = \mathbf{R}_s - \mathbf{R}_n$, i.e., $(\mathbf{R}_\Delta u_k)(t) = \lambda_k u_k(t)$. It is shown in [42] that the minimum-rank orthogonal projection operator minimizing the MSE in (3) is given by

$\mathbf{H}_P = \sum_{k\in\mathcal{K}^{+}} \mathbf{P}_k$

where $\mathbf{P}_k$ denotes the rank-one orthogonal projection operator defined by $(\mathbf{P}_k x)(t) = \langle x, u_k\rangle\, u_k(t)$, and $\mathcal{K}^{+} = \{k : \lambda_k > 0\}$ is the index set corresponding to positive eigenvalues. Thus, the optimal projection filter $\mathbf{H}_P$ performs a projection onto the space $\mathcal{X}^{+} = \mathrm{span}\{u_k\}_{k\in\mathcal{K}^{+}}$ spanned by all eigenfunctions of $\mathbf{R}_\Delta = \mathbf{R}_s - \mathbf{R}_n$ corresponding to positive eigenvalues. The MSE obtained with $\mathbf{H}_P$ is given by

$\varepsilon_P = \varepsilon(\mathbf{H}_P) = \mathrm{tr}\{\mathbf{R}_s\} - \mathrm{tr}\{\mathbf{H}_P(\mathbf{R}_s-\mathbf{R}_n)\} = \mathrm{tr}\{\mathbf{R}_s\} - \sum_{k\in\mathcal{K}^{+}} \lambda_k.$  (8)

For an interpretation of this result, let us partition the observation space $\mathcal{X}$ into three orthogonal subspaces corresponding to positive, zero, and negative eigenvalues of $\mathbf{R}_\Delta$: $\mathcal{X}^{+} = \mathrm{span}\{u_k : \lambda_k > 0\}$, $\mathcal{X}^{0} = \mathrm{span}\{u_k : \lambda_k = 0\}$, and $\mathcal{X}^{-} = \mathrm{span}\{u_k : \lambda_k < 0\}$, so that $\mathcal{X} = \mathcal{X}^{+} \oplus \mathcal{X}^{0} \oplus \mathcal{X}^{-}$, where $\mathbf{P}^{+}$, $\mathbf{P}^{0}$, and $\mathbf{P}^{-}$ are the orthogonal projection operators on the spaces $\mathcal{X}^{+}$, $\mathcal{X}^{0}$, and $\mathcal{X}^{-}$, respectively. Note that $\mathbf{H}_P = \mathbf{P}^{+}$. We now consider the expected energies of the signal and noise processes in the space $\mathcal{X}^{+}$. The expected energy of any process $y(t)$ in the one-dimensional (1-D) space $\mathrm{span}\{u_k\}$ is given by $\mathrm{E}\{|\langle y, u_k\rangle|^{2}\} = \langle \mathbf{R}_y u_k, u_k\rangle$. Hence, the difference between expected signal energy and expected noise energy in $\mathrm{span}\{u_k\}$ is $\langle (\mathbf{R}_s-\mathbf{R}_n)u_k, u_k\rangle = \lambda_k$. Thus, if (and only if) $\lambda_k > 0$, then in $\mathrm{span}\{u_k\}$, the expected signal energy is larger than the expected noise energy. It is then easily shown that $\mathrm{E}\{\|\mathbf{P}^{+}s\|^{2}\} \geq \mathrm{E}\{\|\mathbf{P}^{+}n\|^{2}\}$, i.e., in $\mathcal{X}^{+}$, the expected signal energy is larger than the expected noise energy. Equivalently, $\mathrm{E}\{\|\mathbf{P}^{-}s\|^{2}\} \leq \mathrm{E}\{\|\mathbf{P}^{-}n\|^{2}\}$, which shows that the optimal projection filter $\mathbf{H}_P$ projects onto the space $\mathcal{X}^{+}$ where, on average, there is more signal energy than noise energy. This is intuitively reasonable.

If we use the “weighted MSE” (7) instead of the MSE (3), the resulting optimal projection filter is constructed as before with $\mathbf{R}_s - \mathbf{R}_n$ replaced by $\lambda_s\mathbf{R}_s - \lambda_n\mathbf{R}_n$.
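As a discrete-time illustration of this construction, the sketch below forms the projection-constrained estimator from the eigendecomposition of $\mathbf{R}_s - \mathbf{R}_n$; the symmetrization step, the tolerance, and the toy matrices are assumptions of the example.

import numpy as np

def optimal_projection_filter(R_s, R_n, tol=1e-10):
    # Orthogonal projection onto the span of eigenvectors of R_s - R_n with positive eigenvalues.
    R_delta = 0.5 * ((R_s - R_n) + (R_s - R_n).conj().T)   # enforce self-adjointness numerically
    lam, U = np.linalg.eigh(R_delta)
    U_plus = U[:, lam > tol]                               # eigenvectors u_k with lambda_k > 0
    return U_plus @ U_plus.conj().T

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 8)); R_s = A @ A.T / 8
R_n = 0.05 * np.eye(64)
H_P = optimal_projection_filter(R_s, R_n)
assert np.allclose(H_P @ H_P, H_P)                         # idempotent, hence an orthogonal projection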
C. Stationary Processes
The special cases of stationary processes and nonstationary
white processes allow particularly simple frequency-domain
and time-domain formulations, respectively, of both optimal
filters. These formulations will motivate the development of TF
formulations of optimal filters in Section IV.
If $s(t)$ and $n(t)$ are wide-sense stationary processes, the corresponding correlation operators $\mathbf{R}_s$ and $\mathbf{R}_n$ are time-invariant, i.e., of convolution type, and the optimal system becomes time-invariant as well. The MSE in (3) does not exist and must be replaced by the “instantaneous MSE” $\mathrm{E}\{|\hat{s}(t)-s(t)|^{2}\}$. For any linear, time-invariant system with transfer function $H(f)$, the instantaneous MSE can be shown to be

$\mathrm{E}\{|\hat{s}(t)-s(t)|^{2}\} = \int \left[\, |H(f)|^{2}\,P_n(f) + |1-H(f)|^{2}\,P_s(f)\, \right] df$  (9)

where $P_s(f)$ and $P_n(f)$ are the power spectral densities (PSD's) of $s(t)$ and $n(t)$, respectively. The optimal system minimizing (9) satisfies $H_W(f)\,[P_s(f)+P_n(f)] = P_s(f)$ [2]–[7]. A “minimal” solution of this equation is
$H_W(f) = \begin{cases} \dfrac{P_s(f)}{P_s(f)+P_n(f)}, & f \in \mathcal{F}_x \\ 0, & f \notin \mathcal{F}_x \end{cases}$  (10)

where $\mathcal{F}_x = \{f : P_s(f)+P_n(f) > 0\}$ denotes the set of frequencies where the PSD of the observation $x(t)$ is positive. The minimum instantaneous MSE is given by

$\mathrm{E}\{|\hat{s}(t)-s(t)|^{2}\}_{\min} = \int_{\mathcal{F}_x} \dfrac{P_s(f)\,P_n(f)}{P_s(f)+P_n(f)}\, df.$  (11)

The frequency-domain formulations (10) and (11) allow an intuitively appealing interpretation of the time-invariant Wiener filter (see Fig. 1). Let $\mathcal{F}_s$ and $\mathcal{F}_n$ denote the sets of frequencies where $P_s(f)$ and $P_n(f)$, respectively, are positive. Then, (10) shows
that $H_W(f) = 1$ for $f \in \mathcal{F}_s \setminus \mathcal{F}_n$, i.e., the Wiener filter passes all observation components in the frequency bands where only signal energy is present, which is intuitively reasonable. Furthermore, $H_W(f) = 0$ for $f \notin \mathcal{F}_s$, i.e., outside $\mathcal{F}_s$, the observation is suppressed, which is again reasonable since outside $\mathcal{F}_s$, there is only noise energy. In the frequency ranges $f \in \mathcal{F}_s \cap \mathcal{F}_n$, where both signal and noise are present, there is $0 < H_W(f) < 1$, with the values of $H_W(f)$ depending on the relative values of $P_s(f)$ and $P_n(f)$ at the respective frequency.

Fig. 1. Frequency-domain interpretation of the time-invariant Wiener filter.

The MSE in (9) can again be replaced by a weighted MSE. Here, the weights can even be frequency dependent, which results in the weighted MSE

$\varepsilon_w = \int \left[\, \lambda_n(f)\,|H(f)|^{2}\,P_n(f) + \lambda_s(f)\,|1-H(f)|^{2}\,P_s(f)\, \right] df$  (12)

with $\lambda_s(f), \lambda_n(f) > 0$. The optimal filter minimizing this reweighted MSE is then simply given by (10) with $P_s(f)$ replaced by $\lambda_s(f)P_s(f)$ and $P_n(f)$ replaced by $\lambda_n(f)P_n(f)$.

We next consider the optimal projection filter $\mathbf{H}_P$. For stationary processes, it can be shown that $\mathbf{H}_P$ is time-invariant with the zero-one valued transfer function

$H_P(f) = I_{\mathcal{F}_\Delta}(f)$  (13)

where $I_{\mathcal{F}_\Delta}(f)$ is the indicator function of $\mathcal{F}_\Delta = \{f : P_s(f) > P_n(f)\}$, which is the set of frequencies where the signal PSD is larger than the noise PSD. Thus, $\mathbf{H}_P$ is an idealized bandpass filter with one or several pass bands. Note that $H_P(f)$ can be obtained from $H_W(f)$ by a rounding operation, i.e., $H_P(f) = \mathrm{round}\{H_W(f)\}$. The instantaneous MSE obtained with $\mathbf{H}_P$ can be shown to be given by

$\mathrm{E}\{|\hat{s}(t)-s(t)|^{2}\} = \int_{\overline{\mathcal{F}}_\Delta} P_s(f)\,df + \int_{\mathcal{F}_\Delta} P_n(f)\,df$  (14)

where $\overline{\mathcal{F}}_\Delta$ denotes the complement of $\mathcal{F}_\Delta$. If the weighted instantaneous MSE (12) is used, then the optimal projection filter is the idealized bandpass filter with passband(s) $\{f : \lambda_s(f)P_s(f) > \lambda_n(f)P_n(f)\}$.

The case of nonstationary white processes $s(t)$ and $n(t)$ is dual to the stationary case discussed above. Here, the time-varying Wiener filter and projection filter are “frequency-invariant,” i.e., simple time-domain multiplications. All results and interpretations are dual to the stationary case.

III. TIME-FREQUENCY REPRESENTATIONS AND UNDERSPREAD PROCESSES

In this section, we review some fundamentals of TF analysis that will be used in Sections IV and V for the TF formulation and TF design of optimal filters.

A. Time–Frequency Representations

We first review four TF representations on which our TF formulations will be based.
• The Weyl symbol (WS) [13]–[16] of a linear operator (linear, time-varying system) $\mathbf{H}$ with kernel (impulse response) $h(t,t')$ is defined as

$L_H(t,f) = \int h\!\left(t+\tfrac{\tau}{2},\, t-\tfrac{\tau}{2}\right) e^{-j2\pi f\tau}\, d\tau$  (15)

where $t$ and $f$ denote time and frequency, respectively. For underspread systems (see Section III-B), the WS can be considered to be a “time-varying transfer function” [8], [11], [46], [47].
• The Wigner–Ville spectrum (WVS) [17]–[19] of a nonstationary random process $x(t)$ is defined as the WS of the correlation operator $\mathbf{R}_x$

$\overline{W}_x(t,f) = L_{R_x}(t,f) = \int r_x\!\left(t+\tfrac{\tau}{2},\, t-\tfrac{\tau}{2}\right) e^{-j2\pi f\tau}\, d\tau.$  (16)

For underspread processes (see Section III-B), the WVS can be interpreted as a “time-varying PSD” or expected TF energy distribution of $x(t)$ [8], [9], [12].
• The spreading function [8], [13], [16], [26], [46], [48], [49] of a linear operator $\mathbf{H}$ is defined as

$S_H(\tau,\nu) = \int h\!\left(t+\tfrac{\tau}{2},\, t-\tfrac{\tau}{2}\right) e^{-j2\pi\nu t}\, dt$

where $\tau$ and $\nu$ denote time lag and frequency lag, respectively. The spreading function is the 2-D Fourier transform of the WS. Any linear operator $\mathbf{H}$ admits the following expansion into TF shift operators $\mathbf{S}_{\tau,\nu}$, defined as $(\mathbf{S}_{\tau,\nu}x)(t) = x(t-\tau)\,e^{j2\pi\nu t}$ [8], [13], [16], [49]:

$\mathbf{H} = \int\!\!\int S_H(\tau,\nu)\, \mathbf{S}_{\tau,\nu}\, d\tau\, d\nu.$  (17)

Hence, the spreading function describes the TF shifts introduced by $\mathbf{H}$ (see Section III-B).
• The expected ambiguity function [8]–[12] of a nonstationary process $x(t)$ is defined as the spreading function of the correlation operator $\mathbf{R}_x$

$\overline{A}_x(\tau,\nu) = S_{R_x}(\tau,\nu) = \int \mathrm{E}\!\left\{ x\!\left(t+\tfrac{\tau}{2}\right) x^{*}\!\left(t-\tfrac{\tau}{2}\right) \right\} e^{-j2\pi\nu t}\, dt.$  (18)

The expected ambiguity function is the 2-D Fourier transform of the WVS. It can be interpreted as a TF correlation function in the sense that it describes the correlation of
process components whose TF locations are separated by $\tau$ in time and by $\nu$ in frequency [8]–[12].
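The sketch below illustrates coarse discrete counterparts of these representations for a system given as an N x N matrix. It uses a Zadeh-type (time-varying transfer function) convention with circular indexing rather than the exact half-sample Weyl grid, which is a simplifying assumption of this example; for underspread systems the two conventions are close (cf. Section VIII). All names and the crude support measure are illustrative.

import numpy as np

def tv_transfer_function(H):
    # Zadeh-type time-varying transfer function: DFT over the lag m of h[n, n-m].
    # For underspread systems this approximates the Weyl symbol (15).
    N = H.shape[0]
    m = np.arange(N)
    return np.stack([np.fft.fft(H[n, (n - m) % N]) for n in range(N)])

def spreading_function(H):
    # Spreading function in the same convention: DFT over time n of h[n, n-m], cf. (17).
    N = H.shape[0]
    n = np.arange(N)
    return np.stack([np.fft.fft(H[n, (n - m) % N]) for m in range(N)])

def support_fraction(S, thresh=0.01):
    # Crude proxy for the effective support area in the (lag, Doppler) plane.
    A = np.abs(S)
    return np.count_nonzero(A > thresh * A.max()) / A.size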
B. Underspread Systems and Processes
Since our TF formulation of optimal filters will be valid for
the class of underspread nonstationary processes, we will now
review the definition of underspread systems and processes
[8]–[12], [46], [47].
According to the expansion (17), the effective support of the spreading function $S_H(\tau,\nu)$ describes the TF shifts caused by a linear time-varying system $\mathbf{H}$. A global description of the TF shift behavior of $\mathbf{H}$ is given by the “displacement spread” $\sigma_H$ [8], [11], [46], which is defined as the area of the smallest rectangle [centered about the origin of the $(\tau,\nu)$ plane and possibly with oblique orientation] that contains the effective support of $S_H(\tau,\nu)$. A system $\mathbf{H}$ is called underspread [8], [11], [46] if $\sigma_H \ll 1$, which means that $S_H(\tau,\nu)$ is highly concentrated about the origin of the $(\tau,\nu)$ plane and, hence, that $\mathbf{H}$ causes only small TF shifts. Since $S_H(\tau,\nu)$ is the 2-D Fourier transform of $L_H(t,f)$, this implies that $L_H(t,f)$ is a smooth function. Two systems are called jointly underspread if their spreading functions are effectively supported within the same small rectangle of area $\ll 1$.

Let us next consider a nonstationary, zero-mean random process. The expected ambiguity function $\overline{A}_x(\tau,\nu)$ in (18) describes the correlation of process components whose TF locations are separated by $\tau$ in time and by $\nu$ in frequency [8]–[12]. Since the expected ambiguity function is the spreading function of the correlation operator $\mathbf{R}_x$, i.e., $\overline{A}_x(\tau,\nu) = S_{R_x}(\tau,\nu)$, it is natural to define the correlation spread $\sigma_x$ of a process $x(t)$ as the displacement spread of its correlation operator, i.e., $\sigma_x = \sigma_{R_x}$. A nonstationary random process $x(t)$ is then called underspread if $\sigma_x \ll 1$ [8]–[11]. Since in this case the expected ambiguity function is highly concentrated about the origin of the $(\tau,\nu)$ plane, process components at different (i.e., not too close) TF locations will be effectively uncorrelated. This also implies that the WVS of $x(t)$ is a smooth, effectively non-negative function [8], [12]. Two processes are called jointly underspread if their expected ambiguity functions are effectively supported within the same small rectangle of area $\ll 1$.

Two special cases of underspread processes are “quasistationary” processes with limited temporal correlation width and “nearly white” processes with limited time variation of their statistics. We emphasize that quasistationarity alone is neither necessary nor sufficient for the underspread property.
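As a numerical illustration of these definitions, the sketch below computes a discrete expected ambiguity function from a correlation matrix and a crude proxy for the correlation spread. The exponential correlation model, the threshold, and all names are assumptions of this example; the paper's displacement spread is the area of the smallest centered rectangle covering the effective support, which this simple cell count only approximates.

import numpy as np

def expected_ambiguity(R):
    # Approximate expected ambiguity function A[m, l]: DFT over time n of r[n, n-m], cf. (18).
    N = R.shape[0]
    n = np.arange(N)
    return np.stack([np.fft.fft(R[n, (n - m) % N]) for m in range(N)])

def correlation_spread_proxy(R, thresh=0.01):
    # Fraction of (lag, Doppler) cells where |EAF| is non-negligible; small values
    # indicate an underspread process.
    A = np.abs(expected_ambiguity(R))
    return np.count_nonzero(A > thresh * A.max()) / A.size

# toy example: a stationary process with short correlation width is underspread
N = 128
d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
d = np.minimum(d, N - d)                      # circular lag distance
R_s = np.exp(-d / 5.0)
print("correlation spread proxy:", correlation_spread_proxy(R_s))

Two processes would be judged jointly underspread when both of their expected ambiguity functions are confined to the same small region around the origin of the (lag, Doppler) plane.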
IV. TIME–FREQUENCY FORMULATION OF OPTIMAL FILTERS

In Section II, we saw that the time-varying Wiener filter and projection filter are described by equations involving linear operators and signal spaces. This description is rather abstract and leads to computationally intensive design procedures and implementations. However, for stationary or white processes, there exist simple frequency-domain or time-domain formulations that involve functions instead of operators and intervals instead of signal spaces and thus allow a substantially simplified design and implementation. We will now show that similar simplifications can be developed for underspread nonstationary processes [50].

A. Time–Frequency Formulation of the Time-Varying Wiener Filter

We assume that $s(t)$ and $n(t)$ are jointly underspread processes; specifically, their expected ambiguity functions are assumed to be exactly zero outside a rectangle $\mathcal{G}$ (centered about the origin of the $(\tau,\nu)$ plane) of area $\sigma \ll 1$. The Wiener filter can then be split up as $\mathbf{H}_W = \mathbf{H}_W^{u} + \mathbf{H}_W^{o}$, where the underspread part $\mathbf{H}_W^{u}$ is defined by

$S_{H_W^{u}}(\tau,\nu) = S_{H_W}(\tau,\nu)\, I_{\mathcal{G}}(\tau,\nu)$  (19)

where $I_{\mathcal{G}}(\tau,\nu)$ is the indicator function of $\mathcal{G}$, and the overspread part is defined by $\mathbf{H}_W^{o} = \mathbf{H}_W - \mathbf{H}_W^{u}$. That is, the spreading functions of $\mathbf{H}_W^{u}$ and $\mathbf{H}_W^{o}$ lie inside and outside $\mathcal{G}$, respectively. Since the spreading function is the 2-D Fourier transform of the Weyl symbol, (19) implies that the WS of $\mathbf{H}_W^{u}$ is a smoothed version of the WS of $\mathbf{H}_W$; here, the smoothing kernel is the 2-D Fourier transform of $I_{\mathcal{G}}(\tau,\nu)$ and, hence, a 2-D lowpass function.

We now recall that the Wiener filter $\mathbf{H}_W$ is characterized by the relation (4), i.e., $\mathbf{H}_W(\mathbf{R}_s+\mathbf{R}_n) = \mathbf{R}_s$. A first step toward a TF formulation of $\mathbf{H}_W$ is to notice that removing the overspread part $\mathbf{H}_W^{o}$ from $\mathbf{H}_W$ does not greatly affect the validity of this relation, i.e., $\mathbf{H}_W^{u}(\mathbf{R}_s+\mathbf{R}_n) \approx \mathbf{R}_s$, where the approximation error is due solely to the removed overspread part. Indeed, it is shown in Appendix A that the Hilbert-Schmidt (HS) norm [43] of the error incurred by this approximation is upper bounded as

(20)

and, furthermore, that the “excess MSE” resulting from using $\mathbf{H}_W^{u}$ instead of $\mathbf{H}_W$ is bounded as

(21)
Here, $\varepsilon(\mathbf{H}_W^{u})$ denotes the MSE obtained with $\mathbf{H}_W^{u}$. Hence, if $\sigma$ is small, i.e., if $s(t)$ and $n(t)$ are jointly underspread, the approximation error in (20) and the excess MSE in (21) are small. This shows that the underspread part $\mathbf{H}_W^{u}$ is nearly optimal.
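The splitting (19) is easy to emulate for a discrete operator by masking its spreading function. The sketch below does this with the same coarse discrete convention used in the earlier sketches (circular indexing, no half-sample offsets); the rectangle half-widths max_lag and max_doppler are illustrative parameters, not values from the paper.

import numpy as np

def spreading_function(H):
    # S[m, l] = DFT over time n of h[n, n-m]  (coarse discrete convention)
    N = H.shape[0]
    n = np.arange(N)
    return np.stack([np.fft.fft(H[n, (n - m) % N]) for m in range(N)])

def operator_from_spreading(S):
    # Invert the map above: h[n, n-m] = IDFT over the Doppler index l of S[m, l].
    N = S.shape[0]
    n = np.arange(N)
    H = np.zeros((N, N), dtype=complex)
    for m in range(N):
        H[n, (n - m) % N] = np.fft.ifft(S[m, :])
    return H

def underspread_part(H, max_lag, max_doppler):
    # H^u of (19): zero the spreading function outside a small centered rectangle G.
    N = H.shape[0]
    S = spreading_function(H)
    lag = np.minimum(np.arange(N), N - np.arange(N))
    dop = np.minimum(np.arange(N), N - np.arange(N))
    mask = (lag[:, None] <= max_lag) & (dop[None, :] <= max_doppler)
    return operator_from_spreading(S * mask)

# usage: H_W_u = underspread_part(H_W, max_lag=4, max_doppler=4)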
This being the case, it suffices to develop a TF formulation for $\mathbf{H}_W^{u}$. Intuitively, we could conjecture that a natural TF formulation of the operator relation $\mathbf{H}_W^{u}(\mathbf{R}_s+\mathbf{R}_n) \approx \mathbf{R}_s$ would be $L_{H_W^{u}}(t,f)\,[\overline{W}_s(t,f)+\overline{W}_n(t,f)] \approx \overline{W}_s(t,f)$. Indeed, it is shown in Appendix B that the norm of the error
incurred by this approximation is upper bounded as

(22)

An upper bound can furthermore be formulated for the error magnitude ($L_\infty$ norm) as well. Hence, the conjectured TF relation is accurate if $s(t)$ and $n(t)$ are jointly underspread. Defining the TF region where $\overline{W}_s(t,f)+\overline{W}_n(t,f)$ is effectively positive by $\mathcal{R}_x = \{(t,f) : \overline{W}_s(t,f)+\overline{W}_n(t,f) \geq \epsilon\}$ (with some⁵ small positive $\epsilon$), we finally obtain the following TF formulation for $L_{H_W^{u}}(t,f)$:

$L_{H_W^{u}}(t,f) \approx \begin{cases} \dfrac{\overline{W}_s(t,f)}{\overline{W}_s(t,f)+\overline{W}_n(t,f)}, & (t,f) \in \mathcal{R}_x \\ 0, & (t,f) \notin \mathcal{R}_x. \end{cases}$  (23)

The minimum MSE in (6) is given by $\varepsilon_W = \mathrm{tr}\{\mathbf{H}_W\mathbf{R}_n\}$, which, for jointly underspread processes, can be shown to approximately equal $\int\!\!\int L_{H_W}(t,f)\,\overline{W}_n(t,f)\,dt\,df$. Inserting (23), we obtain the approximate TF formulation

$\varepsilon_W \approx \int\!\!\int_{\mathcal{R}_x} \dfrac{\overline{W}_s(t,f)\,\overline{W}_n(t,f)}{\overline{W}_s(t,f)+\overline{W}_n(t,f)}\, dt\, df.$  (24)
The relations (23) and (24) constitute a TF formulation of the
time-varying Wiener filter that extends the frequency-domain
formulation (10), (11) that is valid for stationary processes
and the analogous time-domain formulation that is valid for
nonstationary white processes to the much broader class of
underspread nonstationary processes. This TF formulation
of the time-varying Wiener filter allows a simple and intuitively appealing interpretation. Let us define the TF support regions where the WVS of $s(t)$ and $n(t)$ are effectively positive by $\mathcal{R}_s$ and $\mathcal{R}_n$, respectively. The TF regions $\mathcal{R}_s$ and $\mathcal{R}_n$ correspond to the signal spaces $\mathcal{S}$ and $\mathcal{N}$ underlying the respective processes [42], [51]. With these definitions at hand, the Wiener filter can be interpreted in the TF domain as follows (see Fig. 2).
• In the “signal only” TF region $\mathcal{R}_s \setminus \mathcal{R}_n$, i.e., in the TF region where only signal energy is present, it follows from (23) that the WS of the Wiener filter is approximately one, $L_{H_W^{u}}(t,f) \approx 1$. Thus, $\mathbf{H}_W^{u}$ passes all “noise-free” observation components without attenuation or distortion.
Fig. 2. TF interpretation of the time-varying Wiener filter.
• In the “no signal” TF region, where no signal energy is present, i.e., for $(t,f) \notin \mathcal{R}_s$, it follows from (23) that the WS of the Wiener filter is approximately zero, $L_{H_W^{u}}(t,f) \approx 0$. This “stop region” of the Wiener filter consists of two parts: i) the “noise only” TF region $\mathcal{R}_n \setminus \mathcal{R}_s$, where only noise energy is present, and ii) the outside (complement) of the observation TF region $\mathcal{R}_x$, i.e., the TF region where neither signal nor noise energy is present; here, $L_{H_W^{u}}(t,f) \approx 0$ since (23) corresponds to the minimal Wiener filter.
• In the “signal plus noise” TF region $\mathcal{R}_s \cap \mathcal{R}_n$, where both signal energy and noise energy are present, the WS of the Wiener filter assumes values approximately between 0 and 1. Here, $\mathbf{H}_W^{u}$ performs a TF weighting that depends on the relative signal and noise energy at the respective TF point. Indeed, it follows from (23) that the ratio of $\overline{W}_s(t,f)$ and $\overline{W}_n(t,f)$ is given by the “local signal-to-noise ratio” $\mathrm{SNR}(t,f) = \overline{W}_s(t,f)/\overline{W}_n(t,f)$. In particular, for equal signal and noise energy, i.e., $\mathrm{SNR}(t,f) = 1$, there is $L_{H_W^{u}}(t,f) \approx 1/2$.
This TF interpretation of the time-varying Wiener filter extends the frequency-domain interpretation valid in the stationary case (see Fig. 1) to the broader class of underspread processes.
B. Time–Frequency Formulation of the Time-Varying Projection Filter
5An analysis of the WS of positive operators ([8] and [47]) suggests that we
use kA k = kA k .
We recall from Section II-B that the optimal projection filter $\mathbf{H}_P$ performs a projection onto the space $\mathcal{X}^{+} = \mathrm{span}\{u_k\}_{k\in\mathcal{K}^{+}}$ spanned by all eigenfunctions of the operator $\mathbf{R}_s - \mathbf{R}_n$ associated with positive eigenvalues. In order to obtain a TF formulation of this result, we consider the following three TF regions on which the signal energy is, respectively, larger than, equal to, or smaller than the noise energy:
It can finally be shown that the MSE obtained with
(8)] is approximately given by
[cf.
With
which extends the relation (14) to general underspread processes.
, these TF regions can alternatively be written as
V. APPROXIMATE TIME–FREQUENCY DESIGN OF OPTIMAL
FILTERS
Note that
.
There exists a correspondence between the “pass space”
span
of
and the TF region
. Indeed, if
and
are jointly underspread, then
is an underspread
system. It is here known [8], [11], [47] that TF shifted versions
of a well-TF localized function
are approximate eigenfunctions of , and the WS values
are the corresponding approximate eigenvalues, i.e.,
In the previous section, we have shown that the time-varying
Wiener and projection filters can be reformulated in the TF doand
are
main if the nonstationary random processes
jointly underspread. We will now show that this TF formulation
of optimal filters leads to simple design procedures that operate
in the TF domain and yield nearly optimal filters.
A. Time–Frequency Pseudo-Wiener Filter
We recall that for jointly underspread processes, an approximate expression for the Wiener filter's WS is given by (23). Let us now define another linear, time-varying filter $\tilde{\mathbf{H}}_W$ by setting its WS equal to the right-hand side of (23) [50]:

$L_{\tilde{H}_W}(t,f) = \begin{cases} \dfrac{\overline{W}_s(t,f)}{\overline{W}_s(t,f)+\overline{W}_n(t,f)}, & (t,f) \in \mathcal{R}_x \\ 0, & (t,f) \notin \mathcal{R}_x. \end{cases}$  (26)
Hence, the optimal pass space
(spanned by all eigenfuncwith
) corresponds to the TF region
tions
(comprising the TF locations
of all
such that
). The action of
—passing signals in
and suppressing signals in
—can
therefore be reformulated in the TF plane as passing signals in
and suppressing signals outside
. Thus,
the TF region
passes signal components in the TF rewe conclude that
gion where the local TF SNR is larger than one and suppresses
signal components in the complementary TF region, which is
intuitively reasonable.
performs a projection onto
The optimal projection filter
the space ; for jointly underspread signal and noise, this space
is a “simple” space as defined in [42]. The WS of the projection
operator on a simple space essentially assumes the values 1 (on
) and 0 (on the TF stop region)
the TF pass region, in our case
can be approximated as6
[42]. Hence, the WS of
(25)
is the indicator function of
. This extends
where
the relation (13) to general underspread processes. Note that the
only allows
to pass or suppress
projection property of
; no intermediate type of attenuation (which
components of
between 0 and 1) is
would correspond to values of
possible.
6Experiments show that this approximation
is valid apart from oscillations of
about the values 0 or 1. Hence, the approximation can be improved
by a slight smoothing of L (t; f ) , which corresponds to the removal of overspread components from P (cf. Section IV-A).
L
(t; f )
We refer to $\tilde{\mathbf{H}}_W$ as the TF pseudo-Wiener filter. For jointly underspread processes $s(t)$ and $n(t)$, where (23) is a good approximation, a combination of (26) and (23) yields $L_{\tilde{H}_W}(t,f) \approx L_{H_W^{u}}(t,f)$ and thus, $\tilde{\mathbf{H}}_W \approx \mathbf{H}_W^{u}$. Hence, the TF pseudo-Wiener filter $\tilde{\mathbf{H}}_W$ is a close approximation to the underspread part $\mathbf{H}_W^{u}$ of the Wiener filter $\mathbf{H}_W$ and, therefore, is nearly optimal; furthermore, the TF interpretation of $\mathbf{H}_W^{u}$ (see Section IV-A) applies equally well to $\tilde{\mathbf{H}}_W$. The deviation from optimality is characterized by the error bounds in (20)–(22). For processes that are not jointly underspread, however, $\tilde{\mathbf{H}}_W$ must be expected to perform poorly. Note that $\tilde{\mathbf{H}}_W$ is a self-adjoint operator since, according to (26), the WS of $\tilde{\mathbf{H}}_W$ is real-valued.
For jointly strictly underspread processes, i.e., underspread
processes whose support rectangle is oriented parallel to the
and axes [8], [29], the WVS occurring on the right-hand side
of (26) can be replaced by the generalized WVS [17]–[19], the
(generalized) evolutionary spectrum [23], [24], [28], [29], or the
physical spectrum (expected spectrogram) using an appropriate
analysis window [18], [19], [33]. This is possible since in the
strictly underspread case, these spectra become nearly equivalent [8], [9], [12], [19], [29].
While the TF pseudo-Wiener filter $\tilde{\mathbf{H}}_W$ is defined in the TF domain, the actual calculation of the signal estimate can be performed directly in the time domain according to $\hat{s}(t) = \int \tilde{h}_W(t,t')\,x(t')\,dt'$, where the impulse response $\tilde{h}_W(t,t')$ of $\tilde{\mathbf{H}}_W$ can be obtained from the WS in (26) as [cf. (15)]

$\tilde{h}_W(t,t') = \int L_{\tilde{H}_W}\!\left(\dfrac{t+t'}{2},\, f\right) e^{j2\pi f(t-t')}\, df.$  (27)
An efficient approximate multiwindow STFT implementation
will be discussed in Section VI.
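As an end-to-end illustration of this design path, the sketch below forms the TF weighting of (26) from given (or estimated) WVS arrays and maps it to an impulse response. To keep the mapping simple, it uses a Zadeh-type (time-varying transfer function) convention instead of the exact inverse Weyl transform (27); as noted later in this section and in Section VIII, the two conventions nearly coincide for strictly underspread processes. All names, thresholds, and shapes are assumptions of the example.

import numpy as np

def tf_pseudo_wiener_weight(W_s, W_n, eps=1e-8):
    # TF weighting of (26); zero where the signal-plus-noise energy is negligible.
    denom = W_s + W_n
    weight = np.where(denom > eps * denom.max(), W_s / np.maximum(denom, eps), 0.0)
    return np.clip(weight, 0.0, 1.0)       # guards against small negative WVS estimates

def operator_from_tf_weight(L):
    # Map a TF weighting L[n, k] to an impulse response h[n, n'] (Zadeh convention).
    N = L.shape[0]
    H = np.zeros((N, N), dtype=complex)
    for n in range(N):
        g = np.fft.ifft(L[n, :])                  # g[m] = (1/N) sum_k L[n, k] e^{j 2 pi k m / N}
        H[n, :] = g[(n - np.arange(N)) % N]       # h[n, n'] = g[(n - n') mod N]
    return H

# usage, with W_s_hat and W_n_hat denoting (N, N) WVS estimates and x the observation vector:
# H_tilde = operator_from_tf_weight(tf_pseudo_wiener_weight(W_s_hat, W_n_hat))
# s_hat = H_tilde @ x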
, the TF pseudo-Wiener
Compared with the Wiener filter
possesses two practical advantages:
filter
• Modified a priori knowledge: Ideally, the calculation
requires knowledge of the correlation
(design) of
and
[see (5)], whereas the design of
operators
requires knowledge of the WVS
and
[see (26)]. Although correlation operators and WVS are
mathematically equivalent due to the one-to-one mapping (16), the WVS are much easier and more intuitive
to handle than the correlation operators or correlation
functions since WVS have a more immediate physical
significance and interpretation. In practice, the quantities constituting the a priori knowledge (correlation
operators or WVS) are usually replaced by estimated
or schematic/idealized versions, which are much more
easily controlled or designed in the TF domain than in
the correlation domain. For example, an approximate or
partial knowledge of the WVS will often suffice for a
reasonable filter design.
• Reduced computation: The calculation (design) of
requires a computationally intensive and potentially
unstable operator inversion (or, in a discrete-time setting,
a matrix inversion). In the TF design (26), this operator
inversion is replaced by simple and easily controllable
pointwise divisions of functions. Assuming discrete-time
signals of length , the computational cost of the degrows with
, whereas that of
(using
sign of
.
divisions and FFT’s) grows only with
The TF pseudo-Wiener filter satisfies an approximate “TF opjointly underspread, it can be shown that
timality.” For
the MSE obtained with any underspread system is approximately given by
(28)
(assuming a minimal solution, i.e.,
Minimizing
for
) then yields the
in (26) and, thus, the
. This interpretation suggests an exTF pseudo-Wiener filter
tended TF design that is based on the following weighted MSE
[extending (12)]
with smooth TF weighting function
(29)
. The filter minimizing this
with
reweighted TF error is given by (26) with
and
replaced by
placed by
.
B. Time–Frequency Pseudo-Projection Filter
Next, we consider a TF filter design that is motivated by the
approximate expression (25) for the WS of
. Let us define a
new linear system by setting its WS equal to the right-hand
side of (25):
Since
is the indicator function of the TF region
, the WS of is 1 for
those TF points where there is more signal energy than noise
energy and 0 elsewhere. It is interesting that the WS of is a
rounded version of the WS of the TF pseudo-Wiener filter
in (26), i.e.,
round
, which is consisdiffers from
tent with the stationary case. Furthermore,
only on the signal-plus-noise TF region
; in
, we have
the TF pass region
, and in the TF stop region outside
, there is
.
In general, is not exactly an orthogonal projection operator.
and
, where
However, for jointly underspread processes
,
(25) is a good approximation, there is
. Hence, the TF-designed filter is a reasonand thus,
, and we
able approximation to the optimal projection filter
will thus call it TF pseudo-projection filter. However, for processes that are not jointly underspread, must be expected to
perform poorly and to be very different from an orthogonal projection operator.
Although is defined in the TF domain, the actual calculation of the signal estimate can be done in the time domain using
as (cf.
the impulse response of that is derived from
(27))
(30)
An efficient approximate multiwindow STFT implementation
will be discussed in Section VI.
, the TF
Compared with the optimal projection filter
pseudo-projection filter has two advantages:
• Modified/reduced a priori knowledge: The design of
requires knowledge of the space that is spanned
by the eigenfunctions of the operator
corresponding to positive eigenvalues (see Section II-B),
merely presupposes that we
whereas the design of
in which the signal dominates
know the TF region
the noise. This knowledge is of a much simpler and more
intuitive form, thus facilitating the use of approximate or
partial information about the WVS.
requires the so• Reduced computation: The design of
lution of an eigenvalue problem, which is computationally intensive. In contrast, the proposed TF design only requires an inverse WS transformation [see (30)]. Assuming
discrete-time signals of length , the computational cost
grows with
, whereas that of
of the design of
(using FFT’s) grows with
.
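As a small illustration of how inexpensive this design is, the sketch below builds the 0/1 TF weighting of the pseudo-projection filter directly from WVS arrays: it is simply the indicator of the TF region where the signal WVS exceeds the noise WVS, i.e., a rounded version of the pseudo-Wiener weighting, and the impulse response then follows exactly as in the earlier pseudo-Wiener sketch. Names are illustrative.

import numpy as np

def tf_pseudo_projection_weight(W_s, W_n):
    # Indicator of the TF region where W_s(t, f) > W_n(t, f); equivalently,
    # a rounded version of the pseudo-Wiener weighting of (26).
    return (W_s > W_n).astype(float)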
The TF pseudo-projection filter $\tilde{\mathbf{H}}_P$ is furthermore advantageous as compared with the Wiener-type filters $\mathbf{H}_W$ or $\tilde{\mathbf{H}}_W$ since
its design is less expensive and more robust (especially with respect to errors in estimating the a priori knowledge and with
respect to potential correlations of signal and noise).
Similar to the TF pseudo-Wiener filter, the TF pseudo-projection filter satisfies a “TF optimality” in that it minimizes the
in (28) under the constraint of a
-valued WS,
TF MSE
. Minimizing the TF weighted MSE
i.e.,
in (29) rather than
, again under the constraint
, results in a generalized TF pseudo-projection filter
that is given by
, where
.
C. Time–Frequency Projection Filter
For jointly underspread processes, the TF pseudo-projection
approximates the optimal projection filter
, but it
filter
is not exactly an orthogonal projection operator. If an orthogonal projection operator is desired, we may calculate the orthogonal projection operator that optimally approximates
in the sense of minimizing
. This can be shown to
be equivalent to minimizing
[42], [51]. That is, the TF designed projection
filter (hereafter denoted by ) is the orthogonal projection op.
erator whose WS is closest to the indicator function
is given by the orthogonal
It is shown in [42] and [51] that
span
projection operator on the space
spanned by all eigenfunctions
of with associated eigen, i.e.,
values
with
For jointly underspread processes where is close to an oris close to and, hence, also
thogonal projection operator,
close to the optimal projection filter . However, due to the inmay lead to a larger MSE than
herent projection constraint,
. In addition, the derivation of
from requires the solution of an eigenproblem. Therefore, the TF projection filter
appears to be less attractive than the TF pseudo-projection filter
.
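A standard way to obtain the orthogonal projection operator closest in Hilbert-Schmidt norm to a self-adjoint operator is to keep the eigenvectors whose eigenvalues are at least 1/2; the sketch below applies this generic rule to a matrix stand-in for the TF-designed operator. The 1/2 threshold is stated here as a property of the nearest-projection problem, i.e., an assumption of the sketch rather than a quotation from the paper.

import numpy as np

def nearest_orthogonal_projection(H_tilde):
    # Orthogonal projection closest (in Hilbert-Schmidt norm) to the self-adjoint operator H_tilde.
    H_sym = 0.5 * (H_tilde + H_tilde.conj().T)    # enforce self-adjointness numerically
    lam, U = np.linalg.eigh(H_sym)
    U_keep = U[:, lam >= 0.5]                     # keep eigenvectors with eigenvalues >= 1/2
    return U_keep @ U_keep.conj().T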
VI. MULTIWINDOW STFT IMPLEMENTATION
Following [8] and [20], we now discuss a TF implementation
of the Wiener and projection filters that is based on the multiwindow short-time Fourier transform (STFT) and that is valid
in the underspread case.
A simple TF filter method is an STFT filter that consists of
the following steps [8], [13], [20], [52]–[57]:
• STFT analysis: Calculation of the STFT [19], [54], [55],
[58] of the input signal
Fig. 3. Multiwindow STFT implementation of the Wiener filter.
• STFT synthesis: The output signal
is the inverse STFT
[54], [55], [58] of the weighted STFT:
STFT
We note that
is the signal whose STFT is closest to
STFT
in a least-squares sense [57].
These steps implement a linear, time-varying filter (hereafter
and
.
denoted ) that depends on
We recall from Section IV-A that for jointly underspread proof the Wiener filter
is nearly
cesses, the underspread part
for
, it follows that
optimal. Since
where
is any real-valued function that is 1 on [a “minis the indicator function
, cf.
imal” choice for
(19)]. Let denote the linear system defined by
. If
is chosen such that
,
is a self-adjoint operator with real-valued eigenvalues and
. It is shown in [8] and [20]
orthonormal eigenfunctions
can be expanded as
that
(31)
are STFT filters with TF weighting function
and STFT windows
. Thus,
is
represented as a weighted sum of STFT filters with identical
and orthonormal windows
TF weighting function
. In practice, the eigenvalues
often decay quickly so
that it suffices to use only a few terms of (31) corresponding
to the largest eigenvalues (an example will be presented in
are arranged in
Section VII). Assuming that the eigenvalues
nonincreasing order, we then obtain
where the
This yields the approximate multiwindow STFT implementation
that is depicted in Fig. 3. It can be shown [8] that the
of
is bounded as
approximation error
STFT
Here,
where
is a normalized window.
• STFT weighting: Multiplication of the STFT by a
weighting function
STFT
STFT
which can be used to estimate the appropriate order . Since
and
are assumed jointly underspread, (23) holds, and
we obtain the following simple approximation to the STFT weighting function:

$\dfrac{\overline{W}_s(t,f)}{\overline{W}_s(t,f)+\overline{W}_n(t,f)}.$  (32)
A particularly efficient discrete-time/discrete-frequency version of the multiwindow STFT implementation that uses filter
banks is discussed in [8]. Furthermore, it is shown in [8] and
, the STFT’s used in the
[20] that in the case of white noise
multiwindow STFT implementation can additionally be used to
, which is required to calculate
estimate the WVS of
according to (32).
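The following sketch mimics the multiwindow STFT filter structure of (31) and Fig. 3 in discrete time: each branch computes a sliding (hop-1, full-band) STFT with one window, applies the common TF weighting, resynthesizes by the least-squares inverse STFT, and the branch outputs are combined with the branch coefficients. The windows and coefficients are assumed given (in the paper they are obtained from an eigendecomposition described in this section); the STFT discretization and all names are assumptions of this example.

import numpy as np

def stft(x, w):
    # Sliding-window STFT: S[n, k] = sum_m x[m] conj(w[m - n]) e^{-j 2 pi k m / N} (circular).
    N = len(x)
    return np.stack([np.fft.fft(x * np.conj(np.roll(w, n))) for n in range(N)])

def istft(S, w):
    # Least-squares synthesis with the same window (exact inverse of stft for any nonzero window).
    N = S.shape[0]
    y = np.zeros(N, dtype=complex)
    for n in range(N):
        y += np.fft.ifft(S[n]) * np.roll(w, n)
    return y / np.sum(np.abs(w) ** 2)

def multiwindow_stft_filter(x, weighting, windows, coeffs):
    # Weighted sum of STFT filters sharing one TF weighting, cf. (31) and Fig. 3.
    y = np.zeros(len(x), dtype=complex)
    for lam, w in zip(coeffs, windows):
        y += lam * istft(weighting * stft(x, w), w)
    return y

# usage: weighting is an (N, N) array, e.g., the approximation (32); windows is a list of
# orthonormal length-N windows; coeffs are the corresponding branch coefficients.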
A heuristic, approximate TF implementation of time-varying
Wiener filters proposed in [31] and [32] uses a single STFT filter
) with TF weighting function
(i.e.,
where
and
are the physical spectra [18], [19],
and
, respectively. Compared with the mul[33] of
tiwindow STFT implementation discussed above, this suffers
from a threefold performance loss since
i) Only one STFT filter is used.
ii) The physical spectrum is a smoothed version of the WVS
[18], [33].
iii) The STFT window is not chosen as the eigenfunction of
corresponding to the largest eigenvalue.
An approximate multiwindow STFT implementation can
also be developed for the TF pseudo-projection filter (which
). Here, the TF
approximates the optimal projection filter
weighting function is
Since
is not exactly underspread, the function
defining the windows
and the coefficients
must be
, and
chosen such that it covers the effective support of
hence
(33)
can be constructed from
Subsequently, the and
as explained above. The resulting multiwindow STFT filter
leads to an approximation error
that can be
bounded as
with
. This bound is a measure of the exand will thus be small in the underspread
tension of
even for
since
case. However,
(33) is only an approximation.
VII. NUMERICAL SIMULATIONS
This section presents numerical simulations that illustrate and
corroborate our theoretical results. Using the TF synthesis tech-
Fig. 4. Second-order statistics of signal and noise. (a) WVS of s (t) . (b) WVS
of n(t) . (c) Expected ambiguity function of s(t) . (d) Expected ambiguity
function of n(t) . The rectangles in parts (c) and (d) have area 1 and thus allow
assessment of the underspread property of s(t) and n(t) . The signal length is
128 samples.
nique introduced in [59], we synthesized random processes
and
with expected energies
and
,
respectively. Thus, the mean input SNR is SNR
dB. The WVS and expected ambiguity functions of
and
are shown in Fig. 4. From the expected ambiguity functions, it is seen that the processes are jointly underspread. The
, its underspread part
, and the
WS’s of the Wiener filter
are shown in Fig. 5(a)–(c). The TF
TF pseudo-Wiener filter
pass, stop, and transition regions of the filters are easily recogis a smoothed version of
nized. It is verified that the WS of
and that the WS of
closely approximates that
the WS of
. Fig. 5(d)–(f) compare the WS’s of the optimal projection
of
filter , TF pseudo-projection filter , and TF projection filter
. It is seen that the WS’s of these filters are similar, except
for small-scale oscillations, and that the WS of is a rounded
.
version of the WS of
SNR
SNR
The mean SNR improvements SNR
(where SNR
with
) achieved
with the Wiener-type and projection-type filters are listed in
is
Table I. The performance of the TF pseudo-Wiener filter
. Similarly,
seen to be very close to that of the Wiener filter
the performance of the TF pseudo-projection filter is close
. In fact, performs
to that of the optimal projection filter
and the TF projection filter ,
even slightly better than both
Fig. 5. WS's of (a) Wiener filter $\mathbf{H}_W$, (b) underspread part $\mathbf{H}_W^{u}$ of the Wiener filter, (c) TF pseudo-Wiener filter $\tilde{\mathbf{H}}_W$, (d) optimal projection filter $\mathbf{H}_P$, (e) TF pseudo-projection filter $\tilde{\mathbf{H}}_P$, and (f) TF projection filter $\tilde{\mathbf{P}}$.

TABLE I
MEAN SNR IMPROVEMENT ACHIEVED BY WIENER-TYPE AND PROJECTION-TYPE FILTERS
which can be attributed to the orthogonal projection constraint underlying $\mathbf{H}_P$ and $\tilde{\mathbf{P}}$ (see Section V-C).
The multiwindow STFT implementation discussed in Secapproximated according to (32), is contion VI, with
sidered in Fig. 6. The signal and noise processes are as before. Fig. 6(a)–(c) show the WS’s of the multiwindow STFT
using filter order
and . For growing
filters
becomes increasingly similar to the Wiener filter or
the TF pseudo-Wiener filter [cf. Fig. 5(a)–(c)]. Fig. 6(d) shows
depends
how the mean SNR improvement achieved with
is seen
on . Although the single-window STFT filter
Fig. 6. Multiwindow STFT implementation of the Wiener filter. (a)–(c) WS's of multiwindow STFT filters with filter order K = 1, 4, and 10. (d) Mean SNR improvement versus STFT filter order K.
Fig. 7. Filtering experiment involving an overspread noise process. (a) WVS of s(t). (b) WVS of n(t). (c) WS of Wiener filter $\mathbf{H}_W$. (d) Expected ambiguity function of s(t). (e) Expected ambiguity function of n(t). (f) WS of TF pseudo-Wiener filter $\tilde{\mathbf{H}}_W$. The rectangles in parts (d) and (e) have area 1 and thus allow assessment of the under-/overspread property of s(t) and n(t). (In particular, part (e) shows that n(t) is overspread.) The signal length is 128 samples.
to result in a significant performance loss, a modest filter order
already yields practically optimal performance.
An experiment in which the underspread assumption is viis again underspread
olated is shown in Fig. 7. The signal
(and, in addition, reasonably quasistationary), but the noise
is not underspread—it is reasonably quasistationary, but its temporal correlation width is too large, as can be seen from Fig. 7(e).
A comparison of Fig. 7(c) and (f) shows that the TF pseudois significantly different from the Wiener filter
Wiener filter
. The heavy oscillations of the WS of
in Fig. 7(c) inis far from being an underspread system. The
dicate that
and
were constructed such that they lie in
processes
is an
linearly independent (disjoint) signal spaces. Here,
,
oblique projection operator [44] that perfectly reconstructs
thereby achieving zero MSE and infinite SNR improvement. On
the other hand, due to the significant overlap of the WVS of
and
, the TF pseudo-Wiener filter
merely achieves
an SNR improvement of 4.79 dB. This example is important
and
is
since it shows that mere quasistationarity of
not sufficient as an assumption permitting a TF design of optimal filters. It also shows that in certain cases—specifically,
when signal and noise have significantly overlapping WVS but
belong to linearly independent signal spaces—the TF designed
filter performs poorly, even though the optimal Wiener filter
achieves perfect signal reconstruction. We stress that this situation implies that the processes are overspread (i.e., not underspread [12]), thereby prohibiting beforehand the successful use
of TF filter methods.
Finally, we present the results of a real-data experiment illustrating the application of the TF pseudo-Wiener filter to speech
enhancement. The speech signal used consists of 246 pitch periods of the German vowel “a” spoken by a male speaker. This
Fig. 8. Speech enhancement experiment using the TF pseudo-Wiener filter. (a) WS of TF pseudo-Wiener filter $\tilde{\mathbf{H}}_W$. (b) Two pitch periods of the noise-free speech signal. (c) Noise-corrupted speech signal. (d) Enhanced speech signal. All signals are represented by their time-domain waveforms and their (slightly smoothed) Wigner distributions [19], [58], [60], [61]. Horizontal axis: time (signal duration is 256 signal samples, corresponding to 2.32 ms). Vertical axis: frequency in kilohertz.
signal was sampled at 11.03 kHz and prefiltered in order to emphasize higher frequency content. The resulting discrete-time
signal was split into 123 segments of length 256 samples (corresponding to 2.32 ms), each of which contains two pitch periods.
Sixty four of these segments were considered as individual realizations of a nonstationary process of length 256 samples and
were used to estimate the WVS of this process. Furthermore, we
used a stationary AR noise process that provides a reasonable
model for car noise. Due to stationarity, the noise WVS equals
the PSD for which a closed-form expression exists; hence, there
was no need to estimate the noise WVS. Using the estimated
signal WVS and the exact noise WVS, the TF pseudo-Wiener
was then designed according to (26). The WS of
filter
is shown in Fig. 8(a).
The remaining 59 speech signal segments were corrupted by 59 different noise realizations and input to the TF
. The input SNR (averaged over the
pseudo-Wiener filter
dB, and the average SNR of
59 realizations) was SNR
the enhanced signal segments obtained at the filter output was
dB, corresponding to an average SNR improveSNR
ment of about 6.7 dB. Results for one particular realization are
shown in Fig. 8(b)–(d).
VIII. CONCLUSION
We have developed a time-frequency (TF) formulation and
an approximate TF design of optimal filters that is valid for underspread, nonstationary random processes, i.e., nonstationary
processes with limited TF correlation. We considered two types
of optimal filters: the classical time-varying Wiener filter and a
projection-constrained optimal filter. The latter will produce a
larger estimation error but has practically important advantages
concerning robustness and a priori knowledge.
The TF formulation of optimal filters was seen to allow an
intuitively appealing interpretation of optimal filters in terms
of passing, attenuating, or suppressing signal components located in different TF regions. The approximate TF design of
optimal filters is practically attractive since it is computation-
ally efficient and uses a more intuitive form of a priori knowledge. We furthermore discussed an efficient TF implementation
of time-varying optimal filters using multiwindow STFT filters.
Our TF formulation and design was based on the Weyl
symbol (WS) of linear, time-varying systems and the WVS
of nonstationary random processes. However, if the processes
satisfy a more restrictive type of underspread property (if they
are strictly underspread [8], [29]), then our results can be
extended in that the WS can be replaced by other members of
the class of generalized WSs [48] (such as Zadeh's time-varying
transfer function [25]), and the WVS can be replaced by other
members of the class of generalized WVS [18], [19] or generalized evolutionary spectra [29]. This extension is possible
since for strictly underspread systems, all generalized WS’s are
essentially equivalent and for strictly underspread processes,
all generalized WVS and generalized evolutionary spectra are
essentially equivalent [8], [9], [12], [19], [29], [47].
APPENDIX A
UNDERSPREAD APPROXIMATION OF THE WIENER FILTER:
PROOF OF THE BOUNDS (20) AND (21)
and
We assume that the expected ambiguity functions of
are exactly zero outside the same (possibly obliquely oriented) centered rectangle with area (joint correlation spread)
.
We first prove the bound (20). Using
and
, the approximation error
can be developed as
(34)
We will now show that
that is oriented as
rectangle
is zero outside the centered
but has sides that are twice as
long. Using
, it can be shown [13] that
, where
Furthermore,
fore obtain
tr
according to (6). We there-
tr
tr
(35)
and
is known as the twisted convolution of
[13]. In the
domain,
corresponds
, which can be rewritten as
to
. With
, we obtain
(36)
tr
tr
tr
tr
tr
The first term is zero, i.e., tr
, since
outside . The second term can be bounded by
applying Schwarz' inequality:
tr
Using (38), the upper bound in (21) follows.
The twisted convolution [cf. (35)] implies that
APPENDIX B
TF FORMULATION OF THE WIENER FILTER: PROOF OF THE
BOUND (22)
We consider the approximation error
and, hence, that the support of
is confined to
[recall that both
and
are confined to ].
is confined to
The crucial observation now is that since
and
is confined to , (36) implies that
must also be confined to
since any contrioutside
cannot be canceled by
butions of
and, thus, would contradict (36). Hence, we
can write (34) as
Subtracting and adding
, this can be rewritten as
with
According to the triangle inequality, the
norm of
is
. It is shown in [8], [11], and
bounded as
is bounded as
[46] that
(37)
Applying the Schwarz inequality to (35) yields the bound
.
is
Inserting this into (37) and using the fact that the area of
(i.e., four times the area of ), we obtain (20):
where we have used
bounded as
. Furthermore,
is
(38)
We next prove the bound (21). The lower bound
is trivial since
is the minimum possible MSE. Let us concan
sider the upper bound. With (3), the MSE achieved by
be written as
tr
tr
tr
tr
tr
where we have used (38). Inserting these two bounds into
gives (22).
REFERENCES
[1] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary
Time Series. Cambridge, MA: MIT Press, 1949.
[2] T. Kailath, Lectures on Wiener and Kalman Filtering. Wien, Austria:
Springer, 1981.
[3] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1984.
[4] L. L. Scharf, Statistical Signal Processing. Reading, MA: Addison
Wesley, 1991.
[5] C. W. Therrien, Discrete Random Signals and Statistical Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1992.
[6] H. V. Poor, An Introduction to Signal Detection and Estimation. New
York: Springer, 1988.
[7] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part
I: Detection, Estimation, and Linear Modulation Theory. New York:
Wiley, 1968.
[8] W. Kozek, “Matched Weyl-Heisenberg Expansions of Nonstationary
Environments,” Ph.D. dissertation, Vienna Univ. Technol., Vienna,
Austria, Mar. 1997.
[9] W. Kozek, F. Hlawatsch, H. Kirchauer, and U. Trautwein, “Correlative
time-frequency analysis and classification of nonstationary random processes,” in Proc. IEEE-SP Int. Symp. Time-Frequency Time-Scale Anal.,
Philadelphia, PA, Oct. 1994, pp. 417–420.
[10] W. Kozek, “On the underspread/overspread classification of nonstationary random processes,” in Proc. Int. Conf. Ind. Appl. Math.,
K. Kirchgässner, O. Mahrenholtz, and R. Mennicken, Eds. Berlin,
Germany, 1996, pp. 63–66.
, “Adaptation of Weyl-Heisenberg frames to underspread environ[11]
ments,” in Gabor Analysis and Algorithms: Theory and Applications, H.
G. Feichtinger and T. Strohmer, Eds. Boston, MA: Birkhäuser, 1998,
ch. 10, pp. 323–352.
[12] G. Matz and F. Hlawatsch, “Time-varying spectra for underspread and
overspread nonstationary processes,” in Proc. 32nd Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 1998, pp. 282–286.
[13] G. B. Folland, Harmonic Analysis in Phase Space. Princeton, NJ:
Princeton Univ. Press, 1989.
[14] A. J. E. M. Janssen, “Wigner weight functions and Weyl symbols of
non-negative definite linear operators,” Philips J. Res., vol. 44, pp. 7–42,
1989.
[15] W. Kozek, “Time-frequency signal processing based on the
Wigner-Weyl framework,” Signal Process., vol. 29, pp. 77–92,
Oct. 1992.
[16] R. G. Shenoy and T. W. Parks, “The Weyl correspondence and time-frequency analysis,” IEEE Trans. Signal Processing, vol. 42, pp. 318–331,
Feb. 1994.
[17] W. Martin and P. Flandrin, “Wigner–Ville spectral analysis of nonstationary processes,” IEEE Trans. Acoust., Speech, Signal Processing, vol.
ASSP-33, pp. 1461–1470, Dec. 1985.
[18] P. Flandrin and W. Martin, “The Wigner–Ville spectrum of nonstationary random signals,” in The Wigner Distribution—Theory
and Applications in Signal Processing, W. Mecklenbräuker and F.
Hlawatsch, Eds. Amsterdam, The Netherlands: Elsevier, 1997, pp.
211–267.
[19] P. Flandrin, Time-Frequency/Time-Scale Analysis. San Diego, CA:
Academic, 1999.
[20] W. Kozek, H. G. Feichtinger, and J. Scharinger, “Matched multiwindow
methods for the estimation and filtering of nonstationary processes,” in
Proc. IEEE ISCAS, Atlanta, GA, May 1996, pp. 509–512.
[21] N. A. Abdrabbo and M. B. Priestley, “Filtering non-stationary signals,”
J. R. Stat. Soc. Ser. B., vol. 31, pp. 150–159, 1969.
[22] J. A. Sills, “Nonstationary signal modeling, filtering, and parameterization,” Ph.D. dissertation, Georgia Inst. Technol., Atlanta, Mar. 1995.
[23] J. A. Sills and E. W. Kamen, “Wiener filtering of nonstationary signals
based on spectral density functions,” in Proc. 34th IEEE Conf. Decision
Contr., Kobe, Japan, Dec. 1995, pp. 2521–2526.
[24] H. A. Khan and L. F. Chaparro, “Nonstationary Wiener filtering based on
evolutionary spectral theory,” in Proc. IEEE ICASSP, Munich, Germany,
May 1997, pp. 3677–3680.
[25] L. A. Zadeh, “Frequency analysis of variable networks,” Proc. IRE, vol.
76, pp. 291–299, Mar. 1950.
[26] P. A. Bello, “Characterization of randomly time-variant linear channels,”
IEEE Trans. Commun. Syst., vol. COMM-11, pp. 360–393, 1963.
[27] M. B. Priestley, “Evolutionary spectra and nonstationary processes,” J.
R. Stat. Soc. Ser. B., vol. 27, no. 2, pp. 204–237, 1965.
, Spectral Analysis and Time Series—Part II. London, U.K.: Aca[28]
demic, 1981.
[29] G. Matz, F. Hlawatsch, and W. Kozek, “Generalized evolutionary spectral analysis and the Weyl spectrum of nonstationary random processes,”
IEEE Trans. Signal Processing, vol. 45, pp. 1520–1534, June 1997.
[30] A. A. Beex and M. Xie, “Time-varying filtering via multiresolution parametric spectral estimation,” in Proc. IEEE ICASSP, Detroit, MI, May
1995, pp. 1565–1568.
[31] P. Lander and E. J. Berbari, “Enhanced ensemble averaging using the
time-frequency plane,” in Proc. IEEE-SP Int. Symp. Time-Frequency
Time-Scale Anal., Philadelphia, PA, Oct. 1994, pp. 241–243.
[32] A. M. Sayeed, P. Lander, and D. L. Jones, “Improved time-frequency
filtering of signal-averaged electrocardiograms,” J. Electrocardiol., vol.
28, pp. 53–58, 1995.
[33] W. D. Mark, “Spectral analysis of the convolution and filtering of nonstationary stochastic processes,” J. Sound Vibr., vol. 11, no. 1, pp. 19–63,
1970.
[34] J. S. Lim and A. V. Oppenheim, “Enhancement and bandwidth compression of noisy speech,” Proc. IEEE, vol. 67, pp. 1586–1604, Dec. 1979.
[35] Y. Ephraim and D. Malah, “Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator,”
IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp.
1109–1121, Dec. 1984.
, “Speech enhancement using a minimum mean-square error log
[36]
spectral amplitude estimator,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 443–445, Apr. 1985.
[37] G. Doblinger, “Computationally efficient speech enhancement by spectral minima tracking in subbands,” in Proc. Eurospeech, Madrid, Spain,
Sept. 1995, pp. 1513–1516.
[38] G. W. Wornell and A. V. Oppenheim, “Estimation of fractal signals from
noisy measurements using wavelets,” IEEE Trans. Signal Processing,
vol. 40, pp. 611–623, Mar. 1992.
[39] G. W. Wornell, Signal Processing with Fractals: A Wavelet-Based Approach. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[40] M. Unser, “Wavelets, statistics, and biomedical applications,” in Proc.
IEEE SP Workshop Stat. Signal Array Process., Corfu, Greece, June
1996, pp. 244–249.
[41] S. P. Ghael, A. M. Sayeed, and R. G. Baraniuk, “Improved wavelet denoising via empirical Wiener filtering,” in Proc. SPIE Wavelet Appl.
Signal Image Process. V, San Diego, CA, July 1997, pp. 389–399.
[42] F. Hlawatsch, Time-Frequency Analysis and Synthesis of Linear Signal
Spaces: Time-Frequency Filters, Signal Detection and Estimation, and
Range-Doppler Estimation. Boston, MA: Kluwer, 1998.
[43] A. W. Naylor and G. R. Sell, Linear Operator Theory in Engineering
and Science, 2nd ed. New York: Springer, 1982.
[44] R. T. Behrens and L. L. Scharf, “Signal processing applications of
oblique projection operators,” IEEE Trans. Signal Processing, vol. 42,
pp. 1413–1424, June 1994.
[45] G. Matz and F. Hlawatsch, “Robust time-varying Wiener filters: Theory
and time-frequency formulation,” in Proc. IEEE-SP Int. Symp. TimeFrequency Time-Scale Anal., Pittsburgh, PA, Oct. 1998, pp. 401–404.
[46] W. Kozek, “On the transfer function calculus for underspread LTV channels,” IEEE Trans. Signal Processing, vol. 45, pp. 219–223, Jan. 1997.
[47] G. Matz and F. Hlawatsch, “Time-frequency transfer function calculus
(symbolic calculus) of linear time-varying systems (linear operators)
based on a generalized underspread theory,” J. Math. Phys., Special
Issue on Wavelet and Time-Frequency Analysis, vol. 39, pp. 4041–4071,
Aug. 1998.
[48] W. Kozek, “On the generalized Weyl correspondence and its application to time-frequency analysis of linear time-varying systems,” in Proc.
IEEE-SP Int. Symp. Time-Frequency Time-Scale Anal., Victoria, B.C.,
Canada, Oct. 1992, pp. 167–170.
[49] K. A. Sostrand, “Mathematics of the time-varying channel,” Proc. NATO
Adv. Study Inst. Signal Process. Emphasis Underwater Acoust., vol. 2,
pp. 25.1–25.20, 1968.
[50] H. Kirchauer, F. Hlawatsch, and W. Kozek, “Time-frequency formulation and design of nonstationary Wiener filters,” in Proc. IEEE ICASSP,
Detroit, MI, May 1995, pp. 1549–1552.
[51] F. Hlawatsch and W. Kozek, “Time-frequency projection filters and
time-frequency signal expansions,” IEEE Trans. Signal Processing,
vol. 42, pp. 3321–3334, Dec. 1994.
[52] I. Daubechies, “Time-frequency localization operators: A geometric
phase space approach,” IEEE Trans. Inform. Theory, vol. 34, pp. 605–612,
July 1988.
[53] W. Kozek and F. Hlawatsch, “A comparative study of linear and
nonlinear time-frequency filters,” in Proc. IEEE-SP Int. Symp.
Time-Frequency Time-Scale Anal., Victoria, B.C., Canada, Oct.
1992, pp. 163–166.
[54] M. R. Portnoff, “Time-frequency representation of digital signals and
systems based on short-time Fourier analysis,” IEEE Trans. Acoust.,
Speech, Signal Processing, vol. ASSP-28, pp. 55–69, Feb. 1980.
[55] S. H. Nawab and T. F. Quatieri, “Short-time Fourier transform,” in Advanced Topics in Signal Processing, J. S. Lim and A. V. Oppenheim,
Eds. Englewood Cliffs, NJ: Prentice-Hall, 1988, ch. 6, pp. 289–337.
[56] R. Bourdier, J. F. Allard, and K. Trumpf, “Effective frequency response
and signal replica generation for filtering algorithms using multiplicative
modifications of the STFT,” Signal Process., vol. 15, pp. 193–201, Sept.
1988.
[57] M. L. Kramer and D. L. Jones, “Improved time-frequency filtering
using an STFT analysis-modification-synthesis method,” in Proc.
IEEE-SP Int. Symp. Time-Frequency Time-Scale Anal., Philadelphia,
PA, Oct. 1994, pp. 264–267.
[58] F. Hlawatsch and G. F. Boudreaux-Bartels, “Linear and quadratic timefrequency signal representations,” IEEE Signal Processing Mag., vol. 9,
pp. 21–67, Apr. 1992.
[59] F. Hlawatsch and W. Kozek, “Second-order time-frequency synthesis of
nonstationary random processes,” IEEE Trans. Inform. Theory, vol. 41,
pp. 255–267, Jan. 1995.
[60] T. A. C. M. Claasen and W. F. G. Mecklenbräuker, “The Wigner distribution—A tool for time-frequency signal analysis; Parts I–III,” Philips
J. Res., vol. 35, pp. 217–250; 276–300; 372–389, 1980.
[61] W. Mecklenbräuker and F. Hlawatsch, Eds., The Wigner Distribution—Theory and Applications in Signal Processing. Amsterdam,
The Netherlands: Elsevier, 1997.
Franz Hlawatsch (S’85–M’88) received the
Dipl.-Ing., Dr.techn., and Univ.-Dozent degrees
in electrical engineering/communications engineering/signal processing from the Vienna
University of Technology, Vienna, Austria, in 1983,
1988, and 1996, respectively.
Since 1983, he has been with the Institute of
Communications and Radio-Frequency Engineering,
Vienna University of Technology. From 1991 to
1992, he spent a sabbatical year with the Department
of Electrical Engineering, University of Rhode
Island, Kingston. He authored the book Time-Frequency Analysis and Synthesis
of Linear Signal Spaces—Time-Frequency Filters, Signal Detection and
Estimation, and Range-Doppler Estimation (Boston, MA: Kluwer, 1998) and
co-edited the book The Wigner Distribution—Theory and Applications in
Signal Processing (Amsterdam, The Netherlands: Elsevier, 1997). His research
interests are in signal processing with emphasis on time-frequency methods
and communications applications.
Gerald Matz (S’95) received the Dipl.-Ing. degree
in electrical engineering from the Vienna University
of Technology, Vienna, Austria, in 1994.
Since 1995, he has been with the Institute of Communications and Radio-Frequency Engineering, Vienna University of Technology. His research interests
include the application of time-frequency methods to
statistical signal processing and wireless communications.
Heinrich Kirchauer was born in Vienna, Austria, in
1969. He received the Dipl.-Ing. degree in communications engineering and the doctoral degree in microelectronics engineering from the Vienna University
of Technology, Vienna, Austria, in 1994 and 1998,
respectively.
In the summer of 1997, he held a visiting research
position at LSI Logic, Milpitas, CA. In 1998, he
joined Intel Corporation, Santa Clara, CA. His
current interests are the modeling and simulation of
problems in microelectronics engineering, with
special emphasis on lithography simulation.
Werner Kozek (M’94) received the Dipl.-Ing. and
Ph.D. degrees in electrical engineering from the Vienna University of Technology, Vienna, Austria, in
1990 and 1997, respectively.
From 1990 to 1994, he was with the Institute of
Communications and Radio-Frequency Engineering,
Vienna University of Technology, and from 1994 to
1998, he was with the Department of Mathematics,
University of Vienna, both as a Research Assistant.
Since 1998, he has been with Siemens AG, Munich,
Germany, working on advanced xDSL technology.
His research interests include the signal processing and information theoretic
aspects of digital communication systems.