PARAMETER ESTIMATION FOR SINUSOIDAL SIGNALS WITH DETERMINISTIC
AMPLITUDE MODULATION
S. Schuster, S. Scheiblhofer, and A. Stelzer
Institute for Communications and Information Engineering, Christian Doppler Laboratory for Integrated Radar Sensors,
University of Linz, Altenbergerstr. 69, A-4040, Linz, Austria
phone: +43(0)732-2468-1822, fax: +43(0)732-2468-9712, email: [email protected]
web: www.icie.jku.at/cd-labor
ABSTRACT
The problem of estimating the phase and frequency of an amplitude-modulated sinusoidal signal is well known. Estimators and corresponding performance bounds, both for noise-like modulating functions and for purely deterministic modulating functions of different parametric models, i.e., polynomial or auto-regressive, are available in the literature. In this paper, we consider the case of known and unknown deterministic modulating functions with unspecified model. It will be shown that the asymptotic Cramér-Rao lower bounds (ACRBs) for known and unknown modulating functions are equal. Some similarities to the case of multiplicative noise are shown. Furthermore, the variance and bias of the standard discrete Fourier transform (DFT)-based estimators in the case of deterministic amplitude modulation are discussed, and maximum likelihood estimators (MLEs) for the case of known modulating functions are derived.
1. INTRODUCTION AND SIGNAL MODEL
The signal models
\[ \tilde{x}[n] = \tilde{s}[n;\theta] + \tilde{w}[n] = a[n]\exp\bigl(j(2\pi\psi_1 n + \phi_1)\bigr) + \tilde{w}[n] \tag{1} \]
and
\[ x[n] = s[n;\theta] + w[n] = a[n]\cos(2\pi\psi_1 n + \phi_1) + w[n] \tag{2} \]
for the complex and the real data case, respectively, are studied here, with n = 0, 1, . . . , N − 1, −π ≤ φ1 < π, and −1/4 ≤ ψ1 < 1/4 for the complex data case, respectively 0 ≤ ψ1 < 1/4 for the real data case. A tilde over a variable denotes a complex variable. Let
\[ \theta = [\, a[0] \;\; a[1] \;\cdots\; a[N-1] \;\; \phi_1 \;\; \psi_1 \,]^{T} \]
denote the vector of unknown parameters. The additive measurement noise is assumed to be (circular) white Gaussian noise with variance σ². The cases in which the amplitude modulating function a[n] can be described as multiplicative noise or obeys a parametric model [1, 2, 3, 4, 5] have been extensively discussed in the literature. Here, we treat the case in which a[n] is an arbitrary but strictly deterministic, real-valued envelope, assumed to be a smooth, continuous function, as it often occurs in real-world signal processing systems. One topic of this paper is to compare the performance of the well-known [2] nonlinear least squares (NLS) estimators
\[ \hat{\psi}_1^{(a)} = \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} \tilde{x}^2[n] \exp(-j4\pi\psi_1 n) \Bigr| \tag{3} \]
\[ \hat{\phi}_1^{(a)} = \frac{1}{2}\arctan \frac{\operatorname{Im}\Bigl\{ \sum_{n=0}^{N-1} \tilde{x}^2[n] \exp\bigl(-j4\pi\hat{\psi}_1^{(a)} n\bigr) \Bigr\}}{\operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \tilde{x}^2[n] \exp\bigl(-j4\pi\hat{\psi}_1^{(a)} n\bigr) \Bigr\}} \tag{4} \]
that account for the unknown modulation, with Re{·} and Im{·} denoting the real and the imaginary part operator, respectively, to the ACRB and to the standard DFT-based estimators
\[ \hat{\psi}_1^{(b)} = \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} \tilde{x}[n] \exp(-j2\pi\psi_1 n) \Bigr| \tag{5} \]
\[ \hat{\phi}_1^{(b)} = \arctan \frac{\operatorname{Im}\Bigl\{ \sum_{n=0}^{N-1} \tilde{x}[n] \exp\bigl(-j2\pi\hat{\psi}_1^{(b)} n\bigr) \Bigr\}}{\operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \tilde{x}[n] \exp\bigl(-j2\pi\hat{\psi}_1^{(b)} n\bigr) \Bigr\}} . \tag{6} \]
Note that both (3) and (5) can be efficiently implemented with the fast Fourier transform (FFT) algorithm and zero-padding. For the estimators (5) and (6), we further require a[n] ≥ 0 for all n; otherwise, heavily biased estimates may result. Furthermore, it will be discussed that it is signal-to-noise ratio (SNR) dependent whether the estimator (3) or (5) should be preferred, an effect that has also been found for the multiplicative noise case in [1]. Also, the NLS estimator for the case of known modulating function will be derived. All results for this case will be marked with a superscript (c). Note that the restriction of the possible range of ψ1 occurs because of the squaring operation in (3) and is not necessary for the standard estimators (5) and (6), nor if the modulating function is known. Both real and complex data are treated for reasons that will become obvious later.
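As a brief illustration of the FFT implementation mentioned above, the following Python/NumPy sketch evaluates the standard estimators (5) and (6) on a zero-padded frequency grid. It is a minimal sketch under assumed choices (zero-padding factor, use of the four-quadrant arctangent), not the authors' implementation.

```python
import numpy as np

def dft_estimator(x, zp_factor=8):
    """Standard DFT-based estimates of psi_1 and phi_1, cf. (5) and (6)."""
    N = len(x)
    M = zp_factor * N
    X = np.fft.fft(x, M)                             # zero-padded DFT
    k = np.argmax(np.abs(X))                         # coarse maximizer of (5)
    psi_hat = k / M if k < M // 2 else k / M - 1.0   # map to [-1/2, 1/2)
    n = np.arange(N)
    c = np.sum(x * np.exp(-2j * np.pi * psi_hat * n))
    phi_hat = np.arctan2(c.imag, c.real)             # four-quadrant version of (6)
    return psi_hat, phi_hat
```

The squared-data estimators (3) and (4) follow the same pattern with x replaced by x**2, the resulting frequency divided by two, and the arctangent halved.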
2. DERIVATIONS
For all following derivations, we generally assume that N is large enough and a[n] is smooth enough (i.e., the continuous-time counterpart of a[n] is a continuous function) so that the following asymptotic expressions hold for i = 0, 1, 2 in good approximation:
\[ \frac{1}{N^{i+1}} \sum_{n=0}^{N-1} a[n]^2 \cos(4\pi\psi_1 n + 2\phi_1)\, n^i \approx 0 \tag{7} \]
\[ \frac{1}{N^{i+1}} \sum_{n=0}^{N-1} a[n]^2 \sin(4\pi\psi_1 n + 2\phi_1)\, n^i \approx 0 \tag{8} \]
\[ \sum_{n=0}^{N-1} n^i \approx \frac{N^{i+1}}{i+1} , \tag{9} \]
see [6] and also [7]. The case a[n] ≡ 0 is excluded for obvious reasons in all derivations.
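These approximations are simple to check numerically for a concrete envelope. The sketch below is purely illustrative; the Hann-shaped envelope and the parameter values are assumptions, not taken from the paper.

```python
import numpy as np

N, psi1, phi1 = 512, 0.125, 0.3
n = np.arange(N)
a = np.hanning(N)                     # an assumed smooth, nonnegative envelope

for i in range(3):                    # i = 0, 1, 2 as required above
    lhs7 = np.sum(a**2 * np.cos(4*np.pi*psi1*n + 2*phi1) * n**i) / N**(i + 1)
    lhs8 = np.sum(a**2 * np.sin(4*np.pi*psi1*n + 2*phi1) * n**i) / N**(i + 1)
    lhs9 = np.sum(n.astype(float)**i) * (i + 1) / N**(i + 1)   # should be close to 1
    print(f"i={i}:  (7) ~ {lhs7:.1e}   (8) ~ {lhs8:.1e}   (9) ratio ~ {lhs9:.3f}")
```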
2.1 ACRB, Known Modulating Function
2.1.1 Complex Data
Assuming a[n] is known up to a multiplicative constant A1, that is, a[n] = A1 a0[n], we have θ = [ A1 φ1 ψ1 ]^T. To calculate the ACRB,
\[ [\mathbf{F}(\theta)]_{km} = \frac{2}{\sigma^2} \operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \frac{\partial \tilde{s}^{*}[n;\theta]}{\partial \theta_k} \frac{\partial \tilde{s}[n;\theta]}{\partial \theta_m} \Bigr\} \tag{10} \]
can be applied due to the AWGN assumption to calculate the elements of the Fisher information matrix (FIM) [8], with k, m = 1, 2, 3 and ∗ denoting complex conjugation. We obtain
\[ [\mathbf{F}(\theta)]_{11} = \frac{2}{\sigma^2} \operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} a_0[n] e^{-j(2\pi\psi_1 n + \phi_1)} \, a_0[n] e^{+j(2\pi\psi_1 n + \phi_1)} \Bigr\} = \frac{2}{\sigma^2} \sum_{n=0}^{N-1} a_0[n]^2 , \]
\[ [\mathbf{F}(\theta)]_{22} = \frac{2}{\sigma^2} \operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \bigl(-jA_1 a_0[n] e^{-j(2\pi\psi_1 n + \phi_1)}\bigr) \bigl(jA_1 a_0[n] e^{+j(2\pi\psi_1 n + \phi_1)}\bigr) \Bigr\} = \frac{2A_1^2}{\sigma^2} \sum_{n=0}^{N-1} a_0[n]^2 \tag{11} \]
\[ [\mathbf{F}(\theta)]_{32} = \frac{2}{\sigma^2} \operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \bigl(-jA_1 a_0[n] e^{-j(2\pi\psi_1 n + \phi_1)}\bigr) \bigl(j2\pi n A_1 a_0[n] e^{+j(2\pi\psi_1 n + \phi_1)}\bigr) \Bigr\} = \frac{4\pi A_1^2}{\sigma^2} \sum_{n=0}^{N-1} a_0[n]^2 n \tag{12} \]
\[ [\mathbf{F}(\theta)]_{33} = \frac{2}{\sigma^2} \operatorname{Re}\Bigl\{ \sum_{n=0}^{N-1} \bigl(-j2\pi n A_1 a_0[n] e^{-j(2\pi\psi_1 n + \phi_1)}\bigr) \bigl(j2\pi n A_1 a_0[n] e^{+j(2\pi\psi_1 n + \phi_1)}\bigr) \Bigr\} = \frac{8\pi^2 A_1^2}{\sigma^2} \sum_{n=0}^{N-1} a_0[n]^2 n^2 \tag{13} \]
and [F(θ)]12 = [F(θ)]13 = 0, as can be easily verified. All other elements can be obtained using the symmetry of the FIM. Inverting the block-diagonal FIM yields the following asymptotic bounds:
\[ \operatorname{var}\bigl\{\hat{\phi}_1^{(c)}\bigr\}_{\mathrm{cmplx}} \ge \frac{\sigma^2}{2A_1^2} \frac{\sum_{n=0}^{N-1} a_0[n]^2 n^2}{\sum_{n=0}^{N-1} a_0[n]^2 n^2 \sum_{n=0}^{N-1} a_0[n]^2 - \bigl(\sum_{n=0}^{N-1} a_0[n]^2 n\bigr)^2} \tag{14} \]
\[ \operatorname{var}\bigl\{\hat{\psi}_1^{(c)}\bigr\}_{\mathrm{cmplx}} \ge \frac{\sigma^2}{2(2\pi)^2 A_1^2} \frac{\sum_{n=0}^{N-1} a_0[n]^2}{\sum_{n=0}^{N-1} a_0[n]^2 n^2 \sum_{n=0}^{N-1} a_0[n]^2 - \bigl(\sum_{n=0}^{N-1} a_0[n]^2 n\bigr)^2} . \tag{15} \]
As a simple validation, consider a0[n] ≡ 1, which yields, after some straightforward manipulations and using (9), the standard ACRBs for the "classical" sinusoidal parameter estimation problem without modulation. Furthermore, the above bounds also hold if a[n] is completely known.
2.1.2 Real Data
Similarly to the complex case, but assuming 0 < ψ1 < 1/2 so that interference between the positive and the negative half of the spectrum can be neglected, application of
\[ [\mathbf{F}(\theta)]_{km} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \frac{\partial s[n;\theta]}{\partial \theta_k} \frac{\partial s[n;\theta]}{\partial \theta_m} \tag{16} \]
yields the following ACRBs:
\[ \operatorname{var}\bigl\{\hat{\phi}_1^{(c)}\bigr\}_{\mathrm{real}} \ge 2\sigma^2 \frac{\sum_{n=0}^{N-1} a[n]^2 n^2}{\sum_{n=0}^{N-1} a[n]^2 n^2 \sum_{n=0}^{N-1} a[n]^2 - \bigl(\sum_{n=0}^{N-1} a[n]^2 n\bigr)^2} \tag{17} \]
\[ \operatorname{var}\bigl\{\hat{\psi}_1^{(c)}\bigr\}_{\mathrm{real}} \ge \frac{\sigma^2}{2\pi^2} \frac{\sum_{n=0}^{N-1} a[n]^2}{\sum_{n=0}^{N-1} a[n]^2 n^2 \sum_{n=0}^{N-1} a[n]^2 - \bigl(\sum_{n=0}^{N-1} a[n]^2 n\bigr)^2} . \tag{18} \]
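The bounds (14), (15), (17), and (18) involve only weighted sums of a[n] and are therefore easy to evaluate for any given envelope. The following sketch is a hedged illustration with an assumed exponentially decaying envelope and an assumed noise variance; it computes the real-data bounds (17) and (18).

```python
import numpy as np

def acrb_real(a, sigma2):
    """Asymptotic bounds (17) and (18) for phase and frequency, real data."""
    n = np.arange(len(a))
    S0 = np.sum(a**2)
    S1 = np.sum(a**2 * n)
    S2 = np.sum(a**2 * n**2)
    D = S2 * S0 - S1**2                            # common denominator
    var_phi = 2.0 * sigma2 * S2 / D                # (17)
    var_psi = sigma2 * S0 / (2.0 * np.pi**2 * D)   # (18)
    return var_phi, var_psi

N, sigma2 = 512, 0.1
print(acrb_real(np.exp(-0.02 * np.arange(N)), sigma2))   # modulated envelope
print(acrb_real(np.ones(N), sigma2))                     # classical, unmodulated case
```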
2.2 ACRB, Unknown Modulating Function
2.2.1 Complex Data
For
\[ \theta = [\, a[0] \;\; a[1] \;\cdots\; a[N-1] \;\; \phi_1 \;\; \psi_1 \,]^{T} , \]
we can also apply (10), but F(θ) is now of dimension (N + 2) × (N + 2). After some work, the following elements of the FIM can be obtained:
\[ [\mathbf{F}(\theta)]_{km} = \begin{cases} \dfrac{2}{\sigma^2} & k = m = 1, 2, \ldots, N \\ 0 & k, m = 1, 2, \ldots, N,\; k \ne m \\ \dfrac{2}{\sigma^2}\operatorname{Re}\{-j a[n]\} = 0 & k = N+1,\; m = 1, 2, \ldots, N \\ \dfrac{2}{\sigma^2}\operatorname{Re}\{-j 2\pi n\, a[n]\} = 0 & k = N+2,\; m = 1, 2, \ldots, N . \end{cases} \]
Furthermore, [F(θ)]N+1,N+1, [F(θ)]N+1,N+2, and [F(θ)]N+2,N+2 are given by (11), (12), and (13), respectively. All other elements are obtained by the symmetry of the FIM. It can be seen that F(θ) is block-diagonal, and hence the well-known rule for inverting block-diagonal matrices can be applied. We obtain
\[ \operatorname{var}\{\hat{a}[n]\}_{\mathrm{cmplx}} \ge \frac{\sigma^2}{2} , \]
and the bounds for the phase and the frequency are equal to the bounds when the modulating function is known and are given by (14) and (15). This somewhat unexpected result is further discussed in Section 2.5.

2.2.2 Real Data
Using the abbreviation β = 2πψ1 n + φ1, some trigonometric identities, and (16), the following elements of the FIM can be obtained:
\[ [\mathbf{F}(\theta)]_{km} = \frac{1}{\sigma^2} \begin{cases} \cos^2(\beta) & k = m = 1, 2, \ldots, N \\ 0 & k, m = 1, 2, \ldots, N,\; k \ne m \\ -a[n]\sin(\beta)\cos(\beta) & k = N+1,\; m = 1, 2, \ldots, N \\ -a[n]\, 2\pi n \sin(\beta)\cos(\beta) & k = N+2,\; m = 1, 2, \ldots, N , \end{cases} \]
\[ [\mathbf{F}(\theta)]_{N+1,N+1} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} a[n]^2 \sin^2(\beta) \approx \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} a[n]^2 \]
\[ [\mathbf{F}(\theta)]_{N+1,N+2} = \frac{2\pi}{\sigma^2} \sum_{n=0}^{N-1} a[n]^2 n \sin^2(\beta) \approx \frac{2\pi}{2\sigma^2} \sum_{n=0}^{N-1} a[n]^2 n \]
\[ [\mathbf{F}(\theta)]_{N+2,N+2} = \frac{(2\pi)^2}{\sigma^2} \sum_{n=0}^{N-1} a[n]^2 n^2 \sin^2(\beta) \approx \frac{(2\pi)^2}{2\sigma^2} \sum_{n=0}^{N-1} a[n]^2 n^2 . \]
Using standard formulas for the determinants of partitioned matrices, it can be easily seen that the so obtained FIM is singular. This typically occurs if the number of unknown parameters is greater than N, as is the case here. Rao [9] suggested using the Moore-Penrose pseudoinverse of the FIM, which yields the same bounds (17) and (18) as for the case of known modulating function.
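The rank deficiency and the pseudoinverse-based bound are easy to check numerically. The following sketch builds the exact (N+2)×(N+2) real-data FIM from the derivatives of the model (2) for an assumed envelope and assumed parameter values (all illustrative), verifies that it is singular, and extracts the phase and frequency entries of its Moore-Penrose pseudoinverse for comparison with (17) and (18).

```python
import numpy as np

N, psi1, phi1, sigma2 = 64, 0.125, 0.3, 0.1
n = np.arange(N)
a = np.hanning(N) + 0.1                              # assumed smooth, positive envelope
beta = 2 * np.pi * psi1 * n + phi1

# Jacobian of s[n; theta] w.r.t. theta = [a[0], ..., a[N-1], phi1, psi1]
G = np.zeros((N, N + 2))
G[np.arange(N), np.arange(N)] = np.cos(beta)         # d s / d a[n]
G[:, N] = -a * np.sin(beta)                          # d s / d phi1
G[:, N + 1] = -2 * np.pi * n * a * np.sin(beta)      # d s / d psi1

F = G.T @ G / sigma2                                 # Fisher information matrix, cf. (16)
print("rank of F:", np.linalg.matrix_rank(F), "of", N + 2)   # rank-deficient
Fp = np.linalg.pinv(F)                               # Moore-Penrose pseudoinverse
print("pseudoinverse bound for phi1:", Fp[N, N])
print("pseudoinverse bound for psi1:", Fp[N + 1, N + 1])
```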
However, Stoica et al. [10] showed that in most cases there is no unbiased estimator with finite variance for θ. In fact, it is intuitively clear that in the case of real data there is no unbiased counterpart to the estimator
\[ \hat{a}[n] = \operatorname{Re}\Bigl\{ \tilde{x}[n] \exp\bigl(-j2\pi\hat{\psi}_1^{(a)} n\bigr) \exp\bigl(-j\hat{\phi}_1^{(a)}\bigr) \Bigr\} \tag{19} \]
of the complex data case. Whereas in the complex case it can be seen from (19) that it is possible to eliminate the exponential terms by premultiplying with their estimated inverse functions, this is not possible in the real data case. However, simulation results indicate that the resultant frequency and phase estimates are unbiased when applying (3) to real data.
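For the complex case, forming the envelope estimate (19) from given frequency and phase estimates takes only a few lines. The sketch below is illustrative; the test signal, envelope, and noise level are assumptions.

```python
import numpy as np

def envelope_estimate(x, psi_hat, phi_hat):
    """Envelope estimate a_hat[n] according to (19), complex data."""
    n = np.arange(len(x))
    return np.real(x * np.exp(-2j * np.pi * psi_hat * n) * np.exp(-1j * phi_hat))

# assumed test signal
rng = np.random.default_rng(1)
N, psi1, phi1, sigma = 256, 0.125, 0.3, 0.05
n = np.arange(N)
a = np.hanning(N) + 0.2
w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * sigma / np.sqrt(2)
x = a * np.exp(1j * (2 * np.pi * psi1 * n + phi1)) + w

a_hat = envelope_estimate(x, psi1, phi1)    # true values stand in for the estimates here
print("max envelope error:", np.max(np.abs(a_hat - a)))
```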
2.3 Variance and Bias, DFT-based Estimators
2.3.1 Real Data
To derive approximate expressions for the bias and variance of (5) under the assumption that a[n] ≥ 0, a Taylor series expansion approach is used [8], with the estimator being viewed as a function of the noise:
\[ \hat{\psi}_1^{(b)} = h_{\hat{\psi}_1^{(b)}}(\mathbf{w}) \approx h_{\hat{\psi}_1^{(b)}}(\mathbf{0}) + \sum_{m=0}^{N-1} \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} w[m] . \]
Because (5) is given as an implicit function of ψ1 only, a Taylor series expansion of the squared DFT magnitude spectrum (which is equivalent to using the estimator (5))
\[ J(\psi) = \Bigl| \sum_{n=0}^{N-1} x[n] \exp(-j2\pi\psi n) \Bigr|^2 \]
around the true value ψ1 is used, yielding
\[ h_{\hat{\psi}_1^{(b)}}(\mathbf{w}) \approx \psi_1 - \frac{\partial J(\psi)/\partial\psi \,\big|_{\psi=\psi_1}}{\partial^2 J(\psi)/\partial\psi^2 \,\big|_{\psi=\psi_1}} = \psi_1 - \frac{\xi(\psi_1)}{\xi'(\psi_1)} . \]
In matrix notation, with (·)^T denoting the transpose and C the covariance matrix of the data, we obtain [8] the following expressions for the bias and the variance:
\[ \operatorname{bias}\bigl\{\hat{\psi}_1^{(b)}\bigr\} \approx h_{\hat{\psi}_1^{(b)}}(\mathbf{0}) - \psi_1 \tag{20} \]
\[ \operatorname{var}\bigl\{\hat{\psi}_1^{(b)}\bigr\} \approx \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial \mathbf{w}}^{T}\bigg|_{\mathbf{w}=\mathbf{0}} \mathbf{C}\, \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial \mathbf{w}}\bigg|_{\mathbf{w}=\mathbf{0}} = \sigma^2 \sum_{m=0}^{N-1} \Biggl( \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} \Biggr)^{2} , \tag{21} \]
where the last equality follows from the assumption that the measurement noise is white. The derivative of h_{ψ̂1^(b)}(w) is given by
\[ \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} = \Biggl( -\frac{1}{\xi'(\psi_1)} \frac{\partial \xi(\psi_1)}{\partial w[m]} + \frac{\xi(\psi_1)}{\xi'(\psi_1)^2} \frac{\partial \xi'(\psi_1)}{\partial w[m]} \Biggr)\Bigg|_{\mathbf{w}=\mathbf{0}} . \tag{22} \]
After some further work and application of the large-sample approximations under the assumption 0 < ψ1 < 1/2, we obtain
\[ \xi(\psi_1)\big|_{\mathbf{w}=\mathbf{0}} \approx 0, \qquad \xi'(\psi_1)\big|_{\mathbf{w}=\mathbf{0}} \approx 2\pi^2 \Bigl( \sum_{n} a[n] n \Bigr)^2 - 2\pi^2 \sum_{n} a[n] n^2 \sum_{n} a[n] \]
and
\[ \frac{\partial \xi(\psi_1)}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} \approx 2\pi \sin(2\pi\psi_1 m + \phi_1) \Bigl( \sum_{n} a[n] n - m \sum_{n} a[n] \Bigr) . \]
Inserting the above expressions into (22) yields
\[ \frac{\partial h_{\hat{\psi}_1^{(b)}}(\mathbf{w})}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} \approx -\frac{1}{\xi'(\psi_1)} \frac{\partial \xi(\psi_1)}{\partial w[m]}\bigg|_{\mathbf{w}=\mathbf{0}} = - \frac{\sin(2\pi\psi_1 m + \phi_1) \bigl( \sum_{n=0}^{N-1} a[n] n - m \sum_{n=0}^{N-1} a[n] \bigr)}{\pi \Bigl[ \bigl( \sum_{n=0}^{N-1} a[n] n \bigr)^2 - \sum_{n=0}^{N-1} a[n] n^2 \sum_{n=0}^{N-1} a[n] \Bigr]} , \]
and thus we obtain from (20) and (21) that
\[ \operatorname{bias}\bigl\{\hat{\psi}_1^{(b)}\bigr\}_{\mathrm{real}} \approx 0 \tag{23} \]
and
\[ \operatorname{var}\bigl\{\hat{\psi}_1^{(b)}\bigr\}_{\mathrm{real}} \approx \frac{\sigma^2}{\pi^2} \frac{\sum_{m=0}^{N-1} \sin^2(2\pi\psi_1 m + \phi_1) \bigl( \sum_{n=0}^{N-1} a[n] n - m \sum_{n=0}^{N-1} a[n] \bigr)^2}{\Bigl[ \bigl( \sum_{n=0}^{N-1} a[n] n \bigr)^2 - \sum_{n=0}^{N-1} a[n] n^2 \sum_{n=0}^{N-1} a[n] \Bigr]^2} \approx \frac{\sigma^2}{2\pi^2} \frac{\sum_{m=0}^{N-1} \bigl( \sum_{n=0}^{N-1} a[n] n - m \sum_{n=0}^{N-1} a[n] \bigr)^2}{\Bigl[ \bigl( \sum_{n=0}^{N-1} a[n] n \bigr)^2 - \sum_{n=0}^{N-1} a[n] n^2 \sum_{n=0}^{N-1} a[n] \Bigr]^2} . \tag{24} \]
Some notes on (23) and (24): if the interaction of the positive and the negative half of the spectrum cannot be neglected (leakage), it is advantageous to use windowing functions to suppress the resulting bias, at the expense of further increasing the estimation variance. Such a windowing function can be incorporated into the above derivation.
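Expression (24) involves only low-order weighted sums of a[n], so it can be evaluated directly for any envelope and compared with the bound (18) (cf. the acrb_real sketch above). The sketch below is illustrative, with an assumed envelope and noise variance.

```python
import numpy as np

def var_dft_real(a, sigma2):
    """Asymptotic variance (24) of the DFT-based frequency estimate, real data."""
    n = np.arange(len(a))
    S0, S1, S2 = np.sum(a), np.sum(a * n), np.sum(a * n**2)
    num = sigma2 * np.sum((S1 - n * S0) ** 2)       # sum over m, with sin^2 ~ 1/2 absorbed
    den = 2 * np.pi**2 * (S1**2 - S2 * S0) ** 2     # squared bracket of (24)
    return num / den

a = np.exp(-0.02 * np.arange(512))                  # assumed decaying envelope
print("asymptotic variance (24):", var_dft_real(a, sigma2=0.1))
```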
2.3.2 Complex Data
w=0
where the last line follows from the assumption that the measurement noise is white . The derivative of hψ̂ (b) (w) is given by
1
∂ hψ̂ (b) (w) 1
∂ w[m] !
.
=
w=0
w=0
(22)
After some further work and application of large-sample approximations under the assumption 0 ψ1 1/2, we obtain
1 ∂ ξ (ψ1 )
ξ (ψ1 ) ∂ ξ 0 (ψ1 )
− 0
+
ξ (ψ1 ) ∂ w[m]
ξ 0 (ψ1 )2 ∂ w[m]
ξ (ψ1 )|w=0 ≈ 0.
ξ 0 (ψ1 )w=0 ≈ 2π 2
2
©2007 EURASIP
∑ a[n]n
n
− 2π 2
Completely analogous to the case of real data and under the same
assumptions, the following asymptotic variance expressions are obtained:
2
N−1 N−1
N−1
a[n]n
−
m
a[n]
∑
∑
∑
n
o
σ2
(b)
m=0 n=0
n=0
var ψ̂1
≈ 2 "
#2
cmplx
8π
2
2
∑ a[n]n − ∑ a[n]n
∑ a[n]
n
n
o
(b)
var φ̂1
∑ a[n]n2
n
∑ a[n]
n=0
(25)
The mean-square error and threshold level of the estimator (25)
have been treated in [11]. Also here, a simple validation is possible by setting a[n] ≡ A1 and comparing the results to the classical
ACRB for sinusoidal parameter estimation.
o
n
∂ hψ̂ (b) (w) T
(b)
1
var ψ̂1
≈
∂w N−1
n=0
Some notes on (23) and (24): If the interaction of the positive and
the negative half of the spectrum cannot be neglected (leakage), it is
advantageous to use windowing functions to suppress the resulting
bias, on the expense of further increasing the estimation variance. A
possibly used windowing function can be incorporated into above
derivation.
Similarly, we obtain
2
N−1
2
4π
a[n]n
∑
n
o
n
o
N
(b)
(b)
n=0
var φ̂1
≈ 2σ 2 .
2 + 2 var ψ̂1
ψ=ψ1
= σ2
n=0
N−1 N−1
∑
∂ hψ̂ (b) (w) 1
·C·
∂w w=0
w=0
!2
∂ hψ̂ (b) (w) 1
,
∂ w[m] (23)
cmplx
∑ a[n]
≈
σ2
N
2 +
2 N−1
∑ a[n]
n=0
n
953
n
n
2
∑ a[n]n
n
o
(b)
n=0
.
2 var ψ̂1
N−1
∑ a[n]
4π 2
N−1
n=0
15th European Signal Processing Conference (EUSIPCO 2007), Poznan, Poland, September 3-7, 2007, copyright by EURASIP
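That validation is easy to reproduce numerically: with a[n] ≡ A1, expression (25) and the ACRB (15) coincide up to the asymptotic approximations used above. A small illustrative sketch with assumed values:

```python
import numpy as np

def var_dft_cmplx(a, sigma2):
    """Asymptotic variance (25) of the DFT-based frequency estimate, complex data."""
    n = np.arange(len(a))
    S0, S1, S2 = np.sum(a), np.sum(a * n), np.sum(a * n**2)
    return sigma2 * np.sum((S1 - n * S0) ** 2) / (8 * np.pi**2 * (S1**2 - S2 * S0) ** 2)

def acrb_psi_cmplx(a0, A1, sigma2):
    """ACRB (15) for the frequency, complex data, with a[n] = A1 * a0[n]."""
    n = np.arange(len(a0))
    S0, S1, S2 = np.sum(a0**2), np.sum(a0**2 * n), np.sum(a0**2 * n**2)
    return sigma2 * S0 / (2 * (2 * np.pi)**2 * A1**2 * (S2 * S0 - S1**2))

N, A1, sigma2 = 512, 1.0, 0.1
print(var_dft_cmplx(A1 * np.ones(N), sigma2))   # (25) with a[n] = A1
print(acrb_psi_cmplx(np.ones(N), A1, sigma2))   # (15), classical unmodulated case
```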
2.4 Estimator, Known Modulating Function
If a[n] is known, this can be advantageously incorporated. Using the trigonometric identity
\[ \cos(\beta_1 + \beta_2) = \cos(\beta_1)\cos(\beta_2) - \sin(\beta_1)\sin(\beta_2) , \]
the model (2) can be rewritten as
\[ x[n] = s[n;\alpha;\psi_1] + w[n] = a[n]\,\alpha_1 \cos(2\pi\psi_1 n) + a[n]\,\alpha_2 \sin(2\pi\psi_1 n) + w[n] , \]
with α1 = cos(φ1) and α2 = −sin(φ1). In matrix notation, denoting α = [α1 α2]^T, we have
\[ \mathbf{x} = \mathbf{H}(\psi_1)\,\alpha + \mathbf{w} , \]
with
\[ \mathbf{h}_1 = [\, a[0]\gamma[0] \;\; a[1]\gamma[1] \;\cdots\; a[N-1]\gamma[N-1] \,]^{T} \]
\[ \mathbf{h}_2 = [\, a[0]\gamma'[0] \;\; a[1]\gamma'[1] \;\cdots\; a[N-1]\gamma'[N-1] \,]^{T} , \]
where γ[n] = cos(2πψ1 n) and γ'[n] = sin(2πψ1 n). Applying the principle of separable LS, it is shown, e.g., in [8] that
\[ \hat{\psi}_1^{(c)} = \arg\max_{\psi_1} \mathbf{x}^{T} \mathbf{H} \bigl(\mathbf{H}^{T}\mathbf{H}\bigr)^{-1} \mathbf{H}^{T} \mathbf{x} . \tag{26} \]
Assuming 0 < ψ1 < 1/2, the large-sample approximations (7) and (8) can be applied, which diagonalizes the Grammian H^T H, because
\[ \frac{1}{N}\mathbf{h}_1^{T}\mathbf{h}_2 = \frac{1}{N}\mathbf{h}_2^{T}\mathbf{h}_1 = \frac{1}{N} \sum_{n=0}^{N-1} a[n]^2 \cos(2\pi\psi_1 n)\sin(2\pi\psi_1 n) \approx 0 \]
\[ \frac{1}{N}\mathbf{h}_1^{T}\mathbf{h}_1 = \frac{1}{N} \sum_{n=0}^{N-1} a[n]^2 \cos^2(2\pi\psi_1 n) \approx \frac{1}{2N} \sum_{n=0}^{N-1} a[n]^2 \]
\[ \frac{1}{N}\mathbf{h}_2^{T}\mathbf{h}_2 = \frac{1}{N} \sum_{n=0}^{N-1} a[n]^2 \sin^2(2\pi\psi_1 n) \approx \frac{1}{2N} \sum_{n=0}^{N-1} a[n]^2 . \]
Inserting the above equations into (26) and expanding it yields, after some manipulations,
\[ \hat{\psi}_1^{(c)} \approx \arg\max_{\psi_1} \frac{2}{\sum_{n=0}^{N-1} a[n]^2} \Bigl| \sum_{n=0}^{N-1} a[n]\, x[n] \exp(-j2\pi\psi_1 n) \Bigr|^2 \triangleq \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} a[n]\, x[n] \exp(-j2\pi\psi_1 n) \Bigr| . \tag{27} \]
Furthermore, it is straightforward to show that
\[ \hat{\phi}_1^{(c)} \approx -\arctan \frac{\sum_{n=0}^{N-1} a[n]\, x[n] \sin\bigl(2\pi\hat{\psi}_1^{(c)} n\bigr)}{\sum_{n=0}^{N-1} a[n]\, x[n] \cos\bigl(2\pi\hat{\psi}_1^{(c)} n\bigr)} . \tag{28} \]
Eqs. (27) and (28) can be nicely interpreted. In principle, the estimators are very similar to the standard DFT-based estimators (5) and (6), respectively. However, here the data are weighted according to the amplitude modulating function. This makes sense because samples where |a[n]| takes on high values have higher SNR than samples where |a[n]| is low. The estimators (27) and (28) can be implemented computationally efficiently with the FFT algorithm, because the weighting can be applied directly to the data before the estimator is applied. A further comparison of all the derived results will be given next. Note that similar results are obtained for complex data; in that case, however, neither the assumption that ψ1 is not close to the boundaries nor the used large-sample approximations are necessary.
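A minimal sketch of the weighted estimators (27) and (28) for real data follows; the zero-padding factor is an assumption, and the coarse grid search stands in for whatever maximization strategy is actually used.

```python
import numpy as np

def weighted_dft_estimator(x, a, zp_factor=8):
    """Known-envelope estimates of psi_1 and phi_1, cf. (27) and (28), real data."""
    N = len(x)
    M = zp_factor * N
    X = np.fft.fft(a * x, M)                 # weight the data, then zero-padded DFT
    k = np.argmax(np.abs(X[:M // 2]))        # search 0 < psi_1 < 1/2 only
    psi_hat = k / M
    n = np.arange(N)
    num = np.sum(a * x * np.sin(2 * np.pi * psi_hat * n))
    den = np.sum(a * x * np.cos(2 * np.pi * psi_hat * n))
    phi_hat = -np.arctan2(num, den)          # four-quadrant version of (28)
    return psi_hat, phi_hat
```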
2.5 Comparison, Interpretations
For brevity, we concentrate on the frequency here, but the discussion is equally valid for the phase. The fact that the ACRBs for known and unknown modulating functions are equal is somewhat unexpected, and the estimators (3) and (27) also seem to be different at first glance. However, considering (19), we can rewrite the estimator (3) as follows:
\[ \hat{\psi}_1^{(a)} = \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} \tilde{x}^2[n] \exp(-j4\pi\psi_1 n) \Bigr| = \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} \tilde{x}[n]\, \tilde{x}[n] \exp(-j2\pi\psi_1 n) \exp(-j2\pi\psi_1 n) \Bigr| \approx \arg\max_{\psi_1} \Bigl| \sum_{n=0}^{N-1} \tilde{x}[n]\, \hat{a}(n,\psi_1) \exp(-j2\pi\psi_1 n) \Bigr| \]
for large enough SNR so that ψ̂1^(a) is near ψ1. The exponential term containing the constant phase is eliminated by the magnitude. Hence, the estimator (3) is in fact very similar to (27), but uses an estimate of the modulating function for weighting the data according to the SNR of each individual sample. This also explains the interesting phenomenon that, assuming a[n] ≥ 0, for relatively low SNRs the estimator (3) often performs in fact poorer than (5), which completely neglects the modulation. For low SNRs, the estimate of the amplitude modulating function that is built into (3) performs very poorly and hence deteriorates the estimation result. It is furthermore interesting to compare this to a result given by Stoica and Besson [12], [4] for a[n] being a stationary Gaussian random process:
\[ \operatorname{var}\{\hat{\psi}_1\} \approx \frac{6}{(2\pi)^2 N^3 \eta} \Bigl( 1 + \frac{1}{2\eta} \Bigr) \tag{29} \]
asymptotically, where the SNR η = ra[0]/σ² and
\[ r_a[0] = \mathrm{E}\bigl\{ a[n]^2 \bigr\} . \]
The second term in brackets of (29) causes exactly the same behaviour as for the case of purely deterministic modulation. It is negligible for high SNRs, that is, it makes no difference whether or not the modulating function is known. On the other hand, it significantly increases the variance for low SNRs. Hence, there is a trade-off in choosing the estimator (3) or (5) depending on the available SNR, similarly to the case of multiplicative noise as described in [1]. The situation is completely different if the modulating function is known. Although a strict analytical proof has not been found, simulation results indicate that
\[ \operatorname{var}\bigl\{\hat{\psi}_1^{(c)}\bigr\} \le \operatorname{var}\bigl\{\hat{\psi}_1^{(a)}\bigr\} \quad\text{and}\quad \operatorname{var}\bigl\{\hat{\psi}_1^{(c)}\bigr\} \le \operatorname{var}\bigl\{\hat{\psi}_1^{(b)}\bigr\} , \]
regardless of a[n] and the SNR, as expected.
One strength of the approach used here, namely not specifying any particular model for the envelope when deriving the asymptotic variances and bounds, is that the asymptotic variances for a particular envelope model can be obtained quickly by simply inserting a[n] into the derived formulas.
3. SIMULATION RESULTS
For the simulations, we used the SNR definition 10 log10(1/σ²), complex data, 1000 trials, and sufficient zero-padding to avoid discretization effects of the frequency axis. Fig. 1 shows the two modulating functions that have been used for the simulations; modulating functions like modulating function 1 typically occur, e.g., in wide-bandwidth linear frequency modulated continuous wave radar signal processing applications. As can be seen in Fig. 2, the asymptotic estimation variances of the estimators (3), (5), and (27) are quite similar, but at low SNRs it is advantageous here to use the standard DFT-based estimators. In Fig. 3, the modulating function has been chosen to be an exponentially decaying function a[n] = exp(−0.02n). For high SNRs, the resulting RMSEs of the estimators that account for the unknown modulating function achieve the asymptotic CRLB, being MLEs under the AWGN assumption. The RMSE of the standard DFT-based estimators is much higher. In any case, if the modulating function is known, the derived estimators that account for it perform best, as expected.
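A stripped-down Monte Carlo loop for the standard estimator (5) in this setting is sketched below. The SNR grid, the zero-padding factor, and the seed are assumptions for illustration; the sketch is not intended to reproduce the figures exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, psi1, phi1, trials = 512, 0.125, 0.0, 1000
n = np.arange(N)
a = np.exp(-0.02 * n)                               # decaying envelope as in Fig. 3

for snr_db in (0, 10, 20, 30):
    sigma = 10.0 ** (-snr_db / 20)                  # SNR = 10*log10(1/sigma^2)
    err = np.empty(trials)
    for t in range(trials):
        w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * sigma / np.sqrt(2)
        x = a * np.exp(1j * (2 * np.pi * psi1 * n + phi1)) + w
        X = np.fft.fft(x, 16 * N)                   # estimator (5) with zero-padding
        err[t] = np.argmax(np.abs(X)) / (16 * N) - psi1
    print(f"{snr_db} dB  frequency RMSE: {np.sqrt(np.mean(err**2)):.2e}")
```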
4. CONCLUSION
In this paper, asymptotic performance bounds for the sinusoidal parameter estimation problem with deterministic amplitude modulation have been derived. It has been shown that the bounds are equal regardless of whether the modulating function is known or not. Furthermore, the NLS estimator for the case of a known modulating function has been derived, together with the asymptotic variances of the standard DFT-based estimators for amplitude-modulated data.
REFERENCES
[1] M. Ghogho, A. Swami, and A. K. Nandi, "Non-linear least squares estimation for harmonics in multiplicative and additive noise," Signal Processing, no. 78, pp. 43–60, 1999.
[2] O. Besson and P. Stoica, "Sinusoidal Signals with Random Amplitude: Least-Squares Estimators and Their Statistical Analysis," IEEE Trans. Signal Processing, vol. 43, no. 11, pp. 2733–2744, Nov. 1995.
[3] J. M. Francos and B. Friedlander, "Bounds for Estimation of Multicomponent Signals with Random Amplitude and Deterministic Phase," IEEE Trans. Signal Processing, vol. 43, no. 5, pp. 1161–1172, May 1995.
[4] O. Besson and P. Stoica, "Nonlinear least-squares frequency estimation and detection for sinusoidal signals with arbitrary envelope," Tech. Rep. 97 093R, Uppsala University, Aug. 1997.
[5] M. Ghogho, A. Swami, and T. S. Durrani, "Frequency estimation in the presence of Doppler spread: Performance analysis," IEEE Trans. Signal Processing, vol. 49, no. 4, pp. 777–789, Apr. 2001.
[6] P. Stoica, R. L. Moses, B. Friedlander, and T. Söderström, "Maximum Likelihood Estimation of the Parameters of Multiple Sinusoids from Noisy Measurements," IEEE Trans. Signal Processing, vol. 37, no. 3, pp. 378–392, Mar. 1989.
[7] E. L. Lehmann, Elements of Large-Sample Theory. New York, NY: Wiley Interscience, 2004.
[8] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice Hall, 1993.
[9] C. R. Rao, Linear Statistical Inference and Its Applications, 2nd ed. New York, NY: Wiley Interscience, 2002.
[10] P. Stoica and T. L. Marzetta, "Parameter Estimation Problems with Singular Information Matrices," IEEE Trans. Signal Processing, vol. 49, no. 1, pp. 87–90, Jan. 2001.
[11] S. Schuster, S. Scheiblhofer, R. Feger, and A. Stelzer, "Phase Estimation Mean Square Error and Threshold Effects for Windowed and Amplitude Modulated Sinusoidal Signals," in Proc. 14th European Signal Processing Conference (EUSIPCO), Florence, Italy, Sept. 2006.
[12] O. Besson and P. Stoica, "Frequency Estimation and Detection for Sinusoidal Signals with Arbitrary Envelope: A Nonlinear Least-Squares Approach," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP '98), vol. 4, pp. 2209–2212, May 1998.
Figure 1: The different modulating functions used.
Figure 2: Monte Carlo simulation of the resultant frequency estimation RMSE for a sinusoid with amplitude modulation (modulating function 1). N = 512, ψ1 = 0.125, and φ1 = 0 have been used.
Figure 3: Monte Carlo simulation of the resultant frequency estimation RMSE for a sinusoid with amplitude modulation (modulating function 2). N = 512, ψ1 = 0.125, and φ1 = 0 have been used.