Digital Signal Processing (ET 4235)
Chapter 4: Signal Modeling

Given a signal (set of samples), how can it be modeled using a filter?

[Figure: original image (bmp, 780 kB) versus parametrized model (jpg 10%, 5 kB), and the resulting error.]
Signal Modeling: Motivation

Motivation 1: Efficient transmission/storage

[Figure: example signals versus time [n].]

Direct: store all samples.

Coded:
• Estimate the parameters of the signal,
• Model the signal, e.g., as a sum of sinusoids: $x[n] \approx \sum_{i=1}^{P} A_i \sin(\omega_i n + \phi_i)$,
• Store the parameters instead of the original samples.

Example: GSM speech coding, MP3 audio coding, JPEG image coding, ...
Motivation

Motivation 2: Interpolation/extrapolation

Interpolation/extrapolation requires a model, e.g., bandlimited/lowpass, sum of sinusoids, etc.

[Figure: example signals versus time [n].]
Motivation

Extrapolation of Lena

Horizontal extrapolation:

[Figure: Lena image and its horizontal extrapolation.]

(In fact, there are stationarity requirements that would prohibit this…)
Motivation

Example: ionospheric calibration

[Figure: LOFAR stations observing through a time-varying ionospheric phase screen; objects in the field of view see different ionospheric phase and gain; the station beamformers combine the antenna signals x1(t), …, xJ(t) over the full array aperture, with geometric delays.]

The ionosphere causes small delays in the reception of signals that modify the apparent direction of astronomical sources.

Low frequency radio telescopes can 'sample' the ionosphere in the direction of calibration sources. For other directions, we rely on interpolation.

A simple ionospheric model specifies correlations in phase delay as a function of the distance between points.
Motivation

Example: Interpolation of correlated variables

Suppose we have samples of random variables $(x_i, y_i)$ that are correlated. Given a new sample $x$, can we predict the corresponding $y$?

Pose the model $y \approx a x$, and estimate $a$ by minimizing $\sum_i |y_i - a x_i|^2$.

If we stack the measured samples in vectors $\mathbf{x}$ and $\mathbf{y}$, then we obtain the estimate

$$\hat{a} = (\mathbf{x}^H \mathbf{x})^{-1} \mathbf{x}^H \mathbf{y}.$$

This is the solution of the Least Squares problem $\min_a \|\mathbf{y} - a\mathbf{x}\|^2$ (cf. the Wiener filter in Ch. 7).
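A minimal numerical sketch of this prediction step, in Python/NumPy. The synthetic data, the true slope 2.0, and all variable names are invented here for illustration; they are not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 2.0 * x + 0.1 * rng.standard_normal(100)   # correlated pair (true slope 2)

# Least-squares estimate: a_hat = (x^H x)^{-1} x^H y
a_hat = np.dot(x, y) / np.dot(x, x)

x_new = 0.7
y_pred = a_hat * x_new        # predict the y corresponding to a new sample
print(a_hat, y_pred)
```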
Motivation

Motivation 3: Filter design

Design a filter that approximates a desired response (e.g., ideal lowpass).

[Figure: impulse response h[n] and magnitude response |H(ω)| of a length N = 20 design, versus n and ω/π.]

The design depends on the filter model (FIR, IIR), filter order, error criteria, etc.
Signal models

Signal models

A signal model consists of a standard input (an impulse or known signal for deterministic models, white noise for stochastic models) driving a signal model (filter); its output is the modeled signal, which should match the observations.

Models for $H(z)$: ARMA($p,q$):

$$H(z) = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}$$

Special cases: AR($p$), MA($q$).
Signal models

Model identification

Given observations $x[n]$, $n = 0, \ldots, N-1$, and the filter order $(p, q)$, find the parameters of $H(z)$ such that the modeled output signal $\hat{x}[n]$ best matches the observations.

Issues:
• computational complexity for parameter estimation
• model order selection
• error criterion for the approximation
• stability of the filter (in case $p > 0$)

For stochastic signals, we will try to match the correlation sequences.
Model identification via Least Squares

In the next slides, we consider deterministic input signals (impulse $\delta[n]$). The model is a filter $H(z) = B(z)/A(z)$: LTI, causal, rational. The desired signal is $x[n]$, with error signal $e[n] = x[n] - \hat{x}[n]$.

"Minimize the error" depends on the definition of error, and the norm.

Least squares: minimize

$$\mathcal{E} = \sum_{n=0}^{\infty} |e[n]|^2, \qquad e[n] = x[n] - \hat{x}[n].$$

The minimization of $\mathcal{E}$ requires $\partial \mathcal{E}/\partial a_k = 0$ and $\partial \mathcal{E}/\partial b_k = 0$. The resulting equations are nonlinear because of the division by $A(z)$.
Model identification via Least Squares

Alternative that leads to tractable results: consider the "weighted" (modified) error signal

$$E'(z) = A(z)E(z) = A(z)X(z) - B(z), \qquad e'[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k] - b_n.$$

Now $B(z)$ is in the numerator, so the error equation is linear in the parameters.

Techniques that are based on this: Padé approximation, Prony's method, Shanks' method.
Model identification via Padé Approximation

Padé approximation

We have $p + q + 1$ model parameters ($a_1, \ldots, a_p$ and $b_0, \ldots, b_q$): we can match $p + q + 1$ signal samples exactly, $\hat{x}[n] = x[n]$ for $n = 0, \ldots, p+q$.

How to find $\hat{x}[n]$ in terms of the parameters: $A(z)\hat{X}(z) = B(z)$, or

$$\hat{x}[n] + \sum_{k=1}^{p} a_k \hat{x}[n-k] = \begin{cases} b_n, & 0 \le n \le q \\ 0, & n > q \end{cases}$$

Match exactly with $x[n]$ for $n = 0, \ldots, p+q$:

$$x[n] + \sum_{k=1}^{p} a_k x[n-k] = \begin{cases} b_n, & 0 \le n \le q \\ 0, & q < n \le p+q \end{cases}$$
Model identification via Padé Approximation

Write these equations in matrix form:

$$\begin{bmatrix} x[0] & 0 & \cdots & 0 \\ x[1] & x[0] & \cdots & 0 \\ \vdots & & & \vdots \\ x[q] & x[q-1] & \cdots & x[q-p] \\ x[q+1] & x[q] & \cdots & x[q-p+1] \\ \vdots & & & \vdots \\ x[q+p] & x[q+p-1] & \cdots & x[q] \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} b_0 \\ \vdots \\ b_q \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

Take the submatrix that does not involve the $b_n$, and first solve for the $a_k$:

$$\mathbf{X}_q \mathbf{a} = -\mathbf{x}_{q+1}, \qquad \mathbf{X}_q = \begin{bmatrix} x[q] & \cdots & x[q-p+1] \\ \vdots & & \vdots \\ x[q+p-1] & \cdots & x[q] \end{bmatrix}, \qquad \mathbf{x}_{q+1} = \begin{bmatrix} x[q+1] \\ \vdots \\ x[q+p] \end{bmatrix}$$

This is a square $p \times p$ matrix equation. The solution is $\mathbf{a} = -\mathbf{X}_q^{-1} \mathbf{x}_{q+1}$.
Model identification via Padé Approximation

Plug the solution back to find the $b_n$:

$$b_n = x[n] + \sum_{k=1}^{p} a_k x[n-k], \qquad 0 \le n \le q.$$

Disadvantages of the Padé approximation:
• The resulting model doesn't have to be stable.
• Outside the used interval $0 \le n \le p+q$, the approximation may be very bad.
• If $\mathbf{X}_q$ is singular, then a solution does not always exist that matches all signal values. (Sometimes a solution exists of lower order for $A(z)$, i.e., smaller $p$, but then no guarantee on matching for samples beyond $p+q$.)
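The two Padé steps above (square solve for the denominator, then back-substitution for the numerator) can be sketched in a few lines of Python/NumPy. The helper name pade and the zero convention for $x[n]$, $n < 0$, are assumptions for illustration, not code from the course.

```python
import numpy as np

def pade(x, p, q):
    """Match x[0..p+q] exactly with an ARMA(p,q) model; return (a, b)."""
    x = np.asarray(x, dtype=float)
    # Square system from samples n = q+1 .. q+p:  Xq @ a = -x[q+1:q+p+1]
    Xq = np.array([[x[q + i - k] if q + i - k >= 0 else 0.0
                    for k in range(p)] for i in range(p)])
    a_tail = np.linalg.solve(Xq, -x[q + 1:q + p + 1]) if p > 0 else np.array([])
    a = np.concatenate(([1.0], a_tail))        # a = [1, a_1, ..., a_p]
    # Numerator from samples n = 0 .. q:  b_n = sum_k a_k x[n-k]
    b = np.array([sum(a[k] * x[n - k] for k in range(min(n, p) + 1))
                  for n in range(q + 1)])
    return a, b
```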
Model identification via Prony's Method

Derivation of Prony's Method

As before, the modified error is

$$e'[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k] - b_n, \qquad b_n = 0 \text{ for } n > q,$$

i.e., for $n > q$: $e'[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k]$.

In matrix form (rows $n = q+1, q+2, \ldots$):

$$\mathbf{e}' = \mathbf{x}_{q+1} + \mathbf{X}_q \mathbf{a}, \qquad \mathbf{X}_q = \begin{bmatrix} x[q] & \cdots & x[q-p+1] \\ x[q+1] & \cdots & x[q-p+2] \\ \vdots & & \vdots \end{bmatrix}, \qquad \mathbf{x}_{q+1} = \begin{bmatrix} x[q+1] \\ x[q+2] \\ \vdots \end{bmatrix}$$

For Padé, we first solved $e'[n] = 0$ for $n = q+1, \ldots, q+p$ to find $\mathbf{a}$.

Prony: solve $\displaystyle \min_{\mathbf{a}} \sum_{n=q+1}^{\infty} |e'[n]|^2$.
Model identification via Prony's Method

In matrix form:

$$\min_{\mathbf{a}} \left\| \mathbf{x}_{q+1} + \mathbf{X}_q \mathbf{a} \right\|^2$$

(Now, $\mathbf{X}_q$ refers to the infinite-dimensional matrix.)

This is a Least-Squares problem of an overdetermined system of equations. The solution is

$$\mathbf{a} = -(\mathbf{X}_q^H \mathbf{X}_q)^{-1} \mathbf{X}_q^H \mathbf{x}_{q+1}.$$

This assumes that $\mathbf{X}_q^H \mathbf{X}_q$ is not singular. In practice, $\mathbf{X}_q$ and $\mathbf{x}_{q+1}$ are of finite size as determined by the available data.
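A sketch of this least-squares step in Python/NumPy, under the same conventions as the pade() sketch earlier: the finite data record x[0..N-1] plays the role of the (truncated) infinite matrix. The helper name prony is an assumption for illustration.

```python
import numpy as np

def prony(x, p, q):
    """Prony's method: LS fit of the denominator, Pade-style numerator."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Overdetermined system: rows n = q+1 .. N-1,  Xq @ a ~ -x[q+1:]
    Xq = np.array([[x[n - 1 - k] if n - 1 - k >= 0 else 0.0
                    for k in range(p)] for n in range(q + 1, N)])
    a_tail, *_ = np.linalg.lstsq(Xq, -x[q + 1:], rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # Numerator exactly as in Pade: b_n = sum_k a_k x[n-k], n = 0..q
    b = np.array([sum(a[k] * x[n - k] for k in range(min(n, p) + 1))
                  for n in range(q + 1)])
    return a, b
```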
Model identification via Prony's Method

The matrix $\mathbf{R} = \mathbf{X}_q^H \mathbf{X}_q$ is positive (semi)definite by construction. We will see later that this makes $A(z)$ (marginally) stable. If $\mathbf{R}$ is singular, then this indicates that the filter order can be reduced.

After $\mathbf{a}$ is known, we find the $b_n$ (i.e., $B(z)$) precisely as in Padé's method:

$$b_n = x[n] + \sum_{k=1}^{p} a_k x[n-k], \qquad 0 \le n \le q.$$

This makes the modified error zero for $0 \le n \le q$.

Alternatively, we can find the numerator by minimizing the error over the entire data record. For the error criterion based on $e'[n]$, this does not make a difference. Shanks' method switches back to $e[n]$ and minimizes over the entire data. A code sketch of this idea follows.
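A sketch of the Shanks step just mentioned: keep the denominator $\mathbf{a}$ from Prony's method, but fit the numerator by least squares on the original error $e[n]$ over the whole record. The helper name shanks_numerator is an assumption, and scipy is assumed to be available for the filtering step.

```python
import numpy as np
from scipy.signal import lfilter

def shanks_numerator(x, a, q):
    """Fit b (length q+1) so that sum_l b[l] g[n-l] ~ x[n], g = imp. resp. of 1/A(z)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    delta = np.zeros(N); delta[0] = 1.0
    g = lfilter([1.0], a, delta)              # impulse response of 1/A(z)
    # Convolution matrix: column l is g delayed by l samples
    G = np.column_stack([np.concatenate((np.zeros(l), g[:N - l]))
                         for l in range(q + 1)])
    b, *_ = np.linalg.lstsq(G, x, rcond=None)
    return b
```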
Example: filter design

Suppose we want to design an ideal linear phase lowpass filter:

$$H_d(e^{j\omega}) = \begin{cases} e^{-j n_0 \omega}, & |\omega| \le \omega_c \\ 0, & \text{otherwise} \end{cases}$$

where $n_0$ is the filter delay. The corresponding impulse response is

$$h_d[n] = \frac{\sin(\omega_c (n - n_0))}{\pi (n - n_0)}.$$

We will match $p + q + 1 = 11$ values of $h_d[n]$, and choose the delay $n_0$ accordingly.

[Figure: ideal and truncated filter impulse response versus n.]

Two cases: FIR filter ($p = 0$, $q = 10$), ARMA filter ($p = q = 5$).
Example: filter design

Padé approximation

Filter coefficients to match: $h_d[n]$ for $0 \le n \le p + q$.

FIR filter ($p = 0$, $q = 10$): the numerator coefficients are simply $b_n = h_d[n]$, $0 \le n \le 10$.

ARMA filter ($p = q = 5$): solving the Padé equations for $\mathbf{a}$ gives the denominator coefficients, and subsequently the numerator coefficients are found as $b_n = h_d[n] + \sum_{k=1}^{5} a_k h_d[n-k]$, $0 \le n \le 5$.

(Use Matlab to find the design.)
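In the same spirit (here in Python rather than Matlab), the two designs can be obtained with the pade() sketch from earlier. The cutoff $\omega_c = \pi/2$ and delay $n_0 = 5$ are assumptions for illustration; the slides do not state them explicitly.

```python
import numpy as np

wc, n0 = np.pi / 2, 5                          # assumed cutoff and delay
n = np.arange(0, 11)                           # p + q + 1 = 11 samples to match
hd = (wc / np.pi) * np.sinc((n - n0) * wc / np.pi)   # sin(wc(n-n0)) / (pi(n-n0))

a_fir, b_fir = pade(hd, p=0, q=10)             # FIR design: numerator only
a_arma, b_arma = pade(hd, p=5, q=5)            # ARMA(5,5) Pade design
```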
Example: filter design

[Figure: ideal filter impulse response versus the design p=0, q=10 and their difference; ideal filter impulse response versus the design p=q=5 and their difference (all versus n).]

The ARMA(5,5) filter gives a larger error outside the specified interval. Also, this filter is not linear phase (no symmetry).
Example: filter design

[Figure: "Filter design using Padé approximation". Magnitude [dB] versus ω for the ideal filter and the designs p=0, q=10 and p=q=5.]

The ARMA(5,5) filter does not have a good frequency response in the passband.
Example: filter design

[Figure: "Filter design using Padé and Prony approximation". Magnitude [dB] versus ω for the ideal filter, the Padé and Prony p=q=5 designs, and a 5th-order elliptic filter; plus the Prony p=q=5 impulse response and its difference with the ideal response.]

The ARMA(5,5) design using Prony's method is much better than the Padé approximation, since the denominator is designed to minimize the error over the entire domain.

A specialized design (elliptic filter of 5th order) can still be better, with full control over the passband error and stopband attenuation.
Model identification via Prony's Method

Alternative writing of the equations

Recall: The solution is $\mathbf{a} = -(\mathbf{X}_q^H \mathbf{X}_q)^{-1} \mathbf{X}_q^H \mathbf{x}_{q+1}$, with

$$\mathbf{X}_q = \begin{bmatrix} x[q] & x[q-1] & \cdots & x[q-p+1] \\ x[q+1] & x[q] & \cdots & x[q-p+2] \\ \vdots & & & \vdots \end{bmatrix}, \qquad \mathbf{x}_{q+1} = \begin{bmatrix} x[q+1] \\ x[q+2] \\ \vdots \end{bmatrix}$$

Thus, the entries of $\mathbf{X}_q^H \mathbf{X}_q$ are

$$r_x(k, l) = \sum_{n=q+1}^{\infty} x[n-l]\, x^*[n-k], \qquad k, l = 1, \ldots, p,$$

and the entries of $\mathbf{X}_q^H \mathbf{x}_{q+1}$ can be written as $r_x(k, 0)$, $k = 1, \ldots, p$.

The $r_x(k, l)$ are considered autocorrelations. In a stochastic context (discussed later), we arrive at very similar equations.
Model identification via Prony's Method

These equations can also be summarized as

$$\sum_{l=1}^{p} a_l\, r_x(k, l) = -r_x(k, 0), \qquad k = 1, \ldots, p.$$

These are called the Prony normal equations.

Yet another writing form:

$$\begin{bmatrix} r_x(1,1) & r_x(1,2) & \cdots & r_x(1,p) \\ r_x(2,1) & r_x(2,2) & \cdots & r_x(2,p) \\ \vdots & & & \vdots \\ r_x(p,1) & r_x(p,2) & \cdots & r_x(p,p) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = - \begin{bmatrix} r_x(1,0) \\ r_x(2,0) \\ \vdots \\ r_x(p,0) \end{bmatrix}$$
Model identification via Prony's Method

Orthogonality principle

Consider the Least Squares problem $\min_{\mathbf{a}} \|\mathbf{x}_{q+1} + \mathbf{X}_q \mathbf{a}\|^2$. After solving, the error vector is $\mathbf{e}' = \mathbf{x}_{q+1} + \mathbf{X}_q \mathbf{a}$.

The orthogonality principle states, in general, that the error vector for the optimal solution is orthogonal to all the columns of $\mathbf{X}_q$, i.e., $\mathbf{X}_q^H \mathbf{e}' = \mathbf{0}$, $\mathbf{e}' \perp \text{colspan}(\mathbf{X}_q)$. If it was not the case, we could take a linear combination of these columns to reduce the error!
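A small numerical check of the orthogonality principle, using an arbitrary overdetermined system (the data here is invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
Xq = rng.standard_normal((20, 3))
xq1 = rng.standard_normal(20)

a, *_ = np.linalg.lstsq(Xq, -xq1, rcond=None)
e = xq1 + Xq @ a                 # error vector of the optimal solution
print(Xq.T @ e)                  # ~ 0: e is orthogonal to the columns of Xq
```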
Model identification via Prony's Method

The minimum error

At the minimum, the total error is (due to the orthogonality principle)

$$\mathcal{E}_{\min} = \mathbf{e}'^H \mathbf{e}' = \mathbf{x}_{q+1}^H (\mathbf{x}_{q+1} + \mathbf{X}_q \mathbf{a}).$$

This can further be written as

$$\mathcal{E}_{\min} = \mathbf{x}_{q+1}^H \mathbf{x}_{q+1} + \mathbf{x}_{q+1}^H \mathbf{X}_q \mathbf{a}.$$

In terms of the autocorrelation sequence, this can be written as

$$\mathcal{E}_{\min} = r_x(0, 0) + \sum_{k=1}^{p} a_k\, r_x(0, k).$$
Model identification via Prony's Method

This can be combined with the normal equations as

$$\begin{bmatrix} r_x(0,0) & r_x(0,1) & \cdots & r_x(0,p) \\ r_x(1,0) & r_x(1,1) & \cdots & r_x(1,p) \\ \vdots & & & \vdots \\ r_x(p,0) & r_x(p,1) & \cdots & r_x(p,p) \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} \mathcal{E}_{\min} \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

or also as

$$\mathbf{R}_x \begin{bmatrix} 1 \\ \mathbf{a} \end{bmatrix} = \mathcal{E}_{\min}\, \mathbf{u}_1,$$

where $\mathbf{u}_1 = [1, 0, \ldots, 0]^T$ is a unit vector. These are the augmented normal equations.
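A numerical check of the augmented normal equations, reusing the finite-data convention of the prony() sketch (the infinite sums are truncated at the end of the record). The helper name check_augmented is an assumption for illustration.

```python
import numpy as np

def check_augmented(x, p, q):
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Columns x[n-k], k = 0..p, for rows n = q+1 .. N-1
    Xe = np.array([[x[n - k] if n - k >= 0 else 0.0
                    for k in range(p + 1)] for n in range(q + 1, N)])
    R = Xe.T @ Xe                                    # entries r_x(k, l)
    a, *_ = np.linalg.lstsq(Xe[:, 1:], -Xe[:, 0], rcond=None)
    a = np.concatenate(([1.0], a))
    print(R @ a)        # ~ [E_min, 0, ..., 0]: only the first entry is nonzero
```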
All-pole modeling

Special case: all-pole modeling

If $q = 0$, then we have an all-pole model:

$$H(z) = \frac{b_0}{1 + \sum_{k=1}^{p} a_k z^{-k}} = \frac{b_0}{A(z)}.$$

• In some cases, this is an accurate physical model (e.g., speech).
• Even if it is not a valid physical model, it is attractive because it leads to fast computational algorithms to find the $a_k$ (the Levinson algorithm).

Recall that the solution is given by solving the overdetermined system (rows $n \ge 1$):

$$\begin{bmatrix} x[0] & 0 & \cdots & 0 \\ x[1] & x[0] & \cdots & 0 \\ x[2] & x[1] & \cdots & 0 \\ \vdots & & & \vdots \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} \approx - \begin{bmatrix} x[1] \\ x[2] \\ x[3] \\ \vdots \end{bmatrix}$$
All-pole modeling

The normal equations become (premultiply with $\mathbf{X}_0^H$)

$$\mathbf{X}_0^H \mathbf{X}_0\, \mathbf{a} = -\mathbf{X}_0^H \mathbf{x}_1,$$

where the autocorrelation sequence is in fact "stationary":

$$r_x(k, l) = \sum_{n=0}^{\infty} x[n-l]\, x^*[n-k] = r_x(k - l), \qquad r_x(k) = \sum_{n=0}^{\infty} x[n]\, x^*[n-k].$$

From this structure it is seen that the normal equations become

$$\begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p-1) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-2) \\ \vdots & & & \vdots \\ r_x(p-1) & r_x(p-2) & \cdots & r_x(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = - \begin{bmatrix} r_x(1) \\ r_x(2) \\ \vdots \\ r_x(p) \end{bmatrix}$$
All-pole modeling

For an all-pole model, the matrix $\mathbf{R}_x$ has a Toeplitz structure (constant along diagonals); it can be efficiently inverted using the Levinson algorithm (later in Ch. 5).

The numerator will be chosen as $b_0 = \sqrt{\mathcal{E}_{\min}}$. It can be shown that then $r_h(k) = r_x(k)$ for $|k| \le p$.

Meaning: the autocorrelation sequence of the filter matches the first lags ($|k| \le p$) of the autocorrelation sequence of the specified data ("moment matching").
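A sketch of the Levinson-Durbin recursion referred to here (covered in Ch. 5); it solves the Toeplitz normal equations in $O(p^2)$ operations instead of $O(p^3)$ for a general solver. The function name and the assumption of real-valued autocorrelations are mine, for illustration.

```python
import numpy as np

def levinson_durbin(r, p):
    """Given real autocorrelations r[0..p], return a = [1, a_1, ..., a_p]
    and the modelling error eps."""
    r = np.asarray(r, dtype=float)
    a = np.array([1.0])
    eps = r[0]
    for j in range(1, p + 1):
        # reflection coefficient gamma_j
        gamma = -(r[1:j + 1][::-1] @ a) / eps
        a_pad = np.concatenate((a, [0.0]))
        a = a_pad + gamma * a_pad[::-1]       # order-update of the coefficients
        eps *= (1.0 - gamma ** 2)             # error update
    return a, eps
```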
All-pole modeling with finite data

Suppose we have only $N$ samples, $x[0], \ldots, x[N-1]$. What changes?

Autocorrelation method

Here, the entire data set is used, and extended with zeros:

$$\mathbf{X}_0 = \begin{bmatrix} x[0] & 0 & \cdots & 0 \\ x[1] & x[0] & \cdots & 0 \\ \vdots & & & \vdots \\ x[N-1] & x[N-2] & \cdots & x[N-p] \\ 0 & x[N-1] & \cdots & x[N-p+1] \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & x[N-1] \end{bmatrix}$$

We will have the same normal equations as before, but with a different $r_x(k)$:

$$r_x(k) = \sum_{n=k}^{N-1} x[n]\, x^*[n-k].$$
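A sketch of the autocorrelation method for real data, estimating the lags from the zero-extended record and solving the Toeplitz normal equations with the levinson_durbin() sketch above. The function name and the choice b0 = sqrt(eps) (the moment-matching numerator from the previous slide) are assumptions for illustration.

```python
import numpy as np

def autocorrelation_method(x, p):
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Biased autocorrelation estimates r_x(0..p) of the zero-extended data
    r = np.array([np.dot(x[k:], x[:N - k]) for k in range(p + 1)])
    a, eps = levinson_durbin(r, p)
    b0 = np.sqrt(eps)            # numerator choice discussed on the previous slide
    return a, b0
```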
All-pole modeling with finite data

Covariance method

If we know that $x[n]$ is not zero for $n < 0$ and $n \ge N$, it is more accurate to omit those equations that contain an extension with zeros:

$$\begin{bmatrix} x[p-1] & x[p-2] & \cdots & x[0] \\ x[p] & x[p-1] & \cdots & x[1] \\ \vdots & & & \vdots \\ x[N-2] & x[N-3] & \cdots & x[N-p-1] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} \approx - \begin{bmatrix} x[p] \\ x[p+1] \\ \vdots \\ x[N-1] \end{bmatrix}$$

We will have slightly different normal equations as before, with $r_x(k, l)$ defined as

$$r_x(k, l) = \sum_{n=p}^{N-1} x[n-l]\, x^*[n-k].$$

The Toeplitz property is lost.
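A corresponding sketch of the covariance method: only the equations for $n = p, \ldots, N-1$ are kept (no zero extension), and the overdetermined system is solved by least squares. The function name is an assumption for illustration.

```python
import numpy as np

def covariance_method(x, p):
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Rows n = p..N-1, columns x[n-1], ..., x[n-p]; no zero padding needed
    X = np.array([[x[n - 1 - k] for k in range(p)] for n in range(p, N)])
    a_tail, *_ = np.linalg.lstsq(X, -x[p:], rcond=None)
    return np.concatenate(([1.0], a_tail))
```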
Example: pole estimation

Consider the finite data sequence $x[0], \ldots, x[N-1]$. Estimate an AR(1) model for this signal ($p = 1$, $q = 0$):

Autocorrelation method

The normal equations collapse to $r_x(0)\, a_1 = -r_x(1)$, where

$$r_x(0) = \sum_{n=0}^{N-1} |x[n]|^2, \qquad r_x(1) = \sum_{n=1}^{N-1} x[n]\, x^*[n-1].$$
Example: pole estimation

Covariance method

The normal equations collapse to $r_x(1,1)\, a_1 = -r_x(1,0)$, where

$$r_x(1,1) = \sum_{n=1}^{N-1} |x[n-1]|^2, \qquad r_x(1,0) = \sum_{n=1}^{N-1} x[n]\, x^*[n-1].$$

Thus,

$$a_1 = -\frac{\sum_{n=1}^{N-1} x[n]\, x^*[n-1]}{\sum_{n=1}^{N-1} |x[n-1]|^2}.$$

For the data sequence of this example, the pole location is found exactly.
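The two estimators can be compared numerically on a hypothetical decaying record $x[n] = \beta^n$, $n = 0, \ldots, N-1$ (the slide's actual data sequence is not shown, so this is only an illustration of the effect).

```python
import numpy as np

beta, N = 0.8, 20
x = beta ** np.arange(N)

# Autocorrelation method: a1 = -r(1)/r(0) with zero-extended data
r0 = np.dot(x, x)
r1 = np.dot(x[1:], x[:-1])
a1_autocorr = -r1 / r0

# Covariance method: a1 = -sum x[n]x[n-1] / sum x[n-1]^2, n = 1..N-1
a1_cov = -np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

print(a1_autocorr, a1_cov)    # for this signal, the covariance estimate is exactly -beta
```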
Example: channel inversion

Consider a communication channel with a known (estimated) transfer function $G(z)$. At the receiver, we wish to equalize (invert) the channel using an equalizer $H(z)$:

$$H(z)\, G(z) \approx 1.$$

Further, $H(z)$ must be causal and stable, typically FIR. If $G(z)$ is not minimum-phase (has zeros outside the unit circle), causal inversion is not possible, and we allow for a delay:

$$H(z)\, G(z) \approx z^{-n_0}.$$
Example: channel inversion

Design of an FIR equalizer $h[n]$ of length $L$:

$$\sum_{k=0}^{L-1} h[k]\, g[n-k] \approx \delta[n - n_0].$$

In matrix form, with $\mathbf{G}$ the (tall) convolution matrix of $g[n]$ and $\mathbf{u}_{n_0}$ a unit vector with a 1 at position $n_0$:

$$\mathbf{G}\mathbf{h} \approx \mathbf{u}_{n_0}, \qquad \mathbf{G} = \begin{bmatrix} g[0] & 0 & \cdots & 0 \\ g[1] & g[0] & \cdots & 0 \\ g[2] & g[1] & \cdots & 0 \\ \vdots & & & \vdots \end{bmatrix}, \qquad \mathbf{h} = \begin{bmatrix} h[0] \\ h[1] \\ \vdots \\ h[L-1] \end{bmatrix}.$$

Minimize $\|\mathbf{G}\mathbf{h} - \mathbf{u}_{n_0}\|^2$. The LS solution of this overdetermined system is

$$\mathbf{h} = (\mathbf{G}^H \mathbf{G})^{-1} \mathbf{G}^H \mathbf{u}_{n_0}.$$

Also, the corresponding normal equations are $\mathbf{R}\,\mathbf{h} = \mathbf{G}^H \mathbf{u}_{n_0}$, i.e., $\sum_{l=0}^{L-1} h[l]\, r_g(k-l) = g^*[n_0 - k]$, where $\mathbf{R} = \mathbf{G}^H\mathbf{G}$ has entries $r_g(k-l) = \sum_n g[n-l]\, g^*[n-k]$. Because the matrix $\mathbf{R}$ is Toeplitz, it can be inverted efficiently.
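A least-squares equalizer sketch for a hypothetical two-tap channel (the channel taps, the equalizer length, and the delay are invented for illustration; they are not from the slides). The chosen channel has a zero at $z = 1.5$, outside the unit circle, so a delayed inverse is needed.

```python
import numpy as np

def ls_equalizer(g, L, n0):
    """FIR h of length L such that (h * g)[n] ~ delta[n - n0]."""
    g = np.asarray(g, dtype=float)
    M = len(g) + L - 1                    # length of the full convolution h * g
    # Convolution matrix G: column k is g delayed by k samples
    G = np.zeros((M, L))
    for k in range(L):
        G[k:k + len(g), k] = g
    u = np.zeros(M); u[n0] = 1.0          # desired response: unit pulse at delay n0
    h, *_ = np.linalg.lstsq(G, u, rcond=None)
    return h

# Example with an invented non-minimum-phase channel
h = ls_equalizer(g=[1.0, -1.5], L=8, n0=4)
print(np.convolve([1.0, -1.5], h))        # ~ unit pulse at position 4
```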