4. THE CONSTANT MODULUS ALGORITHM

Outline
1. Introduction
2. CMA algorithm derivation
3. Simulations
4. Multistage CMA to find all sources
Introduction
Many communication signals have the constant modulus (CM) property:
FM, PM, FSK, PSK, ...
If these signals are corrupted by noise or interference, the CM property is lost
Can we find a filter w to restore this property, without knowing the sources?
The answer is yes: such a filter is obtained by the constant modulus algorithm (CMA)
Data model
[Figure: beamformer diagram]
We receive 1 signal with noise (plus interference):

    x_k = a s_k + n_k

The source is unknown but has constant modulus: |s_k| = 1 for all k.
Objective: construct a receiver weight vector w such that

    y_k = w^H x_k = \hat{s}_k

Possible solution: look for a w such that |y_k| = 1 for all k.
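To make the data model concrete, here is a minimal NumPy sketch that generates snapshots according to x_k = a s_k + n_k (the array size, array response a, source phases, and noise level are illustrative assumptions, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 4, 2000                                   # antennas, snapshots (illustrative)
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # unknown array response (assumed)
s = np.exp(2j * np.pi * rng.random(N))           # constant-modulus source: |s_k| = 1
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

X = np.outer(a, s) + noise                       # columns of X are the snapshots x_k
```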
Cost function
Possible optimization problem:

    \min_w J(w),    where    J(w) = E[ (|y_k|^2 - 1)^2 ]
The CMA cost function as a function of y (for simplicity, y is taken real here):
[Figure: the CMA cost function (|y|^2 - 1)^2 plotted versus y, for -2 <= y <= 2; minima at y = ±1.]
There is no unique minimum:
if y_k = w^H x_k is CM, then \alpha w is also a minimizer, for any scalar \alpha with |\alpha| = 1
Cost function
Cost function for 2 real sources and 2 antennas:
[Figure: contour plot of the cost surface, with stochastic gradient trajectories for µ = 0.05.]
Constant modulus algorithm
Cost function:

    J(w) = E[ (|y_k|^2 - 1)^2 ],    y_k = w^H x_k

Stochastic gradient method:

    w^{(k+1)} = w^{(k)} - \mu \nabla J(w^{(k)}),    µ > 0 is the step size

Computation of the gradient (use |y_k|^2 = y_k \bar{y}_k = w^H x_k x_k^H w):

    \nabla J(w) = 2 E{ (|y_k|^2 - 1) \cdot \nabla(w^H x_k x_k^H w) }
                = 2 E{ (|y_k|^2 - 1) \cdot x_k x_k^H w }
                = 2 E{ (|y_k|^2 - 1) \bar{y}_k x_k }

Replace the expectation by the instantaneous value and absorb the factor 2 into µ:

    CMA(2,2):    y_k = w^{(k)H} x_k
                 w^{(k+1)} = w^{(k)} - \mu x_k (|y_k|^2 - 1) \bar{y}_k

Similar to LMS, but with update error (|y_k|^2 - 1) y_k.
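A minimal sketch of the CMA(2,2) iteration in NumPy, applied to the columns of a data matrix X as generated above (the function name, step size, and initialization are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def cma22(X, mu=0.01):
    """CMA(2,2): w(k+1) = w(k) - mu * x_k * (|y_k|^2 - 1) * conj(y_k)."""
    M, N = X.shape
    w = np.zeros(M, dtype=complex)
    w[0] = 1.0                                   # illustrative initialization
    J = np.empty(N)
    for k in range(N):
        xk = X[:, k]
        yk = np.vdot(w, xk)                      # y_k = w^H x_k (vdot conjugates w)
        J[k] = (abs(yk)**2 - 1)**2               # instantaneous CMA cost
        w = w - mu * xk * (abs(yk)**2 - 1) * np.conj(yk)
    return w, J
```

Running `w, J = cma22(X, mu=0.05)` and smoothing J gives the kind of noisy, decaying convergence curve shown in the simulation slides below.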
Constant modulus algorithm
Advantages
• The algorithm is extremely simple to implement
• Adaptive tracking of sources
• Converges to minima close to the Wiener beamformers (for each source)
Disadvantages
• Noisy and slow
• The step size µ should be small, otherwise the iteration becomes unstable
• Only one source is recovered (which one?)
• Possible misconvergence to a local minimum (with finite data)
Simulations
[Figure: scatter plots of y_k in the complex plane before and after beamforming (real(y) vs. imag(y)), and the modulus of the output |y_k| versus time, over 2000 samples.]
Simulations
[Figure: convergence of the CMA cost function and of the output error cost function versus time (500 samples), for step sizes µ = 0.2 and µ = 0.05.]
As with LMS, a larger step size makes the convergence faster but also noisier.
Other CMAs
Alternative cost function: CMA(1,2)

    J(w) = E[ (|y_k| - 1)^2 ] = E[ (|w^H x_k| - 1)^2 ]

Corresponding CMA iteration:

    y_k := w^{(k)H} x_k
    w^{(k+1)} := w^{(k)} - \mu x_k (\bar{y}_k - \bar{y}_k / |y_k|)

Similar to LMS, but with update error y_k - y_k / |y_k|.
The desired signal is estimated by \hat{s}_k = y_k / |y_k|.
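A sketch of the CMA(1,2) variant under the same assumptions; only the update error changes relative to the CMA(2,2) sketch above, and y_k / |y_k| directly estimates the source:

```python
import numpy as np

def cma12(X, mu=0.01):
    """CMA(1,2): update error is y_k - y_k/|y_k| (assumes y_k != 0)."""
    M, N = X.shape
    w = np.zeros(M, dtype=complex)
    w[0] = 1.0                                   # illustrative initialization
    s_hat = np.empty(N, dtype=complex)
    for k in range(N):
        xk = X[:, k]
        yk = np.vdot(w, xk)                      # y_k = w^H x_k
        w = w - mu * xk * (np.conj(yk) - np.conj(yk) / abs(yk))
        s_hat[k] = yk / abs(yk)                  # blind estimate \hat{s}_k
    return w, s_hat
```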
Other CMAs
Normalized CMA (NCMA): µ becomes independent of the scaling of the data

    w^{(k+1)} := w^{(k)} - \frac{\mu}{\|x_k\|^2} x_k (\bar{y}_k - \bar{y}_k / |y_k|)

Orthogonal CMA (OCMA): whiten using the data covariance R

    w^{(k+1)} := w^{(k)} - \mu R_k^{-1} x_k (\bar{y}_k - \bar{y}_k / |y_k|)

Least squares CMA (LS-CMA): block update, we iteratively solve

    \min_w \| \hat{s} - w^H X \|

where \hat{s} is the best blind estimate of the complete source vector based on w:

    y_i = w^{(k)H} x_i,   i = 1, 2, ..., N
    \hat{s}^{(k)} := [ y_1/|y_1|,  y_2/|y_2|,  ...,  y_N/|y_N| ]
    w^{(k+1)} := (\hat{s}^{(k)} X^\dagger)^H
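A block LS-CMA sketch under the same assumptions: X holds all N snapshots, the pseudo-inverse is computed once per block, and the iteration count is an illustrative choice:

```python
import numpy as np

def ls_cma(X, w0, n_iter=10):
    """LS-CMA: alternate projecting the output onto the unit circle
    and solving min_w ||s_hat - w^H X|| in the least squares sense."""
    Xp = np.linalg.pinv(X)            # X^dagger, computed once per block
    w = w0.astype(complex)
    for _ in range(n_iter):
        y = w.conj() @ X              # y_i = w^H x_i for i = 1..N
        s_hat = y / np.abs(y)         # project output onto the unit circle
        w = (s_hat @ Xp).conj()       # w := (s_hat X^dagger)^H
    return w
```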
The CM Array
Try to find all sources...
[Figure: block diagram of the CM array: a CMA stage cascaded with an LMS stage.]

CMA gives an estimate of source 1: \hat{s}_{1,k}
This is used as a reference signal for LMS model matching:

    \hat{a}_1^{(k+1)} = \hat{a}_1^{(k)} - \mu_{lms} [ \hat{a}_1^{(k)} \hat{s}_{1,k} - x_k ] \bar{\hat{s}}_{1,k}

We can then remove this source and continue to find the other sources:

    x_{1,k} = x_k - \hat{a}_1^{(k)} \hat{s}_{1,k}
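A sketch of one stage of the CM array, inlining the CMA(2,2) update and the LMS model-matching update (step sizes and initialization are illustrative; repeating the stage on the returned data finds the next source):

```python
import numpy as np

def cm_array_stage(X, mu_cma=0.01, mu_lms=0.05):
    """One CM array stage: extract one source with CMA, fit its array
    response a_1 with LMS, and subtract the source from the data."""
    M, N = X.shape
    w = np.zeros(M, dtype=complex)
    w[0] = 1.0                                       # illustrative CMA init
    a1 = np.zeros(M, dtype=complex)                  # LMS estimate of a_1
    X_next = np.empty_like(X)
    for k in range(N):
        xk = X[:, k]
        yk = np.vdot(w, xk)                          # CMA output y_k
        w = w - mu_cma * xk * (abs(yk)**2 - 1) * np.conj(yk)
        s1 = yk / abs(yk)                            # \hat{s}_{1,k}
        a1 = a1 - mu_lms * (a1 * s1 - xk) * np.conj(s1)   # model matching
        X_next[:, k] = xk - a1 * s1                  # remove source 1
    return X_next, w, a1
```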