Equalizer

Lecture 8
Omar Abu-Ella
5/21/2013
Introduction
 Wireless communication systems require signal processing techniques that improve link performance.
 Equalization, diversity, and channel coding are techniques used to mitigate channel impairments.
Equalization
 Equalization compensates for the Inter-Symbol Interference (ISI) created by multipath propagation.
 An equalizer is a filter at the receiver whose impulse response is the inverse of the channel impulse response.
 Equalizers find their use in frequency-selective fading channels.
Diversity
 Diversity is another technique used to compensate for fast/slow fading, and it is usually implemented using two or more receiving dimensions.
 Macro-diversity: mitigates large-scale fading.
 Micro-diversity: mitigates small-scale fading.
Common diversity techniques:
 Space diversity
 Time diversity
 Frequency diversity
 Angular diversity
 Polarization diversity
Channel Coding
 Channel coding improves wireless communication link performance by adding redundant data bits to the transmitted message.
 At the baseband portion of the transmitter, a channel coder maps a digital message sequence into another specific code sequence containing a greater number of bits than the original message.
 Channel coding is used to combat deep fades and spectral nulls.
General Framework
(Diagram: equalization counteracts frequency-selective fading, diversity counteracts fast/slow fading, and channel coding counteracts deep fading.)
Equalization
 ISI has been identified as one of the major obstacles to high speed data
transmission over mobile radio channels. If the modulation bandwidth
exceeds the coherence bandwidth of the radio channel (i.e., frequency
selective fading), modulation pulses are spread in time, causing ISI.
Classification:
 The time-varying wireless channel requires adaptive equalization.
 Adaptive equalizers fall into two major categories: non-blind and blind equalizers.
 A non-blind adaptive equalizer has two phases of operation: training
and tracking.
Linear vs. Nonlinear Equalizers
Adaptive Equalizers
Classification of Equalizers
Non-blind vs. blind equalizers
 Non-blind adaptive equalization algorithms rely on statistical knowledge about the transmitted signal in order to converge to a solution, i.e., the optimum filter coefficients ("weights").
 This is typically accomplished through the use of a pilot training sequence sent over the channel to the receiver to help it identify the desired signal.
 Blind adaptive equalization algorithms do not require prior training, and hence they are referred to as "blind" algorithms.
 These algorithms attempt to extract significant characteristics of the transmitted signal in order to separate it from other signals in the surrounding environment.
Training Sequence:
 Initially a known, fixed length training sequence is sent by the
transmitter so that the receiver equalizer may average to a proper
setting.
 The training sequence is typically a pseudo-random binary signal or a fixed, prescribed bit pattern (a small generation sketch follows below).
 The training sequence is designed to permit an equalizer at the receiver to acquire the proper filter coefficients under the worst possible channel conditions.
 An adaptive filter at the receiver thus uses a recursive algorithm to
evaluate the channel and estimate filter coefficients to compensate for
the channel.
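As a rough illustration of such a training sequence (my own sketch, not part of the lecture), the snippet below generates a pseudo-random binary sequence with a small linear-feedback shift register and maps it to ±1 symbols; the register length, taps, and seed are arbitrary choices.

```python
import numpy as np

def pn_sequence(length, taps=(5, 2), seed=0b10101):
    """Generate a +/-1 pseudo-random training sequence with a small LFSR.

    The 5-bit register length, feedback taps, and seed are illustrative choices.
    """
    state = seed
    bits = []
    for _ in range(length):
        bits.append(state & 1)                                            # output the LSB
        fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1    # feedback bit
        state = (state >> 1) | (fb << (taps[0] - 1))                      # shift and reinsert
    return 2.0 * np.array(bits) - 1.0                                     # map {0,1} -> {-1,+1}

training = pn_sequence(63)   # known to both transmitter and receiver
```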
A Mathematical Framework
 The signal received by the equalizer is given by

    x(t) = d(t) ⊗ h(t) + n_b(t)

 d(t) is the transmitted signal, h(t) is the combined impulse response of the transmitter, the channel, and the RF/IF section of the receiver, and n_b(t) denotes the baseband noise.
 The main goal of any equalization process is to remove the effect of the channel, i.e., to satisfy

    h(t) ⊗ h_eq(t) = δ(t)

 optimally. In the frequency domain this can be written as

    H_ch(f) H_eq(f) = 1

 which indicates that an equalizer is actually an inverse filter of the channel.
Zero Forcing Equalization
 A zero-forcing equalizer simply inverts the channel: H_eq(f) = 1 / H_ch(f).
 Disadvantage: since H_eq(f) is the inverse of H_ch(f), the inverse filter may excessively amplify the noise at frequencies where the channel spectrum has high attenuation, as the short example below illustrates. It is therefore rarely used for wireless links, except for static channels with high SNR.
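To see the noise-amplification problem numerically, here is a minimal sketch (my own example with an arbitrary two-tap channel and noise level, not from the slides) that applies the inverse filter in the frequency domain:

```python
import numpy as np

N = 256
h_ch = np.array([1.0, 0.95])               # two-path channel with a deep spectral null (assumed)
H_ch = np.fft.fft(h_ch, N)                 # channel frequency response
H_eq = 1.0 / H_ch                          # zero-forcing equalizer: exact inverse filter

rng = np.random.default_rng(0)
d = rng.choice([-1.0, 1.0], N)             # transmitted symbols
noise = 0.05 * rng.standard_normal(N)
x = np.fft.ifft(np.fft.fft(d) * H_ch).real + noise    # received signal (circular model)

d_hat = np.fft.ifft(np.fft.fft(x) * H_eq).real        # equalized signal: ISI removed

print("max |H_eq|:", np.abs(H_eq).max())               # large gain where the channel has a null
print("noise power in:", np.mean(noise ** 2))
print("error power out:", np.mean((d_hat - d) ** 2))   # dominated by the amplified noise
```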
A Generic Adaptive Equalizer
Adaptive equalizer
 The input to the equalizer is the vector

    x_k = [x_k, x_{k-1}, ..., x_{k-N}]^T

 the tap coefficient vector is

    w_k = [w_k(0), w_k(1), ..., w_k(N)]^T

 and the output sequence of the equalizer, y_k, is the inner product of x_k and w_k, i.e.

    y_k = w_k^T x_k

 The error signal is defined as

    e_k = d_k - y_k

(a tiny numerical check follows below).
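A tiny numerical sanity check of these definitions (my own made-up numbers):

```python
import numpy as np

x_k = np.array([0.9, -1.1, 1.05])   # input vector (current and delayed samples), made-up values
w_k = np.array([0.8, 0.1, -0.05])   # equalizer tap weights, made-up values
d_k = 1.0                           # desired (training) symbol

y_k = w_k @ x_k                     # equalizer output: inner product of w_k and x_k
e_k = d_k - y_k                     # error signal
print(y_k, e_k)
```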
 Assuming d_k and x_k to be jointly stationary, the Mean Square Error (MSE) is given as

    MSE = E[e_k²] = E[(d_k - y_k)²] = E[(d_k - w_k^T x_k)²]

 The MSE can be expressed as

    MSE = σ_d² + w_k^T R w_k - 2 w_k^T p

 where the signal variance is σ_d² = E[d_k²], and the cross-correlation vector p between the desired response and the input signal is defined as

    p = E[d_k x_k] = E[d_k x_k, d_k x_{k-1}, ..., d_k x_{k-N}]^T

 The input correlation matrix R is defined as the (N + 1) × (N + 1) square matrix

    R = E[x_k x_k^T]
 Clearly, the MSE is a function of w_k. On equating the gradient ∂(MSE)/∂w_k to 0, we get the condition for minimum MSE (MMSE), which is known as the Wiener solution:

    w_opt = R⁻¹ p

 Hence, the MMSE is given by the equation

    J_min = σ_d² - p^T R⁻¹ p

(a numerical sketch follows below).
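The Wiener solution can be sketched numerically as follows; the toy ISI channel, noise level, and equalizer length are assumptions made only for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.choice([-1.0, 1.0], 5000)                       # desired (transmitted) symbols
x = np.convolve(d, [1.0, 0.4])[:len(d)]                 # toy ISI channel (assumed)
x = x + 0.05 * rng.standard_normal(len(d))              # additive noise

N = 4                                                   # number of delay taps, so N + 1 weights
# Build the input vectors x_k = [x_k, x_{k-1}, ..., x_{k-N}]
X = np.array([x[k - N:k + 1][::-1] for k in range(N, len(x))])
D = d[N:]

R = X.T @ X / len(X)                                    # input correlation matrix E[x_k x_k^T]
p = X.T @ D / len(X)                                    # cross-correlation vector E[d_k x_k]

w_opt = np.linalg.solve(R, p)                           # Wiener solution  w = R^-1 p
mmse = np.mean(D ** 2) - p @ w_opt                      # J_min = sigma_d^2 - p^T R^-1 p
print(w_opt, mmse)
```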
Choice of Algorithms for Adaptive
Equalization
Factors which determine an algorithm's performance are:
 Rate of convergence: the number of iterations required for the algorithm to converge close enough to the optimal solution.
 Computational complexity: the number of operations required to make one complete iteration of the algorithm.
 Numerical properties: robustness against computation errors, which influences the stability of the algorithm.
Classic equalizer algorithms
These classic equalizer algorithms are the building blocks of most of today's wireless standards:
 Zero Forcing Algorithm (ZF)
 Least Mean Square Algorithm (LMS)
 Recursive Least Square Algorithm (RLS)
 Constant Modulus Algorithm (CMA)
MSE Criterion
The equalizer filter response w is the unknown parameter; d[n] is the desired signal and x[n] is the received signal. The criterion is the squared error between the desired signal and the received signal filtered by the equalizer:

    J(w) = Σ_{n=0}^{N-1} (d[n] - w·x[n])²

Minimizing J(w) over a block of data leads to the LS algorithm, while minimizing it recursively, sample by sample, leads to the LMS algorithm (see the example below).
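A minimal sketch of the block (LS) solution of this criterion for a single equalizer weight, using made-up data; the closed-form LS answer Σ d[n]x[n] / Σ x[n]² is shown for comparison:

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.choice([-1.0, 1.0], 1000)                 # desired symbols
x = 0.7 * d + 0.1 * rng.standard_normal(1000)     # received signal: toy gain plus noise

def J(w):
    """Sum-of-squared-errors cost for a single equalizer weight w."""
    return np.sum((d - w * x) ** 2)

w_grid = np.linspace(0.0, 3.0, 301)
w_ls = w_grid[np.argmin([J(w) for w in w_grid])]  # grid minimizer of the LS cost

# Closed-form LS solution for comparison
print(w_ls, np.sum(d * x) / np.sum(x * x))
```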
Least Mean Square (LMS) Algorithm
• Introduced by Widrow & Hoff in 1959
• Simple, no matrix calculations involved in the adaptation
• In the family of stochastic gradient algorithms
• Approximation of the steepest–descent method
• Based on the Minimum Mean square Error (MMSE)
criterion.
• Adaptive process: recursive adjustment of filter tap
weights
Least Mean Square (LMS) Algorithm
 In practice, the minimization of the MSE is carried out recursively, and may be performed by use of the stochastic gradient algorithm. It is the simplest equalization algorithm and requires only 2N + 1 operations per iteration.
 The LMS weights are computed iteratively by

    w_k(n + 1) = w_k(n) + μ e(n) x(n - k)

 where the subscript k denotes the kth delay stage in the equalizer and μ is the step size, which controls the convergence rate and stability of the algorithm.
Notations
 Input signal (vector): u(n)
 Autocorrelation matrix of the input signal: R_uu = E[u(n) u^H(n)]
 Desired response: d(n)
 Cross-correlation vector between u(n) and d(n): P_ud = E[u(n) d*(n)]
 Filter tap weights: w(n)
 Filter output: y(n) = w^H(n) u(n)
 Estimation error: e(n) = d(n) - y(n)
 Mean Square Error: J = E[|e(n)|²] = E[e(n) e*(n)]
System Block diagram using LMS
u[n] = input signal from the channel
d[n] = desired response
H[n] = training sequence generator
e[n] = error feedback between:
  a) the desired response, and
  b) the equalizer (FIR filter) output
W = FIR filter defined by the tap-weight vector
Steepest Descent Method
 The steepest descent algorithm is a gradient-based method which employs a recursive solution of the problem (cost function).
 The current equalizer tap vector is w(n) and the next equalizer tap vector is w(n+1). We can estimate w(n+1) by the approximation

    w(n + 1) = w(n) - 0.5 μ ∇J(n)

 The gradient is a vector pointing in the direction of the change in filter coefficients that would cause the greatest increase in the error signal. Because the goal is to minimize the error, however, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated.
 The constant μ is the step size.
 After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.
Steepest Descent Example
• Given the following function, we need to obtain the vector that gives its absolute minimum:

    y(C1, C2) = C1² + C2²

• It is obvious that C1 = C2 = 0 gives the minimum.
• Now let us find the solution by the steepest descent method.
Steepest Descent Example
• We start by assuming (C1 = 5, C2 = 7).
• We select the constant μ. If it is too big, we miss the minimum; if it is too small, it takes a long time to get to the minimum. We select μ = 0.1.
• The gradient vector is

    ∇y = [∂y/∂C1, ∂y/∂C2]^T = [2C1, 2C2]^T

• So our iterative equation is

    [C1, C2]^T_{n+1} = [C1, C2]^T_n - 0.5 · 0.1 · [2C1, 2C2]^T_n = 0.9 · [C1, C2]^T_n
Steepest Descent Example
Iteration 0 (initial guess):  [C1, C2] = [5, 7]
Iteration 1:                  [C1, C2] = [4.5, 6.3]
Iteration 2:                  [C1, C2] = [4.05, 5.67]
......
Iteration 60:                 [C1, C2] = [0.01, 0.013]
lim n → ∞:                    [C1, C2] = [0, 0]   (the minimum)

As we can see, the vector [C1, C2] converges to the value which yields the function minimum, and the speed of this convergence depends on μ; the short script below reproduces these numbers.
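A few lines of code are enough to reproduce the iteration above (my own sketch):

```python
import numpy as np

mu = 0.1
c = np.array([5.0, 7.0])                    # initial guess (C1, C2)

for n in range(60):
    grad = np.array([2 * c[0], 2 * c[1]])   # gradient of y = C1^2 + C2^2
    c = c - 0.5 * mu * grad                 # steepest-descent step (equals 0.9 * c here)

print(c)   # roughly [0.009, 0.0126], i.e. converging to the minimum at (0, 0)
```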
MMSE criterion for LMS
 MMSE: Minimum Mean Square Error.
 The MSE is

    MSE = E{[d(k) - y(k)]²}
        = E{[d(k) - Σ_{n=-N}^{N} w(n) u(k - n)]²}
        = E{d(k)²} - 2 Σ_{n=-N}^{N} w(n) P_du(n) + Σ_{n=-N}^{N} Σ_{m=-N}^{N} w(n) w(m) R_uu(n - m)

 where

    P_du(n) = E{d(k) u(k - n)}
    R_uu(n - m) = E{u(k - m) u(k - n)}

 To obtain the LMS MMSE we differentiate the MSE with respect to w(k) and equate it to zero:

    ∂(MSE)/∂w(k) = ∂/∂w(k) [ E{d(k)²} - 2 Σ_{n=-N}^{N} w(n) P_du(n) + Σ_{n=-N}^{N} Σ_{m=-N}^{N} w(n) w(m) R_uu(n - m) ]
MMSE criterion for LMS
Finally we get

    ∂(MSE)/∂w(k) = -2 P_du(k) + 2 Σ_{n=-N}^{N} w(n) R_uu(n - k),    k = 0, 1, 2, ...

By equating the derivative to zero we get the MMSE solution:

    w_opt = R⁻¹ P

This calculation is complicated for a DSP (it requires computing an inverse matrix), and it can make the system unstable:
If there are nulls in the spectrum, we can get very large values in the inverse matrix (illustrated numerically below).
Also, we do not always know the autocorrelation matrix of the input and the cross-correlation vector, so we would like to approximate them.
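A tiny numerical illustration (with made-up correlation values) of why the matrix inversion is problematic: when the input correlation matrix is nearly singular, its inverse produces very large tap values.

```python
import numpy as np

R_good = np.array([[1.0, 0.2], [0.2, 1.0]])        # well-conditioned input correlation matrix
R_bad = np.array([[1.0, 0.999], [0.999, 1.0]])     # nearly singular (spectral null in the input)
p = np.array([1.0, 0.5])

print(np.linalg.cond(R_good), np.linalg.solve(R_good, p))
print(np.linalg.cond(R_bad), np.linalg.solve(R_bad, p))   # huge tap values from the inverse
```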
LMS – Approximation of the Steepest
Descent Method
According to the MMSE criterion, the steepest-descent update is

    w(n + 1) = w(n) + μ [P - R w(n)]

We make the following assumptions:
• The input vectors u(n), u(n-1), ..., u(1) are statistically independent.
• The input vector u(n) is statistically independent of all previous desired responses d(n-1), ..., d(1).
• The input vector u(n) and the desired response d(n) are jointly Gaussian-distributed random variables.
• The environment is wide-sense stationary.
In LMS, the following instantaneous estimates are used:

    R̂_uu = u(n) u^H(n)    (autocorrelation matrix of the input signal)
    P̂_ud = u(n) d*(n)     (cross-correlation vector between u[n] and d[n])

*** Equivalently, we can compute the gradient of |e[n]|² instead of E{|e[n]|²}.
LMS Algorithm

    w[n + 1] = w[n] + μ {P̂ - R̂ w[n]}
             = w[n] + μ {u[n] d*[n] - u[n] u^H[n] w[n]}
             = w[n] + μ u[n] {d*[n] - y*[n]}

We get the final result:

    y[n] = w^H[n] u[n]
    e[n] = d[n] - y[n]
    w[n + 1] = w[n] + μ u[n] e*[n]
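Putting the final equations together, a minimal LMS training loop might look like the sketch below; the toy channel, number of taps, and step size are assumptions for the example, and real-valued signals are used so the conjugates reduce to plain values:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 11                                        # number of equalizer taps (assumed)
mu = 0.01                                     # step size (assumed)

d = rng.choice([-1.0, 1.0], 4000)             # known training symbols
u = np.convolve(d, [1.0, 0.4])[:len(d)]       # toy ISI channel (assumed)
u = u + 0.02 * rng.standard_normal(len(d))    # additive noise

w = np.zeros(M)                               # equalizer tap weights
errs = []
for n in range(M - 1, len(d)):
    u_vec = u[n - M + 1:n + 1][::-1]          # input vector u[n], most recent sample first
    y = w @ u_vec                             # y[n] = w^H[n] u[n]   (real signals here)
    e = d[n] - y                              # e[n] = d[n] - y[n]
    w = w + mu * u_vec * e                    # w[n+1] = w[n] + mu u[n] e*[n]
    errs.append(e ** 2)

print("MSE over the last 500 symbols:", np.mean(errs[-500:]))
```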
LMS Step-size
 The convergence rate of the LMS algorithm is slow, because only one parameter, the step size μ, controls the adaptation rate. To prevent the adaptation from becoming unstable, the value of μ is chosen from

    0 < μ < 2 / (Σ_i λ_i)

 where λ_i is the ith eigenvalue of the autocorrelation (covariance) matrix R.
LMS Stability
The step size determines the algorithm's convergence rate:
• Too small a step size makes the algorithm take many iterations to converge.
• Too large a step size prevents the weight taps from converging.
Rule of thumb (see the snippet below):

    μ ≤ 1 / [5 (2N + 1) P_r]

where 2N + 1 is the number of equalizer taps and P_r is the received power (signal + noise), which can be estimated at the receiver.
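A quick sketch of applying this rule of thumb at the receiver (all values illustrative):

```python
import numpy as np

N = 5                                                        # 2N + 1 = 11 equalizer taps (assumed)
received = np.random.default_rng(4).standard_normal(1000)   # stand-in for received samples
Pr = np.mean(received ** 2)                                  # estimated received power (signal + noise)

mu_max = 1.0 / (5 * (2 * N + 1) * Pr)                        # rule-of-thumb bound on the step size
print(mu_max)
```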
LMS Convergence using different μ
LMS : Pros & Cons
LMS advantages:
• Simplicity of implementation.
• Does not neglect the noise, unlike the zero-forcing equalizer.
• Avoids the need to calculate an inverse matrix.
LMS disadvantages:
• Slow convergence.
• Requires a training sequence as a reference, thus reducing the usable communication bandwidth.
Recursive Least Squares (RLS)
Blind Algorithms
 "Blind" adaptive algorithms are defined as those algorithms which do not need a reference or training sequence to determine the required complex weight vector.
 Instead, they try to restore some known property of the received input data vector.
 A general property of the complex envelope of many digital signals is a constant modulus.
Constant Modulus Algorithm (CMA)
 The Constant Modulus Algorithm (CMA) is used for constant-envelope modulations; a sketch of its update follows below.
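For completeness, a minimal sketch of a CMA update loop (my own illustration, with an assumed channel and BPSK symbols; the update follows the stochastic gradient of the dispersion cost E[(|y|² - R2)²]):

```python
import numpy as np

rng = np.random.default_rng(5)
M = 11                                        # equalizer length (assumed)
mu = 1e-3                                     # step size (assumed)
R2 = 1.0                                      # constant-modulus target (equals 1 for BPSK)

d = rng.choice([-1.0, 1.0], 10000)            # constant-envelope symbols, unknown to the receiver
u = np.convolve(d, [1.0, 0.3])[:len(d)]       # toy ISI channel (assumed)
u = u + 0.02 * rng.standard_normal(len(d))

w = np.zeros(M)
w[M // 2] = 1.0                               # centre-spike initialisation (common heuristic)
for n in range(M - 1, len(d)):
    u_vec = u[n - M + 1:n + 1][::-1]
    y = w @ u_vec                             # equalizer output
    e = y * (y ** 2 - R2)                     # CMA error term: no training symbols required
    w = w - mu * e * u_vec                    # stochastic-gradient step on E[(|y|^2 - R2)^2]

y_out = np.array([w @ u[n - M + 1:n + 1][::-1] for n in range(len(d) - 500, len(d))])
print("constant-modulus dispersion after adaptation:", np.mean((y_out ** 2 - R2) ** 2))
```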