Analysis of Adaptive Volterra Filters with LMS and RLS Algorithms
Amrita Rai and Dr. Amit Kumar Kohli
Department of Electronics and Communication Engineering, Thapar University, Patiala 147004 Punjab
[email protected], [email protected]
Abstract -- Linear filters have played a crucial role in the development of various signal processing techniques and are relatively simple from conceptual and implementation viewpoints. However, there are several situations in which the performance of linear filters is unacceptable; in such situations, adaptive polynomial filters are used and perform satisfactorily. Adaptive polynomial filters are a nonlinear generalization of adaptive linear filters that are based on nonrecursive and recursive linear difference equations. Polynomial filters are often referred to as Volterra filters when the input and output signals are related through the Volterra series expansion. In a non-stationary or time-varying environment, the adaptive polynomial filter helps track the statistics of the input data or the dynamics of a system. This article explains the adaptive Volterra filter in detail with different algorithms such as LMS (Least Mean Square) and RLS (Recursive Least Square). Also discussed are the current research areas and problems associated with nonlinear adaptive filters.
Keywords: Volterra Series, Least Mean Square, Recursive Least Square, System Identification.
I. INTRODUCTION
ADAPTIVE filters are used in various areas where the statistical knowledge of the signals to be filtered/analyzed is not known a priori, or where the signal may be slowly time-varying. In adaptive filtering, the adjustable filter parameters are optimized. The notion of making filters adaptive, i.e., altering the parameters (coefficients) of a filter according to some algorithm, tackles the problem that we may not know in advance, e.g., the characteristics of the signal, of the unwanted signal, or of a system influence on the signal that we would like to compensate. Adaptive filters can adjust to an unknown environment and can even track signal or system characteristics that vary over time [1-4].
General characteristics of adaptive filters:
1. Automatically adjustable: adapt in a changing system.
2. Can perform specific filtering or decision-making.
3. Have adaptation algorithms for adjusting parameters.
Over the last decade, Volterra (polynomial) filters and nonlinear adaptive infinite impulse-response filters have been appealing areas of research and have been applied in many real-world applications. Nevertheless, Volterra filters still present some limitations because of their computational complexity, which increases exponentially with the filter order. When the nonlinear system order is unknown, adaptive methods and algorithms are widely used for Volterra kernel estimation. The accuracy of the Volterra kernels determines the accuracy of the system model and of the inverse system used for compensation. The speed of the kernel estimation process is also important; a fast kernel estimation method may allow the user to construct a higher-order model that gives an even better system representation.
Recently, more or less general representations of the Volterra
filter with its truncated version have received increasing
attention in nonlinear signal processing fields. There are
two important properties of the Volterra filter that can
further account for the attention paid to such nonlinear
structures [4-5].
1. One important property relies on the fact that the output of a Volterra filter depends linearly on the coefficients of the filter itself. In other words, the Volterra filter may be interpreted as an extension of linear filters to the nonlinear case. Therefore, many linear filters, with the corresponding adaptive algorithms, can be extended to polynomial filters. Moreover, this characteristic can be used extensively to analyze quadratic filters and to find new implementations.
2. Another interesting property results from the representation of the nonlinearity by means of multidimensional operators working on products of input samples. This characteristic allows the filter behavior to be described in the frequency domain by means of a type of multidimensional convolution (see the sketch after this list).
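As an informal illustration of the second property, the following Python sketch computes the two-dimensional frequency response of a small quadratic kernel; the kernel values and sizes are hypothetical and chosen only for demonstration.

import numpy as np

# A small, symmetric quadratic Volterra kernel h2[m1, m2] (illustrative values).
N = 8
h2 = np.zeros((N, N))
h2[0, 0] = 0.5
h2[0, 1] = h2[1, 0] = 0.2   # symmetry: h2[m1, m2] = h2[m2, m1]

# The quadratic term acts like a two-dimensional convolution on products of
# input samples, so its frequency-domain behavior is captured by the
# two-dimensional DFT of the kernel.
H2 = np.fft.fft2(h2, s=(64, 64))

# Output energy at a given frequency w collects contributions from all
# frequency pairs (w1, w2) with w1 + w2 = w (modulo 2*pi).
print(H2.shape)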
The current trend in telecommunication system design is the identification and compensation of unwanted nonlinearities. It has been demonstrated that unwanted nonlinearities in a system have a detrimental effect on its performance. There are various ways of reducing the effects of undesired nonlinearities; the use of nonlinear models to characterize and compensate harmful nonlinearities offers one possible solution. The Volterra series has been widely applied as a nonlinear system modeling technique with
considerable success. However, at present, no one general
method exists to calculate the Volterra kernels for nonlinear
systems, although they can be calculated for systems whose
order is known and finite.
II. VOLTERRA FILTER THEORY
The Volterra filter theory was first studied and developed by
Wiener, who mainly worked on the analysis of nonlinear
systems using white Gaussian input and so-called G functions.
Following his works, many researchers used the truncated
Volterra series for estimation and identification of nonlinear
systems [4]. However, for Volterra filters of higher order or longer memory, the computational burden is prohibitive for most practical applications.
To overcome this computational complexity, Koh and Powers proposed an iterative factorization technique to design a subclass of second-order Volterra (SOV) filters, which alleviates the complexity of the filtering operations considerably and applies to nonlinear system identification with Gaussian input signals. Furthermore, reduced-complexity implementations have been proposed that are composed of a squarer followed by linear filtering, or of two linear filters whose outputs are multiplied.
A more general approximation to the quadratic filter, called multi-memory decomposition (MMD), has been investigated; it is composed of three linear filters connected by a multiplier [11]. Although these structures reduce the computational complexity significantly compared with the direct-form SOV filter, stable performance is not guaranteed, and a further drawback of these approaches is that the decomposition of the Volterra series is not unique in the identification procedure.
Moreover, because the coefficients of the linear filters are adapted in an alternating fashion, a problem of local minima may exist. Consequently, convergence is not easily established, especially for higher-order kernels. An alternative and very effective method based on a parallel-cascade structure for the adaptive Volterra filter was first proposed by applying singular value decomposition (SVD) to the coefficient matrix in order to obtain an approximation based on its most important eigenvectors [5].
This parallel-cascade Volterra filter has also been combined with the normalized least mean square (NLMS) algorithm to reduce its implementation complexity by using fewer than the maximum number of branches required [7]. M.M. Banat proposed a pipelined Volterra filter utilizing a recursive equation, and a pipelined implementation of quadratic adaptive Volterra filters based on the NLMS algorithm was presented. Though these approaches can effectively reduce the complexity of the implementation structure, the output of the system becomes a nonlinear function of the filter coefficients [8].
Therefore, estimating the coefficients becomes a nonlinear estimation problem with respect to the global optimum. Moreover, the stability and convergence of the adaptive algorithm cannot be guaranteed by these architectures. Recently, two nonlinear blind adaptive interference cancellation algorithms (exact Newton and approximate Newton algorithms) based on the second-order Volterra expansion were proposed and developed to overcome multiple access interference [10].
III. VOLTERRA SERIES EXPANSION FOR
NONLINEAR SYSTEMS
Let x[n] and y[n] represent the input and output signals,
respectively, of a discrete-time and causal nonlinear system.
The Volterra series expansion for y[n] using x[n] is given by:
yn  h0 
m1  0


m1  0
m2  0
1
1
1
  h m , m  xn  m  xn  m   ...



 h m  xn  m 


 
m1  0
m2  0
2
1
2
1
2
 h m , m , ..., m  xn  m  xn  m  ... x n  m   ...)

...
mp  0
p
1
2
p
1
2
p
(1)
In (1), h_p[m_1, m_2, ..., m_p] is known as the p-th order Volterra kernel of the system. Without any loss of generality, one can assume that the Volterra kernels are symmetric, i.e., h_p[m_1, m_2, ..., m_p] is left unchanged under any of the possible p! permutations of the indices m_1, m_2, ..., m_p. One can think of the Volterra series expansion as a Taylor series expansion with memory. The limitations of the Volterra series expansion are similar to those of the Taylor series expansion: both expansions do poorly when there are discontinuities in the system description, and no Volterra series expansion exists for systems involving such types of nonlinearity. Even though clearly not applicable in all situations, Volterra system models have been successfully employed in a wide variety of applications, and such models continue to be popular with researchers in this area.
Among the early works on nonlinear system analysis is a very
important contribution by Wiener. His analysis technique
involved white Gaussian input signals and used
“G-functionals” to characterize nonlinear system behavior.
Following his work, several researchers employed the Volterra series expansion and related representations for the estimation and identification of time-invariant or time-varying nonlinear systems.
Since an infinite series expansion like (1) is not useful in
filtering applications, one must work with truncated Volterra
series expansions.
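To make the truncated expansion concrete, the following Python sketch evaluates a second-order truncation of (1) for a causal system with memory span N; the kernel values in the example are hypothetical and serve only to illustrate the computation.

import numpy as np

def volterra2_output(x, h0, h1, h2):
    """Evaluate a truncated (p <= 2) Volterra series, Eq. (1).

    x  : 1-D input signal
    h0 : zeroth-order (constant) term
    h1 : linear kernel h1[m], shape (N,)
    h2 : symmetric quadratic kernel h2[m1, m2], shape (N, N)
    """
    N = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # The N most recent samples x[n], x[n-1], ..., x[n-N+1] (zero before n = 0).
        xs = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(N)])
        # Linear term plus quadratic term sum_{m1,m2} h2[m1,m2] x[n-m1] x[n-m2].
        y[n] = h0 + h1 @ xs + xs @ h2 @ xs
    return y

# Hypothetical kernels with memory span N = 3.
h1 = np.array([1.0, 0.5, -0.2])
h2 = 0.05 * np.eye(3)                      # symmetric quadratic kernel
x = np.random.default_rng(0).standard_normal(16)
y = volterra2_output(x, h0=0.0, h1=h1, h2=h2)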
IV. VOLTERRA KERNELS ESTIMATION BY THE LMS
ADAPTIVE ALGORITHM
The Volterra filter of fixed order and fixed memory adapts to the unknown nonlinear system using one of the various adaptive algorithms. The use of adaptive techniques for Volterra kernel estimation has been well documented; most of the previous work considers second-order Volterra filters. A simple and commonly used algorithm uses an LMS adaptation criterion.
Though the LMS algorithm has its weaknesses, such as its dependence on the signal statistics, which can lead to slow convergence or large residual errors, it is very simple to implement and well behaved compared to the faster recursive algorithms [4]. A typical adaptive technique used for Volterra kernel identification is shown in Fig. 2.

Figure 1. A block diagram of the adaptive Volterra filter.

Figure 2. Volterra kernel identification by the adaptive LMS method.

This section discusses the extension of the algorithm to the nonlinear case using the previously defined input vectors. For a system with memory span N, the N most recent inputs and their nonlinear combinations are aggregated into one expanded input vector

X_e(n) = [x(n), x(n-1), \ldots, x(n-N+1), x^2(n), x(n)x(n-1), \ldots, x^Q(n-N+1)]^T    (5)

Similarly, the expanded filter coefficient vector H(n) is given by

H(n) = [h_1(0), h_1(1), \ldots, h_1(N-1), h_2(0,0), h_2(0,1), \ldots, h_Q(N-1, \ldots, N-1)]^T    (6)

Thus, using (5) and (6), the Volterra filter output can be compactly rewritten as

y(n) = H^T(n) X_e(n)    (7)

The error signal e(n) is formed by subtracting y(n) from the noisy desired response d(n), i.e.,

e(n) = d(n) - y(n) = d(n) - H^T(n) X_e(n)    (8)

For the LMS algorithm, we minimize the mean-squared error

E\{e^2(n)\} = E\{[d(n) - H^T(n) X_e(n)]^2\}    (9)

The well-known LMS update equation for a first-order filter is

H(n+1) = H(n) + \mu e(n) X_e(n)    (10)

where \mu is a small positive constant (referred to as the step size) that determines the speed of convergence and also affects the final error of the filter output [5]. The extension of the LMS algorithm to higher-order (nonlinear) Volterra filters involves a few simple changes. First, the vector of impulse-response coefficients becomes the vector of Volterra kernel coefficients. Also, the input vector, which in the linear case contains only the input samples themselves, now contains their nonlinear combinations, which complicates the treatment of time-varying Volterra filters.
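The following Python sketch illustrates Eqs. (5)-(10) for a second-order (Q = 2) Volterra LMS filter; the function names, the step size, and the toy system are hypothetical, and, because the kernels are symmetric, only products x(n-i)x(n-j) with i <= j are kept in the expanded vector.

import numpy as np

def expanded_input(x, n, N):
    """Expanded input vector X_e(n) of Eq. (5) for a second-order (Q = 2) filter:
    the N most recent samples followed by their pairwise products (i <= j only,
    since the quadratic kernel is symmetric)."""
    xs = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(N)])
    quad = np.array([xs[i] * xs[j] for i in range(N) for j in range(i, N)])
    return np.concatenate([xs, quad])

def sovlms_step(H, x, d_n, n, N, mu=0.01):
    """One iteration of the second-order Volterra LMS filter, Eqs. (7)-(10)."""
    Xe = expanded_input(x, n, N)
    y = H @ Xe                 # filter output, Eq. (7)
    e = d_n - y                # error signal, Eq. (8)
    H = H + mu * e * Xe        # LMS coefficient update, Eq. (10)
    return H, y, e

# Hypothetical usage: identify a toy nonlinear system from input x and desired output d.
N = 3
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d = np.convolve(x, [1.0, 0.5, -0.2])[:500] + 0.1 * x**2   # assumed unknown system
H = np.zeros(N + N * (N + 1) // 2)
for n in range(len(x)):
    H, y, e = sovlms_step(H, x, d[n], n, N)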
V. VOLTERRA KERNEL ESTIMATION USING THE RLS ADAPTIVE ALGORITHM

The RLS (recursive least squares) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector. To decrease the influence of input samples from the far past, a weighting factor for each sample is used. A typical adaptive technique is shown in Figure 3.

Figure 3. Volterra kernel identification by the adaptive RLS method.

The Volterra filter of a fixed order and a fixed memory adapts to the unknown nonlinear system using one of the various adaptive algorithms. The use of adaptive techniques for Volterra kernel estimation has been well studied; most of the previous research considers second-order Volterra filters, and some considers the third-order case [15].
A simple and commonly used algorithm is based on the LMS adaptation criterion. Adaptive Volterra filters based on the LMS adaptation algorithm are computationally simple but suffer from slow, input-signal-dependent convergence behavior and hence are not useful in many applications [15]. As in the linear case, the adaptive nonlinear system minimizes the following cost function at each time instant:

J(n) = \sum_{k=0}^{n} \lambda^{n-k} [d(k) - H^T(n) X(k)]^2    (11)

where H(n) and X(k) are the coefficient and input signal vectors, respectively, as defined in (6) and (5), \lambda is a factor that controls the memory span of the adaptive filter, and d(k) represents the desired output. The solution of (11) at each time instant can be found by differentiating J(n) with respect to H(n), setting the derivative to zero, and solving for H(n). The optimal solution at time n is given by [15, 2]

H(n) = C^{-1}(n) P(n)    (12)

where

C(n) = \sum_{k=0}^{n} \lambda^{n-k} X(k) X^T(k)    (13)

and

P(n) = \sum_{k=0}^{n} \lambda^{n-k} d(k) X(k)    (14)

H(n) can be recursively updated by realizing that

C(n) = \lambda C(n-1) + X(n) X^T(n)    (15)

and

P(n) = \lambda P(n-1) + d(n) X(n)    (16)

One can simplify the computational complexity by making use of the matrix inversion lemma for inverting C(n). This results in the update given in (17), where k(n) is the gain vector; the derivation is similar to that for the linear RLS adaptive filter [15, 2]:

C^{-1}(n) = \lambda^{-1} [C^{-1}(n-1) - k(n) X^T(n) C^{-1}(n-1)]    (17)
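A minimal Python sketch of one SOVRLS iteration follows, under the assumption that the expanded input vector X_e(n) of Eq. (5) has already been formed; the function name, the forgetting factor value, and the initialization of the inverse matrix are illustrative choices, not the paper's specification.

import numpy as np

def sovrls_step(H, Cinv, Xe, d_n, lam=0.99):
    """One iteration of a second-order Volterra RLS (SOVRLS) filter.

    H    : current coefficient vector H(n-1)
    Cinv : current estimate of C^{-1}(n-1), see Eqs. (13) and (17)
    Xe   : expanded input vector X_e(n) of Eq. (5)
    d_n  : desired response d(n)
    lam  : forgetting factor (lambda) controlling the memory span
    """
    u = Cinv @ Xe
    k = u / (lam + Xe @ u)                         # gain vector k(n)
    e = d_n - H @ Xe                               # a priori error
    H = H + k * e                                  # move towards H(n) = C^{-1}(n) P(n)
    Cinv = (Cinv - np.outer(k, Xe) @ Cinv) / lam   # Eq. (17), matrix inversion lemma
    return H, Cinv, e

# Typical initialization for a filter with M expanded coefficients.
M = 9                        # e.g. N = 3 linear taps plus 6 quadratic terms
H = np.zeros(M)
Cinv = 1e3 * np.eye(M)       # (1/delta) * I with a small positive delta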
VI. SIMULATION RESULTS

In this section, we examine both the adaptive second-order Volterra LMS filter and the adaptive second-order Volterra RLS filter. The left-side graphs of Figures 4 and 5 show the adaptive filter coefficients after convergence, which are almost identical to those of the unknown filter h. The right-side graphs show the squared error in dB versus time during the adaptation process. The lower limit of the error signal power in the learning curve is defined here by the additive white noise added at the filter output (-60 dB).

Figure 4 shows sample-by-sample filtering and coefficient update using the second-order Volterra least mean squares (SOVLMS) algorithm or one of its variants. The learning curve of the second-order Volterra LMS filter shows satisfactory results.

For further improvement in the learning curve, we examine sample-by-sample filtering and coefficient update using the second-order Volterra recursive least squares (SOVRLS) adaptive algorithm, which implements the second-order Volterra RLS filter. The SOVRLS algorithm calculates the filter output and updates the filter coefficient vector; the filter output is the sum of the outputs of the linear filter part and the nonlinear part, as given in the Appendix.

The simulation results show an improvement in the second figure, i.e., SOVRLS is more feasible for system identification than SOVLMS.
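As a small illustration of how the -60 dB noise floor in the learning curves arises, the following Python fragment (with assumed, illustrative values) converts the noise-floor specification into a noise standard deviation and expresses a squared-error sequence in dB.

import numpy as np

# Additive white noise at the filter output sets the lower limit of the
# error power; -60 dB corresponds to a noise variance of 1e-6.
noise_floor_db = -60.0
noise_std = 10.0 ** (noise_floor_db / 20.0)          # = 1e-3

# Stand-in error signal at the noise floor (in a real run this would be the
# adaptation error e(n) produced by the SOVLMS or SOVRLS filter).
e = np.random.default_rng(1).standard_normal(2000) * noise_std

err_db = 10.0 * np.log10(e ** 2 + 1e-30)             # squared error in dB
# A moving average is typically applied before plotting err_db versus time.
learning_curve = np.convolve(err_db, np.ones(50) / 50.0, mode="valid")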
Figure 4. (a) The adaptive filter coefficients after convergence and (b) the learning curve for the FIR system identification problem using the SOVLMS algorithm.

Figure 5. (a) The adaptive filter coefficients after convergence and (b) the learning curve for the FIR system identification problem using the SOVRLS algorithm.
VII. CONCLUSION

It is observed that adaptive polynomial filters are useful in a large number of applications. Most adaptive polynomial system relations with nonlinearity can be related through a Volterra series expansion or a recursive nonlinear difference equation. The Volterra filter has recently gained significant interest in many advanced applications, including acoustic echo cancellation, channel equalization, biological system modeling and image processing. The Volterra filter used here is either a truncated Volterra series or a fixed-order Volterra series. The least mean square (LMS) algorithm and the recursive least squares (RLS) algorithm have established themselves as principal tools for linear adaptive filtering.

In recent years, the most significant work has been a comparative evaluation of the tracking behaviors of the LMS and RLS algorithms. Because of the degradation in the tracking performance of the LMS and RLS algorithms, the Kalman filter is the optimum linear tracking device. In reality, however, using Kalman filter theory with a constant model is clearly not the way to solve the tracking problem for a nonstationary environment.

VIII. REFERENCES

[1] Simon Haykin, "Adaptive Filter Theory", Fourth Edition, Pearson Education, 2008.
[2] V. John Mathews, "Adaptive Polynomial Filters", IEEE SP Magazine, July 1991.
[3] John Leis, "Adaptive Filter Lecture Notes & Examples", November 1, 2008, www.usq.edu.au/users/leis/notes/sigproc/adfilt.pdf.
[4] Tuncer C. Aysal and Kenneth E. Barner, "Myriad-Type Polynomial Filtering", IEEE Transactions on Signal Processing, vol. 55, no. 2, February 2007.
[5] Ezio Biglieri, Sergio Barberis, and Maurizio Catena, "Analysis and Compensation of Nonlinearities in Digital Transmission Systems", IEEE Journal on Selected Areas in Communications, vol. 6, no. 1, January 1988.
[6] Roberto López-Valcarce and Soura Dasgupta, "Second-Order Statistical Properties of Nonlinearly Distorted Phase-Shift Keyed (PSK) Signals", IEEE Communications Letters, vol. 7, no. 7, July 2003.
[7] Dong-Chul Park and Tae-Kyun Jung Jeong, "Complex-Bilinear Recurrent Neural Network for Equalization of a Digital Satellite Channel", IEEE Transactions on Neural Networks, vol. 13, no. 3, May 2002.
[8] John Tsimbinos and Langford B. White, "Error Propagation and Recovery in Decision-Feedback Equalizers for Nonlinear Channels", IEEE Transactions on Communications, vol. 49, no. 2, February 2001.
[9] Christoph Krall, Klaus Witrisal, Geert Leus and Heinz Koeppl, "Minimum Mean-Square Error Equalization for Second-Order Volterra Systems", IEEE Transactions on Signal Processing, vol. 56, no. 10, October 2008.
[10] Alexandre Guérin, Gérard Faucon, and Régine Le Bouquin-Jeannès, "Nonlinear Acoustic Echo Cancellation Based on Volterra Filters", IEEE Transactions on Speech and Audio Processing, vol. 11, no. 6, November 2003.
[11] Yang-Wang Fang, Li-Cheng Jiao, Xian-Da Zhang and Jin Pan, "On the Convergence of Volterra Filter Equalizers Using a Pth-Order Inverse Approach", IEEE Transactions on Signal Processing, vol. 49, no. 8, August 2001.
[12] Kenneth E. Barner and Tuncer Can Aysal, "Polynomial Weighted Median Filtering", IEEE Transactions on Signal Processing, vol. 54, no. 2, February 2006.
[13] Georgeta Budura and Corina Botoca, "Efficient Implementation of the Third Order RLS Adaptive Volterra Filter", FACTA Universitatis (NIS), Ser.: Elec. Energ., vol. 19, no. 1, April 2006.
[14] A. Zaknich, "Principles of Adaptive Filters and Self Learning Systems", Springer, 2005.
[15] Charles W. Therrien, W. Kenneth Jenkins, and Xiaohui Li, "Optimizing the Performance of Polynomial Adaptive Filters: Making Quadratic Filters Converge Like Linear Filters", IEEE Transactions on Signal Processing, vol. 47, no. 4, April 1999.
Amrita Rai is currently pursuing a PhD in DSP & VLSI design at Thapar University, Patiala. After obtaining a BTech in ECE from the College of Engineering, Chandrapur (Nagpur University) in 1998, she received an MTech as a university topper from Maharshi Dayanand University, Rohtak, in the field of Power Electronics & Electrical Drives. Earlier, she worked at Superior Product Industry Ltd. as a Design & Development Engineer for four years. Since 2005 she has been teaching at Lingaya's Institute of Management & Technology, Faridabad. She has published papers in international journals such as the IFSA (International Frequency and Sensor Association) publications and has attended national and international IEEE conferences.
Dr Amit Kumar Kohli is currently Assistant Professor in the Department of Electronics and Communication Engineering, Thapar University, Patiala. He specializes in the areas of Signal Processing and Wireless Communication Engineering. He obtained his PhD from IIT Roorkee in 2006 and his ME from Thapar University in 2002. He received an invitation from American Journal Experts for his significant contribution in the field of Signal Processing and has published a large number of papers in national and international journals. He is a reviewer and a member of the editorial boards of journals published by IEEE, Elsevier and Springer, and has won several awards during his student days and professional career.
IX. APPENDIX
The filter output is the sum of the outputs of the linear filter part and the nonlinear part.