Turkish Journal of Electrical Engineering & Computer Sciences
http://journals.tubitak.gov.tr/elektrik/
Research Article • doi:10.3906/elk-1306-232 • © TÜBİTAK

Convex combination of recursive inverse algorithms

Mohammad Shukri SALMAN¹, Alaa Ali HAMEED², Bekir KARLIK²,*
¹Electrical and Electronic Engineering Department, Mevlana University, Selçuklu, Konya, Turkey
²Department of Computer Engineering, Selçuk University, Selçuklu, Konya, Turkey
*Correspondence: [email protected]

Received: 27.06.2013 • Accepted/Published Online: 05.11.2013

Abstract: In this paper, a convex combination of the recently proposed recursive inverse (RI) and second-order RI algorithms is proposed in a system identification setting. The performance of the combination is compared to that of a combination of normalized least-mean-square (NLMS) algorithms in terms of mean-square error (MSE) and rate of convergence. Simulations show that the proposed algorithm provides a faster convergence rate with a lower MSE than the combined NLMS algorithms in both additive white Gaussian noise and additive correlated Gaussian noise environments.

Key words: Adaptive filter, recursive inverse algorithms, convex combination

1. Introduction

Digital filters are classified as linear or nonlinear, continuous-time or discrete-time, and recursive or nonrecursive [1]. Adaptive filtering techniques are frequently used in signal-processing applications [2,3]. Usually there is a tradeoff between the steady-state mean-square error (MSE) and the initial convergence rate of an adaptive filter [4,5]. For the least-mean-square (LMS) algorithm [4] and its variants [2,5,6], this tradeoff is usually controlled by the step size: a small step size leads to a relatively slow convergence rate with a low MSE, and vice versa. In the case of least-squares (LS) algorithms, the tradeoff can be controlled by the forgetting factor and the initialization of the autocorrelation matrix or its inverse.

In recent decades, convex combinations of adaptive filtering algorithms have frequently been used to overcome this tradeoff [7-11]. The convex scheme combines two adaptive filters; one possibility is depicted in Figure 1. The outputs and error estimates of the two adaptive filters are combined using the parameter λ(k). In [7], Mandic et al. proposed a collaborative adaptive learning algorithm using hybrid filters. They combined two different adaptive filters in order to attain a lower MSE with a high convergence rate; however, the combined filter cannot exactly track the learning curves of both component filters. In [11], Trump analyzed the tracking performance of a combination of two NLMS adaptive filters, whose combiner can track the learning curves of the combined filters.

The recently proposed recursive inverse (RI) adaptive filtering algorithm [12] has shown better performance than the NLMS algorithm in terms of rate of convergence. In [13], by using second-order estimates of the correlations, the second-order RI algorithm attains an MSE that cannot be reached by the NLMS algorithm. Hence, a convex combination of these two RI algorithms is of interest, in the sense that it provides faster convergence and lower MSE curves than previously proposed convex combinations.

Figure 1. Convex combination of two adaptive filters for system identification.
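To make the setting of Figure 1 concrete, the following minimal sketch implements one time step of the combined scheme. The fixed value of λ, the placeholder system h, and all variable names are our illustrative assumptions; Eqs. (14)-(16) below define the actual quantities and the adaptation of λ(k).

```python
import numpy as np

# One time step of the scheme in Figure 1: an unknown FIR system h produces
# d(k) = h^T x(k) + v(k) (Eq. (14) below), two adaptive filters w1, w2 run on
# the same input, and their outputs are mixed by lambda(k) (Eq. (15) below).
rng = np.random.default_rng(0)
N = 16                                   # filter length used in Section 4
h = rng.standard_normal(N)               # unknown system (placeholder values)
w1, w2 = np.zeros(N), np.zeros(N)        # weight vectors of the two filters
lam = 0.5                                # combination parameter lambda(k)

x = rng.standard_normal(N)               # tap-input vector x(k)
v = 0.03 * rng.standard_normal()         # measurement noise v(k)
d = h @ x + v                            # desired response d(k)

y1, y2 = w1 @ x, w2 @ x                  # individual filter outputs y_i(k)
y = lam * y1 + (1.0 - lam) * y2          # combined output, Eq. (15)
e1, e2 = d - y1, d - y2                  # individual error signals e_i(k)
```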
In this paper, we propose a convex combination of the RI algorithms and compare its performance to that of the convex NLMS combination, in terms of MSE and convergence rate, in a system identification setting under both additive white Gaussian noise (AWGN) and additive correlated Gaussian noise (ACGN).

This paper is organized as follows. In Section 2, the RI algorithms are reviewed. In Section 3, the convex combination of the RI algorithms is presented. Simulation results comparing the proposed algorithm with the convex NLMS combination in a system identification setting are given in Section 4. Finally, conclusions are drawn in Section 5.

The notation used in this paper is as follows: lowercase letters stand for scalars, bold lowercase letters for vectors, and bold uppercase letters for matrices; the superscript T denotes transposition.

2. Recursive inverse algorithms

Any stationary discrete-time stochastic process can be expressed as

$x(k) = u(k) + v(k)$,    (1)

where u(k) is the desired signal and v(k) is a noise process. Removing the noise from x(k) is a challenge in many signal-processing applications. One approach is the adaptive filtering scheme shown in Figure 2, where y(k) is the filter output, d(k) is the desired response, and e(k) is the estimation error.

Figure 2. Block diagram of the statistical filtering problem.

Many adaptive algorithms have been used to update the coefficients of the filter shown in Figure 2. In the recently proposed RI algorithm [12], the autocorrelation matrix itself is recursively estimated rather than its inverse. The weight-update equation of the RI algorithm is

$\mathbf{w}(k) = [\mathbf{I} - \mu(k)\mathbf{R}(k)]\,\mathbf{w}(k-1) + \mu(k)\,\mathbf{p}(k)$,    (2)

where k is the time index (k = 1, 2, ...), w(k) is the filter weight vector of length N at time k, I is the N × N identity matrix, μ(k) is the variable step size, R(k) is the autocorrelation matrix of the tap-input vector, and p(k) is the cross-correlation vector between the tap-input vector and the desired response of the adaptive filter. The correlations of the tap-input vector and the desired response are recursively estimated as

$\mathbf{R}(k) = \beta\,\mathbf{R}(k-1) + \mathbf{x}(k)\mathbf{x}^T(k)$,    (3)

$\mathbf{p}(k) = \beta\,\mathbf{p}(k-1) + d(k)\,\mathbf{x}(k)$,    (4)

where β is the forgetting factor, usually selected very close to unity, and the step size μ(k) is given by

$\mu(k) = \dfrac{\mu_0}{1 - \beta^k}$, where $\mu_0 < \mu_{\max}$ and $\mu_{\max} < \dfrac{2}{\lambda_{\max}(\mathbf{R}(k))}$,    (5)

and $\lambda_{\max}(\mathbf{R}(k))$ is the maximum eigenvalue of R(k).
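As a minimal per-sample sketch (not the authors' implementation), the first-order RI update of Eqs. (2)-(5) can be written as follows; the function name, the zero initialization, and the default parameter values (taken from Section 4) are our assumptions.

```python
import numpy as np

def ri_update(w, R, p, x, d, k, beta=0.991, mu0=0.00146):
    """One iteration of the first-order RI algorithm (Eqs. (2)-(5)).

    w: weight vector, R: autocorrelation estimate, p: cross-correlation
    estimate, x: tap-input vector, d: desired response, k: 1-based index.
    """
    R = beta * R + np.outer(x, x)                 # Eq. (3)
    p = beta * p + d * x                          # Eq. (4)
    mu = mu0 / (1.0 - beta ** k)                  # Eq. (5)
    w = (np.eye(len(w)) - mu * R) @ w + mu * p    # Eq. (2)
    return w, R, p

# Usage: start from w = zeros(N), R = zeros((N, N)), p = zeros(N) and call
# ri_update once per sample; mu0 must satisfy mu0 < mu_max < 2 / lambda_max(R).
```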
To improve the performance of the RI algorithm, a second-order estimate of the correlations [13], with the same weight-update equation as in Eq. (2), can be used:

$\mathbf{R}(k) = \beta_1\,\mathbf{R}(k-1) + \beta_2\,\mathbf{R}(k-2) + \mathbf{x}(k)\mathbf{x}^T(k)$,    (6)

$\mathbf{p}(k) = \beta_1\,\mathbf{p}(k-1) + \beta_2\,\mathbf{p}(k-2) + d(k)\,\mathbf{x}(k)$.    (7)

By selecting $\beta_1 = \beta_2 = \frac{1}{2}\beta$, the computational complexity of the second-order RI algorithm is comparable to that of the first-order RI algorithm. Taking the expectation of Eq. (6) gives

$\bar{\mathbf{R}}(k) = \frac{1}{2}\beta\,\bar{\mathbf{R}}(k-1) + \frac{1}{2}\beta\,\bar{\mathbf{R}}(k-2) + \mathbf{R}_{xx}$,    (8)

where $\mathbf{R}_{xx} = E\{\mathbf{x}(k)\mathbf{x}^T(k)\}$ and $\bar{\mathbf{R}}(k) = E\{\mathbf{R}(k)\}$. The transfer function associated with Eq. (8) has the poles

$z_1 = \frac{1}{4}\left(\beta - \sqrt{\beta^2 + 8\beta}\right), \qquad z_2 = \frac{1}{4}\left(\beta + \sqrt{\beta^2 + 8\beta}\right)$,    (9)

where z1 and z2 have magnitudes of less than unity if β < 1. Solving Eq. (8) with the initial conditions $\bar{\mathbf{R}}(-2) = \bar{\mathbf{R}}(-1) = \bar{\mathbf{R}}(0) = \mathbf{0}$ yields

$\bar{\mathbf{R}}(k) = \left(\dfrac{1}{1-\beta} + \alpha_1 z_1^k + \alpha_2 z_2^k\right)\mathbf{R}_{xx}$,    (10)

where

$\alpha_1 = \dfrac{\beta - z_2}{(1-\beta)(z_2 - z_1)}, \qquad \alpha_2 = \dfrac{\beta - z_1}{(1-\beta)(z_1 - z_2)}$.    (11)

Letting

$\gamma(k) = \dfrac{1}{1-\beta} + \alpha_1 z_1^k + \alpha_2 z_2^k$,    (12)

the variable step size of the second-order RI algorithm is then

$\mu(k) = \dfrac{\mu_0}{\gamma(k)}$,    (13)

where $\mu_0$ and $\gamma(k)$ are defined in Eqs. (5) and (12), respectively. This variable step size enables the algorithm to reach a low MSE that cannot be attained by the NLMS or the first-order RI algorithm.
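A companion sketch of the second-order RI recursion and its variable step size (Eqs. (6), (7), (9), (11)-(13)), under the same caveats as above; the two-sample state bookkeeping, the function names, and the defaults (the Section 4 values) are our choices.

```python
import numpy as np

def ri2_step_size(k, beta=0.997, mu0=0.05):
    """Variable step size of the second-order RI algorithm, Eqs. (9)-(13)."""
    root = np.sqrt(beta ** 2 + 8.0 * beta)
    z1, z2 = 0.25 * (beta - root), 0.25 * (beta + root)       # Eq. (9)
    a1 = (beta - z2) / ((1.0 - beta) * (z2 - z1))             # Eq. (11)
    a2 = (beta - z1) / ((1.0 - beta) * (z1 - z2))             # Eq. (11)
    gamma = 1.0 / (1.0 - beta) + a1 * z1 ** k + a2 * z2 ** k  # Eq. (12)
    return mu0 / gamma                                        # Eq. (13)

def ri2_update(w, R1, R2, p1, p2, x, d, k, beta=0.997, mu0=0.05):
    """One second-order RI iteration with beta1 = beta2 = beta/2."""
    R = 0.5 * beta * R1 + 0.5 * beta * R2 + np.outer(x, x)    # Eq. (6)
    p = 0.5 * beta * p1 + 0.5 * beta * p2 + d * x             # Eq. (7)
    mu = ri2_step_size(k, beta, mu0)
    w = (np.eye(len(w)) - mu * R) @ w + mu * p                # Eq. (2)
    return w, R, R1, p, p1   # new state: (R(k), R(k-1)) and (p(k), p(k-1))
```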
3. Convex combination of adaptive filters

Consider the combination of two adaptive filters in the system identification setting shown in Figure 1. The weight vector of each filter is updated using Eq. (2) with

$e_i(k) = d(k) - \mathbf{w}_i^T(k-1)\,\mathbf{x}(k), \quad i = 1, 2, \qquad d(k) = \mathbf{h}^T\mathbf{x}(k) + v(k)$,    (14)

where $\mathbf{w}_i(k)$ is the tap-weight vector of the ith adaptive filter and v(k) is the measurement noise. The outputs of the two adaptive filters are combined according to [9,10] as

$y(k) = \lambda(k)\,y_1(k) + [1 - \lambda(k)]\,y_2(k)$,    (15)

where $y_i(k) = \mathbf{w}_i^T(k-1)\,\mathbf{x}(k)$, i = 1, 2, are the outputs of the RI and second-order RI filters, respectively. The mixing parameter λ(k) is given as in [10]:

$\lambda(k) = \dfrac{E[(d(k) - y_2(k))(y_1(k) - y_2(k))]}{E[(y_1(k) - y_2(k))^2]}$.    (16)

4. Simulation results

In this section, in order to test the performance of the proposed algorithm under different noise environments, we compare the performances of the combined RI and combined NLMS algorithms in a system identification setting under both AWGN and ACGN. The received signal was generated using a fourth-order autoregressive (AR(4)) model:

$x(k) = 1.79x(k-1) - 1.85x(k-2) + 1.27x(k-3) - 0.41x(k-4) + v_0(k)$,    (17)

where $v_0(k)$ is a white Gaussian signal with zero mean and variance $\sigma_{v_0}^2 = 0.15$; this variance is selected so that the input signal x(k) has unit power. In practice, the expectation operators in Eq. (16) are replaced by exponentially weighted averages of the form

$P_x(k) = (1 - \gamma)P_x(k-1) + \gamma x^2(k)$,    (18)

where x(k) is the signal to be averaged and γ = 0.01.
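Tying Eqs. (15)-(18) together, the following sketch generates the AR(4) input of Eq. (17) and tracks λ(k) by applying the averaging of Eq. (18) to the numerator and denominator of Eq. (16). The stand-in filter outputs, the small positive initialization, and the clipping of λ to [0, 1] are our illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, gamma = 8000, 0.01                       # run length; averaging constant (Eq. (18))

# AR(4) input of Eq. (17), driven by zero-mean white Gaussian noise of variance 0.15.
a = np.array([1.79, -1.85, 1.27, -0.41])
x = np.zeros(K)
for k in range(4, K):
    x[k] = a @ x[k - 4:k][::-1] + np.sqrt(0.15) * rng.standard_normal()

# Track lambda(k) of Eq. (16) with the exponential averaging of Eq. (18).
num, den = 1e-6, 1e-6                       # small positive initialization (our choice)
lam = np.zeros(K)
for k in range(K):
    d = x[k]                                # placeholder for d(k) = h^T x(k) + v(k)
    y1 = d + 0.1 * rng.standard_normal()    # stand-in for the fast filter output
    y2 = d + 0.3 * rng.standard_normal()    # stand-in for the slow filter output
    num = (1 - gamma) * num + gamma * (d - y2) * (y1 - y2)
    den = (1 - gamma) * den + gamma * (y1 - y2) ** 2
    lam[k] = np.clip(num / den, 0.0, 1.0)   # estimate of lambda(k)
```

With these stand-ins, λ(k) settles near 0.9, favoring the more accurate filter; in the proposed algorithm, y1(k) and y2(k) would come from the RI and second-order RI updates sketched above.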
Simulations were performed with the following common settings: the filter length is N = 16 taps, and the noise variance in both experiments is selected to maintain a signal-to-noise ratio (SNR) of 30 dB. All results are averaged over 200 independent runs. The unknown system is assumed to be a low-pass filter with the impulse response depicted in Figure 3.

Figure 3. Impulse response of the unknown system.

4.1. Additive white Gaussian noise

In this experiment, the input signal x(k) is assumed to be corrupted by AWGN. Simulations were run with the following parameters: for the NLMS filters, µ1 = 0.5 and µ2 = 0.1; for the RI, β = 0.991 and µ0 = 0.00146; for the second-order RI, β = 0.997 and µ0 = 0.05. In Figure 4, we observe for both algorithms a fast initial convergence followed by a second, slower convergence toward a lower MSE. In both cases the combination curve follows the fast-converging curve first and the low-MSE curve afterwards. However, the RI combination converges faster than the NLMS combination (by 3700 iterations) with an 8 dB lower MSE.

Figure 4. MSE combination curves of the NLMS and RI algorithms in AWGN with 30 dB SNR.

Figure 5 shows the evolution curves of λ for both algorithms. The λ of the RI combination approaches its minimum value much faster than that of the NLMS combination, which confirms the results in Figure 4. This high performance of the convex RI is due to the use of the variable step size and the instantaneous estimates of the correlations, which, in turn, enhance the performance of the proposed algorithm. To check the robustness of the proposed algorithm against changes in the SNR, Figure 6 is provided: the proposed algorithm maintains an essentially fixed MSE advantage over the convex NLMS algorithm across the SNR range.

Figure 5. Evolution curves of λ in AWGN with 30 dB SNR.

Figure 6. MSE of the convex RI and convex NLMS algorithms at different SNRs in AWGN.

To test the sensitivity of the proposed algorithm to its parameters, the first experiment was repeated with the following values: for the NLMS filters, µ1 = 0.4 and µ2 = 0.1; for the RI, β = 0.993 and µ0 = 0.001; for the second-order RI, β = 0.995 and µ0 = 0.04. From Figure 7, the combined RI is faster than the combined NLMS by about 4000 iterations with an 8 dB lower MSE. The λ curves of both algorithms are shown in Figure 8; the evolution of λ for the RI combination is again faster than that of the NLMS combination. From this we conclude that the performance of the proposed algorithm is robust against a relative change in its parameters.

Figure 7. MSE combination curves of the NLMS and RI algorithms in extensive simulations under AWGN with 30 dB SNR.

Figure 8. Evolution curves of λ in extensive simulations under AWGN with 30 dB SNR.

4.2. Additive correlated Gaussian noise

To investigate the effect of the noise type on the proposed algorithm, the input signal x(k) is now assumed to be corrupted by ACGN. The ACGN is created using an AR(1) model, η(k) = 0.9η(k−1) + v(k), where v(k) is a white Gaussian signal with zero mean and a variance that maintains 30 dB SNR. Simulations were run with the following parameters: for the NLMS filters, µ1 = 0.5 and µ2 = 0.1; for the RI, β = 0.990 and µ0 = 0.0015; for the second-order RI, β = 0.997 and µ0 = 0.05. In Figure 9, we again observe for both algorithms a fast initial convergence followed by a slower convergence toward a lower MSE, with the combination curve following the fast-converging and low-MSE curves in both cases. However, the RI combination converges faster than the NLMS combination (by 850 iterations) with a 9.5 dB lower MSE. The proposed algorithm thus outperforms the convex NLMS algorithm by a larger margin (9.5 dB) than in the AWGN case (8 dB); this improvement is due to the instantaneous estimates of the correlations. Figure 10 shows the evolution curves of λ for both algorithms. The λ of the RI combination approaches its minimum value faster than that of the NLMS combination, which confirms the results in Figure 9. Moreover, the evolution curve of λ for the NLMS combination fails to reach that minimum value, which means that the NLMS combination fails to reach the optimum MSE.

Figure 9. MSE combination curves of the NLMS and RI algorithms in ACGN with 30 dB SNR.

Figure 10. Evolution curves of λ in ACGN with 30 dB SNR.

To investigate the performance of the proposed algorithm as the SNR changes, Figure 11 is provided. The proposed algorithm maintains an almost fixed MSE advantage over the convex NLMS algorithm (around 9 dB) across the SNR range.

Figure 11. MSE of the convex RI and convex NLMS algorithms at different SNRs in ACGN.

5. Conclusions

In this paper, we have presented a convex combination of the recently proposed RI and second-order RI algorithms and compared its performance to that of a combination of NLMS algorithms in a system identification setting. Simulation results show that the proposed combination of RI algorithms provides much better performance, in terms of MSE and rate of convergence, than the combined NLMS algorithms in both AWGN and ACGN environments. This gain in performance is due to the use of the instantaneous estimates of the correlations and the variable step size in the update equation of the proposed algorithm. A tracking analysis of the proposed algorithm would be an interesting topic for future study.

Acknowledgment

This study was supported by the Scientific Research Project Unit of Selçuk University.
References

[1] Ozbay Y, Karlik B, Kavsaoglu AR. A Windows-based digital filter design. Math Comput Appl 2003; 8: 287–294.

[2] Sayed AH. Adaptive Filters. Hoboken, NJ, USA: John Wiley & Sons, 2008.

[3] Sayed AH. Fundamentals of Adaptive Filtering. New York, NY, USA: Wiley, 2003.

[4] Widrow B, Stearns SD. Adaptive Signal Processing. Upper Saddle River, NJ, USA: Prentice Hall, 1985.

[5] Haykin S. Adaptive Filter Theory. Upper Saddle River, NJ, USA: Prentice Hall, 2002.

[6] Bilcu RC, Kuosmanen P, Egiazarian K. A transform domain LMS adaptive filter with variable step-size. IEEE Signal Proc Let 2002; 9: 51–53.

[7] Mandic D, Vayanos P, Boukis C, Jelfs B, Goh SL, Gautama T, Rutkowski T. Collaborative adaptive learning using hybrid filters. In: IEEE International Conference on Acoustics, Speech, and Signal Processing; 16–20 April 2007; Honolulu, HI, USA. New York, NY, USA: IEEE. pp. 901–924.

[8] Kozat SS, Singer AC. A performance-weighted mixture of LMS filters. In: IEEE International Conference on Acoustics, Speech, and Signal Processing; 19–24 April 2009; Taipei, Taiwan. New York, NY, USA: IEEE. pp. 3101–3104.

[9] Arenas-Garcia J, Figueiras-Vidal AR, Sayed AH. Mean-square performance of a convex combination of two adaptive filters. IEEE T Signal Proces 2006; 54: 1078–1090.

[10] Azpicueta-Ruiz LA, Figueiras-Vidal AR, Arenas-Garcia J. A normalized adaptation scheme for the convex combination of two adaptive filters. In: IEEE International Conference on Acoustics, Speech, and Signal Processing; 30 March–4 April 2008; Las Vegas, NV, USA. New York, NY, USA: IEEE. pp. 3137–3149.

[11] Trump T. Tracking performance of a combination of two NLMS adaptive filters. In: IEEE 15th Workshop on Statistical Signal Processing; 29 August–1 September 2009; Cardiff, UK. New York, NY, USA: IEEE. pp. 181–184.

[12] Ahmad MS, Kukrer O, Hocanin A. Recursive inverse adaptive filtering algorithm. Digit Signal Process 2011; 21: 491–496.

[13] Ahmad MS, Kukrer O, Hocanin A. Recursive inverse adaptive filter with second order estimation of autocorrelation matrix. In: IEEE International Symposium on Signal Processing and Information Technology; 15–18 December 2010; Luxor, Egypt. New York, NY, USA: IEEE. pp. 482–484.