Proceedings of the 20th Joint Conference on Signal Processing, 2007 (信號處理合同學術大會論文集), Vol. 20, No. 1

Partial-Update L∞-norm Based Adaptive Filtering Algorithm with Sparse Updates

Seong-Eun Kim, Young-Seok Choi, Jae-Eun Lee, and Woo-Jin Song
Dept. of Electronic and Electrical Engineering, Pohang University of Science and Technology (POSTECH)
[email protected]

Abstract: We present a partial-update normalized sign least-mean square (NSLMS) algorithm with sparse updates. A new constrained minimization problem, based on minimum disturbance in the $L_\infty$-norm sense, is developed. The proposed algorithm, referred to herein as the set-membership partial-update normalized sign least-mean square (SM-PU-NSLMS) algorithm, is derived by solving this minimization problem. This efficient algorithm decreases both the number of updates and the computational complexity per iteration compared with conventional $L_2$-norm based low-complexity algorithms. Experimental results show that the SM-PU-NSLMS algorithm achieves good convergence performance with greatly reduced computational complexity.

Keywords: NSLMS algorithm, partial update, sparse updates

Ⅰ. Introduction

Adaptive filtering algorithms range from the simple least-mean square (LMS) algorithm to the more complex recursive least squares (RLS) algorithm, with tradeoffs between computational complexity and convergence speed. Among them, the normalized least-mean square (NLMS) algorithm is widely used due to its low computational complexity and simplicity. However, when a large number of coefficients is required, the increased complexity reduces the usefulness of the NLMS algorithm [1]-[3]. To deal with this obstacle, the normalized sign LMS (NSLMS) algorithm, which is based on the minimum $L_\infty$-norm, has been developed [4]. The NSLMS algorithm reduces the computational complexity by updating the filter coefficients in a quantized form: compared with the NLMS, it has reduced complexity since it normalizes by the sum of the absolute values of the input vector. Despite this conspicuous advantage, the NSLMS is still not sufficient for filters with a large number of coefficients. As an alternative, one may choose to update only part of the filter coefficient vector at each iteration. Such an algorithm, referred to as a partial-update (PU) algorithm, can reduce the computational complexity while performing close to its full-update counterpart in terms of convergence rate and final mean-squared error (MSE). In the literature, the partial-update NSLMS (PU-NSLMS) algorithm was provided in [5].

As another approach to reducing the computational complexity, making the coefficient updates sparse in time reduces the average computational complexity. Such algorithms can be derived within the framework of set-membership filtering (SMF), which employs a bounded error constraint on the filter output [6]-[8].

Motivated by these frameworks, we present a low-complexity $L_\infty$-norm based adaptive filtering algorithm which incorporates both the partial-update scheme and the sparse updates. The proposed algorithm profits from the sparse updates related to the set-membership framework, which reduce the average computational complexity, as well as from the reduced per-iteration complexity obtained with the partial update of the coefficient vector.

Throughout the paper, the following norms are adopted: $\|\cdot\|_\infty$ denotes the $L_\infty$-norm of a vector, $\|\cdot\|$ the Euclidean norm ($L_2$-norm) of a vector, and $\|\cdot\|_1$ the $L_1$-norm of a vector.
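To make this notation concrete, the following is a minimal NumPy illustration of the three norms; the example vector is ours, not from the paper:

```python
import numpy as np

u = np.array([0.5, -2.0, 1.5])      # example input vector (illustrative only)

linf = np.max(np.abs(u))            # L-infinity norm: max_k |u_k|  -> 2.0
l2   = np.linalg.norm(u)            # Euclidean (L2) norm           -> ~2.55
l1   = np.sum(np.abs(u))            # L1 norm: sum_k |u_k|          -> 4.0
```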
This paper is organized as follows. Section Ⅱ introduces the low-complexity $L_\infty$-norm based adaptive filtering algorithms employing the partial-update scheme and the sparse updates, respectively. In Section Ⅲ, we derive the proposed algorithm. In Section Ⅳ, we present the computational complexity and the convergence performance of the proposed algorithm. Finally, conclusions are drawn in Section Ⅴ.

Ⅱ. Low-Complexity L∞-norm Based Adaptive Filtering Algorithms

Ⅱ.1. Partial-Update NSLMS Algorithm

This subsection reviews the PU-NSLMS algorithm proposed in [5]. The purpose of the PU-NSLMS algorithm is to update only $L$ out of the $N$ filter coefficients. Let the $L$ coefficients to be updated at time instant $i$ be determined by an index set $\mathcal{I}_L(i)$ defined by

$$\mathcal{I}_L(i) = \{t_1(i), t_2(i), \ldots, t_L(i)\} \qquad (1)$$

where $\{t_k(i)\}_{k=1}^{L}$ is taken from the set $\{1, \ldots, N\}$. The matrix $\mathbf{A}_{\mathcal{I}_L(i)}$ is an $N \times N$ diagonal matrix having $L$ elements equal to one in the positions indicated by $\mathcal{I}_L(i)$ and zeros elsewhere. Assume the $N \times 1$ input vector

$$\mathbf{u}_i = [u(i)\;\; u(i-1)\;\; \cdots\;\; u(i-N+1)]^T \qquad (2)$$

and a desired signal $d(i)$. Let the filter coefficient vector $\mathbf{w}_i$ be an estimate of $\mathbf{w}_o$ at every iteration $i$. Then the estimation error is given by

$$e(i) = d(i) - \mathbf{w}_i^T \mathbf{u}_i. \qquad (3)$$

From the coefficient selection matrix $\mathbf{A}_{\mathcal{I}_L(i)}$ and the error control procedure developed in [4], the PU-NSLMS algorithm is derived in [5]. We regard the PU-NSLMS algorithm herein as the M-max PU-NSLMS algorithm proposed in [5], which updates the filter coefficients corresponding to the $L$ largest elements of the vector $\mathbf{u}_i$. The update equation for the PU-NSLMS algorithm can be written as

$$\mathbf{w}_{i+1} = \mathbf{w}_i + \mu\,\frac{e(i)}{\|\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\|_1}\,\operatorname{sign}\{\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\} \qquad (4)$$

where $\mu$ is the step-size parameter and $\mathcal{I}_L(i) = \{t_k(i)\}_{k=1}^{L}$, with

$$t_k(i) = \begin{cases} 1, & \text{if } k \text{ corresponds to one of the first } L \text{ largest maxima of } |u(i-k+1)| \\ 0, & \text{otherwise.} \end{cases}$$

Stability of the PU-NSLMS algorithm is guaranteed if $0 < \mu < \mu_{\max}$, where [5]

$$\mu_{\max} = \frac{2L^2}{N^2}\,\frac{E\{\|\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\|_1\}}{E\{\|\mathbf{u}_i\|_1\}}.$$

Ⅱ.2. Sparse Updates

In SMF, the filter coefficient vector $\mathbf{w}$ is designed to achieve a specified bound $\gamma$ on the magnitude of the estimation error [6]. If the specified bound is correctly chosen, there is an infinite number of valid acceptable estimates of $\mathbf{w}$. At time instant $i$, the constraint set $\mathcal{H}_i$ is defined as the set containing any vector $\mathbf{w}$ consistent with the pair $\{\mathbf{u}_i, d(i)\}$, which is given by

$$\mathcal{H}_i = \{\mathbf{w} \in \mathbb{R}^N : |d(i) - \mathbf{w}^T\mathbf{u}_i| \le \gamma\}. \qquad (5)$$

Let the exact feasibility set $\psi_i$ denote the intersection of the constraint sets up to time instant $i$, $\psi_i = \bigcap_{n=1}^{i}\mathcal{H}_n$. To obtain $\mathbf{w}$ which satisfies the exact feasibility set $\psi_i$, adaptive approaches are usually used. The low-complexity minimum $L_\infty$-norm adaptive algorithm with sparse updates, namely the set-membership NSLMS (SM-NSLMS) algorithm, is proposed in [7]. This algorithm is obtained by solving the constrained minimization problem

$$\mathbf{w}_{i+1} = \arg\min_{\mathbf{w}} \|\mathbf{w} - \mathbf{w}_i\|_\infty \quad \text{subject to } |d(i) - \mathbf{w}^T\mathbf{u}_i| \le \gamma. \qquad (6)$$

As in the error control procedure described in [4], we can obtain the filter coefficient vector $\mathbf{w}_{i+1}$. Thus, the update equation is given by

$$\mathbf{w}_{i+1} = \mathbf{w}_i + \alpha(i)\,\frac{e(i)}{\|\mathbf{u}_i\|_1}\,\operatorname{sign}\{\mathbf{u}_i\}. \qquad (7)$$

It is shown in [7] that $\alpha(i) = 1 - \gamma/|e(i)|$ satisfies the constraint in (6), through the geometric interpretation of (7).
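To summarize the two building blocks of this section, the following is a minimal NumPy sketch of the M-max PU-NSLMS update of (4) and the SM-NSLMS update of (6)-(7). The function and variable names are ours, not from [5] or [7], and edge cases (e.g., a zero regressor) are not handled:

```python
import numpy as np

def pu_nslms_update(w, u, d, mu, L):
    """One PU-NSLMS iteration, eq. (4): update only the L coefficients
    selected by the M-max criterion (largest |u_k|)."""
    e = d - w @ u                          # a priori error, eq. (3)
    idx = np.argsort(np.abs(u))[-L:]       # index set I_L(i), eq. (1)
    a_u = np.zeros_like(u)
    a_u[idx] = u[idx]                      # A_{I_L(i)} u_i
    return w + mu * e / np.sum(np.abs(a_u)) * np.sign(a_u)

def sm_nslms_update(w, u, d, gamma):
    """One SM-NSLMS iteration, eqs. (6)-(7): full-length update, performed
    only when |e(i)| exceeds the bound gamma (sparse in time)."""
    e = d - w @ u
    if np.abs(e) <= gamma:                 # w_i already lies in H_i: skip
        return w
    alpha = 1.0 - gamma / np.abs(e)        # data-dependent step size
    return w + alpha * e / np.sum(np.abs(u)) * np.sign(u)
```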
Ⅲ. Proposed Partial-Update NSLMS Algorithm with Sparse Updates

Our objective is to reduce the computational complexity while performing close to the NLMS family. The resulting algorithm benefits from both the partial update and the sparse updates, as well as from the reduced computational complexity of $L_\infty$-norm adaptive filtering. Our approach, similar to that of the SM-NSLMS, is to seek a coefficient vector that minimizes the change between consecutive filter coefficient vectors in the $L_\infty$-norm sense, $\|\mathbf{w}_{i+1} - \mathbf{w}_i\|_\infty$, subject to the constraint $\mathbf{w}_{i+1} \in \mathcal{H}_i$, with the additional constraint of updating only $L$ coefficients. Then, the constrained minimization problem can be written as

$$\mathbf{w}_{i+1} = \arg\min_{\mathbf{w}} \|\mathbf{w} - \mathbf{w}_i\|_\infty \quad \text{subject to } |d(i) - \mathbf{w}^T\mathbf{u}_i| \le \gamma \text{ and } \bar{\mathbf{A}}_{\mathcal{I}_L(i)}(\mathbf{w} - \mathbf{w}_i) = \mathbf{0} \qquad (8)$$

where $\bar{\mathbf{A}}_{\mathcal{I}_L(i)} = \mathbf{I} - \mathbf{A}_{\mathcal{I}_L(i)}$ is a complementary matrix which expresses that only $L$ coefficients are updated.

Following the procedure proposed in [4], we can obtain the filter coefficient vector $\mathbf{w}_{i+1}$ by adjusting $\mathbf{w}_i$ with a vector $\mathbf{A}_{\mathcal{I}_L(i)}\boldsymbol{\delta}_i$ so as to minimize the error signal $e(i)$, i.e., $\mathbf{w}_{i+1} = \mathbf{w}_i + \mathbf{A}_{\mathcal{I}_L(i)}\boldsymbol{\delta}_i$. Let the a posteriori error $r(i)$ be defined as

$$r(i) = d(i) - (\mathbf{w}_i + \mathbf{A}_{\mathcal{I}_L(i)}\boldsymbol{\delta}_i)^T \mathbf{u}_i. \qquad (9)$$

In order to minimize the error signal, the relation $r(i) = (1 - \alpha(i))e(i)$ is introduced in [4], where $\alpha(i)$ is a parameter to be selected in the interval $(0,1)$. Thus, we have

$$\alpha(i)e(i) = \boldsymbol{\delta}_i^T \mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i. \qquad (10)$$

Then, the minimum $L_\infty$-norm solution vector $\boldsymbol{\delta}_i$ is given by

$$\boldsymbol{\delta}_i = \frac{\alpha(i)e(i)}{\|\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\|_1}\,\operatorname{sign}\{\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\}. \qquad (11)$$

Thus, the update equation is obtained in a similar manner as

$$\mathbf{w}_{i+1} = \mathbf{w}_i + \frac{\alpha(i)e(i)}{\|\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\|_1}\,\operatorname{sign}\{\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\} \qquad (12)$$

where $\alpha(i)$ is data dependent and given by

$$\alpha(i) = \begin{cases} 1 - \gamma/|e(i)|, & \text{if } |e(i)| > \gamma \\ 0, & \text{otherwise} \end{cases} \qquad (13)$$

and $\mathcal{I}_L(i) = \{t_k(i)\}_{k=1}^{L}$ is the same as that used in the PU-NSLMS.

[Fig. 1. Geometric interpretation of the proposed algorithm]

Fig. 1 illustrates the geometric interpretation for $N=2$ and $L=1$. For a given filter coefficient vector $\mathbf{w}_i$, the vectors that have the same distance from $\mathbf{w}_i$ in the $L_\infty$-norm sense lie on a square centered at $\mathbf{w}_i$. Thus, the updated filter coefficient vector $\mathbf{w}_{i+1}$, which satisfies $\|\mathbf{w}_{i+1} - \mathbf{w}_i\|_\infty = \delta$, can be any point on the square represented by the dashed line in Fig. 1. In addition, the direction of $\operatorname{sign}\{\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\}$ is $(\pm 1, 0)$ or $(0, \pm 1)$. Since $\mathbf{w}_{i+1}$ is obtained so as to satisfy the constraint set $\mathcal{H}_i$, $\mathbf{w}_{i+1}$ belongs to the closest bounding hyperplane of $\mathcal{H}_i$; therefore, $\mathbf{w}_{i+1}$ lies on the point presented in Fig. 1.

We investigate the step size $\alpha(i)$ to guarantee the stability of the proposed algorithm, since the step size of the PU-NSLMS is limited to a $\mu_{\max}$ that depends on $L$, as shown in Section Ⅱ. If $\mathbf{w}_{i+1}$ is closer to $\mathbf{w}_{NLMS,i+1}$ than $\mathbf{w}_i$ at every iteration, then we conclude that the proposed algorithm converges. In addition, the convergence rate of the update equation improves when $\mathbf{w}_{i+1}$ is updated to be closest to $\mathbf{w}_{NLMS,i+1}$. Along this line of thought, we consider the following update strategy. Let the angle $\theta$ shown in Fig. 1 denote the angle between the direction of $\operatorname{sign}\{\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\}$ and $\mathbf{u}_i$, and let $\theta_0$ denote the angle in the case $(\mathbf{w}_{i+1} - \mathbf{w}_i) \perp (\mathbf{w}_{i+1} - \mathbf{w}_{NLMS,i+1})$. Then $\cos\theta_0$ is given by $\cos\theta_0 = \|\mathbf{w}_i - \mathbf{w}_{i+1}\| / \|\mathbf{w}_i - \mathbf{w}_{NLMS,i+1}\|$, where $\mathbf{w}_{NLMS,i+1} = \mathbf{w}_i + e(i)\mathbf{u}_i/\|\mathbf{u}_i\|^2$ as in the NLMS algorithm. If $\theta > \theta_0$, we increase the threshold temporarily at iteration $i$ to $\gamma_i$ so that $(\mathbf{w}_{i+1} - \mathbf{w}_i) \perp (\mathbf{w}_{i+1} - \mathbf{w}_{NLMS,i+1})$, i.e., $\alpha(i)$ is modified to

$$\alpha(i) = \frac{\|\mathbf{A}_{\mathcal{I}_L(i)}\mathbf{u}_i\|_1^2}{L\,\|\mathbf{u}_i\|^2}.$$

As illustrated in Fig. 1, the constraint set is temporarily expanded in order to provide a feasible solution, so that the stability of the proposed algorithm is guaranteed and the convergence speed improves.
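A minimal sketch of the resulting update, combining (12)-(13) with the M-max selection of Section Ⅱ. The names are ours, and the temporary bound expansion for the $\theta > \theta_0$ case is omitted for brevity:

```python
import numpy as np

def sm_pu_nslms_update(w, u, d, gamma, L):
    """One iteration of the proposed SM-PU-NSLMS, eqs. (12)-(13):
    update the L M-max coefficients only when |e(i)| > gamma.
    Returns the new coefficient vector and an update flag."""
    e = d - w @ u                          # a priori error
    if np.abs(e) <= gamma:                 # inside constraint set H_i: no update
        return w, False
    idx = np.argsort(np.abs(u))[-L:]       # I_L(i): positions of L largest |u_k|
    a_u = np.zeros_like(u)
    a_u[idx] = u[idx]                      # A_{I_L(i)} u_i
    alpha = 1.0 - gamma / np.abs(e)        # eq. (13)
    w = w + alpha * e / np.sum(np.abs(a_u)) * np.sign(a_u)   # eq. (12)
    return w, True
```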
Ⅳ. Experimental Results

We illustrate the performance of the proposed algorithm by carrying out computer simulations in a channel identification scenario. The order of the unknown channel shown in Fig. 2 was N=128. The adaptive filter and the unknown channel are assumed to have the same number of taps. The input signal is randomly generated with zero mean and unit variance. The signal-to-noise ratio (SNR) was set to 30 dB. The mean square error (MSE), $E\{|e(i)|^2\}$, is measured and averaged over 2000 independent trials.

[Fig. 2. Impulse response of the echo path to be identified]

Fig. 3 shows the MSE curves of the NSLMS, the SM-NSLMS [7], the PU-NSLMS, the SM-PU-NLMS, the NLMS, and the proposed algorithms with L=16. We set the step sizes of the NLMS, the NSLMS, and the PU-NSLMS algorithms to 0.7, 0.51, and 0.38, respectively. The error bound $\gamma$ is set to 0.04.

TABLE I. Computational Complexity

ALG.         MULT.    ADD.     DIV.   COMP.
NLMS         2N+4     2N+4     1      -
NSLMS        N+2      2N+2     1      -
PU-NSLMS     N+2      N+L+2    1      2log2(N)+2
SM-NSLMS     N+2      2N+3     2      -
Proposed     N+2      N+L+3    2      2log2(N)+2
SM-PU-NLMS   N+L+2    N+L+3    2      2log2(N)+2

Table I lists the number of multiplications, additions, divisions, and comparisons required by the various algorithms at each iteration. As can be seen, the proposed algorithm requires similar or less computation compared with the other algorithms. Moreover, the proposed algorithm reduces the number of updates through the sparse updates. Fig. 3 shows that the proposed algorithm has better convergence performance than the PU-NSLMS and is comparable to the SM-PU-NLMS, the SM-NSLMS, and the NSLMS, with reduced computational complexity.

[Fig. 3. Plots of MSE of the various algorithms for N=128, L=16: (a) PU-NSLMS (L=16), (b) Proposed (L=16), (c) SM-PU-NLMS (L=16), (d) NSLMS, (e) SM-NSLMS, (f) NLMS]
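A compact sketch of the identification experiment described above, reusing sm_pu_nslms_update from the Section Ⅲ sketch. The random unit-norm channel merely stands in for the measured echo path of Fig. 2, and a single trial is run instead of 2000:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, gamma, snr_db = 128, 16, 0.04, 30

w_o = rng.standard_normal(N)               # stand-in for the Fig. 2 echo path
w_o /= np.linalg.norm(w_o)                 # unit-norm channel -> unit output power
w = np.zeros(N)

n_iter = 10_000
x = rng.standard_normal(n_iter + N)        # zero-mean, unit-variance input
noise_std = 10.0 ** (-snr_db / 20)         # 30 dB SNR at the channel output
mse = np.zeros(n_iter)
n_updates = 0

for i in range(n_iter):
    u = x[i:i + N][::-1]                   # tapped-delay-line regressor u_i
    d = w_o @ u + noise_std * rng.standard_normal()
    mse[i] = (d - w @ u) ** 2              # a priori squared error
    w, updated = sm_pu_nslms_update(w, u, d, gamma, L)
    n_updates += updated

print(f"steady-state MSE ~ {mse[-200:].mean():.2e}, "
      f"updates performed: {n_updates}/{n_iter}")
```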
Ⅴ. Conclusion

We have proposed a reduced-complexity minimum $L_\infty$-norm based adaptive filtering algorithm. The reduction in computational complexity is achieved both by the lower average complexity resulting from the sparse updates and by the lower per-iteration complexity resulting from the partial updating. As a result, an efficient $L_\infty$-norm based adaptive filtering algorithm is derived.

Acknowledgement

This work was supported by the Brain Korea (BK) 21 program funded by the Ministry of Education and by the HY-SDR Research Center at Hanyang University under the ITRC Program of MIC, Korea.

References

[1] S. Haykin, Adaptive Filter Theory, 4th ed. Upper Saddle River, NJ: Prentice Hall, 2002.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering. Englewood Cliffs, NJ: Prentice Hall, 2003.
[3] J. Homer, I. Mareels, and C. Hoang, "Enhanced detection-guided NLMS estimation of sparse FIR-modeled signal channels," IEEE Trans. Circuits Syst. I, vol. 53, no. 8, pp. 1783-1791, Aug. 2006.
[4] S. H. Cho, Y. S. Kim, and J. A. Cadzow, "Adaptive FIR filtering based on minimum L∞-norm," in Proc. IEEE Pacific Rim Conf. Commun., Comput., Signal Process., vol. 2, B.C., Canada, May 1991, pp. 643-646.
[5] A. Tandon, M. N. S. Swamy, and M. O. Ahmad, "Partial-update L∞-norm based algorithms," IEEE Trans. Circuits Syst. I, vol. 54, no. 2, pp. 411-419, Feb. 2007.
[6] S. Gollamudi, S. Nagaraj, S. Kapoor, and Y.-F. Huang, "Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size," IEEE Signal Processing Lett., vol. 5, no. 5, pp. 111-114, May 1998.
[7] J. E. Lee, Y. S. Choi, and W. J. Song, "A low-complexity L∞-norm adaptive filtering algorithm," IEEE Trans. Circuits Syst. II, accepted for publication, 2007.
[8] S. Werner, M. L. R. de Campos, and P. S. R. Diniz, "Partial-update NLMS algorithm with data-selective updating," IEEE Trans. Signal Processing, vol. 52, no. 4, pp. 938-949, Apr. 2004.