IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 4, APRIL 1997

Game Theory Approach to Discrete $H_\infty$ Filter Design

Xuemin Shen and Li Deng

Manuscript received July 18, 1995; revised December 9, 1996. This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. The associate editor coordinating the review of this paper and approving it for publication was Dr. Gregory H. Allen. The authors are with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ont., Canada N2L 3G1. Publisher Item Identifier S 1053-587X(97)02592-0.

Abstract—In this correspondence, a finite-horizon discrete $H_\infty$ filter design with a linear quadratic (LQ) game approach is presented. The exogenous inputs, composed of the "hostile" noise signals and the system initial condition, are assumed to be finite-energy signals with unknown statistics. The design criterion is to minimize the worst possible amplification of the estimation error signals in terms of the exogenous inputs, which is different from the classical minimum-variance estimation error criterion of the Wiener or Kalman filter design. The approach shows how far the estimation error can be reduced under an existence condition on the solution to a corresponding Riccati equation. A numerical example is given to compare the performance of the $H_\infty$ filter with that of the conventional Kalman filter.

I. INTRODUCTION

The celebrated Wiener and/or Kalman estimators have been widely used in noisy signal processing. This type of estimation assumes that the signal-generating processes have known dynamics and that the noise sources have known statistical properties. However, these assumptions may limit the application of the estimators since, in many situations, only approximate signal models are available and/or the statistics of the noise sources are not fully known or are unavailable. In addition, both Wiener and Kalman estimators may not be robust against parameter uncertainty of the signal models.

Recent developments in optimal filtering have focused on $H_\infty$ estimation methods [1]-[10]. The optimal $H_\infty$ estimator is designed to guarantee that the operator relating the noise signals to the resulting estimation errors has an $H_\infty$ norm less than a prescribed positive value. In $H_\infty$ estimation, the noise sources can be arbitrary signals with only a requirement of bounded energy. Since the $H_\infty$ estimation problem involves the minimization of the worst possible amplification of the error signal, it can be viewed as a dynamic, two-person, zero-sum game. In the game, the $H_\infty$ filter (the designer) is a player prepared for the worst strategy that the other player (nature) can provide; i.e., the goal of the filter is to provide a uniformly small estimation error for any process and measurement noises and any initial state.

In this correspondence, we define a difference game in which the state estimator and the disturbance signals (process noise, initial condition, and measurement noise) have the conflicting objectives of, respectively, minimizing and maximizing the estimation error. The minimizer picks the optimal filtered estimate, and the maximizer picks the worst-case disturbance and initial condition. We give a detailed derivation that solves the game and directly produces the solution for the discrete $H_\infty$ filtering problem. A similar design approach has been proposed in [1] and [2] for the continuous case. We then give a numerical example to compare the $H_\infty$ filter with the Kalman filter. The comparison includes the magnitudes of the transfer functions from the process and measurement noises to the estimation errors, as well as the estimates of the true signals. It is shown that the $H_\infty$ filter is more robust than the Wiener and Kalman filters in terms of model uncertainty and gives better estimates.
II. DISCRETE $H_\infty$ FILTER DESIGN

Consider the following discrete-time system:

$$x_{k+1} = A_k x_k + B_k w_k, \qquad y_k = C_k x_k + v_k, \qquad k = 0, 1, \ldots, N-1 \qquad (1)$$

$$z_k = L_k x_k \qquad (2)$$

where $x_k \in R^n$ is the state vector, $w_k \in R^m$ is the process noise vector, $y_k \in R^p$ is the measurement vector, and $v_k \in R^p$ is the measurement noise vector. $A_k$, $B_k$, and $C_k$ are matrices of the appropriate dimensions. Assume that $(A_k, B_k)$ is controllable and $(C_k, A_k)$ is detectable. Define the measurement history as $Y_k = (y_k,\; 0 \le k \le N-1)$. The estimate of the state $\hat{x}_k$ at time $k$ is computed based on the measurement history up to $N-1$. We are not necessarily interested in the estimation of $x_k$ itself but in the estimation of the linear combination of $x_k$ given by (2).

The $H_\infty$ filter is required to provide a uniformly small estimation error $e_k = z_k - \hat{z}_k$ for any $w_k, v_k \in l_2$ and any $x_0 \in R^n$. The measure of performance is then given by

$$J = \frac{\displaystyle\sum_{k=0}^{N-1} \|z_k - \hat{z}_k\|^2_{Q_k}}{\displaystyle \|x_0 - \hat{x}_0\|^2_{p_0^{-1}} + \sum_{k=0}^{N-1} \left\{ \|w_k\|^2_{W_k^{-1}} + \|v_k\|^2_{V_k^{-1}} \right\}} \qquad (3)$$

where $((x_0 - \hat{x}_0), w_k, v_k) \ne 0$, $\hat{x}_0$ is an a priori estimate of $x_0$, $Q_k \ge 0$, $p_0^{-1} > 0$, $W_k > 0$, and $V_k > 0$ are the weighting matrices, and $\|s_k\|^2_R = s_k^T R_k s_k$. The optimal estimate $\hat{z}_k$ among all possible estimates should satisfy the worst-case performance measure

$$\sup J < 1/\gamma \qquad (4)$$

where "sup" stands for supremum, and $\gamma > 0$ is a prescribed level of noise attenuation. The matrices $Q_k \ge 0$, $W_k > 0$, $V_k > 0$, and $p_0 > 0$ are left to the choice of the designer and depend on the performance requirements.

Discrete $H_\infty$ filtering can be interpreted as a minimax problem in which the estimator strategy $\hat{z}_k$ plays against the exogenous inputs $w_k$, $v_k$, and the initial state $x_0$. The performance criterion becomes

$$\min_{\hat{z}_k} \; \max_{(v_k, w_k, x_0)} \; \bar{J} = -\frac{1}{2\gamma}\|x_0 - \hat{x}_0\|^2_{p_0^{-1}} + \frac{1}{2}\sum_{k=0}^{N-1}\left[\|z_k - \hat{z}_k\|^2_{Q_k} - \frac{1}{\gamma}\left(\|w_k\|^2_{W_k^{-1}} + \|v_k\|^2_{V_k^{-1}}\right)\right] \qquad (5)$$

where "min" stands for minimization and "max" for maximization. Note that unlike the traditional minimum-variance filtering approach (Wiener and/or Kalman filtering), $H_\infty$ filtering deals with deterministic disturbances, and no a priori knowledge of the noise statistics is required. Since the observation $y_k$ is given, $v_k$ is uniquely determined by (1) once the optimal values of $w_k$ and $x_0$ are found. Letting $\hat{z}_k = L_k \hat{x}_k$, we can rewrite the performance criterion (5) as

$$\min_{\hat{x}_k} \; \max_{(y_k, w_k, x_0)} \; \bar{J} = -\frac{1}{2\gamma}\|x_0 - \hat{x}_0\|^2_{p_0^{-1}} + \frac{1}{2}\sum_{k=0}^{N-1}\left[\|x_k - \hat{x}_k\|^2_{\bar{Q}_k} - \frac{1}{\gamma}\left(\|w_k\|^2_{W_k^{-1}} + \|y_k - C_k x_k\|^2_{V_k^{-1}}\right)\right] \qquad (6)$$

where $\bar{Q}_k = L_k^T Q_k L_k$. The following theorem presents a complete solution to the $H_\infty$ filtering problem for the system (1) with the performance criterion (6).

Theorem: Let $\gamma > 0$ be a prescribed level of noise attenuation. Then, there exists an $H_\infty$ filter for $z_k$ if and only if there exists a stabilizing symmetric solution $P_k > 0$ to the following discrete-time Riccati equation:

$$P_{k+1} = A_k P_k \left(I - \gamma\bar{Q}_k P_k + C_k^T V_k^{-1} C_k P_k\right)^{-1} A_k^T + B_k W_k B_k^T, \qquad P_0 = p_0. \qquad (7)$$

The $H_\infty$ filter is given by

$$\hat{z}_k = L_k \hat{x}_k \qquad (8)$$

$$\hat{x}_{k+1} = A_k \hat{x}_k + K_k (y_k - C_k \hat{x}_k), \qquad k = 0, 1, \ldots, N-1 \qquad (9)$$

initialized with the a priori estimate $\hat{x}_0$, where $K_k$ is the gain of the $H_\infty$ filter and is given by

$$K_k = A_k P_k \left(I - \gamma\bar{Q}_k P_k + C_k^T V_k^{-1} C_k P_k\right)^{-1} C_k^T V_k^{-1}. \qquad (10)$$
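Read as an algorithm, the theorem prescribes one Riccati propagation (7) and one gain computation (10) per time step, followed by the state update (9). The sketch below is a minimal NumPy rendering of that recursion (the paper's own simulations used MATLAB); the function name, the list-of-matrices calling convention, and the return value are illustrative choices made here, not specified in the correspondence.

```python
import numpy as np

def h_inf_filter(A, B, C, L, W, V, Q, p0, gamma, x0_hat, y):
    """Finite-horizon H-infinity filter recursion of (7)-(10) (illustrative sketch).

    A, B, C, L : length-N lists of system matrices A_k, B_k, C_k, L_k
    W, V, Q    : length-N lists of weighting matrices W_k > 0, V_k > 0, Q_k >= 0
    p0         : initial weight p_0 (= P_0); gamma: prescribed attenuation level
    x0_hat     : a priori estimate of x_0; y: list of measurements y_0, ..., y_{N-1}
    """
    N = len(y)
    n = A[0].shape[0]
    P = p0.copy()                      # P_0 = p_0, eq. (7)
    x_hat = x0_hat.copy()
    z_hat = []
    for k in range(N):
        Qbar = L[k].T @ Q[k] @ L[k]    # Qbar_k = L_k^T Q_k L_k
        Vinv = np.linalg.inv(V[k])
        # The indicated inverse must exist and P_k must stay positive definite
        # (the existence condition of the theorem).
        M = np.linalg.inv(np.eye(n) - gamma * Qbar @ P + C[k].T @ Vinv @ C[k] @ P)
        K = A[k] @ P @ M @ C[k].T @ Vinv                      # gain, eq. (10)
        z_hat.append(L[k] @ x_hat)                            # eq. (8)
        x_hat = A[k] @ x_hat + K @ (y[k] - C[k] @ x_hat)      # eq. (9)
        P = A[k] @ P @ M @ A[k].T + B[k] @ W[k] @ B[k].T      # Riccati, eq. (7)
    return np.array(z_hat)
```

Setting `gamma = 0` in this recursion recovers the one-step-ahead Kalman predictor, which is the limiting case discussed at the end of this section.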
Proof: By using a set of Lagrange multipliers $\lambda_{k+1}$ to adjoin the constraint (1) to the performance criterion (6) (scaled by $\gamma$, which does not change the optimizing strategies), the resulting Hamiltonian is

$$H_k = \frac{1}{2}\left[\gamma\|x_k - \hat{x}_k\|^2_{\bar{Q}_k} - \|w_k\|^2_{W_k^{-1}} - \|y_k - C_k x_k\|^2_{V_k^{-1}}\right] + \lambda_{k+1}^T\left(A_k x_k + B_k w_k\right). \qquad (11)$$

Taking the first variation, the necessary conditions for a maximum are

$$\frac{\partial H_k}{\partial w_k} = -W_k^{-1} w_k + B_k^T \lambda_{k+1} = 0, \qquad \lambda_k = \frac{\partial H_k}{\partial x_k}, \qquad x_{k+1} = A_k x_k + B_k w_k. \qquad (12)$$

These first-order necessary conditions result in a two-point boundary value problem

$$x_{k+1} = A_k x_k + B_k W_k B_k^T \lambda_{k+1} \qquad (13)$$

$$\lambda_k = A_k^T \lambda_{k+1} + \gamma\bar{Q}_k (x_k - \hat{x}_k) + C_k^T V_k^{-1}(y_k - C_k x_k) \qquad (14)$$

with boundary conditions

$$x_0 = \hat{x}_0 + p_0 \lambda_0 \qquad (15)$$

$$\lambda_N = 0. \qquad (16)$$

Since the two-point boundary value problem is linear, the solution is assumed to be of the form

$$x_k^* = \bar{x}_k + P_k \lambda_k^* \qquad (17)$$

where $\bar{x}_k$ and $P_k$ are undetermined variables, and $x_k^*$ and $\lambda_k^*$ represent the optimal values of $x_k$ and $\lambda_k$, respectively, for any fixed admissible functions $\hat{x}_k$ and $y_k$. The optimal values for $w_k$ and $x_0$ are then

$$w_k^* = W_k B_k^T \lambda_{k+1}^*, \qquad x_0^* = \hat{x}_0 + p_0 \lambda_0^*. \qquad (18)$$

Substituting (17) and (18) into (13) results in

$$x_{k+1}^* = A_k \bar{x}_k + A_k P_k \lambda_k^* + B_k W_k B_k^T \lambda_{k+1}^* \qquad (19)$$

while (17) written at time $k+1$ gives

$$x_{k+1}^* = \bar{x}_{k+1} + P_{k+1} \lambda_{k+1}^*. \qquad (20)$$

Substituting (17) into the costate equation (14), solving for $\lambda_k^*$, and combining the result with (19) and (20), we have

$$\bar{x}_{k+1} - A_k\bar{x}_k - A_kP_k\left(I - \gamma\bar{Q}_kP_k + C_k^TV_k^{-1}C_kP_k\right)^{-1}\left[\gamma\bar{Q}_k(\bar{x}_k - \hat{x}_k) + C_k^TV_k^{-1}(y_k - C_k\bar{x}_k)\right] = \left[-P_{k+1} + A_kP_k\left(I - \gamma\bar{Q}_kP_k + C_k^TV_k^{-1}C_kP_k\right)^{-1}A_k^T + B_kW_kB_k^T\right]\lambda_{k+1}^*. \qquad (21)$$

For (21) to hold true for arbitrary $\lambda_{k+1}^*$, both sides are set identically to zero, resulting in

$$\bar{x}_{k+1} = A_k\bar{x}_k + A_kP_k\left(I - \gamma\bar{Q}_kP_k + C_k^TV_k^{-1}C_kP_k\right)^{-1}\left[\gamma\bar{Q}_k(\bar{x}_k - \hat{x}_k) + C_k^TV_k^{-1}(y_k - C_k\bar{x}_k)\right], \qquad \bar{x}_0 = \hat{x}_0, \qquad k = 0, 1, \ldots, N-1 \qquad (22)$$

$$P_{k+1} = A_kP_k\left(I - \gamma\bar{Q}_kP_k + C_k^TV_k^{-1}C_kP_k\right)^{-1}A_k^T + B_kW_kB_k^T, \qquad P_0 = p_0. \qquad (23)$$

Equation (23) is the well-known Riccati difference equation. It has been proven that if the solution $P_k$ to the Riccati equation (23) exists for all $k \in [0, N-1]$, then $P_k > 0$ for all $k \in [0, N-1]$. Now, substituting the optimal strategies (17) and (18) into the performance criterion (6), we obtain

$$\min_{\hat{x}_k}\max_{y_k}\bar{J} = -\frac{1}{2\gamma}\|\lambda_0^*\|^2_{p_0} + \frac{1}{2}\sum_{k=0}^{N-1}\left[\|\bar{x}_k - \hat{x}_k + P_k\lambda_k^*\|^2_{\bar{Q}_k} - \frac{1}{\gamma}\left(\|W_kB_k^T\lambda_{k+1}^*\|^2_{W_k^{-1}} + \|y_k - C_k\bar{x}_k - C_kP_k\lambda_k^*\|^2_{V_k^{-1}}\right)\right]. \qquad (24)$$

In the sequel, we perform the min-max optimization of $\bar{J}$ with respect to $\hat{x}_k$ and $y_k$, respectively. Adding to (24) the identically zero (telescoping) term

$$\frac{1}{2\gamma}\left[\|\lambda_0^*\|^2_{p_0} - \|\lambda_N^*\|^2_{P_N} + \sum_{k=0}^{N-1}\left(\|\lambda_{k+1}^*\|^2_{P_{k+1}} - \|\lambda_k^*\|^2_{P_k}\right)\right] = 0 \qquad (25)$$

results in the following min-max problem:

$$\min_{\hat{x}_k}\max_{y_k}\bar{J} = \frac{1}{2}\sum_{k=0}^{N-1}\left[\|\bar{x}_k - \hat{x}_k\|^2_{\bar{Q}_k} - \frac{1}{\gamma}\|y_k - C_k\bar{x}_k\|^2_{V_k^{-1}}\right] \qquad (26)$$

subject to the dynamic constraints (22) and (23). Letting

$$r_k = \bar{x}_k - \hat{x}_k, \qquad q_k = y_k - C_k\bar{x}_k \qquad (27)$$

(26) becomes

$$\min_{r_k}\max_{q_k}\bar{J} = \frac{1}{2}\sum_{k=0}^{N-1}\left[\|r_k\|^2_{\bar{Q}_k} - \frac{1}{\gamma}\|q_k\|^2_{V_k^{-1}}\right]. \qquad (28)$$

The two independent players $r_k$ and $q_k$ in (28) affect the variable $\bar{x}_k$ through (22), but $\bar{x}_k$ does not appear in the performance index; therefore, the optimal strategies of $r_k$ and $q_k$ are

$$r_k^* = 0, \qquad q_k^* = 0 \qquad (29)$$

i.e.

$$\hat{x}_k^* = \bar{x}_k, \qquad y_k^* = C_k\bar{x}_k. \qquad (30)$$

The value of the game is the value of the cost function (6) at these strategies.
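Before turning to the saddle-point argument, the link between the proof quantities and the theorem can be checked numerically: when the minimizer plays $\hat{x}_k = \bar{x}_k$ as in (30), the recursion (22) collapses to the filter update (9) with gain (10). The sketch below runs both recursions side by side on an arbitrary two-state test system chosen here purely for illustration (none of the numbers come from the paper) and confirms that they coincide.

```python
import numpy as np

# Arbitrary two-state test system (illustrative values only).
A = np.array([[0.9, 0.2], [-0.2, 0.7]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
Qbar = np.eye(2)                      # here L_k = I, so Qbar_k = Q_k
W = np.array([[1.0]])
V = np.array([[1.0]])
gamma = 0.1
N = 20
ys = np.sin(0.3 * np.arange(N)).reshape(N, 1)   # arbitrary measurement sequence

P = np.eye(2)                         # P_0 = p_0
x_bar = np.zeros(2)                   # x_bar_0 = x_hat_0, eq. (22)
x_hat = np.zeros(2)                   # filter state, eq. (9)
Vinv = np.linalg.inv(V)
for k in range(N):
    M = np.linalg.inv(np.eye(2) - gamma * Qbar @ P + C.T @ Vinv @ C @ P)
    K = A @ P @ M @ C.T @ Vinv                                  # gain, eq. (10)
    # Eq. (22) with the optimal strategy x_hat_k = x_bar_k from (30):
    x_bar = A @ x_bar + A @ P @ M @ (C.T @ Vinv @ (ys[k] - C @ x_bar))
    x_hat = A @ x_hat + K @ (ys[k] - C @ x_hat)                 # eq. (9)
    P = A @ P @ M @ A.T + B @ W @ B.T                           # Riccati, eq. (23)
    assert np.allclose(x_bar, x_hat)                            # (22)+(30) reproduce (9)-(10)
```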
When the optimal strategies $\hat{x}_k^*$, $y_k^*$, $w_k^*$, and $x_0^*$ in (18) and (30) are substituted into (6),

$$\bar{J}(\hat{x}_k^*, y_k^*, w_k^*, x_0^*) = 0 \qquad (31)$$

giving a zero-value game.

Thus far, the strategies $\hat{x}_k^*$, $y_k^*$, $w_k^*$, and $x_0^*$ have been assumed to be optimal based on the satisfaction of the necessary conditions for optimality. If the strategies also satisfy a saddle-point inequality, they represent optimal strategies. A saddle-point strategy can be obtained by solving two optimization problems:

$$\min_{\hat{x}}\max_{y}\max_{w}\max_{x_0}\bar{J} = \bar{J}^* \qquad (32)$$

$$\max_{y}\max_{w}\max_{x_0}\min_{\hat{x}}\bar{J} = \bar{J}_*. \qquad (33)$$

When $\bar{J}^* = \bar{J}_*$, the solutions to (32) and (33) produce saddle-point strategies. It can easily be shown that if $P_k$ exists for all $k \in [0, N-1]$, the optimal strategies $\hat{x}_k^*$, $y_k^*$, $w_k^*$, and $x_0^*$ satisfy the saddle-point inequality

$$\bar{J}(\hat{x}_k^*, y_k, w_k, x_0) \;\le\; \bar{J}(\hat{x}_k^*, y_k^*, w_k^*, x_0^*) \;\le\; \bar{J}(\hat{x}_k, y_k^*, w_k^*, x_0^*). \qquad (34)$$

Note that the notation $\bar{J}_1 \le \bar{J}_2$ means that $\bar{J}_2 - \bar{J}_1$ is nonnegative. The right inequality can be checked by adding the identically zero (telescoping) term

$$\frac{1}{2}\left[\|x_0^* - \hat{x}_0\|^2_{P_0^{-1}} - \|x_N^* - \hat{x}_N\|^2_{P_N^{-1}} + \sum_{k=0}^{N-1}\left(\|x_{k+1}^* - \hat{x}_{k+1}\|^2_{P_{k+1}^{-1}} - \|x_k^* - \hat{x}_k\|^2_{P_k^{-1}}\right)\right] \qquad (35)$$

to $\bar{J}(\hat{x}_k, y_k^*, w_k^*, x_0^*)$, and the left inequality can be checked by adding the identically zero term

$$\frac{1}{2}\left[\|x_0 - \hat{x}_0^*\|^2_{P_0^{-1}} - \|x_N - \hat{x}_N^*\|^2_{P_N^{-1}} + \sum_{k=0}^{N-1}\left(\|x_{k+1} - \hat{x}_{k+1}^*\|^2_{P_{k+1}^{-1}} - \|x_k - \hat{x}_k^*\|^2_{P_k^{-1}}\right)\right]$$

to $\bar{J}(\hat{x}_k^*, y_k, w_k, x_0)$.

The optimal strategy of the measurement noise can be obtained from (30) as

$$v_k^* = y_k^* - C_k\hat{x}_k^* = 0. \qquad (36), (37)$$

With (22) and (30), the optimal $H_\infty$ filter is given by

$$\hat{z}_k^* = L_k\hat{x}_k^* \qquad (38)$$

$$\hat{x}_{k+1}^* = A_k\hat{x}_k^* + K_k(y_k - C_k\hat{x}_k^*), \qquad \hat{x}_0^* = \hat{x}_0, \qquad k = 0, 1, \ldots, N-1 \qquad (39)$$

$$K_k = A_kP_k\left(I - \gamma\bar{Q}_kP_k + C_k^TV_k^{-1}C_kP_k\right)^{-1}C_k^TV_k^{-1} \qquad (40)$$

where $P_k$ is given by (23). It is important to note that the optimal $H_\infty$ filter depends on the weighting of the estimation error in the performance criterion, i.e., the designer chooses the weighting matrices based on the performance requirements, whereas both the Wiener and Kalman filters depend on the variances of the noises.

For the time-invariant case ($N \to \infty$), the optimal steady-state $H_\infty$ filter is given by

$$\hat{z}_k^* = L\hat{x}_k^*, \qquad k = 0, 1, \ldots \qquad (41)$$

$$\hat{x}_{k+1}^* = A\hat{x}_k^* + K(y_k - C\hat{x}_k^*), \qquad \hat{x}_0^* = \hat{x}_0 \qquad (42)$$

$$K = AP\left(I - \gamma\bar{Q}P + C^TV^{-1}CP\right)^{-1}C^TV^{-1} \qquad (43)$$

and the Riccati equation becomes

$$P = AP\left(I - \gamma\bar{Q}P + C^TV^{-1}CP\right)^{-1}A^T + BWB^T. \qquad (44)$$

The solution of the Riccati equation (44) can be obtained as follows [9]. Let

$$H = \begin{bmatrix} A^{-T} & A^{-T}\left(C^TV^{-1}C - \gamma\bar{Q}\right) \\ BWB^TA^{-T} & A + BWB^TA^{-T}\left(C^TV^{-1}C - \gamma\bar{Q}\right) \end{bmatrix}. \qquad (45)$$

Assume that the matrix $H$ has no eigenvalues on the unit circle, and let $S$ be the matrix of eigenvectors corresponding to the eigenvalues of $H$ outside the unit circle (the unstable eigenvalues). Partitioning $S$ as $S = [S_1^T \; S_2^T]^T$, the solution $P$ of the Riccati equation is given by

$$P = S_2 S_1^{-1}. \qquad (46)$$

Details of this result can be found in [11]. Note that in the limiting case where the parameter $\gamma \to 0$, the $H_\infty$ filter given by (41)-(44) reduces to the steady-state Kalman filter.
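The eigenvector construction (45)-(46) translates directly into a few lines of linear algebra. The sketch below assumes, as an illustration, that the steady-state matrices are available as NumPy arrays; the function names are hypothetical, and the paper's own computations were carried out in MATLAB [12].

```python
import numpy as np

def steady_state_riccati(A, B, C, W, V, Qbar, gamma):
    """Solve the steady-state Riccati equation (44) by the eigenvector method (45)-(46)."""
    n = A.shape[0]
    Ait = np.linalg.inv(A).T                         # A^{-T}
    Psi = C.T @ np.linalg.inv(V) @ C - gamma * Qbar  # C^T V^{-1} C - gamma * Qbar
    G = B @ W @ B.T
    H = np.block([[Ait,     Ait @ Psi],              # 2n x 2n matrix of eq. (45)
                  [G @ Ait, A + G @ Ait @ Psi]])
    eigvals, eigvecs = np.linalg.eig(H)
    S = eigvecs[:, np.abs(eigvals) > 1.0]            # eigenvectors of the unstable eigenvalues
    S1, S2 = S[:n, :], S[n:, :]
    return np.real(S2 @ np.linalg.inv(S1))           # eq. (46): P = S2 S1^{-1}

def steady_state_gain(A, C, V, Qbar, gamma, P):
    """Steady-state filter gain of eq. (43)."""
    n = A.shape[0]
    Vinv = np.linalg.inv(V)
    M = np.linalg.inv(np.eye(n) - gamma * Qbar @ P + C.T @ Vinv @ C @ P)
    return A @ P @ M @ C.T @ Vinv
```

Consistent with the limiting remark above, calling these routines with `gamma = 0` yields the steady-state Kalman predictor solution and gain.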
III. NUMERICAL EXAMPLE

The signal-generating system (Fig. 1) is the damped harmonic oscillator with velocity measurements described by

$$\dot{x} = \begin{bmatrix} 0 & w_n \\ -w_n & -2\zeta w_n \end{bmatrix} x + \begin{bmatrix} 0 \\ w_n \end{bmatrix} w, \qquad y = (0 \;\; 1)\,x + v$$

where the state is $x = (x_1 \; x_2)^T$ with $x_1$ the position and $x_2$ the velocity. The natural frequency is $w_n = 1.1$, and the damping coefficient is $\zeta = 0.15$. $w$ is a driving signal, and $v$ is the measurement noise. It is assumed that $w$ and $v$ are uncorrelated, stationary, zero-mean, white noise processes of unit intensity. Assuming a zero-order hold on the input, this system is converted to the following discrete system with sample time $T = 1$:

$$x_{k+1} = \begin{bmatrix} 0.5079 & 0.7594 \\ -0.7594 & 0.2801 \end{bmatrix} x_k + \begin{bmatrix} 0.4921 \\ 0.7594 \end{bmatrix} w_k, \qquad y_k = (0 \;\; 1)\,x_k + v_k \qquad (47)$$

where it is desired to estimate $z_k = (1 \;\; 0)\,x_k$. Using (45), (46), and (43), the following Kalman filter gain ($\gamma = 0$) and $H_\infty$ filter gain ($\gamma = 1.24$) are obtained:

$$G_K = \begin{bmatrix} 0.4236 \\ 0.0873 \end{bmatrix}, \qquad G_{H_\infty} = \begin{bmatrix} 0.1792 \\ 1.1321 \end{bmatrix}. \qquad (48)$$

The performance of the two filters is compared by simulating their estimates of $z_k$ and the magnitudes of the transfer function $T_{zw}$ from the noises $[w_k^T \; v_k^T]^T$ to the estimation error $e_k$. The results are depicted in Figs. 2 to 5.

Fig. 1. Signal generating mechanism.
Fig. 2. Kalman filter estimate.
Fig. 3. $H_\infty$ filter estimate.
Fig. 4. Estimation error power spectra (without disturbance).
Fig. 5. Estimation error power spectra (with disturbance).

From Figs. 2 and 3, it is observed that the $H_\infty$ filter gives a better estimate of $z_k$ than the corresponding Kalman filter, even though the statistics of the noises $w_k$ and $v_k$ are known. Figs. 4 and 5 give the magnitudes of the estimation error power spectra for both the $H_\infty$ filter and the Kalman filter when the system parameters $(w_n, \zeta)$ vary from (0.9, 0.08) to (1.1, 0.3). The error spectra of the $H_\infty$ filter have lower peaks, which means that they are less sensitive to exact knowledge of the parameters of the system. This can be explained as follows: $H_\infty$ filters guarantee the smallest estimation error energy over all possible disturbances of finite energy; they are therefore conservative by design, which results in more robust behavior against disturbance variations. All of the simulation results were obtained using MATLAB [12].
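The gains in (48) can, in principle, be reproduced from (45), (46), and (43). The sketch below does so under the assumption, made here and not stated explicitly in the correspondence, that unit weighting matrices $Q = W = V = 1$ are used, consistent with the unit-intensity noise description; with different weights the printed numbers will differ from (48).

```python
import numpy as np

# Discretized oscillator of eq. (47)
A = np.array([[0.5079, 0.7594],
              [-0.7594, 0.2801]])
B = np.array([[0.4921], [0.7594]])
C = np.array([[0.0, 1.0]])      # velocity measurement
Lz = np.array([[1.0, 0.0]])     # z_k = (1 0) x_k

# Assumed unit weights (an illustrative choice, not given in the text)
Q = np.array([[1.0]]); W = np.array([[1.0]]); V = np.array([[1.0]])
Qbar = Lz.T @ Q @ Lz

def gain(gamma):
    n = A.shape[0]
    Ait = np.linalg.inv(A).T
    Psi = C.T @ np.linalg.inv(V) @ C - gamma * Qbar
    G = B @ W @ B.T
    H = np.block([[Ait, Ait @ Psi], [G @ Ait, A + G @ Ait @ Psi]])   # eq. (45)
    w, vecs = np.linalg.eig(H)
    S = vecs[:, np.abs(w) > 1.0]
    P = np.real(S[n:, :] @ np.linalg.inv(S[:n, :]))                  # eq. (46)
    M = np.linalg.inv(np.eye(n) - gamma * Qbar @ P + C.T @ np.linalg.inv(V) @ C @ P)
    return A @ P @ M @ C.T @ np.linalg.inv(V)                        # eq. (43)

print("Kalman gain (gamma = 0):       ", gain(0.0).ravel())    # compare with G_K in (48)
print("H-infinity gain (gamma = 1.24):", gain(1.24).ravel())   # compare with G_H in (48)
```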
IV. CONCLUSIONS

A difference game has been formulated and solved for the discrete $H_\infty$ filter design. The existence of a solution to the difference Riccati equation over the time interval is a necessary and sufficient condition for the existence of the optimal discrete $H_\infty$ filter. Since the design criterion is based on the worst-case disturbance, the $H_\infty$ filter is less sensitive to uncertainty in the statistics of the exogenous signals and in the dynamical model.

REFERENCES

[1] R. N. Banavar and J. L. Speyer, "A linear quadratic game theory approach to estimation and smoothing," in Proc. IEEE ACC, 1991, pp. 2818-2822.
[2] C. E. de Souza, U. Shaked, and M. Fu, "Robust H∞ filtering for continuous time varying uncertain systems with deterministic signal," IEEE Trans. Signal Processing, vol. 43, pp. 709-719, 1995.
[3] M. J. Grimble and A. Elsayed, "Solution of the H∞ optimal linear filtering problem for discrete-time systems," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1092-1104, 1990.
[4] U. Shaked and Y. Theodor, "H∞-optimal estimation: A tutorial," in Proc. 31st IEEE CDC, 1992, pp. 2278-2286.
[5] K. M. Nagpal and P. P. Khargonekar, "Filtering and smoothing in an H∞ setting," IEEE Trans. Automat. Contr., vol. 36, pp. 152-166, 1991.
[6] W. M. Haddad, D. S. Bernstein, and D. Mustafa, "Mixed-norm H2/H∞ regulation and estimation: The discrete-time case," Syst. Contr. Lett., vol. 16, pp. 235-247, 1991.
[7] B. Hassibi and T. Kailath, "H∞ adaptive filtering," in Proc. IEEE ICASSP'95, Detroit, MI, 1995, pp. 949-952.
[8] X. Shen and L. Deng, "Discrete H∞ filter design with application to speech enhancement," in Proc. IEEE ICASSP'95, Detroit, MI, 1995, pp. 1504-1507.
[9] I. Yaesh and U. Shaked, "A transfer function approach to the problem of discrete-time systems: H∞-optimal linear control and filtering," IEEE Trans. Automat. Contr., vol. 36, pp. 1264-1271, 1991.
[10] T. Basar, "Optimum performance levels for H∞ filters, predictors, and smoothers," Syst. Contr. Lett., vol. 16, pp. 309-317, 1991.
[11] D. R. Vaughan, "A nonrecursive algebraic solution for the discrete Riccati equation," IEEE Trans. Automat. Contr., vol. AC-15, pp. 597-599, 1970.
[12] C. Moler, J. Little, and S. Bangert, PC-MATLAB. Sherborn, MA: The MathWorks, 1987.