IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 51, NO. 4, APRIL 2006

On the Georgiou–Lindquist Approach to Constrained Kullback–Leibler Approximation of Spectral Densities

Michele Pavon and Augusto Ferrante

Abstract—We consider the Georgiou–Lindquist constrained approximation of spectra in the Kullback–Leibler sense. We propose an alternative iterative algorithm to solve the corresponding convex optimization problem. The Lagrange multiplier is computed as a fixed point of a nonlinear matricial map. Simulation indicates that the algorithm is extremely effective.

Index Terms—Approximation of spectral densities, convex optimization, fixed point, Kullback–Leibler pseudodistance, spectral estimation.

I. INTRODUCTION

Georgiou and Lindquist [15] have recently studied the problem of approximating a spectral density in the Kullback–Leibler sense with another spectral density that is consistent with given, asymptotic state-covariance statistics. In [15], it is shown that this problem has as special cases various important problems of control and signal processing engineering. These are briefly reviewed in the next section. We mention here, in particular, the Byrnes–Georgiou–Lindquist spectral estimator (THREE) [1], [2], [18]. The latter permits one to obtain higher resolution of the spectrum in certain desired frequency bands. It also appears to work better than the usual estimation techniques, such as periodogram and AR methods, when short observation records are available.

In [15], the (convex) optimization problem is approached via duality theory. It is shown there that the dual problem is also convex and admits a unique solution in a certain prescribed set of Hermitian matrices $L_+(\Gamma)$. In general, the solution cannot be obtained in closed form. It is then necessary to solve the dual problem numerically. In [15], it is observed that such a numerical solution can be based on Newton's method, after a suitable reparametrization of $L_+(\Gamma)$, along the lines of techniques developed in [2], [8], and [17] for similar problems. As observed in [15], the global coordinatization of $L_+(\Gamma)$ needs to preserve convexity. Nevertheless, the available methods may lead to loss of global convexity or to numerical instability because of unboundedness of the gradient at the boundary.

In this note, we present an alternative algorithm to solve the optimization problem, based on a nonlinear iteration scheme for the Lagrange multiplier matrix. Differently from [15], we do not restrict our “dual variable” to lie in the range of a certain operator $\Gamma$ [see (11)]. So our dual problem may have infinitely many solutions. Problems connected to the reparametrization of $L_+(\Gamma)$ are therefore altogether avoided. Extensive simulation indicates that this iteration converges extremely fast.

The outline of the note is as follows.
In Sections II and III, we review, with some minor differences, the problem setting, the variational analysis, and the corresponding dual problem of Georgiou and Lindquist [15]. In Section IV, we introduce the new algorithm and analyze its properties. Simulation results are presented in Section V. A preliminary version of the results of this note has been presented, without detailed proofs, in [19].

Manuscript received September 5, 2005; revised December 5, 2005. Recommended by Associate Editor E. Jonckheere. This work was supported in part by the MIUR-PRIN Italian Grant "Identification and Control of Industrial Systems." M. Pavon is with the Dipartimento di Matematica Pura ed Applicata, Università di Padova, 35131 Padova, Italy (e-mail: [email protected]). A. Ferrante is with the Dipartimento di Ingegneria dell'Informazione, Università di Padova, 35131 Padova, Italy. Digital Object Identifier 10.1109/TAC.2006.872755

II. APPROXIMATION PROBLEM

We adopt the same notation as in [15]. Let $C_+(\mathbb{T})$ denote the positive, continuous functions on the unit circle $\mathbb{T}$. Throughout, $\int f$ denotes integration of $f$ over $\mathbb{T}$ with respect to the normalized Lebesgue measure, i.e., $\int_{-\pi}^{\pi} f(e^{i\theta})\,d\theta/(2\pi)$. Consider the rational transfer function

$$G(z) = (I - zA)^{-1}B, \qquad A \in \mathbb{C}^{n\times n},\ B \in \mathbb{C}^{n\times 1}$$

where $A$ is a stability matrix, i.e., has all its eigenvalues in the open unit disc, and $(A,B)$ is a reachable pair. Let $\Psi \in C_+(\mathbb{T})$ represent the a priori estimate of the spectrum of an underlying zero-mean, wide-sense stationary stochastic process $\{y(n);\ n \in \mathbb{Z}\}$. Consider the situation where new data become available in the form of an estimate $\Sigma$ of the state covariance of the system with transfer function $G$ driven by $y$. The Hermitian matrix $\Sigma$ is assumed to be positive definite. It is obtained by feeding $y$ to the measurement device (a bank of filters) modeled by $G$ until it reaches steady state, and then estimating the state covariance.

If $\Psi$ is not consistent with $\Sigma$, we need to find $\Phi$ in $C_+(\mathbb{T})$ that is closest to $\Psi$ in a suitable sense among spectra consistent with $\Sigma$. In [15], the following measure of distance for spectra in $C_+(\mathbb{T})$ is considered:

$$\mathbb{S}(\Psi\|\Phi) = \int \Psi \log\frac{\Psi}{\Phi} = \int_{-\pi}^{\pi} \Psi(e^{i\theta}) \log\frac{\Psi(e^{i\theta})}{\Phi(e^{i\theta})}\,\frac{d\theta}{2\pi}.$$

The Kullback–Leibler pseudodistance for probability distributions is defined similarly. It is also known as divergence, relative entropy, information distance, etc. It is also denoted by $D(\cdot\|\cdot)$ and $H(\cdot\|\cdot)$. If $\int\Phi = \int\Psi$, we have $\mathbb{S}(\Psi\|\Phi) \ge 0$. The choice of $\mathbb{S}(\Psi\|\Phi)$ as a distance measure, even for spectra with different zeroth moments, is justified in [15, Sec. III]. Notice that minimizing $\mathbb{S}(\Psi\|\Phi)$ rather than $\mathbb{S}(\Phi\|\Psi)$ is unusual in the statistics-probability-information theory world. Besides leading to a more tractable problem, however, it also includes as a special case ($\Psi \equiv 1$) the maximization of entropy. As in [15], we consider the following.

Problem 2.1: (Approximation problem) Let $\Psi \in C_+(\mathbb{T})$, and let $\Sigma \in \mathbb{C}^{n\times n}$ satisfy $\Sigma = \Sigma^* > 0$, where star denotes transposition plus conjugation. Find $\hat\Phi$ that solves

$$\text{minimize } \mathbb{S}(\Psi\|\Phi) \text{ over } \Phi \in C_+(\mathbb{T}) \qquad(1)$$

$$\text{subject to } \int G\,\Phi\,G^* = \Sigma. \qquad(2)$$

The constraint (2) expresses the fact that $\Phi$ is consistent with $\Sigma$. It is not necessary to require that $\Phi$ satisfy the Hermitian property $\Phi(e^{i\theta})^* = \Phi(e^{i\theta})$ for all $\theta$. Indeed, whenever $\Psi \in C_+(\mathbb{T})$ is a spectrum, namely it satisfies the parahermitian property, the solution of the approximation problem also turns out to be a spectrum.

First of all, one needs to worry about the existence of $\Phi \in C_+(\mathbb{T})$ satisfying constraint (2). It was shown in [12], [13] that such a family is nonempty if and only if there exists $H \in \mathbb{C}^{1\times n}$ such that

$$\Sigma - A\Sigma A^* = BH + H^*B^*$$

or, equivalently, the following rank condition holds:

$$\operatorname{rank}\begin{bmatrix} \Sigma - A\Sigma A^* & B\\ B^* & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} 0 & B\\ B^* & 0 \end{bmatrix}. \qquad(3)$$

We now show, following [15], that this problem includes as special cases a variety of important applications. In all three examples below, the family of spectral densities consistent with the data is nonempty if $\Sigma \ge 0$ and contains infinitely many elements if $\Sigma > 0$.

Example 1: (Covariance extension problem) Let

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0\\ \vdots\\ 0\\ 1 \end{bmatrix} \qquad(4)$$

so that the $k$th component of $G(z)$ is $G_k(z) = z^{n-k}$. Also, let $\Sigma$ be the Toeplitz matrix

$$\Sigma = \begin{bmatrix} c_0 & c_1 & \cdots & c_{n-1}\\ c_1 & c_0 & \cdots & c_{n-2}\\ \vdots & \vdots & \ddots & \vdots\\ c_{n-1} & c_{n-2} & \cdots & c_0 \end{bmatrix} \qquad(5)$$

where $c_k := E\{y(n)y(n+k)\}$. Thus, the information available on the process $y$ is the finite sequence of covariance lags $c_0, c_1, \ldots, c_{n-1}$. If $\Psi \equiv 1$, then Problem 2.1 is just the maximum-entropy covariance extension problem; see, e.g., [16]. When $\Psi$ is a positive polynomial of degree $n$, the solution of Problem 2.1 has degree $2n$ [9], [11], [6], [5], [3], [4].
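For concreteness, the following minimal sketch (our own illustration, not part of the original note; the function names and the numerical rank tolerance are ours) builds the pair $(A,B)$ of Example 1, forms the Toeplitz matrix (5) from a finite covariance sequence, and tests the feasibility condition (3).

```python
import numpy as np

def covariance_extension_model(n):
    """Pair (A, B) of Example 1, so that the kth entry of G(z) is z^(n-k)."""
    A = np.diag(np.ones(n - 1), 1)          # upper shift matrix
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    return A, B

def toeplitz_from_lags(c):
    """Toeplitz state covariance (5) built from the lags c_0, ..., c_{n-1}."""
    n = len(c)
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)], dtype=float)

def feasible(A, B, Sigma, tol=1e-9):
    """Rank test (3): the family of spectra consistent with Sigma is nonempty iff
    rank [Sigma - A Sigma A*, B; B*, 0] equals rank [0, B; B*, 0]."""
    n = A.shape[0]
    lhs = np.block([[Sigma - A @ Sigma @ A.conj().T, B],
                    [B.conj().T, np.zeros((1, 1))]])
    rhs = np.block([[np.zeros((n, n)), B],
                    [B.conj().T, np.zeros((1, 1))]])
    return np.linalg.matrix_rank(lhs, tol) == np.linalg.matrix_rank(rhs, tol)

if __name__ == "__main__":
    A, B = covariance_extension_model(3)
    Sigma = toeplitz_from_lags([3.0, 1.0, 1.0])   # the lags used in Section V
    print("feasible:", feasible(A, B, Sigma))
```

Run on the covariance lags of the first simulation example in Section V, the script should report a feasible $\Sigma$.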
Example 2: In this case

$$A = \begin{bmatrix} p_1 & 0 & \cdots & 0\\ 0 & p_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & p_n \end{bmatrix}, \qquad B = \begin{bmatrix} 1\\ 1\\ \vdots\\ 1 \end{bmatrix} \qquad(6)$$

so that the $k$th element of $G(z)$ is $G_k(z) = 1/(1 - p_k z)$. The matrix $\Sigma$ is a Pick matrix with elements

$$\Sigma_{i,j} = \frac{w_i + \bar w_j}{1 - p_i \bar p_j} \qquad(7a)$$

$$w_k = \frac{1}{4\pi}\int_{-\pi}^{\pi} \frac{e^{-i\omega} + p_k}{e^{-i\omega} - p_k}\,\Phi(e^{i\omega})\,d\omega, \qquad k = 1, 2, \ldots, n. \qquad(7b)$$

In this case, the problem is a Nevanlinna–Pick interpolation problem [10], [11], [1], [2] whose solution permits spectral estimation by selective harmonic amplification [1], [2], [18], [12] (similarly, for Example 3).

Example 3: In this case

$$A = \begin{bmatrix} p & 1 & 0 & \cdots & 0\\ 0 & p & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & p \end{bmatrix}, \qquad B = \begin{bmatrix} 0\\ \vdots\\ 0\\ 1 \end{bmatrix} \qquad(8)$$

so that the $k$th element of $G(z)$ is $G_k(z) = z^{n-k}/(1 - pz)^{n+1-k}$. The matrix $\Sigma$ is of the form

$$\Sigma = \frac{1}{2}\left(WE + EW^*\right) \qquad(9)$$

where $W$ is an upper triangular Toeplitz matrix and $E$ is the reachability Gramian of the pair $(A, B)$.

III. OPTIMALITY CONDITIONS AND THE DUAL PROBLEM

The (rather straightforward) variational analysis follows quite closely [15], with some minor differences that are spelled out in [19]. It is, therefore, kept to a minimum. For $\Lambda \in \mathbb{C}^{n\times n}$ satisfying $G^*\Lambda G > 0$ on all of $\mathbb{T}$, consider the Lagrangian function

$$L(\Phi, \Lambda) = \mathbb{S}(\Psi\|\Phi) + \operatorname{trace}\left[\Lambda\left(\int G\,\Phi\,G^* - \Sigma\right)\right] = \mathbb{S}(\Psi\|\Phi) + \int G^*\Lambda G\,\Phi - \operatorname{trace}(\Lambda\Sigma). \qquad(10)$$

Notice that, differently from [15], we do not require that the matrix $\Lambda$ belong to the range of the operator $\Gamma$ defined on $C(\mathbb{T})$ by

$$\Gamma(\Phi) = \int G\,\Phi\,G^*. \qquad(11)$$

Consider the unconstrained minimization of the strictly convex functional $L(\Phi, \Lambda)$

$$\text{minimize}\ \{L(\Phi, \Lambda)\ |\ \Phi \in C_+(\mathbb{T})\}. \qquad(12)$$

This is a convex optimization problem. We have the following optimality condition.

Theorem 3.1: Suppose $\hat\Lambda = \hat\Lambda^*$ is such that

$$G^*\hat\Lambda G > 0 \qquad \forall e^{i\theta} \in \mathbb{T} \qquad(13)$$

$$\int G\,\frac{\Psi}{G^*\hat\Lambda G}\,G^* = \Sigma. \qquad(14)$$

Then

$$\hat\Phi = \frac{\Psi}{G^*\hat\Lambda G} \qquad(15)$$

is the unique solution of the approximation problem (1)–(2).

Thus, the original problem (1)–(2) is now reduced to finding $\hat\Lambda$ satisfying (13)–(14). This is accomplished in [15] via duality theory. Consider, as in [15], the dual functional

$$\Lambda \to \inf\{L(\Phi, \Lambda)\ |\ \Phi \in C_+(\mathbb{T})\}.$$

For $\Lambda$ satisfying (13), the dual functional takes the form

$$\Lambda \to L\left(\frac{\Psi}{G^*\Lambda G}, \Lambda\right) = \int \Psi\log\left(G^*\Lambda G\right) - \operatorname{trace}(\Lambda\Sigma) + \int\Psi. \qquad(16)$$

Consider now the maximization of the dual functional (16) over the set

$$L_+ := \left\{\Lambda = \Lambda^*\ |\ G^*\Lambda G > 0\ \ \forall e^{i\theta} \in \mathbb{T}\right\}. \qquad(17)$$

Let, as in [15]

$$J_\Psi(\Lambda) := -\int \Psi\log\left(G^*\Lambda G\right) + \operatorname{trace}(\Lambda\Sigma).$$

The dual problem is then equivalent to

$$\text{minimize}\ \{J_\Psi(\Lambda)\ |\ \Lambda \in L_+\}. \qquad(18)$$

The dual problem is also a convex optimization problem. The functional $J_\Psi(\Lambda)$, however, is not strictly convex on $L_+$. We have the following result [15].

Proposition 3.2: $\hat\Lambda \in L_+$ solves (18) if and only if it satisfies (14).

In [15], the existence of a (unique) minimum of $J_\Psi$ on $L_+(\Gamma) := L_+ \cap \operatorname{range}(\Gamma)$ is established via a profound homeomorphism result for continuous maps between open, connected sets of the same dimension [7]. This result implies that, under assumption (3), there exists a (unique) $\hat\Lambda$ in $L_+(\Gamma)$ satisfying (14). Such a $\hat\Lambda$ then provides the optimal solution of the primal problem (1)–(2) via (15). This, of course, establishes existence also for our dual problem (18) in view of Proposition 3.2. Notice, however, that our dual problem has in general infinitely many solutions in $L_+$. It is observed in [15] that, in general, to solve their dual problem, it is necessary to resort to an iterative algorithm and that the latter, after a suitable reparametrization of $L_+(\Gamma)$, can be based on Newton's method. In the following section, we propose an alternative strategy.
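Before turning to the new algorithm, the dual objective and the first-order optimality residual can be evaluated numerically. The sketch below is our own illustration (the uniform-grid Riemann quadrature, the number of grid points m, and the convention that Ψ is passed as a callable of θ are our assumptions); it computes $J_\Psi(\Lambda)$ and the matrix $\Sigma - \int G\,\Psi/(G^*\Lambda G)\,G^*$, which vanishes exactly when (14) holds.

```python
import numpy as np

def dual_value_and_residual(Lam, A, B, Psi, Sigma, m=2000):
    """J_Psi(Lam) = -int Psi log(G* Lam G) + trace(Lam Sigma), together with the
    covariance-matching residual Sigma - int G [Psi/(G* Lam G)] G*  (cf. (14), (18)).
    Integrals over the unit circle are approximated by Riemann sums on m grid points."""
    n = A.shape[0]
    J = 0.0
    match = np.zeros((n, n), dtype=complex)
    for th in 2.0 * np.pi * np.arange(m) / m:
        G = np.linalg.solve(np.eye(n) - np.exp(1j * th) * A, B)   # G(e^{i theta})
        q = np.real(G.conj().T @ Lam @ G).item()                  # G* Lam G, positive on L_+
        J += -Psi(th) * np.log(q) / m
        match += (Psi(th) / q) * (G @ G.conj().T) / m
    J += np.real(np.trace(Lam @ Sigma))
    return J, Sigma - match
```

A Newton-type scheme in the spirit of [15] would drive this residual to zero over a parametrization of $L_+(\Gamma)$; the algorithm of the next section reaches the same goal through a fixed-point iteration.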
IV. NEW ALGORITHM

To simplify the writing, we assume, without loss of generality, that $\Sigma = I$. Indeed, if $\Sigma \ne I$, it suffices to replace $G$ by $G_0 := \Sigma^{-1/2}G$ and $(A, B)$ with $(A_0 = \Sigma^{-1/2}A\Sigma^{1/2},\ B_0 = \Sigma^{-1/2}B)$. Moreover, if $\hat\Lambda_0$ has been found such that

$$\int G_0\,\frac{\Psi}{G_0^*\hat\Lambda_0 G_0}\,G_0^* = I$$

then, setting $\hat\Lambda := \Sigma^{-1/2}\hat\Lambda_0\Sigma^{-1/2}$, we get

$$\int G\,\frac{\Psi}{G^*\hat\Lambda G}\,G^* = \Sigma.$$

Hence, $\hat\Phi$ in (15) may also be obtained directly from $G_0$ and $\hat\Lambda_0$ as

$$\hat\Phi = \frac{\Psi}{G_0^*\hat\Lambda_0 G_0}. \qquad(19)$$

In order to have a nonempty feasible set for problem (1)–(2), we assume

$$\operatorname{rank}\begin{bmatrix} I - AA^* & B\\ B^* & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} 0 & B\\ B^* & 0 \end{bmatrix}. \qquad(20)$$

We assume, moreover, w.l.o.g., that $\int\Psi = 1$. Let

$$\mathcal{M} := \{M \in L_+\ |\ 0 \le M \le I,\ \operatorname{trace}[M] = 1\}, \qquad \mathcal{M}_+ := \{M \in \mathcal{M}\ |\ M > 0\}.$$

For $M \in \mathcal{M}$, define the map $\Theta$ by

$$\Theta(M) := \int M^{1/2}\,G\,\frac{\Psi}{G^*MG}\,G^*\,M^{1/2}. \qquad(21)$$

Theorem 4.1: $\Theta$ maps $\mathcal{M}$ into $\mathcal{M}$ and $\mathcal{M}_+$ into $\mathcal{M}_+$.

Proof: It is apparent that $\Theta(M) = \Theta(M)^*$. Moreover

$$\operatorname{trace}\left[\Theta(M)\right] = \int \Psi\,\frac{G^*MG}{G^*MG} = \int\Psi = 1.$$

For $x \ne 0$ in $\mathbb{C}^n$, let us denote by $\Pi_x = x(x^*x)^{-1}x^*$ the orthogonal projection onto $\operatorname{span}\{x\}$. We have

$$\Theta(M) = \int \Pi_{M^{1/2}G}\,\Psi \le \int \Psi\,I = I.$$

For $M \in \mathcal{M}$, we now want to show that $G^*(e^{i\omega})\Theta(M)G(e^{i\omega}) > 0$ on all of $\mathbb{T}$. Suppose there exists $\bar\omega$ such that $G^*(e^{i\bar\omega})\Theta(M)G(e^{i\bar\omega}) = 0$. Then

$$0 = \int_{-\pi}^{\pi} G^*(e^{i\bar\omega})\,M^{1/2}\,Q(\theta)\,M^{1/2}\,G(e^{i\bar\omega})\,\frac{d\theta}{2\pi} \qquad(25)$$

where

$$Q(\theta) := G(e^{i\theta})\,\frac{\Psi(e^{i\theta})}{G^*(e^{i\theta})MG(e^{i\theta})}\,G^*(e^{i\theta}). \qquad(24)$$

Since the integrand of (25) is a continuous and nonnegative function of $\theta$, it follows that the integrand is identically zero, i.e.,

$$G^*(e^{i\bar\omega})\,M^{1/2}\,Q(\theta)\,M^{1/2}\,G(e^{i\bar\omega}) = 0 \qquad \forall\theta.$$

In particular, this holds for $\theta = \bar\omega$:

$$\frac{\Psi(e^{i\bar\omega})}{G^*(e^{i\bar\omega})MG(e^{i\bar\omega})}\left|G^*(e^{i\bar\omega})M^{1/2}G(e^{i\bar\omega})\right|^2 = 0.$$

Recalling that $\Psi$ is everywhere positive, this implies $G^*(e^{i\bar\omega})M^{1/2}G(e^{i\bar\omega}) = 0$. Hence, $M^{1/4}G(e^{i\bar\omega}) = 0$ and, consequently, $G^*(e^{i\bar\omega})MG(e^{i\bar\omega}) = 0$, a contradiction.

Suppose now that $M \in \mathcal{M}_+$. We want to show that $\Theta(M) > 0$. Since $M^{1/2}$ is nonsingular, it suffices to show that

$$\int G\,\frac{\Psi}{G^*MG}\,G^* > 0.$$

Indeed, let $\bar m$ be the minimum on $\mathbb{T}$ of the continuous function $\Psi/(G^*MG)$. We have $\bar m > 0$, since $\Psi$ is positive and $G^*MG$ has no poles on $\mathbb{T}$. Then

$$\int G\,\frac{\Psi}{G^*MG}\,G^* \ge \bar m \int G\,G^*.$$

It remains to observe that $\int G\,G^*$ is the solution of the Lyapunov equation

$$X = AXA^* + BB^*.$$

As is well known, when $A$ is asymptotically stable and the pair $(A, B)$ is reachable, such a solution is positive definite. Q.E.D.

The Algorithm: Let $M_0 = (1/n)I$. Note that $M_0 \in \mathcal{M}_+$. Define the sequence $\{M_k\}_{k=0}^{\infty}$ by

$$M_{k+1} := \Theta(M_k). \qquad(26)$$

Notice that, by Theorem 4.1, $M_k \in \mathcal{M}_+$ for all $k$. Moreover, since $M_k \le I$ for all $k$, the sequence is bounded. Hence, it has at least one accumulation (limit) point in the closure of $\mathcal{M}$.

Theorem 4.2: Suppose that the sequence $\{M_k\}_{k=0}^{\infty}$ has a limit $\hat M \in \mathcal{M}_+$. Then $\hat M$ satisfies (13)–(14) and, therefore, provides the optimal solution of (1)–(2) via (15).

Proof: $\hat M$ is a fixed point of the map $\Theta$

$$\hat M = \Theta(\hat M). \qquad(27)$$

Since $\hat M > 0$, (13) is satisfied and, after multiplying both sides of (27) by $\hat M^{-1/2}$, (27) gives (14) (with $\Sigma = I$). Q.E.D.
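The following sketch is our own illustration of the map $\Theta$ in (21) and of the iteration (26); it is not taken from the original note. It assumes the data have already been normalized so that $\Sigma = I$ and $\int\Psi = 1$, approximates the integrals by Riemann sums on a uniform grid, and takes $\Psi$ as a callable of $\theta$; all names are ours.

```python
import numpy as np

def hermitian_sqrt(M):
    """Hermitian square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def theta_map(M, A, B, Psi, m=2000):
    """The map (21): Theta(M) = int M^{1/2} G [Psi/(G* M G)] G* M^{1/2} dtheta/(2 pi)."""
    n = A.shape[0]
    R = hermitian_sqrt(M)
    out = np.zeros((n, n), dtype=complex)
    for th in 2.0 * np.pi * np.arange(m) / m:
        G = np.linalg.solve(np.eye(n) - np.exp(1j * th) * A, B)
        q = np.real(G.conj().T @ M @ G).item()      # G* M G > 0 for M in M_+
        RG = R @ G
        out += (Psi(th) / q) * (RG @ RG.conj().T) / m
    return out

def fixed_point_iteration(A, B, Psi, tol=1e-5, max_iter=200):
    """Iteration (26): M_0 = I/n, M_{k+1} = Theta(M_k), stopped when ||M - Theta(M)|| < tol."""
    n = A.shape[0]
    M = np.eye(n) / n
    for k in range(1, max_iter + 1):
        M_next = theta_map(M, A, B, Psi)
        if np.linalg.norm(M_next - M) < tol:
            return M_next, k
        M = M_next
    return M, max_iter
```

Once a positive definite fixed point $\hat M$ is reached, the approximant is recovered pointwise as $\hat\Phi(e^{i\theta}) = \Psi(e^{i\theta})/(G^*(e^{i\theta})\hat M G(e^{i\theta}))$, as in (15) and (19).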
Next, we analyze the algorithm. Theorem 4.2 shows that every positive definite $M$ which is a fixed point of the map $\Theta$ yields the optimal $\hat\Phi$ via (15). In general, a nonnegative definite fixed point of the map does not lead to the optimal solution. Indeed, we identify below a large family of fixed points of $\Theta$ in $\mathcal{M}$ that do not, in general, yield the optimal solution $\hat\Phi$.

Proposition 4.3: Let $x \in \mathbb{C}^n$ be such that $x^*G(e^{i\theta})$ is not identically zero. Then the orthogonal projection $\Pi_x \in \mathcal{M}$ is a fixed point of the map $\Theta$.

Proof: Recall that $\Theta(M) = \int \Pi_{M^{1/2}G}\,\Psi$. Notice that $x^*G$ is a rational function. If it is not identically zero, then it has at most finitely many zeroes on $\mathbb{T}$. Hence, $\Pi_{\Pi_x G} = \Pi_x$ almost everywhere on $\mathbb{T}$. Thus

$$\Theta(\Pi_x) = \int \Pi_{\Pi_x G}\,\Psi = \Pi_x \int\Psi = \Pi_x. \qquad \text{Q.E.D.}$$

Let us introduce $\mathcal{M}_0 := \{M \in \mathcal{M}_+\ |\ \Theta(M) = M\}$, and let $\bar{\mathcal{M}}_0$ denote the closure of $\mathcal{M}_0$.

Theorem 4.4: The set $\mathcal{M}_0$, and consequently its closure $\bar{\mathcal{M}}_0$, is convex.

Proof: Let $M_1$ and $M_2$ belong to $\mathcal{M}_0$. Since they are nonsingular and fixed points of $\Theta$, it follows that

$$\int G\,\frac{\Psi}{G^*M_1G}\,G^* = \int G\,\frac{\Psi}{G^*M_2G}\,G^* = I.$$

Hence, both $\Phi_1 := \Psi/(G^*M_1G)$ and $\Phi_2 := \Psi/(G^*M_2G)$ solve the approximation problem (1)–(2). Since the solution is unique, we get $G^*M_1G \equiv G^*M_2G$. Hence, for $M := (1-\alpha)M_1 + \alpha M_2$ with $\alpha \in [0,1]$, $M$ is a positive definite element of $\mathcal{M}$ satisfying $G^*MG \equiv G^*M_1G$. We get

$$\int G\,\frac{\Psi}{G^*MG}\,G^* = I.$$

We conclude that $\Theta(M) = M$, and $M \in \mathcal{M}_0$. Q.E.D.

Now, let $M$ be a singular matrix in $\bar{\mathcal{M}}_0$. Then, there exists a sequence $\{M_k\}_{k=0}^{\infty}$ with elements in $\mathcal{M}_0$ such that $M_k \to M$. It follows that $G^*M_kG \to G^*MG$ at each point of $\mathbb{T}$. Since $G^*M_kG$ does not vary with $k$, it also follows that $G^*MG \equiv G^*M_kG$. We conclude that any matrix $M \in \bar{\mathcal{M}}_0$ yields the optimal solution of the approximation problem via the formula

$$\Phi^{\circ} := \frac{\Psi}{G^*MG}.$$

Hence, it suffices that $M_k$ defined in (26) converges to any element of $\bar{\mathcal{M}}_0$ to obtain the solution of the primal problem (1)–(2).

In order to facilitate the analysis of the algorithm, rewrite the recursion (26) as follows. Let

$$N_k := \int G\,\frac{\Psi}{G^*M_kG}\,G^* \qquad(28)$$

so that

$$M_{k+1} = M_k^{1/2}\,N_k\,M_k^{1/2}, \qquad M_0 = \frac{1}{n}I \qquad(29)$$

$$N_{k+1} = \int G\,\frac{\Psi}{G^*M_k^{1/2}N_kM_k^{1/2}G}\,G^*, \qquad N_0 = n\int G\,\frac{\Psi}{G^*G}\,G^*, \qquad \forall k \ge 0. \qquad(30)$$

Notice that the $M_k$ and $N_k$ generated by the recursion (29)–(30) are all positive definite. Also notice that if $N_k = I$, then $M_{k+1} = M_k$ and $N_{k+1} = N_k$, namely we have a fixed point of the dynamical system. Moreover, if $N_k$ is a scalar matrix, then necessarily $N_k = I$ (since $\operatorname{trace}[M_{k+1}] = 1$), so that $N_{k+1} = I$ as well. Finally, notice that, since $M_k \le I$, the $N_k$ have the uniform lower bound

$$N_k \ge \int G\,\frac{\Psi}{G^*G}\,G^*.$$

Let us denote by

$$\lambda_k^- = \lambda_{\min}(N_k), \qquad \lambda_k^+ = \lambda_{\max}(N_k)$$

the minimum and maximum eigenvalues of $N_k$. We then have the following result.

Proposition 4.5:

$$\frac{1}{\lambda_k^+}\,N_k \le N_{k+1} \le \frac{1}{\lambda_k^-}\,N_k. \qquad(31)$$

Proof: Since $\lambda_k^- I \le N_k$, we get from (30)

$$N_{k+1} = \int G\,\frac{\Psi}{G^*M_k^{1/2}N_kM_k^{1/2}G}\,G^* \le \frac{1}{\lambda_k^-}\int G\,\frac{\Psi}{G^*M_kG}\,G^* = \frac{1}{\lambda_k^-}\,N_k.$$

Similarly for the other inequality. Q.E.D.

Corollary 4.6: For all $k \ge 1$, we have

$$\lambda_k^- \le 1 \le \lambda_k^+ \qquad(32)$$

$$\frac{\lambda_k^-}{\lambda_k^+} \le \lambda_{k+1}^- \le \lambda_{k+1}^+ \le \frac{\lambda_k^+}{\lambda_k^-}. \qquad(33)$$

Remark: Notice that Proposition 4.5 implies the following: If, for some $k$, $N_k \ge I$, then $N_{k+1} \le N_k$. Namely, the iteration for $N_k$ is monotonically nonincreasing on $\{N\ |\ N \ge I\}$. Similarly, it also follows from Proposition 4.5 that the iteration for $N_k$ is monotonically nondecreasing on $\{N\ |\ N \le I\}$. This suggests that the iteration features some intrinsic mechanism to converge toward the identity matrix. Corollary 4.6, nevertheless, shows that after one iteration, $N_k - I$ does not possess any definiteness property. We conjecture that the mechanism of the iteration for $N_k$ continues to work for the portion of $N_k$ greater than the identity and for the portion of $N_k$ smaller than the identity. More explicitly, we conjecture (but cannot quite prove) the following: let the iteration (29)–(30) be initialized at any matrix $M_0 = M$ in $\mathcal{M}\setminus\bar{\mathcal{M}}_0$ that is not a fixed point of $\Theta$. Then, the corresponding sequence $\{N_k\}$ always converges to the identity matrix, thereby providing the optimal solution of the approximation problem.
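The formulation (29)–(30) is convenient for monitoring convergence through the spectral bounds of Proposition 4.5 and Corollary 4.6. The sketch below (again our own, under the same assumptions and naming conventions as the sketch of the previous insertion) runs the recursion in its $N_k$ form and prints $\lambda_k^-$ and $\lambda_k^+$, which should approach 1 as $N_k$ approaches the identity.

```python
import numpy as np

def hermitian_sqrt(M):
    # Hermitian square root via eigendecomposition (M positive semidefinite)
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def N_of(M, A, B, Psi, m=2000):
    # N(M) = int G [Psi/(G* M G)] G* dtheta/(2 pi), cf. (28)
    n = A.shape[0]
    out = np.zeros((n, n), dtype=complex)
    for th in 2.0 * np.pi * np.arange(m) / m:
        G = np.linalg.solve(np.eye(n) - np.exp(1j * th) * A, B)
        q = np.real(G.conj().T @ M @ G).item()
        out += (Psi(th) / q) * (G @ G.conj().T) / m
    return out

def run_recursion(A, B, Psi, steps=20):
    # Recursion (29)-(30): M_{k+1} = M_k^{1/2} N_k M_k^{1/2}, starting from M_0 = I/n.
    # By Proposition 4.5, (1/lambda_k^+) N_k <= N_{k+1} <= (1/lambda_k^-) N_k.
    n = A.shape[0]
    M = np.eye(n) / n
    for k in range(steps):
        N = N_of(M, A, B, Psi)
        lam = np.linalg.eigvalsh(N)
        print(f"k={k:2d}  lambda_min(N_k)={lam[0]:.6f}  lambda_max(N_k)={lam[-1]:.6f}")
        R = hermitian_sqrt(M)
        M = R @ N @ R
    return M
```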
V. SIMULATION RESULTS

In the examples that follow, we have iterated the algorithm (26) with the stopping condition $\|M - \Theta(M)\| < 10^{-5}$.

Consider first the following instance of Example 1. Let

$$A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}.$$

The (estimated) finite covariance sequence is $c_0 = 3$, $c_1 = 1$, $c_2 = 1$, so that

$$\Sigma = \begin{bmatrix} 3 & 1 & 1\\ 1 & 3 & 1\\ 1 & 1 & 3 \end{bmatrix}$$

and the prior is $\Psi(z) = (z+2)(z^{-1}+2)$, normalized so that $\int\Psi = 1$. After the renormalization described at the beginning of Section IV to transform $\Sigma$ into the identity, we get

$$A_0 = \begin{bmatrix} 0.1462 & 1.0237 & 0.0237\\ 0.1462 & 0.0237 & 1.0237\\ -0.0475 & -0.1700 & -0.1700 \end{bmatrix}, \qquad B_0 = \begin{bmatrix} -0.0866\\ -0.0866\\ 0.6205 \end{bmatrix}.$$

The algorithm converged in 13 steps to the optimal matrix $M^{\circ} = \hat\Lambda_0$. The optimal solution $\Phi^{\circ}$ may now be computed through (19). The function $\Phi^{\circ}(e^{i\omega})$ is depicted in bold line together with $\Psi(e^{i\omega})$ (thin line) in Fig. 1.

Fig. 1. Graphics of $\Psi(e^{i\omega})$ and $\Phi^{\circ}(e^{i\omega})$ (bold line) as functions of $\omega$.

Consider next one instance of Example 2. Let

$$A = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0.99 & 0\\ 0 & 0 & -0.99 \end{bmatrix}, \qquad B = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} 1 & 1 & 1\\ 1 & 50.2513 & 0.5050\\ 1 & 0.5050 & 50.2513 \end{bmatrix}.$$

Also let $\Psi(z) = (z+1.1)(z-1.2)(z^{-1}+1.1)(z^{-1}-1.2)$, normalized so that $\int\Psi = 1$. After the usual renormalization, we get

$$A_0 = \begin{bmatrix} 0 & -0.1234 & 0.1234\\ 0.0173 & 0.9922 & 0.0075\\ -0.0173 & -0.0075 & -0.9922 \end{bmatrix}, \qquad B_0 = \begin{bmatrix} 0.9847\\ 0.1234\\ 0.1234 \end{bmatrix}.$$

The algorithm converged in four steps and the optimal matrix $M^{\circ} = \hat\Lambda_0$ is given by

$$M^{\circ} = \begin{bmatrix} 0.4328 & -0.1469 & 0.4325\\ -0.1469 & 0.1345 & -0.1469\\ 0.4325 & -0.1469 & 0.4328 \end{bmatrix}.$$

The optimal solution $\Phi^{\circ}(e^{i\omega})$ is depicted (bold line) together with $\Psi(e^{i\omega})$ (thin line) in Fig. 2.

Fig. 2. Graphics of $\Psi(e^{i\omega})$ and $\Phi^{\circ}(e^{i\omega})$ (bold line) as functions of $\omega$.

We have tested the algorithm, with similar results, also in many other cases of Examples 1–3; see [19], where some other simulation results are illustrated. The algorithm never failed to rapidly converge to a positive definite $M$ which, by Theorem 4.2, provides an optimal solution to the original problem. This happened even when the algorithm was initialized with an $M_0$ very close to one of the one-dimensional projections that are fixed points of the map $\Theta$ (Proposition 4.3). The latter, as observed before, do not, in general, yield the optimal solution $\hat\Phi$ via (15).
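As a usage illustration for the first simulation example (our own sketch, not part of the original note), the renormalization of Section IV that maps $\Sigma$ to the identity can be carried out as follows; the names and the printing are ours, and the commented call at the end refers to the hypothetical fixed_point_iteration helper of the Section IV sketch.

```python
import numpy as np

# Data of the first simulation example (covariance extension, n = 3)
c = [3.0, 1.0, 1.0]                                   # covariance lags c_0, c_1, c_2
n = len(c)
A = np.diag(np.ones(n - 1), 1)                        # shift matrix of Example 1
B = np.zeros((n, 1))
B[-1, 0] = 1.0
Sigma = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

# Renormalization of Section IV: replace (A, B) by (A_0, B_0) so that Sigma becomes I
w, V = np.linalg.eigh(Sigma)                          # Sigma is real symmetric, positive definite
S_half = (V * np.sqrt(w)) @ V.T                       # Sigma^{1/2}
S_inv_half = (V / np.sqrt(w)) @ V.T                   # Sigma^{-1/2}
A0 = S_inv_half @ A @ S_half
B0 = S_inv_half @ B
print(np.round(A0, 4))                                # should roughly match the A_0 reported above
print(np.round(B0, 4))                                # should roughly match the B_0 reported above
# M_opt, steps = fixed_point_iteration(A0, B0, Psi)   # hypothetical call, Psi normalized to integral 1
```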
REFERENCES

[1] C. I. Byrnes, T. Georgiou, and A. Lindquist, "A generalized entropy criterion for Nevanlinna-Pick interpolation with degree constraint," IEEE Trans. Autom. Control, vol. 46, no. 6, pp. 822–839, Jun. 2001.
[2] C. I. Byrnes, T. Georgiou, and A. Lindquist, "A new approach to spectral estimation: A tunable high-resolution spectral estimator," IEEE Trans. Signal Process., vol. 48, no. 11, pp. 3189–3205, Nov. 2000.
[3] C. I. Byrnes, S. Gusev, and A. Lindquist, "A convex optimization approach to the rational covariance extension problem," SIAM J. Control Optim., vol. 37, pp. 211–229, 1999.
[4] C. I. Byrnes, S. Gusev, and A. Lindquist, "From finite covariance windows to modeling filters: A convex optimization approach," SIAM Rev., vol. 43, pp. 645–675, 2001.
[5] C. I. Byrnes, H. J. Landau, and A. Lindquist, "On the well-posedness of the rational covariance extension problem," in Current and Future Directions in Applied Mathematics, M. Alber, B. Hu, and J. Rosenthal, Eds. Boston, MA: Birkhäuser, 1997, pp. 81–108.
[6] C. I. Byrnes, A. Lindquist, S. V. Gusev, and A. S. Matveev, "A complete parameterization of all positive rational extensions of a covariance sequence," IEEE Trans. Autom. Control, vol. 40, no. 11, pp. 1841–1857, Nov. 1995.
[7] C. I. Byrnes and A. Lindquist, "Interior point solutions of variational problems and global inverse function theorems," Tech. Rep. TRITA/MAT-01-OS13, 2001, submitted for publication.
[8] P. Enqvist, "A homotopy approach to rational covariance extension with degree constraint," Int. J. Appl. Math. Comp. Sci., vol. 11, pp. 1173–1201, 2001.
[9] T. Georgiou, "Realization of power spectra from partial covariance sequences," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-35, no. 4, pp. 438–449, Apr. 1987.
[10] T. Georgiou, "A topological approach to Nevanlinna-Pick interpolation," SIAM J. Math. Anal., vol. 18, pp. 1248–1260, 1987.
[11] T. Georgiou, "The interpolation problem with a degree constraint," IEEE Trans. Autom. Control, vol. 44, no. 3, pp. 631–635, Mar. 1999.
[12] T. Georgiou, "Spectral estimation by selective harmonic amplification," IEEE Trans. Autom. Control, vol. 46, no. 1, pp. 29–42, Jan. 2001.
[13] T. Georgiou, "The structure of state covariances and its relation to the power spectrum of the input," IEEE Trans. Autom. Control, vol. 47, no. 7, pp. 1056–1066, Jul. 2002.
[14] T. Georgiou, "Spectral analysis based on the state covariance: The maximum entropy spectrum and linear fractional parameterization," IEEE Trans. Autom. Control, vol. 47, no. 11, pp. 1811–1823, Nov. 2002.
[15] T. Georgiou and A. Lindquist, "Kullback–Leibler approximation of spectral density functions," IEEE Trans. Inform. Theory, vol. 49, no. 11, pp. 2910–2917, Nov. 2003.
[16] S. Haykin, Nonlinear Methods of Spectral Analysis. New York: Springer-Verlag, 1979.
[17] R. Nagamune, "A robust solver using a continuation method for Nevanlinna–Pick interpolation with degree constraint," IEEE Trans. Autom. Control, vol. 48, no. 1, pp. 113–117, Jan. 2003.
[18] A. N. Amini, E. Ebbini, and T. T. Georgiou, "Noninvasive estimation of tissue temperature via high-resolution spectral analysis techniques," IEEE Trans. Biomed. Eng., vol. 52, no. 2, pp. 221–228, Feb. 2005.
[19] M. Pavon and A. Ferrante, "A new algorithm for Kullback–Leibler approximation of spectral densities," in Proc. IEEE Conf. Decision and Control–European Control Conf. (CDC-ECC'05), Seville, Spain, Dec. 2005, pp. 7332–7337.