ISSN 0005-1179, Automation and Remote Control, 2008, Vol. 69, No. 11, pp. 1932–1945. © Pleiades Publishing, Ltd., 2008.
Original Russian Text © M.V. Khlebnikov, P.S. Shcherbakov, 2008, published in Avtomatika i Telemekhanika, 2008, No. 11, pp. 125–139.

ADAPTIVE AND ROBUST SYSTEMS

Petersen’s Lemma on Matrix Uncertainty and Its Generalizations

M. V. Khlebnikov and P. S. Shcherbakov

Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia

Received July 9, 2007
Abstract—Various generalizations and refinements are proposed for a well-known result on robust matrix sign-definiteness, which is extensively exploited in quadratic stability analysis, the design of robustly quadratically stabilizing controllers, the robust LQR problem, etc. The main emphasis is on formulating the results in terms of linear matrix inequalities.
PACS number: 02.10.Ud
DOI: 10.1134/S000511790811009X
1. INTRODUCTION
The subject of this work is the following model of matrix uncertainty, studied in [1]. Considered is a real symmetric matrix G ∈ S^{n×n} (from now on, S^{n×n} denotes the space of such n × n matrices) and its perturbation of the form

G + MΔN + N^TΔ^TM^T,    (1)

where Δ ∈ R^{p×q} is the perturbation matrix and M ∈ R^{n×p}, N ∈ R^{q×n} are fixed “frame” matrices of appropriate dimensions, which define the uncertainty structure. Note that the perturbation of the symmetric matrix G is specified by a matrix Δ that is not necessarily symmetric or even square.
Such an uncertainty model for symmetric matrices arises naturally when constructing quadratic Lyapunov functions for a dynamic system whose state matrix contains arbitrary (but norm-bounded) matrix uncertainty Δ. It is this generality that explains the wide range of applications where model (1) has been found useful. For this model, a necessary and sufficient condition was obtained in [1] for the inequality G + MΔN + N^TΔ^TM^T ⪯ 0 to hold for all norm-bounded perturbations Δ; in the present paper, these conditions will be referred to as Petersen’s lemma. Throughout the exposition, the notation A ⪯ B for matrices stands for the negative semidefiniteness of the matrix A − B, i.e., x^T(A − B)x ≤ 0 for all x ∈ R^n.
In the seminal paper [1], the result discussed here was exploited in the solution of a robust
version of the LQR-problem, and in [2, 3], it was used in the robust H∞ control design. Based on
this result, a common quadratic Lyapunov function for an interval matrix family was constructed
in [4, 5]; a similar uncertainty model was considered in [6] to derive a reduced vertex result on the
quadratic stability of interval systems.
The present paper is devoted to the analysis and generalizations of Petersen’s lemma, with the
main emphasis put on the connection of the results obtained with linear matrix inequalities (LMIs)
and semidefinite programming (SDP). The authors faced the need for such generalizations (which are of independent interest, in our opinion) when working on the problem of robust rejection of exogenous bounded disturbances in systems containing norm-bounded matrix uncertainty; see [7, 8]. The results obtained are collected in the present paper; see [9] for a preliminary conference version.
2. PETERSEN’S LEMMA
One of the auxiliary problems analyzed in [1] was the issue of robust sign-definiteness of the
matrix G in setup (1). The following result was obtained.
Petersen’s Lemma [1]. Let G = G^T, M ≠ 0, N ≠ 0 be matrices of appropriate dimensions. For all Δ ∈ R^{p×q}, ‖Δ‖ ≤ 1, the inequality

G + MΔN + N^TΔ^TM^T ⪯ 0    (2)

holds¹ if and only if there exists ε > 0 such that

G + εMM^T + (1/ε)N^TN ⪯ 0.    (3)

We note that in the lemma, the spectral matrix norm is used: ‖Δ‖ = max_i λ_i^{1/2}(ΔΔ^T), where λ_i(A) are the eigenvalues of A.
Prior to presenting our main results, we propose a simple proof of Petersen’s lemma; due to its
importance, the proof is given in the main body of the text.
Proof. Let the inequality

G + MΔN + N^TΔ^TM^T ⪯ 0

hold for all ‖Δ‖ ≤ 1. This is equivalent to

x^TGx + 2x^TMΔNx ≤ 0

for all x ∈ R^n and all ‖Δ‖ ≤ 1. Denoting x^TMΔ = y^T, we write the inequality above in the form

x^TGx + 2y^TNx ≤ 0

for all x ∈ R^n and y ∈ R^q such that

y^Ty = x^TMΔΔ^TM^Tx ≤ x^TMM^Tx.

Introducing

z = ( x ; y ) ∈ R^{n+q},   A_0 = ( G  N^T ; N  0 ),   A_1 = ( −MM^T  0 ; 0  I )

(where I and 0 stand for the identity and zero matrices of appropriate dimensions), we re-write it in the following matrix form: z^TA_0z ≤ 0 for all z such that z^TA_1z ≤ 0.

By applying the S-procedure with one constraint (e.g., see [10]), we conclude that the fulfilment of this condition is equivalent to the existence of ε ≥ 0 such that A_0 ⪯ εA_1, i.e.,

( G + εMM^T   N^T
  N           −εI ) ⪯ 0.    (4)

Finally, applying the Schur lemma to this non-strict matrix inequality ([10], p. 257), we arrive at the equivalent condition

G + εMM^T + (1/ε)N^TN ⪯ 0,   ε > 0.
Remark 1. The proof above is substantially based on the use of the S-procedure; this considerably simplifies the derivation of the result as compared to the one given in the original paper [1],
¹ The condition M ≠ 0, N ≠ 0 was not imposed in the original statement of the lemma in [1], since strict inequality was considered in (2).
where the authors had to formulate several auxiliary propositions on the properties of quadratic
forms in order to prove the necessity of condition (3). The sufficiency of condition (3) will be
separately proved in the sections to follow.
Remark 2. Petersen’s lemma reduces the robustness test for family (1) to condition (3), i.e., to a simple one-dimensional search. Note that, in equivalent form, condition (3) can be written as the LMI (4) with respect to the single scalar variable ε. This form of the condition will be used frequently in the sequel, leading to essential simplifications in computations.
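The one-dimensional search mentioned in Remark 2 is straightforward to implement numerically. Below is a minimal sketch (ours, not from the paper; the function name, grid, and tolerance are illustrative choices) that scans a logarithmic grid in ε and tests negative semidefiniteness of the matrix in condition (3):

```python
import numpy as np

def petersen_condition_holds(G, M, N, tol=1e-9):
    """Crude check of condition (3): does there exist eps > 0 with
    G + eps*M*M^T + (1/eps)*N^T*N negative semidefinite?
    A logarithmic grid stands in for the exact one-dimensional search."""
    for eps in np.logspace(-6, 6, 2500):
        W = G + eps * (M @ M.T) + (N.T @ N) / eps
        if np.linalg.eigvalsh(W).max() <= tol:
            return True
    return False
```

By Petersen’s lemma, a True answer certifies that G + MΔN + N^TΔ^TM^T ⪯ 0 for all ‖Δ‖ ≤ 1; since only a grid of ε values is inspected, the test is conservative in the False direction.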
3. RADIUS OF MATRIX SIGN-DEFINITENESS
Petersen’s lemma is an analysis result: it provides a necessary and sufficient condition for robust sign-definiteness of family (1) at a fixed level of uncertainty. A natural extension of this result is to find the maximal admissible level of perturbation Δ in (2) that retains sign-definiteness; this will be referred to as the radius of sign-definiteness of the family
G(Δ, γ) = G + MΔN + N^TΔ^TM^T,   ‖Δ‖ ≤ γ,    (5)

defined as

γmax = sup{ γ : G + MΔN + N^TΔ^TM^T ≺ 0 for all ‖Δ‖ ≤ γ }.
Throughout this section, it is assumed that G ≺ 0. Also, in this section, we characterize the worst-case (critical) perturbation Δ = Δcr, ‖Δcr‖ = γmax, which is, by definition, the “smallest” one that violates the inequality G(Δ, γmax) ≺ 0, i.e., λmax(G(Δcr, γmax)) = 0.
Prior to formulating the result on the radius of sign-definiteness, we note that the quantity γmax
can equally be defined as the maximal value of γ retaining the validity of the condition
G + γ( MΔN + N^TΔ^TM^T ) ⪯ 0   for all   ‖Δ‖ ≤ 1.    (6)
On the other hand, by Petersen’s lemma, the fulfillment of (6) for a fixed γ > 0 is equivalent to
the existence of ε > 0 such that
G + γ( εMM^T + (1/ε)N^TN ) ⪯ 0.    (7)
These expressions will be used in the sequel.
Theorem 1. Let λ* be the solution of the following semidefinite program:

min λ   subject to   ( λG + εMM^T   N^T    0
                       N            −εI    0
                       0             0    −λ ) ⪯ 0    (8)

with respect to the two scalar variables ε, λ. Then γmax = 1/λ*.
With this result, the computation of the robustness radius reduces to a standard semidefinite program, which can easily be solved numerically, e.g., using the well-known toolboxes SeDuMi and YALMIP in Matlab.
Theorem 1 presents a convenient tool for computing the robustness radius; however, it gives no insight into the “physical meaning” of the quantities involved.
Let us pre-multiply and post-multiply inequality (7) by (−G)^{−1/2} ≻ 0 and re-write it in the following equivalent form:

γ( εM̃M̃^T + (1/ε)Ñ^TÑ ) ⪯ I,

where it is denoted M̃ = (−G)^{−1/2}M, Ñ = N(−G)^{−1/2}; in other words, without loss of generality, we can consider G = −I in Petersen’s setup. It is now seen that the maximal value of γ retaining the last inequality for a fixed ε is equal to

γmax(ε) = 1 / λmax( εM̃M̃^T + (1/ε)Ñ^TÑ ),    (9)

so that the subsequent maximization in ε > 0 yields

γmax = 1 / min_{ε>0} λmax( εM̃M̃^T + (1/ε)Ñ^TÑ ),    (10)

which is, therefore, the maximal value of γ such that inequality (7) is satisfied for some ε > 0. By the equivalence of (7) and (6), this coincides with the radius of robustness. Hence, according to (10), for the solution λ* of problem (8) we have

λ* = min_{ε>0} λmax( εM̃M̃^T + (1/ε)Ñ^TÑ ).    (11)
Next, to find the worst-case uncertainty, we consider inequality (6) and pre-multiply and post-multiply it by (−G)^{−1/2} ≻ 0 to obtain the equivalent condition

γ( M̃ΔÑ + Ñ^TΔ^TM̃^T ) ⪯ I   for all   ‖Δ‖ ≤ 1,

or

γ x^T( M̃ΔÑ + Ñ^TΔ^TM̃^T )x ≤ 1   for all   ‖Δ‖ ≤ 1 and all ‖x‖ = 1,

i.e.,

2γ x^TM̃ΔÑx ≤ 1   for all   ‖Δ‖ ≤ 1 and all ‖x‖ = 1.    (12)

Denote a = M̃^Tx, b = Ñx. It is easily shown that for any vectors a, b and matrix Δ of compatible dimensions, the following relation holds:

max_{‖Δ‖≤1} a^TΔb = ‖a‖ ‖b‖

(in the spectral or Frobenius matrix norm), and the maximum is attained with

Δ* = ab^T / ( ‖a‖ ‖b‖ ).

Therefore, for a fixed x, the maximum of the left-hand side of (12) with respect to ‖Δ‖ ≤ 1 is attained at the rank-one matrix

Δ*(x) = M̃^Txx^TÑ^T / ( ‖M̃^Tx‖ ‖Ñx‖ ),    (13)

and the maximal value is equal to 2γ‖M̃^Tx‖ ‖Ñx‖. It now remains to maximize this quantity with respect to ‖x‖ = 1 to obtain

γmax = 1 / ( 2 max_{‖x‖=1} ‖M̃^Tx‖ ‖Ñx‖ ).    (14)

This relation will be used in the sequel.
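The rank-one maximizer (13) is easy to verify numerically. The following sketch (our own illustration; the variable names are hypothetical, with a and b playing the roles of M̃^Tx and Ñx) checks that Δ* = ab^T/(‖a‖‖b‖) is admissible and attains the bound ‖a‖‖b‖, while a random admissible Δ does not exceed it:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)      # plays the role of M~^T x
b = rng.standard_normal(4)      # plays the role of N~ x

# Rank-one maximizer of a^T Delta b over the spectral-norm ball ||Delta|| <= 1
Delta_star = np.outer(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

value_at_star = a @ Delta_star @ b              # equals ||a|| * ||b||
bound = np.linalg.norm(a) * np.linalg.norm(b)

# Any other admissible Delta cannot exceed the bound (Cauchy-Schwarz)
Delta = rng.standard_normal((3, 4))
Delta /= np.linalg.norm(Delta, 2)               # rescale to spectral norm 1
value_random = a @ Delta @ b
```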
Finally, let ε* denote the value of the parameter ε which attains the minimum in (11); then λ* is the corresponding maximal eigenvalue, and let e be the normalized eigenvector of the matrix ε*M̃M̃^T + (1/ε*)Ñ^TÑ associated with this eigenvalue λ*. We have

λ* = e^T( ε*M̃M̃^T + (1/ε*)Ñ^TÑ )e = ε*‖M̃^Te‖² + (1/ε*)‖Ñe‖² ≥ 2‖M̃^Te‖ ‖Ñe‖,

moreover, the equality is attained with ε* = ‖Ñe‖/‖M̃^Te‖. In accordance with (13), consider the admissible perturbation

Δ = M̃^Tee^TÑ^T / ( ‖M̃^Te‖ ‖Ñe‖ ),   ‖Δ‖ = 1,

so that −I + (1/λ*)( M̃ΔÑ + Ñ^TΔ^TM̃^T ) ⪯ 0 holds. We have

( M̃ΔÑ + Ñ^TΔ^TM̃^T )e = ( M̃M̃^Tee^TÑ^TÑ + Ñ^TÑee^TM̃M̃^T )e / ( ‖M̃^Te‖ ‖Ñe‖ )
= ( ‖Ñe‖/‖M̃^Te‖ ) M̃M̃^Te + ( ‖M̃^Te‖/‖Ñe‖ ) Ñ^TÑe
= ( ε*M̃M̃^T + (1/ε*)Ñ^TÑ )e = λ*e

by the definition of the quantities ε*, λ*, and e. In other words, this means

λmax( −I + (1/λ*)( M̃ΔÑ + Ñ^TΔ^TM̃^T ) ) = 0,

i.e., the considered perturbation is a worst-case one.
We formulate the results thus obtained as the theorem below.

Theorem 2. Let λ*, ε* be solutions of problem (8); then

λ* = λmax( ε*M̃M̃^T + (1/ε*)Ñ^TÑ ),

where M̃ = (−G)^{−1/2}M, Ñ = N(−G)^{−1/2}. Let e be the eigenvector of the matrix ε*M̃M̃^T + (1/ε*)Ñ^TÑ associated with the eigenvalue λ*. Then the worst-case perturbation has the form

Δcr = (1/λ*) M̃^Tee^TÑ^T / ( ‖M̃^Te‖ ‖Ñe‖ ),    (15)

and the following relation holds:

ε* = ‖Ñe‖ / ‖M̃^Te‖.    (16)
The quantity γmax can be determined by other means, namely, using the notion of a boundary oracle (see [11]). Indeed, inequality (7) can be thought of as a specification of the domain of negative semidefiniteness of the matrix family W(γ, ε) = G + γ( εMM^T + (1/ε)N^TN ) in the space of parameters γ, ε (under the additional constraint γ, ε > 0). For any fixed ε > 0, we have W(0, ε) = G ≺ 0, and the minimal value of γ > 0 that violates sign-definiteness of W(γ, ε) is equal to the minimal generalized eigenvalue (all of them are positive), see Lemma 1 in [11]:

γ(ε) = min_i λ_i( G, −( εMM^T + (1/ε)N^TN ) ).    (17)
We remind the reader that a number λ and a vector e are said to be a generalized eigenvalue and the associated generalized eigenvector of the pair of matrices A and B if Ae = λBe.

By optimizing (17) with respect to ε, we obtain

γmax = max_{ε>0} min_i λ_i( G, −( εMM^T + (1/ε)N^TN ) ),    (18)

which is the largest γ for which there exists ε > 0 such that matrix (7) is negative semidefinite.

The results of Theorem 2 can accordingly be formulated in terms of generalized eigenvalues. Namely,

γmax = λmin( G, −( ε*MM^T + (1/ε*)N^TN ) ) = λ*min.

Respectively, the worst-case perturbation can be taken in the form Δcr = λ*min M^Tee^TN^T / ( ‖M^Te‖ ‖Ne‖ ), where e is the generalized eigenvector associated with the indicated generalized eigenvalue, and moreover, the relation ε* = ‖Ne‖/‖M^Te‖ holds.

We also note that, since G ≺ 0 (i.e., nonsingular), the usual (not generalized) eigenvalues λ_i(ε) of the matrix G^{−1}( εMM^T + (1/ε)N^TN ) can be taken, and the smallest among them (they are all non-positive) is to be chosen. Then γ(ε) = −1/λmin(ε) and γmax = max_{ε>0} γ(ε).
The techniques used in the derivation of formulae (10) and (18) will be used below. A minor
drawback of these formulae is that the segment (0, ε] of variation of the parameter ε is not known
in advance.
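The two routes to γmax — formula (10) in the normalized setup, and the ordinary-eigenvalue form of (17)–(18), γ(ε) = −1/λmin(G^{−1}(εMM^T + (1/ε)N^TN)) — can be cross-checked on random data. The sketch below (our own; a grid search over ε approximates the one-dimensional optimization, so it is not an exact solver) computes both and compares them:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B0 = rng.standard_normal((n, n))
G = -(B0 @ B0.T + n * np.eye(n))        # random symmetric G < 0
M = rng.standard_normal((n, 3))
N = rng.standard_normal((2, n))

# S = (-G)^{-1/2}: symmetric square root used to normalize G to -I
w, V = np.linalg.eigh(-G)
S = V @ np.diag(w ** -0.5) @ V.T
Mt, Nt = S @ M, N @ S                   # M~ and N~ of the text

eps_grid = np.logspace(-4, 4, 4000)

# Formula (10): gamma_max = 1 / min_eps lambda_max(eps M~M~^T + (1/eps) N~^T N~)
lam10 = [np.linalg.eigvalsh(e * Mt @ Mt.T + (Nt.T @ Nt) / e).max() for e in eps_grid]
gamma_10 = 1.0 / min(lam10)

# Formulae (17)-(18) via ordinary eigenvalues of G^{-1}(eps MM^T + (1/eps) N^T N):
# they are real and non-positive, and gamma(eps) = -1/min_i lambda_i(eps)
Ginv = np.linalg.inv(G)
lam18 = [np.linalg.eigvals(Ginv @ (e * M @ M.T + (N.T @ N) / e)).real.min()
         for e in eps_grid]
gamma_18 = max(-1.0 / l for l in lam18)
```

Since the two expressions agree for every fixed ε, the grid maxima coincide up to rounding.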
Example 1. Let us consider G = −diag(1, 2, 3) and the following randomly generated square matrices M, N:

M = (  0.1822   0.0450   0.3062
      −0.2952   0.3110   0.1072
      −0.3844  −0.1084  −0.2866 ),

N = ( −0.3328   0.4952  −0.6984
       0.3360   0.2248  −0.4464
       0.7896   0.6108  −0.2980 ).

Figure 1 depicts the curve γ(ε) (17); together with the line γ = 0, it defines the domain of negative definiteness of the family G + γ( εMM^T + (1/ε)N^TN ), γ, ε > 0. Clearly, the plot of the function γ(ε) coincides with that of γmax(ε) (9); the maximum over all ε > 0 is equal to 1.2044. Solving problem (8) leads to the same value γmax = 1.2044.
Fig. 1. Computation of the robustness radius in Petersen’s setup.
4. RADIUS OF MATRIX NONSINGULARITY
The next extension of Petersen’s lemma concerns finding the radius of nonsingularity in problem (6) under the condition that the matrix G is symmetric but sign-indefinite. Reasoning similar to that used in the previous section leads to the following result, which we formulate in terms of generalized eigenvalues.
Theorem 3. Let G ∈ S^{n×n} be nonsingular; then the radius of nonsingularity

ρ(G, M, N) = sup{ ‖Δ‖ : G + MΔN + N^TΔ^TM^T is nonsingular }

is given by

ρ(G, M, N) = max_{ε>0} min_i |λ_i(ε)|,    (19)

where

λ_i(ε) = λ_i( G, −( εMM^T + (1/ε)N^TN ) )

are the generalized eigenvalues of the pair of matrices G and −( εMM^T + (1/ε)N^TN ). The worst-case perturbation is given by

Δcr = λ M^Tee^TN^T / ( ‖M^Te‖ ‖Ne‖ ),

where λ is the generalized eigenvalue that attains the optimum in (19), and e is the associated generalized eigenvector.
It is noted that the only difference between formula (19) and expression (18) for the radius
of sign-definiteness is the presence of the absolute value sign for the eigenvalues. This result is
expected, since nonsingularity is lost at an eigenvalue which is “closest to zero.”
Theorem 3 generalizes Petersen’s lemma to the case of symmetric sign-indefinite matrices, and for G ≺ 0 we arrive back at Theorem 1. On the other hand, in the special case where the frame matrices in (2) are identities, M = N = I, and the perturbation matrix Δ is assumed to be symmetric, the following result on the symmetric radius of nonsingularity is known.
Lemma [11]. For a nonsingular matrix G ∈ S^{n×n}, its symmetric radius of nonsingularity

ρ(G) = sup{ ‖Δ‖ : Δ ∈ S^{n×n}, G + Δ is nonsingular }

is given by

ρ(G) = 1/‖G^{−1}‖ = min_i |λ_i(G)|,

and the critical value of Δ is equal to Δcr = −λee^T, where λ is the minimal (in absolute value) eigenvalue of G, and e is the associated normalized eigenvector.
Hence, Theorem 3 extends this latter result to the more general uncertainty structure (2). For M = N = I we have G + Δ + Δ^T = G + 2Δ_1, where Δ_1 = (Δ + Δ^T)/2 ∈ S^{n×n}; we thus arrive at the conditions of the lemma above and obtain the relation ρ(G) = 2ρ(G, I, I).
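The lemma translates directly into a few lines of numerics. The sketch below (our illustration on a sample diagonal matrix) computes ρ(G) = min_i |λ_i(G)| and verifies that the critical perturbation Δcr = −λee^T has spectral norm ρ(G) and renders G + Δcr singular:

```python
import numpy as np

G = np.diag([-3.0, 1.0, 2.0])        # symmetric, nonsingular, sign-indefinite

lams, vecs = np.linalg.eigh(G)
k = int(np.argmin(np.abs(lams)))
lam = lams[k]                         # eigenvalue closest to zero
e = vecs[:, k]                        # associated normalized eigenvector

rho = abs(lam)                        # rho(G) = 1/||G^{-1}|| = min_i |lambda_i(G)|
Delta_cr = -lam * np.outer(e, e)      # critical symmetric perturbation
```

Here ρ(G) = 1, and G + Δcr = diag(−3, 0, 2) is singular, with e as a null vector.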
5. MULTIPLE UNCERTAINTIES
In this section, we analyze Petersen’s lemma in the situation where several uncertainties are brought into the picture. We consider ℓ > 1 independent perturbations Δ_i ∈ R^{p_i×q_i} in (6), where the matrices M_i, N_i have dimensions n × p_i and q_i × n, respectively. The goal is to check the condition

G + Σ_{i=1}^{ℓ} ( M_iΔ_iN_i + N_i^TΔ_i^TM_i^T ) ⪯ 0   for all   ‖Δ_i‖ ≤ γ,   i = 1, . . . , ℓ,    (20)

for a given uncertainty level γ and to find the robustness radius γmax, which is the maximal value of γ retaining the validity of (20). Clearly, the span γ can be considered common for all Δ_i by introducing appropriate scalar multipliers in the matrices M_i (or N_i).
In that case, Petersen’s lemma provides only a sufficient condition, which is formulated next.
Theorem 4. For a given γ > 0, condition (20) is satisfied if there exist positive scalars ε_1, . . . , ε_ℓ such that

G + γ Σ_{i=1}^{ℓ} ( ε_iM_iM_i^T + (1/ε_i)N_i^TN_i ) ⪯ 0.    (21)

The maximal value of γ retaining the validity of (21) for some ε_1, . . . , ε_ℓ > 0 is equal to γest = 1/λ*, where λ* is the solution of the SDP

min λ   subject to

( λG + Σ_{i=1}^{ℓ} ε_iM_iM_i^T   N_1^T    · · ·   N_ℓ^T    0
  N_1                            −ε_1 I   · · ·   0        0
  ⋮                               ⋮        ⋱       ⋮        ⋮
  N_ℓ                             0       · · ·   −ε_ℓ I   0
  0                               0       · · ·   0       −λ ) ⪯ 0    (22)

with respect to the scalar variables λ, ε_1, . . . , ε_ℓ.
Theorem 4 provides a simple way of computing the lower bound γest for the robustness radius γmax in the situation with multiple uncertainties, by reducing it to an appropriate semidefinite program. The accuracy of this estimate will be discussed in the subsections to follow.
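A brute-force version of Theorem 4 for ℓ = 2 can both compute the estimate and verify the sufficiency claim empirically. In the sketch below (ours; a grid over ε_1, ε_2 replaces the SDP (22), so the computed value is itself only a lower bound for γest), random perturbations with ‖Δ_i‖ just below the estimate are sampled and the family is checked to stay negative semidefinite:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
B0 = rng.standard_normal((n, n))
G = -(B0 @ B0.T + n * np.eye(n))                      # G = G^T < 0
Ms = [rng.standard_normal((n, 2)) for _ in range(2)]  # frames M_i
Ns = [rng.standard_normal((2, n)) for _ in range(2)]  # frames N_i

Ginv = np.linalg.inv(G)

def gamma_of(e1, e2):
    # Largest gamma with G + gamma*B(e1,e2) <= 0, via lambda_min(G^{-1} B)
    B = (e1 * Ms[0] @ Ms[0].T + (Ns[0].T @ Ns[0]) / e1
         + e2 * Ms[1] @ Ms[1].T + (Ns[1].T @ Ns[1]) / e2)
    return -1.0 / np.linalg.eigvals(Ginv @ B).real.min()

grid = np.logspace(-2, 2, 80)
gamma_est = max(gamma_of(e1, e2) for e1 in grid for e2 in grid)

# Sufficiency check: any ||Delta_i|| <= gamma_est keeps the family <= 0
worst = -np.inf
for _ in range(200):
    P = G.copy()
    for M, N in zip(Ms, Ns):
        D = rng.standard_normal((2, 2))
        D *= 0.999 * gamma_est / np.linalg.norm(D, 2)
        P += M @ D @ N + (M @ D @ N).T
    worst = max(worst, np.linalg.eigvalsh(P).max())
```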
5.1. Special Cases
Relation (21) provides just a sufficient condition for (20) to hold. This is explained by the fact that, in general, the S-procedure with ℓ ≥ 2 constraints is lossy; e.g., see [10]. However, in certain cases, this restriction can be removed.
First, assume that all the quantities involved are defined over the field of complex numbers. In that case, the S-procedure is lossless for two constraints (see [12]), and Petersen’s lemma provides necessary and sufficient conditions for robust sign-definiteness of a matrix subjected to ℓ = 2 independent perturbations.
The second case relates to more stringent conditions imposed on the uncertainty structure in (20). Specifically, denote by Δ = (Δ_1 . . . Δ_ℓ) the compound p × q matrix, where q = Σ_{i=1}^{ℓ} q_i, and assume that the uncertainty is subjected to the joint constraint ‖Δ‖ ≤ 1. Moreover, let M_i ≡ M. We then have the following result.
Lemma 1. Consider M_1 = . . . = M_ℓ = M in (20). Then

G + Σ_{i=1}^{ℓ} ( MΔ_iN_i + (MΔ_iN_i)^T ) ⪯ 0   for all   Δ_i : ‖(Δ_1 . . . Δ_ℓ)‖ ≤ 1

if and only if there exists an ε > 0 such that

G + εMM^T + (1/ε) Σ_{i=1}^{ℓ} N_i^TN_i ⪯ 0.
Obviously, a similar result holds for the case N_1 = . . . = N_ℓ = N.
5.2. Accuracy of the Estimate γest
In [4], a generalization of Petersen’s lemma was proposed; specifically, the fulfilment of (20) was claimed to be equivalent to the existence of positive ε_1, . . . , ε_ℓ such that (21) holds. Later, the proof was found to be erroneous, and in the paper [5] by the same authors, an anecdotal counterexample was provided showing that condition (21) is only sufficient for family (20) to be robustly sign-definite. The respective values were reported to be γmax ≈ 2.696 and γest ≈ 2.663; below, a meaningful counterexample is constructed, where the difference is much bigger.
Since the perturbations Δ_i in (20) are independent, similarly to the derivation of formula (14) we obtain

γmax = 1 / ( 2 max_{‖x‖=1} Σ_{i=1}^{ℓ} ‖M̃_i^Tx‖ ‖Ñ_ix‖ ),    (23)

whereas for γest, similarly to (10), we have

γest = 1 / min_{ε_i>0} λmax( Σ_{i=1}^{ℓ} ( ε_iM̃_iM̃_i^T + (1/ε_i)Ñ_i^TÑ_i ) ).    (24)

As usual, here it is denoted M̃_i = (−G)^{−1/2}M_i, Ñ_i = N_i(−G)^{−1/2}. We now indicate specific numerical values for G, M_i, N_i leading to γest < γmax.
Example 2. As in [4, 5], we consider the case of two scalar uncertainties Δ_1, Δ_2 ∈ R; then the matrices M_1, M_2 are column vectors, and the matrices N_1, N_2 are row vectors. We next consider G = −I; then formulae (23) and (24) take the form

γmax = 1 / ( 2 max_{‖x‖=1} ( |M_1^Tx| |N_1x| + |M_2^Tx| |N_2x| ) ),    (25)

γest = 1 / min_{ε_1,ε_2>0} λmax( ε_1M_1M_1^T + (1/ε_1)N_1^TN_1 + ε_2M_2M_2^T + (1/ε_2)N_2^TN_2 ).    (26)

Consider the following numerical values for the coefficient matrices:

M_1 = ( 0 ; 1 ),   N_1 = ( 1  0 ),   M_2 = ( √2/2 ; √2/2 ),   N_2 = ( √2/2  −√2/2 ).
Note that the vectors M_1, N_1^T and M_2, N_2^T are normalized, pairwise orthogonal, and the angle between the pairs is π/4.
For the denominator of (25) we have 2 max_{x_1²+x_2²=1} ( |x_1x_2| + (1/2)|x_1 + x_2| × |x_1 − x_2| ), so that, parameterizing x_1 = cos α, x_2 = sin α, after straightforward manipulations we arrive at max_{0≤α≤2π} ( |sin 2α| + |cos 2α| ), which is equal to √2, whence

γmax = √2/2.

Interestingly, in this example with two scalar uncertainties, the robustness radius can be computed by other means. The uncertain matrix (20) in this case writes

( −1   0 ; 0  −1 ) + Δ_1 ( 0  1 ; 1  0 ) + Δ_2 ( 1  0 ; 0  −1 ).

For this matrix, the whole domain of sign-definiteness on the plane of parameters Δ_1, Δ_2 can be easily characterized (see the discussion preceding formula (17)). Namely, it is seen to be the unit circle Δ_1² + Δ_2² ≤ 1 (cf. [11]), and the desired γmax is defined by the side of the maximal inscribed square and is equal to √2/2.
We next compute γest (26). The matrix in the denominator of (26) has the form

W(ε_1, ε_2) = ( 1/ε_1 + (1/2)(ε_2 + 1/ε_2)      (1/2)(ε_2 − 1/ε_2)
                (1/2)(ε_2 − 1/ε_2)        ε_1 + (1/2)(ε_2 + 1/ε_2) ),

with the characteristic polynomial being

λ² − ( (ε_1 + 1/ε_1) + (ε_2 + 1/ε_2) ) λ + (1/2)(ε_1 + 1/ε_1)(ε_2 + 1/ε_2) + 2,

so that the maximal eigenvalue is equal to

λmax(ε_1, ε_2) = (1/2)( (ε_1 + 1/ε_1) + (ε_2 + 1/ε_2) ) + (1/2) √( (ε_1 + 1/ε_1)² + (ε_2 + 1/ε_2)² − 8 ).

It is seen that the minimal value of λmax(ε_1, ε_2) is attained with ε_1 = ε_2 = 1 and is equal to 2, so that

γest = 1/2 < √2/2.
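The numbers of this example are reproduced by the direct sketch below (our code; for (26) we use a grid that contains the optimal point ε_1 = ε_2 = 1, and for (25) a fine angular grid over the unit circle):

```python
import numpy as np

s = np.sqrt(2) / 2
M1, N1 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
M2, N2 = np.array([s, s]), np.array([s, -s])

# Formula (25): gamma_max via maximization over the unit circle
alpha = np.linspace(0.0, 2.0 * np.pi, 200001)
X = np.stack([np.cos(alpha), np.sin(alpha)])
den = 2.0 * (np.abs(M1 @ X) * np.abs(N1 @ X) + np.abs(M2 @ X) * np.abs(N2 @ X))
gamma_max = 1.0 / den.max()                     # ~ sqrt(2)/2 = 0.7071...

# Formula (26): gamma_est via minimization of lambda_max over eps1, eps2
def lmax(e1, e2):
    W = (e1 * np.outer(M1, M1) + np.outer(N1, N1) / e1
         + e2 * np.outer(M2, M2) + np.outer(N2, N2) / e2)
    return np.linalg.eigvalsh(W).max()

grid = np.logspace(-1.0, 1.0, 201)              # includes eps = 1 exactly
gamma_est = 1.0 / min(lmax(e1, e2) for e1 in grid for e2 in grid)   # ~ 0.5
```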
In Fig. 2a, the function

γest(ε_1, ε_2) = 1/λmax( W(ε_1, ε_2) )

is plotted over the uniform 0.01-grid in ε_1, ε_2; the maximal value is 1/2. Figure 2b depicts the plot of the function

γmax(x) = 1 / ( 2( |M_1^Tx| |N_1x| + |M_2^Tx| |N_2x| ) )

for x on the unit circumference x_1² + x_2² = 1; the minimum value is equal to √2/2.
Fig. 2. Calculation of γmax, γest for two uncertainties via formulae (23) and (24).
Analogous examples where γest < γmax are easy to construct for ℓ > 2 scalar uncertainties, e.g., by choosing two-dimensional vectors in the form

M_i = ( cos(ω + (i − 1)δ) ; sin(ω + (i − 1)δ) ),   N_i = ( sin(ω + (i − 1)δ)   −cos(ω + (i − 1)δ) ),   i = 1, . . . , ℓ,   δ = π/(2ℓ),

where ω is arbitrary. On the other hand, examples where γest ≈ γmax are equally easy to construct; the value of γmax can be computed on a grid using formula (23).
At present, for the general case of matrix-valued uncertainties, the question of how conservative the estimate γest may be (i.e., what is the maximal value of the ratio γmax/γest over all M_i, N_i for fixed ℓ and dimensions dim M_i, dim N_i) remains open.
However, the accuracy of the estimate γest can be tested in the following way. Since the perturbations Δ_i vary independently of each other, it is readily seen that relations similar to (16) are valid for the optimal ε_i* (the solutions of problem (22) for ε_i). Namely, we have ε_i* = ‖Ñ_ie‖/‖M̃_i^Te‖, where e is the eigenvector associated with the maximal eigenvalue λ* of the matrix Σ_{i=1}^{ℓ} ( ε_i*M̃_iM̃_i^T + (1/ε_i*)Ñ_i^TÑ_i ). Hence, by analogy with (15), we consider

Δcr,i = γest M̃_i^Tee^TÑ_i^T / ( ‖M̃_i^Te‖ ‖Ñ_ie‖ ),   i = 1, . . . , ℓ,

as candidate worst-case perturbations. It then follows that γmax ≈ γest if

λmax( G + Σ_{i=1}^{ℓ} ( M_iΔcr,iN_i + N_i^TΔcr,i^TM_i^T ) ) ≈ 0.
5.3. Scalar Perturbations
This situation closely relates to the problem of robust stability of the n × n interval symmetric matrix family A + γΔ, where A = A^T ≺ 0, Δ = Δ^T, |Δ_ij| ≤ r_ij. Theorem 2 in [13] reduces this problem to checking the negative definiteness of the 2^{n−1} matrices A + γS_iRS_i, where R = (r_ij) is the symmetric matrix composed of the positive numbers r_ij, and S_i = diag(1, s_1, . . . , s_{n−1}), where s_j = ±1. For very high dimensions n, the solution becomes computationally intractable, since the constraint matrix in the respective SDP is of dimension n2^{n−1}. Instead, Theorem 4 yields an estimate which is easily computable by solving a semidefinite program with n(n + 1)/2 + 1 scalar variables and an LMI constraint matrix of size n + n(n + 1)/2 + 1.

As mentioned above, the quality of the estimate γest is hard to evaluate satisfactorily. Even for the case of scalar uncertainties, we could only obtain the obvious and very rough bound γmax/γest ≤ ℓ. However, simulations show that “on average,” the actual quantity γest differs only slightly from γmax. Thus, for 100 000 randomly generated 4 × 4 interval matrices (ℓ = 10 independent scalar uncertainties in Petersen’s setup), the maximal value of the ratio observed was γmax/γest ≈ 1.15, and for approximately 95% of the samples it did not exceed 1.01.
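The reduction just described can be sketched end-to-end for a small interval family: the n(n + 1)/2 entrywise uncertainties are written in Petersen form, a common-ε simplification of (21) gives a quickly computable lower bound, and the vertex test of [13] supplies the exact radius for comparison. All names and the common-ε shortcut below are our own simplifications; the resulting bound is weaker than γest from the SDP (22), but it is still guaranteed to lie below γmax:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 3
B0 = rng.standard_normal((n, n))
A = -(B0 @ B0.T + n * np.eye(n))            # A = A^T < 0
R = np.abs(rng.standard_normal((n, n)))
R = (R + R.T) / 2 + 0.1                     # symmetric positive radii r_ij

# Petersen frames: the (i,j) entry pair of Delta becomes a scalar uncertainty
# with M = e_i and N = r_ij e_j^T (halved on the diagonal to avoid doubling)
frames = []
for i in range(n):
    for j in range(i, n):
        M = np.zeros((n, 1)); M[i, 0] = 1.0
        N = np.zeros((1, n)); N[0, j] = R[i, j] if i != j else R[i, i] / 2.0
        frames.append((M, N))

Ainv = np.linalg.inv(A)

def gamma_common_eps(e):
    # Sufficient level (21) with a single common eps for all uncertainties
    B = sum(e * M @ M.T + (N.T @ N) / e for M, N in frames)
    return -1.0 / np.linalg.eigvals(Ainv @ B).real.min()

gamma_lb = max(gamma_common_eps(e) for e in np.logspace(-2, 2, 400))

# Exact radius via the 2^{n-1} vertex matrices A + gamma * S R S of [13]
w, V = np.linalg.eigh(-A)
T = V @ np.diag(w ** -0.5) @ V.T            # (-A)^{-1/2}
gammas = []
for signs in itertools.product((1.0, -1.0), repeat=n - 1):
    S = np.diag((1.0,) + signs)
    gammas.append(1.0 / np.linalg.eigvalsh(T @ S @ R @ S @ T).max())
gamma_vertex = min(gammas)
```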
6. CONCLUSION
In this paper, several new results are obtained which seem useful both for a deeper understanding of the criterion considered and for computing the robustness radius or its lower bound. Among these, we distinguish a transparent proof of Petersen’s lemma, an LMI formulation of the robustness condition, the result on the radius of nonsingularity, and the SDP formulation for computing the robustness radius or its estimate γest in the case of multiple uncertainties. Numerical experiments testify to the high accuracy of this estimate in the case of several scalar uncertainties; however, it seems very important to evaluate the quality of this estimate accurately in closed form.

The authors are grateful to B. Polyak for his interest in the subject of this paper and enlightening discussions. We are also indebted to an anonymous referee for pointing out the fallacy of several results presented in the initial version of the paper, as well as the extendability of Petersen’s lemma to the complex case (Section 5.1).
APPENDIX
Proof of Theorem 1. We re-write condition (7) as

(1/γ)G + εMM^T + (1/ε)N^TN ⪯ 0,

or, in block form, by using the Schur lemma:

( (1/γ)G + εMM^T   N^T
  N                −εI ) ⪯ 0.

Introducing λ = 1/γ and noting that γ > 0, we incorporate this condition into the constraints to obtain the augmented matrix inequality

( λG + εMM^T   N^T    0
  N            −εI    0
  0             0    −λ ) ⪯ 0.    (A.1)

Negative semidefiniteness of the original matrix (7) for some ε > 0 and γ > 0 is equivalent to feasibility of the linear matrix inequality (A.1) with respect to ε, λ. By solving the semidefinite program

min λ   subject to   (A.1),    (A.2)

we obtain the desired maximal value γmax = 1/λ*, where λ* is the solution of problem (A.2).
Proof of Theorem 4. Let us compose the block matrices

E(ε_i) = ( ε_iI  0 ; 0  I/ε_i ),   D_i(Δ_i) = ( 0  Δ_i ; Δ_i^T  0 ),   L_i = ( M_i^T ; N_i ),   i = 1, . . . , ℓ,

the block-diagonal matrices

E(ε_1, . . . , ε_ℓ) = E(ε) = diag( E(ε_1), . . . , E(ε_ℓ) ),   D(Δ) = diag( D_1, . . . , D_ℓ ),

and the block matrix

L = ( L_1 ; . . . ; L_ℓ ).

With this notation, condition (20) takes the form

G + L^TD(Δ)L ⪯ 0,    (A.3)

and the matrix in (21) writes

G + γL^TE(ε)L = W(γ, ε).

Given γ, assume there exist ε_i > 0 such that W(γ, ε) ⪯ 0, and show that (A.3) holds for all ‖Δ_i‖ ≤ γ. We have

W(γ, ε) − ( G + L^TD(Δ)L ) = L^T( γE(ε) − D(Δ) )L,

and the block-diagonal matrix γE(ε) − D(Δ) is positive semidefinite for all ‖Δ_i‖ ≤ γ if and only if the blocks are positive semidefinite, i.e.,

γ ( ε_iI  0 ; 0  I/ε_i ) − ( 0  Δ_i ; Δ_i^T  0 ) ⪰ 0   for all   Δ_iΔ_i^T ⪯ γ²I,   i = 1, . . . , ℓ.

The validity of the last relation follows after applying the Schur lemma.

Computing the quantity γest reduces to the SDP (22) similarly to what was done in Theorem 1.
Proof of Lemma 1. We have

G + Σ_{i=1}^{ℓ} ( MΔ_iN_i + (MΔ_iN_i)^T ) = G + M(Δ_1 . . . Δ_ℓ) ( N_1 ; . . . ; N_ℓ ) + ( N_1 ; . . . ; N_ℓ )^T (Δ_1 . . . Δ_ℓ)^T M^T.

Denoting Ñ = ( N_1 ; . . . ; N_ℓ ) ∈ R^{q×n}, we arrive at the standard Petersen setup G + MΔÑ + Ñ^TΔ^TM^T; applying Petersen’s lemma together with the identity Ñ^TÑ = Σ_{i=1}^{ℓ} N_i^TN_i completes the proof.
REFERENCES
1. Petersen, I., A Stabilization Algorithm for a Class of Uncertain Systems, Syst. Control Lett., 1987, vol. 8, pp. 351–357.
2. Xie, L., Output Feedback H∞ Control of Systems with Parameter Uncertainty, Int. J. Control, 1996, vol. 63, pp. 741–750.
3. Khargonekar, P.P., Petersen, I.R., and Zhou, K., Robust Stabilization of Uncertain Linear Systems: Quadratic Stabilizability and H∞ Control Theory, IEEE Trans. Automat. Control, 1990, vol. 35, no. 3, pp. 356–361.
4. Mao, W.-J. and Chu, J., Quadratic Stability and Stabilization of Dynamic Interval Systems, IEEE Trans. Automat. Control, 2003, vol. 48, no. 6, pp. 1007–1012.
5. Mao, W.-J. and Chu, J., Correction to “Quadratic Stability and Stabilization of Dynamic Interval Systems,” IEEE Trans. Automat. Control, 2006, vol. 51, no. 8, pp. 1404–1405.
6. Alamo, T., Tempo, R., Ramirez, D.R., and Camacho, E.F., A New Vertex Result for Robustness Problems with Interval Matrix Uncertainty, in Proc. Eur. Control Conf., Kos, Greece, 2007, paper ThC07.3.
7. Polyak, B.T., Shcherbakov, P.S., and Topunov, M.V., Invariant Ellipsoids Approach to Robust Rejection of Persistent Disturbances, in Proc. 17th World Congr. IFAC, Seoul, 2008, pp. 3976–3981.
8. Polyak, B.T., Topunov, M.V., and Shcherbakov, P.S., Robust Rejection of Exogenous Disturbances via Invariant Ellipsoid Technique, in Proc. XIV Int. Workshop on Dynamics and Control, Zvenigorod, 2007, p. 59.
9. Topunov, M.V. and Shcherbakov, P.S., Ramifications of Petersen’s Lemma on Uncertain Matrices, in Proc. XIV Int. Workshop on Dynamics and Control, Zvenigorod, 2007, p. 66.
10. Polyak, B.T. and Shcherbakov, P.S., Robastnaya ustoichivost’ i upravlenie (Robust Stability and Control), Moscow: Nauka, 2002.
11. Polyak, B.T. and Shcherbakov, P.S., The D-decomposition Technique for Linear Matrix Inequalities, Avtom. Telemekh., 2006, no. 11, pp. 159–174.
12. Brickman, L., On the Field of Values of a Matrix, Proc. Am. Math. Soc., 1961, no. 12, pp. 61–66.
13. Rohn, J., Positive Definiteness and Stability of Interval Matrices, SIAM J. Matrix Anal. Appl., 1994, vol. 15, no. 1, pp. 175–184.
This paper was recommended for publication by B.T. Polyak, a member of the Editorial Board