On the Estimation of the Equilibrium Points of
Uncertain Nonlinear Systems
Graziano Chesi
Department of Electrical and Electronic Engineering
The University of Hong Kong
Contact: http://www.eee.hku.hk/~chesi
Abstract
The analysis and control of nonlinear systems often requires information about the location of their equilibrium points. This paper
addresses the problem of estimating the set of equilibrium points of uncertain nonlinear systems, in particular systems whose dynamics are
described by a nonlinear function of the state depending polynomially
on an uncertainty vector constrained in a polytope. It is shown that
estimates of this set can be obtained by solving linear matrix inequality
(LMI) problems, which are built through sum of squares (SOS) techniques by introducing worst-case truncations of the nonlinearities and
by exploiting homogeneity of equivalent representations. In particular, the computation of estimates with fixed shape and the problem
of establishing their tightness are considered first. Then, the paper
shows how this methodology can be used to address the computation
of the minimum volume estimate and the construction of the smallest
convex estimate. Examples with random and real systems illustrate
the proposed methodology.
1 Introduction
The knowledge of the equilibrium points of a nonlinear system plays a key
role in numerous analysis and control issues, see e.g. [11] and references
therein. Models of real systems are often affected by uncertainty, which
means that the determination of the equilibrium points should be repeated
for all admissible values of the uncertainty. However, this is generally undesirable, and often even impossible. In fact, determining the equilibrium points
is a nontrivial problem even in the absence of uncertainty as it amounts to
solving a system of nonlinear equations, see e.g. [16, 22] for the case of polynomial equations and [18, 1] for the more general case. Moreover, the set
of admissible values of the uncertainty is typically infinite, and considering
a finite grid only could easily miss key equilibrium points achievable by the
system. Therefore, it appears necessary to provide concise descriptions of
the set of equilibrium points, e.g. through outer approximations of simple
shape.
This paper addresses the problem of estimating the set of equilibrium
points of uncertain nonlinear systems with nonpolynomial nonlinear functions of the state parametrized by an uncertain vector constrained in a
polytope. First, the problem of determining the smallest outer estimate
with fixed shape of the set of equilibrium points is considered. This estimate is expressed as a sublevel set of a given polynomial, and it is shown
that an upper bound of the optimal level can be obtained by solving linear
matrix inequality (LMI) problems. These problems are built through sum
of squares (SOS) techniques by introducing worst-case truncations of the
nonlinearities and by exploiting homogeneity of equivalent representations.
Moreover, a sufficient condition for establishing the tightness of the found
upper bound is provided, which is also necessary in the case that all the
nonlinearities are polynomial. Second, the paper addresses the computation
of outer estimates with variable shape of the set of equilibrium points. The
computation of the minimum volume estimate among the sublevel sets of
polynomials of a given degree is hence considered, and it is shown that a
candidate can be found by solving LMI problems where the volume of the
estimate is exactly minimized in the case of ellipsoidal sublevel sets and approximately minimized in the case of higher degree polynomials. Then, the
problem of constructing the smallest convex estimate of the sought set of
equilibrium points is considered. Examples with random and real systems
illustrate the proposed methodology.
Before proceeding, it is useful to mention that LMI problems have been
proposed for characterizing the equilibrium points of polynomial systems
without uncertainty (see e.g. [5, 6]) and with uncertainty (see e.g. [10]), and
for solving robust polynomial optimization (see e.g. [13]). The contribution
of this paper with respect to the existing literature is, in particular, to
consider nonpolynomial nonlinearities, to propose an alternative use of SOS
techniques for establishing estimates that exploits homogeneity of equivalent
representations, and to address the computation of estimates with variable
shape.
The paper is organized as follows. Section 2 provides the preliminaries.
Section 3 describes the proposed methodology. Section 4 presents some
examples. Lastly, Section 5 concludes the paper with some final remarks. A
preliminary version of this paper appeared in [4].
2 Preliminaries
The notation is as follows: N, R: natural number set (including 0) and real number set; 0n: origin of Rn; Rn0: Rn \ {0n}; In: n × n identity matrix; A′: transpose of a vector/matrix A; A > 0 (A ≥ 0): symmetric positive definite (semidefinite) matrix A; conv(S): convex hull of the elements in a set S; vol(S): volume of a set S; ‖x‖ = √(x′x) with x ∈ Rn; deg(p): degree of a polynomial p(x, y, . . .) where x, y, . . . are vector variables; s.t.: subject to.
Moreover, for φ ∈ Rr we define the functions
sq(φ) = (φ1², . . . , φr²)′,  jrp(φ) = (Σ_{i=1}^{r} √φi)⁻¹ (√φ1, . . . , √φr)′,  prj(φ) = sq(φ)/‖φ‖².   (1)
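For concreteness, the following minimal numpy sketch implements the maps in (1) as reconstructed above; the function names mirror (1) and are not part of any toolbox. It also checks the identity prj(jrp(φ)) = φ, which holds whenever the entries of φ are nonnegative and sum to one (the simplex used later) and which is used in the proofs of Section 3.

```python
import numpy as np

def sq(phi):
    # sq(phi) = (phi_1^2, ..., phi_r^2)'
    return phi ** 2

def jrp(phi):
    # jrp(phi) = (sqrt(phi_1), ..., sqrt(phi_r))' / sum_i sqrt(phi_i); requires phi_i >= 0
    root = np.sqrt(phi)
    return root / root.sum()

def prj(phi):
    # prj(phi) = sq(phi) / ||phi||^2
    return sq(phi) / np.dot(phi, phi)

phi = np.array([0.2, 0.3, 0.5])          # nonnegative entries summing to one
assert np.allclose(prj(jrp(phi)), phi)   # prj inverts jrp on such vectors
```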
Let us consider the uncertain nonlinear system
ẋ = f(x, θ),  θ ∈ Θ   (2)
where x ∈ Rn is the state, θ ∈ Rq is the time-invariant uncertain parameter,
and Θ ⊂ Rq is a bounded convex polytope expressed as
Θ = conv{θ(1), . . . , θ(r)}   (3)
where θ(1), . . . , θ(r) ∈ Rq are given vectors, and conv(·) denotes the convex hull. The i-th entry of the function f(x, θ) ∈ Rn has the form
fi(x, θ) = ai(x, θ) + Σ_{j=1}^{nc} bi,j(θ) cj(x_{sj})   (4)
where ai(x, θ) and bi,j(θ) are polynomial functions in x and θ, nc and sj are integers with sj ∈ [1, n], and cj(x_{sj}) are nonpolynomial functions of class C∞(R). The set of equilibrium points of (2) is
E = {x ∈ Rn : f(x, θ) = 0n for some θ ∈ Θ}.   (5)
The first problem addressed consists of determining outer estimates of E of the form
G(γ) = {x ∈ Rn : g(x) ≤ γ}   (6)
where g(x) is a given positive definite polynomial and γ ∈ R. In particular, the problem consists of determining the smallest outer estimate of E with fixed shape defined by g(x), which is denoted by G(γ∗) where γ∗ is the solution of the optimization problem
γ∗ = inf_{γ≥0} γ  s.t.  E ⊆ G(γ).   (7)
The second problem consists of determining less conservative outer estimates
of E by using a variable shape. This is firstly addressed by considering the
problem of determining the minimum volume outer estimate of E among the
sublevel sets of polynomials of a given degree, i.e. the estimate
G∗ = {x ∈ Rn : g∗(x) ≤ 1}   (8)
where g∗(x) is the solution of the optimization problem
g∗(x) = arg inf_g vol(G(1))  s.t.  E ⊆ G(1), g(x) is positive definite and deg(g) is fixed.   (9)
Then, we address the problem of approximating the smallest convex outer
estimate of E, i.e. the set
H∗ = conv(E).   (10)
3 Estimates Computation
3.1 Fixed Shape Estimates
First of all, let us express a generic θ in the set Θ as
θ = Σ_{i=1}^{r} φi θ(i)   (11)
where φ = (φ1, . . . , φr)′ is a vector in the simplex Φ given by
Φ = {φ ∈ Rr : Σ_{i=1}^{r} φi = 1, φi ≥ 0}.   (12)
Next, let γ0 ∈ R, and let G_{sj}(γ0) be the projection of G(γ0) on the sj-axis. For a chosen integer k, we define the polynomial
yi(x, θ, ξ) = ai(x, θ) + Σ_{j=1}^{nc} bi,j(θ) ( c̃j(x_{sj}) + ξj x_{sj}^{k+1} / (k + 1)! )   (13)
where c̃j(x_{sj}) is a truncated Taylor expansion of degree k around x_{sj} = 0 of cj(x_{sj}) for x_{sj} ∈ G_{sj}(γ0), and ξj ∈ R is the coefficient of the remainder in the Lagrange form. By substituting the expression of θ in (11) into yi(x, θ, ξ) and trivially multiplying each monomial by a suitable power of φ1 + . . . + φr, we can obtain a polynomial zi(x, φ, ξ) such that:
• zi (x, φ, ξ) = yi (x, θ, ξ) for all φ ∈ Φ and θ given by (11);
• zi (x, φ, ξ) is polynomial in x for fixed φ, ξ, homogeneous polynomial
in φ for fixed x, ξ, and linear in ξ for fixed x, φ.
We gather the polynomials zi(x, φ, ξ), i = 1, . . . , n, as
z(x, φ, ξ) = (z1(x, φ, ξ), . . . , zn(x, φ, ξ))′.   (14)
Next, let P be the set of real polynomials in x and φ that are polynomial in x for fixed φ and homogeneous polynomial in φ for fixed x. We say that p ∈ P is a sum of squares of polynomials (SOS) if there exist p1, p2, . . . ∈ P such that p(x, φ) = Σ_i pi(x, φ)². Moreover, for some u ∈ P^n, v ∈ P and γ ∈ R, we define the polynomial
w(x, φ, ξ) = u(x, φ)′ z(x, φ, ξ) + (γ − g(x)) v(x, φ)   (15)
and we introduce the condition
v(x, sq(ψ)) − 1 is SOS,  w(x, sq(ψ), ξ(i)) is SOS ∀i = 1, . . . , nv   (16)
where ψ ∈ Rr is a new variable and ξ(1), . . . , ξ(nv) are the vertices of the rectangle
Ξ = [ξ1−, ξ1+] × · · · × [ξ_{nc}−, ξ_{nc}+]   (17)
where ξj−, ξj+ ∈ R are any scalars satisfying
ξj− ≤ d^{k+1} cj(y) / dy^{k+1} ≤ ξj+  ∀y ∈ G_{sj}(γ0).   (18)
Theorem 1 For u ∈ P^n, v ∈ P and γ ∈ R, let us define the optimization problem
γ# = inf_{u,v,γ} γ  s.t.  (16) holds, deg(u) and deg(v) are fixed.   (19)
Then, if γ# ≤ γ0, it follows that
γ∗ ≤ γ#.   (20)
Proof. Let us suppose that the constraints in (19) are fulfilled, and let us consider any x ∈ E. We want to show that g(x) ≤ γ. To this end, let θ ∈ Θ be such that f(x, θ) = 0n. Let φ ∈ Φ satisfy (11), and let ξ be the coefficient of the remainder corresponding to x. One has that z(x, φ, ξ) = 0n. Since φi ≥ 0 for all i = 1, . . . , r, we can define the vector
ψ = jrp(φ).
Since ξ ∈ Ξ and w(x, sq(ψ), ξ) depends affinely on ξ, from the second constraint in (19) we obtain
u(x, sq(ψ))′ z(x, sq(ψ), ξ) + (γ − g(x)) v(x, sq(ψ)) ≥ 0.
Let us observe that
sq(ψ) = cφ,  c = (Σ_{i=1}^{r} √φi)⁻².
Moreover, we have that
z(x, cφ, ξ) = c^d z(x, φ, ξ)
where d is the degree of z(x, φ, ξ) in φ. Hence, from z(x, φ, ξ) = 0n one has that
(γ − g(x)) v(x, sq(ψ)) ≥ 0.
Finally, let us observe that v(x, sq(ψ)) ≥ 1 from the first constraint in (19), which implies that g(x) ≤ γ. Hence, E ⊆ G(γ).
Theorem 1 provides an upper bound of γ∗ in (7) via the optimization problem (19), where the condition f(x, θ) ≠ 0n for x ∈ Rn \ G(γ) and θ ∈ Θ is established by:
1. introducing the equivalent expression z(x, φ, ξ) of f (x, θ) (which is
polynomial in x, homogeneous polynomial in φ, and linear in ξ) via
Taylor expansion and expressing θ through the simplex;
2. exploiting Positivstellensatz, see e.g. [21], which in this case consists
of introducing the polynomials u(x, φ) and v(x, φ);
3. getting rid of the constraint φ ∈ Φ by replacing φ with sq(ψ) and by
exploiting homogeneity in φ;
4. exploiting the linearity of z(x, φ, ξ) in ξ and the structure of Ξ.
Let us observe that, since u(x, φ) and v(x, φ) have fixed degree in (19),
the upper bound γ # can be simply obtained via a bisection search on γ,
where for any fixed γ one checks fulfillment of (16) via an LMI feasibility
test. See e.g. [19, 12, 14, 3] and references therein for details about the LMI
feasibility test for detecting a SOS polynomial, which is based on the Gram
matrix method also known as square matricial representation (SMR).
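A schematic Python sketch of this bisection is shown below. The oracle `sos_feasible(gamma)` stands for the LMI feasibility test of condition (16) at the given γ (for example carried out with an SOS/SDP toolbox); it is a hypothetical placeholder, not the interface of any specific package.

```python
def bisect_gamma(sos_feasible, gamma_lo, gamma_hi, tol=1e-4):
    """Bisection on gamma for problem (19) with fixed degrees of u and v.

    `gamma_hi` must be chosen large enough that (16) is feasible there, while
    `gamma_lo` is any value at which it is infeasible (e.g. 0 for positive definite g).
    Returns an upper bound gamma# of gamma* within tolerance `tol`.
    """
    assert sos_feasible(gamma_hi), "enlarge gamma_hi: (16) must hold at the upper endpoint"
    while gamma_hi - gamma_lo > tol:
        gamma_mid = 0.5 * (gamma_lo + gamma_hi)
        if sos_feasible(gamma_mid):
            gamma_hi = gamma_mid     # (16) holds: gamma_mid is itself a valid upper bound
        else:
            gamma_lo = gamma_mid     # infeasible: the bound must lie above gamma_mid
    return gamma_hi
```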
In the case that f (x, θ) is polynomial (i.e., nc = 0), the quantities ξ (i)
and Ξ do not need to be introduced (and hence γ0 is set to ∞ and nv is set
to 1), consequently (16) boils down to
v(x, sq(ψ)) − 1 is SOS,  w(x, sq(ψ)) is SOS.   (21)
In the case that f (x, θ) is nonpolynomial (i.e., nc > 0), one first chooses
γ0 and then determines scalars ξj− , ξj+ satisfying (18). These scalars can be
easily computed in general since cj (xsj ) are univariate functions. Let us
observe the following:
• if γ # ≤ γ0 , then γ # is an upper bound of γ ∗ according to Theorem
1. This upper bound can be further improved by repeating the procedure with γ0 = γ # (since this will provide not more conservative, and
possibly less conservative, bounds ξj− and ξj+ );
• if γ # > γ0, one can increase γ0 and repeat the procedure (a schematic loop covering both cases is sketched below).
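The two cases above can be organized into a simple outer loop, sketched below with hypothetical helpers: `remainder_bounds(gamma0)` is assumed to return scalars (ξ−, ξ+) satisfying (18) on G_{sj}(γ0), and `solve_19(xi_minus, xi_plus)` to return the upper bound γ# of problem (19) built with those bounds. Neither helper is part of the paper; they only fix the structure of the iteration.

```python
def refine_gamma(solve_19, remainder_bounds, gamma0, n_iter=5):
    """Iteratively tighten the upper bound of gamma* in the nonpolynomial case."""
    best = None                                  # best valid upper bound found so far
    for _ in range(n_iter):
        gamma_sharp = solve_19(*remainder_bounds(gamma0))
        if gamma_sharp <= gamma0:                # Theorem 1 applies: gamma# is valid
            best = gamma_sharp
            gamma0 = gamma_sharp                 # tighter xi bounds on the next pass
        else:
            gamma0 *= 2.0                        # gamma0 too small: enlarge and retry
    return best
```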
By solving (19) for increasing values of the degrees of u(x, φ) and v(x, φ),
one can obtain a sequence of tighter upper bounds γ # , similarly to the
hierarchy of bounds obtained in [12]. One way of doing this consists of
selecting the degrees of u(x, φ) and v(x, φ) as the largest ones such that the
degree of w(x, φ, ξ) is equal to 2k where k is a chosen integer: the upper
bound γ # is hence indexed by k, which provides the sought sequence.
Before proceeding, it is worth mentioning that the numerical complexity of the computation of γ# can be reduced by exploiting sparsity and symmetry of the Gram matrices of the polynomials v(x, sq(ψ)) − 1 and w(x, sq(ψ), ξ(i)) required to establish whether these polynomials are SOS, see e.g.
[23, 9]. Another way consists of exploiting the Newton polytope which may
reduce the size of these matrices, see e.g. [20].
3.2 Estimate Tightness
At this point, a question that naturally arises concerns the tightness of a
found upper bound: is γ # = γ ∗ ? The following theorem provides an answer
to this question.
Theorem 2 Let γ, u(x, φ) and v(x, φ) be such that (16) holds. Then, γ = γ∗ if there exist an integer i, x ∈ Rn and ψ ∈ Rr0 such that
w(x, sq(ψ), ξ(i)) = 0,  f(x, θ) = 0n,  g(x) = γ   (22)
where θ = Σ_{i=1}^{r} φi θ(i) and φ = prj(ψ). Moreover, in the case that f(x, θ) is polynomial (i.e., nc = 0), the condition is not only sufficient but also necessary.
Proof. “⇐” Let us suppose that (22) holds, and observe that prj(ψ) ∈ Φ, which means that x is an equilibrium point of the system since f(x, θ) = 0n. Moreover, x satisfies g(x) = γ, i.e. x lies on the boundary of G(γ). This implies that γ ≤ γ∗. Moreover, since (16) holds, then γ ≥ γ∗ from Theorem 1. Therefore, we conclude that γ = γ∗.
“⇒” Consider the case that f(x, θ) is polynomial (i.e., nc = 0), for which we have nv = 1. Let us suppose that γ = γ∗. Let x ∈ E be a tangent point between E and G(γ∗), and let θ ∈ Θ be such that f(x, θ) = 0n. Let φ ∈ Φ be the vector corresponding to θ via (11). Since φi ≥ 0 for all i = 1, . . . , r, we can define the vector ψ = jrp(φ). Let us observe that sq(ψ) = cφ where c > 0. Moreover, one has that
z(x, sq(ψ)) = c^d z(x, φ) = c^d f(x, θ) = 0n.
Since g(x) = γ∗ and z(x, sq(ψ)) = 0n, it follows that w(x, sq(ψ)) = 0. Moreover, one has that
prj(ψ) = prj(jrp(φ)) = φ
and hence (22) holds.
The condition of Theorem 2 requires, firstly, the quantities γ, u(x, φ)
and v(x, φ) satisfying (16): these quantities are simply the minimizers of
(19), and hence γ is the found upper bound γ # .
Secondly, one needs to search for x ∈ Rn and ψ ∈ Rr0 fulfilling (22) for some i: since w(x, sq(ψ), ξ(i)) is SOS, it has a positive semidefinite Gram matrix, and hence, at any point where w(x, sq(ψ), ξ(i)) vanishes, the vector of monomials in x and ψ used to define this matrix must lie in its null space. This allows one to find the candidates x and ψ that fulfill the first equation in (22) via linear algebra operations in the general case, see [7] for more details. Then, one checks the fulfillment of the other two equations in (22) via direct substitution.
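A toy sympy illustration of this null-space argument (not the paper's algorithm, which works with monomials in both x and ψ): for the univariate SOS polynomial w(x) = (x − 1)² with monomial vector m(x) = (1, x, x²)′ and the Gram matrix Q below, any zero of w must satisfy Q m(x) = 0, and solving that linear system recovers the candidate x = 1 by linear algebra alone.

```python
import sympy as sp

x = sp.Symbol('x')
m = sp.Matrix([1, x, x**2])                 # monomial vector
Q = sp.Matrix([[1, -1, 0],
               [-1, 1, 0],
               [0,  0, 0]])                 # PSD Gram matrix: m' Q m = (x - 1)^2

assert sp.expand((m.T * Q * m)[0]) == sp.expand((x - 1)**2)

# Zeros of the SOS polynomial force m(x) into the null space of Q, i.e. Q m(x) = 0.
equations = [e for e in Q * m if e != 0]    # drop identically zero rows
candidates = sp.solve(equations, x)
print(candidates)                           # x = 1, the only zero of w
```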
3.3 Variable Shape Estimates
3.3.1 Minimum Volume Estimate.
Let us start by showing how the methodology proposed in the previous
sections can further be elaborated in order to search for the minimum volume
estimate of E defined by (8)–(9).
From Theorem 1 a candidate of g∗(x) can be found by solving
inf_{g,u,v} vol(G(1))  s.t.  g(x) is positive definite, (16) holds with γ = 1, deg(g), deg(u) and deg(v) are fixed.   (23)
If g(x) is quadratic, then we can express g(x) as
g(x) = (x − x0)′ G(x − x0)   (24)
where x0 ∈ Rn and G > 0, and vol(G(1)) is given by
vol(G(1)) = η(n) / √det(G)   (25)
where η(n) is a constant depending on n only (η(1) = 2, η(2) = π, η(3) = 4π/3, etc.). Hence, minimizing vol(G(1)) in (23) amounts to maximizing det(G). Let us observe that, with G > 0 and λ ∈ R, the condition
det(G) > λ   (26)
can equivalently be written through a suitable LMI in G, λ and some additional variables, see [2, 17]. Hence, if f (x, θ) is polynomial (i.e., nc = 0),
(23) can be replaced by
g#(x) = (x − x0)′ G#(x − x0)   (27)
where G# is the solution of the LMI problem
G# = arg sup_{G,u,v} det(G)  s.t.  G > 0, (16) holds with γ = 1, deg(u) and deg(v) are fixed.   (28)
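To illustrate why maximizing det(G) minimizes the volume (25), the hedged cvxpy sketch below solves the classical minimum-volume ellipsoid covering a finite set of points via log-det maximization. It is only a stand-in for (28): the SOS constraint (16) is replaced by simple point-coverage constraints, and the data are arbitrary.

```python
import cvxpy as cp
import numpy as np

pts = np.array([[2.0, 1.0], [-1.5, 0.5], [0.0, -2.0], [1.0, 2.0]])  # arbitrary toy points
n = pts.shape[1]

# Ellipsoid parametrized as {x : ||A x + b|| <= 1}; its volume is proportional to 1/det(A),
# so maximizing log det(A) minimizes the volume, just as maximizing det(G) does in (28).
A = cp.Variable((n, n), PSD=True)
b = cp.Variable(n)
constraints = [cp.norm(A @ p + b) <= 1 for p in pts]      # stand-in for "E contained in G(1)"
prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()                                              # needs an SDP-capable solver (e.g. SCS)

G = A.value.T @ A.value       # shape matrix: (x - x0)' G (x - x0) <= 1 with x0 = -inv(A) b
```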
If f(x, θ) is nonpolynomial (i.e., nc > 0), the vertices ξ(1), . . . , ξ(nv) have to be replaced with analogous ones, say ξ̃(1), . . . , ξ̃(nv), which do not depend on g(x) but still define worst-case remainders of f(x, θ) for x ∈ G(1). This can be done by introducing an a priori chosen region of the form
G̃ = {x ∈ Rn : (x − x0)′ G̃(x − x0) ≤ 1}
for some G̃ > 0, and requiring that G ≥ G̃ in (28) (so that G(1) ⊆ G̃) and that ξ̃(1), . . . , ξ̃(nv) satisfy (18) with G_{sj}(γ0) replaced by G̃_{sj}, where G̃_{sj} is the projection of G̃ on the sj-axis.
If g(x) has degree larger than 2, one possibility is to use a similar procedure to the one just described by maximizing the determinant of a Gram
matrix of g(x): this will approximately minimize vol(G(1)), see e.g. [15].
Figure 1: Procedure for estimating the convex hull of E. (a) For each point
x(i) on a spherical outer estimate centered at x0 , an outer estimate with
planar level sets perpendicular to x0 − x(i) is computed via Theorem 1. (b)
A convex estimate is obtained as intersection of the half-spaces identified by
the previous computations.
3.3.2 Smallest Convex Estimate.
Next, we consider the construction of the smallest convex estimate of E, i.e.
the set H∗ in (10). The basic idea consists of generating a convex polytopic
estimate via intersection of a sequence of half-spaces. Specifically, we start
by computing an outer estimate of E with spherical shape. This can be done
by choosing g(x) = ‖x − x0‖² for some x0 ∈ Rn and by exploiting Theorem
1. We hence obtain a sphere G(γ) which is guaranteed to contain E. Then,
we sample the boundary of G(γ) at ns points x(1) , . . . , x(ns ) , for example by
using equally spaced points. For each of these points, we compute an outer
estimate of E by choosing in place of g(x) the functions
gi(x) = ( (x0 − x(i))′ (x − x(i)) / ‖x0 − x(i)‖ )².   (29)
The level sets of gi (x) are planes orthogonal to the ray x0 −x(i) of the sphere,
and gi (x) is the square of the distance of the plane containing x from x(i) ,
see Figure 1a. Let γi be the upper bound provided by Theorem 1 for gi (x).
We hence obtain that the set Gi(γi), given by
Gi(γi) = {x ∈ Rn : gi(x) ≤ γi},   (30)
is an outer estimate of E. Lastly, we define
Hi = { x ∈ Rn : (x0 − x(i))′ (x − x(i)) / ‖x0 − x(i)‖ ≤ √γi }.   (31)
We have that Hi is a half-space delimited by one of the two planes delimiting
Gi (γi ) and containing E. From H1 , . . . , Hns we define the final estimate
H = ∩_{i=1}^{ns} Hi   (32)
which is a convex polytope containing E. See Figure 1b for details. It turns
out that, for any chosen x0 and ns ,
E ⊆ H∗ ⊆ H.   (33)
Let us observe that the procedure just described can also be used to obtain convex outer approximations of semialgebraic sets, by suitably modifying the definition of the polynomial w(x, φ, ξ).
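A schematic Python sketch of the construction for the planar case is given below. The oracle `gamma_bound(x0, xi)` is a hypothetical placeholder returning the Theorem 1 upper bound γi for the shape function gi in (29) associated with the sample point x(i); only the geometry of (29)–(32) is actually implemented here.

```python
import numpy as np

def convex_polytopic_estimate(gamma_bound, x0, radius, ns=18):
    """Half-space description of the convex estimate H in (32), planar case.

    `gamma_bound(x0, xi)` is assumed to return the Theorem 1 upper bound gamma_i
    for the shape function g_i in (29) defined by the sample point xi (hypothetical).
    Returns pairs (a, c) encoding the half-spaces H_i = {x : a'x <= c} of (31).
    """
    halfspaces = []
    for angle in np.linspace(0.0, 2.0 * np.pi, ns, endpoint=False):
        xi = x0 + radius * np.array([np.cos(angle), np.sin(angle)])  # sample point x^(i)
        gamma_i = gamma_bound(x0, xi)
        a = (x0 - xi) / np.linalg.norm(x0 - xi)   # unit normal of the level planes of g_i
        c = float(a @ xi) + np.sqrt(gamma_i)      # (31): a'(x - x^(i)) <= sqrt(gamma_i)
        halfspaces.append((a, c))
    return halfspaces                             # H is the intersection of these half-spaces
```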
4 Examples
In this section we present some illustrative examples of the proposed methodology. The LMI problems are solved by using the toolbox SeDuMi for Matlab
on a standard computer (Windows XP, Pentium IV 3.2 GHz, 2 GB RAM)
and the computational time is a few seconds in all examples. The polynomial
v(x, φ) is chosen constant and equal to 1, while u(x, φ) is variable of degree
two.
4.1 Example 1
Let us consider the uncertain nonlinear system
ẋ1 = x1² + (θ − 2)x1x2 + x2² + 3θ − 4
ẋ2 = x1² + (1 − 4θ)x1x2 + x2² + (1 − 2θ)x1 − 2θ
θ ∈ [0, 1].
In this case f(x, θ) is polynomial (i.e., nc = 0). By choosing θ(1) = 1 and θ(2) = 0, one has φ1 = θ and φ2 = 1 − θ, and z(x, φ) in (14) is given by
z(x, φ) = ( (φ1 + φ2)x1² − (φ1 + 2φ2)x1x2 + (φ1 + φ2)x2² − φ1 − 4φ2,
            (φ1 + φ2)x1² + (φ2 − 3φ1)x1x2 + (φ1 + φ2)x2² + (φ2 − φ1)x1 − 2φ1 )′.
Note that z(x, φ) does not contain ξ since f(x, θ) is polynomial in this case.
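As a sanity check (not taken from the paper), the representation above can be verified symbolically: substituting φ1 = θ and φ2 = 1 − θ into z(x, φ) must recover f(x, θ). A sympy sketch:

```python
import sympy as sp

x1, x2, th, p1, p2 = sp.symbols('x1 x2 theta phi1 phi2')

f = sp.Matrix([x1**2 + (th - 2)*x1*x2 + x2**2 + 3*th - 4,
               x1**2 + (1 - 4*th)*x1*x2 + x2**2 + (1 - 2*th)*x1 - 2*th])

z = sp.Matrix([(p1 + p2)*x1**2 - (p1 + 2*p2)*x1*x2 + (p1 + p2)*x2**2 - p1 - 4*p2,
               (p1 + p2)*x1**2 + (p2 - 3*p1)*x1*x2 + (p1 + p2)*x2**2 + (p2 - p1)*x1 - 2*p1])

# With theta^(1) = 1 and theta^(2) = 0, one has phi1 = theta and phi2 = 1 - theta.
residual = (z.subs({p1: th, p2: 1 - th}) - f).expand()
assert all(entry == 0 for entry in residual)   # z is an equivalent representation of f
```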
Let us select the shape function g(x) = ‖x‖². By solving (19) we find the upper bound γ# = 6.811 of γ∗. Figure 2a shows the boundary of the found estimate G(γ#) and the equilibrium points of the system computed for 101 values of θ equally spaced in [0, 1].
In order to establish whether the found upper bound γ # is tight, we use
Theorem 2, in particular (22) holds with ψ # = (0.790, 0.613)′ and x# =
(1.857, 1.834)′ . This implies that γ # is tight, i.e. γ # = γ ∗ . Moreover, from
Theorem 2 we have that x# is an equilibrium point of the system achieved
for the uncertain parameter
φ# = prj(ψ # ) = (0.624, 0.376)′ .
Figure 2b shows the equilibrium points of the system achieved for φ# , i.e.
for θ = 0.624.
It is worth observing that the computation of the equilibrium points
shown in Figure 2a for 101 values of θ is unable to find the extreme equilibrium points that delimit the outer estimate of E (in fact, no equilibrium
point lies on the boundary of G(γ # ) in Figure 2a). Such an extreme point
is obtained for φ# found via Theorem 2 and is shown in Figure 2b.
Figure 2c shows the smallest outer estimate with ellipsoidal shape G# centered at the origin obtained as described in Section 3.3. In particular, the polynomial g#(x) in (27) is given by g#(x) = 0.299x1² − 0.354x1x2 + 0.346x2². Figure 2d shows the polytopic convex estimate H obtained as in Section 3.3 by using 18 points equally spaced on the boundary of G(γ#).
Figure 2: Example 1. (a) Boundary of the estimate G(γ#) for g(x) = ‖x‖² (red disc) and equilibrium points for 101 values of θ equally spaced in [0, 1] (black dots). (b) Equilibrium points of the system for the uncertain parameter φ# found with Theorem 2 (the blue square is x#). (c) Minimum volume estimate with ellipsoidal shape (red ellipse). (d) Convex estimate constructed with 18 half-spaces.
4.2 Example 2
Let us consider the uncertain nonlinear system
ẋ1 = x1² − x1x2 + 3x2² + 2θx1 − 1
ẋ2 = x2² + 2x1x2 − θ − sin x1
θ ∈ [0, 1].
In this case f(x, θ) is nonpolynomial due to the presence of sin x1. Let us select the shape function g(x) = ‖x‖². For simplicity we set γ0 = ∞, and (18) is simply satisfied by choosing
ξ1− = −1,  ξ1+ = 1
for any truncation order k. By using k = 1, 3, 5 we find the upper bounds
γ1# = 6.9931, γ3# = 6.7993 and γ5# = 6.2889. Figure 3a shows the boundary of the found estimate G(γ5# ) and the equilibrium points of the system
computed for 101 values of θ equally spaced in [0, 1].
As we can see, the upper bound γ5# is nearly tight as there are equilibrium
points close to the boundary of G(γ5# ). Moreover, tighter upper bounds can
be obtained either by increasing k or by repeating the procedure with a
smaller γ0 and corresponding less conservative quantities ξ1− and ξ1+ (e.g.,
one can select γ0 = γ5# ).
Figure 3b shows the smallest outer estimate with ellipsoidal shape G# centered at the origin obtained as described in Section 3.3. In particular, the polynomial g#(x) in (27) is given by g#(x) = 0.169x1² − 0.296x1x2 + 2.053x2².
4.3 Example 3
Let us consider the electrical circuit in Figure 4a with three tunnel diodes
and two variable resistors. By indicating with xi the voltage of the capacitor
Ci, i = 1, 2, and selecting x2 as output, this system is described by
R0 C1 ẋ1 = E − x1 − x2 − (R0/R1(β1)) x1 − R0 h(x1) − R0 h(x1 + x2)
R0 C2 ẋ2 = E − x1 − x2 − (R0/R2(β2)) x1 − R0 h(x2) − R0 h(x1 + x2)
y = x2
where R1(β1) and R2(β2) are variable resistances, and h(·) is the voltage-to-current characteristic of the tunnel diodes (assumed equal for all three diodes for ease of presentation).
Figure 3: Example 2. (a) Boundary of the estimate G(γ5#) for g(x) = ‖x‖² (red disc) and equilibrium points for 101 values of θ equally spaced in [0, 1] (black dots). (b) Minimum volume estimate with ellipsoidal shape (red ellipse).
Let us consider the problem of determining the maximum absolute value
of the output of the system in steady-state, i.e.
y ∗ = sup {|x2 | : x ∈ E}
where E is the set of equilibrium points of the circuit. We select the plausible values E = 1.2 V, R0 = 500 Ω, R2 = 900 Ω, C1 = C2 = 2 pF, and
R1(β1) = 500 + 3000β1,  R2(β2) = 1500 + 3000β2,  β ∈ [0, 1]²
where R1(β1) and R2(β2) are measured in Ω. Moreover, we adopt the expression of h(·) in [8, 11] given by
h(z) = (17.76z − 103.79z² + 229.62z³ − 226.31z⁴ + 83.72z⁵) × 10⁻³
where z is measured in V and h(z) in A. For this system one can select
θ = (R1(β1)⁻¹, R2(β2)⁻¹)′
hence obtaining that r = 4 (number of vertices of Θ). Let us select the shape function g(x) = x2², hence implying γ∗ = (y∗)². We find the upper bound γ# = 0.762 V² of γ∗. In order to establish whether γ# is tight, we use Theorem 2, and we find that (22) holds with ψ# = (0, 0, 1, 0)′ and x# = (0.021, 0.873)′. This implies that γ# is tight, i.e. γ# = γ∗, and that x# is
an equilibrium point. Figure 4b shows the boundary of the found estimate
G(γ # ) (red dashed line).
Figure 4b also shows the minimum volume ellipsoidal estimate obtained as described in Section 3.3 for x0 = (E/2, E/2)′ (red solid line), for which G# in (28) is
G# = [ 14.228   9.886
        9.886  10.921 ],
and the polytopic convex estimate H constructed with 18 half-planes (dashed area).
Figure 4: Example 3. (a) Electrical circuit with three tunnel diodes and
two variable resistors. (b) Solution y ∗ (red dashed line), minimum volume
ellipsoidal estimate (red solid line), and polytopic convex estimate (dashed
area).
5 Conclusion
This paper has addressed the problem of estimating the set of equilibrium points of uncertain nonlinear systems with nonlinear functions of the state polynomially depending on an uncertain vector constrained in a polytope. In particular, this paper has proposed LMI techniques for computing outer estimates of this set in both cases of fixed and variable shape, and has provided conditions for establishing their tightness.
The benefit of the proposed methodology is twofold. First, determining outer estimates of the set of equilibrium points would otherwise require repeating the determination of the equilibrium points for all admissible values of the uncertainty. Second, the determination of the equilibrium points can hardly be carried out even for fixed values of the uncertainty, since solving a system of nonlinear equations is a difficult problem for which no method is guaranteed to find all the solutions.
References
[1] E. L. Allgower and K. Georg. Computational solution of nonlinear
systems of equations. American Mathematical Society, 1990.
[2] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix
Inequalities in System and Control Theory. SIAM, 1994.
[3] G. Chesi. LMI techniques for optimization over polynomials in control:
a survey. IEEE Transactions on Automatic Control, 55(11):2500–2510,
2010.
[4] G. Chesi. On the admissible equilibrium points of nonlinear dynamical systems affected by parametric uncertainty: Characterization via
LMIs. In IEEE Multi-Conference on Systems and Control, pages 351–
356, Yokohama, Japan, 2010.
[5] G. Chesi, A. Garulli, A. Tesi, and A. Vicino. An LMI-based approach
for characterizing the solution set of polynomial systems. In IEEE Conference on Decision and Control, pages 1501–1506, Sydney, Australia,
2000.
[6] G. Chesi, A. Garulli, A. Tesi, and A. Vicino. Characterizing the solution set of polynomial systems in terms of homogeneous forms: an
LMI approach. International Journal of Robust and Nonlinear Control,
13(13):1239–1257, 2003.
[7] G. Chesi, A. Garulli, A. Tesi, and A. Vicino. Homogeneous Polynomial
Forms for Robustness Analysis of Uncertain Systems. Springer, 2009.
[8] L. O. Chua, C. A. Desoer, and E. S. Kuh. Linear and Nonlinear Circuits. McGraw-Hill, 1987.
[9] E. de Klerk. Exploiting special structure in semidefinite programming:
A survey of theory and applications. European Journal of Operational
Research, 201(1):1–10, 2010.
[10] J. Hasenauer, P. Rumschinski, S. Waldherr, S. Borchers, F. Allgower,
and R. Findeisen. Guaranteed steady-state bounds for uncertain chemical processes. In International Symposium on Advanced Control of
Chemical Processes, 2010.
[11] H. K. Khalil. Nonlinear Systems. Prentice Hall, 2001.
[12] J.-B. Lasserre. Global optimization with polynomials and the problem
of moments. SIAM Journal of Optimization, 11(3):796–817, 2001.
[13] J.-B. Lasserre. Robust global optimization with polynomials. Mathematical Programming, 107(1):275–293, 2006.
[14] M. Laurent. Sums of squares, moment matrices and optimization over
polynomials. In M. Putinar and S. Sullivant, editors, Emerging Applications of Algebraic Geometry, Vol. 149 of IMA Volumes in Mathematics
and its Applications, pages 157–270. Springer, 2009.
[15] A. Magnani, S. Lall, and S. Boyd. Tractable fitting with convex polynomials via sum-of-squares. In IEEE Conference on Decision and Control and European Control Conference, pages 1672–1677, Seville, Spain,
2005.
[16] D. Manocha. Solving systems of polynomial equations. IEEE Computer
Graphics and Applications, 14:46–55, 1994.
[17] Y. Nesterov and A. Nemirovsky. Interior-Point Polynomial Methods in
Convex Programming. SIAM, 1994.
[18] J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear
Equations in Several Variables. SIAM, 1987.
[19] P. A. Parrilo. Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. PhD thesis, California
Institute of Technology, 2000.
[20] B. Reznick. Extremal PSD forms with few terms. Duke Mathematical
Journal, 45(2):363–374, 1978.
[21] G. Stengle. A nullstellensatz and a positivstellensatz in semialgebraic
geometry. Math. Ann., 207:87–97, 1974.
[22] B. Sturmfels. Solving Systems of Polynomial Equations. Amer. Math.
Soc., Providence, RI, 2002.
[23] H. Waki, S. Kim, M. Kojima, and M. Muramatsu. Sums of squares
and semidefinite programming relaxations for polynomial optimization
problems with structured sparsity. SIAM Journal on Optimization,
17(1):218–242, 2006.