
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL
Int. J. Robust Nonlinear Control 2003; 13:229–246 (DOI: 10.1002/rnc.815)

Robust model predictive control for nonlinear discrete-time systems

L. Magni¹·*, G. De Nicolao¹, R. Scattolini¹ and F. Allgöwer²

¹ Dipartimento di Informatica e Sistemistica, Università di Pavia, Via Ferrata 1, 27100 Pavia, Italy
² Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany
SUMMARY

This paper describes a model predictive control (MPC) algorithm for the solution of a state-feedback robust control problem for discrete-time nonlinear systems. The control law is obtained through the solution of a finite-horizon dynamic game and guarantees robust stability in the face of a class of bounded disturbances and/or parameter uncertainties. A simulation example is reported to show the applicability of the method. Copyright © 2003 John Wiley & Sons, Ltd.

KEY WORDS: nonlinear model predictive control; robust control; discrete-time systems
1. INTRODUCTION

This paper is motivated by the problem of designing closed-loop controllers ensuring robust stability in the face of bounded disturbances and/or parameter uncertainties for discrete-time nonlinear systems. A classical way to address this problem is to resort to $H_\infty$ control, see References [1–4], where a solution is presented for the state-feedback, the full-information, and the output-feedback case. The derivation of the $H_\infty$ control law, however, calls for the solution of a Hamilton–Jacobi–Isaacs (HJI) equation [3]; this is a difficult computational task which hampers the application to real systems. In order to overcome this problem, at least partially, the MPC technique, which is based on the solution of a finite-horizon problem according to the receding-horizon (RH) paradigm, appears to be a promising approach. In fact, solving a finite-horizon dynamic game is much simpler than finding the function of the state which satisfies the HJI equation. In an $H_\infty$ setting, RH schemes were first introduced in References [5, 6] for linear unconstrained systems and have recently been studied in References [7, 8] for constrained linear systems, while $H_\infty$-RH control algorithms for nonlinear continuous-time systems have been proposed in References [9, 10].
* Correspondence to: Lalo Magni, Dipartimento di Informatica e Sistemistica, Università di Pavia, Via Ferrata 1, 27100 Pavia, Italy. E-mail: [email protected]
E-mail: [email protected] (G. De Nicolao)
E-mail: [email protected] (R. Scattolini)
E-mail: [email protected] (F. Allgöwer)
Received 21 March 2002
Revised 25 August 2002
The basic ingredients of these nonlinear RH controllers are an auxiliary controller $\hat\kappa(x)$ (typically obtained by linearization techniques) and a computable invariant set $\Omega(\hat\kappa)$, inside which the auxiliary controller $\hat\kappa(x)$ solves the $H_\infty$ problem. The design of the MPC controller is motivated by the desire to ensure the solution of the $H_\infty$ problem in a set $\Omega^{RH}$ larger than the set where the auxiliary controller $\hat\kappa(x)$ solves the problem.
The approaches proposed in References [8–10] guarantee that $\Omega^{RH} \supseteq \Omega(\hat\kappa)$, but this is still not completely satisfactory. In fact, $\Omega(\hat\kappa)$ is just an easy-to-compute invariant set associated with the auxiliary controller $\hat\kappa$, and could be considerably smaller than the largest region of attraction $\Omega^M(\hat\kappa)$ where the auxiliary controller solves the $H_\infty$ control problem. In this respect, the approach introduced in Reference [9] does not guarantee that $\Omega^{RH} \supseteq \Omega^M(\hat\kappa)$, because only open-loop sequences are considered in the optimization problem, while this property is ensured by the methods described in References [8, 10], where optimization with respect to closed-loop strategies is considered, provided that a sufficiently long optimization horizon is used.
In the present paper, we propose a new solution to the state-feedback $H_\infty$-RH control problem that achieves $\Omega^{RH} \supseteq \Omega^M(\hat\kappa)$ with a substantial reduction of the computational burden. This is achieved by introducing a more flexible problem formulation, i.e. by using different control and prediction horizons. The use of two horizons has already been discussed in the context of nonlinear RH control in Reference [11], where it has been shown to lead to significant improvements in terms of performance, domain of attraction and computational burden.
The organization of the paper is as follows. In Section 2, the problem is formulated, while in
Section 3 the RH control solution is introduced with the main result. Section 4 proposes a
possible way to compute an auxiliary control law with its associated robust invariant set. A
simulation example is reported to discuss the characteristics of the method in Section 5. Section
6 contains some conclusions, while all the proofs are gathered in the appendix to improve
readability.
2. PROBLEM FORMULATION

Consider the nonlinear discrete-time dynamic system
$$x(k+1) = f(x(k), u(k), w(k)), \quad k \ge 0$$
$$z(k) = h(x(k), u(k), w(k)) \qquad (1)$$
where $x \in R^n$, $u \in R^m$, $w \in l_2([0,T]; R^p)$ for every positive integer $T$, $z \in R^s$, and $f$ and $h$ are $C^2$ functions with $f(0,0,0) = 0$, $h(0,0,0) = 0$ and $x(0) = x_0$.

Assumption A1
Given a non-empty set $\bar\Omega$ containing the origin as an interior point, system (1) is zero-state detectable in $\bar\Omega$, i.e. $\forall x_0 \in \bar\Omega$ and $\forall u(\cdot)$ such that $x(k) \in \bar\Omega$, $\forall k \ge t$, we have
$$z(k)|_{w=0} = 0 \ \ \forall k \ge t \implies \lim_{k \to \infty} x(k) = 0$$
In this paper, we want to synthesize a state-feedback controller that ensures robust stability in the face of all disturbances $w$ satisfying the following assumption.

Assumption A2
Given a positive constant $\gamma_D$, the disturbance $w$ is such that
$$\|w(k)\|^2 \le \gamma_D^2 \|z(k)\|^2, \quad k \ge t \qquad (2)$$
The space of admissible disturbances will be denoted by $W(\gamma_D)$.

As is well known, Equation (2) also represents a wide class of modelling errors with respect to which robust stability is desired. To this end, we consider the $H_\infty$ control problem defined below.
Definition 1
$H_\infty$ control problem (P1): design a state-feedback control law
$$u = \kappa(x) \qquad (3)$$
guaranteeing that the closed-loop system (1)–(3) with input $w \in W(\gamma_D)$ and output $z$ has finite $L_2$-gain $\le \gamma$ in a finite positively invariant set $\Omega$; that is, $\forall x_0 \in \Omega$:

(i) $x(k) \in \Omega$, $\forall k \ge 0$;
(ii) there exists a finite quantity $\beta(x_0)$, $\beta(0) = 0$, such that $\forall T \ge 0$,
$$\sum_{i=0}^{T} \|z(i)\|^2 \le \gamma^2 \sum_{i=0}^{T} \|w(i)\|^2 + \beta(x_0)$$
for any non-zero $w \in W(\gamma_D)$.

Once such a control law is applied, it follows from the small-gain theorem that the closed-loop system (1)–(3) will be robustly stable in $\Omega$ for all uncertainties $w \in W(\gamma_D)$, provided that $\gamma_D < 1/\gamma$; see Reference [12].
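To make the role of the condition $\gamma_D < 1/\gamma$ explicit, note that combining condition (ii) with the disturbance bound (2) gives, for any $w \in W(\gamma_D)$,
$$\sum_{i=0}^{T} \|z(i)\|^2 \le \gamma^2 \sum_{i=0}^{T} \|w(i)\|^2 + \beta(x_0) \le \gamma^2\gamma_D^2 \sum_{i=0}^{T} \|z(i)\|^2 + \beta(x_0) \implies \sum_{i=0}^{T} \|z(i)\|^2 \le \frac{\beta(x_0)}{1 - \gamma^2\gamma_D^2}$$
so that the output energy is bounded uniformly in $T$; this is the energy-boundedness step behind the cited small-gain result.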
A partial result is derived in Reference [4], where it is shown that, under some regularity assumptions, a nonlinear $H_\infty$ control problem is locally solved by the linear $H_\infty$ control law synthesized on the linearization of (1). The main limitation of this result is that the invariance of the region $\Omega$ is not established; it is only shown that condition (ii) holds for all disturbances $w \in l_2[0,\infty)$ such that the state trajectory of system (1)–(3) does not leave $\Omega$. Among the few results presented in the literature, we recall Reference [13], where the construction of invariant subsets of the state space for nonlinear systems with persistent bounded disturbances is investigated.

Hereafter, given a controller $u = \kappa(x)$ that solves (P1), the symbol $\Omega^M(\kappa, \gamma, \gamma_D)$ will denote the largest invariant set, while $\Omega(\kappa, \gamma, \gamma_D)$ will denote a computable invariant set that can be assumed to be known. Starting from an available auxiliary control law $u = \hat\kappa(x)$ with an associated invariant set $\Omega(\hat\kappa, \gamma, \gamma_D)$, we want to obtain a control law $u = \kappa^{RH}(x)$ whose associated largest invariant set $\Omega^M(\kappa^{RH}, \gamma, \gamma_D)$ is such that:

(a) $\Omega^M(\kappa^{RH}, \gamma, \gamma_D) \supseteq \Omega(\hat\kappa, \gamma, \gamma_D)$;
(b) $\Omega^M(\kappa^{RH}, \gamma, \gamma_D)$ tends to $\bar\Omega^M$ by a proper choice of the design parameters, where $\bar\Omega^M$ denotes the largest domain of attraction achievable by a control law that solves (P1).
3. MODEL PREDICTIVE CONTROL LAW

3.1. Problem statement

The derivation of the MPC control law is based on the solution of a finite-horizon zero-sum dynamic game, where $u$ is the input of the minimizing player (the controller) and $w$ is the input of the maximizing player ('nature'). More precisely, the controller chooses the input $u(k)$ as a function of the current state $x(k)$ so as to ensure that the effect of the disturbance $w(\cdot)$ on the output $z(\cdot)$ is sufficiently small for any choice of $w(\cdot)$ made by 'nature'.

In the following, according to the RH paradigm, we will focus on a finite time interval $[t, t+N_p-1]$. At a given time $t$, the controller has to choose a vector of feedback control strategies $\kappa_{t,t+N_c-1} := [\kappa_0(x(t)), \ldots, \kappa_{N_c-1}(x(t+N_c-1))]$, where each $\kappa_i(\cdot): R^n \to R^m$ will be called a policy and $N_c \le N_p$ is the control horizon. At the end of the control horizon, i.e. in the interval $[t+N_c, t+N_p-1]$, an auxiliary state-feedback control law $u = \hat\kappa(x)$ will be used. The sequence of disturbances chosen by 'nature' will be denoted by $w_{t,t+N_p-1} := [w(t), \ldots, w(t+N_p-1)]$, where $N_p > N_c$ is the prediction horizon.

Differently from the standard RH approach, in this case it is not convenient to consider open-loop control strategies, since open-loop control would not account for changes in the state due to unpredictable inputs played by 'nature' (see also Reference [8]). Hence, at each time $t$, the minimizing player optimizes his sequence $\kappa_{t,t+N_c-1}$ of policies, i.e. the minimization is carried out in an infinite-dimensional space. Conversely, in open loop it would be sufficient to minimize with respect to the sequence $[u(t), u(t+1), \ldots, u(t+N_c-1)]$ of future control actions, a sequence which belongs to a finite-dimensional space. This would be much simpler, but could not guarantee the same level of performance as the closed-loop strategy.
Assumption A3
There exists an auxiliary control law $u = \hat\kappa(x)$ that solves problem (P1), with a domain of attraction $\Omega(\hat\kappa, \gamma, \gamma_D)$ whose boundary is a level line of a positive (storage) function $V_{\hat\kappa}(x)$ such that
$$V_{\hat\kappa}(f(x, \hat\kappa(x), w)) - V_{\hat\kappa}(x) \le -\tfrac{1}{2}(\|z\|^2 - \gamma^2\|w\|^2) \quad \forall x \in \Omega(\hat\kappa, \gamma, \gamma_D), \ \forall w \in W(\gamma_D) \qquad (4)$$
and $V_{\hat\kappa}(0) = 0$.

In the next section a practical way to compute a nonlinear control law $\hat\kappa$ satisfying Assumption A3, based on the solution of the $H_\infty$ control problem for the linearized system, is derived for nonlinear affine systems.
Definition 2
Finite-Horizon Optimal Dynamic Game (FHODG): minimize with respect to $\kappa_{t,t+N_c-1}$, and maximize with respect to $w_{t,t+N_p-1}$, the cost function
$$J(\bar x, \kappa_{t,t+N_c-1}, w_{t,t+N_p-1}, N_c, N_p) = \frac{1}{2}\sum_{i=t}^{t+N_p-1} \{\|z(i)\|^2 - \gamma^2\|w(i)\|^2\} + V_{\hat\kappa}(x(t+N_p))$$
subject to (1) with $x(t) = \bar x$, to the terminal constraint
$$x(t+N_p) \in \Omega(\hat\kappa, \gamma, \gamma_D) \subseteq R^n$$
and to
$$u(i) = \begin{cases} \kappa_{i-t}(x(i)), & t \le i \le t+N_c-1 \\ \hat\kappa(x(i)), & t+N_c \le i \le t+N_p-1 \end{cases}$$

In the previous definition, $\gamma$ is a constant which can be interpreted as the disturbance attenuation level.
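For concreteness, the following sketch shows how the cost of Definition 2 can be evaluated by rolling out model (1); it is an illustrative Python fragment, not code from the paper, and the names (fhodg_cost, kappa_hat, V_terminal) are assumptions of this sketch. The terminal constraint $x(t+N_p) \in \Omega(\hat\kappa, \gamma, \gamma_D)$ must be checked separately, e.g. as $V_{\hat\kappa}(x(t+N_p)) \le \alpha$ for the ellipsoidal set of Proposition 2.

```python
import numpy as np

def fhodg_cost(x_bar, policies, w_seq, Nc, Np, f, h, kappa_hat,
               V_terminal, gamma):
    """Evaluate J(x_bar, kappa_{t,t+Nc-1}, w_{t,t+Np-1}, Nc, Np) of
    Definition 2 by simulating model (1) from x(t) = x_bar.

    policies : list of Nc state-feedback functions kappa_0, ..., kappa_{Nc-1}
    w_seq    : sequence of Np disturbance vectors w(t), ..., w(t+Np-1)
    """
    x, J = np.asarray(x_bar, dtype=float), 0.0
    for i in range(Np):
        # policies over the control horizon, auxiliary law afterwards
        u = policies[i](x) if i < Nc else kappa_hat(x)
        z, w = h(x, u, w_seq[i]), np.asarray(w_seq[i])
        J += 0.5 * (z @ z - gamma**2 * (w @ w))   # stage cost of the game
        x = f(x, u, w_seq[i])                     # state update, model (1)
    return J + V_terminal(x)                      # terminal penalty V_khat
```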
For a given initial condition $\bar x \in R^n$, we denote by
$$\kappa^o_{t,t+N_c-1} = \arg\min_{\kappa_{t,t+N_c-1}} \max_{w_{t,t+N_p-1}} J(\bar x, \kappa_{t,t+N_c-1}, w_{t,t+N_p-1}, N_c, N_p)$$
and
$$w^o_{t,t+N_p-1} = \arg\max_{w_{t,t+N_p-1}} J(\bar x, \kappa^o_{t,t+N_c-1}, w_{t,t+N_p-1}, N_c, N_p)$$
the saddle-point solution, if it exists, of the zero-sum finite-horizon optimal dynamic game (FHODG).

According to the RH paradigm, we obtain the value of the feedback control law as a function of $\bar x$ by solving the FHODG and setting
$$\kappa^{RH}(\bar x) = \kappa^o_0(\bar x) \qquad (5)$$
where $\kappa^o_0(\bar x)$ is the first element of $\kappa^o_{t,t+N_c-1} := [\kappa^o_0(x(t)), \ldots, \kappa^o_{N_c-1}(x(t+N_c-1))]$.
3.2. Algorithm

In summary, the implementation of the proposed RH controller consists of the following steps.

Off-line computations: computation of $\hat\kappa(x)$, $\Omega(\hat\kappa, \gamma, \gamma_D)$ and $V_{\hat\kappa}(x)$ (see Section 4).

On-line computations:
1. At each time instant $k$, let $t = k$, $\bar x = x(t) = x(k)$, and compute $\kappa^o_{t,t+N_c-1}$ by solving the finite-horizon optimal dynamic game formulated in Definition 2.
2. Apply the control action $u(k) = \kappa^{RH}(x(k))$, where $\kappa^{RH}$ is defined in (5).
3. At the next time instant $k+1$, let $t = k+1$, $\bar x = x(t) = x(k+1)$, and proceed as above.
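As an illustration of the on-line steps only, the sketch below approximates the min–max of Definition 2 by nested numerical optimization over a finite-dimensional policy parametrization (see the comments in Section 3.4). It is one possible approximation under stated assumptions, not the paper's prescribed solver: it reuses the hypothetical fhodg_cost above, ignores the terminal constraint, assumes a single input for brevity, and relies on generic scipy optimizers.

```python
import numpy as np
from scipy.optimize import minimize

def phi(x):
    # illustrative feature vector; each policy is kappa(x) = theta . phi(x)
    return np.array([x[0], x[1], x[0]**2, x[1]**2])

def rh_step(x_bar, theta0, Nc, Np, p, f, h, kappa_hat, V_terminal, gamma):
    """One on-line step: approximately solve the FHODG from x_bar and
    return u(k) = kappa_0(x_bar) together with the optimized parameters."""
    def cost(theta, w_flat):
        pols = [lambda x, th=th: float(th @ phi(x))
                for th in theta.reshape(Nc, -1)]
        return fhodg_cost(x_bar, pols, w_flat.reshape(Np, p),
                          Nc, Np, f, h, kappa_hat, V_terminal, gamma)
    def worst_case(theta):
        # inner maximization over disturbance sequences (max J = min -J)
        res = minimize(lambda w: -cost(theta, w), np.zeros(Np * p))
        return -res.fun
    theta = minimize(worst_case, theta0).x        # outer min over policies
    u0 = theta.reshape(Nc, -1)[0] @ phi(x_bar)    # apply only kappa_0
    return u0, theta
```

At each sampling instant the loop calls rh_step from the measured state, applies the first move only, and warm-starts the next game with the returned parameters, in accordance with the algorithm above.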
3.3. Properties

In order to establish the closed-loop stability properties of the RH controller, we first introduce the following definitions.

Definition 3
Let $K(\bar x, N_c, N_p)$ be the set of all policy vectors $\kappa_{t,t+N_c-1}$ such that, starting from $\bar x$, $x(t+N_p) \in \Omega(\hat\kappa, \gamma, \gamma_D)$ for every admissible disturbance sequence $w_{t,t+N_p-1}$.

Definition 4
Let $\Omega^{RH}(N_c, N_p)$ be the set of initial states $\bar x$ such that $K(\bar x, N_c, N_p)$ is non-empty.

Definition 5
Given the initial state $\bar x$ and a policy vector $\kappa_{t,t+N_c-1} \in K(\bar x, N_c, N_p)$, the set of admissible disturbance sequences $w_{t,t+N_p-1}$ is denoted by $W(\bar x, \kappa_{t,t+N_c-1}, N_p)$.
In the following, the optimal value of the FHODG will be denoted by $V(\bar x, N_c, N_p)$, i.e. $V(\bar x, N_c, N_p) := J(\bar x, \kappa^o_{t,t+N_c-1}, w^o_{t,t+N_p-1}, N_c, N_p)$. Now the main result can be stated.

Theorem 1
Given an auxiliary control law $\hat\kappa$ and an associated invariant set $\Omega(\hat\kappa, \gamma, \gamma_D)$, consider two positive constants $\gamma$ and $\gamma_D$ with $\gamma_D\gamma < 1$, and the closed-loop system
$$S^{RH}: \begin{cases} x(k+1) = f(x(k), \kappa^{RH}(x(k)), w(k)) \\ z(k) = h(x(k), \kappa^{RH}(x(k)), w(k)) \end{cases}$$
Then, under Assumptions A1–A3:

(i) $\Omega^{RH}(N_c, N_p)$ is a positively invariant set for $S^{RH}$ and $\Omega^{RH}(N_c, N_p) \supseteq \Omega(\hat\kappa, \gamma, \gamma_D)$, $\forall N_c, N_p > 0$;
(ii) $S^{RH}$ is internally stable and has $L_2$-gain less than or equal to $\gamma$ in $\Omega^{RH}(N_c, N_p)$;
(iii) $\Omega^{RH}(N_c+1, N_p) \supseteq \Omega^{RH}(N_c, N_p)$ and $\lim_{N_c \to \infty} \Omega^{RH}(N_c, N_p) = \bar\Omega^M$, where $\bar\Omega^M$ denotes the largest domain of attraction achievable by a control law solving (P1);
(iv) $\forall N_c > 0$, $\lim_{N_p \to \infty} \Omega^{RH}(N_c, N_p) \supseteq \Omega^M(\hat\kappa, \gamma, \gamma_D)$, where $\Omega^M(\hat\kappa, \gamma, \gamma_D)$ is the maximum invariant set associated with $\hat\kappa$.
3.4. Comments

* Point (i) is essential in order to ensure that it is worth replacing the auxiliary controller $\hat\kappa$ with the RH one $\kappa^{RH}$.
* Point (ii) states that the RH controller is indeed a solution to the $H_\infty$ control problem for the nonlinear system.
* Point (iii) shows that, at the cost of an increase of the computational burden associated with optimization, the domain of attraction can be progressively enlarged towards the maximum achievable one. In other words, $N_c$ is a tuning knob that regulates the complexity/performance trade-off.
* Point (iv) shows that, by suitably increasing the prediction horizon, the RH controller can be rendered better than the auxiliary one, not only in terms of the nominal set $\Omega(\hat\kappa, \gamma, \gamma_D)$ (see (i)), but also in terms of the actual domain of attraction. Remarkably, this property holds irrespective of the value of the control horizon $N_c$. In particular, it is possible to let $N_c = 1$, in which case it is not even necessary to optimize over policies in an infinite-dimensional space: $x(t) = \bar x$ is known and the first control move is just a vector belonging to an $m$-dimensional space.
* A major drawback of the approach proposed in the present paper is of a computational type, since the implementation of the RH controller calls for optimization within the infinite-dimensional space of control policies (at least for $N_c > 1$). A possible solution is to resort to a finite-dimensional parametrization [14] (e.g. polynomial control policies), at the cost of losing some flexibility; a minimal sketch of such a parametrization is given after this list. In this respect it is worth observing that the theoretical properties of the regulator remain unchanged provided that the auxiliary controller $\hat\kappa(x)$ belongs to the same finite-dimensional class of control policies.
* In Reference [9] optimization is performed with respect to the sequence of future control moves. This is a (computationally cheaper) open-loop strategy but, due to the action of disturbances, there is no guarantee that $\Omega^{RH} \supseteq \Omega(\hat\kappa)$, so that the RH controller may not improve on the auxiliary one. Herein, we adopt a more effective closed-loop strategy, guaranteeing that $\Omega^{RH} \supseteq \Omega(\hat\kappa)$.
* In Reference [10] the property $\Omega^{RH} \supseteq \Omega(\hat\kappa)$ is achieved by resorting to a closed-loop strategy (minimization with respect to policies). However, there is no guarantee that $\Omega^{RH} \supseteq \Omega^M(\hat\kappa)$, where $\Omega^M(\hat\kappa)$ is the largest invariant set where the auxiliary controller $\hat\kappa$ solves the $H_\infty$ problem, unless the control horizon (which is equal to the prediction one) is suitably increased. Differently from Reference [10], in this paper the use of a control horizon $N_c$ shorter than the prediction horizon $N_p$ allows us to ensure (by a proper choice of $N_p$) that $\Omega^{RH} \supseteq \Omega^M(\hat\kappa)$ for any given (and possibly short) control horizon $N_c$; see point (iv) of Theorem 1.
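Regarding the finite-dimensional parametrization mentioned in the list above, a minimal sketch (hypothetical Python, anticipating the policy class actually used in the example of Section 5) is:

```python
def make_policy(a, b, c, kappa_hat):
    """Policy of the class used in Section 5:
    kappa(x) = a*kappa_hat(x) + b*x1^2 + c*x2^2.
    The auxiliary law is recovered for (a, b, c) = (1, 0, 0), so kappa_hat
    belongs to the parametrized class, as required for the theoretical
    properties of the regulator to be preserved."""
    return lambda x: a * kappa_hat(x) + b * x[0]**2 + c * x[1]**2
```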
4. THE AUXILIARY CONTROL LAW

In this section it is shown that, for nonlinear affine systems, a nonlinear control law $u = \hat\kappa(x)$ satisfying Assumption A3 can be derived from the solution of the $H_\infty$ control problem for the linearized system. Consider the system
$$x(k+1) = f_1(x(k)) + F_2(x(k))u(k) + F_3(x(k))w(k)$$
$$z(k) = \begin{bmatrix} h_1(x(k)) \\ u(k) \end{bmatrix} \qquad (6)$$
where $f_1$, $F_2$, $F_3$ and $h_1$ are $C^2$ functions with $f_1(0) = 0$ and $h_1(0) = 0$. For convenience, we represent the corresponding discrete-time linearized system as
$$x(k+1) = F_1 x(k) + F_2 u(k) + F_3 w(k)$$
$$z(k) = \begin{bmatrix} H_1 x(k) \\ u(k) \end{bmatrix}$$
where $F_1 = \partial f_1/\partial x|_{x=0}$, $F_2 = F_2(0)$, $F_3 = F_3(0)$, $H_1 = \partial h_1/\partial x|_{x=0}$. Given a square $n \times n$ matrix $P$, define also the symmetric matrix
$$R = R(P) = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix} \qquad (7)$$
where
$$R_{11} = F_2'PF_2 + I, \quad R_{12} = R_{21}' = F_2'PF_3, \quad R_{22} = F_3'PF_3 - \gamma^2 I$$
and the quadratic function
$$V_f(x) = x'Px$$
Proposition 2
Suppose there exists a positive definite matrix $P$ such that:

(i) $R_{11} > 0$, $R_{22} < 0$;
(ii) $-P + F_1'PF_1 + H_1'H_1 - F_1'P[F_2\ F_3]R^{-1}[F_2\ F_3]'PF_1 < 0$.

Then the control law $u = \kappa^*(x)$, where
$$\begin{bmatrix} \kappa^*(x) \\ \xi^*(x) \end{bmatrix} = -R(x)^{-1}[F_2(x)\ \ F_3(x)]'Pf_1(x)$$
with
$$R(x) = \begin{bmatrix} F_2(x)'PF_2(x) + I & F_2(x)'PF_3(x) \\ F_3(x)'PF_2(x) & F_3(x)'PF_3(x) - \gamma^2 I \end{bmatrix} = \begin{bmatrix} r_{11}(x) & r_{12}(x) \\ r_{21}(x) & r_{22}(x) \end{bmatrix}$$
satisfies Assumption A3 with
$$\Omega(\hat\kappa, \gamma, \gamma_D) = \Omega_\alpha := \{x: x'Px \le \alpha\}$$
where $\alpha$ is a suitable finite positive constant, and $V_{\hat\kappa}(x) := V_f(x)$.
Remark 1
$P$ can be computed by solving a discrete-time $H_\infty$ algebraic Riccati equation.
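As a plausible numerical route (an illustrative sketch, not the authors' code), the matrix suggested by Remark 1 can be obtained by iterating the game Riccati recursion associated with condition (ii) to a fixed point and then verifying condition (i); convergence is not guaranteed in general and should be checked case by case. Given $P$, the nonlinear law $\kappa^*(x)$ of Proposition 2 follows by solving a small linear system at each state.

```python
import numpy as np

def hinf_riccati(F1, F2, F3, H1, gamma, iters=5000, tol=1e-10):
    """Iterate P <- H1'H1 + F1'P F1 - F1'P B R(P)^{-1} B'P F1,
    with B = [F2 F3] and R(P) as in Equation (7)."""
    n, m = F1.shape[0], F2.shape[1]
    B = np.hstack([F2, F3])
    S = np.diag(np.r_[np.ones(m), -gamma**2 * np.ones(F3.shape[1])])
    P = np.eye(n)
    for _ in range(iters):
        R = S + B.T @ P @ B                      # R(P) of Equation (7)
        Pn = H1.T @ H1 + F1.T @ P @ F1 \
             - F1.T @ P @ B @ np.linalg.solve(R, B.T @ P @ F1)
        if np.max(np.abs(Pn - P)) < tol:
            P = Pn
            break
        P = Pn
    R = S + B.T @ P @ B
    assert np.all(np.linalg.eigvalsh(R[:m, :m]) > 0)   # condition (i): R11 > 0
    assert np.all(np.linalg.eigvalsh(R[m:, m:]) < 0)   # condition (i): R22 < 0
    return P

def kappa_star(x, P, f1, F2x, F3x, gamma):
    """First block of -R(x)^{-1} [F2(x) F3(x)]' P f1(x) (Proposition 2)."""
    B = np.hstack([F2x(x), F3x(x)])
    m = F2x(x).shape[1]
    S = np.diag(np.r_[np.ones(m), -gamma**2 * np.ones(B.shape[1] - m)])
    Rx = S + B.T @ P @ B
    uw = -np.linalg.solve(Rx, B.T @ P @ f1(x))
    return uw[:m]
```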
5. SIMULATION EXAMPLE

In this section, the MPC law introduced in the paper is applied to a cart with mass $M$ moving on a plane. The carriage (see Figure 1) is attached to the wall via a spring with elasticity $k$ given by
$$k = k_0 e^{-x_1}$$
where $x_1$ is the displacement of the carriage from the equilibrium position associated with the external force $u = 0$. Finally, a damper with damping factor $h_d$ affects the system in a resistive way. The model of the system is given by the following continuous-time state-space nonlinear model:
$$\dot x_1(t) = x_2(t)$$
$$\dot x_2(t) = -\frac{k_0}{M} e^{-x_1(t)} x_1(t) - \frac{h_d}{M} x_2(t) + \frac{u(t)}{M}$$
where $x_2$ is the carriage velocity. The parameters of the system are $M = 1$, $k_0 = 0.33$, while the damping factor is not well known and is given by $h_d = \bar h_d + \Delta h_d$, where $\bar h_d = 1.1$ and $|\Delta h_d| \le \gamma_D$. The system can be rewritten as
$$\dot x_1(t) = x_2(t)$$
$$\dot x_2(t) = -\frac{k_0}{M} e^{-x_1(t)} x_1(t) - \frac{\bar h_d}{M} x_2(t) + \frac{u(t)}{M} + w(t) \qquad (8)$$
Figure 1. Cart and spring example.
with
$$w(t) = -\Delta h_d \frac{x_2(t)}{M}$$
so that, if we choose
$$z = \begin{bmatrix} h_1(x) \\ u \end{bmatrix} = \begin{bmatrix} q x_1 \\ x_2 \\ u \end{bmatrix}$$
it follows that
$$w(t) = \begin{bmatrix} 0 & -\dfrac{\Delta h_d}{M} & 0 \end{bmatrix} z(t)$$
Therefore
$$\|w(t)\| \le \gamma_D \|z(t)\|$$
in accordance with Assumption A2. An Euler approximation of system (8) with sampling time $T_c = 0.4$ is given by
$$x_1(k+1) = x_1(k) + T_c x_2(k)$$
$$x_2(k+1) = x_2(k) - T_c \frac{k_0}{M} e^{-x_1(k)} x_1(k) - T_c \frac{\bar h_d}{M} x_2(k) + T_c \frac{u(k)}{M} + T_c w(k)$$
which is a nonlinear discrete-time system of form (6). For this system the RH control law is computed according to the algorithm presented in the paper. The auxiliary control law is obtained as described in Section 4 with $\gamma = 1$ and, following the same notation, is given by
$$\hat\kappa(x) = -[1\ \ 0]\, R^{-1} \begin{bmatrix} F_2' \\ F_3' \end{bmatrix} P f_1(x)$$
The terminal penalty is given by $V_{\hat\kappa} = x'Px$ and the terminal region by $\Omega_\alpha := \{x: x'Px \le \alpha\}$, with $\alpha = 0.001$. The control and prediction horizons are $N_c = 5$ and $N_p = 100$, respectively. The policies $\kappa_i(\cdot)$ are functions of the form $\kappa_i(x) = \alpha_i \hat\kappa(x) + \beta_i x_1^2 + \gamma_i x_2^2$.
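A minimal closed-loop sketch of this example is given below (illustrative Python under the stated parameter values; the MPC layer is omitted and the loop is shown with a generic state-feedback law, so structure and names are assumptions of this sketch rather than the authors' simulation code).

```python
import numpy as np

M, k0, hd_bar, Tc, q = 1.0, 0.33, 1.1, 0.4, 0.1

def f_euler(x, u, w):
    """Euler discretization of (8) with sampling time Tc."""
    x1, x2 = x
    return np.array([
        x1 + Tc * x2,
        x2 - Tc * (k0 / M) * np.exp(-x1) * x1
           - Tc * (hd_bar / M) * x2 + Tc * u / M + Tc * w,
    ])

def z_out(x, u):
    """Penalized output z = [q*x1, x2, u]'."""
    return np.array([q * x[0], x[1], u])

def simulate(kappa, x0, delta_hd, steps=200):
    """Closed loop under a state-feedback law kappa; the damping
    uncertainty enters through w = -delta_hd * x2 / M, as in A2."""
    x, traj = np.array(x0, dtype=float), []
    for _ in range(steps):
        u = kappa(x)
        w = -delta_hd * x[1] / M
        traj.append((x.copy(), u, z_out(x, u)))
        x = f_euler(x, u, w)
    return traj

# e.g. simulate(kappa_hat, [4.0, 0.0], delta_hd=0.5) corresponds to the
# perturbed-damping scenario h_d = 1.6 of Figure 4 (auxiliary law only).
```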
In Figures 2–4 three different simulation experiments are reported, starting from the same initial condition $x = [4\ \ 0]'$ but with different values of the damping factor: $h_d = 1.1$, $0.6$ and $1.6$. The value of the constant $q$ in the function $h_1(x)$ has been chosen equal to $0.1$. The comparison of the infinite-horizon costs reported in the figures shows the improvement guaranteed by the MPC law with respect to the auxiliary one. Note also that, in all these experiments, the linear $H_\infty$ control law (that is, a law different from the nonlinear auxiliary control law) is not able to control the carriage to the origin.
Figure 2. Simulation with damping factor $h_d = 1.1$ and $q = 0.1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
Figure 3. Simulation with damping factor $h_d = 0.6$ and $q = 0.1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
Figure 4. Simulation with damping factor $h_d = 1.6$ and $q = 0.1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
Figure 5. Simulation with damping factor $h_d = 1.1$ and $q = 1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
Figure 6. Simulation with damping factor $h_d = 0.6$ and $q = 1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
Figure 7. Simulation with damping factor $h_d = 1.6$ and $q = 1$: MPC control law (continuous line), $\hat\kappa(x)$ (dashed line).
In Figures 5–7 the same simulation experiments are repeated using $q = 1$ in the synthesis phase.
6. CONCLUSIONS

In this paper it has been shown that the RH paradigm applied to the $H_\infty$ control problem for discrete-time nonlinear systems can improve the domain of attraction provided by an available local solution, obtained for example through linearization. In particular, the RH control can enlarge the domain of attraction even for very short control horizons (e.g. $N_c = 1$) and at a reasonable computational cost. The key points of the algorithm are: (i) the adoption of a closed-loop strategy involving the optimization of the future control policies; (ii) the use of two distinct horizons: a prediction horizon over which system performance is evaluated, and a shorter control horizon over which control policies are optimized. The possibility of enlarging the domain of attraction of the auxiliary controller without necessarily increasing the control horizon is a definite advantage with respect to the algorithms available in the literature.

ACKNOWLEDGEMENTS

The authors acknowledge the partial financial support of the MURST Project 'New techniques for identification and adaptive control of industrial systems'.
APPENDIX A

Proof of Theorem 1
(i) If $\bar x \in \Omega^{RH}(N_c, N_p)$, there exists a policy vector $\kappa_{t,t+N_c-1} \in K(\bar x, N_c, N_p)$. Then $\tilde\kappa_{t+1,t+N_c} = [\kappa_{t+1,t+N_c-1}, \hat\kappa(x(t+N_c))] \in K(f(\bar x, \kappa^{RH}(\bar x), w(t)), N_c, N_p)$ for every $w$ satisfying Assumption A2, so that $f(\bar x, \kappa^{RH}(\bar x), w(t)) \in \Omega^{RH}(N_c, N_p)$; that is, $\Omega^{RH}(N_c, N_p)$ is a positively invariant set for $S^{RH}$. Moreover, $\forall \bar x \in \Omega(\hat\kappa, \gamma, \gamma_D)$ there exists the policy vector $\kappa_{t,t+N_c-1} = [\hat\kappa(x(t)), \ldots, \hat\kappa(x(t+N_c-1))]$ such that, starting from $\bar x$, $x(t+N_p) \in \Omega(\hat\kappa, \gamma, \gamma_D)$ for every disturbance $w_{t,t+N_p-1} \in W(\bar x, \kappa_{t,t+N_c-1}, N_p)$, so that $\Omega^{RH}(N_c, N_p) \supseteq \Omega(\hat\kappa, \gamma, \gamma_D)$, $\forall N_c, N_p > 0$, and, in view of Assumption A3, the origin is an interior point of $\Omega^{RH}(N_c, N_p)$, $\forall N_c, N_p > 0$.

(ii) First note that, in view of Assumption A2, $V(0, N_c, N_p) = 0$. Moreover, given $\bar w_{t,t+N_p-1} = 0$, for every $\kappa_{t,t+N_c-1}$
$$J(\bar x, \kappa_{t,t+N_c-1}, \bar w_{t,t+N_p-1}, N_c, N_p) = \frac{1}{2}\sum_{i=t}^{t+N_p-1} \|z(i)\|^2 + V_{\hat\kappa}(x(t+N_p)) \ge 0$$
so that
$$V(\bar x, N_c, N_p) \ge \min_{\kappa_{t,t+N_c-1}} J(\bar x, \kappa_{t,t+N_c-1}, \bar w_{t,t+N_p-1}, N_c, N_p) \ge 0 \qquad (A1)$$
$\forall \bar x \in \Omega^{RH}(N_c, N_p)$.
The monotonicity property $V(\bar x, N_c+1, N_p+1) \le V(\bar x, N_c, N_p)$ is now proven. For ease of notation, define
$$S(z, w) = \frac{1}{2}\{\gamma^2\|w\|^2 - \|z\|^2\}$$
Suppose now that $\kappa^o_{t,t+N_c-1}$ is the solution of the FHODG with horizon $N_c$, and consider the following policy vector for the FHODG with horizon $N_c+1$:
$$\tilde\kappa_{t,t+N_c} = \begin{cases} \kappa^o_{t,t+N_c-1}, & t \le k \le t+N_c-1 \\ \hat\kappa(x(t+N_c)), & k = t+N_c \end{cases}$$
Then, letting $u(k) = \tilde\kappa(x(k))$ in (1), it results that
$$J(\bar x, \tilde\kappa_{t,t+N_c}, w_{t,t+N_p}, N_c+1, N_p+1) = -\sum_{i=t}^{t+N_p} S(z(i), w(i)) + V_{\hat\kappa}(x(t+N_p+1))$$
$$= V_{\hat\kappa}(x(t+N_p+1)) - V_{\hat\kappa}(x(t+N_p)) - S(z(t+N_p), w(t+N_p)) - \sum_{i=t}^{t+N_p-1} S(z(i), w(i)) + V_{\hat\kappa}(x(t+N_p))$$
so that, in view of (4),
$$J(\bar x, \tilde\kappa_{t,t+N_c}, w_{t,t+N_p}, N_c+1, N_p+1) \le -\sum_{i=t}^{t+N_p-1} S(z(i), w(i)) + V_{\hat\kappa}(x(t+N_p))$$
which implies
$$V(\bar x, N_c+1, N_p+1) \le \max_{w_{t,t+N_p} \in W(\bar x, \tilde\kappa_{t,t+N_c}, N_p+1)} J(\bar x, \tilde\kappa_{t,t+N_c}, w_{t,t+N_p}, N_c+1, N_p+1)$$
$$\le \max_{w_{t,t+N_p-1} \in W(\bar x, \kappa^o_{t,t+N_c-1}, N_p)} \left\{-\sum_{i=t}^{t+N_p-1} S(z(i), w(i)) + V_{\hat\kappa}(x(t+N_p))\right\} = V(\bar x, N_c, N_p) \qquad (A2)$$
which holds for all $\bar x \in \Omega^{RH}(N_c, N_p)$. Moreover, in view of (A2) and the definition of $V(x, N_c, N_p)$, it follows that, $\forall \bar x \in \Omega^{RH}(N_c, N_p)$ and for a generic $w(t) \in W(\bar x, \kappa^o_{t,t+N_c-1}, N_p)$,
$$V(\bar x, N_c, N_p) = J(\bar x, \kappa^o_{t,t+N_c-1}, w^o_{t,t+N_p-1}, N_c, N_p)$$
$$\ge V(f(\bar x, \kappa^{RH}(\bar x), w(t)), N_c-1, N_p-1) + \frac{1}{2}\{\|h(\bar x, \kappa^{RH}(\bar x), w(t))\|^2 - \gamma^2\|w(t)\|^2\} \qquad (A3)$$
$$\ge V(f(\bar x, \kappa^{RH}(\bar x), w(t)), N_c, N_p) + \frac{1}{2}\{\|h(\bar x, \kappa^{RH}(\bar x), w(t))\|^2 - \gamma^2\|w(t)\|^2\}$$
Setting $w = 0$ in (A3), by Assumption A1 and the continuity of $f$ and $h$, it follows immediately from LaSalle's invariance principle [15] that $x = 0$ is locally asymptotically stable when $w = 0$. Finally, with reference to $S^{RH}$ with initial condition $x(t) = \bar x$, from (A3) it follows that, $\forall \bar x \in \Omega^{RH}(N_c, N_p)$,
$$V(x(t+T+1), N_c, N_p) - V(\bar x, N_c, N_p) \le -\frac{1}{2}\sum_{i=0}^{T} \{\|z(t+i)\|^2 - \gamma^2\|w(t+i)\|^2\}$$
and by (A1)
$$0 \le V(x(t+T+1), N_c, N_p) \le -\frac{1}{2}\sum_{i=0}^{T} \{\|z(t+i)\|^2 - \gamma^2\|w(t+i)\|^2\} + V(\bar x, N_c, N_p)$$
which implies that
$$\frac{1}{2}\sum_{i=0}^{T} \|z(t+i)\|^2 \le \frac{1}{2}\gamma^2\sum_{i=0}^{T} \|w(t+i)\|^2 + V(\bar x, N_c, N_p)$$
$\forall \bar x \in \Omega^{RH}(N_c, N_p)$, $\forall T \ge 0$, $\forall w \in W(\bar x, \kappa^o_{t,t+N_c-1}, N_p)$. Hence, we conclude that $S^{RH}$ has $L_2$-gain less than or equal to $\gamma$ in $\Omega^{RH}(N_c, N_p)$.
(iii) If $\kappa_{t,t+N_c-1} \in K(\bar x, N_c, N_p)$, then $\hat\kappa_{t,t+N_c} = [\kappa_{t,t+N_c-1}, \hat\kappa(x(t+N_c))] \in K(\bar x, N_c+1, N_p)$, so that $\Omega^{RH}(N_c+1, N_p) \supseteq \Omega^{RH}(N_c, N_p)$, $\forall N_c > 0$. As $N_c \to \infty$ the problem becomes an infinite-horizon $H_\infty$ control problem, implying that $\Omega^{RH}(N_c, N_p) \to \bar\Omega^M$.

(iv) If $\bar x \in \Omega^M(\hat\kappa, \gamma, \gamma_D)$, then there exists a finite $\bar N_p$ such that the control sequence $\kappa_{t,t+\bar N_p-1} = [\hat\kappa(x(t)), \ldots, \hat\kappa(x(t+\bar N_p-1))]$ yields $x(t+\bar N_p) \in \Omega(\hat\kappa, \gamma, \gamma_D)$. This implies that, $\forall N_c$, $\kappa_{t,t+N_c-1} = [\hat\kappa(x(t)), \ldots, \hat\kappa(x(t+N_c-1))] \in K(\bar x, N_c, \bar N_p)$ or, equivalently, $\bar x \in \Omega^{RH}(N_c, \bar N_p)$.
Proof of Proposition 2
Define
$$H(x, u, w) = V_f(f_1(x) + F_2(x)u + F_3(x)w) - V_f(x) + \tfrac{1}{2}(\|z\|^2 - \gamma^2\|w\|^2) \qquad (A4)$$
Then
$$H(x, u, w) = \frac{1}{2}\{(f_1(x) + F_2(x)u + F_3(x)w)'P(f_1(x) + F_2(x)u + F_3(x)w) - x'Px + h_1(x)'h_1(x) + u'u - \gamma^2 w'w\}$$
$$= \frac{1}{2}\Big\{(f_1(x)'Pf_1(x) - x'Px + h_1(x)'h_1(x)) + u'(F_2(x)'PF_2(x) + I)u + w'(F_3(x)'PF_3(x) - \gamma^2 I)w + 2[u'\ \ w']\begin{bmatrix} F_2(x)'Pf_1(x) \\ F_3(x)'Pf_1(x) \end{bmatrix} + 2u'F_2(x)'PF_3(x)w\Big\}$$
$$= \frac{1}{2}\Big\{(f_1(x)'Pf_1(x) - x'Px + h_1(x)'h_1(x)) + [u'\ \ w']R(x)\begin{bmatrix} u \\ w \end{bmatrix} + 2[u'\ \ w']\begin{bmatrix} F_2(x)'Pf_1(x) \\ F_3(x)'Pf_1(x) \end{bmatrix}\Big\}$$
and, computing $H(x, u, w)$ for $u = \kappa^*(x)$ and $w = \xi^*(x)$,
$$H(x, \kappa^*(x), \xi^*(x)) = \frac{1}{2}\Big\{(f_1(x)'Pf_1(x) - x'Px + h_1(x)'h_1(x)) + [\kappa^*(x)'\ \ \xi^*(x)']R(x)\begin{bmatrix} \kappa^*(x) \\ \xi^*(x) \end{bmatrix} - 2[\kappa^*(x)'\ \ \xi^*(x)']R(x)\begin{bmatrix} \kappa^*(x) \\ \xi^*(x) \end{bmatrix}\Big\}$$
$$= \frac{1}{2}\Big\{(f_1(x)'Pf_1(x) - x'Px + h_1(x)'h_1(x)) - [f_1(x)'PF_2(x)\ \ f_1(x)'PF_3(x)]\,R(x)^{-1}\begin{bmatrix} F_2(x)'Pf_1(x) \\ F_3(x)'Pf_1(x) \end{bmatrix}\Big\}$$
From (i) and (ii) it follows that there exist positive constants $\varepsilon$, $r$ such that
$$H(x, \kappa^*(x), \xi^*(x)) \le -\varepsilon\|x\|^2 \quad \forall x \in \Omega_o = \{x: \|x\| \le r\} \qquad (A5)$$
By the Taylor expansion theorem (note that the first-order term evaluated at $(u, w) = (\kappa^*(x), \xi^*(x))$ is null),
$$H(x, u, w) = H(x, \kappa^*(x), \xi^*(x)) + \frac{1}{2}\begin{bmatrix} u - \kappa^*(x) \\ w - \xi^*(x) \end{bmatrix}' R(x) \begin{bmatrix} u - \kappa^*(x) \\ w - \xi^*(x) \end{bmatrix} + O\left(\left\|\begin{bmatrix} u - \kappa^*(x) \\ w - \xi^*(x) \end{bmatrix}\right\|^3\right)$$
Assumption (i) implies that there exists a neighbourhood $X$ of $x = 0$ such that, $\forall x \in X$,
$$r_{11}(x) > 0, \quad r_{22}(x) < 0$$
If the system is controlled by $u = \kappa^*(x)$, then
$$H(x, \kappa^*(x), w) = H(x, \kappa^*(x), \xi^*(x)) + \frac{1}{2}(w - \xi^*(x))'(r_{22}(x) + O(\|w - \xi^*(x)\|))(w - \xi^*(x))$$
Since $r_{22}(x) < 0$, it follows that there exist open neighbourhoods $X_o$ of $x = 0$ and $W_o$ of $w = 0$ such that
$$H(x, \kappa^*(x), w) \le H(x, \kappa^*(x), \xi^*(x)) \quad \forall x \in X_o, \ \forall w \in W_o$$
In view of (A5) and Assumption A2, there exists a positive constant $r_1$ such that
$$H(x, \kappa^*(x), w) \le H(x, \kappa^*(x), \xi^*(x)) \le -\varepsilon\|x\|^2 \qquad (A6)$$
$\forall x \in \Omega_1 = \{x: \|x\| \le r_1\}$ and for all $w \in W(\gamma_D)$. Let us choose some $\alpha > 0$ such that
$$\Omega_\alpha = \{x: V_f(x) \le \alpha\} \subseteq \Omega_1 \qquad (A7)$$
From (A4), (A7), (A6) and Assumption A2 it follows that
$$V_f(f_1(x) + F_2(x)\kappa^*(x) + F_3(x)w) \le V_f(x) - \tfrac{1}{2}(\|z\|^2 - \gamma^2\|w\|^2) - \varepsilon\|x\|^2 \le V_f(x) - \varepsilon\|x\|^2 \quad \forall x \in \Omega_\alpha, \ \forall w \in W(\gamma_D) \qquad (A8)$$
and that $\Omega_\alpha$ is an invariant set, with the origin as an interior point, for the system
$$S_\alpha: \begin{cases} x(k+1) = f_1(x(k)) + F_2(x(k))\kappa^*(x(k)) + F_3(x(k))w(k) \\ z(k) = \begin{bmatrix} h_1(x(k)) \\ \kappa^*(x(k)) \end{bmatrix} \end{cases}$$
$\forall w \in W(\gamma_D)$.
REFERENCES

1. James MR, Baras JS. Robust $H_\infty$ output feedback control for nonlinear systems. IEEE Transactions on Automatic Control 1995; 40:1007–1017.
2. Lin W, Byrnes CI. Discrete-time nonlinear $H_\infty$ control with measurement feedback. Automatica 1995; 31:419–434.
3. Lin W, Byrnes CI. $H_\infty$-control of discrete-time nonlinear systems. IEEE Transactions on Automatic Control 1996; 41:494–510.
4. Lin W, Xie L. A link between $H_\infty$ control of a discrete-time nonlinear system and its linearization. International Journal of Control 1998; 69:301–314.
5. Tadmor G. Receding horizon revisited: an easy way to robustly stabilize an LTV system. Systems & Control Letters 1992; 18:285–294.
6. Lall S, Glover K. A game theoretic approach to moving horizon control. In Advances in Model-Based Predictive Control, Clarke DW (ed.). Oxford University Press: Oxford, 1994; 131–144.
7. Chen H, Scherer CW, Allgöwer F. A robust model predictive control scheme for constrained linear systems. In Proceedings of DYCOPS 5, Corfu, Greece, 1998.
8. Scokaert POM, Mayne DQ. Min–max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control 1998; 43:1136–1142.
9. Chen H, Scherer CW, Allgöwer F. A game theoretical approach to nonlinear robust receding horizon control of constrained systems. In American Control Conference '97, 1997.
10. Magni L, Nijmeijer H, van der Schaft AJ. A receding-horizon approach to the nonlinear $H_\infty$ control problem. Automatica 2001; 37:429–435.
11. Magni L, De Nicolao G, Magnani L, Scattolini R. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica 2001; 37:1351–1362.
12. van der Schaft AJ. $L_2$-Gain and Passivity Techniques in Nonlinear Control. Springer: Berlin, 1996.
13. Lu W. Attenuation of persistent $l_\infty$-bounded disturbances for nonlinear systems. In Conference on Decision and Control, 1995.
14. Mayne DQ. Nonlinear model predictive control: challenges and opportunities. In Nonlinear Model Predictive Control, Allgöwer F, Zheng A (eds). Progress in Systems and Control Theory. Birkhäuser: Basel, 2000; 23–44.
15. LaSalle JP. The Stability and Control of Discrete Processes. Springer: New York, 1986.