Course on Model Predictive Control
Part II – Linear MPC design
Gabriele Pannocchia
Department of Chemical Engineering, University of Pisa, Italy
Email: [email protected]
Faculty of Engineering, Pisa.
July 16th, 2012, Aula Pacinotti
G. Pannocchia
Course on Model Predictive Control. Part II – Linear MPC theory
1 / 47
Outline

1. Estimator module design for offset-free tracking
2. Steady-state optimization module design
3. Dynamic optimization module design
4. Closed-loop implementation and receding horizon principle
5. Quick overview of numerical optimization
Estimator module

[Block diagram: the MPC consists of three modules. The Estimator receives the measured output y(k) and the applied input u(k), produces the filtered estimates x̂(k), d̂(k) and the one-step predictions x̂⁻(k+1), d̂⁻(k+1), fed back through a unit delay z⁻¹. The Steady-state Optimization computes the targets x_s(k), u_s(k) from x̂(k), d̂(k); the Dynamic Optimization computes u(k), which is applied to the Process. Both optimization modules have tuning parameters.]
Estimator preliminaries

Basic model, inputs and outputs
- Basic model:
  x⁺ = Ax + Bu + w
  y = Cx + v
- Inputs at time k: measured output y(k) and predicted state x̂⁻(k)
- Outputs at time k: updated state estimate x̂(k)

State estimator for basic model
- Choose any L such that (A − ALC) is strictly Hurwitz
- Filtering: x̂(k) = x̂⁻(k) + L(y(k) − C x̂⁻(k))

Key observation
The estimator is the only feedback module in an MPC. Any discrepancy between the true plant and the model should be corrected there.
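The filtering and prediction steps above can be sketched in a few lines; the system matrices and the gain L below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Illustrative 2-state, 1-input, 1-output system (numbers are assumptions)
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])  # any L making (A - ALC) strictly Hurwitz

# Error dynamics of the filtering form: eigenvalues of A - ALC inside unit circle
assert np.max(np.abs(np.linalg.eigvals(A - A @ L @ C))) < 1.0

def filter_update(x_pred, y):
    """Filtering: x̂(k) = x̂⁻(k) + L (y(k) − C x̂⁻(k))"""
    return x_pred + L @ (y - C @ x_pred)

def predict(x_hat, u):
    """Prediction: x̂⁻(k+1) = A x̂(k) + B u(k)"""
    return A @ x_hat + B @ u

x_hat = filter_update(np.zeros(2), np.array([1.0]))   # → [0.5, 0.2]
x_next = predict(x_hat, np.array([0.0]))              # → [0.49, 0.14]
```

The two functions are the measurement-update/time-update pair that every cycle of the MPC algorithm executes.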
Augmented system: definition

Issue and solution approach
- An MPC based on the previous estimator does not compensate for plant/model mismatch and persistent disturbances
- As in [Davison and Smith, 1971, Kwakernaak and Sivan, 1972, Smith and Davison, 1972, Francis and Wonham, 1976], one should model and estimate the disturbance to be rejected
- For offset-free control, an integrating disturbance is added

Augmented system [Muske and Badgwell, 2002, Pannocchia and Rawlings, 2003], with d ∈ R^{n_d}:
\[
\begin{bmatrix} x \\ d \end{bmatrix}^{+} =
\begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix}
\begin{bmatrix} x \\ d \end{bmatrix} +
\begin{bmatrix} B \\ 0 \end{bmatrix} u +
\begin{bmatrix} w \\ w_d \end{bmatrix},
\qquad
y = \begin{bmatrix} C & C_d \end{bmatrix}
\begin{bmatrix} x \\ d \end{bmatrix} + v
\]
Augmented system: observability

Observability of the augmented system
- Assume that (A, C) is observable
- Question: is the pair
\[
\left( \begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix},\;
\begin{bmatrix} C & C_d \end{bmatrix} \right)
\]
observable?
- Answer: yes, if and only if (from the Hautus test)
\[
\operatorname{rank} \begin{bmatrix} A - I & B_d \\ C & C_d \end{bmatrix} = n + n_d
\]
- Observation: the previous condition can be satisfied if and only if n_d ≤ p
Augmented system: controllability

Controllability of the augmented system
- Assume that (A, B) is controllable
- Question: is the pair
\[
\left( \begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix},\;
\begin{bmatrix} B \\ 0 \end{bmatrix} \right)
\]
controllable?
- Answer: no
- Observation: the disturbance is not going to be controlled. Its effect is taken into account in the Steady-State Optimization and Dynamic Optimization modules
Augmented estimator

General design
- Set n_d = p (see [Pannocchia and Rawlings, 2003, Maeder et al., 2009] for issues on choosing n_d < p)
- Choose (B_d, C_d) such that the augmented system is observable
- Choose
\[
L = \begin{bmatrix} L_x \\ L_d \end{bmatrix}
\quad \text{such that} \quad
\begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix} -
\begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix}
\begin{bmatrix} L_x \\ L_d \end{bmatrix}
\begin{bmatrix} C & C_d \end{bmatrix}
\]
is strictly Hurwitz

Augmented estimator:
\[
\begin{bmatrix} \hat{x}(k) \\ \hat{d}(k) \end{bmatrix} =
\begin{bmatrix} \hat{x}^{-}(k) \\ \hat{d}^{-}(k) \end{bmatrix} +
\begin{bmatrix} L_x \\ L_d \end{bmatrix}
\left( y(k) - \begin{bmatrix} C & C_d \end{bmatrix}
\begin{bmatrix} \hat{x}^{-}(k) \\ \hat{d}^{-}(k) \end{bmatrix} \right)
\]
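The design recipe above can be checked numerically: build the augmented system, verify the Hautus rank condition, and compute a stabilizing gain as a steady-state Kalman filter. The system data and the noise covariances below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative 2-state, 1-input, 1-output system (numbers are assumptions)
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
n, m = B.shape
p = C.shape[0]
nd = p  # one integrating disturbance per measured output

# Input disturbance model: Bd = B, Cd = 0 (one common offset-free choice)
Bd, Cd = B.copy(), np.zeros((p, nd))

# Augmented system matrices
Aa = np.block([[A, Bd], [np.zeros((nd, n)), np.eye(nd)]])
Ca = np.hstack([C, Cd])

# Hautus-style rank test: rank [A - I, Bd; C, Cd] = n + nd
aug = np.block([[A - np.eye(n), Bd], [C, Cd]])
assert np.linalg.matrix_rank(aug) == n + nd

# Steady-state Kalman filtering gain for the augmented system
Qw = np.eye(n + nd)   # process/disturbance noise covariance (tuning choice)
Rv = np.eye(p)        # measurement noise covariance (tuning choice)
P = solve_discrete_are(Aa.T, Ca.T, Qw, Rv)
L = P @ Ca.T @ np.linalg.inv(Ca @ P @ Ca.T + Rv)
Lx, Ld = L[:n], L[n:]

# The predictor-form error matrix must be strictly Hurwitz
# (all eigenvalues inside the unit circle)
err = Aa - Aa @ L @ Ca
assert np.max(np.abs(np.linalg.eigvals(err))) < 1.0
```

Any other gain-selection method (pole placement, deadbeat choices as on the next slide) can replace the Riccati-based one; the rank test and the spectral-radius check are the design conditions stated in the slides.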
The typical industrial design

Typical industrial design for stable systems (A strictly Hurwitz):
B_d = 0,  C_d = I,  L_x = 0,  L_d = I

Output disturbance model
- Any error y(k) − C x̂⁻(k) is assumed to be caused by a step (constant) disturbance acting on the output. In fact, the filtered disturbance estimate is:
  d̂(k) = d̂⁻(k) + (y(k) − C x̂⁻(k) − d̂⁻(k)) = y(k) − C x̂⁻(k)
- It is a deadbeat Kalman filter: Q = 0, Q_d = I, R → 0
- It is simple and does the job [Rawlings et al., 1994]
Comments on the industrial design

Rotation factor for integrating systems
[Figure: output predictions from time k − 1 to k + 4, comparing the uncorrected prediction x̂_{k|k−1}, a rotated prediction, and a rotated-and-translated prediction that matches the measurement y_k at the current time k, with α denoting the rotation angle.]

Limitations of the output disturbance model
- The overall performance is often sluggish [Lundström et al., 1995, Muske and Badgwell, 2002, Pannocchia and Rawlings, 2003, Pannocchia, 2003]
- A suitable estimator design can improve the closed-loop performance [Pannocchia, 2003, Pannocchia and Bemporad, 2007, Rajamani et al., 2009]
- Often, a deadbeat input disturbance model works better:
  B_d = B,  C_d = 0,  Q = 0,  Q_d = I,  R → 0
Steady-state optimization module

[Block diagram of the MPC modules, as in the Estimator module slide, here highlighting the Steady-state Optimization block: it receives x̂(k), d̂(k) from the Estimator and passes the targets x_s(k), u_s(k) to the Dynamic Optimization.]
Steady-state optimization module: introduction

A trivial case
- Square system (m = p) without constraints and with setpoints on all CVs
- Solve the linear system:
  x_s = A x_s + B u_s + B_d d̂
  r_sp = C x_s + C_d d̂
- Obtain:
\[
\begin{bmatrix} x_s \\ u_s \end{bmatrix} =
\begin{bmatrix} I - A & -B \\ C & 0 \end{bmatrix}^{-1}
\begin{bmatrix} B_d \hat{d} \\ r_{sp} - C_d \hat{d} \end{bmatrix}
\]

Observation
The steady-state target (x_s, u_s) may change at each decision time because of the disturbance estimate d̂.
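For the square unconstrained case, the target computation is a single linear solve. A minimal sketch, with illustrative system data, disturbance estimate, and setpoint:

```python
import numpy as np

# Small square example (m = p = 1); all numbers are illustrative assumptions
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
Bd = B.copy()               # input disturbance model
Cd = np.zeros((1, 1))
n, m = B.shape

d_hat = np.array([0.3])     # current disturbance estimate
r_sp = np.array([1.0])      # output setpoint

# Block system [[I - A, -B], [C, 0]] [x_s; u_s] = [Bd d̂; r_sp - Cd d̂]
M = np.block([[np.eye(n) - A, -B], [C, np.zeros((1, m))]])
rhs = np.concatenate([Bd @ d_hat, r_sp - Cd @ d_hat])
sol = np.linalg.solve(M, rhs)
xs, us = sol[:n], sol[n:]

# The target equilibrium reproduces the setpoint exactly
assert np.allclose(C @ xs + Cd @ d_hat, r_sp)
```

Because d̂ enters the right-hand side, a new solve is needed at every decision time, as the observation above notes.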
Objectives of the steady-state optimization module

Boundary conditions
- In most cases the number of CVs differs from the number of MVs: p ≠ m
- CVs and MVs have constraints to meet:
  y_min ≤ y(k) ≤ y_max,  u_min ≤ u(k) ≤ u_max
- Only a small subset of CVs has fixed setpoints: H y(k) → r_sp

Objectives
Given the current disturbance estimate d̂, compute the equilibrium (x_s, u_s, y_s):
x_s = A x_s + B u_s + B_d d̂,  y_s = C x_s + C_d d̂, such that:
- constraints on MVs and CVs are satisfied: y_min ≤ y_s ≤ y_max, u_min ≤ u_s ≤ u_max
- the subset of CVs tracks the setpoint: H y_s = r_sp
Examples of steady-state optimization feasible regions

Stable system with 2 MVs and 2 CVs (with bounds)
\[
y_s = C(I - A)^{-1} B u_s + \left( C(I - A)^{-1} B_d + C_d \right) \hat{d} = G u_s + G_d \hat{d}
\]
[Figure: feasible region in the (u_s^{(1)}, u_s^{(2)}) plane, bounded by the MV limits u_min^{(1)}, u_max^{(1)}, u_min^{(2)}, u_max^{(2)} and by the CV limits y_min^{(1)}, y_max^{(1)}, y_min^{(2)}, y_max^{(2)} mapped through G.]
Examples of steady-state optimization feasible regions

Stable system with 2 MVs and 2 CVs (one setpoint)
\[
y_s = C(I - A)^{-1} B u_s + \left( C(I - A)^{-1} B_d + C_d \right) \hat{d} = G u_s + G_d \hat{d}
\]
[Figure: as above, but with y_min^{(2)} = y_max^{(2)}: the setpoint on CV 2 restricts the feasible region to a line segment.]
Examples of steady-state optimization feasible regions

Stable system with 2 MVs and 2 CVs (two setpoints)
\[
y_s = C(I - A)^{-1} B u_s + \left( C(I - A)^{-1} B_d + C_d \right) \hat{d} = G u_s + G_d \hat{d}
\]
[Figure: with y_min^{(1)} = y_max^{(1)} and y_min^{(2)} = y_max^{(2)}, the two setpoint lines intersect in a single feasible point.]
Examples of steady-state optimization feasible regions

Stable system with 2 MVs and 3 CVs (two setpoints)
\[
y_s = C(I - A)^{-1} B u_s + \left( C(I - A)^{-1} B_d + C_d \right) \hat{d} = G u_s + G_d \hat{d}
\]
[Figure: as above, with the additional bounds y_min^{(3)}, y_max^{(3)} on a third CV crossing the two setpoint lines y_min^{(1)} = y_max^{(1)} and y_min^{(2)} = y_max^{(2)}.]
Hard and soft constraints

Hard constraints
- In the two optimization modules, constraints on MVs are regarded as hard, i.e., they cannot be violated
- This choice stems from the fact that MV constraints can always be satisfied exactly

Soft constraints
- In the two optimization modules, constraints on CVs are regarded as soft, i.e., they can be violated when necessary
- The amount of violation is penalized in the objective function
- This choice stems from the fact that CV constraints cannot always be satisfied exactly
Steady-state optimization module: linear formulation

General formulation (LP):
\[
\begin{aligned}
\min_{u_s, x_s, \underline{\epsilon}_s, \overline{\epsilon}_s} \quad
& \overline{q}' \overline{\epsilon}_s + \underline{q}' \underline{\epsilon}_s + r' u_s \\
\text{s.t.} \quad
& x_s = A x_s + B u_s + B_d \hat{d} \\
& u_{min} \le u_s \le u_{max} \\
& y_{min} - \underline{\epsilon}_s \le C x_s + C_d \hat{d} \le y_{max} + \overline{\epsilon}_s \\
& \underline{\epsilon}_s \ge 0, \quad \overline{\epsilon}_s \ge 0
\end{aligned}
\]

Extensions
- Setpoints on some CVs can be specified either with an equality constraint or by setting identical minimum and maximum values
- Often CVs are grouped by ranks
LP steady-state optimization module: tuning

Cost function
- r ∈ R^m is the MV cost vector: a positive (negative) entry in r implies that the corresponding MV should be minimized (maximized)
- \(\overline{q} \in \mathbb{R}^p\) and \(\underline{q} \in \mathbb{R}^p\) are the weights of upper and lower bound CV violations. Sometimes they are defined in terms of equal concern errors:
\[
\overline{q}_i = \frac{1}{SSECE_i^U}, \qquad \underline{q}_i = \frac{1}{SSECE_i^L}, \qquad i = 1, \dots, p
\]

Constraints
- Bounds u_max, u_min, y_max, y_min can be modified by the operators (within ranges defined by the MPC designer)
- Sometimes, in order to avoid large target changes from time k − 1 to time k, a rate constraint is added:
  −Δu_max ≤ u_s(k) − u_s(k−1) ≤ Δu_max
LP steady-state optimization module: numerical solution

Standard LP form:
\[
\min_z \; c' z \quad \text{s.t.} \quad E z \le e, \quad F z = f
\]
with
\[
z = \begin{bmatrix} x_s \\ u_s \\ \overline{\epsilon}_s \\ \underline{\epsilon}_s \end{bmatrix}, \quad
c = \begin{bmatrix} 0 \\ r \\ \overline{q} \\ \underline{q} \end{bmatrix}, \quad
E = \begin{bmatrix}
0 & I & 0 & 0 \\
0 & -I & 0 & 0 \\
C & 0 & -I & 0 \\
-C & 0 & 0 & -I \\
0 & 0 & -I & 0 \\
0 & 0 & 0 & -I
\end{bmatrix}, \quad
e = \begin{bmatrix}
u_{max} \\ -u_{min} \\ y_{max} - C_d \hat{d} \\ -(y_{min} - C_d \hat{d}) \\ 0 \\ 0
\end{bmatrix},
\]
\[
F = \begin{bmatrix} I - A & -B & 0 & 0 \end{bmatrix}, \qquad f = B_d \hat{d}
\]

Solution algorithms [Nocedal and Wright, 2006]
Solution methods are based on simplex or interior point algorithms.
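The standard form above maps directly onto an off-the-shelf LP solver. A sketch using `scipy.optimize.linprog`; the system data, weights, and bounds are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (2 states, 1 MV, 1 CV); weights/bounds are assumptions
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
Bd, Cd = B.copy(), np.zeros((1, 1))
d_hat = np.array([0.3])
n, m = B.shape
p = C.shape[0]

r = np.array([0.1])                               # MV cost vector
q_up, q_lo = np.array([10.0]), np.array([10.0])   # CV violation weights
u_min, u_max = np.array([-1.0]), np.array([1.0])
y_min, y_max = np.array([-0.5]), np.array([0.5])

# Decision vector z = [x_s; u_s; eps_upper; eps_lower]
Z = lambda a, b: np.zeros((a, b))
c = np.concatenate([np.zeros(n), r, q_up, q_lo])
E = np.block([
    [Z(m, n),  np.eye(m),  Z(m, p),    Z(m, p)],
    [Z(m, n), -np.eye(m),  Z(m, p),    Z(m, p)],
    [C,        Z(p, m),   -np.eye(p),  Z(p, p)],
    [-C,       Z(p, m),    Z(p, p),   -np.eye(p)],
    [Z(p, n),  Z(p, m),   -np.eye(p),  Z(p, p)],
    [Z(p, n),  Z(p, m),    Z(p, p),   -np.eye(p)],
])
e = np.concatenate([u_max, -u_min, y_max - Cd @ d_hat,
                    -(y_min - Cd @ d_hat), np.zeros(p), np.zeros(p)])
F = np.hstack([np.eye(n) - A, -B, Z(n, p), Z(n, p)])
f = Bd @ d_hat

# Free variables: override linprog's default nonnegativity bounds
res = linprog(c, A_ub=E, b_ub=e, A_eq=F, b_eq=f,
              bounds=(None, None), method="highs")
assert res.success
xs, us = res.x[:n], res.x[n:n + m]
```

With these numbers the positive MV cost pushes u_s down until the lower CV bound becomes active, so the optimal y_s sits at y_min; this "ride a constraint" behavior is typical of the LP formulation and motivates the QP alternative on the next slide.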
Steady-state optimization module: quadratic formulation

QP formulation
- The LP formulation is quite intuitive, but its outcome may be too "jumpy"
- Sometimes a QP formulation may be preferred:
\[
\begin{aligned}
\min_{u_s, x_s, \underline{\epsilon}_s, \overline{\epsilon}_s} \quad
& \|\overline{\epsilon}_s\|^2_{\overline{Q}_s} + \|\underline{\epsilon}_s\|^2_{\underline{Q}_s} + \|u_s - u_{sp}\|^2_{R_s} \\
\text{s.t.} \quad
& x_s = A x_s + B u_s + B_d \hat{d} \\
& u_{min} \le u_s \le u_{max} \\
& y_{min} - \underline{\epsilon}_s \le C x_s + C_d \hat{d} \le y_{max} + \overline{\epsilon}_s
\end{aligned}
\]
where u_sp is the desired MV setpoint and \(\|x\|^2_Q = x'Qx\)
- Tuning is slightly more complicated, but the outcome is usually "smoother"
Dynamic optimization module

[Block diagram of the MPC modules, as in the Estimator module slide, here highlighting the Dynamic Optimization block, which receives the targets x_s(k), u_s(k) and the estimates x̂(k), d̂(k) and computes the control input u(k).]
Dynamic optimization module: introduction

Agenda
- Make a finite-horizon prediction of the future CVs evolution based on a sequence of MVs
- Find the optimal MVs sequence, minimizing a cost function that comprises:
  - deviation of CVs (and MVs) from their targets
  - rate of change of MVs
- while respecting constraints on:
  - MVs (always)
  - CVs (possibly)
Dynamic optimization module: graphical interpretation

[Figure: the classical MPC picture. The predicted CV trajectory y extends over the prediction horizon and the piecewise-constant MV trajectory u over the control horizon; past and future are separated at the current time.]
Dynamic optimization module: formulation

Cost function, with Q, either R or S, \(\overline{Q}\), \(\underline{Q}\), and P positive definite matrices:
\[
\begin{aligned}
V_N(\hat{x}, \mathbf{u}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}) =
\sum_{j=0}^{N-1} \Big[ & \|\hat{y}(j) - y_s\|^2_Q + \|u(j) - u_s\|^2_R + \|u(j) - u(j-1)\|^2_S \\
& + \|\overline{\epsilon}(j)\|^2_{\overline{Q}} + \|\underline{\epsilon}(j)\|^2_{\underline{Q}} \Big] + \|x(N) - x_s\|^2_P
\end{aligned}
\]
\[
\text{s.t.} \quad x^{+} = Ax + Bu + B_d \hat{d}, \quad \hat{y} = Cx + C_d \hat{d}, \quad x(0) = \hat{x}, \quad y_s = C x_s + C_d \hat{d}
\]

Control problem
\[
\begin{aligned}
\min_{\mathbf{u}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}} \quad & V_N(\hat{x}, \mathbf{u}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}) \\
\text{s.t.} \quad & u_{min} \le u(j) \le u_{max} \\
& -\Delta u_{max} \le u(j) - u(j-1) \le \Delta u_{max} \\
& y_{min} - \underline{\epsilon}(j) \le \hat{y}(j) \le y_{max} + \overline{\epsilon}(j)
\end{aligned}
\]
Dynamic optimization module: formulation

Main tuning parameters
- Q: diagonal matrix of weights for CVs deviation from target:
\[
q_{ii} = \left( \frac{1}{DECE_i^M} \right)^2
\]
- R: diagonal matrix of weights for MVs deviation from target
- S: diagonal matrix of weights for MVs rate of change, often called move suppression factors
- \(\overline{Q}\), \(\underline{Q}\): diagonal matrices of weights for CVs violation of constraints:
\[
\overline{q}_{ii} = \left( \frac{1}{DECE_i^U} \right)^2, \qquad
\underline{q}_{ii} = \left( \frac{1}{DECE_i^L} \right)^2
\]
- u_max, u_min, y_max, y_min, Δu_max: constraint vectors
- N: prediction horizon
Dynamic optimization module: rewriting

Deviation variables
- Use the target (x_s, u_s):  x̃(j) = x(j) − x_s,  ũ(j) = u(j) − u_s
- Recall that:  x_s = A x_s + B u_s + B_d d̂,  y_s = C x_s + C_d d̂
- The cost function becomes:
\[
V_N(\hat{x}, \tilde{\mathbf{u}}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}) =
\sum_{j=0}^{N-1} \Big[ \|\tilde{x}(j)\|^2_{C'QC} + \|\tilde{u}(j)\|^2_R + \|\tilde{u}(j) - \tilde{u}(j-1)\|^2_S
+ \|\overline{\epsilon}(j)\|^2_{\overline{Q}} + \|\underline{\epsilon}(j)\|^2_{\underline{Q}} \Big] + \|\tilde{x}(N)\|^2_P
\]
\[
\text{s.t.} \quad \tilde{x}^{+} = A\tilde{x} + B\tilde{u}, \qquad \tilde{x}(0) = \hat{x} - x_s
\]
Dynamic optimization module: constrained LQR

Compact problem formulation
\[
\begin{aligned}
\min_{\tilde{\mathbf{u}}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}} \quad & V_N(\hat{x}, \tilde{\mathbf{u}}, \overline{\boldsymbol{\epsilon}}, \underline{\boldsymbol{\epsilon}}) \\
\text{s.t.} \quad & u_{min} - u_s \le \tilde{u}(j) \le u_{max} - u_s \\
& -\Delta u_{max} \le \tilde{u}(j) - \tilde{u}(j-1) \le \Delta u_{max} \\
& y_{min} - y_s - \underline{\epsilon}(j) \le C\tilde{x}(j) \le y_{max} - y_s + \overline{\epsilon}(j)
\end{aligned}
\]

Constrained LQR formulation
With suitable definitions (shown next), we obtain a constrained LQR formulation:
\[
\begin{aligned}
\min_{\mathbf{u}_a} \quad & V_N(x_a, \mathbf{u}_a) = \sum_{j=0}^{N-1} \Big[ \|x_a(j)\|^2_{Q_a} + \|u_a(j)\|^2_{R_a} + 2\, x_a(j)' M_a u_a(j) \Big] + \|x_a(N)\|^2_{P_a} \\
\text{s.t.} \quad & x_a^{+} = A_a x_a + B_a u_a, \qquad D_a u_a + E_a x_a \le e_a
\end{aligned}
\]
Dynamic optimization module: constrained LQR (cont.'d)

Augmented state and input
\[
x_a(j) = \begin{bmatrix} \tilde{x}(j) \\ \tilde{u}(j-1) \end{bmatrix}, \qquad
u_a(j) = \begin{bmatrix} \tilde{u}(j) \\ \overline{\epsilon}(j) \\ \underline{\epsilon}(j) \end{bmatrix}
\]
- The state is augmented to write the terms u(j) − u(j−1) (in the objective function and/or in the constraints)
- The input is augmented to write the soft output constraints

Matrices and vectors
\[
A_a = \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix}, \quad
B_a = \begin{bmatrix} B & 0 & 0 \\ I & 0 & 0 \end{bmatrix}, \quad
Q_a = \begin{bmatrix} C'QC & 0 \\ 0 & S \end{bmatrix}, \quad
R_a = \begin{bmatrix} R+S & 0 & 0 \\ 0 & \overline{Q} & 0 \\ 0 & 0 & \underline{Q} \end{bmatrix}, \quad
M_a = \begin{bmatrix} 0 & 0 & 0 \\ -S & 0 & 0 \end{bmatrix}
\]
\[
D_a = \begin{bmatrix} I & 0 & 0 \\ -I & 0 & 0 \\ I & 0 & 0 \\ -I & 0 & 0 \\ 0 & -I & 0 \\ 0 & 0 & -I \end{bmatrix}, \quad
E_a = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & -I \\ 0 & I \\ C & 0 \\ -C & 0 \end{bmatrix}, \quad
e_a = \begin{bmatrix} u_{max} - u_s \\ u_s - u_{min} \\ \Delta u_{max} \\ \Delta u_{max} \\ y_{max} - y_s \\ y_s - y_{min} \end{bmatrix}
\]
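The weight matrices above can be assembled mechanically and verified against the original stage cost; the check below confirms that the augmented quadratic form reproduces the rate-of-change penalty through the cross term M_a. All dimensions and weights are random illustrative data (the constraint matrices D_a, E_a, e_a follow the same block pattern and are omitted for brevity):

```python
import numpy as np

def build_clqr_matrices(A, B, C, Q, R, S, Qu, Ql):
    """Assemble the augmented constrained-LQR weight matrices of the slides.
    State x_a = [x̃; ũ(j-1)], input u_a = [ũ; ε̄; ε]."""
    n, m = B.shape
    p = C.shape[0]
    Z = np.zeros
    Aa = np.block([[A, Z((n, m))], [Z((m, n)), Z((m, m))]])
    Ba = np.block([[B, Z((n, 2 * p))], [np.eye(m), Z((m, 2 * p))]])
    Qa = np.block([[C.T @ Q @ C, Z((n, m))], [Z((m, n)), S]])
    Ra = np.block([[R + S, Z((m, 2 * p))],
                   [Z((p, m)), Qu, Z((p, p))],
                   [Z((p, m)), Z((p, p)), Ql]])
    Ma = np.block([[Z((n, m + 2 * p))], [-S, Z((m, 2 * p))]])
    return Aa, Ba, Qa, Ra, Ma

# Consistency check on random data: the stage cost written with the
# augmented matrices equals the original stage cost
rng = np.random.default_rng(0)
n, m, p = 3, 2, 2
A, B, C = rng.normal(size=(n, n)), rng.normal(size=(n, m)), rng.normal(size=(p, n))
Q, R, S = np.eye(p), np.eye(m), 2 * np.eye(m)
Qu, Ql = 5 * np.eye(p), 5 * np.eye(p)
Aa, Ba, Qa, Ra, Ma = build_clqr_matrices(A, B, C, Q, R, S, Qu, Ql)

xt, u_prev, u = rng.normal(size=n), rng.normal(size=m), rng.normal(size=m)
eu, el = rng.normal(size=p) ** 2, rng.normal(size=p) ** 2
xa = np.concatenate([xt, u_prev])
ua = np.concatenate([u, eu, el])

direct = (xt @ C.T @ Q @ C @ xt + u @ R @ u
          + (u - u_prev) @ S @ (u - u_prev) + eu @ Qu @ eu + el @ Ql @ el)
aug = xa @ Qa @ xa + ua @ Ra @ ua + 2 * xa @ Ma @ ua
assert np.isclose(direct, aug)
```

Expanding the augmented form shows where each block comes from: the S block in Q_a and the R+S block in R_a, together with the −S cross term in M_a, reconstruct exactly \|u(j) − u(j−1)\|²_S.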
Dynamic optimization module: QP solution

From constrained LQR to a QP problem. Stack the states and inputs over the horizon,
\[
\mathbf{x}_a = \begin{bmatrix} x_a(0) \\ x_a(1) \\ x_a(2) \\ \vdots \\ x_a(N) \end{bmatrix}, \qquad
\mathbf{u}_a = \begin{bmatrix} u_a(0) \\ u_a(1) \\ \vdots \\ u_a(N-1) \end{bmatrix},
\]
and define:
\[
\mathbf{A}_a = \begin{bmatrix} I \\ A_a \\ A_a^2 \\ \vdots \\ A_a^N \end{bmatrix}, \qquad
\mathbf{B}_a = \begin{bmatrix} 0 & & & \\ B_a & 0 & & \\ A_a B_a & B_a & \ddots & \\ \vdots & & \ddots & 0 \\ A_a^{N-1} B_a & \cdots & A_a B_a & B_a \end{bmatrix}
\;\Longrightarrow\; \mathbf{x}_a = \mathbf{A}_a x_a(0) + \mathbf{B}_a \mathbf{u}_a
\]
\[
\mathbf{Q}_a = \operatorname{diag}(Q_a, \dots, Q_a, P_a), \quad
\mathbf{R}_a = \operatorname{diag}(R_a, \dots, R_a), \quad
\mathbf{M}_a = \begin{bmatrix} \operatorname{diag}(M_a, \dots, M_a) \\ 0 \end{bmatrix},
\]
\[
\mathbf{D}_a = \operatorname{diag}(D_a, \dots, D_a), \quad
\mathbf{E}_a = \begin{bmatrix} \operatorname{diag}(E_a, \dots, E_a) & 0 \end{bmatrix}, \quad
\mathbf{e}_a = \begin{bmatrix} e_a \\ \vdots \\ e_a \end{bmatrix}
\]
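This "condensing" step can be coded directly from the definitions above. The sketch below builds the stacked matrices, forms the condensed Hessian and linear term, and checks them against a stage-by-stage simulation of the cost (the data are random and purely illustrative):

```python
import numpy as np

def condense(Aa, Ba, Qa, Ra, Ma, Pa, N):
    """Build the stacked prediction matrices and the condensed
    quadratic cost V_N = u'Hu + 2 u'q(x0) + const(x0)."""
    nx, nu = Ba.shape
    AA = np.vstack([np.linalg.matrix_power(Aa, j) for j in range(N + 1)])
    BB = np.zeros(((N + 1) * nx, N * nu))
    for i in range(1, N + 1):                 # block row of x_a(i)
        for j in range(i):                    # contribution of u_a(j), j < i
            BB[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = \
                np.linalg.matrix_power(Aa, i - 1 - j) @ Ba
    QQ = np.kron(np.eye(N + 1), Qa)
    QQ[-nx:, -nx:] = Pa                       # terminal weight
    RR = np.kron(np.eye(N), Ra)
    MM = np.vstack([np.kron(np.eye(N), Ma), np.zeros((nx, N * nu))])
    H = BB.T @ QQ @ BB + RR + BB.T @ MM + MM.T @ BB
    q = lambda x0: (BB.T @ QQ + MM.T) @ AA @ x0
    const = lambda x0: x0 @ AA.T @ QQ @ AA @ x0
    return AA, BB, H, q, const

# Consistency check: the simulated cost must match the condensed form
rng = np.random.default_rng(1)
nx, nu, N = 3, 2, 4
Aa = 0.3 * rng.normal(size=(nx, nx))
Ba = rng.normal(size=(nx, nu))
Qa, Ra, Pa = np.eye(nx), np.eye(nu), 2 * np.eye(nx)
Ma = 0.1 * rng.normal(size=(nx, nu))
AA, BB, H, q, const = condense(Aa, Ba, Qa, Ra, Ma, Pa, N)

x0, u = rng.normal(size=nx), rng.normal(size=N * nu)
x, V = x0.copy(), 0.0
for j in range(N):
    uj = u[j*nu:(j+1)*nu]
    V += x @ Qa @ x + uj @ Ra @ uj + 2 * x @ Ma @ uj
    x = Aa @ x + Ba @ uj
V += x @ Pa @ x
assert np.isclose(V, u @ H @ u + 2 * u @ q(x0) + const(x0))
```

The QP on the next slide minimizes ½ u'H u + u'q, which differs from V_N only by the factor ½ and the constant term, so both share the same minimizer.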
Dynamic optimization module: QP solution (cont.'d)

QP problem
\[
\min_{\mathbf{u}_a} \; \tfrac{1}{2} \mathbf{u}_a' \mathbf{H}_a \mathbf{u}_a + \mathbf{u}_a' \mathbf{q}_a
\quad \text{s.t.} \quad \mathbf{F}_a \mathbf{u}_a \le \mathbf{e}_a - \mathbf{G}_a x_a(0)
\]
where
\[
\mathbf{H}_a = \mathbf{B}_a' \mathbf{Q}_a \mathbf{B}_a + \mathbf{R}_a + \mathbf{B}_a' \mathbf{M}_a + \mathbf{M}_a' \mathbf{B}_a, \qquad
\mathbf{q}_a = (\mathbf{B}_a' \mathbf{Q}_a + \mathbf{M}_a') \mathbf{A}_a x_a(0),
\]
\[
\mathbf{F}_a = \mathbf{D}_a + \mathbf{E}_a \mathbf{B}_a, \qquad
\mathbf{G}_a = \mathbf{E}_a \mathbf{A}_a
\]

Observations
- Both the linear penalty and the constraint right-hand side vary linearly with the current augmented state x_a(0), while all other terms are fixed
- QP solvers are based on active-set methods or interior-point methods
Feedback controller synthesis from open-loop controllers: the receding horizon principle

A quote from [Lee and Markus, 1967]
"One technique for obtaining a feedback controller synthesis from knowledge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function. The first portion of this function is then used during a short time interval, after which a new measurement of the process state is made and a new open-loop control function is computed for this new measurement. The procedure is then repeated."
Optimal control sequence and closed-loop implementation

The optimal control sequence
The optimal control sequence u_a⁰ has the form
\[
\mathbf{u}_a^0 = \Big(
\underbrace{\tilde{u}^0(0), \tilde{u}^0(1), \dots, \tilde{u}^0(N-1)}_{\tilde{\mathbf{u}}^0},\;
\underbrace{\overline{\epsilon}(0), \overline{\epsilon}(1), \dots, \overline{\epsilon}(N-1)}_{\overline{\boldsymbol{\epsilon}}},\;
\underbrace{\underline{\epsilon}(0), \underline{\epsilon}(1), \dots, \underline{\epsilon}(N-1)}_{\underline{\boldsymbol{\epsilon}}}
\Big)
\]
The variables \(\overline{\boldsymbol{\epsilon}}\) and \(\underline{\boldsymbol{\epsilon}}\) are present only when soft output constraints are used.

Closed-loop implementation
- Only the first element of the optimal control sequence is injected into the plant: u(k) = ũ⁰(0) + u_s(k)
- The successor state is predicted using the estimator model:
  x̂⁻(k+1) = A x̂(k) + B u(k) + B_d d̂(k)
  d̂⁻(k+1) = d̂(k)
Linear MPC: summary

Overall algorithm
1. Given the predicted state and disturbance (x̂⁻(k), d̂⁻(k)) and the output measurement y(k), compute the filtered estimates:
   x̂(k) = x̂⁻(k) + L_x (y(k) − C x̂⁻(k) − C_d d̂⁻(k))
   d̂(k) = d̂⁻(k) + L_d (y(k) − C x̂⁻(k) − C_d d̂⁻(k))
2. Solve the Steady-State Optimization problem and compute the targets (x_s(k), u_s(k))
3. Define the deviation variables x̃(0) = x̂(k) − x_s(k), ũ(−1) = u(k−1) − u_s(k), and the initial regulator state x_a(0) = [x̃(0); ũ(−1)]. Solve the Dynamic Optimization problem to obtain ũ⁰
4. Inject the control action u(k) = ũ⁰(0) + u_s(k). Predict the successor state x̂⁻(k+1) = A x̂(k) + B u(k) + B_d d̂(k) and disturbance d̂⁻(k+1) = d̂(k). Set k ← k + 1 and go to 1
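The four steps can be sketched as a closed-loop simulation. To stay self-contained (no QP solver), the sketch below uses the unconstrained case with the typical industrial design (B_d = 0, C_d = I, L_x = 0, L_d = I); the plant carries an unmeasured input disturbance that the model ignores, so the run illustrates offset-free tracking. All numerical values are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.1], [0.5]])
C = np.array([[1.0, 0.0]])
n, m = B.shape
N, R_w = 20, 1e-2                 # horizon and input weight (tuning choices)
r_sp = np.array([1.0])
d_true = 0.2                      # unmeasured plant input disturbance

# Condensed unconstrained dynamic optimization: min Σ||ŷ−y_s||² + R_w||ũ||²
AA = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N + 1)])
BB = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        BB[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
G = np.kron(np.eye(N), C) @ BB
H = G.T @ G + R_w * np.eye(N * m)
T = G.T @ (np.kron(np.eye(N), C) @ AA)      # linear term: q = T @ x̃(0)

# Steady-state target system for the output disturbance model
Mss = np.block([[np.eye(n) - A, -B], [C, np.zeros((1, m))]])

x, x_hat = np.zeros(n), np.zeros(n)
y = C @ x
for k in range(250):
    d_hat = y - C @ x_hat                               # 1) deadbeat estimate
    sol = np.linalg.solve(Mss, np.concatenate([np.zeros(n), r_sp - d_hat]))
    xs, us = sol[:n], sol[n:]                           # 2) targets
    u_tilde = np.linalg.solve(H, -T @ (x_hat - xs))[:m] # 3) dynamic opt.
    u = u_tilde + us                                    # 4) receding horizon
    x = A @ x + B @ (u + d_true)                        # plant (mismatched)
    y = C @ x
    x_hat = A @ x_hat + B @ u                           # model prediction

assert abs(y[0] - r_sp[0]) < 1e-3    # no steady-state offset despite d_true
```

Only the first move of each open-loop solution is applied, exactly as the receding horizon principle prescribes; the integrating output disturbance absorbs the plant/model mismatch, so the CV settles at the setpoint.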
General formulation of an optimization problem

The three ingredients
- x ∈ R^n, vector of variables
- f: R^n → R, objective function
- c: R^n → R^m, vector function of constraints that the variables must satisfy; m is the number of constraints

The optimization problem
\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad
\begin{cases} c_i(x) = 0 & \forall i \in \mathcal{E} \\ c_i(x) \ge 0 & \forall i \in \mathcal{I} \end{cases}
\]
E, I: sets of indices of equality and inequality constraints, respectively
Constrained optimization: example 1

Solve
\[
\min \; x_1 + x_2 \quad \text{s.t.} \quad x_1^2 + x_2^2 - 2 = 0
\]

Standard notation, feasibility region and solution
- In standard notation: f(x) = x_1 + x_2, I = ∅, E = {1}, c_1(x) = x_1² + x_2² − 2
- Feasibility region: circle of radius √2, only the border
- Solution: x* = [−1, −1]'
[Figure: the feasible circle and the level lines of f(x) in the (x_1, x_2) plane.]

Observation
\[
\nabla f(x^*) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
\nabla c_1(x^*) = \begin{bmatrix} -2 \\ -2 \end{bmatrix}
\;\Longrightarrow\; \nabla f(x^*) = -\frac{1}{2} \nabla c_1(x^*)
\]
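The gradient-alignment observation can be verified numerically with simple finite differences (the step size h is an arbitrary choice):

```python
import numpy as np

# Example 1: min x1 + x2 s.t. x1^2 + x2^2 - 2 = 0, with x* = (-1, -1)
f = lambda x: x[0] + x[1]
c1 = lambda x: x[0]**2 + x[1]**2 - 2.0

def grad(fun, x, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x_star = np.array([-1.0, -1.0])
gf, gc = grad(f, x_star), grad(c1, x_star)
lam = -0.5                                   # Lagrange multiplier at x*
assert abs(c1(x_star)) < 1e-12               # feasibility
assert np.allclose(gf, lam * gc, atol=1e-6)  # ∇f(x*) = λ* ∇c1(x*)
```

The same check with λ = +1/2 applies to example 2 on the next slide, where the constraint gradient has the opposite sign.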
Constrained optimization: example 2

Solve
\[
\min \; x_1 + x_2 \quad \text{s.t.} \quad 2 - x_1^2 - x_2^2 \ge 0
\]

Standard notation, feasibility region and solution
- In standard notation: f(x) = x_1 + x_2, I = {1}, E = ∅, c_1(x) = 2 − (x_1² + x_2²)
- Feasibility region: circle of radius √2, including the interior
- Solution: x* = [−1, −1]'
[Figure: the feasible disc and the level lines of f(x) in the (x_1, x_2) plane.]

Observation
\[
\nabla f(x^*) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
\nabla c_1(x^*) = \begin{bmatrix} 2 \\ 2 \end{bmatrix}
\;\Longrightarrow\; \nabla f(x^*) = \frac{1}{2} \nabla c_1(x^*)
\]
Constrained optimality conditions (KKT)

Lagrangian function
\[
\mathcal{L}(x, \lambda) = f(x) - \sum_{i \in \mathcal{E} \cup \mathcal{I}} \lambda_i c_i(x)
\]

Karush-Kuhn-Tucker optimality conditions
If x* is a local solution to the standard problem, there exists a vector λ* ∈ R^m such that the following conditions hold:
\[
\begin{aligned}
\nabla_x \mathcal{L}(x^*, \lambda^*) &= 0 \\
c_i(x^*) &= 0 && \text{for all } i \in \mathcal{E} \\
c_i(x^*) &\ge 0 && \text{for all } i \in \mathcal{I} \\
\lambda_i^* &\ge 0 && \text{for all } i \in \mathcal{I} \\
\lambda_i^* c_i(x^*) &= 0 && \text{for all } i \in \mathcal{E} \cup \mathcal{I}
\end{aligned}
\]
- The components of λ* are called Lagrange multipliers
- Notice that a multiplier is zero when the corresponding constraint is inactive
Linear programming (LP) problems

LP in standard form
\[
\min_x \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \ge 0
\]
where c ∈ R^n, x ∈ R^n, A ∈ R^{m×n}, b ∈ R^m and rank(A) = m

Optimality conditions
- Lagrangian function: \(\mathcal{L}(x, \pi, s) = c^T x - \pi^T (Ax - b) - s^T x\)
- If x* is a solution of the linear program, then:
\[
\begin{aligned}
A^T \pi^* + s^* &= c \\
A x^* &= b \\
x^* &\ge 0 \\
s^* &\ge 0 \\
x_i^* s_i^* &= 0, \quad i = 1, 2, \dots, n
\end{aligned}
\]
LP: the simplex method

The "base points"
A point x ∈ R^n is a base point if:
- Ax = b and x ≥ 0
- At most m components of x are nonzero
- The columns of A corresponding to the nonzero elements are linearly independent

Fundamental aspects of the simplex method
- Base points are vertices of the feasibility region
- The solution is a base point
- The simplex method iterates from a base point x_k to another one x_{k+1}, and stops when all components of s_k are nonnegative
- When a component of s_k is negative, a new base point x_{k+1} is selected in which the corresponding element of x is allowed to become nonzero
Quadratic programming (QP) problems (1/2)

Standard form
\[
\min_x \; \tfrac{1}{2} x^T G x + x^T d
\quad \text{subject to:} \quad
a_i^T x = b_i, \;\; i \in \mathcal{E}; \qquad
a_i^T x \ge b_i, \;\; i \in \mathcal{I}
\]
where G ∈ R^{n×n} is symmetric (positive definite), d ∈ R^n, and a_i ∈ R^n, b_i ∈ R, for all i ∈ E ∪ I

The active set
\[
\mathcal{A}(x^*) = \left\{ i \in \mathcal{E} \cup \mathcal{I} : a_i^T x^* = b_i \right\}
\]
Quadratic programming (QP) problems (2/2)

Lagrangian function
\[
\mathcal{L}(x, \lambda) = \tfrac{1}{2} x^T G x + x^T d - \sum_{i \in \mathcal{E} \cup \mathcal{I}} \lambda_i (a_i^T x - b_i)
\]

Optimality conditions (KKT)
\[
\begin{aligned}
G x^* + d - \sum_{i \in \mathcal{A}(x^*)} \lambda_i^* a_i &= 0 \\
a_i^T x^* &= b_i && \text{for all } i \in \mathcal{A}(x^*) \\
a_i^T x^* &> b_i && \text{for all } i \in \mathcal{I} \setminus \mathcal{A}(x^*) \\
\lambda_i^* &\ge 0 && \text{for all } i \in \mathcal{I} \cap \mathcal{A}(x^*)
\end{aligned}
\]
Active set methods for convex QP problems

Fundamental steps
1. Given a feasible x_k, evaluate its active set A(x_k) and build the matrix A whose rows are {a_i^T}, i ∈ A(x_k)
2. Solve the KKT linear system
\[
\begin{bmatrix} G & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} -p_k \\ \lambda_k \end{bmatrix} =
\begin{bmatrix} d + G x_k \\ A x_k - b \end{bmatrix}
\]
3. If ‖p_k‖ ≤ ρ, check whether λ_i* ≥ 0 for all i ∈ A(x_k). If so, stop.
4. If a multiplier λ_i* < 0 for some i ∈ A(x_k), remove the i-th constraint from the active set.
5. If ‖p_k‖ > ρ, define x_{k+1} = x_k + α_k p_k, where α_k is the largest scalar in (0, 1] such that no inequality constraint is violated. When a blocking constraint is found, it is included in the new active set A(x_{k+1}).
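The five steps translate into a compact solver. The sketch below handles the inequality-only convex case and follows the slide's KKT system verbatim; it is illustrative (no factorization reuse, no degeneracy safeguards), and the example problem is an assumption chosen so the answer is easy to verify by hand:

```python
import numpy as np

def qp_active_set(G, d, A, b, x0, tol=1e-9, max_iter=50):
    """Primal active-set sketch for min 0.5 x'Gx + d'x s.t. A x >= b
    (inequalities only). Assumes G positive definite and x0 feasible."""
    x = x0.astype(float)
    nx = len(x)
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < tol]  # active set
    for _ in range(max_iter):
        Aw, bw = A[W], b[W]
        # KKT system of the slides: [G A'; A 0] [-p; lam] = [d + Gx; Ax - b]
        K = np.zeros((nx + len(W), nx + len(W)))
        K[:nx, :nx] = G
        K[:nx, nx:] = Aw.T
        K[nx:, :nx] = Aw
        sol = np.linalg.solve(K, np.concatenate([d + G @ x, Aw @ x - bw]))
        p, lam = -sol[:nx], sol[nx:]
        if np.linalg.norm(p) <= tol:
            if len(W) == 0 or lam.min() >= -tol:
                return x, W                      # KKT satisfied: optimum found
            W.pop(int(np.argmin(lam)))           # drop most negative multiplier
        else:
            alpha, block = 1.0, None             # largest step in (0, 1]
            for i in range(len(b)):
                ai_p = A[i] @ p
                if i not in W and ai_p < -tol:
                    a_i_max = (b[i] - A[i] @ x) / ai_p
                    if a_i_max < alpha:
                        alpha, block = a_i_max, i
            x = x + alpha * p
            if block is not None:
                W.append(block)                  # blocking constraint enters
    raise RuntimeError("active-set method did not converge")

# Example: min (x1-1)^2 + (x2-2)^2 s.t. x1 >= 0, x2 >= 0, x1 + x2 <= 2
G = 2 * np.eye(2)
d = np.array([-2.0, -4.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # rows a_i', A x >= b
b = np.array([0.0, 0.0, -2.0])
x_opt, active = qp_active_set(G, d, A, b, x0=np.zeros(2))  # → [0.5, 1.5]
```

Starting from the origin, the method first drops the two bound constraints (negative multipliers), then hits the blocking constraint x1 + x2 ≤ 2 and terminates with it active, which is exactly the geometric solution.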
Nonlinear programming (NLP) problems via SQP algorithms

Nonlinear programming (NLP) problems
\[
\min_x \; f(x) \quad \text{s.t.} \quad
c_i(x) = 0, \;\; i \in \mathcal{E}; \qquad
c_i(x) \ge 0, \;\; i \in \mathcal{I}
\]

"Sequential Quadratic Programming" (SQP) approach
At each iterate x_k, solve the QP subproblem:
\[
\begin{aligned}
\min_{p_k} \quad & \tfrac{1}{2} p_k^T W_k p_k + \nabla f(x_k)^T p_k \\
\text{s.t.} \quad & \nabla c_i(x_k)^T p_k + c_i(x_k) = 0, \quad i \in \mathcal{E} \\
& \nabla c_i(x_k)^T p_k + c_i(x_k) \ge 0, \quad i \in \mathcal{I}
\end{aligned}
\]
where W_k is (an approximation of) the Hessian of the Lagrangian.
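With a single equality constraint the SQP subproblem has no inequalities and reduces to one KKT linear solve per iteration. A sketch on example 1 (min x1 + x2 s.t. x1² + x2² − 2 = 0), using the exact Hessian of the Lagrangian for W_k; the starting point and multiplier guess are arbitrary assumptions:

```python
import numpy as np

def sqp_equality(x, lam, iters=20):
    """SQP iterations for example 1; each subproblem's KKT system is
    [W -∇c; ∇c' 0][p; λ⁺] = [-∇f; -c]."""
    for _ in range(iters):
        g = np.array([1.0, 1.0])            # ∇f
        c = x[0]**2 + x[1]**2 - 2.0         # c(x)
        gc = 2.0 * x                        # ∇c
        W = -lam * 2.0 * np.eye(2)          # ∇²L = ∇²f − λ ∇²c = −2λ I
        K = np.block([[W, -gc.reshape(2, 1)],
                      [gc.reshape(1, 2), np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-g, [-c]]))
        x = x + sol[:2]                     # step p_k
        lam = sol[2]                        # updated multiplier
    return x, lam

x_star, lam_star = sqp_equality(np.array([-1.5, -0.5]), lam=-0.5)
# converges to x* = (-1, -1) with λ* = -1/2, matching the KKT analysis
```

For general problems the subproblem retains linearized inequalities and is solved with a QP method such as the active-set scheme of the previous slide; here the Newton-like structure of SQP is visible in its purest form.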
References I
E. J. Davison and H. W. Smith. Pole assignment in linear time-invariant multivariable systems with
constant disturbances. Automatica, 7:489–498, 1971.
B. A. Francis and W. M. Wonham. The internal model principle of control theory. Automatica, 12:
457–465, 1976.
H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. John Wiley & Sons, 1972.
E. B. Lee and L. Markus. Foundations of Optimal Control Theory. John Wiley & Sons, New York, 1967.
P. Lundström, J. H. Lee, M. Morari, and S. Skogestad. Limitations of dynamic matrix control. Comp.
Chem. Eng., 19:409–421, 1995.
U. Maeder, F. Borrelli, and M. Morari. Linear offset-free model predictive control. Automatica, 45(10):
2214–2222, 2009.
K. R. Muske and T. A. Badgwell. Disturbance modeling for offset-free linear model predictive control. J.
Proc. Contr., 12:617–632, 2002.
J. Nocedal and S. J. Wright. Numerical Optimization. Springer, second edition, 2006.
G. Pannocchia. Robust disturbance modeling for model predictive control with application to
multivariable ill-conditioned processes. J. Proc. Cont., 13:693–701, 2003.
G. Pannocchia and A. Bemporad. Combined design of disturbance model and observer for offset-free
model predictive control. IEEE Trans. Auto. Contr., 52(6):1048–1053, 2007.
G. Pannocchia and J. B. Rawlings. Disturbance models for offset-free model predictive control. AIChE J.,
49:426–437, 2003.
References II
M. R. Rajamani, J. B. Rawlings, and S. J. Qin. Achieving state estimation equivalence for misassigned
disturbances in offset-free model predictive control. AIChE J., 55(2):396–407, 2009.
J. B. Rawlings, E. S. Meadows, and K. R. Muske. Nonlinear model predictive control: a tutorial and
survey. In ADCHEM Conference, pages 203–214, Kyoto, Japan, 1994.
H. W. Smith and E. J. Davison. Design of industrial regulators. Integral feedback and feedforward
control. Proc. IEE, 119(8):1210–1216, 1972.