Linear Programming

Summary
→ Linear Programs (LP) have the form:

      min_x  cᵀx

      subject to:
          Ax ≥ b
          0 ≤ x ≤ x_u
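As a small numeric sketch (the data are invented for illustration), an LP in this form can be solved with SciPy's linprog. Since linprog works with "≤" inequalities, Ax ≥ b is passed as −Ax ≤ −b:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP: min c^T x  s.t.  A x >= b,  0 <= x <= x_u
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
x_u = np.array([10.0, 10.0])

# linprog uses "<=" inequalities, so A x >= b becomes -A x <= -b
res = linprog(c, A_ub=-A, b_ub=-b,
              bounds=[(0.0, ub) for ub in x_u],
              method="highs")
print(res.x, res.fun)
```

Here the optimum lies where both inequality constraints are active, illustrating the point below: the solution sits at the intersection of the active constraints.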
→ the solution of a well-posed LP is uniquely determined by the intersection of the active constraints.
→ most LPs are solved using the SIMPLEX method.

→ commercial computer codes use the Revised SIMPLEX algorithm (more efficient).
→ optimization studies include:
  • solution to the nominal optimization problem,
  • a sensitivity study to determine how uncertainty in the assumed problem parameter values affects the optimum.
Linear Programming Applications
Linear Programming is used in a wide range of activities:
1) production planning and scheduling,
2) financial planning,
3) solution of Nonlinear Programming problems,
4) solution of non-square control problems.
We will examine the use of LP for solving NLP and non-square control problems.
Quadratic Programming using LP
Consider an optimization problem of the form:
      min_x  (1/2) xᵀHx + cᵀx

      subject to:
          Ax ≥ b
          x ≥ 0
where H is a symmetric, positive definite matrix.
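A small QP instance of this form can be solved numerically; the sketch below uses SciPy's general-purpose SLSQP solver on invented data (in practice a dedicated QP code would be used):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative QP: min (1/2) x^T H x + c^T x  s.t.  A x >= b,  x >= 0
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])           # symmetric positive definite
c = np.array([-2.0, -5.0])
A = np.array([[-1.0, -1.0]])         # encodes x1 + x2 <= 3 in A x >= b form
b = np.array([-3.0])

obj = lambda x: 0.5 * x @ H @ x + c @ x
cons = [{"type": "ineq", "fun": lambda x: A @ x - b}]
res = minimize(obj, x0=np.zeros(2), constraints=cons,
               bounds=[(0.0, None)] * 2, method="SLSQP")
print(res.x)
```

The unconstrained minimizer (1, 2.5) violates x1 + x2 ≤ 3, so the solver returns its projection onto the active constraint, (0.75, 2.25).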
Linear Programming Applications
This is a Quadratic Programming (QP) problem. Problems
such as these are encountered in areas such as parameter
estimation (regression), process control and so forth.
Recall that the Lagrangian for this problem would be
formed as:

      L(x, λ, µ) = (1/2) xᵀHx + cᵀx − λᵀ(Ax − b) − µᵀx

where λ and µ are the Lagrange multipliers for the
constraints.
The optimality conditions for this problem are:

      ∇_x L(x, λ, µ) = xᵀH + cᵀ − λᵀA − µᵀ = 0ᵀ
      Ax − b ≥ 0
      x, λ, µ ≥ 0
      λᵀ(Ax − b) + µᵀx = 0
It can be shown that the solution to the QP problem is
also a solution of the following LP problem:

      min_{x,λ,µ}  (1/2)(cᵀx + bᵀλ)

      subject to:
          Hx + c − Aᵀλ − µ = 0
          λᵀ(Ax − b) + µᵀx = 0
          Ax − b ≥ 0
          x, λ, µ ≥ 0

This LP problem may prove easier to solve than the
original QP problem.
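The optimality conditions can be checked numerically on a small instance (invented data). At the optimum, stationarity and complementarity hold, and the QP objective coincides with (1/2)(cᵀx + bᵀλ):

```python
import numpy as np

# Tiny illustrative QP: min (1/2) x^T H x + c^T x  s.t.  A x >= b, x >= 0
H = np.eye(2)
A = np.array([[1.0, 1.0]])
c = np.array([-3.0, -3.0])
b = np.array([8.0])

# For this instance the constraint x1 + x2 >= 8 is active at the
# optimum; by symmetry x* = (4, 4), with multipliers lam = 1, mu = 0.
x = np.array([4.0, 4.0])
lam = np.array([1.0])
mu = np.zeros(2)

# Stationarity: Hx + c - A^T lam - mu = 0
assert np.allclose(H @ x + c - A.T @ lam - mu, 0.0)
# Complementarity: lam^T (Ax - b) + mu^T x = 0
assert np.isclose(lam @ (A @ x - b) + mu @ x, 0.0)

# QP objective equals (1/2)(c^T x + b^T lam) at the optimum
qp_obj = 0.5 * x @ H @ x + c @ x
lp_obj = 0.5 * (c @ x + b @ lam)
print(qp_obj, lp_obj)
```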
Successive Linear Programming
Consider the general nonlinear optimization problem:
      min_x  P(x)

      subject to:
          h(x) = 0
          g(x) ≤ 0
If we linearize the objective function and all of the
constraints about some point x_k, the original nonlinear
problem may be approximated (locally) by:

      min_x  P(x_k) + ∇_x P(x_k)(x − x_k)

      subject to:
          h(x_k) + ∇_x h(x_k)(x − x_k) = 0
          g(x_k) + ∇_x g(x_k)(x − x_k) ≤ 0
Since x_k is known, the problem is linear in the
variables x and can be solved using LP techniques.
The solution to the approximate problem, x*, will often
not be a feasible point of the original nonlinear
problem (i.e. the point x* may not satisfy the original
nonlinear constraints). There are a variety of techniques
to correct the solution x* to ensure feasibility.
A possibility would be to search in the direction of x*,
starting from x_k, to find a feasible point:

      x_{k+1} = x_k + α(x* − x_k)
We will discuss such ideas later with nonlinear
programming techniques.
Then, the SLP algorithm is:
1) linearize the nonlinear problem about the current
   point x_k,
2) solve the approximate LP,
3) correct the solution (x*) to ensure feasibility of the
   new point x_{k+1} (i.e. find a point x_{k+1} in the
   direction of x* which satisfies the original nonlinear
   constraints),
4) repeat steps 1 through 3 until convergence.
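The steps above can be sketched in a few lines with SciPy's linprog, on an invented inequality-constrained problem whose solution lies at a vertex of the constraints; step 3's correction is done by simple backtracking toward x_k:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative NLP:  min -x1 - x2
#   s.t.  x1^2 + x2 - 2 <= 0,  x2 - 1 <= 0   (solution x* = (1, 1))

def g(x):       # inequality constraints, g(x) <= 0
    return np.array([x[0]**2 + x[1] - 2.0, x[1] - 1.0])

def g_jac(x):   # constraint Jacobian
    return np.array([[2.0 * x[0], 1.0], [0.0, 1.0]])

grad_P = np.array([-1.0, -1.0])   # objective is linear here
x_k = np.array([0.0, 0.0])        # feasible starting point
step = 1.0                        # bound on each component of (x - x_k)

for _ in range(20):
    # 1) linearize about x_k,  2) solve the LP in d = x - x_k
    res = linprog(grad_P, A_ub=g_jac(x_k), b_ub=-g(x_k),
                  bounds=[(-step, step)] * 2, method="highs")
    d = res.x
    # 3) backtrack toward x_k until the *nonlinear* constraints hold
    alpha = 1.0
    while np.any(g(x_k + alpha * d) > 1e-9):
        alpha *= 0.5
    x_k = x_k + alpha * d
    # 4) repeat until the accepted step is negligible
    if np.linalg.norm(alpha * d) < 1e-8:
        break
print(x_k)
```

Because the solution is a vertex (both constraints active), the iteration settles on x* = (1, 1) without the zigzagging SLP shows on optima that lie along curved constraint surfaces.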
The SLP technique has several disadvantages of which
slow convergence and infeasible intermediate solutions
are the most important. However, the ease of
implementation of the method can often compensate for
the method’s shortcomings.
Non-Square Control Problems
Consider a very common process control problem where
there are more manipulated variables (u) available for
control purposes than there are output variables (y) that
must be controlled:
[Block diagram: a controller receives the setpoints y*
and computes the inputs u1, ..., um for the plant, which
produces the outputs y1, ..., yn.]
Most multivariable control algorithms require that there
be exactly as many input variables u as there are output
variables y (i.e. m = n). In short, most control algorithms
are only applicable to square problems.
Such non-square problems present an economic
opportunity. Because of the extra degrees of freedom
available for control purposes, we are free to choose
steady-state targets for the input variables that minimize
the cost of operating the plant. The Dynamic Matrix
Control (DMC) strategy incorporates these ideas:
Morshedi, A.M., C.R. Cutler and T.A. Skrovanek, "Optimal Solution of Dynamic Matrix
Control with Linear Programming Techniques," Proceedings of the ACC, pp. 199-208,
Boston, June 1985.
If we know the steady-state gains of the plant, we can
write the steady-state economic optimization problem as:
      min_{u_ss}  cᵀu_ss

      subject to:
          y*_ss = G_ss u_ss
          u_lower ≤ u_ss ≤ u_upper
which is a Linear Programming problem. The solution
u_ss* provides steady-state target values for the
manipulated variables u.
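A sketch of this target calculation with SciPy's linprog (the gain matrix, costs and bounds are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# 2 controlled outputs, 3 manipulated inputs: one extra degree of freedom
G_ss = np.array([[1.0, 0.5, 0.2],
                 [0.3, 1.0, 0.4]])
y_sp = np.array([1.0, 0.5])         # output targets y*_ss
c = np.array([3.0, 1.0, 2.0])       # operating cost of each input
u_lo = np.zeros(3)
u_hi = np.full(3, 2.0)

# min c^T u_ss  s.t.  G_ss u_ss = y*_ss,  u_lo <= u_ss <= u_hi
res = linprog(c, A_eq=G_ss, b_eq=y_sp,
              bounds=list(zip(u_lo, u_hi)), method="highs")
u_ss = res.x
print(u_ss, res.fun)
```

The resulting u_ss* is the cheapest input combination that still meets both output targets; it becomes the target handed to the controller below.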
We can now augment the original control problem to
utilize the extra degrees of freedom:
[Block diagram: the controller now receives both the
output setpoints y* and the input targets u_ss*, and
sends the inputs u to the plant, which produces the
outputs y.]
This control problem can be solved by conventional
multivariable control techniques such as the standard
DMC.