
LINEAR PROGRAMMING
A linear programming problem is a mathematical programming problem in which both the objective function and all the functions appearing in the constraints are linear. It can be represented in canonical form as

(LP)   maximize   c′x
       subject to Ax ≤ b,
                  x ≥ 0,

or

(LP)   minimize   c′x
       subject to Ax ≥ b,
                  x ≥ 0,

where c ∈ ℝⁿ, b ∈ ℝᵐ and A is an m × n matrix
(n variables, m constraints).
Equivalent forms
General form:

(LP)   minimize   c′x
       subject to ai · x = bi,   i ∈ M1,
                  ai · x ≥ bi,   i ∈ M2,
                  xj ≥ 0,        j ∈ N1,
                  xj free,       j ∈ N2.

Standard form:

(LP)   minimize   c′x
       subject to Ax = b,
                  x ≥ 0.
see examples
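For instance, a problem with inequality constraints Ax ≤ b and x ≥ 0 can be brought to standard form by adding one slack variable per constraint. The sketch below (an illustration of mine, not taken from the slides) does this with NumPy; the example data are made up.

# Sketch: bring  min c'x  s.t. A x <= b, x >= 0  to standard form
#   min [c;0]'[x;s]  s.t. [A I][x;s] = b, [x;s] >= 0
# by adding one slack variable per inequality constraint.
import numpy as np

def to_standard_form(c, A, b):
    m, _ = A.shape
    A_std = np.hstack([A, np.eye(m)])            # [A | I]
    c_std = np.concatenate([c, np.zeros(m)])     # slack variables have zero cost
    return c_std, A_std, b

# made-up example data
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 0.5]])
b = np.array([4.0, 3.0])
print(to_standard_form(c, A, b))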
A LINEAR PRODUCTION PROBLEM
A firm produces n different types of items; each unit of product requires fixed quantities of m production factors (resources).
Let x1, x2, . . . , xn be the quantities produced of each item and let p1, p2, . . . , pn > 0 be the corresponding sale prices.
Assume that each unit of product j requires the quantity aij ≥ 0 of the i-th production factor, j = 1, 2, . . . , n, i = 1, 2, . . . , m.
Assume also that the resources (production factors) have limited availability b1, b2, . . . , bm > 0.
[Diagram: each activity Aj sells at price pj and consumes aij units of resource Ri, whose availability is bi, i = 1, . . . , m, j = 1, . . . , n.]
We consider the problem of determining the production plan which maximizes the revenue while respecting the availability constraints.
It turns out to be a linear problem

       maximize   p′x
       subject to Ax ≤ b,
                  x ≥ 0,

where p = (p1, p2, . . . , pn)′, b = (b1, b2, . . . , bm)′ and A = (aij), i = 1, 2, . . . , m, j = 1, 2, . . . , n, is the m × n matrix which represents the technological structure of the firm.
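As an aside, a small instance of this production problem can be solved with scipy.optimize.linprog; the prices, technology matrix and availabilities below are made-up illustration data, and linprog minimizes, so the revenue p′x is negated.

# A small made-up instance of the production problem, solved with scipy.
import numpy as np
from scipy.optimize import linprog

p = np.array([3.0, 5.0])              # hypothetical sale prices p_j
A = np.array([[1.0, 2.0],             # hypothetical technology matrix (a_ij)
              [3.0, 1.0]])
b = np.array([14.0, 18.0])            # hypothetical resource availabilities b_i

# maximize p'x  s.t.  Ax <= b, x >= 0   <=>   minimize -p'x
res = linprog(c=-p, A_ub=A, b_ub=b, bounds=[(0, None)] * len(p))
print(res.x, -res.fun)                # optimal production plan and maximal revenue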
The feasible region (FR) of a linear problem (LP),

       X = {x | Ax ≥ b, x ≥ 0},

is a convex set (it is the intersection of finitely many half-spaces, each of which is convex); it is closed, and its boundary points belong to hyperplanes, called faces.
If X is also bounded ⇒ polyhedron.
A vertex (or extreme point) is a point that CANNOT be obtained as a convex linear combination of other points of the set. The vertices are the points where n or more hyperplanes (faces) intersect.

       vertex ⇒ boundary point,   boundary point ⇏ vertex

Edges are segments connecting two vertices; we obtain them by intersecting n − 1 hyperplanes (faces).
Two vertices are adjacent if the segment which connects them is an edge of the polyhedron.
Each point of a convex and bounded polyhedron can be written as a convex linear combination of the polyhedron's vertices.
[Figures: examples of polyhedra and their vertices in ℝ² and in ℝ³.]
Example. Find the vertices of the convex polyhedron described by the following inequalities:
x1 + x2 + x3 ≤ 20
x1 + x2 + x3 ≥ 10
x1 ≥ 0
x2 ≥ 0
x3 ≥ 0.
SOL:
Vertices can be obtained by intersecting at least n = 3 of the following planes:
H1 : x1 + x2 + x3 = 20
H2 : x1 + x2 + x3 = 10
H3 : x1 = 0
H4 : x2 = 0
H5 : x3 = 0.
There are

       C(5, 3) = 5! / (3!(5 − 3)!) = (5 · 4 · 3 · 2 · 1) / ((3 · 2 · 1) · (2 · 1)) = 10

possible intersections of the 5 planes taken 3 at a time.
Some intersections may coincide or be empty (H1 ∩ H2 ∩ H3, H1 ∩ H2 ∩ H4, H1 ∩ H2 ∩ H5); others can be external with respect to the polyhedron (H3 ∩ H4 ∩ H5).
H1 ∩ H3 ∩ H4 = (0, 0, 20)
H1 ∩ H3 ∩ H5 = (0, 20, 0)
H1 ∩ H4 ∩ H5 = (20, 0, 0)
H2 ∩ H3 ∩ H4 = (0, 0, 10)
H2 ∩ H3 ∩ H5 = (0, 10, 0)
H2 ∩ H4 ∩ H5 = (10, 0, 0)
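The same enumeration can be automated; the sketch below (mine, not from the slides) intersects the five planes three at a time with NumPy and keeps the feasible intersection points.

# Enumerate the vertices of the example polyhedron by intersecting the five
# bounding planes three at a time and keeping the feasible intersection points.
from itertools import combinations
import numpy as np

# Each plane H_i is written as  a_i . x = d_i
planes = [
    (np.array([1.0, 1.0, 1.0]), 20.0),   # H1
    (np.array([1.0, 1.0, 1.0]), 10.0),   # H2
    (np.array([1.0, 0.0, 0.0]), 0.0),    # H3
    (np.array([0.0, 1.0, 0.0]), 0.0),    # H4
    (np.array([0.0, 0.0, 1.0]), 0.0),    # H5
]

def feasible(x, tol=1e-9):
    s = x.sum()
    return 10 - tol <= s <= 20 + tol and np.all(x >= -tol)

vertices = set()
for trio in combinations(planes, 3):
    M = np.array([a for a, _ in trio])
    d = np.array([di for _, di in trio])
    if np.linalg.matrix_rank(M) < 3:
        continue                          # the three planes do not meet in one point
    x = np.linalg.solve(M, d)
    if feasible(x):
        vertices.add(tuple(np.round(x, 6)))

print(sorted(vertices))                   # the six vertices listed above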
With k inequalities and n variables, the polyhedron has at most

       C(k, n) = k! / (n! (k − n)!)

vertices. It is a finite number, but it may be very large!
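Just to see how quickly this bound grows, one can evaluate it directly, for example with Python's math.comb:

# The number of candidate vertices C(k, n) grows combinatorially.
from math import comb

for k, n in [(5, 3), (20, 10), (50, 25)]:
    print(k, n, comb(k, n))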
The objective function is linear (hence both concave and convex) and the FR is convex
⇓
Local/Global Theorem: each local maximum/minimum is also global.
----------------------------------------------------------------
The objective function is linear
⇓
∇f(x) = c; if c ≠ 0, there are NO maximum/minimum points in the interior of the FR
⇓
the maximum/minimum points (if any) lie on the boundary of the FR (vertices, edges or faces).
If FR = ∅ ⇒ there is no optimal solution.
If FR ≠ ∅ and bounded ⇒ the Weierstrass theorem guarantees the existence of at least one optimal solution (every linear function is continuous).
If FR ≠ ∅ and unbounded ⇒
• if f is bounded above/below ⇒ ∃ max/min (NB: this holds for LP, see below);
• if f is unbounded above/below ⇒ max/min do not exist.
[Figure: example of a NONLINEAR function, bounded and WITHOUT a maximum.]
Geometrical Solution
of a linear problem in ℝ²
Linear objective function ⇒ the level curves are hyperplanes (lines) perpendicular to the gradient vector of the objective function (direction of maximum increase)
⇓
• Either the optimal level line touches the boundary of the feasible region at a single vertex
(UNIQUE SOLUTION),
• or along an edge
(INFINITELY MANY OPTIMAL SOLUTIONS).
EXAMPLE (bounded FR - unique solution)

       max z = 2x1 − 4x2
       s.t.  −3x1 − 5x2 ≤ −15
              4x1 + 9x2 ≤ 36
              x1, x2 ≥ 0
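A quick numerical check of this example (a sketch using scipy.optimize.linprog, which is not part of the slides; linprog minimizes, so the objective is negated):

# Check of the example: maximize 2*x1 - 4*x2 by minimizing its negative.
from scipy.optimize import linprog

res = linprog(c=[-2, 4],
              A_ub=[[-3, -5], [4, 9]],
              b_ub=[-15, 36],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)        # optimal vertex and optimal value of z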
EXAMPLE (bounded FR - infinitely many solutions)

       min z = −2x1 − x2/2
       s.t.  −6x1 − 5x2 ≥ −30
             −4x1 − x2 ≥ −12
              x1, x2 ≥ 0
EXAMPLE (unbounded FR - unique solution)

       min z = 2x1 − 3x2
       s.t.  −x1 + x2 ≤ 0
              x2 ≤ 5
              x1, x2 ≥ 0
EXAMPLE (unbounded FR - infinitely many solutions)

       min z = 2x1 − 2x2
       s.t.  −x1 + x2 ≤ 0
              x2 ≤ 5
              x1, x2 ≥ 0
EXAMPLE (unbounded FR - unbounded objective function)

       max z = 3x1 + 2x2
       s.t.  −3x1 + 2x2 ≤ 6
             −x1 + x2 ≤ 4
              x1, x2 ≥ 0
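The same tool can also flag this last case; in scipy.optimize.linprog an unbounded problem is reported through the result status (a side check, not part of the slides):

# In scipy.optimize.linprog an unbounded problem is reported via res.status == 3.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],                      # maximize 3*x1 + 2*x2
              A_ub=[[-3, 2], [-1, 1]],
              b_ub=[6, 4],
              bounds=[(0, None), (0, None)])
print(res.status, res.message)                 # status 3 -> unbounded objective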
Theorem (SIMPLEX - bounded case)
Consider the problem

       minimize   c′x
       subject to Ax ≥ b,
                  x ≥ 0,

where A ∈ M_{m,n}, x, c ∈ ℝⁿ and b ∈ ℝᵐ.
If the problem admits an optimal solution, then, among the optimal vectors, there is at least one vertex of the polyhedron defined by the inequalities Ax ≥ b and x ≥ 0.
⇓
The polyhedral FR, if not empty, contains an INFINITE number of points but a FINITE number of vertices. Therefore, to solve a linear programming problem it is sufficient to evaluate the objective function at each vertex of the feasible region.
Proof: see the Italian slides.
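As a rough illustration of this remark (not an efficient algorithm, and only for small instances), the sketch below enumerates every candidate vertex of {x : Ax ≥ b, x ≥ 0} by intersecting the constraint hyperplanes n at a time, keeps the feasible ones, and picks the best objective value; the data at the bottom are made up.

# Brute-force LP "solver": evaluate c'x at every vertex of {x : Ax >= b, x >= 0}.
# Purely illustrative; the number of candidate vertices is exponential, and an
# unbounded problem is not detected.
from itertools import combinations
import numpy as np

def solve_by_vertex_enumeration(c, A, b):
    m, n = A.shape
    G = np.vstack([A, np.eye(n)])            # all constraints as rows of G x >= h
    h = np.concatenate([b, np.zeros(n)])
    best_x, best_val = None, np.inf
    for rows in combinations(range(m + n), n):
        Gs, hs = G[list(rows)], h[list(rows)]
        if np.linalg.matrix_rank(Gs) < n:
            continue                         # hyperplanes do not meet in a point
        x = np.linalg.solve(Gs, hs)
        if np.all(G @ x >= h - 1e-9):        # keep only feasible intersections
            val = c @ x
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# made-up data, just to exercise the function
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
print(solve_by_vertex_enumeration(c, A, b))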
Theorem (unbounded case)
Consider the problem

       minimize   c′x
       subject to Ax ≥ b,
                  x ≥ 0,

where A ∈ M_{m,n}, x, c ∈ ℝⁿ and b ∈ ℝᵐ.
If the problem admits an optimal solution, then, among the optimal vectors, there is at least one vertex of the polytope defined by the inequalities Ax ≥ b and x ≥ 0.
SIMPLEX METHOD
The simplex method is an iterative procedure which, at each step, finds (when possible) a basic feasible solution (vertex) whose objective value is lower than or equal to that of the previous basic solution for a min problem (greater than or equal, for a max problem).

Geometrically
The algorithm visits the vertices of the FR one by one, moving from one vertex to an adjacent one along the edges of the boundary of the FR.

NB If the objective function attains the same minimum (maximum) value at two or more vertices, then all convex linear combinations of such vertices are optimal too, that is, all the points lying on the edges connecting such vertices are optimal.
Duality Theory
To each linear programming problem another linear programming problem is associated, called the dual problem.

Primal (LP)

       maximize   f(x) = c′x
       subject to Ax ≤ b,
                  x ≥ 0;

Dual (DP)

       minimize   g(y) = y′b
       subject to y′A ≥ c′   (equivalently A′y ≥ c),
                  y ≥ 0.

Analogies / differences
• linear objective function, linear constraints, non-negativity constraints;
• same parameters: matrix A, vector b, vector c;
• n + m inequality constraints (including non-negativity).
LP

       max f(x1, x2) = x1 + x2
       s.t.   3x1 − x2 ≤ 5
             −2x1 − 2x2 ≥ −5
              x1, x2 ≥ 0

       ⇒      3x1 − x2 ≤ 5
              2x1 + 2x2 ≤ 5
              x1, x2 ≥ 0

c′ = (1, 1),   A = | 3  −1 |,   b = | 5 |
                   | 2   2 |        | 5 |
⇓
A′ = |  3   2 |
     | −1   2 |

DP

       min g(y1, y2) = 5y1 + 5y2
       s.t.   3y1 + 2y2 ≥ 1
             −y1 + 2y2 ≥ 1
              y1, y2 ≥ 0
see other examples
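As a quick numerical cross-check of this primal/dual pair (a sketch of mine using scipy.optimize.linprog), one can solve both problems and compare their optimal values, which should coincide:

# Cross-check of this primal/dual pair with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, -1.0],
              [2.0, 2.0]])
b = np.array([5.0, 5.0])
c = np.array([1.0, 1.0])

# Primal LP: max c'x, Ax <= b, x >= 0  ->  min -c'x
primal = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# Dual DP: min b'y, A'y >= c, y >= 0  ->  A'y >= c written as -A'y <= -c
dual = linprog(c=b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # the two optimal values coincide (strong duality)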
Relationship between primal and dual problems

LP (minimum problem)                      DP (maximum problem)
--------------------------------------------------------------------
n variables:    xj ≥ 0          ↔         j-th constraint of type ≤
                xj ≤ 0          ↔         j-th constraint of type ≥
                xj free         ↔         j-th constraint of type =      (n constraints)

m constraints:  i-th of type ≥  ↔         yi ≥ 0
                i-th of type ≤  ↔         yi ≤ 0
                i-th of type =  ↔         yi free                        (m variables)

number of decision variables of LP = number of constraints of DP
number of decision variables of DP = number of constraints of LP
The dual problem of the dual is the primal problem itself.
Let f(x), X and g(y), Y be the objective functions and the feasible regions of problems LP (max) and DP (min) respectively:

       X = {x ∈ ℝⁿ | Ax ≤ b, x ≥ 0},
       Y = {y ∈ ℝᵐ | y′A ≥ c′, y ≥ 0}.

For problem LP the following (mutually exclusive) situations can occur:
1. X = ∅: LP is infeasible; no optimal solution exists;
2. X ≠ ∅: LP is feasible, and two further cases can occur:
   2.1. f(x) is bounded above: there exists at least one optimal solution;
   2.2. f(x) is unbounded above: no optimal solution exists.
Analogously, for problem DP the following (mutually exclusive) cases can occur:
1. Y = ∅: DP is infeasible; no optimal solution exists;
2. Y ≠ ∅: DP is feasible, and two further cases can occur:
   2.1. g(y) is bounded below: there exists at least one optimal solution;
   2.2. g(y) is unbounded below: no optimal solution exists.
NB Only in linear programming:

       sup {f(x), x ∈ X} ∈ ℝ  ⇒  ∃ a global max in X
       inf {g(y), y ∈ Y} ∈ ℝ  ⇒  ∃ a global min in Y
Fundamental Theorems

Lemma (Weak Duality)
Let x ∈ X and y ∈ Y; then

       c′x ≤ y′b.

Proof
x ∈ X  ⇒  Ax ≤ b, x ≥ 0  ⇒  y′Ax ≤ y′b
y ∈ Y  ⇒  y′A ≥ c′, y ≥ 0  ⇒  y′Ax ≥ c′x
therefore
       y′b ≥ y′Ax ≥ c′x  ⇒  c′x ≤ y′b.

Corollary
If x̃ ∈ X and ỹ ∈ Y are such that c′x̃ = ỹ′b, then x̃ is an optimal solution of LP and ỹ is an optimal solution of DP.

Proof
c′x̃ = ỹ′b ≥ c′x for all x ∈ X, therefore x̃ is an optimal solution of LP (maximum point of c′x in X);
ỹ′b = c′x̃ ≤ y′b for all y ∈ Y, therefore ỹ is an optimal solution of DP (minimum point of y′b in Y).
Existence Theorem
LP has an optimal solution if and only if X ≠ ∅ and Y ≠ ∅.
Proof: via the Kuhn-Tucker conditions.

Strong Duality Theorem
x* ∈ X is an optimal solution of LP if and only if there exists y* ∈ Y such that

       c′x* = y*′b;

in such a case y* is an optimal solution of DP.
(The two objective functions share the same unit of measurement.)
If one of the two problems (LP) and (DP) has an optimal solution, then the other problem has an optimal solution too.
Complementarity Theorem
x* ∈ X and y* ∈ Y are optimal solutions of LP and DP, respectively, if and only if they satisfy the following equations:

       (y*′A − c′) x* = 0
and
       y*′ (Ax* − b) = 0.

Proof
"⇒" Let x* and y* be optimal solutions of LP and DP. Then, by the duality theorem,

       c′x* = y*′Ax* = y*′b,

hence
1. c′x* = y*′Ax*  ⇒  (c′ − y*′A) x* = 0;
2. y*′Ax* = y*′b  ⇒  y*′(Ax* − b) = 0.

"⇐" Let x* and y* be feasible points in X and Y such that

       (y*′A − c′) x* = 0
       y*′ (Ax* − b) = 0,

i.e.
       y*′Ax* − c′x* = 0
       y*′Ax* − y*′b = 0,

⇒  c′x* = y*′Ax* = y*′b, so by the corollary of weak duality x* and y* are optimal solutions of LP and DP.
Corollary
If c ≠ 0 and x* ∈ X is an optimal solution of LP, then x* is a boundary point of X.

Componentwise (with the primal P written as a minimum problem, min c′x s.t. Ax ≥ b, x ≥ 0, and DP as its dual maximum problem), the complementarity conditions read:

       ( Σi=1..m aij yi* − cj ) xj* = 0,   xj* ≥ 0,   j = 1, 2, . . . , n;
       yi* ( Σj=1..n aij xj* − bi ) = 0,   yi* ≥ 0,   i = 1, 2, . . . , m.

i) xj* ≥ 0, and xj* = 0 if Σi=1..m aij yi* < cj, j = 1, 2, . . . , n:
the j-th component of the optimal vector x* of P is obviously non-negative; it turns out to be NULL if the j-th CONSTRAINT of the dual problem is INACTIVE (holds as a strict inequality) when evaluated at the optimal vector y*.

ii) Σi=1..m aij yi* ≤ cj, and Σi=1..m aij yi* = cj if xj* > 0, j = 1, 2, . . . , n:
the j-th CONSTRAINT of the dual problem, evaluated at the optimal vector y*, is ACTIVE (satisfied as an equality) if the j-th component xj* of the optimal vector of P is strictly positive.

iii) yi* ≥ 0, and yi* = 0 if Σj=1..n aij xj* > bi, i = 1, 2, . . . , m:
the i-th component of the optimal vector y* of DP is obviously non-negative; it turns out to be NULL if the i-th CONSTRAINT of the primal problem is INACTIVE (holds as a strict inequality) when evaluated at the optimal vector x*.

iv) Σj=1..n aij xj* ≥ bi, and Σj=1..n aij xj* = bi if yi* > 0, i = 1, 2, . . . , m:
the i-th CONSTRAINT of the primal problem, evaluated at the optimal vector x*, is ACTIVE (satisfied as an equality) if the i-th component yi* of the optimal vector of DP is strictly positive.
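A small numerical illustration of conditions i)-iv) (a sketch of mine, with made-up data): solve a primal/dual pair in the min form above with scipy and check that the componentwise products vanish at the optimum.

# Numerical check of complementary slackness on a small pair:
#   P : min c'x  s.t. A x >= b, x >= 0       DP: max b'y  s.t. A'y <= c, y >= 0
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])            # made-up data
b = np.array([4.0, 6.0])
c = np.array([5.0, 4.0])

# Primal: rewrite A x >= b as -A x <= -b for linprog.
P = linprog(c=c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual: max b'y is min -b'y, with constraints A'y <= c.
D = linprog(c=-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

x, y = P.x, D.x
print((A.T @ y - c) * x)    # ( sum_i a_ij y_i* - c_j ) x_j*  ~ 0 for every j
print(y * (A @ x - b))      # y_i* ( sum_j a_ij x_j* - b_i )  ~ 0 for every i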
Alternative situations for problems LP and DP:
1. X = ∅ and Y = ∅: LP and DP are both infeasible; neither has an optimal solution;
2. X = ∅ and Y ≠ ∅: LP is infeasible and DP is unbounded; neither has an optimal solution;
3. X ≠ ∅ and Y = ∅: LP is unbounded and DP is infeasible; neither has an optimal solution;
4. X ≠ ∅ and Y ≠ ∅: LP and DP are both feasible; both have an optimal solution.
Observe that it cannot occur that both LP and DP are unbounded.
Geometric solution of linear programming
problems with several decision variables and
two constraints
• write the dual problem
• solve the dual problem
• find the primal solution through complementarity conditions
EXAMPLE
P: min v = 10y1 + 4y2 + 8y3
s.t.   y1 + y2 − y3 ≥ 2
       y1 − y2 + 2y3 ≥ 3
       y1, y2, y3 ≥ 0
3 decision variables, two constraints ⇒ dual
DP: max z = 2x1 + 3x2
s.t.   x1 + x2 ≤ 10
       x1 − x2 ≤ 4
      −x1 + 2x2 ≤ 8
       x1, x2 ≥ 0
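Carrying out these three steps numerically for this example (a sketch with NumPy/SciPy, whereas the slides intend the two-variable dual to be solved graphically): solve DP, detect which dual constraints are inactive, set the corresponding components of y to zero, and recover the remaining components from the active constraints of P.

# Sketch of the three-step procedure for this example, done numerically.
import numpy as np
from scipy.optimize import linprog

# P : min c'y  s.t. A y >= b, y >= 0      (3 variables, 2 constraints)
A = np.array([[1.0, 1.0, -1.0],
              [1.0, -1.0, 2.0]])
b = np.array([2.0, 3.0])
c = np.array([10.0, 4.0, 8.0])

# Steps 1-2: write and solve the dual DP: max b'x  s.t. A'x <= c, x >= 0.
dual = linprog(c=-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)
x = dual.x

# Step 3: complementarity. y_j* = 0 for every INACTIVE dual constraint; the
# primal constraints associated with x_i* > 0 must hold as equalities.
# (Assumes the resulting linear system is square and nondegenerate.)
active = np.where(np.isclose(A.T @ x, c, atol=1e-7))[0]
eq_rows = np.where(x > 1e-9)[0]
y = np.zeros(A.shape[1])
y[active] = np.linalg.solve(A[np.ix_(eq_rows, active)], b[eq_rows])

print("dual optimum   x* =", x, "  value =", -dual.fun)
print("primal optimum y* =", y, "  value =", c @ y)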