Linear Programming
Projection Theory: Part 2
Chapter 2 (57-80)

University of Chicago
Booth School of Business
Kipp Martin
September 27, 2016
Outline
Duality Theory
Dirty Variables
From Here to Infinity
Complementary Slackness
Sensitivity Analysis
Degeneracy
Optimal Value Function
Summary: Key Results
Duality Theory
We are solving the linear programming problem using projection.

    min{cᵀx | Ax ≥ b}

Rewrite as the system:

    z0 − cᵀx ≥ 0
          Ax ≥ b

Project out the x variables and get

    z0 ≥ dk,    k = 1, . . . , q
     0 ≥ dk,    k = q + 1, . . . , r
Duality Theory
Since

    z0 ≥ dk,    k = 1, . . . , q
     0 ≥ dk,    k = q + 1, . . . , r

the optimal value of the objective function is

    z0 = max{dk | k = 1, . . . , q}

Key Idea: Any of the dk provides a lower bound on the optimal
objective function value, since the optimal value is the maximum
over all values.
Duality Theory
Observation: We projected out the x variables from the system

    z0 − cᵀx ≥ 0
          Ax ≥ b

and got

    z0 ≥ dk,    k = 1, . . . , q
     0 ≥ dk,    k = q + 1, . . . , r

If we let u be the dual multipliers on Ax ≥ b and u0 the multiplier
on z0 − cᵀx ≥ 0, we have (uᵏ)ᵀb = dk and

    (uᵏ)ᵀA − u0ᵏ cᵀ = 0

We take, without loss of generality, u0ᵏ = 1.
Duality Theory
Let me say this again because it is so important. Each of the

    z0 ≥ dk,    k = 1, . . . , q

constraints in the projection results from aggregating cᵀx with
Ax ≥ b, where uᵏ is a vector of multipliers on Ax ≥ b so that

    (uᵏ)ᵀA = cᵀ
Duality Theory
Thus any nonnegative uᵏ with Aᵀuᵏ = c provides a lower bound value of

    dk = bᵀuᵏ

on the optimal objective function value. This is known as weak
duality.

Lemma 2.28 (Weak Duality) If x is a solution to the system
Ax ≥ b and u ≥ 0 is a solution to Aᵀu = c, then cᵀx ≥ bᵀu.
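Weak duality is easy to check numerically. Below is a small sketch with data of my own (not the text's), using SciPy's `linprog`: any u ≥ 0 satisfying Aᵀu = c certifies a lower bound bᵀu on the primal value.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c^T x  s.t.  Ax >= b  (x free).
# linprog minimizes subject to A_ub @ x <= b_ub, so negate A and b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
c = np.array([2.0, 3.0])

# A dual-feasible (but not optimal) point: A^T u = c, u >= 0.
u = np.array([2.0, 3.0, 0.0])
assert np.allclose(A.T @ u, c) and (u >= 0).all()

primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
print(b @ u, "<=", primal.fun)  # 5.0 <= 7.0
```

Since this u is not dual optimal, the bound 5 is strict; the best lower bound (the optimal dual value) is 7.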
Duality Theory
Since the minimum value of the linear program is given by
z0 = max{dk | k = 1, . . . , q}, it is necessary to find a nonnegative u
such that Aᵀu = c and bᵀu is as large as possible.

Finding the largest value of bᵀu is also a linear program:

    max{bᵀu | Aᵀu = c, u ≥ 0}.

The linear program min{cᵀx | Ax ≥ b} is the primal linear
program.

The linear program max{bᵀu | Aᵀu = c, u ≥ 0} that generates
the multipliers for the lower bounds is the dual linear program.

The x variables are the primal variables and the u variables are the
dual variables or dual multipliers.
Duality Theory
Theorem 2.29 (Strong Duality) If there is an optimal solution to
the primal linear program

    min{cᵀx | Ax ≥ b},

then there is an optimal solution to the dual linear program
max{bᵀu | Aᵀu = c, u ≥ 0} and the optimal objective function
values are equal.

Corollary 2.30 If the primal linear program min{cᵀx | Ax ≥ b} is
unbounded, then the dual linear program max{bᵀu | Aᵀu = c, u ≥ 0}
is infeasible.
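A numerical illustration of Theorem 2.29 on a small instance of my own: solve both problems with SciPy (negating b in the dual because `linprog` minimizes) and compare optimal values.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
c = np.array([2.0, 3.0])

# Primal: min c^T x s.t. Ax >= b (x free).
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
# Dual: max b^T u s.t. A^T u = c, u >= 0 (minimize -b^T u).
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)

print(primal.fun, -dual.fun)  # 7.0 7.0 -- equal, as strong duality promises
```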
Duality Theory
If the primal problem is

    min  cᵀx
    s.t. Ax ≥ b

the dual problem is

    max  bᵀu
    s.t. Aᵀu = c
          u ≥ 0

What is the dual of the dual?
Duality Theory
If the primal is

    min  cᵀx
    s.t. Ax ≥ b
          x ≥ 0

What is the dual?

What is the dual of the dual?
Duality Theory
If the primal is

    min  cᵀx
    s.t. Ax = b
          x ≥ 0

What is the dual?

What is the dual of the dual?
Duality Theory
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
DO NOT MEMORIZE PRIMAL DUAL PAIRS!
You get the idea.
Duality Theory
Table: Primal-Dual Relationships

    Primal                             Dual
    inequality primal constraint       nonnegative dual variable
    equality primal constraint         unrestricted dual variable
    nonnegative primal variable        inequality dual constraint
    unrestricted primal variable       equality dual constraint
    min primal objective function      max dual objective function
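The table can be exercised numerically: for the primal min{cᵀx | Ax ≥ b, x ≥ 0}, reading down the table produces the dual max{bᵀu | Aᵀu ≤ c, u ≥ 0} (inequality primal rows give nonnegative dual variables; nonnegative primal variables give inequality dual rows). A sketch on made-up data:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([2.0, 3.0])

# Primal: min c^T x, Ax >= b, x >= 0.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual built from the table: max b^T u, A^T u <= c, u >= 0.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)  # equal optimal values (about 6.8 here)
```

Dualizing the dual with the same table rules returns the primal, which is the answer to the question on the earlier slides.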
Duality Theory
Proposition 2.31 If the linear program min{cᵀx | Ax ≥ b} has an
optimal solution, then applying projection to the system
z0 − cᵀx ≥ 0, Ax ≥ b gives all of the extreme points of the dual
polyhedron {u | Aᵀu = c, u ≥ 0} and all of the extreme rays of the
dual recession cone {u | Aᵀu = 0, u ≥ 0}.

You are going to basically prove this for homework.
Where We Are Headed
We want to solve problems with special structure! Real problems
have special structure! One such structure is

    min  cᵀx + fᵀy
    s.t. Ax + By ≥ b

Later we will take advantage of special structure in the A matrix
and project out the x variables and solve a problem in only the y
variables.
Where We Are Headed
    z0 − cᵀx − fᵀy ≥ 0
          Ax + By ≥ b

Rewrite this as

    z0 − cᵀx ≥ fᵀy
          Ax ≥ b − By

Now project out x:

    z0 ≥ fᵀy + (uᵏ)ᵀ(b − By),    k = 1, . . . , q
     0 ≥ (uᵏ)ᵀ(b − By),          k = q + 1, . . . , r
Where We Are Headed
Here is the new formulation
    z0 ≥ fᵀy + (uᵏ)ᵀ(b − By),    k = 1, . . . , q
     0 ≥ (uᵏ)ᵀ(b − By),          k = q + 1, . . . , r

Any problems with this?

What is the fix?

See the homework for a premonition.
Where We Are Headed
We will also use duality to generate bounds in enumeration
algorithms.
Dirty Variables (See pages 43-46 in the text.)
Consider the system

    x1 − x2 ≥ 1
    x1 + x2 ≥ 1

▶ Can we eliminate x1 using Fourier-Motzkin elimination?
▶ What is the projection of the polyhedron onto the x2 space?
Dirty Variables (See pages 43-46 in the text.)
Recall

    H+(k) := {i ∈ I | ak(i) > 0}
    H−(k) := {i ∈ I | ak(i) < 0}    (1)
    H0(k) := {i ∈ I | ak(i) = 0}

Variable k is a dirty variable if H+(k) is empty and H−(k) is not
empty, or H+(k) is not empty and H−(k) is empty.
Dirty Variables
In general, what is the projection of the system

    x1 + ai2 x2 + · · · + ain xn ≥ bi,    i = 1, . . . , m,

into ℝⁿ⁻¹?

What is the projection of the system

    x1 + ai2 x2 + · · · + ain xn ≥ bi,    i = 1, . . . , m1
         ai2 x2 + · · · + ain xn ≥ bi,    i = m1 + 1, . . . , m

into ℝⁿ⁻¹?
Dirty Variables
If, at some point, every constraint has the form

    x1 + ai2 x2 + · · · + ain xn ≥ bi,    i = 1, . . . , m,

Two implications:

▶ The projection into ℝⁿ⁻¹ is all of ℝⁿ⁻¹ (the entire subspace)
▶ The only solution to the corresponding uᵀA = 0 with u ≥ 0 is
  u = 0

The above is true if and only if there is a strictly positive
(negative) coefficient in every row at some point in the projection
process.
Dirty Variables
If there are a finite number of constraints, dirty variables are not too
bad: we just drop the constraints containing the dirty variables.

However, if there are an infinite number of constraints, dirty
variables hide dirty secrets and can be obscene.
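A minimal sketch of one Fourier-Motzkin step in the finite case (function name and data mine). When the chosen variable is dirty, there are no positive/negative pairs to combine, so its constraints are simply dropped:

```python
import numpy as np

def fm_eliminate(A, b, k):
    """Eliminate variable k from Ax >= b by Fourier-Motzkin:
    pair every row with a positive coefficient on x_k against
    every row with a negative one.  If x_k is dirty (one sign
    only), those rows are dropped -- valid when the number of
    constraints is finite."""
    pos = [i for i in range(len(b)) if A[i, k] > 0]
    neg = [i for i in range(len(b)) if A[i, k] < 0]
    zero = [i for i in range(len(b)) if A[i, k] == 0]
    rows, rhs = [], []
    for i in zero:
        rows.append(A[i]); rhs.append(b[i])
    for i in pos:
        for j in neg:
            # Scale so the coefficients on x_k are +1 and -1, then add.
            rows.append(A[i] / A[i, k] + A[j] / -A[j, k])
            rhs.append(b[i] / A[i, k] + b[j] / -A[j, k])
    A2 = np.array(rows) if rows else np.zeros((0, A.shape[1]))
    return A2, np.array(rhs)

# The earlier system: x1 - x2 >= 1, x1 + x2 >= 1.  x1 is dirty
# (both coefficients positive), so eliminating it drops both rows:
A = np.array([[1.0, -1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0])
A2, b2 = fm_eliminate(A, b, 0)
print(A2.shape[0])  # 0 constraints remain: the projection onto x2 is all of R
```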
From Here to Infinity
Theorem: If

    Γ = {x : a1(i)x1 + a2(i)x2 + · · · + an(i)xn ≥ b(i) for i ∈ I = {1, 2, . . . , m}}

then Fourier-Motzkin elimination gives

    P(Γ; x1) := {(x2, x3, . . . , xn) ∈ ℝⁿ⁻¹ : ∃ x1 ∈ ℝ s.t. (x1, x2, . . . , xn) ∈ Γ}    (2)

even when x1 is dirty.

What happens when I is not a finite index set? That is, when Γ is
a closed convex set, but not necessarily a polyhedron.
From Here to Infinity
From Here to Infinity
Consider the system

    ix1 + x2/i ≥ 1,    i = 1, . . . , N
            x1 ≥ −1                        (3)
            x2 ≥ −1

Project out x2 (it is a dirty variable) and get

    x1 ≥ −1
From Here to Infinity
The Road to Infinity
Now what about (note the ∞)

    ix1 + x2/i ≥ 1,    i = 1, . . . , ∞
            x1 ≥ −1                        (4)
            x2 ≥ 0

Project out x2 (it is a dirty variable) and get

    x1 ≥ −1

Is this correct?
From Here to Infinity
    ix1 + x2/i ≥ 1,    i = 1, . . . , ∞
            x1 ≥ −1
            x2 ≥ 0

[Figure: the constraints plotted in the (x1, x2) plane]

Typo: x1 > 0, not x1 ≥ 0.
From Here to Infinity
From Here to Infinity
Brief Review: Solve the finite linear programming problem using
projection.

    min{cᵀx | Ax ≥ b}

Rewrite as the system:

    z0 − cᵀx ≥ 0
          Ax ≥ b

Project out the x variables and get

    z0 ≥ dk,    k = 1, . . . , q
     0 ≥ dk,    k = q + 1, . . . , r

The optimal value of the objective function is

    v(LP) = v(LPD) = max{dk | k = 1, . . . , q}
From Here to Infinity
Fourier-Motzkin algorithm modification:

Do NOT drop dirty variables.

Pass over them and eliminate only the clean variables.

For a formal statement see pages 6-7 in:
http://www.ams.jhu.edu/~abasu9/papers/fm-paper.pdf
From Here to Infinity
From Here to Infinity
Consider

        0 ≥ b̃(h),                                 h ∈ I1
        ãℓ(h)xℓ + · · · + ãn(h)xn ≥ b̃(h),         h ∈ I2
        z ≥ b̃(h),                                 h ∈ I3    (5)
        z + ãℓ(h)xℓ + · · · + ãn(h)xn ≥ b̃(h),     h ∈ I4

Assume (z̄, x̄ℓ, . . . , x̄n) is a feasible solution to the projected
system above.
From Here to Infinity
Observation: For the feasible (z̄, x̄ℓ, . . . , x̄n), we must have

    z̄ ≥ sup{b̃(h) − ãℓ(h)x̄ℓ − · · · − ãn(h)x̄n : h ∈ I4}

and

    z̄ ≥ sup{b̃(h) : h ∈ I3}
From Here to Infinity
Lemma: If z̄ is the optimal solution value to the linear program,
then

    z̄ ≥ lim_{δ→∞} sup{b̃(h) − δ Σ_{k=ℓ}^{n} |ãk(h)| : h ∈ I4}

Proof: the key idea is that if (x̄ℓ, . . . , x̄n) is a feasible point in the
projected space, then

    x̂k = δ if x̄k > 0,    x̂k = −δ if x̄k < 0,    k = ℓ, . . . , n

where

    δ = max{|x̄k| : k = ℓ, . . . , n}

is a feasible point in the projected space.
From Here to Infinity
Theorem: If (SILP) is feasible, then the optimal solution value is

    v(SILP) = max{S, L}

where

    S = sup{b̃(h) : h ∈ I3}

and

    L = lim_{δ→∞} sup{b̃(h) − δ Σ_{k=ℓ}^{n} |ãk(h)| : h ∈ I4}

Corollary When the cardinality of I is finite, L = −∞ and
v(SILP) = S.
From Here to Infinity
Complementary Slackness
Motivation: Most algorithms work by moving from point to
point until a set of optimality conditions is satisfied.

Generic Algorithm:

Initialization: Start with a point that satisfies a subset of the
optimality conditions.

Iterative Step: Move to a better point.

Termination: Stop when you have satisfied (to numerical
tolerances) the optimality conditions.
Complementary Slackness
Theorem 2.33 If x is a feasible solution to the primal problem
min{cᵀx | Ax ≥ b} and u is a feasible solution to the dual
problem max{bᵀu | Aᵀu = c, u ≥ 0}, then x, u are primal-dual
optimal if and only if

    (b − Ax)ᵀu = 0.    (primal complementary slackness)

Corollary 2.34 If

    Ax ≥ b                 (6)
    Aᵀu = c,  u ≥ 0        (7)
    (b − Ax)ᵀu = 0         (8)

then x is optimal to the primal problem min{cᵀx | Ax ≥ b} and u
is optimal to the dual problem max{bᵀu | Aᵀu = c, u ≥ 0}.
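Corollary 2.34 can be checked numerically on a small instance of my own: solve the primal and the dual with SciPy, then verify that every positive multiplier sits on a binding constraint, so the complementarity product vanishes.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
c = np.array([2.0, 3.0])

# Primal optimal x and dual optimal u, solved separately.
x = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2).x
u = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3).x

# Condition (8): slack rows of Ax >= b carry zero multipliers.
print(np.round((b - A @ x) @ u, 8))  # 0.0
```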
Complementary Slackness
Condition (8) is

    (b − Ax)ᵀu = 0

What is the economic interpretation?

What is the interpretation in terms of projection?
Complementary Slackness
Theorem 2.35 (Strict Complementary Slackness) If there is an
optimal solution to the linear program

    min{cᵀx | (aⁱ)ᵀx ≥ bi, i = 1, . . . , m},

then there is an optimal primal-dual pair (x, u) such that

    (bi − (aⁱ)ᵀx) < 0 ⇒ ui = 0 and (bi − (aⁱ)ᵀx) = 0 ⇒ ui > 0,    i = 1, . . . , m.    (9)

Corollary 2.37 If x, u is an optimal primal-dual pair for the linear
program min{cᵀx | Ax ≥ b}, but is not strictly complementary,
then there are either alternative primal optima or alternative dual
optima.

This is not good. More on this later.
Complementary Slackness
The linear program is min{x1 + x2 | x1 + x2 ≥ 1, x1, x2 ≥ 0}. Solve
the linear program using projection.

    z0 − x1 − x2 ≥ 0    (E0)
         x1 + x2 ≥ 1    (E1)
         x1      ≥ 0    (E2)
              x2 ≥ 0    (E3)

    ⇒    z0 ≥ 1    (E0) + (E1)
         z0 ≥ 0    (E0) + (E2) + (E3)
Complementary Slackness
Proof Motivation:
Idea One: It is sufficient to show strict complementary slackness
must hold for the first constraint.
Why is showing the result for one constraint sufficient?
Idea Two: Apply Corollary 2.21 on page 52 of the text.
Complementary Slackness
Corollary (2.21)

If there is no solution to the system

    A1x > b1,    A2x ≥ b2

then there are nonnegative u1, u2 such that

    A1ᵀu1 + A2ᵀu2 = 0,    (b1)ᵀu1 + (b2)ᵀu2 ≥ 0,    u1 ≠ 0    (10)

has a solution, or

    A1ᵀu1 + A2ᵀu2 = 0,    (b1)ᵀu1 + (b2)ᵀu2 > 0              (11)

has a solution. Conversely, if there is no solution to either (10) or
(11), there is a solution to A1x > b1, A2x ≥ b2.
Complementary Slackness
The complementary slackness results of this section are stated in
terms of the symmetric primal-dual pair.

If

    Ax ≥ b,   x ≥ 0        (12)
    Aᵀu ≤ c,  u ≥ 0        (13)
    (b − Ax)ᵀu = 0         (14)
    (c − Aᵀu)ᵀx = 0        (15)

then x is optimal to the primal problem min{cᵀx | Ax ≥ b, x ≥ 0}
and u is optimal to the dual problem max{bᵀu | Aᵀu ≤ c, u ≥ 0}.
Complementary Slackness
Conditions (12) - (15) are called optimality conditions.

▶ Condition (12) – primal feasibility
▶ Condition (13) – dual feasibility
▶ Conditions (14)-(15) – complementary slackness

Simplex Algorithm: Enforce conditions (12), (14), and (15) and
iterate to satisfy (13), then stop.

Nonlinear Programming: Karush-Kuhn-Tucker conditions
generalize this to nonlinear programming.

Constructing a primal-dual pair with equal objective function
values is a very useful proof technique.
Sensitivity Analysis
People are interested not only in primal solutions, but also in dual
information.

What is the practical significance (or utility) of knowing the
optimal values of the dual variables?

Solvers not only give the optimal dual information, but they also
provide range analysis.

Let’s look at some actual solver output.
Sensitivity Analysis
Consider the linear program we have been working with

    MIN     2 X1 - 3 X2
    SUBJECT TO
        2)         X2 >=  2
        3)  0.5 X1 - X2 >= -8
        4) -0.5 X1 + X2 >= -3
        5)      X1 - X2 >= -6
    END
Sensitivity Analysis (LINDO/LINGO)
    OBJECTIVE FUNCTION VALUE

        1)    -22.00000

    VARIABLE    VALUE         REDUCED COST
    X1           4.000000      0.000000
    X2          10.000000      0.000000

    ROW    SLACK OR SURPLUS    DUAL PRICES
    2)      8.000000            0.000000
    3)      0.000000           -2.000000
    4)     11.000000            0.000000
    5)      0.000000           -1.000000
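The same LP can be re-solved with SciPy as a sanity check. This sketch assumes SciPy ≥ 1.7, whose HiGHS-based `linprog` exposes dual values through `res.ineqlin.marginals`:

```python
import numpy as np
from scipy.optimize import linprog

# min 2x1 - 3x2 subject to rows 2)-5) above, x >= 0.
A = np.array([[0.0, 1.0], [0.5, -1.0], [-0.5, 1.0], [1.0, -1.0]])
b = np.array([2.0, -8.0, -3.0, -6.0])
c = np.array([2.0, -3.0])

# linprog wants A_ub x <= b_ub, so negate the >= system.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2, method="highs")
print(res.fun)                # -22.0
print(res.x)                  # x1 = 4, x2 = 10
print(res.ineqlin.marginals)  # matches the DUAL PRICES column above
```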
Sensitivity Analysis (LINDO/LINGO)
Terminology: You will see

▶ Dual price
▶ Dual variable/value
▶ Shadow price
Sensitivity Analysis (LINDO/LINGO)
    RANGES IN WHICH THE BASIS IS UNCHANGED:

                       OBJ COEFFICIENT RANGES
    VARIABLE   CURRENT COEF   ALLOWABLE INCREASE   ALLOWABLE DECREASE
    X1          2.000000       1.000000             0.500000
    X2         -3.000000       1.000000             1.000000

                       RIGHTHAND SIDE RANGES
    ROW   CURRENT RHS   ALLOWABLE INCREASE   ALLOWABLE DECREASE
    2      2.000000      8.000000             INFINITY
    3     -8.000000      2.000000             INFINITY
    4     -3.000000     11.000000             INFINITY
    5     -6.000000     INFINITY              2.000000
Sensitivity Analysis (Excel Solver)
Sensitivity Analysis: Allowable Increase/Decrease
Given an optimal dual solution u, the allowable increase
(decrease) on the right hand side of the constraint

    (aᵏ)ᵀx ≥ bk

is the maximum increase (decrease) in the right hand side bk such
that u is still an optimal dual solution (or the primal becomes
infeasible).

The dual values and the allowable increase/decrease are a natural
by-product of projection.
Sensitivity Analysis
The result of projection on the linear program is

    z0 ≥ −22    (E0) + 2(E2) + (E4)              (16)
    z0 ≥ −24    (E0) + 3(E2) + 0.5(E5)           (17)
    z0 ≥ −44    (E0) + 4(E2) + 2(E3) + (E4)      (18)
    z0 ≥ −35    (E0) + 4(E2) + (E3) + 0.5(E5)    (19)
    z0 ≥ −30    (E0) + (E1) + 4(E2)              (20)
    z0 ≥ −32    (E0) + 4(E2) + (E6)              (21)
     0 ≥ −11    (E2) + (E3)                      (22)
Sensitivity Analysis
Questions:

▶ Why is the dual price on row (E2) -2?
▶ Why is the allowable increase on row (E1) 8?
▶ Why is the allowable decrease on row (E4) 2?
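These questions can also be probed empirically: hold everything else fixed, sweep one right hand side, and watch for the optimal dual solution to change. A sketch (the helper `duals` is mine) for the allowable increase of 8 on row (E1), assuming SciPy ≥ 1.7 for `res.ineqlin.marginals`:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.0, 1.0], [0.5, -1.0], [-0.5, 1.0], [1.0, -1.0]])
b = np.array([2.0, -8.0, -3.0, -6.0])
c = np.array([2.0, -3.0])

def duals(b1):
    """Optimal dual values with the RHS of row (E1) set to b1."""
    bb = b.copy(); bb[0] = b1
    res = linprog(c, A_ub=-A, b_ub=-bb, bounds=[(0, None)] * 2, method="highs")
    return np.round(res.ineqlin.marginals, 6)

print(duals(2.0), duals(9.9))  # identical: still inside the allowable increase
print(duals(10.5))             # past 2 + 8 = 10: a new optimal dual solution
```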
Sensitivity Analysis – 100% Rule
In calculating the allowable increase (decrease) for the linear
programming right hand sides we assume that only one right hand
side changes while the others are held constant.

What if we change more than one b?

If the sum of the percentage increases (decreases) of all the right
hand sides in the linear program min{cᵀx | Ax ≥ b} does not
exceed 100%, then the current optimal dual solution remains
optimal.

Proof: Homework!
Degeneracy
People are interested in knowing if there are alternative optima.
Why?
If an optimal solution is NOT strictly complementary, then we have
alternative primal or dual solutions. Why?
If an optimal solution is NOT strictly complementary, then the
range analysis may not be valid. Argle Bargle!
Can we tell from a solver output if we have primal or dual
alternative optima?
It depends. Degenerate solutions mess things up.
Degeneracy
Closely related to alternative dual optima is the concept of primal
degeneracy.
A solution x of the primal linear program min{cᵀx | Ax ≥ b},
where A is an m × n matrix, is primal degenerate if the submatrix of
A defined by the binding constraints ((aⁱ)ᵀx = bi, where aⁱ is row i
of A) has rank strictly less than the number of binding constraints.
Primal degeneracy can also lead to incorrect values of the
allowable increase/decrease. Argh!!!!
Some authors define primal degeneracy to be alternative dual
optima, but this is not correct.
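The definition translates directly into a rank test. A sketch (function name mine), applied to the point (3, 4) with constraints (E1)-(E4) from the example on the following slides, with the RHS of (E3) set to -4:

```python
import numpy as np

def is_degenerate(A, b, x, tol=1e-8):
    """A solution of Ax >= b is primal degenerate when the binding
    rows have rank strictly less than their number."""
    binding = np.abs(A @ x - b) < tol
    B = A[binding]
    return B.shape[0] > np.linalg.matrix_rank(B)

# Rows (E1)-(E4); at x = (3, 4) three constraints bind in R^2.
A = np.array([[1.0, -1.0], [-1.0, -1.0], [0.0, -1.0], [-2.0, 1.0]])
b = np.array([-3.0, -7.0, -4.0, -2.0])
x = np.array([3.0, 4.0])
print(is_degenerate(A, b, x))  # True: 3 binding rows, rank 2
```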
Degeneracy
Consider the linear program
    min            −x2          (E0)
    s.t.
           x1 − x2 ≥ −3         (E1)
          −x1 − x2 ≥ −7         (E2)
               −x2 ≥ −2         (E3)
         −2x1 + x2 ≥ −2         (E4)
           x1      ≥  0         (E5)
                x2 ≥  0         (E6)
Degeneracy
Applying projection to this linear program and eliminating the x
variables gives
    z0 ≥ −2     (E0) + (E3)
    z0 ≥ −5     (E0) + (1/2)(E1) + (1/2)(E2)
    z0 ≥ −8     (E0) + 2(E1) + (E4)
    z0 ≥ −7     (E0) + (E2) + (E5)
     0 ≥ −2     (E3) + (E6)
     0 ≥ −8     2(E1) + (E4) + (E6)
     0 ≥ −7     (E2) + (E5) + (E6)
     0 ≥ −10    (E1) + (E2) + 2(E6)
     0 ≥ −4     (E3) + (E4) + 2(E5)
     0 ≥ −4     (E1) + (E2) + 2(E4) + 2(E5)
     0 ≥ −10    2(E1) + 2(E4) + 2(E5)
     0 ≥ −9     (E2) + (E4) + 3(E5)
Degeneracy
Question: What is the allowable decrease on constraint (E3)?
Answer: ??????????
Let’s look at what we get from an LP code.
Degeneracy
The LINGO solution (RHS of E3 is -2)
Degeneracy
The LINGO Range analysis (RHS of E3 is -2)
Degeneracy
What is the allowable decrease on (E3) according to LINGO?
What is the allowable decrease on (E3) according to projection?
Argle bargle!!!!!!!!! What happened? Do we have strict
complementary slackness?
Let’s look at a picture. In the picture we have decreased the RHS
from -2 to -4; that is, the constraint is x2 ≤ 4.
Degeneracy
[Figure: constraints (E1)-(E4) plotted in the (x1, x2) plane, with the point (3.0, 4.0)]
Degeneracy
The LINGO solution (RHS of E3 is -4)
Degeneracy
The LINGO Range analysis (RHS of E3 is -4)
Degeneracy Range analysis (RHS of E3 is -4)

The primal system (tight constraints):

    −x1 − x2 = −7    (E2)
         −x2 = −4    (E3)
   −2x1 + x2 = −2    (E4)

The dual system:

    −u2 − 2u4 = 0
    −u2 − u3 + u4 = −1
    u2, u3, u4 ≥ 0

A unique dual solution (u2 = 0, u3 = 1, u4 = 0), but alternative
primal optima.
Degeneracy
The LINGO solution (RHS of E3 is -5)
Degeneracy
The LINGO Range analysis (RHS of E3 is -5)
Degeneracy Range analysis (RHS of E3 is -5)

The primal system (tight constraints):

     x1 − x2 = −3    (E1)
    −x1 − x2 = −7    (E2)
         −x2 = −5    (E3)

The dual system:

    u1 − u2 = 0
    −u1 − u2 − u3 = −1
    u1, u2, u3 ≥ 0

Substituting u1 = u2 gives −2u1 − u3 = −1: alternative dual
solutions, but a unique primal optimum (x1 = 2, x2 = 5).
Range Analysis Summary
[Figure: constraints (E1)-(E4) plotted in the (x1, x2) plane, with the point (3.0, 4.0)]
Range Analysis Summary
    Constraint    Solution            Degenerate    Alternate Primal    Alternate Dual
    −x2 ≥ −2      x1 = 0, x2 = 2      No            Yes                 No
    −x2 ≥ −3      x1 = 0, x2 = 3      Yes           Yes                 No
    −x2 ≥ −4      x1 = 3, x2 = 4      Yes           Yes                 No
    −x2 ≥ −5      x1 = 2, x2 = 5      Yes           No                  Yes
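The table can be reproduced by re-solving the LP for each right hand side of (E3). One caveat: the degeneracy flag below tests the vertex the solver happens to return, which need not be the one LINGO reported, so that column may differ from the table.

```python
import numpy as np
from scipy.optimize import linprog

# The example LP, min -x2, with (E5) and (E6) as explicit rows.
A = np.array([[1.0, -1.0], [-1.0, -1.0], [0.0, -1.0],
              [-2.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
c = np.array([0.0, -1.0])

results = []
for rhs3 in (-2.0, -3.0, -4.0, -5.0):
    b = np.array([-3.0, -7.0, rhs3, -2.0, 0.0, 0.0])
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2, method="highs")
    binding = np.abs(A @ res.x - b) < 1e-8
    degenerate = binding.sum() > np.linalg.matrix_rank(A[binding])
    results.append((rhs3, res.fun, degenerate))
    print(rhs3, np.round(res.x, 6), degenerate)
```

In every case the optimal value equals the RHS of (E3), since the objective is −x2 and (E3) is the binding constraint on x2.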
Range Analysis Summary
Here is something to think about while lying in bed at night.
Assume there is an optimal solution to the linear program such that

▶ the solution is not strictly complementary
▶ the solution is not degenerate

What can we conclude about alternative optimal primal and
alternative optimal dual solutions?

How can Simplex spot a degenerate solution?
Degeneracy – Key Take Aways
Here is the problem: Simplex codes calculate the allowable
increase/decrease as the amount by which a right hand side can
change before the set of positive primal variables and constraints
with positive slack changes.

Stated another way: Simplex codes calculate the allowable
increase/decrease as the amount by which a right hand side can
change before there is a change in the basis.

This may not actually equal the true allowable increase (i.e., the
amount before the values of the optimal dual variables change).

There may be several basis changes before the dual solution
changes.
Degeneracy – Key Take Aways
▶ If there is NOT strict complementary slackness we have
  alternative primal optima or alternative dual optima.
▶ If there is NOT strict complementary slackness, range analysis
  may be misleading.
▶ If there is primal degeneracy, knowing which kind of
  alternative optima (primal or dual) we have is difficult.
Optimal Value Function
What is the efficient frontier in finance?
Optimal Value Function
Increasing the right hand side bk of constraint k in the linear
program

    min{cᵀx | Ax ≥ b}

will result in infeasibility or a new optimal dual solution. If the
result is a new optimal dual solution, then the new dual solution
will have a strictly greater value for component k.

Hurting hurts more and more.
Optimal Value Function
If u is an optimal dual solution to min{cᵀx | Ax ≥ b} and θk is
the allowable increase (decrease) on the right hand side bk, then
increasing (decreasing) bk by more than θk will either result in an
infeasible primal or a new optimal dual solution û where
ûk > uk (ûk < uk).

Why does this follow from projection?
Optimal Value Function
Proposition 2.48: If the projection of the system
z0 − cᵀx ≥ 0, Ax ≥ b into the z0 variable space is not ℝ¹, and is not
null, then there exists a set of vectors u^1, . . . , u^q such that

    z0(b) = min{cᵀx | Ax ≥ b} = max{bᵀuᵏ | k = 1, . . . , q}

for all b such that {x | Ax ≥ b} ≠ ∅.

Corollary 2.49: The optimal value function
z0(b) = min{cᵀx | Ax ≥ b} is a piecewise linear convex function
over the domain for which the linear program is feasible.
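Corollary 2.49 can be observed numerically on the earlier sensitivity-analysis LP: sweep the right hand side of row 2) and check that the slopes of z0(b1) never decrease (hurting hurts more and more). The helper name is mine:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.0, 1.0], [0.5, -1.0], [-0.5, 1.0], [1.0, -1.0]])
b = np.array([2.0, -8.0, -3.0, -6.0])
c = np.array([2.0, -3.0])

def z0(b1):
    """Optimal value as a function of the RHS of row 2)."""
    bb = b.copy(); bb[0] = b1
    res = linprog(c, A_ub=-A, b_ub=-bb, bounds=[(0, None)] * 2, method="highs")
    return res.fun

vals = [z0(t) for t in (2.0, 10.0, 11.0, 12.0)]
print(vals)  # flat up to b1 = 10, then slope 1: piecewise linear and convex
```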
Key Results
▶ Projection
▶ Projection does not create or destroy solutions
▶ Projection yields dual multipliers
▶ Farkas’ Lemma – Theorems of the Alternative
▶ Weyl’s Theorem
▶ Solve a linear program with projection
▶ Weak and Strong Duality
▶ Complementary Slackness
▶ Sensitivity Analysis
▶ Optimal value function