
Lecture 0: 01/01/2012
Econ 6100
Shadow Prices for Linear Programs
Lecturer: Larry Blume
Scribes: L. Blume

1 Introduction
These notes tie down the relationship between solutions to the primal and dual LPs:

P(b):   v_P(b) = max c · x   subject to   Ax ≤ b,  x ≥ 0
D(c):   v_D(c) = min y · b   subject to   yA ≥ c,  y ≥ 0
where the primal has n variables and m constraints. In class we have asserted that if x* solves P(b) and y* solves D(c), then y* ∈ ∂v_P(b) and x* ∈ ∂v_D(c). This statement presumes that these differentials exist, something we have yet to prove.
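Before turning to the theory, the primal/dual pair can be made concrete numerically. The sketch below is an illustration of my own (the data A, b, c are not from the notes): it solves a small P(b) and D(c) with scipy.optimize.linprog and confirms that the two optimal values coincide.

```python
# Illustrative example (my data, not the notes'): solve P(b) and D(c)
# numerically and check that their optimal values agree.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])  # m = 3 constraints
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])                            # n = 2 variables

# P(b): max c.x s.t. Ax <= b, x >= 0.  linprog minimizes, so negate c.
primal = linprog(-c, A_ub=A, b_ub=b)
# D(c): min y.b s.t. yA >= c, y >= 0.  Rewrite yA >= c as -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c)

v_P = -primal.fun
v_D = dual.fun
print(v_P, v_D)  # both 36.0, the common optimal value
```

For this data the optimal primal solution is x* = (2, 6) and the optimal dual solution is y* = (0, 3/2, 1), so that y* · b = 18 + 18 = 36 = c · x*.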
Denote by dom v_P the set of vectors b ∈ R^m such that the feasible set of P(b) is not empty, and define dom v_D similarly. These sets are closed and convex. The value v_P(b) > −∞ if and only if b ∈ dom v_P, and v_D(c) < +∞ if and only if c ∈ dom v_D.
Theorem 1. v_P(b) is concave, and v_D(c) is convex.
I will prove the first claim. The proof of the second is similar.
Proof. The value function is clearly increasing and linearly homogeneous. If (b′, v′) and (b″, v″) are in the subgraph of v_P, then b′ and b″ are in dom v_P, and there are feasible x′ and x″ such that v′ ≤ c · x′ and v″ ≤ c · x″. For any 0 ≤ λ ≤ 1, λx′ + (1 − λ)x″ is feasible for λb′ + (1 − λ)b″, so the λ-combination of (b′, v′) and (b″, v″) is also in the subgraph.
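Theorem 1 can be spot-checked numerically. The sketch below (again with example data of my choosing) samples random right-hand sides b′, b″ ≥ 0 and verifies the concavity inequality for v_P.

```python
# Spot-check of Theorem 1 (example data is mine): v_P(lam*b1 + (1-lam)*b2)
# should dominate lam*v_P(b1) + (1-lam)*v_P(b2) for every lam in [0, 1].
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
c = np.array([3.0, 5.0])

def v_P(b):
    """Value of max c.x s.t. Ax <= b, x >= 0 (finite for every b >= 0 here)."""
    return -linprog(-c, A_ub=A, b_ub=b).fun

rng = np.random.default_rng(0)
violations = 0
for _ in range(20):
    b1, b2 = rng.uniform(0, 10, 3), rng.uniform(0, 10, 3)
    lam = rng.uniform()
    lhs = v_P(lam * b1 + (1 - lam) * b2)
    rhs = lam * v_P(b1) + (1 - lam) * v_P(b2)
    if lhs < rhs - 1e-7:
        violations += 1
print(violations)  # 0: concavity holds on every sampled pair
```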
2 Feasible Sets
It is possible to have linear programs with feasible sets which, when they are not empty,
never admit a finite value. We want to characterize these programs. The central result is
that if a linear program is unbounded for some b, then it is unbounded for all b for which
the feasible set is not empty (and similarly for the dual).
Theorem 2. If for some b ∈ dom v_P, v_P(b) = +∞, then v_P(b) = +∞ for all b ∈ dom v_P. If for some c ∈ dom v_D, v_D(c) = −∞, then v_D(c) = −∞ for all c ∈ dom v_D.
We prove the theorem here for the primal; the proof for the dual proceeds the same way. The proof is in two steps. First, if the program with b = 0 is unbounded, then the program is unbounded for every b ∈ dom v_P. Second, if the program is unbounded for any b in dom v_P, then it is unbounded for b = 0. Here is the first step. (The corresponding statement for the dual follows from the duality theorem.)
Lemma 3. v_P(0) is either 0 or +∞, and v_P(b) = +∞ for all b ∈ dom v_P if and only if v_P(0) = +∞.
Proof. The vector 0 is always feasible when b = 0, so v_P(0) ≥ 0. If v_P(0) > 0, then there is a vector x ∈ R^n_+ such that Ax ≤ 0 and c · x > 0. The same is true of αx for all α > 0, and so v_P(0) = +∞. This proves the first claim.
The only-if direction of the second claim is trivial since, as we have already argued, 0 ∈ dom v_P. To prove the if direction, observe that if x is feasible for P(b) and x_n is feasible for P(0), then x + x_n is feasible for P(b), and so v_P(b) ≥ c · x + c · x_n. If v_P(0) = +∞, then for each n there is an x_n feasible for P(0) such that c · x_n ≥ n. Consequently, v_P(b) = +∞.
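The lemma can be illustrated with a solver (the one-constraint program below is my own example, not the notes'): a program that is unbounded at some b is reported unbounded at b = 0 as well, because the unbounded direction satisfies Ax ≤ 0.

```python
# Illustration of Lemma 3 (my example): max x1 + x2 s.t. x1 - x2 <= b, x >= 0.
# The direction (1, 1) satisfies A(1,1) = 0 and c.(1,1) > 0, so the program
# is unbounded both at b = 1 and at b = 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0]])  # single constraint: x1 - x2 <= b
c = np.array([1.0, 1.0])     # objective to maximize

status_b = linprog(-c, A_ub=A, b_ub=[1.0]).status  # P(b) with b = 1
status_0 = linprog(-c, A_ub=A, b_ub=[0.0]).status  # P(0)
print(status_b, status_0)  # 3 3: scipy's status code for "unbounded"
```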
The second step proceeds by showing that the constraint set {x : Ax ≤ b} consists of translates of the set {x : Ax ≤ 0}.
Lemma 4. If the set {x : Ax ≤ b, x ≥ 0} is non-empty, then it equals the set sum P + Q, where P is a bounded convex set and Q = {y : Ay ≤ 0, y ≥ 0}.
Proof. For convenience, we suppose that the inequalities −x ≤ 0 are among those represented by A and b, so we do not have to carry the nonnegativity constraints separately. With this convention, our constraint set is C(b) = {x : Ax ≤ b} ⊂ R^n. We map this set to the cone K(b) = {(y, t) ∈ R^n × R : Ay − tb ≤ 0, t ≥ 0}. Clearly this is a polyhedral cone in the half-space {(y, t) : t ≥ 0}. This set-to-cone correspondence is one-to-one, since C(b) = {x : (x, 1) ∈ K(b)}.
Since K(b) is polyhedral, it is generated by a finite number of vectors (y_1, t_1), ..., (y_r, t_r). We can rescale these vectors so that if t_i > 0, then t_i = 1. Then without loss of generality we can suppose that the first p generators are of the form (w_i, 1) and the remaining q generators are of the form (z_j, 0). If p = 0, then C(b) is empty, so p ≥ 1. By taking (y_r, t_r) = (0, 0) if necessary, we can without loss of generality assume that q ≥ 1. The set of all vectors in K(b) with last coordinate 1 is the set of all vectors (y, 1) = ∑_{i=1}^p α_i (w_i, 1) + ∑_{j=1}^q β_j (z_j, 0), where the α_i and β_j are all non-negative and ∑_{i=1}^p α_i = 1.
Thus take P to be the set of convex combinations of the w_i, and take Q to be the set of non-negative linear combinations of the z_j; P is clearly bounded, and Q is clearly a cone. Finally, if (z, 0) ∈ K(b), then by definition Az = Az − 0b ≤ 0, proving the characterization of Q.
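One consequence of the decomposition, checked below on a toy example of my own, is that adding any non-negative multiple of a vector in Q to a point of C(b) never leaves C(b).

```python
# Consequence of Lemma 4 (my toy data): if x is in C(b) = {x : Ax <= b, x >= 0}
# and z is in Q = {z : Az <= 0, z >= 0}, then x + alpha*z stays in C(b)
# for every alpha >= 0.
import numpy as np

A = np.array([[1.0, -1.0]])
b = np.array([1.0])
x = np.array([0.5, 0.0])  # in C(b): A @ x = 0.5 <= 1
z = np.array([1.0, 1.0])  # in Q:    A @ z = 0.0 <= 0

ok = all(
    np.all(A @ (x + alpha * z) <= b) and np.all(x + alpha * z >= 0)
    for alpha in [0.0, 1.0, 10.0, 1e6]
)
print(ok)  # True: the ray x + alpha*z never leaves C(b)
```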
Proof of Theorem 2. According to lemma 3, it suffices to show that if v_P(b) = +∞ for some b, then v_P(0) = +∞. This must be true, since C(b) is the sum of vectors in a bounded set and C(0).
Suppose that for some b, v_P(b) = +∞. Then for each n there exists an x_n ≥ 0 such that Ax_n ≤ b and c · x_n ≥ n. By lemma 4, each x_n = y_n + z_n, where y_n ∈ P and z_n ∈ Q = C(0). Since P is bounded, there is a γ < +∞ such that c · y ≤ γ for all y ∈ P. Therefore c · z_n ≥ n − γ ↑ +∞, so v_P(0) = +∞.
The content of theorem 2 is that either v_P(b) = +∞ at every vector b ∈ dom v_P, or v_P(b) never equals +∞. Our main theorem is concerned with the latter case.
3 Super/Sub-gradients and Super/Sub-differentials
Definition 5. The superdifferential of a concave function f on R^n at x ∈ dom f is the set ∂f(x) = {s : f(y) ≤ f(x) + s · (y − x) for all y ∈ R^n}. The subdifferential of a convex function g on R^n at x ∈ dom g is the set ∂g(x) = {s : g(y) ≥ g(x) + s · (y − x) for all y ∈ R^n}.
The inequalities in the definition are called the super- and subgradient inequalities, for the concave and convex case, respectively.
Theorem 6. If either v_P(b) or v_D(c) is finite, then v_P(b) = v_D(c), programs P(b) and D(c) have optimal solutions x* and y*, respectively, and for any optimal solutions x* and y*, x* ∈ ∂v_D(c) and y* ∈ ∂v_P(b).
The only new part is the statement about sub- and superdifferentials. Since by hypothesis (and the duality theorem) v_P(b) < +∞, theorem 2 implies that v_P(b) < +∞ for all b ∈ dom v_P, so the supergradient exists.¹
Proof of Theorem 6. The value functions for the primal and dual are linearly homogeneous as well as concave/convex. In this case, the super- and subgradient inequalities take on a special form.
¹ The supergradient inequality cannot be satisfied if v_P(b) = +∞.
Lemma 7. (i) y ∈ ∂v_P(b) iff y · b′ ≥ v_P(b′) for all b′ ∈ R^m, with equality at b′ = b.
(ii) x ∈ ∂v_D(c) iff x · c′ ≤ v_D(c′) for all c′ ∈ R^n, with equality at c′ = c.
Proof. We only prove (i). This follows from the linear homogeneity of v_P. If y · b′ < v_P(b′) for some b′, then for large enough α, y · (αb′ − b) < v_P(αb′) − v_P(b), which violates the supergradient inequality. This proves the first claim. Taking b′ = 0 in the supergradient inequality gives v_P(b) − y · b ≥ v_P(0) = 0, so v_P(b) ≥ y · b; combined with the first claim at b′ = b, this gives equality, which proves the second claim.
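Lemma 7(i) can also be checked numerically on an example of my own: the optimal dual solution y* majorizes v_P everywhere and touches it at b.

```python
# Check of Lemma 7(i) (example data is mine): the optimal dual y* satisfies
# y*.b' >= v_P(b') for sampled b', with equality at b' = b.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

def v_P(bb):
    return -linprog(-c, A_ub=A, b_ub=bb).fun

y_star = linprog(b, A_ub=-A.T, b_ub=-c).x  # optimal dual solution

rng = np.random.default_rng(1)
gaps = [y_star @ bb - v_P(bb) for bb in rng.uniform(0, 10, size=(20, 3))]
print(min(gaps) >= -1e-7)                 # True: y*.b' >= v_P(b') on every sample
print(abs(y_star @ b - v_P(b)) < 1e-7)    # True: equality at b' = b
```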
Lemma 7 breaks the supergradient inequality into two pieces. The next lemma shows that a vector y is feasible for the dual if and only if it satisfies the first piece. The lemma after that shows that y is optimal if and only if, in addition, it satisfies the second piece.
Lemma 8. y ∈ R^m is feasible for the dual iff y · b′ ≥ v_P(b′) for all b′ ∈ R^m.
Proof. Suppose first that y · b′ ≥ v_P(b′) for all b′ ∈ R^m. For any b′ ≥ 0, y · b′ ≥ v_P(b′) ≥ v_P(0) = 0, so y ≥ 0.² For any x ≥ 0, y · Ax ≥ v_P(Ax) ≥ c · x. (The last step follows because x ≥ 0 is feasible for the problem with constraint Ax′ ≤ b when b = Ax.) Thus yA ≥ c.
Conversely, suppose that y ≥ 0 is feasible for the dual, that is, yA ≥ c. Consider the primal problem with the constraint Ax ≤ b′. Changing b in the primal has no effect on the set of feasible dual solutions, so
v_P(b′) = sup{c · x : Ax ≤ b′ and x ≥ 0}
= inf{z · b′ : zA ≥ c and z ≥ 0}
≤ y · b′.
The equality of sup and inf, of course, is the duality theorem, and the final inequality holds because y is feasible for the infimum problem.
Lemma 9. If y is feasible for the dual, then y is optimal if and only if y · b = v_P(b).
Proof. Suppose that y is feasible for the dual. Then duality implies that v_P(b) = v_D(c) ≤ y · b. Thus y is optimal for the dual iff v_P(b) = y · b.
The last two lemmas establish that y satisfies the version of the supergradient inequality provided by lemma 7 if and only if it is a solution to the dual problem. A similar proof shows that x satisfies the subgradient inequality for the dual value function if and only if it solves the primal problem.
² Recall that v_P(b) is nondecreasing in b.
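The shadow-price interpretation of Theorem 6 can be seen directly in numbers. In the sketch below (my example data again, and assuming v_P happens to be differentiable at this b), one-sided finite differences of the value function reproduce the optimal dual variables coordinate by coordinate.

```python
# Shadow prices numerically (my example data): each coordinate of the optimal
# dual y* matches the marginal value of relaxing the corresponding constraint.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

def v_P(bb):
    return -linprog(-c, A_ub=A, b_ub=bb).fun

y_star = linprog(b, A_ub=-A.T, b_ub=-c).x
delta = 1e-4
fd = np.array([(v_P(b + delta * e) - v_P(b)) / delta for e in np.eye(3)])
print(fd)  # approximately (0, 1.5, 1), matching y*
```

The slack constraint (the first) has shadow price zero, while the two binding constraints carry the positive shadow prices 3/2 and 1.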