EXISTENCE OF OPTIMAL SOLUTIONS

Do we always have an optimal solution?
• Clearly, an LP can be unbounded (inf = −∞ or sup = +∞).
• If the feasible region is bounded, then we have a min/max
from Weierstrass.
• Not obvious: can we have a finite inf or sup that is not
attained (with unbounded feasible region)?
• No. We won’t prove it, though—surprisingly difficult.
Theorem. A linear program is either infeasible, or unbounded (inf
= −∞ or sup = +∞), or attains a min/max.
• The excluded case is a finite inf/sup that is not attained, which can
happen in nonlinear problems, e.g., min 𝑒^𝑥 over 𝑥 ∈ ℝ (inf = 0, never
attained).
38
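The excluded case (a finite infimum that is not a minimum) can be seen numerically for min 𝑒^𝑥; a small pure-Python sketch, with hand-picked sample points:

```python
import math

# f(x) = e^x on the real line: inf f = 0, but no x attains it.
xs = [0.0, -1.0, -10.0, -100.0]
values = [math.exp(x) for x in xs]

# The values decrease toward the infimum 0...
assert all(values[i] > values[i + 1] for i in range(len(values) - 1))
# ...but every value is strictly positive: the infimum is never attained.
assert all(v > 0 for v in values)
```

This cannot happen for a linear program, by the theorem above.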
CERTIFYING
OPTIMALITY
Duality in linear optimization.
PRIMAL PROBLEM:  min 𝑐ᵀ𝑥  s.t.  𝐴𝑥 = 𝑏,  𝑥 ≥ 0
DUAL PROBLEM:    max 𝑏ᵀ𝑦  s.t.  𝐴ᵀ𝑦 ≤ 𝑐
• Weak duality: the minimum of PRIMAL ≥ the maximum of DUAL
(whenever they exist). More generally, the value of every DUAL feasible
solution is a lower bound on the minimum of PRIMAL.
• Strong duality: suppose PRIMAL has a min. Then DUAL
automatically has a max, and min = max.
• The primal and dual optimal solutions are each other’s optimality
certificates.
39
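Both duality statements can be spot-checked on a tiny instance in pure Python; the specific 𝐴, 𝑏, 𝑐 below are hand-picked for illustration, not from the slides:

```python
# Tiny hand-picked instance:
#   PRIMAL: min c^T x  s.t. Ax = b, x >= 0,  with A = [1 1], b = [1], c = (1, 2)
#   DUAL:   max b^T y  s.t. A^T y <= c       (y is a single scalar here)
c = [1.0, 2.0]

def primal_value(x):
    assert abs(x[0] + x[1] - 1.0) < 1e-9 and min(x) >= 0  # Ax = b, x >= 0
    return c[0] * x[0] + c[1] * x[1]

def dual_value(y):
    assert y <= c[0] and y <= c[1]  # A^T y <= c, i.e. y <= 1 and y <= 2
    return 1.0 * y                  # b^T y with b = [1]

# Weak duality: any dual feasible y lower-bounds any primal feasible x.
assert dual_value(0.5) <= primal_value([0.3, 0.7])

# Strong duality: x* = (1, 0) and y* = 1 are feasible with equal values,
# so each certifies the other's optimality.
assert primal_value([1.0, 0.0]) == dual_value(1.0) == 1.0
```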
CERTIFYING
OPTIMALITY
Again, why do we care?
• This is what all algorithms are based on. (They essentially
have to be!)
• Generic optimization algorithm: start with a feasible solution.
Certify that it’s optimal; if not, find a better one, and recurse.
• Sounds familiar…? (Compare to steepest descent from
calculus.)
40
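The generic certify-or-improve loop can be sketched as steepest descent on a smooth convex function; the step size, tolerance, and test quadratic below are illustrative assumptions:

```python
def grad_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Generic improve-until-certified loop: a (near-)zero gradient is the
    local optimality certificate; otherwise -grad(x) gives a better point."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:            # certificate: grad ~ 0
            return x
        x = [xi - step * gi for xi, gi in zip(x, g)]  # improve and recurse
    return x

# f(x, y) = (x - 1)^2 + 2 (y + 3)^2 is convex with unique minimizer (1, -3).
grad_f = lambda p: [2 * (p[0] - 1), 4 * (p[1] + 3)]
x_star = grad_descent(grad_f, [0.0, 0.0])
assert abs(x_star[0] - 1) < 1e-6 and abs(x_star[1] + 3) < 1e-6
```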
WHAT IS A “SOLUTION”?
Local vs global optima
• A local optimum is only guaranteed to be best within a (possibly very
small) open neighborhood.
• We pretty much always want a global optimum.
41
CONVEXITY I.
Recall: a set 𝑆 is convex if for all 𝑥, 𝑦 ∈ 𝑆 and 𝜆 ∈ [0, 1] we have
𝜆𝑥 + (1 − 𝜆)𝑦 ∈ 𝑆.
[Figure: a convex set vs. a non-convex set]
42
CONVEXITY II.
Recall: a function 𝑓 is convex if
𝑓(𝜆𝑥 + (1 − 𝜆)𝑦) ≤ 𝜆𝑓(𝑥) + (1 − 𝜆)𝑓(𝑦) for all 𝑥, 𝑦 and 𝜆 ∈ [0, 1].
[Figure: the chord between (𝑥, 𝑓(𝑥)) and (𝑦, 𝑓(𝑦)) lies above the graph]
…equivalent: its epigraph is convex
43
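The chord inequality can be verified numerically at sample points; a small sketch (the functions and interval are hand-picked):

```python
import math

def chord_above_graph(f, x, y, samples=11):
    """Check the convexity inequality f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y)
    at evenly spaced values of lambda in [0, 1]."""
    for i in range(samples):
        lam = i / (samples - 1)
        point = lam * x + (1 - lam) * y
        chord = lam * f(x) + (1 - lam) * f(y)
        if f(point) > chord + 1e-12:   # graph pokes above the chord
            return False
    return True

assert chord_above_graph(math.exp, -2.0, 3.0)         # e^x is convex
assert not chord_above_graph(math.sin, 0.0, math.pi)  # sin is not convex there
```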
LOCAL MINIMA OF
CONVEX FUNCTIONS
Theorem. Every local minimum of a convex function over a convex
set is a global minimum.
There might be more than one minimizer, but the minimizers always
form a convex set.
Theorem. Assume that 𝑓: ℝⁿ → ℝ is differentiable at a point 𝑥∗.
1. If 𝑓 attains a local minimum at 𝑥∗, then 𝛻𝑓(𝑥∗) = 0.
2. If 𝑓 is convex and 𝛻𝑓(𝑥∗) = 0, then 𝑥∗ is a global minimum of 𝑓.
Proof.
1. Intuition: suppose gradient is not zero. Which way should we go
to see lower function values?
44
PROOFS, QUICKLY.
Theorem. Assume that 𝑓: ℝⁿ → ℝ is differentiable at a point 𝑥∗.
1. If 𝑓 attains a local minimum at 𝑥∗, then 𝛻𝑓(𝑥∗) = 0.
2. If 𝑓 is convex and 𝛻𝑓(𝑥∗) = 0, then 𝑥∗ is a global minimum of 𝑓.
Proof.
1. Taylor: 𝑓(𝑦) = 𝑓(𝑥∗) + 𝛻𝑓(𝑥∗)ᵀ(𝑦 − 𝑥∗) + 𝑜(‖𝑦 − 𝑥∗‖).
Move from 𝑥∗ in the direction of the negative gradient: if 𝛻𝑓(𝑥∗) ≠ 0,
the linear term dominates for 𝑦 close enough to 𝑥∗, giving strictly
smaller values and contradicting local minimality.
2. Follows immediately from the following characterization of
diff.able convex functions: a differentiable function 𝑓 is convex
on S if and only if
𝑓(𝑦) ≥ 𝑓(𝑥) + 𝛻𝑓(𝑥)ᵀ(𝑦 − 𝑥)   ∀𝑥, 𝑦 ∈ 𝑆
45
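Both parts of the theorem can be spot-checked on a hand-picked convex quadratic (a numeric sketch, not a proof):

```python
# Spot-check the first-order characterization
#   f(y) >= f(x) + grad f(x)^T (y - x)
# for the convex quadratic f(x1, x2) = x1^2 + 3 x2^2 (hand-picked example).

def f(p):
    return p[0] ** 2 + 3 * p[1] ** 2

def grad_f(p):
    return [2 * p[0], 6 * p[1]]

points = [(-2.0, 1.0), (0.0, 0.0), (3.0, -1.5), (0.5, 4.0)]
for x in points:
    g = grad_f(x)
    for y in points:
        linearization = f(x) + g[0] * (y[0] - x[0]) + g[1] * (y[1] - x[1])
        assert f(y) >= linearization - 1e-9  # tangent plane never overshoots

# Part 2: at x* with grad f(x*) = 0 the inequality reads f(y) >= f(x*),
# i.e. x* = (0, 0) is a global minimum.
assert grad_f([0.0, 0.0]) == [0, 0]
assert all(f(y) >= f([0.0, 0.0]) for y in points)
```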
THE TAXONOMY OF
OPTIMIZATION MODELS
A classification of (deterministic) optimization problems
• Convexity (being able to just locally optimize) is key!
[Diagram: problem classes arranged from “exotic” to “used everywhere”
along one axis, and from “we know how to solve efficiently” to “don’t
really know how to solve” along the other.]
• CONVEX (incl. LINEAR and SEMIDEFINITE): used everywhere; we know
how to solve efficiently (we have good local optimality certificates).
• NON-CONVEX, (MIXED) INTEGER, COMBINATORIAL: used everywhere, but
we don’t really know how to solve them, or they are known to be
impossibly hard.
• UNSTRUCTURED GLOBAL OPT.: exotic, and impossibly hard in general.
46
THE SIMPLEX METHOD
The most commonly used algorithm for the solution of linear
programs.
• The precise (linear algebraic) description of the method is fairly
involved, but the geometric idea is simple.
• Relies on the fact that if the feasible region has a corner and there
exists an optimal solution, then there is an optimal corner.
The simplex method (sketch):
• Start from a corner of the feasible region
• Find the adjacent corners; move to a better adjacent corner if there
is one.
• If no adjacent corner is better than the current one, stop: the
current corner is an optimal solution.
The precise, linear algebraic version also automatically provides a
dual optimal solution (optimality certificate).
47
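The “optimal corner” fact suggests a brute-force baseline: enumerate every basis, keep the feasible corners, and return the best. This is not the simplex method (it visits all bases instead of walking between adjacent corners), but it shows why searching corners suffices. The instance below is a hand-picked example:

```python
from itertools import combinations

def solve_2x2(M, rhs):
    """Cramer's rule for a 2x2 system; None if the basis matrix is singular."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if abs(det) < 1e-12:
        return None
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

def best_corner(A, b, c):
    """min c^T x s.t. Ax = b, x >= 0 (A is 2 x n), enumerating all bases."""
    n = len(c)
    best = None
    for basis in combinations(range(n), 2):
        B = [[A[i][j] for j in basis] for i in range(2)]
        xB = solve_2x2(B, b)
        if xB is None or min(xB) < -1e-9:
            continue  # singular basis or infeasible corner
        x = [0.0] * n
        for j, v in zip(basis, xB):
            x[j] = v
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or val < best[0]:
            best = (val, x)
    return best

# max x1 + 2 x2  s.t. x1 + x2 <= 4, x2 <= 3, x >= 0, in standard form
# with slacks s1, s2:  min -x1 - 2 x2  s.t. Ax = b, x >= 0.
A = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
b = [4, 3]
c = [-1, -2, 0, 0]
val, x = best_corner(A, b, c)
assert abs(val + 7) < 1e-9 and x[:2] == [1.0, 3.0]  # optimal corner x = (1, 3)
```

With n variables and m equations this checks all C(n, m) bases, which grows combinatorially; the simplex method avoids this by moving only between adjacent corners with improving objective.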