CS 372: Computational Geometry
Lecture 10
Linear Programming in Fixed Dimension
Antoine Vigneron
King Abdullah University of Science and Technology
November 7, 2012
1. Introduction
2. One dimensional case
3. Two-dimensional linear programming
4. Generalization to higher dimension
Outline
This lecture is on linear programming.
Special case: fixed dimension.
That is, the number of variables is d = O(1).
Even without this restriction, there are polynomial-time algorithms.
For any fixed dimension, we give a simple linear-time algorithm.
References:
Textbook Chapter 4.
Dave Mount’s lecture notes, lectures 8–10.
Example
A factory can make two types of products: X and Y.
A product of type X requires 10 hours of manpower, 4 ℓ of oil, and 5 m³ of storage.
A product of type Y requires 8 hours of manpower, 2 ℓ of oil, and 10 m³ of storage.
A product X can be sold for $200 and a product Y for $250.
You have 168 hours of manpower available, as well as 60 ℓ of oil and
150 m³ of storage.
How many products of each type should you make so as to maximize
their total price?
Formulation
x and y denote the number of products of type X and Y, respectively.
Maximize the price
f(x, y) = 200x + 250y
under the constraints
−x ≤ 0
−y ≤ 0
10x + 8y ≤ 168
4x + 2y ≤ 60
5x + 10y ≤ 150
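As a quick check (not part of the original slides), this small program can be solved numerically. The sketch below assumes SciPy is available; since scipy.optimize.linprog minimizes, the objective is negated.

    # A minimal sketch (ours, not from the slides): solve the factory example with SciPy.
    # linprog minimizes, so we minimize -f(x, y) = -200x - 250y.
    from scipy.optimize import linprog

    c = [-200, -250]                    # negated objective coefficients
    A_ub = [[10, 8], [4, 2], [5, 10]]   # manpower, oil and storage constraints
    b_ub = [168, 60, 150]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)              # expected: x = 8, y = 11, value 4350

This agrees with the optimum x = 8, y = 11 derived on the next slides.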
Geometric Interpretation
[Figure: the feasible region bounded by the lines 10x + 8y = 168, 4x + 2y = 60, and 5x + 10y = 150, together with a level line f(x, y) = constant.]
Geometric Interpretation
[Figure: the optimal (x, y) is the vertex of the feasible region where the level line f(x, y) = constant attains its maximum, at the intersection of 10x + 8y = 168 and 5x + 10y = 150.]
Solution
From the previous slide, the optimum is x = 8 and y = 11.
Luckily these are integers.
So it is the solution to our problem.
If we add the constraint that all variables are integers, we are doing
integer programming.
  We do not deal with it in CS 372.
  We consider only linear inequalities, no other constraints.
Our example was a special case where the linear program has an
integer solution, hence it is also a solution to the integer program.
Problem Statement
Maximize the objective function
f(x1, x2, . . . , xd) = c1x1 + c2x2 + · · · + cdxd
under the constraints
a1,1 x1 + · · · + a1,d xd ≤ b1
a2,1 x1 + · · · + a2,d xd ≤ b2
...
an,1 x1 + · · · + an,d xd ≤ bn
This is linear programming in dimension d.
Equivalently, the goal could be to minimize f .
Geometric Interpretation
Each constraint represents a half-space in R^d.
The intersection of these half-spaces forms the feasible region.
The feasible region is a convex polyhedron in R^d.
[Figure: the feasible region formed by intersecting half-plane constraints.]
Convex Polyhedra
Definition (Convex polyhedron)
A convex polyhedron is an intersection of half-spaces in R^d.
It may also be called a convex polytope.
A convex polyhedron is not necessarily bounded.
Special case: A convex polygon is a bounded convex polytope in R^2.
Convex Polyhedra in R^3
[Figure: a tetrahedron, a cube, and a cone.]
Faces of a convex polyhedron in R^3:
  Vertices, edges, and facets.
  Example: A cube has 8 vertices, 12 edges, and 6 facets.
Geometric Interpretation
Let c = (c1, c2, . . . , cd).
We want to find a point vopt of the feasible region such that c is an outer normal at vopt, if there is one.
[Figure: the feasible region, the optimal vertex vopt, and the objective direction c.]
Infeasible Linear Programs
The feasible region may be empty.
In this case there is no solution to the linear program.
The program is said to be infeasible.
We would like to detect when this is the case.
Unbounded Linear Programs
The feasible region may be unbounded in the direction of c.
In this case, we say that the linear program is unbounded.
Then we want to find a ray ρ in the feasible region along which f takes arbitrarily large values.
[Figure: an unbounded feasible region with a ray ρ pointing in the direction of c.]
Degenerate Cases
A linear program may have an infinite number of optimal solutions.
[Figure: an edge of the feasible region orthogonal to c; every point of this edge satisfies f(x, y) = opt.]
In this case, we report only one solution.
Background
Many optimization problems in engineering, operations research, and
economics are linear programs.
A practical algorithm: The simplex algorithm.
  Exponential time in the worst case.
There are polynomial-time algorithms.
  Interior point methods.
Integer programming is NP-hard.
Background
Computational geometry techniques give good algorithms in low
dimension.
Running time is O(n) when d is constant.
But exponential in d: Time O(3^(d^2) n).
This lecture: Seidel’s algorithm.
Simple, randomized.
Expected running time O(d!n).
This is O(n) when d = O(1).
In practice, very good for low dimension.
One dimensional case
Maximize the objective function
f(x) = cx
under the constraints
a1 x ≤ b1
a2 x ≤ b2
...
an x ≤ bn
Interpretation
If ai > 0 then constraint i corresponds to the interval (−∞, bi/ai].
If ai < 0 then constraint i corresponds to the interval [bi/ai, ∞).
The feasible region is an intersection of intervals.
So the feasible region is an interval.
Interpretation
[Figure: on the real line, a constraint with a1 > 0 gives an upper bound b1/a1 and a constraint with a2 < 0 gives a lower bound b2/a2; the feasible region is the interval [L, R].]
Algorithm
Case 1: ∃(i1 , i2 ) such that ai1 < 0 < ai2 .
Compute R = min{ bi/ai : ai > 0 }.
Compute L = max{ bi/ai : ai < 0 }.
It takes O(n) time.
If L > R then the program is infeasible.
Otherwise:
  If c > 0 then the solution is x = R.
  If c < 0 then the solution is x = L.
Algorithm
Case 2: ai > 0 for all i.
Compute R = min_i bi/ai.
If c > 0 then the solution is x = R .
If c < 0 then the program is unbounded and the ray (−∞, R] is a
solution.
Case 3: ai < 0 for all i.
Compute L = max_i bi/ai.
If c < 0 then the solution is x = L .
If c > 0 then the program is unbounded and the ray [L, ∞) is a
solution.
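The three cases can be handled by a single routine if we initialize R = +∞ and L = −∞. Below is a minimal sketch (our code, not from the slides); the names solve_1d, INFEASIBLE and UNBOUNDED are our own, and constraints with ai = 0, which the slides implicitly exclude, are handled explicitly.

    import math

    INFEASIBLE, UNBOUNDED = "infeasible", "unbounded"

    def solve_1d(constraints, c):
        """Maximize c*x subject to a*x <= b for every (a, b) in constraints."""
        L, R = -math.inf, math.inf            # feasible region is the interval [L, R]
        for a, b in constraints:
            if a > 0:
                R = min(R, b / a)             # the constraint reads x <= b/a
            elif a < 0:
                L = max(L, b / a)             # the constraint reads x >= b/a
            elif b < 0:                       # 0*x <= b with b < 0 is impossible
                return INFEASIBLE
        if L > R:
            return INFEASIBLE
        if c > 0:
            return R if R < math.inf else UNBOUNDED
        if c < 0:
            return L if L > -math.inf else UNBOUNDED
        # c == 0: every feasible point is optimal, return any one of them
        return R if math.isfinite(R) else (L if math.isfinite(L) else 0.0)

It runs in O(n) time, as stated above, and it will serve as the base case of the two-dimensional algorithm later in the lecture.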
Two-Dimensional Linear Programming
First approach:
Compute the feasible region.
  In O(n log n) time by divide and conquer + plane sweep.
  Other method: see later, lecture on duality.
The feasible region is a convex polyhedron.
Find an optimal point.
  Can be done in O(log n) time.
Overall, it is O(n log n) time.
This lecture: An expected O(n)-time algorithm.
Preliminary
We only consider bounded linear programs.
We make sure that our linear program is bounded by enforcing two
additional constraints m1 and m2 .
  Objective function: f(x, y) = c1x + c2y.
  Let M be a large number.
  If c1 > 0 then m1 is x ≤ M.
  If c1 ≤ 0 then m1 is x ≥ −M.
  If c2 > 0 then m2 is y ≤ M.
  If c2 ≤ 0 then m2 is y ≥ −M.
In practice, it often comes naturally.
For instance, in our first example, it is easy to see that M = 30 is
sufficient.
New constraints
[Figure: the bounding constraints m1 (x ≤ M) and m2 (y ≤ M) shown with the objective direction c.]
Notation
The i-th constraint is ai,1 x + ai,2 y ≤ bi.
It defines a half-plane hi.
Let ℓi denote the line that bounds hi.
[Figure: the half-plane hi and its bounding line ℓi.]
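For the code sketches in this lecture, a constraint can be stored as a plain triple (ai,1, ai,2, bi); this representation and the tolerance EPS below are our own conventions, not something from the slides.

    EPS = 1e-9   # tolerance for the point-in-half-plane test

    def contains(h, p):
        """True if p = (x, y) lies in the half-plane h = (a1, a2, b), i.e. a1*x + a2*y <= b."""
        a1, a2, b = h
        x, y = p
        return a1 * x + a2 * y <= b + EPS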
Algorithm
A randomized incremental algorithm.
We first compute a random permutation of the constraints
(h1 , h2 . . . hn ).
We denote Hi = {m1, m2, h1, h2, . . . , hi}.
We denote by vi a vertex of ∩Hi that maximizes the objective function.
  In other words, vi is a solution to the linear program where we only consider the first i constraints.
  v0 is simply the vertex of the boundary of m1 ∩ m2.
Idea: knowing vi−1 , we insert hi and find vi .
Example
[Figure: the starting vertex v0 at the corner of m1 and m2, a level line f(x, y) = constant, and the feasible region.]
Example
[Figure: after inserting h1, the optimum moves to v1, on the level line f(x, y) = f(v1).]
Example
[Figure: inserting h2 does not change the optimum: v2 = v1.]
Example
[Figure: inserting h3 cuts off the previous optimum, so it moves to a new vertex v3 on the line bounding h3.]
Example
[Figure: inserting h4 does not change the optimum: v4 = v3.]
Algorithm
Randomized incremental algorithm.
Before inserting hi , we only assume that we know vi−1 .
How can we find vi ?
First Case
First case: vi−1 ∈ hi .
[Figure: vi−1 already satisfies the new constraint hi.]
Then vi = vi−1 .
Proof?
Second Case
Second case: vi−1 ∉ hi.
[Figure: vi−1 lies outside the new half-plane hi, whose bounding line ℓi cuts the feasible region of Hi−1.]
Then vi−1 is not in the feasible region of Hi .
So vi ≠ vi−1.
What do we know about vi ?
Second Case
There exists an optimal solution vi ∈ ℓi.
[Figure: the new optimum vi lies on the line ℓi bounding hi, on the boundary of the feasible region of Hi−1.]
Proof?
How can we find vi ?
Second Case
Assume that ai,2 ≠ 0; then the equation of ℓi is
y = (bi − ai,1 x) / ai,2.
We replace y with (bi − ai,1 x)/ai,2 in all the constraints of Hi and in the objective function.
We obtain a one-dimensional linear program, with variable x.
If it is feasible, its solution gives us the x-coordinate of vi.
We obtain the y-coordinate using the equation of ℓi.
If this linear program is infeasible, then the original 2D linear program
is infeasible too and we are done.
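Putting the pieces together, the sketch below implements the whole two-dimensional algorithm (our code, not from the slides). It reuses the hypothetical solve_1d, INFEASIBLE and UNBOUNDED from the one-dimensional sketch, the (a1, a2, b) constraint representation and contains test introduced with the notation, and it assumes the first two constraints passed in are the bounding constraints m1 and m2.

    import random

    def solve_2d(constraints, c, seed=None):
        """Maximize c[0]*x + c[1]*y subject to a1*x + a2*y <= b for each (a1, a2, b).
        constraints[0] and constraints[1] must be the bounding constraints m1, m2."""
        m1, m2, rest = constraints[0], constraints[1], list(constraints[2:])
        random.Random(seed).shuffle(rest)            # random insertion order
        v = vertex(m1, m2)                           # v0: corner of the bounding constraints
        inserted = [m1, m2]
        for h in rest:
            if not contains(h, v):                   # second case: current optimum violates h
                v = solve_on_line(inserted, c, h)    # new optimum lies on the line bounding h
                if v in (INFEASIBLE, UNBOUNDED):
                    return v
            inserted.append(h)                       # first case: keep v unchanged
        return v

    def vertex(h, g):
        """Intersection of the lines bounding the constraints h and g."""
        a1, a2, b = h
        d1, d2, e = g
        det = a1 * d2 - a2 * d1                      # nonzero if the lines are not parallel
        return ((b * d2 - a2 * e) / det, (a1 * e - b * d1) / det)

    def solve_on_line(inserted, c, h):
        """Optimize over the constraints in `inserted`, restricted to the line of constraint h."""
        a1, a2, b = h
        if abs(a2) < EPS:                            # vertical line x = b/a1: optimize over y
            x = b / a1
            cons = [(q2, e - q1 * x) for (q1, q2, e) in inserted]
            y = solve_1d(cons, c[1])
            return y if y in (INFEASIBLE, UNBOUNDED) else (x, y)
        # substitute y = (b - a1*x)/a2 into every constraint and into the objective
        cons = [(q1 - q2 * a1 / a2, e - q2 * b / a2) for (q1, q2, e) in inserted]
        x = solve_1d(cons, c[0] - c[1] * a1 / a2)
        return x if x in (INFEASIBLE, UNBOUNDED) else (x, (b - a1 * x) / a2)

On the factory example with M = 30, solve_2d([(1, 0, 30), (0, 1, 30), (10, 8, 168), (4, 2, 60), (5, 10, 150)], (200, 250)) should return (8, 11) up to floating-point error.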
Analysis
First case is done in O(1) time: Just check whether vi−1 ∈ hi .
Second case is done in O(i) time: One-dimensional linear program with i + 2 constraints.
So the algorithm runs in O(n^2) time.
  Is there a worst-case example where it runs in Ω(n^2) time?
What is the expected running time?
We need to know how often the second case happens.
We define the random variable Xi .
  Xi = 0 in the first case (vi = vi−1).
  Xi = 1 in the second case (vi ≠ vi−1).
Analysis
When Xi = 0 we spend O(1) time at the i-th step.
When Xi = 1 we spend O(i) time.
So the expected running time E[T(n)] is
E[T(n)] = O( Σ_{i=1..n} (1 + i · E[Xi]) ).
Note: E[Xi] is simply the probability that Xi = 1 in our case.
Analysis
We denote Ci = ∩Hi.
In other words, Ci is the feasible region at step i.
vi is adjacent to two edges of Ci; these edges correspond to two constraints h and h′.
[Figure: the vertex vi of Ci and its two adjacent edges, which lie on the constraints h and h′.]
If vi ≠ vi−1, then hi = h or hi = h′.
Analysis
What is the probability that hi = h or hi = h′?
We use backwards analysis.
We assume that Hi is fixed.
So hi is chosen uniformly at random in Hi \ {m1 , m2 }.
So the probability that hi = h or hi = h′ is at most 2/i.
  It could be 1/i or 0 if vi is defined by m1 and/or m2.
So E[Xi] ≤ 2/i.
So
E[T(n)] = O( Σ_{i=1..n} (1 + i · 2/i) ) = O(3n) = O(n).
Generalization to higher dimension
First attempt:
Each constraint is a half-space.
Can we compute their intersection and get the feasible region?
In R^3 it can be done in O(n log n) time. (Not covered in CS 372.)
In higher dimension, the feasible region has Ω(n^⌊d/2⌋) vertices in the worst case.
So computing the feasible region requires Ω(n^⌊d/2⌋) time.
Here, we will give an O(n) expected-time algorithm for finding one optimal point in the feasible region, when d = O(1).
Preliminary
A hyperplane in R^d has equation
α1 x1 + α2 x2 + · · · + αd xd = β,
where (α1, α2, . . . , αd) ∈ R^d \ {0}^d.
In general position, d hyperplanes intersect at one point.
Each constraint hi is a half-space, bounded by a hyperplane ∂hi. We assume general position in the sense that:
  Any d hyperplanes ∂hi1, . . . , ∂hid intersect at exactly one point.
  The intersection of any d + 1 such hyperplanes is empty.
  No such hyperplane is orthogonal to c.
Algorithm
We generalize the 2D algorithm.
We first find d constraints m1 , m2 , . . . md that make the linear
program bounded:
  If ci > 0 then mi is xi ≤ M.
  If ci < 0 then mi is xi ≥ −M.
We pick a random permutation (h1 , h2 , . . . hn ) of H.
Then Hi is {m1 , m2 , . . . md , h1 , h2 , . . . hi }.
We maintain vi , the solution to the linear program with constraints
Hi and objective function f .
v0 is the vertex of ∩_{i=1..d} ∂mi.
Algorithm
We assume d = O(1).
Inserting hi is done in the same way as in R^2:
If vi−1 ∈ hi then vi−1 = vi .
Otherwise, vi ∈ ∂hi.
  We find vi by solving a linear program with i + d constraints in R^(d−1).
    If this linear program is infeasible, then the original linear program is infeasible too, so we are done.
  It can be done in expected O(i) time. (By induction.)
Analysis
What is the probability that vi 6= vi−1 ?
  By our general position assumption, vi belongs to exactly d hyperplanes that bound constraints in Hi.
  The probability that vi ≠ vi−1 is the probability that one of these d constraints was inserted last.
  By backwards analysis, it is ≤ d/i.
So the expected running time of our algorithm is
E[T(n)] = O( Σ_{i=1..n} (1 + i · d/i) ) = O(dn) = O(n).
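The O(dn) bound above treats d as a constant and hides the cost of the recursive (d − 1)-dimensional calls. A rough accounting that tracks d explicitly (our sketch, consistent with the O(d!n) bound quoted in the conclusion) is:

    \[
      T_d(n) \;\le\; \sum_{i=1}^{n} \Big( O(d) \;+\; \frac{d}{i}\,\big( O(d\,i) + T_{d-1}(i) \big) \Big),
      \qquad T_1(n) = O(n),
    \]

since each violation forces us to substitute into the i + d constraints (O(d) arithmetic per constraint) and then solve a (d − 1)-dimensional program on them. Assuming T_{d−1}(i) = O((d−1)! · i) by induction, every summand is O(d + d^2 + d!), so T_d(n) = O((d^2 + d!) n) = O(d! n), which is O(n) for any fixed d.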
Conclusion
This algorithm can be made to handle unbounded linear programs
and degenerate cases.
A careful implementation of this algorithm runs in O(d!n) time.
So it is only useful in low dimension.
It can be generalized to other types of problems.
  See textbook: Smallest enclosing disk.
Sometimes we can linearize a problem and use a linear programming
algorithm.
  We will see such cases in homework or tutorial.
  (Example: Finding the enclosing annulus with minimum area.)