Algorithm Design Techniques
Practice Final Exam 2: Solutions
1. The Simplex Algorithm.
(a) Take the LP

    max  x1 + 2x2
    s.t. 2x1 + x2 ≤ 3
         x1 − x2 ≤ 2
         x1, x2 ≥ 0

and write it in dictionary form.

    z  = x1 + 2x2
    x3 = 3 − 2x1 − x2
    x4 = 2 − x1 + x2
Pivot: add x1 to basis, remove x3.

    z   = (1/2)(3 − x3 − x2) + 2x2
    2x1 = 3 − x3 − x2
    x4  = 2 − (1/2)(3 − x3 − x2) + x2

That is,

    z  = 3/2 − (1/2)x3 + (3/2)x2
    x1 = 3/2 − (1/2)x3 − (1/2)x2
    x4 = 1/2 + (1/2)x3 + (3/2)x2
Pivot: add x2 to basis, remove x1.

    z  = 3/2 − (1/2)x3 + (3/2)(3 − x3 − 2x1)
    x2 = 3 − x3 − 2x1
    x4 = 1/2 + (1/2)x3 + (3/2)(3 − x3 − 2x1)

That is,

    z  = 6 − 2x3 − 3x1
    x2 = 3 − x3 − 2x1
    x4 = 5 − x3 − 3x1

Every coefficient in the z row is now negative, so this dictionary is optimal.
So the optimal solution is (x1 , x2 ) = (0, 3) with objective value z = x1 +
2x2 = 6.
(b) The dual is

    min  3y1 + 2y2
    s.t. 2y1 + y2 ≥ 1
         y1 − y2 ≥ 2
         y1, y2 ≥ 0
The top row of the final dictionary was
z = 6 − 3x1 − 2x3
We know that the dual variables y1 , y2 should be set equal to the coefficients
of the slack variables x3 , x4 . Thus (y1 , y2 ) = (2, 0).
Clearly this is dual feasible [you should verify this]. It has a dual value
of 3y1 + 2y2 = 6. So, by weak duality, our primal solution of value 6 from
part (a) shows that this dual solution must be optimal.
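The two pivots above can be reproduced mechanically. Below is a minimal sketch of dictionary-form simplex in Python; the function name and data layout are my own, and exact rationals (via `fractions`) keep the dictionaries identical to the hand computation. Bland's smallest-index rule makes x1 enter first, as in part (a).

```python
from fractions import Fraction as F

def simplex(z, rows):
    """Dictionary-form simplex with Bland's smallest-index pivot rule.
    z and every row are pairs (constant, {nonbasic var: coefficient}).
    Sketch only: assumes a bounded LP and a feasible starting dictionary."""
    while True:
        entering = sorted(v for v, c in z[1].items() if c > 0)
        if not entering:                  # no improving variable: optimal
            return z, rows
        e = entering[0]
        leave, best = None, None          # ratio test for the leaving variable
        for b, (c0, cs) in rows.items():
            a = cs.get(e, F(0))
            if a < 0 and (best is None or -c0 / a < best):
                leave, best = b, -c0 / a
        c0, cs = rows.pop(leave)
        a = cs.pop(e)
        # rewrite e in terms of the remaining nonbasics and the leaving var
        expr = (-c0 / a, {v: -c / a for v, c in cs.items()})
        expr[1][leave] = F(1) / a

        def subst(row):                   # substitute expr for e in a row
            rc0, rcs = row[0], dict(row[1])
            ae = rcs.pop(e, F(0))
            rc0 += ae * expr[0]
            for v, c in expr[1].items():
                rcs[v] = rcs.get(v, F(0)) + ae * c
            return rc0, {v: c for v, c in rcs.items() if c != 0}

        z = subst(z)
        rows = {b: subst(r) for b, r in rows.items()}
        rows[e] = expr

# The LP from part (a): z = x1 + 2x2 with slacks x3, x4.
z, rows = simplex((F(0), {'x1': F(1), 'x2': F(2)}),
                  {'x3': (F(3), {'x1': F(-2), 'x2': F(-1)}),
                   'x4': (F(2), {'x1': F(-1), 'x2': F(1)})})
print(z)        # final z-row: z = 6 - 2x3 - 3x1
```

Reading off the final z-row gives the dual solution (y1, y2) = (2, 0) used in part (b).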
2. Local Search.
(a) Given an undirected graph G = (V, E), the maximum cut problem is to find
a set S ⊆ V such that |δ(S)|, the number of edges with exactly one endpoint
in S, is maximised.
(b) Consider any vertex v, and suppose it has degree deg(v). The final cut
δ(St) contains at least (1/2)deg(v) edges incident to vertex v; otherwise the
local search algorithm would not have terminated (since we could improve
the cut by moving v). Summing over all vertices we see that δ(St) contains
at least half the edges in the graph. Obviously, the optimal cut δ(S*)
contains at most the total number of edges in the graph. The result follows.
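The local search in (b) can be sketched directly; the function name and representation here are my own. Each flip strictly increases the cut, so the loop terminates, and at termination every vertex has at least half its edges in the cut, giving the 2-approximation argued above.

```python
import random

def local_search_max_cut(n, edges, seed=0):
    """Local-search max cut sketch: move a vertex to the other side of the
    cut whenever that strictly increases the number of cut edges."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    side = [rng.random() < 0.5 for _ in range(n)]
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cross = sum(side[v] != side[u] for u in adj[v])
            if 2 * cross < len(adj[v]):   # flipping v gains cut edges
                side[v] = not side[v]
                improved = True
    cut = sum(side[u] != side[w] for u, w in edges)
    return [v for v in range(n) if side[v]], cut
```

On a 4-cycle, for instance, any local optimum cuts at least half the 4 edges.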
3. Parameterised Complexity.
(a) A problem is fixed parameter tractable if it has an algorithm to solve it
that runs in time f (k) · p(n), where n is the problem input size and k is
the size of the optimal solution. Here p() is a polynomial function but f ()
need not be.
(b) Randomly colour the vertices with colours {1, 2, . . . , k}. We now search
for a cycle whose vertices are coloured in that same order. Let V1 be the
vertices coloured 1, let V2 be the vertices coloured 2 which have an
edge to some vertex in V1, let V3 be the vertices coloured 3 which have
an edge to some vertex in V2, etc. Thus Vk is the set of vertices at the end
of a path coloured 1, 2, . . . , k. For each vertex v in Vk, do a reverse search
to find all the vertices in V1 that begin multi-coloured paths that end at
v. For each such vertex u, check if (u, v) is an edge. If so we have found a
multi-coloured cycle. This process can be done in time polynomial in the
graph size.
The probability that a k-cycle is multi-coloured is at least (1/k)^k, so repeating
this colouring experiment independently enough times (say k^k · log n times)
will let us find a k-cycle with high probability in time O(f(k) · p(n)), as
desired.
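The colour-coding idea can be sketched as follows; the function name, the adjacency-set representation, and the 0-indexed colours are my own choices. Since each layer uses a distinct colour, any closing edge found yields a genuine simple k-cycle, so the algorithm never reports a false positive.

```python
import random

def find_k_cycle(adj, k, trials):
    """Colour-coding sketch for detecting a k-cycle.
    adj: {vertex: set of neighbours}. Each trial recolours the graph at
    random and looks for a cycle coloured 0, 1, ..., k-1 in order; a trial
    succeeds with probability at least (1/k)^k when a k-cycle exists."""
    rng = random.Random(1)
    for _ in range(trials):
        colour = {v: rng.randrange(k) for v in adj}
        for u in (v for v in adj if colour[v] == 0):
            # layered search for paths coloured 0, 1, ..., k-1 starting at u
            layer = {u}
            for c in range(1, k):
                layer = {w for v in layer for w in adj[v] if colour[w] == c}
            # an endpoint of colour k-1 adjacent to u closes the cycle
            if any(u in adj[w] for w in layer):
                return True
    return False
```

A 5-cycle is found with overwhelming probability over many trials, while a path graph (which has no cycle at all) is always rejected.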
4. Branch and Bound.
The branch and bound tree is shown below. Dashed circles correspond to
feasible solutions (that is, perfect matchings); dotted circles correspond to
suboptimal subtrees that can be pruned away because we have already found
better integral solutions.
Note that by the branching rule we first explore the subpath B then BA to
find the feasible solutions BACD and BADC. The latter has value 22, which
allows us to prune the whole of the subtree rooted at A as well as the remainder
of the nodes in the subtree rooted at B. Next we branch down C and CA, but
this leads to no better solutions. Finally we search the subpath D and DA, leading
to the solutions DABC and DACB, both of value 21. Everything remaining in
the subtree of D can now be pruned. So we have found an optimal solution.
[Branch and bound tree figure: nodes are labelled with partial assignments of
jobs to machines A–D together with their bound values, e.g. BACD : 23,
BADC : 22, DABC : 21, DACB : 21.]
5. NP-Completeness.
(a) Given n (positive) integers x1, x2, . . . , xn, the partition problem asks if
there is a subset S ⊆ [n] such that

    ∑_{i∈S} xi = ∑_{i∉S} xi

(b)
i. Bin Packing is in NP (we can easily check a proposed solution to
confirm a YES instance). The reduction from Partition is as follows.
Set k = 2 and C = (1/2) ∑_i si, where si = xi. Clearly, there is a bin
packing that uses two bins if and only if there is a partition S ⊆ [n]
such that

    ∑_{i∈S} xi = ∑_{i∉S} xi
ii. Bin Covering is in NP (we can easily check a proposed solution to
confirm a YES instance). The reduction from Partition is as follows.
Set k = 2 and R = (1/2) ∑_i si, where si = xi. Clearly, there is a bin
covering that uses two bins if and only if there is a partition S ⊆ [n]
such that

    ∑_{i∈S} xi = ∑_{i∉S} xi
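The reduction in 5(b)i is simple enough to demonstrate end to end on small instances; the function names below are my own, and the brute-force checkers are only there to illustrate the if-and-only-if claim, not as efficient algorithms.

```python
from itertools import combinations

def partition_to_bin_packing(xs):
    """The reduction from Partition to Bin Packing: item sizes si = xi,
    k = 2 bins, capacity C = (1/2) * sum(xs). Sketch: if sum(xs) is odd
    both problems are trivially NO, and integer division keeps C tight."""
    return xs, 2, sum(xs) // 2

def has_partition(xs):
    # brute-force check of the original Partition instance
    total = sum(xs)
    return total % 2 == 0 and any(
        2 * sum(c) == total
        for r in range(len(xs) + 1)
        for c in combinations(xs, r))

def packs_into_two_bins(sizes, capacity):
    # brute force: try every subset as the contents of bin 1
    return any(
        sum(c) <= capacity and sum(sizes) - sum(c) <= capacity
        for r in range(len(sizes) + 1)
        for c in combinations(sizes, r))
```

On any small instance the two answers coincide, exactly as the reduction requires.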
6. Approximation Algorithms.
(a) An α-approximation algorithm for a maximisation problem P always returns,
in polynomial time, a feasible solution S to any instance I ∈ P such
that the "value" of the optimal solution for that instance is at most an α
factor greater than the value of S.
(b) We use a greedy algorithm. First observe that any item i with si ≥ R will
be placed in its own bin in the optimal solution. So our greedy algorithm
will do that as well. So we may assume that si < R for all i.
The greedy algorithm simply places items in Bin 1 until it is overfull. Then
it places items in Bin 2 until it is overfull, etc. The greedy algorithm clearly
runs in polynomial time and returns a feasible solution.
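The greedy algorithm can be sketched in a few lines; the function name is my own, and "overfull" is taken to mean the bin's total has reached or exceeded R.

```python
def greedy_bin_covering(sizes, R):
    """Greedy bin covering sketch: items of size >= R each fill their own
    bin; the remaining items are poured into the current bin until its
    total reaches R, at which point a new bin is started. Returns the
    number of bins filled to at least R."""
    filled = sum(1 for s in sizes if s >= R)
    current = 0
    for s in (s for s in sizes if s < R):
        current += s
        if current >= R:        # bin is (over)full: close it
            filled += 1
            current = 0
    return filled
```

For example, with sizes [3, 3, 3, 3] and R = 5 the greedy fills two bins, matching the optimum here.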
Let's show it gives a factor 3-approximation guarantee. Assume that the
optimal solution fills k bins, and the greedy algorithm fills l bins. So

    ∑_i si ≥ k · R

Now since si < R for all i, the greedy algorithm overfills a bin by at most
max_i si < R. Thus each filled bin has items of total size at most
R + R. The greedy algorithm may use one extra bin that is not full, that
is, one bin that has items of total size at most R. As every item is used
we have

    ∑_i si < l · 2R + R

But by the optimal solution we know

    ∑_i si ≥ k · R

So

    k · R < l · 2R + R

Thus

    l ≥ (1/2)(k − 1) ≥ (1/3)k

provided k ≥ 3. If k < 3 then we trivially get a 2-approximation algorithm
by placing all the items in one bin. So we have a 3-approximation
algorithm.
[Remark. We can actually refine the analysis to show that this is a
2-approximation algorithm. To do this, let item xi be the item that first
overfills bin i. Items x1, . . . , xl are clearly in at most l bins of the optimal
solution. The remaining items have total size less than (l + 1)R because
they don't fill bins 1 to l + 1 (recall one bin in greedy could be left unfilled).
These items thus may cover at most l bins of opt. So opt covers at most
l + l = 2l bins.]