
CMU 15-251
Spring 2017
Homework 10
Writing session on Wednesday April 12
1. (SOLO) In the 1-d Battleship Problem, the input is an array B[1 . . . n] consisting of n/4
1’s and 3n/4 0’s. The task is to output an index i such that B[i] = 1. Write a randomized
algorithm which solves this problem, and justify that your algorithm satisfies the following
properties:
• For every input, with probability 1 it outputs a correct answer.
• For every input, with probability 1 its running time is O(n).
• For every input, the expected running time is O(1).
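One way to meet all three guarantees can be sketched in Python (an illustrative sketch, not a substitute for the written justification): probe uniformly random indices, and after n failed probes fall back to a deterministic linear scan.

```python
import random

def find_one(B):
    """Return an index i with B[i] == 1.

    Assumes B contains n/4 ones. Each random probe succeeds with
    probability 1/4, so the expected number of probes is at most 4
    (geometric distribution), giving O(1) expected time. Capping at
    n probes and then scanning keeps the running time O(n) with
    probability 1, and the answer is always correct.
    """
    n = len(B)
    for _ in range(n):
        i = random.randrange(n)
        if B[i] == 1:
            return i
    # Fallback: linear scan (reached only with probability (3/4)^n).
    for i, b in enumerate(B):
        if b == 1:
            return i
    raise ValueError("input has no 1")
```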
2. (SOLO) Suppose you are given a randomized algorithm that solves f : Σ∗ → Σ∗ in expected
time T (n) and with probability of error ε (i.e., the algorithm gambles with both correctness
and running time).1 Show that for any constant ε₀ > 0, there is a Monte Carlo algorithm
computing f with running time O(T (n)) and error probability ε + ε₀.
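The standard trick can be sketched as follows (a hedged illustration, assuming the given algorithm is modeled as a hypothetical step-by-step generator): cap the run at a fixed step budget and output an arbitrary answer on timeout. If the budget is T (n)/ε₀ steps, Markov's inequality bounds the timeout probability by ε₀.

```python
def time_capped(run_steps, budget, default=None):
    """Monte Carlo wrapper (illustrative; `run_steps` is a hypothetical
    zero-argument function returning a generator that yields once per
    step and returns its answer when it halts).

    If the run has expected length T, then by Markov's inequality it
    exceeds budget = T / eps0 steps with probability at most eps0, so
    aborting there adds at most eps0 to the error probability.
    """
    gen = run_steps()
    for _ in range(budget):
        try:
            next(gen)                 # simulate one step
        except StopIteration as done:
            return done.value         # the run halted with an answer
    return default                    # budget exhausted: arbitrary output
```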
3. (SOLO) In this question, randomization meets approximation. 3SAT is a hard problem to
solve exactly, but is it hard to find a decent approximation algorithm for?
Consider the MAX-3SAT problem where given a CNF formula in which every clause has
exactly 3 literals (with distinct variables), we want to find a truth assignment to the variables
in the formula so that we maximize the number of clauses that evaluate to True.
Describe a polynomial-time randomized algorithm with the property that given a 3CNF
formula with m clauses, it outputs a truth assignment to the variables such that the expected
number of clauses that evaluate to True is (7/8)m (i.e., in expectation, the algorithm is a
7/8-approximation algorithm).
Hint: Review the analysis of the randomized MAX-CUT algorithm shown in lecture.
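The algorithm the problem is after can be sketched in Python (a hedged sketch; the DIMACS-style clause encoding, where a literal is v or −v, is an assumption of this illustration): assign each variable uniformly at random. A 3-literal clause on distinct variables is falsified by exactly one of the 8 assignments to its variables, so it is satisfied with probability 7/8, and linearity of expectation does the rest.

```python
import random

def random_assignment_count(clauses, num_vars):
    """MAX-3SAT by a uniformly random assignment.

    clauses: tuples of nonzero ints; literal l is true iff the variable
    abs(l) has value (l > 0). Each 3-clause with distinct variables is
    satisfied with probability 7/8, so the expected number of satisfied
    clauses is (7/8)m by linearity of expectation.
    """
    assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
    satisfied = sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )
    return assignment, satisfied
```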
Fun fact: There is no (7/8 + ε)-approximation algorithm for MAX-3SAT (for any constant
ε > 0) unless P = NP (this result is very hard to prove). So the randomized algorithm you
came up with is pretty much the best possible.
4. (GROUP) Consider the first problem from the last homework (the one where Alan writes
homework solutions on toilet paper). Show that for any ε > 0, there is no polynomial-time
(1.5 − ε)-approximation algorithm for that problem unless P = NP.
Hint: PARTITION.
5. (GROUP) Recall the hospital problem from the previous homework. We saw that the
problem was NP-hard. Let’s relax our original goal and try to make sure that everyone will
be somewhat close to a hospital.
1 In lecture, we introduced a Las Vegas algorithm as a randomized algorithm that always gives the correct answer
but gambles a bit with running time: the algorithm runs in T (n) time with high probability. Usually, this guarantee
on the running time is expressed by saying that the expected running time is T (n). In fact, Las Vegas algorithms are
defined using the notion of expected running time (see the course notes).
Suppose we have a connected graph G = (V, E) and a subset H ⊆ V . For any vertex v ∈ V ,
we write dist(v, H) for the length of the shortest path in G from v to any vertex in H. We
also define the radius of H to be
radius(H) = max{dist(v, H) : v ∈ V }.
This is like “the farthest anyone has to travel to get to a hospital”. Given G = (V, E) and
k, it is NP-hard to find the size-k set H ⊆ V with smallest radius. We want you to analyze
a polynomial-time algorithm for finding a size-k set H which nevertheless has pretty good
radius. Consider the algorithm below:
A: input is a connected graph G = (V, E) and a positive integer k.
Let H = {v1 }, where v1 ∈ V is an arbitrary vertex.
for i = 2, 3, . . . , k
Determine the vertex vi ∈ V for which dist(vi , H) is largest.
Add vi into H.
Output H.
Show that this is a 2-approximation algorithm.
(Hint: Think about the “coverage zones” defined by the optimum H ∗ : i.e., for each vertex
in H ∗ , the set of vertices that are closest to it. How far apart can two vertices in the same
coverage zone be?)
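Algorithm A above can be sketched in runnable form (an illustrative sketch; the adjacency-list representation is an assumption of this illustration). Since G is unweighted, dist(v, H) for all v is a single multi-source BFS.

```python
from collections import deque

def greedy_centers(adj, k):
    """Greedy farthest-point selection, exactly as in algorithm A.

    adj: dict mapping each vertex to a list of neighbors (connected,
    unweighted graph). Start from an arbitrary vertex, then k - 1 times
    add the vertex whose distance to the current set H is largest.
    """
    def bfs_dist(sources):
        # Multi-source BFS: computes dist(v, H) for every vertex v.
        dist = {s: 0 for s in sources}
        q = deque(sources)
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    H = [next(iter(adj))]                    # v1: an arbitrary vertex
    for _ in range(k - 1):
        dist = bfs_dist(H)
        H.append(max(dist, key=dist.get))    # vertex farthest from H
    return H
```

On the path graph 0–1–2–3–4 with k = 2, starting from vertex 0 the algorithm picks vertex 4, giving radius 2, at most twice the optimal radius 1 (achieved by H∗ = {1, 3}).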
6. (GROUP) At first, one of the surprising aspects of the Cook-Levin Theorem seems to be that
it shows that an NP-complete language indeed exists. On the other hand, in this problem, we
will try to convince you that the existence of an NP-complete language is not very surprising,
and not hard to prove. The real contribution of the Cook-Levin Theorem is the fact that it
provides a “natural” language that is NP-complete (i.e. SAT or CIRCUIT-SAT), which can
then be used to prove the NP-hardness of many other “natural” languages.
We’ll fix our alphabet to Σ = {0, 1}. Consider the language
WOW = {hM, x, 1t i : M is a TM with two inputs, x ∈ {0, 1}∗ , t ∈ N, and
∃u ∈ {0, 1}∗ with |u| ≤ t such that M (x, u) accepts in at most t steps}.
Show that WOW is NP-complete. For the NP-hardness proof, you should not use the existence
of any NP-hard language. Instead, argue directly that for any language L ∈ NP, L ≤Pm WOW.
Note on notation: Above, 1t denotes the string consisting of t 1’s. It is the encoding of the
number t in unary. You should think about what would go wrong if t was encoded in binary
instead of unary.