Basic Algorithms — Spring 2017 — Problem Set 6
Due: April 28
1. Weird recursion tree analysis. Suppose we have an algorithm that on problems of size n, recursively
solves two problems of size n/2, with a “local running time” of O(f (n)) for some function f (n). That is, the
algorithm’s total running time satisfies the recurrence T (n) ≤ 2T (n/2) + O(f (n)). For simplicity, assume
that n is a power of 2.
Prove the following using a recursion tree analysis:
(a) If f (n) = n log n, then T (n) = O(n (log n)^2).
(b) If f (n) = n/ log n, then T (n) = O(n log log n).
(c) If f (n) = n/(log n)^2, then T (n) = O(n).
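For intuition (not a proof), the level-by-level sums in the recursion tree can be checked numerically; the short Python sketch below assumes a unit-cost base case and uses the exact local cost f(n) in place of O(f (n)).

    import math

    def recursion_tree_total(n, f):
        # Level j contributes 2^j * f(n / 2^j); the n leaves cost 1 each.
        total, size, copies = 0.0, n, 1
        while size > 1:
            total += copies * f(size)
            size //= 2
            copies *= 2
        return total + copies

    cases = [
        (lambda m: m * math.log2(m),      lambda m: m * math.log2(m) ** 2,       "(a)"),
        (lambda m: m / math.log2(m),      lambda m: m * math.log2(math.log2(m)), "(b)"),
        (lambda m: m / math.log2(m) ** 2, lambda m: m,                           "(c)"),
    ]
    for f, bound, label in cases:
        for k in (8, 16, 24):
            print(label, "n = 2^%d" % k, recursion_tree_total(2 ** k, f) / bound(2 ** k))

The printed ratios stay bounded as n grows, consistent with the three claimed bounds.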
2. Uneven divide and conquer. Suppose we have an algorithm that on problems of size n, recursively
solves two subproblems, one of size ⌊n/b1⌋ and one of size ⌊n/b2⌋. Here, each bi is a constant greater than
1. Furthermore, the “local running time” is O(n). That is, the algorithm’s total running time satisfies the
recurrence
T (n) ≤ T (⌊n/b1⌋) + T (⌊n/b2⌋) + cn
for some constant c. You may assume the above inequality holds for all n ≥ 1, and that T (0) = 0.
Let δ := 1/b1 + 1/b2 . Assuming that δ < 1, use the method of recursion trees to prove that T (n) = O(n).
Hint: prove by induction on j the following: at each level j = 0, 1, 2, . . . of the recursion tree, the sum of the
subproblem sizes at level j is at most δ^j n.
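As a quick numerical check (again, not a proof), the recurrence can be evaluated directly; the sketch below assumes equality in the recurrence, c = 1, and the example split b1 = 3, b2 = 2, so that δ = 1/3 + 1/2 = 5/6 < 1.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n, b1=3, b2=2):
        # Assumes the recurrence holds with equality and c = 1.
        if n == 0:
            return 0
        return T(n // b1, b1, b2) + T(n // b2, b1, b2) + n

    for n in (10**3, 10**5, 10**7):
        print(n, T(n) / n)   # ratios stay below 1/(1 - 5/6) = 6, consistent with T(n) = O(n)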
3. Product trees. For this and the following two exercises, we assume that F is a field and that we can
compute the product of two polynomials in F [X] whose degrees are less than n using O(n^α) operations in F,
where 1 < α < 2 is a constant. You may also freely use the so-called Master Theorem in these exercises.
For this problem, the input is a list of field elements a1 , . . . , an ∈ F . For simplicity, assume that n is a power
of 2. The output is a complete binary tree with n leaves, where each node in the tree stores a polynomial
of the form P_s^t := ∏_{i=s}^{s+t−1} (X − ai ):
• the root stores P_1^n ;
• for every internal node in the tree, if that node stores the polynomial P_s^{2t}, its left child stores P_s^t and its
right child stores P_{s+t}^t .
Note that the n leaves of the tree store the linear polynomials (X − a1 ), . . . , (X − an ).
Using the given polynomial multiplication algorithm as a subroutine, design a recursive algorithm to solve
this problem and analyze its running time. In particular, show that the running time is O(n^α) operations in F.
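The intended structure can be illustrated with a short Python sketch; here polynomials are coefficient lists (lowest degree first) and poly_mul is a naive placeholder for the assumed fast O(n^α) multiplication, so the sketch does not achieve the required running time.

    def poly_mul(f, g):
        # Naive multiplication; a stand-in for the assumed O(n^alpha) subroutine.
        out = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                out[i + j] += fi * gj
        return out

    def product_tree(points):
        # Leaf: the linear polynomial X - a_i; internal node: product of its two children.
        if len(points) == 1:
            return {"poly": [-points[0], 1]}
        mid = len(points) // 2
        left, right = product_tree(points[:mid]), product_tree(points[mid:])
        return {"poly": poly_mul(left["poly"], right["poly"]), "left": left, "right": right}

    print(product_tree([1, 2, 3, 4])["poly"])   # coefficients of (X-1)(X-2)(X-3)(X-4)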
4. Multi-point evaluation. For this problem, the input is a list of field elements a1 , . . . , an ∈ F and a
polynomial f ∈ F [X] of degree less than n. For simplicity, assume that n is a power of 2. The output is
f (a1 ), . . . , f (an ). Show how to solve this problem with a recursive algorithm that uses O(n^α) operations in F.
Note that unlike the FFT, we do not assume anything special about the evaluation points a1 , . . . , an .
Use the algorithm from the previous exercise as a preprocessing step.
Also, you may use as a subroutine an algorithm for polynomial division: on input g, h ∈ F [X] where h ≠ 0
and both g and h have degree less than n, you may assume this algorithm computes the quotient q ∈ F [X]
and remainder r = g mod h ∈ F [X] satisfying g = hq + r and deg(r) < deg(h) using O(n^α) operations in F.¹
Hint: use the following facts:
(i) for a ∈ F , we have f (a) = f mod (X − a);
(ii) for non-zero polynomials h1 , h2 ∈ F [X], we have f mod h1 = (f mod h1 h2 ) mod h1 ; that is, to compute
f mod h1 , we can first compute r = f mod h1 h2 and then r mod h1 .²
¹ It is a general fact that we can perform polynomial division just as fast as polynomial multiplication, up to constant factors.
² This is not a hard fact to prove. You might try to prove it yourself.
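A minimal sketch of the recursive structure suggested by the hints, reusing product_tree (and its helper poly_mul) from the sketch under Exercise 3; poly_mod below is a naive stand-in for the assumed fast division-with-remainder subroutine, so it does not achieve the O(n^α) bound.

    def poly_mod(g, h):
        # Naive remainder of g modulo h; assumes h is monic, as every
        # product-tree polynomial is.
        g = list(g)
        while len(g) >= len(h):
            factor = g[-1]                      # leading quotient coefficient (h is monic)
            shift = len(g) - len(h)
            for i, hi in enumerate(h):
                g[shift + i] -= factor * hi
            g.pop()                             # leading term is now zero
        return g or [0]

    def evaluate(f, node, points):
        r = poly_mod(f, node["poly"])           # fact (ii): reduce modulo the subtree product
        if len(points) == 1:
            return [r[0]]                       # fact (i): f(a) = f mod (X - a)
        mid = len(points) // 2
        return (evaluate(r, node["left"], points[:mid]) +
                evaluate(r, node["right"], points[mid:]))

    pts = [1, 2, 3, 4]
    print(evaluate([5, 0, 1], product_tree(pts), pts))   # f = X^2 + 5: [6, 9, 14, 21]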
5. Polynomial interpolation.
(a) For this problem, the input consists of two lists of field elements: a1 , . . . , an ∈ F and c1 , . . . , cn ∈ F. For
simplicity, assume that n is a power of 2. Define the polynomial P := ∏_{i=1}^{n} (X − ai ), and for i = 1, . . . , n,
define the polynomial Pi∗ := P/(X − ai ). That is,
Pi∗ = ∏_{1≤j≤n, j≠i} (X − aj ) = (X − a1 ) · · · (X − ai−1 ) (X − ai+1 ) · · · (X − an ).
The output is the polynomial ∑_{i=1}^{n} ci Pi∗ .
Show how to solve this problem with a recursive algorithm that uses O(n^α) operations in F.
Hint: use the algorithm from Exercise 3 as a preprocessing step.
(b) Use part (a) and the algorithm from Exercise 4 to implement Lagrange interpolation using O(n^α)
operations in F. The input is a list (a1 , b1 ), . . . , (an , bn ), where each ai and bi is in F, and ai ≠ aj for all
i ≠ j, and the output is the unique polynomial f ∈ F [X] of degree less than n that satisfies f (ai ) = bi
for i = 1, . . . , n. For simplicity, assume that n is a power of 2.
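For reference, the naive computation below (with exact rationals) shows how parts (a) and (b) fit together: the interpolant of part (b) is the output of part (a) with ci = bi /Pi∗ (ai ). It is only an illustration, not the required O(n^α) algorithm.

    from fractions import Fraction

    def mul_linear(p, a):
        # Multiply the polynomial p (coefficient list, lowest degree first) by (X - a).
        q = [Fraction(0)] * (len(p) + 1)
        for k, c in enumerate(p):
            q[k] -= a * c
            q[k + 1] += c
        return q

    def interpolate(points):
        a = [Fraction(x) for x, _ in points]
        b = [Fraction(y) for _, y in points]
        n = len(points)
        f = [Fraction(0)] * n
        for i in range(n):
            pi = [Fraction(1)]
            for j in range(n):
                if j != i:
                    pi = mul_linear(pi, a[j])                       # build P_i^* naively
            denom = sum(c * a[i] ** k for k, c in enumerate(pi))    # P_i^*(a_i)
            for k in range(n):
                f[k] += b[i] * pi[k] / denom
        return f

    print(interpolate([(1, 1), (2, 4), (3, 9), (4, 16)]))   # X^2, i.e. [0, 0, 1, 0]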
6. Doubling down. Recall the dice game played between Alice and Bob: Alice rolls two dice, and tells Bob
the sum; then Bob guesses a number from 1 to 6. Bob wins the game if his guess appears on either of the
two dice.
Now suppose Alice and Bob play for money. If Bob wins, he wins a dollar, and if he loses, Alice wins a dollar.
We can model this using a random variable W representing Bob’s winnings. We saw that using an optimal
strategy, Bob wins with probability 5/9. This means E[W ] = (1)(5/9) + (−1)(4/9) = 1/9.
To make the game more interesting, Alice allows Bob to “double down”: this means that after Alice tells Bob
the sum, Bob is allowed to double his bet from one dollar to two dollars, if he so chooses. Give an optimal
strategy for Bob (for both his guess and the double-down decision) and compute E[W ] for this strategy.
Hint: if Z is a random variable representing the sum of the two dice, compute E[W ] using the law of total
expectation:
E[W ] = ∑_{ℓ=2}^{12} E[W | Z = ℓ] Pr[Z = ℓ].
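As a sanity check on the figures quoted above for the original game (no doubling down), the brute-force enumeration below recovers the winning probability 5/9 and E[W ] = 1/9; extending it to the double-down decision is part of the exercise.

    from fractions import Fraction
    from collections import defaultdict

    by_sum = defaultdict(list)
    for d1 in range(1, 7):
        for d2 in range(1, 7):
            by_sum[d1 + d2].append((d1, d2))

    win = Fraction(0)
    for s, rolls in by_sum.items():
        # Bob sees the sum s and guesses the number appearing on a die in the most outcomes.
        best = max(range(1, 7), key=lambda g: sum(g in roll for roll in rolls))
        win += Fraction(sum(best in roll for roll in rolls), 36)

    print(win, 2 * win - 1)   # 5/9 and E[W] = 1/9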
7. Computing expectations.
(a) Let X and Y be independently and uniformly distributed over {1, −1}. Let Z = X + Y . Compute the
following:
E[Z], E[Z^2], E[Z^3], E[Z^4].
Hints: linearity of expectation, product rule for independent random variables.
(b) You roll a die and it comes up X. Then you continue to roll the die repeatedly until it shows a number
that is greater than or equal to X. What is the expected number of additional rolls? Hints: geometric
distribution, total expectation.
(c) You toss a coin until you get a total of k heads. What is the expected number of coin tosses? Hints:
geometric distribution, linearity of expectation.
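If you want to sanity-check your answer to part (b), a quick Monte Carlo estimate (an approximation, not a derivation) can be run as follows.

    import random

    def additional_rolls():
        x = random.randint(1, 6)            # the first roll
        count = 0
        while True:
            count += 1
            if random.randint(1, 6) >= x:   # stop once a roll is >= x
                return count

    N = 200_000
    print(sum(additional_rolls() for _ in range(N)) / N)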
8. Expected Min. Suppose X1 , . . . , Xt are mutually independent random variables, each uniformly distributed
over the set {1, . . . , n}. Let X = min(X1 , . . . , Xt ). Your goal is to show that E[X] = n/(t + 1) + O(1).
(a) Show that for j = 1, . . . , n, we have Pr[X ≥ j] = (n − j + 1)^t /n^t.
(b) Using the tail sum formula for expectation, along with part (a), show that
E[X] = (1/n^t) ∑_{i=1}^{n} i^t.
(c) Approximating a sum by an integral, use part (b) to show that
E[X] = n/(t + 1) + O(1).
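The formula from part (b) can be evaluated exactly for moderate n and t, and comparing it with n/(t + 1) makes the O(1) gap of part (c) visible; this is a numerical illustration, not a substitute for the integral argument.

    from fractions import Fraction

    def expected_min(n, t):
        # E[X] = (1/n^t) * sum_{i=1}^{n} i^t, from part (b).
        return Fraction(sum(i ** t for i in range(1, n + 1)), n ** t)

    for n, t in [(100, 2), (1000, 3), (10000, 5)]:
        print(n, t, float(expected_min(n, t) - Fraction(n, t + 1)))   # stays O(1), roughly 1/2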
9. Concurrent coin tossing. There are n people in a room. They play a game that proceeds in rounds. At
each round, every person in the room flips a coin — if it comes up heads, they leave the room. The question
is: what is the expected number of rounds until the room is empty?
To this end, let Y be the number of rounds until the room is empty. We have Y = max(Y1 , . . . , Yn ), where the
Yi ’s are independent random variables, each with a geometric distribution with associated success probability
1/2. Your goal is to show that E[Y ] = Θ(log n).
(a) Using the tail sum bound, show that
E[Y ] = ∑_{i=1}^{∞} (1 − (1 − 2^{−(i−1)})^n ) = ∑_{i=0}^{∞} (1 − (1 − 2^{−i})^n ).
(b) Now estimate the sum by an integral, and make the substitution u = 1 − 2^{−x} to show that (1/ ln(2))Hn ≤
E[Y ] ≤ 1 + (1/ ln(2))Hn , where Hn is the nth harmonic number.
(c) Conclude that E[Y ] = Θ(log n).
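To see the bounds of part (b) in action, the tail sum can be computed numerically (truncated once the terms are negligible) and compared with (1/ ln(2))Hn and 1 + (1/ ln(2))Hn ; this only checks the inequalities numerically, it does not prove them.

    import math

    def harmonic(n):
        return sum(1.0 / k for k in range(1, n + 1))

    def expected_rounds(n):
        # Tail sum from part (a), truncated once the terms are negligible.
        total, i = 0.0, 0
        while True:
            term = 1.0 - (1.0 - 2.0 ** (-i)) ** n
            if term < 1e-15:
                return total
            total += term
            i += 1

    for n in (10, 100, 1000):
        lo = harmonic(n) / math.log(2)
        print(n, lo, expected_rounds(n), 1 + lo)   # lower bound, E[Y], upper bound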