
CS-270 Algorithms 2013/14
Solution to Coursework I
Oliver Kullmann
Swansea University
Computer Science
January 8, 2014
1 Rules
Formulate, in your own words (really important) a kind of “framework” for working
out Θ-expressions:
1. These rules should be tailored to the task at hand: not a general theory, but something workable which suffices for the task (Section 2).
2. Try to be as precise as possible. In Section 2, you need to state which rules you
applied, and each application should always be obvious.
3. Give examples for each rule.
4. You need to justify these rules, but it definitely doesn’t need to be a mathematical
explanation: you might cite the book, or even some Internet sources might be used
(only if they are really authoritative — sorry, but other students don’t count here).
Here are basic rules with examples; if you have problems reading such language, then at least
make sure that you can handle the simplifications as in Section 2.[1]
1.1 Simplification
First, two rules for removing some parts of a term. We can drop constant factors:

    α · f(n) = Θ(f(n))    (1)

for a constant number α > 0. So for example 5n^2 = Θ(n^2). We can also drop smaller addends:

    f(n) + g(n) = Θ(g(n))    (2)

for f(n) = O(g(n)). So for example we have n = O(n^2), and thus n + n^2 = Θ(n^2).
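Both rules can be sanity-checked numerically: if f(n) = Θ(g(n)), then the ratio f(n)/g(n) must eventually stay between two positive constants. A minimal Python sketch (the helper name `ratios` is ours, not part of the coursework):

```python
# If f(n) = Theta(g(n)), the ratio f(n)/g(n) stays between two
# positive constants for large n.

def ratios(f, g, ns):
    """Return the list of ratios f(n)/g(n) for each n in ns."""
    return [f(n) / g(n) for n in ns]

ns = [10, 100, 1000, 10_000]

# Rule (1): 5n^2 = Theta(n^2); the ratio is constantly 5.
print(ratios(lambda n: 5 * n**2, lambda n: n**2, ns))

# Rule (2): n + n^2 = Theta(n^2); the ratio tends to 1.
print(ratios(lambda n: n + n**2, lambda n: n**2, ns))
```

Of course such a finite check is no proof, but it quickly exposes a wrong simplification.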
[1] It might be that reflecting on all that gives you headaches — in that case don’t worry too much about it, and concentrate on practical handling of Θ and Ω.
1.2 O-Relations
Now come four rules for obtaining O-relations (needed for Rule (2)). First the trivial rule,
which is applied without mentioning: if there is some integer n_0 such that f(n) ≤ g(n) holds
for all n ≥ n_0, then we have f(n) = O(g(n)). More substantial is that all logarithms are
Θ-equal, and asymptotically smaller than any power:

    log_a(n) = Θ(lg(n)),    lg(n) = O(n^α)    (3)

for all constant numbers a > 1 and α > 0. So for example log_3(n) = Θ(lg n) and log_10(n) = O(√n). Powers are simply ordered by the exponents:

    n^α ≤ n^β    (4)
for constant numbers 0 < α ≤ β; for example n^5 ≤ n^6. Finally, ordering exponentials is also
easy — they are asymptotically bigger than any power, and themselves ordered by the base:
    n^α = O(a^n),    a^n ≤ b^n    (5)

for all constant numbers α > 0 and 1 < a ≤ b. So for example n^(10^1000000) = O(1.0000001^n) and 3^n ≤ 4^n.

1.3 Structural rules
These structural rules are very basic, and thus they are harder to understand. If you don’t
understand them, chances are that you apply them anyway without noticing. Let’s hope for the best;
you might skip this subsection.
We need a general structural rule for replacement of Θ-equal terms in sums:

    f(n) = Θ(g(n)) =⇒ f(n) + h(n) = Θ(g(n) + h(n)).    (6)

For example, considering 5n + n^2, we set f(n) := 5n, g(n) := n, and h(n) := n^2. We know
5n = Θ(n), and thus 5n + n^2 = Θ(n + n^2).
Another basic structural rule is transitivity of Θ-equality:

    f(n) = Θ(g(n)) and g(n) = Θ(h(n)) =⇒ f(n) = Θ(h(n)).    (7)

For example, we have already derived 5n + n^2 = Θ(n + n^2), and we also know n + n^2 = Θ(n^2),
and thus 5n + n^2 = Θ(n^2).
Finally there is the trivial rule (again applied without mentioning), that if there is an
integer n_0 such that f(n) = g(n) holds for all n ≥ n_0, then we have f(n) = Θ(g(n)). For
example (n + 3)^2 = n^2 + 6n + 9, and thus (n + 3)^2 = Θ(n^2 + 6n + 9).
1.4 Subtraction
To fully handle the examples in Section 2, we also need to do something about subtraction.
This is now a bit subtle:
• If we have a sum f(n) + g(n), then Rule (6) says that we can replace g(n) by h(n), if
g(n) = Θ(h(n)) holds, and we get f(n) + g(n) = Θ(f(n) + h(n)).
• However for a difference f(n) − g(n) we have to be more careful, and replacing g(n) by
h(n) in case of g(n) = Θ(h(n)) is no longer possible:

  1. n^2 − n^2 = 0
  2. n^2 − (1/2)·n^2 = (1/2)·n^2.

• The point is, that in a difference f(n) − g(n) we can drop g(n) if g(n) is asymptotically
“strictly smaller” than f(n).
Instead of formulating this in general, we only formulate the rule we need:

    n^α − γ · n^β = Θ(n^α)    (8)

for 0 ≤ β < α and arbitrary γ.
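Rule (8) can be illustrated the same way as the earlier rules: for 0 ≤ β < α the subtracted term becomes negligible, however large the constant γ is. A small Python sketch (the concrete values α = 4, β = 1, γ = 8 are our own choice, matching 11n^4 − 8n up to the factor 11):

```python
# Rule (8): n^alpha - gamma * n^beta = Theta(n^alpha) for 0 <= beta < alpha.
# The ratio against n^alpha tends to 1, whatever the constant gamma is.

def ratio(n, alpha, beta, gamma):
    return (n**alpha - gamma * n**beta) / n**alpha

for n in [10, 1000, 100_000]:
    print(ratio(n, 4, 1, 8))  # approaches 1 as n grows
```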
1.5 Justifications
The range of acceptable justifications is rather large, and since I do not want to digress here
into mathematical argumentation, only some incomplete arguments are given, which are good
to know in any case.
In the context of this module, don’t worry too much about such justifications, but have a
“critical consciousness”, that is, know when you are applying rules, and try to be as precise
as possible with them.
1. Rule (4), that is, for all natural numbers n ≥ 0 and all real numbers 0 < α ≤ β holds
n^α ≤ n^β, is a basic mathematical fact, which should be quite intuitive.[2]
2. The part of Rule (5) which states a^n ≤ b^n is another basic mathematical fact, which
again should be quite intuitive.
3. That all logarithms are Θ-equal (the first part of Rule (3)) follows by the change-of-base formula

       log_a(x) = log_b(x) / log_b(a).

   For example log_8(x) = lg(x)/3 (think about it — that’s not too hard).
4. That lg(n) grows slower than n should be rather “obvious”. That lg(n) grows slower
than e.g. √n is just one step further. So also the second part of Rule (3) should “feel
alright”.
5. Comparing a power with an exponential: A general proof (from scratch) is not completely trivial. Let’s consider the inequality n^a ≤ 2^n for n ≥ 2 and arbitrary a > 0
(note that writing down such an inequality doesn’t mean it’s true — in general we are
looking for those n making it true):

       n^a ≤ 2^n ⇐⇒ lg(n^a) ≤ lg(2^n) ⇐⇒ a · lg(n) ≤ n ⇐⇒ n / lg(n) ≥ a.

   If we accept that lg(n) grows slower than n, then n / lg(n) ≥ a holds for n sufficiently large,
   that is, n^a ≤ 2^n holds for n sufficiently large. And thus n^a = O(2^n).

[2] However also notice that for real numbers 0 < x < 1 and for 0 < α < β ≤ 1 we have x^α > x^β.
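The last step of this argument can be made concrete: for any a we can search for a threshold n_0 from which n/lg(n) ≥ a, and hence n^a ≤ 2^n, holds. A throwaway Python check (the function name `threshold` and the linear search are ours):

```python
import math

def threshold(a):
    """Smallest n >= 2 with n / lg(n) >= a; from there on n^a <= 2^n."""
    n = 2
    while n / math.log2(n) < a:
        n += 1
    return n

n0 = threshold(10)
print(n0, n0**10 <= 2**n0)  # the inequality n^10 <= 2^n holds at n0
```

Since n/lg(n) is increasing for n ≥ 3, the inequality then holds for all n ≥ n_0.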
2 Θ-simplifications
Give Θ-expressions which are as simple as possible, and state the rules (your rules!)
you applied:

    5n^3 − 8n + 11n^4 = Θ(?)
    2^n + 3^n + n^500 = Θ(?)
    √n + log(n) = Θ(?)
    n · (n + 1) · (n + 2) = Θ(?)
The results are:

    5n^3 − 8n + 11n^4 = Θ(n^4)
    2^n + 3^n + n^500 = Θ(3^n)
    √n + log(n) = Θ(√n)
    n · (n + 1) · (n + 2) = Θ(n^3).
The rules used are:
1. (1), (2), (4), (6), (7), (8).
2. (2), (5), (6), (7).
3. (2), (3), (6), (7).
4. (1), (2), (4), (6), (7).
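These four answers can themselves be sanity-checked numerically: for f(n) = Θ(g(n)) one expects c1 · g(n) ≤ f(n) ≤ c2 · g(n) for large n. A rough Python check (the constants 2 and 100 and the test point n = 10000 are our ad-hoc choices; note that for 2^n + 3^n + n^500 the power n^500 only loses against 3^n around n ≈ 5000):

```python
import math

def sandwiched(f, g, n):
    """Finite-n proxy for f = Theta(g): g(n)/2 <= f(n) <= 100*g(n).
    Written with integer multipliers so that huge ints never meet floats."""
    fv, gv = f(n), g(n)
    return 2 * fv >= gv and fv <= 100 * gv

n = 10_000
checks = [
    (lambda n: 5*n**3 - 8*n + 11*n**4,     lambda n: n**4),          # Theta(n^4)
    (lambda n: 2**n + 3**n + n**500,       lambda n: 3**n),          # Theta(3^n)
    (lambda n: math.sqrt(n) + math.log(n), lambda n: math.sqrt(n)),  # Theta(sqrt(n))
    (lambda n: n*(n+1)*(n+2),              lambda n: n**3),          # Theta(n^3)
]
print(all(sandwiched(f, g, n) for f, g in checks))  # True
```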
3 Θ-ordering
Sort the ten expressions into asymptotically ascending order (i.e., if f(x) is left of g(x),
then you need to have f(x) = O(g(x))):

    lg n, √n − lg(n), 2^(n+1), n + √n, 3.4n, 4^n, n^n, n^2 + n^3, 2^n, n + n^2.
You need also to give some justification (could be a mathematical proof, but doesn’t need
to be) for each relation in this list, in the form of “f (x) = O(g(x)) because of ...”. In
some cases we even have a relation f (x) = Θ(g(x)), which needs to be stated and argued
for.
The sorted order is, with Θ-equalities:

    lg n, √n − lg(n), n + √n, 3.4n, n + n^2, n^2 + n^3, 2^n, 2^(n+1), 4^n, n^n.
To justify this, best to first determine the Θ-“normal form”:

    lg n = Θ(lg n)
    √n − lg(n) = Θ(√n)      ((3), (8))
    n + √n = Θ(n)           ((2), (3))
    3.4n = Θ(n)             ((1))
    n + n^2 = Θ(n^2)        ((2), (4))
    n^2 + n^3 = Θ(n^3)      ((2), (4))
    2^n = Θ(2^n)
    2^(n+1) = Θ(2^n)        ((1))
    4^n = Θ(4^n)
    n^n = Θ(n^n).
In this way we see the two Θ-equalities. And that the order of the ten terms is valid we can
see by our rules, or, for example, by noticing that for all n ≥ 16 holds

    lg n ≤ √n ≤ n ≤ n^2 ≤ n^3 ≤ 2^n ≤ 4^n ≤ n^n.

(The threshold 16 matters: at n = 10 the first inequality fails, since lg 10 ≈ 3.32 > √10 ≈ 3.16; at n = 16 we have equality, lg 16 = √16 = 4.)
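This chain can be verified mechanically over a range of n (the helper name `chain_holds` is ours):

```python
import math

def chain_holds(n):
    """Check lg n <= sqrt(n) <= n <= n^2 <= n^3 <= 2^n <= 4^n <= n^n."""
    seq = [math.log2(n), math.sqrt(n), n, n**2, n**3, 2**n, 4**n, n**n]
    return all(a <= b for a, b in zip(seq, seq[1:]))

print(all(chain_holds(n) for n in range(16, 200)))  # True
print(chain_holds(10))  # False: lg(10) > sqrt(10)
```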
4 Recurrences
State in your own words a methodology for solving recurrences as considered by us:
1. Here the source shall be just the lecture notes (nothing else).
2. Say clearly, what is given (the “problem”), and how to solve it, as a kind of (little)
tutorial.
3. Give two examples for each case.
In Wikipedia we find “recurrences” under Recurrence relation; only the Fibonacci numbers
and the Computer Science section there are relevant to us (but you don’t need to worry about that).
For us, recurrences come from Divide-and-Conquer algorithms, and express the running
time T(n) in a form

    T(n) = a · T(n/b) + Θ(n^c),    (9)

where a, b, c can be real numbers, with a ≥ 1, b > 1 (otherwise no getting smaller!), and c ≥ 0.
The underlying algorithm shall not concern us here. And the recurrence is usually just
read off from the algorithm:
• a is the number of sub-cases to be handled (and so typically a is a natural number); if
there is “no a”, then a = 1.
• b is the factor by which the instance is smaller (and so typically also b is a natural
number).
• Θ(n^c) is the “recombination cost”.

Sometimes instead of “Θ(n^c)” we have something like “1” (or, say, “77.123”), which
translates to Θ(n^0). And “n” (or, say, “111n”) translates into Θ(n^1).
Once the form (9) is established, we can apply the so-called “(Simplified) Master Theorem”,
which has three cases:

1. If c < log_b(a), then T(n) = Θ(n^(log_b(a))).
2. If c = log_b(a), then T(n) = Θ(n^c · lg n).
3. If c > log_b(a), then T(n) = Θ(n^c).
We can determine which of the three cases holds without computing the logarithm log_b(a),
namely:

1. c < log_b(a) ⇐⇒ b^c < a
2. c = log_b(a) ⇐⇒ b^c = a
3. c > log_b(a) ⇐⇒ b^c > a.
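This case analysis is easy to mechanize. A small Python sketch of the Simplified Master Theorem (the function name `master` and the output strings are our own convention):

```python
import math

def master(a, b, c):
    """Simplified Master Theorem for T(n) = a*T(n/b) + Theta(n^c),
    with a >= 1, b > 1, c >= 0. Decides the case via b^c versus a."""
    bc = b ** c
    if bc < a:                       # Case 1: c < log_b(a)
        return f"Theta(n^{math.log(a, b):g})"
    if bc == a:                      # Case 2: c = log_b(a)
        return f"Theta(n^{c} * lg n)"
    return f"Theta(n^{c})"           # Case 3: c > log_b(a)

print(master(2, 2, 0))   # T(n) = 2T(n/2) + 1    -> Theta(n^1)
print(master(2, 2, 1))   # T(n) = 2T(n/2) + n    -> Theta(n^1 * lg n)
print(master(27, 3, 2))  # T(n) = 27T(n/3) + n^2 -> Theta(n^3)
```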
Examples:

1. Case 1:
   (a) T(n) = 2T(n/2) + 1, yielding T(n) = Θ(n).
   (b) T(n) = 27T(n/3) + n^2, yielding T(n) = Θ(n^3).
2. Case 2:
   (a) T(n) = 2T(n/2) + n, yielding T(n) = Θ(n · lg n).
   (b) T(n) = 25T(n/5) + n^2, yielding T(n) = Θ(n^2 · lg n).
3. Case 3:
   (a) T(n) = 2T(n/2) + n^2, yielding T(n) = Θ(n^2).
   (b) T(n) = 25T(n/5) + n^3, yielding T(n) = Θ(n^3).