
The Growth of Functions and Big-O Notation
Big-O Notation
Big-O notation allows us to describe the asymptotic growth of a function without concern for i)
constant multiplicative factors, and ii) less influential additive terms. For example, using big-O
notation, the function f(n) = 3n² + 6n + 7 is considered to have the same kind of (in this case
quadratic) growth as g(n) = n².
Why do we choose to ignore constant factors and less influential additive terms? One kind of function
that we often consider throughout this course is T(n), which represents the worst-case number of
steps required by an algorithm to process an input of size n. The function T(n) will vary depending
on the computing paradigm that is used to represent the algorithm. For example, one paradigm
might represent the algorithm as a C program, while another might represent it as a sequence of
random-access machine instructions. Now if T1(n) measures the number of algorithmic steps for the first
paradigm, and T2(n) measures the number of steps for the second, then, assuming that a paradigm
does not include any unnecessary overhead, these two functions will likely be within multiplicative
constant factors of one another. In other words, there will exist two constants C1 and C2 for which
C1 T2(n) ≤ T1(n) ≤ C2 T2(n).
For this reason, big-O notation allows one to describe the steps of an algorithm in a mostly
paradigm-independent manner, yet still give meaningful representations of T(n), by ignoring the
paradigm-dependent constant factors.
Let f (n) and g(n) be functions from the set of whole numbers to the set of nonnegative real numbers.
Then
Big-O f (n) = O(g(n)) iff there exist constants C > 0 and k ≥ 0 such that f (n) ≤ Cg(n) for every
n ≥ k.
Big-Ω f (n) = Ω(g(n)) iff there exist constants C > 0 and k ≥ 0 such that f (n) ≥ Cg(n) for every
n ≥ k.
Big-Θ f (n) = Θ(g(n)) iff f (n) = O(g(n)) and f (n) = Ω(g(n)).
little-o f(n) = o(g(n)) iff lim_{n→∞} f(n)/g(n) = 0.
little-ω f(n) = ω(g(n)) iff lim_{n→∞} f(n)/g(n) = ∞.
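As a quick, informal illustration of the big-O definition, the following Python sketch spot-checks candidate witnesses for f(n) = 3n² + 6n + 7 and g(n) = n² from the opening example; the particular values C = 4 and k = 7 are illustrative assumptions, and a finite numerical check is of course evidence rather than a proof.

    # Sketch: spot-check the big-O definition f(n) <= C*g(n) for all n >= k.
    # C = 4 and k = 7 are illustrative guesses, not values taken from the notes.
    def f(n):
        return 3 * n**2 + 6 * n + 7

    def g(n):
        return n**2

    C, k = 4, 7
    violations = [n for n in range(k, 10_000) if f(n) > C * g(n)]
    print("violations found:", violations)  # an empty list supports f(n) = O(g(n))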
Note that a more succinct way to say “property P(n) is true for all n ≥ k, for some constant k” is to
say “property P(n) holds for sufficiently large n”. Although this phrase will be used often throughout
the course, the student should nevertheless make the effort to establish and communicate the value
of k as part of the solution.
Given functions f(n) and g(n), by determining the big-O relationship between f and g we mean
establishing which, if any, of the above growth relationships apply to f and g. Note that, if more
than one of the above relations is true, then we choose the one that gives the most information. For
example, if f(n) = o(g(n)) and f(n) = O(g(n)), then we would simply write f(n) = o(g(n)), since the
former relation implies the latter.
Example 1. Determine the big-O relationship between i) f(n) = 6n² + 2n + 5 and g(n) = 50n², and
ii) f(n) and g(n) = n³.
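A possible solution sketch, arguing directly from the definitions (the constants below are one convenient choice, not the only one): i) for n ≥ 1, f(n) = 6n² + 2n + 5 ≤ 6n² + 2n² + 5n² = 13n² ≤ 50n² = g(n), so f(n) = O(g(n)) with C = 1 and k = 1; conversely, g(n) = 50n² ≤ (50/6) · 6n² ≤ (50/6)f(n), so f(n) = Ω(g(n)), and hence f(n) = Θ(g(n)). ii) lim_{n→∞} f(n)/n³ = lim_{n→∞} (6/n + 2/n² + 5/n³) = 0, so f(n) = o(g(n)).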
Basic Results
In this section we provide some basic results that allow one to determine the big-O growth of a
function, or the big-O relationship between two functions, without having to resort to the definitions.
Theorem 1. Let p(n) be a polynomial of degree a and q(n) be a polynomial of degree b. Then
• p(n) = O(q(n)) if and only if a ≤ b
• p(n) = Ω(q(n)) if and only if a ≥ b
• p(n) = Θ(q(n)) if and only if a = b
• p(n) = o(q(n)) if and only if a < b
• p(n) = ω(q(n)) if and only if a > b
Thus, Theorem 1 could have been invoked to prove that f (n) = o(g(n)), where f and g are the
functions from Example 1.
Theorem 2. Let f (n), g(n), h(n), and k(n) be nonnegative integer functions for sufficiently large
n. Then
• f (n) + g(n) = Θ(max(f (n), g(n)))
• if f (n) = Θ(h(n)) and g(n) = Θ(k(n)), then f (n)g(n) = Θ(h(n)k(n))
• Transitivity of big-O. If f (n) = O(g(n)) and g(n) = O(h(n)) then f (n) = O(h(n)). Note:
transitivity also holds for big-Ω and big-Θ.
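As a quick sanity check of the first identity (a full proof is requested in Exercise 4): for nonnegative f(n) and g(n) we always have max(f(n), g(n)) ≤ f(n) + g(n) ≤ 2 max(f(n), g(n)), so the sum and the maximum are within constant factors of one another.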
Example 2. Use the results of Theorems 1 and 2 to give a succinct expression for the big-O growth
of f(n)g(n), where f(n) = n log(n⁴ + 1) + n(log n)² and g(n) = n² + 2n + 3.
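A possible solution sketch: since n⁴ ≤ n⁴ + 1 ≤ 2n⁴ for n ≥ 1, we have log(n⁴ + 1) = Θ(log n), so the first term of f(n) is Θ(n log n) while the second is Θ(n(log n)²). By the first part of Theorem 2, f(n) = Θ(max(n log n, n(log n)²)) = Θ(n(log n)²), and by Theorem 1, g(n) = Θ(n²). The product rule of Theorem 2 then gives f(n)g(n) = Θ(n² · n(log n)²) = Θ(n³(log n)²).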
Theorem 3. If lim_{n→∞} f(n)/g(n) = C, for some constant C > 0, then f(n) = Θ(g(n)).
Proof of Theorem 3. Mathematically,
lim_{n→∞} f(n)/g(n) = C
means that, for every ε > 0, there exists k ≥ 0, such that
|f(n)/g(n) − C| < ε,
for all n ≥ k. In words, f(n)/g(n) can be made arbitrarily close to C with increasing values of n. Removing
the absolute value yields
C − ε < f(n)/g(n) < C + ε,
which implies
(C − ε)g(n) < f(n) < (C + ε)g(n).
Since C > 0 and ε > 0 are constants, the latter inequalities imply f(n) = Θ(g(n)) so long as C − ε > 0.
Therefore, choosing ε = C/2, the result is proven.
Example 3. Suppose a > 1 and b < 0 are constants, with |b| < a. Prove that aⁿ + bⁿ = Θ(aⁿ).
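A possible solution sketch using Theorem 3: since |b/a| < 1, we have (b/a)ⁿ → 0, and therefore
lim_{n→∞} (aⁿ + bⁿ)/aⁿ = lim_{n→∞} (1 + (b/a)ⁿ) = 1 > 0.
Hence, by Theorem 3, aⁿ + bⁿ = Θ(aⁿ).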
L’Hospital’s Rule. Suppose f(n) and g(n) are both differentiable functions with either
1. lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞, or
2. lim_{n→∞} f(n) = lim_{n→∞} g(n) = 0.
Then
lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n).
Example 4. Prove that for every ε > 0, log n = o(n^ε).
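A possible solution sketch: both log n and n^ε tend to ∞, so L’Hospital’s Rule applies. Working with the natural logarithm (which, by the change of base formula, differs from log n only by a constant factor and so does not affect the conclusion),
lim_{n→∞} ln n/n^ε = lim_{n→∞} (1/n)/(ε n^(ε−1)) = lim_{n→∞} 1/(ε n^ε) = 0.
Therefore log n = o(n^ε).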
The following terms are used to describe the big-O growth of a function. Notice that a function
might satisfy the condition of more than one of the terms. In that case we choose the term that is
most informative, in that it implies the other terms. For example, √n displays sublinear, linear, and
polynomial growth, but we would say that it has sublinear growth, since this term implies the latter
two.
Growth                               Terminology
O(1)                                 constant growth
O(log n)                             logarithmic growth
O(log^k n), for some k ≥ 1           polylogarithmic growth
o(n)                                 sublinear growth
O(n)                                 linear growth
O(n log n)                           log-linear growth
O(n log^k n), for some k ≥ 1         polylog-linear growth
O(n^k), for some k ≥ 1               polynomial growth
Ω(n^k), for every k ≥ 1              superpolynomial growth
Ω(a^n), for some a > 1               exponential growth
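To build some intuition for how these classes separate, the following Python sketch (the representative functions and sample sizes are arbitrary illustrative choices, not part of the notes) tabulates one function from several of the classes at a few input sizes.

    import math

    # One illustrative representative per growth class.
    funcs = [
        ("constant", lambda n: 1.0),
        ("logarithmic", lambda n: math.log2(n)),
        ("linear", lambda n: float(n)),
        ("log-linear", lambda n: n * math.log2(n)),
        ("polynomial (n^2)", lambda n: float(n**2)),
        ("exponential (2^n)", lambda n: 2.0**n),
    ]

    for n in (10, 20, 40, 80):
        row = ", ".join(f"{name}={fn(n):.3g}" for name, fn in funcs)
        print(f"n={n}: {row}")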
Example 5. Describe the growth of the functions from Examples 1 and 2.
Advanced Results
When analyzing the running time of an algorithm, quite often one has to determine the big-O growth
of a series of the form
Σ_{i=1}^{n} f(i) = f(1) + f(2) + · · · + f(n),
for some function f(n). The following theorem shows how to determine the growth when f is
integrable and either increasing or decreasing.
Theorem 4 (Integral Theorem). Let f(x) > 0 be an increasing or decreasing Riemann-integrable
function over the interval [1, ∞). Then
Σ_{i=1}^{n} f(i) = Θ(∫_1^n f(x)dx),
if f is decreasing. Moreover, the same is true if f is increasing, provided f(n) = O(∫_1^n f(x)dx).
Proof of Theorem 4. We prove the case when f is decreasing. The case when f is increasing is
left as an exercise. The quantity ∫_1^n f(x)dx represents the area under the curve of f(x) from 1 to n.
Moreover, for i = 1, . . . , n − 1, the rectangle R_i whose base is positioned from x = i to x = i + 1, and
whose height is f(i + 1), lies under the graph. Therefore,
Σ_{i=1}^{n−1} Area(R_i) = Σ_{i=2}^{n} f(i) ≤ ∫_1^n f(x)dx.
Adding f(1) to both sides of the last inequality gives
Σ_{i=1}^{n} f(i) ≤ ∫_1^n f(x)dx + f(1).
Now, choosing the constant C > 0 large enough that f(1) ≤ C ∫_1^n f(x)dx for all n ≥ 2 (for example,
C = f(1)/∫_1^2 f(x)dx works, since the integral only grows with n) gives
Σ_{i=1}^{n} f(i) ≤ (1 + C) ∫_1^n f(x)dx,
which proves Σ_{i=1}^{n} f(i) = O(∫_1^n f(x)dx).
Now, for i = 1, . . . , n − 1, consider the rectangle R′_i whose base is positioned from x = i to x = i + 1,
and whose height is f(i). This rectangle covers all the area under the graph of f from x = i to
x = i + 1. Therefore,
Σ_{i=1}^{n−1} Area(R′_i) = Σ_{i=1}^{n−1} f(i) ≥ ∫_1^n f(x)dx.
Now adding f(n) to the left side of the last inequality gives
Σ_{i=1}^{n} f(i) ≥ ∫_1^n f(x)dx,
which proves Σ_{i=1}^{n} f(i) = Ω(∫_1^n f(x)dx). Therefore,
Σ_{i=1}^{n} f(i) = Θ(∫_1^n f(x)dx).
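As an informal numerical illustration of the Integral Theorem, the following Python sketch uses f(x) = √x (an increasing function chosen only for this example; note √n = O(∫_1^n √x dx), so the theorem applies) and compares the sum with the corresponding integral. The printed ratios stay within constant bounds, as the theorem predicts; this is a sanity check, not a proof.

    import math

    def f(x):
        return math.sqrt(x)

    def integral(n):
        # Closed form of the integral of sqrt(x) over [1, n].
        return (2.0 / 3.0) * (n**1.5 - 1.0)

    for n in (10, 100, 1000, 10000):
        s = sum(f(i) for i in range(1, n + 1))
        print(f"n={n}: sum={s:.1f}, integral={integral(n):.1f}, ratio={s / integral(n):.4f}")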
Example 6. Give a succinct expression for the order of growth of the sum 1/1 + 1/2 + · · · + 1/n.
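A possible solution sketch: the summand comes from the decreasing function f(x) = 1/x, so the Integral Theorem gives
Σ_{i=1}^{n} 1/i = Θ(∫_1^n dx/x) = Θ(ln n) = Θ(log n).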
Theorem 5 (Log Ratio Test). Suppose f and g are continuous functions over the interval [1, ∞),
and
lim_{n→∞} log(f(n)/g(n)) = lim_{n→∞} (log(f(n)) − log(g(n))) = L.
Then
1. If L = ∞ then lim_{n→∞} f(n)/g(n) = ∞.
2. If L = −∞ then lim_{n→∞} f(n)/g(n) = 0.
3. If L ∈ (−∞, ∞) is a constant then lim_{n→∞} f(n)/g(n) = 2^L.
Example 7. Use the log-ratio test to determine the big-O growth of f(n) = n^(log n) and g(n) = (1 + 1/n)ⁿ.
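A possible solution sketch: log(f(n)) = (log n)², which tends to ∞, while log(g(n)) = n log(1 + 1/n), which tends to the constant log e since (1 + 1/n)ⁿ → e. Thus g(n) = Θ(1) has constant growth, f(n) is superpolynomial (for any fixed k, log(n^k) = k log n = o((log n)²)), and log(f(n)/g(n)) → ∞, so by case 1 of the Log Ratio Test, f(n)/g(n) → ∞; that is, g(n) = o(f(n)) and f(n) = ω(g(n)).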
Exercises
In the following problems, all functions are assumed nonnegative. Show all work and provide careful
explanation to receive a passing mark.
1. Use the definition of big-O to prove that 3n + 2 log n = O(n). Provide the appropriate C and
k constants.
2. Use the definition of big-Ω to prove that n + n log n² = Ω(n log n). Provide the appropriate C
and k constants.
3. Prove that f (n) = O(g(n)) if and only if g(n) = Ω(f (n)).
4. Use the definition of big-Θ to prove that f (n) + g(n) = Θ(max(f (n), g(n))).
5. Prove that (n + a)^b = Θ(n^b), for all real a and b > 0.
6. Prove that if lim_{n→∞} f(n)/g(n) = 0, then f(n) = O(g(n)), but g(n) ≠ O(f(n)).
7. Explain why the statement, “The running time of algorithm A is at most Ω(n log n)”, does not
make sense. Re-write the statement so that it makes more sense.
8. Prove or disprove: 2^(n+1) = O(2ⁿ).
9. Prove or disprove: 2^(2n) = O(2ⁿ).
10. Use any techniques or results from lecture to determine a succinct big-Θ expression for the
growth of the function log^50(n)·n² + log(n⁴)·n^(2.1) + 1000n² + 100000000n.
11. Suppose g(n) ≥ 1 for all n, and that f (n) ≤ g(n) + L, for some constant L ≥ 0 and all n. Prove
that f (n) = O(g(n)).
12. Give an example that shows that the statement of Exercise 11 may not be true if we no longer
assume g(n) ≥ 1.
13. Prove or disprove: if f (n) = O(g(n)) and f (n) ≥ 1 and log(g(n)) ≥ 1 for sufficiently large n,
then log(f (n)) = O(log(g(n))).
14. Prove or disprove: if f (n) = O(g(n)), then 2f (n) = O(2g(n) ).
15. Prove transitivity of big-O: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
16. If g(n) = o(f (n)), then prove that f (n) + g(n) = Θ(f (n)).
17. Suppose we want to prove that statement P (n) is true for every n ≥ 1. The principle of
mathematical induction can establish this using the following two steps. Basis step: show
P (1) is true. Inductive step: assume P (n) is true for some n ≥ 1, and show that this
implies P(n + 1) is true. The assumption made in the inductive step is called the inductive assumption. Use
mathematical induction to prove that
1 + 2 + · · · + n = n(n + 1)/2.
18. Use mathematical induction to prove that 1² + 2² + · · · + n² = n(n + 1)(2n + 1)/6.
19. Use mathematical induction to prove that 1³ + 2³ + · · · + n³ = [n(n + 1)/2]².
20. Using the internet or a calculus book from the library, review the power, product, and quotient
rule of differentiation, and practice them on ten different functions. Make sure to practice on
functions with both positive and negative powers.
21. The chain rule of differentiation states that if h(x) = f (g(x)), then
h′(x) = f′(g(x)) · g′(x).
Use the chain rule to compute the following derivatives.
a. h(x) = √(2x + 1).
b. h(x) = ln⁵(x).
c. h(x) = ln(1/(1 + x)).
22. Use mathematical induction to prove that, for all integers k ≥ 1 and every ε > 0, log^k n = o(n^ε).
23. Compute an exact value for Σ_{i=1}^{n} (3i² + 2i + 5).
24. Compute an exact value of Σ_{i=1}^{n} Σ_{j=1}^{i} (3i + 2j).
25. Use the integral theorem to establish that 1^k + 2^k + · · · + n^k = Θ(n^(k+1)), where k ≥ 1 is an
integer constant.
26. Using the internet or a calculus book from the library, review the power and u-substitution
rules of integration, and practice them on 10 different functions. Make sure to practice on
functions with both positive and negative powers.
27. Let u = u(x) and v = v(x) be differentiable functions. Let du denote the derivative of u, while
dv denotes the derivative of v. Then Integration by parts states that
∫ u dv = uv − ∫ v du.
Use integration by parts to obtain the anti-derivatives of the following functions.
a. f(x) = x²e^(−3x).
b. f(x) = x ln x.
28. Provide a closed-form expression for the asymptotic growth of log 1 + log 2 + · · · + log n.
29. Show that log(n!) = Θ(n log n).
30. Provide a closed-form expression for the asymptotic growth of n + n/2 + n/3 + · · · + 1.
31. Determine the big-O growth of the function f(n) = 2^(√(2 log n)). Explain and show work.
32. Prove that n! = ω(2ⁿ).
33. Determine the big-O relationship between 2^(2ⁿ) and n!.
34. Determine the big-O growth of the function f(n) = (log n)^(log n). Explain and show work.
35. Determine the big-O growth of the function f(n) = (√2)^(log n). Explain and show work.
36. Determine the big-O growth of the function f(n) = (log n)!.
37. Determine the big-O growth of the function f(n) = n^(1/log n).
Exercise Solutions
1. 3n + 2 log n ≤ 3n + 2n = 5n, for all n ≥ 1. C = 5, k = 1.
2. n + n log n² = n + 2n log n ≥ n log n, for all n ≥ 1. C = 1, k = 1.
3. Set up the inequality that the assumption guarantees, and divide both sides by the constant.
4. C1 = 0.5, k1 = 1, and C2 = 2, k2 = 1 are sufficient constants (why?).
5. Use Theorem 3.
6. To prove g(n) ≠ O(f(n)), use a proof by contradiction. In other words, assume the needed C
and k did exist, and show how this leads to a contradiction. Suppose it was true that g(n) ≤ Cf(n)
for some constant C > 0 and n sufficiently large. Then dividing both sides by g(n) yields
f(n)/g(n) ≥ 1/C for sufficiently large n. But since
lim_{n→∞} f(n)/g(n) = 0,
we know that f(n)/g(n) < 1/C for sufficiently large n, which is a contradiction. Therefore,
g(n) ≠ O(f(n)). (For the first claim, the same limit gives f(n)/g(n) < 1, and hence f(n) ≤ g(n),
for sufficiently large n, so f(n) = O(g(n)) with C = 1.)
7. The phrase “at most” implies an upper bound, but Big-Ω implies a lower bound. Big-O should
be used instead.
8. True, since 2^(n+1) = 2 · 2ⁿ. C = 2, k = 0.
9. False. 2^(2n) = 4ⁿ and
lim_{n→∞} 2ⁿ/4ⁿ = lim_{n→∞} 1/2ⁿ = 0.
Now use Exercise 6.
10. Θ(n^(2.1) log n).
11. f (n) ≤ g(n) + L ≤ g(n) + Lg(n) ≤ (1 + L)g(n), for all n ≥ 0, where the second inequality is
true since g(n) ≥ 1. C = 1 + L and k = 0. The result still holds so long as g(n) is bounded
away from zero.
12. Consider f(n) = 1/n and g(n) = 1/n².
13. After taking logarithms, use Exercise 11.
14. False. Consider f (n) = 2n and g(n) = n.
15. Assume f(n) ≤ C1 g(n) for all n ≥ k1, and g(n) ≤ C2 h(n) for all n ≥ k2. Therefore,
f(n) ≤ C1 g(n) ≤ C1 C2 h(n)
for all n ≥ max(k1, k2). C = C1 C2 and k = max(k1, k2).
16. Use Theorem 2 and the fact that g(n) < f (n) for n sufficiently large.
17. Basis step: 1 = (1)(2)/2.
Inductive step: Assume 1 + 2 + · · · + n = n(n + 1)/2 for some n ≥ 1. Show:
1 + 2 + · · · + n + (n + 1) = (n + 1)(n + 2)/2.
Indeed,
1 + 2 + · · · + n + (n + 1) = n(n + 1)/2 + (n + 1) = (n + 1)(n/2 + 1) = (n + 1)(n + 2)/2,
where the first equality is due to the inductive assumption.
18. Basis step: 1 = (1)(2)(3)/6.
Inductive step: Assume 1² + 2² + · · · + n² = n(n + 1)(2n + 1)/6 for some n ≥ 1. Show:
1² + 2² + · · · + n² + (n + 1)² = (n + 1)(n + 2)(2n + 3)/6.
Indeed,
1² + 2² + · · · + n² + (n + 1)² = n(n + 1)(2n + 1)/6 + (n + 1)² = ((n + 1)/6)(n(2n + 1) + 6(n + 1)) =
((n + 1)/6)(2n² + 7n + 6) = ((n + 1)/6)(n + 2)(2n + 3) = (n + 1)(n + 2)(2n + 3)/6,
where the first equality is due to the inductive assumption.
19. Basis step: 1 = [(1)(2)/2]².
Inductive step: Assume 1³ + 2³ + · · · + n³ = [n(n + 1)/2]² for some n ≥ 1. Show:
1³ + 2³ + · · · + n³ + (n + 1)³ = [(n + 1)(n + 2)/2]².
Indeed,
1³ + 2³ + · · · + n³ + (n + 1)³ = [n(n + 1)/2]² + (n + 1)³ = ((n + 1)²/4)(n² + 4(n + 1)) =
((n + 1)²/4)(n² + 4n + 4) = (n + 1)²(n + 2)²/4 = [(n + 1)(n + 2)/2]²,
where the first equality is due to the inductive assumption.
20. Answers will vary.
21. a. h′(x) = 1/√(2x + 1).
b. h′(x) = (5/x) ln⁴(x).
c. h′(x) = (−1/(1 + x)²)/(1/(1 + x)) = −1/(1 + x).
22. Basis step: log¹ n = o(n^ε) by Example 4.
Inductive step: Assume lim_{n→∞} log^k(n)/n^ε = 0 for some k ≥ 1. Show:
lim_{n→∞} log^(k+1)(n)/n^ε = 0.
For simplicity, we assume that log(n) has base e, so that (log n)′ = 1/n. This will not affect
the big-O growth of the ratio, since, by the change of base formula,
log_b a = log_c a / log_c b,
and so logarithms with different bases only differ by a constant factor, which does not
affect big-O growth. Then by L’Hospital’s Rule,
lim_{n→∞} log^(k+1)(n)/n^ε = lim_{n→∞} (log^(k+1)(n))′/(n^ε)′ = lim_{n→∞} ((k + 1) log^k(n) · (1/n))/(ε n^(ε−1)) = ((k + 1)/ε) lim_{n→∞} log^k(n)/n^ε = 0,
where the last equality is due to the inductive assumption.
23. (n/2)(2n² + 5n + 13).
24. n(n + 1)(8n + 7)/6.
25. By the Integral Theorem,
Σ_{i=1}^{n} i^k = Θ(∫_1^n x^k dx) = Θ(x^(k+1)/(k + 1) |_1^n) = Θ(n^(k+1)).
26. Answers will vary.
27. a. (−e^(−3x)/3)(x² + 2x/3 + 2/9) + C.
b. (1/2)x² ln(x) − (1/4)x² + C.
28. By the Integral Theorem,
Σ_{i=1}^{n} ln i = Θ(∫_1^n ln x dx).
Moreover,
∫_1^n ln x dx = x ln x |_1^n − ∫_1^n 1 dx = n ln n − n + 1 = Θ(n ln n).
29. Since log ab = log a + log b, we have
log(n!) = log(n(n − 1)(n − 2) · · · 1) = log n + log(n − 1) + log(n − 2) + · · · + log 1.
Therefore, from the previous exercise, we have log(n!) = Θ(n log n).
30. We have
n + n/2 + n/3 + · · · + 1 = n(1 + 1/2 + 1/3 + · · · + 1/n).
Therefore, by Example 6, n + n/2 + n/3 + · · · + 1 = Θ(n log n).
31. Sublinear growth. Use the Log-ratio test with g(n) = n.
32. Use the Log-ratio test, and the fact that Σ_{i=1}^{n} log(i) = Θ(n log n).
33. Use the Log-ratio test to show that the first function is little-ω of the second.
34. Superpolynomial but not exponential growth. Use the Log-ratio test against an arbitrary
polynomial function nk . Then use it again against an arbitrary exponential function an , where
a > 1.
35. Sublinear. Use logarithm identities.
36. Superpolynomial. First analyze the growth of log(f(n)). Then consider the growth of 2^(log(f(n))),
which has the same growth as f(n).
37. Constant growth. Take the logarithm of f (n) and analyze its growth.