Analysis of Algorithms:
Methods and Examples
CSE 2320 – Algorithms and Data Structures
Alexandra Stefan
Based on presentations by
Vassilis Athitsos and Bob Weems
University of Texas at Arlington
Updated 9/11/2016
Reading
• CLRS: Chapter 3 (all)
• Notation: f(n) = Θ(n²) (if the dominant term is n²).
• Given a function, e.g. f(n) = 15n³ + 7n² + 3n + 20, find Theta:
  – find the dominant term: 15n³
  – ignore the constant: n³
  – f(n) = Θ(n³)
Counting instructions – part 1
Count in detail the total number of instructions executed by each of
the following pieces of code:
// Example A. Notice the ; at the end of the for loop.
for (i = 0; i < n; i++) ;
---------------------------------------------------------------------------------------------
// Example B (source: Dr. Bob Weems)
for (i = 0; i < n; i++)
    for (j = 0; j < s; j++)
    {
        c[i][j] = 0;
        for (k = 0; k < t; k++)
            c[i][j] += a[i][k]*b[k][j];
    }
---------------------------------------------------------------------------------------------
Counting instructions – part 1
General pattern (assume the condition is TRUE n times):
for (init; cond; update)    // 1 + 1 + n*(1 + 1 + body_instr_count)
    body                    // (1 init, 1 false test, n true tests, n updates)

// Example A. Notice the ; at the end of the for loop.
for (i = 0; i < n; i++) ;   // 1 + 1 + n*(1 + 1 + 0) = 2 + 2n
---------------------------------------------------------------------------------------------
// Example B (source: Dr. Bob Weems)
for (i = 0; i < n; i++)                 // 1 + 1 + n*(1 + 1 + (2 + 5s + 3st))
    for (j = 0; j < s; j++)             //   = 2 + n*(4 + 5s + 3st)
    {                                   //   = 2 + 4n + 5ns + 3nst
        c[i][j] = 0;                    // middle loop: 1 + 1 + s*(1 + 1 + 1 + (2 + 3t))
        for (k = 0; k < t; k++)         //   = 2 + s*(5 + 3t) = 2 + 5s + 3st
            c[i][j] += a[i][k]*b[k][j]; // inner loop: 1 + 1 + t*(1 + 1 + 1) = 2 + 3t
    }
---------------------------------------------------------------------------------------------
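As a sanity check (my addition, not from the original slides), the counting model can be verified by instrumenting Example B so that every init, condition test, update, and body statement bumps a counter; the sizes N, S, T below are arbitrary test values:

/* Instrumented Example B: each counted "instruction" does steps++.
   The comma operator lets us count condition tests and updates too. */
#include <stdio.h>
#include <assert.h>
#define N 4
#define S 3
#define T 5
int main(void) {
    double a[N][T] = {{0}}, b[T][S] = {{0}}, c[N][S];
    long n = N, s = S, t = T, steps = 0;
    int i, j, k;
    steps++;                                      /* i = 0 (init)        */
    for (i = 0; ++steps, i < n; steps++, i++) {   /* each test, each i++ */
        steps++;                                  /* j = 0 (init)        */
        for (j = 0; ++steps, j < s; steps++, j++) {
            steps++; c[i][j] = 0;
            steps++;                              /* k = 0 (init)        */
            for (k = 0; ++steps, k < t; steps++, k++) {
                steps++; c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    assert(steps == 2 + 4*n + 5*n*s + 3*n*s*t);   /* matches the formula */
    printf("steps = %ld\n", steps);
    return 0;
}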
Counting instructions – part 2
(n² time complexity and its justification, review)
// Example G. T(n) = Θ(n²)
for (i = 0; i < n; i++)              // n iterations
    for (j = 0; j < n; j = j+1)      // n iterations each
        printf("A");
---------------------------------------------------------------------------------------------
// Example H. (similar to selection sort processing) T(n) = Θ(n²)
for (i = 0; i < (n-1); i++)          // inner loop runs (n-1) + (n-2) + … + 2 + 1
    for (j = i+1; j < n; j = j+1)    //   = n(n-1)/2 times => Θ(n²)
        printf("A");
---------------------------------------------------------------------------------------------
// Example I. (similar to insertion sort processing) T(n) = Θ(n²)
for (i = 1; i < n; i++)              // inner loop runs 1 + 2 + … + (n-2) + (n-1)
    for (j = i-1; j >= 0; j = j-1)   //   = n(n-1)/2 times => Θ(n²)
        printf("A");
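A quick empirical check (my addition, not from the slides) that Example H executes printf exactly n(n-1)/2 times (Example I gives the same count):

/* Count Example H's printf executions and compare with n(n-1)/2. */
#include <assert.h>
int main(void) {
    for (long n = 1; n <= 2000; n++) {
        long cnt = 0;
        for (long i = 0; i < n-1; i++)
            for (long j = i+1; j < n; j++)
                cnt++;                    /* stands in for printf("A") */
        assert(cnt == n*(n-1)/2);
    }
    return 0;
}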
Time complexity – part 1
// Example C (source: Dr. Bob Weems) T(N) = …
for (i = 0; i < N; i++)
    for (j = 1; j < N; j = j+j)
        printf("A");
---------------------------------------------------------------------------------------------
// Example D. T(N) = ….
for (i = 0; i < N; i++)
    for (j = 1; j < N; j = j*2)
        printf("A");
---------------------------------------------------------------------------------------------
// Example E. T(N) = ….
for (i = 0; i < N; i++)
    for (j = N; j >= 1; j = j/2)
        printf("A");
---------------------------------------------------------------------------------------------
// Example F. T(N) = ….
for (i = 0; i < N; i++)
    for (j = 1; j < N; j = j+2)
        printf("A");
Time complexity – part 1
// Example C (source: Dr. Bob Weems) T(N) = Θ(N lg N)
for (i = 0; i < N; i++)          // N iterations
    for (j = 1; j < N; j = j+j)  // j: 1, 2, 4, …, N/2 => lg N values
        printf("A");
---------------------------------------------------------------------------------------------
// Example D. T(N) = Θ(N lg N)
for (i = 0; i < N; i++)          // N iterations
    for (j = 1; j < N; j = j*2)  // j: 1, 2, 4, …, N/2 => lg N values
        printf("A");
---------------------------------------------------------------------------------------------
// Example E. T(N) = Θ(N lg N)
for (i = 0; i < N; i++)          // N iterations
    for (j = N; j >= 1; j = j/2) // j: N, N/2, …, 2, 1 => (1 + lg N) values
        printf("A");
---------------------------------------------------------------------------------------------
// Example F. T(N) = Θ(N²)
for (i = 0; i < N; i++)          // N iterations
    for (j = 1; j < N; j = j+2)  // j: 1, 3, 5, 7, …, N-4, N-2 => N/2 values
        printf("A");
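The "lg N values" claim can be checked numerically (my sketch, assuming N is a power of two so the count is exactly lg N):

/* Count Example D's inner-loop iterations for one value of i. */
#include <assert.h>
int main(void) {
    for (long N = 2; N <= (1L << 30); N *= 2) {
        long cnt = 0;
        for (long j = 1; j < N; j *= 2)
            cnt++;                  /* j: 1, 2, 4, …, N/2 */
        assert((1L << cnt) == N);   /* cnt == lg N exactly */
    }
    return 0;
}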
Time complexity – part 2
Source: code section from the LU decomposition code in
“Notes 2: Growth of functions” by Dr. Weems.

1. for (k=1; k<=n-1; k++) {
2.    …
3.    for (j=k+1; j<=n; j++) {
4.       temp = ab[ell][j];
5.       ab[ell][j] = ab[k][j];
6.       ab[k][j] = temp;
7.       for (i=k+1; i<=n; i++)
8.          ab[i][j] -= ab[i][k]*ab[k][j];
      }
      …
   }

Fill in, for each value of k, the range of j (loop 3), the range of i (loop 7),
and the number of executions of Line 8 (nested loops 3 & 7):

k     | j (loop 3) | i (loop 7) | Line 8 (nested 3 & 7)
------|------------|------------|----------------------
1     |            |            |
2     |            |            |
…     |            |            |
n-2   |            |            |
n-1   |            |            |
total |            |            |
Time complexity – part 2
Source: code section from the LU decomposition code in
“Notes 2: Growth of functions” by Dr. Weems.

1. for (k=1; k<=n-1; k++) {
2.    …
3.    for (j=k+1; j<=n; j++) {
4.       temp = ab[ell][j];
5.       ab[ell][j] = ab[k][j];
6.       ab[k][j] = temp;
7.       for (i=k+1; i<=n; i++)
8.          ab[i][j] -= ab[i][k]*ab[k][j];
      }
      …
   }

k     | j (loop 3)             | i (loop 7), for each j | Line 8 (nested 3 & 7)
------|------------------------|------------------------|----------------------
1     | 2 -> n ((n-1) values)  | 2 -> n, …, 2 -> n      | (n-1)²
2     | 3 -> n ((n-2) values)  | 3 -> n, …, 3 -> n      | (n-2)²
…     | …                      | …                      | …
n-2   | n-1 -> n (2 values)    | n-1 -> n, n-1 -> n     | (2)²
n-1   | n -> n (1 value)       | n -> n                 | (1)²
total |                        |                        | ∑_{y=1}^{n-1} y² = Θ(n³)

See the approximation of summations by integrals example
for a solution to ∑_{y=1}^{n-1} y² = Θ(n³).
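As a quick check (my addition), the number of times Line 8 runs can be counted directly and compared with the closed form ∑_{y=1}^{n-1} y² = (n-1)n(2n-1)/6, which is Θ(n³):

/* Count executions of Line 8 for a range of (arbitrary) sizes n. */
#include <assert.h>
int main(void) {
    for (long n = 2; n <= 200; n++) {
        long cnt = 0;
        for (long k = 1; k <= n-1; k++)          /* loop 1 */
            for (long j = k+1; j <= n; j++)      /* loop 3 */
                for (long i = k+1; i <= n; i++)  /* loop 7 */
                    cnt++;                       /* Line 8 */
        assert(cnt == (n-1)*n*(2*n-1)/6);
    }
    return 0;
}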
Estimate runtime
• Problem:
  The total number of instructions in a program (or a piece of a program) is
  10¹² and it runs on a computer that executes 10⁹ instructions per second.
  How long will it take to run this program? Give the answer in seconds. If it is
  very large, convert it to larger units (hours, days, years).
• Summary:
  – Total instructions: 10¹²
  – Speed: 10⁹ instructions/second
• Answer:
  – Time = (total instructions) / speed =
    (10¹² instructions) / (10⁹ instr/sec) = 10³ seconds ≈ 17 minutes
• Note that this computation is similar to computing the time it takes to travel
  a certain distance (e.g. 120 miles) given the speed (e.g. 60 miles/hour).
Estimate runtime
• A slightly different way to formulate the same problem:
  – the total number of instructions in a program (or a piece of a program)
    is 10¹², and
  – it runs on a computer that executes one instruction in one
    nanosecond (10⁻⁹ seconds).
  – How long will it take to run this program? Give the answer in
    seconds. If it is very large, convert it to larger units (hours, days,
    years).
• Summary:
  – 10¹² total instructions
  – 10⁻⁹ seconds per instruction
• Answer:
  – Time = (total instructions) * (seconds per instruction) =
    (10¹² instructions) * (10⁻⁹ sec/instr) = 10³ seconds ≈ 17 minutes
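The same estimate as a few lines of C (my sketch; the numbers are from the problem above):

#include <stdio.h>
int main(void) {
    double instructions = 1e12;   /* total instructions      */
    double per_second   = 1e9;    /* instructions per second */
    double seconds = instructions / per_second;             /* 1000 s   */
    printf("%.0f s = %.1f min\n", seconds, seconds / 60.0); /* ~16.7 min */
    return 0;
}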
Motivation for Big-Oh Notation
• Scenario: a client requests an application for sorting
  their records. Which one of the 3 sorting algorithms
  that we discussed would you implement if time is the
  most important criterion (space is not a problem)?
  – Selection sort
  – Insertion sort
  – Merge sort
Comparing algorithms
• Comparing linear, N lg N, and quadratic complexity:

  N                  | N lg N         | N²
  -------------------|----------------|----------------------
  10⁶ (1 million)    | ≈ 20 million   | 10¹² (one trillion)
  10⁹ (1 billion)    | ≈ 30 billion   | 10¹⁸ (one quintillion)
  10¹² (1 trillion)  | ≈ 40 trillion  | 10²⁴ (one septillion)

• Quadratic time algorithms become impractical (too
  slow) much faster than linear and N lg N algorithms.
• Of course, what we consider "impractical" depends
  on the application.
  – Some applications are more tolerant of longer running
    times.

  N            | lg N
  -------------|-------
  1000         | ≈ 10
  10⁶ = 1000²  | ≈ 2·10
  10⁹ = 1000³  | ≈ 3·10
  10¹² = 1000⁴ | ≈ 4·10
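A small sketch (my addition) that reproduces the N lg N column of the table above; compile with -lm:

#include <stdio.h>
#include <math.h>
int main(void) {
    double Ns[] = {1e6, 1e9, 1e12};
    for (int i = 0; i < 3; i++)
        printf("N = %.0e: N lg N = %.1e, N^2 = %.0e\n",
               Ns[i], Ns[i] * log2(Ns[i]), Ns[i] * Ns[i]);
    return 0;   /* prints ~2.0e7, ~3.0e10, ~4.0e13, matching the table */
}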
Motivation for Big-Oh Notation
• Given an algorithm, we want to find a function that describes the
  time performance of the algorithm.
• Computing the number of instructions in detail is NOT desired:
  – It is complicated and the details are not relevant.
  – The number of machine instructions and the runtime depend on factors other
    than the algorithm:
    • Programming language
    • Compiler optimizations
    • Performance of the computer it runs on (CPU, memory)
  – There are some details that we would actually NOT want this function to
    include, because they can make the function unnecessarily complicated.
• When comparing two algorithms we want to see which one is
  better for very large data – asymptotic behavior.
  – It is not important what happens for small-size data.
  – Asymptotic behavior = rate of growth = order of growth.
• The Big-Oh notation describes the asymptotic behavior and
  greatly simplifies algorithmic analysis.
Asymptotic Notation
• Goal: we want to be able to say things like:
  – Selection sort will take time strictly proportional to n²:   Θ(n²)
  – Insertion sort will take time at most proportional to n²:    O(n²)
    • Note that we can still say that insertion sort is Θ(n²).
    • Use big-Oh for upper bounding complex functions of n.
  – Any sorting algorithm will take time at least proportional to n:   Ω(n)
• Informal definition: f(n) grows ‘proportional’ to g(n) if:
      lim_{n→∞} f(n)/g(n) = c ≠ 0   (c is a non-zero constant)
• Math functions that are:
  – Θ(n²):  "=" …  tight bound
  – O(n²):  "≤" …  upper bound (big-Oh – bigger)
  – Ω(n²):  "≥" …  lower bound
• Abuse of notation: we write f(n) = O(g(n)) instead of f(n) ∈ O(g(n)).
Abuse of notation
Instead of:
    f(n) ∈ O(g(n))
we may use:
    f(n) = O(g(n))
Big-Oh
• A function f(n) is said to be O(g(n)) if there exist constants c₀
  and n₀ such that:
      f(n) ≤ c₀·g(n) for all n ≥ n₀.
• Theorem: if lim_{n→∞} f(n)/g(n) = c is a constant, then f(n) = O(g(n)).
• Typically, f(n) is the running time of an algorithm.
  – This can be a rather complicated function.
• We try to find a g(n) that is simple (e.g. n²), and such that
  f(n) = O(g(n)).
Asymptotic Bounds and Notation
(CLRS chapter 3)
• f(n) is O(g(n)) if there exist positive constants c₀ and n₀ such that:
      f(n) ≤ c₀·g(n) for all n ≥ n₀.
  – Theorem: if lim_{n→∞} f(n)/g(n) = c is a constant, then f(n) = O(g(n)).
  – g(n) is an asymptotic upper bound for f(n).
• f(n) is Ω(g(n)) if there exist positive constants c₀ and n₀ such that:
      c₀·g(n) ≤ f(n) for all n ≥ n₀.
  – Theorem: if lim_{n→∞} g(n)/f(n) = c is a constant, then f(n) = Ω(g(n)).
  – g(n) is an asymptotic lower bound for f(n).
• f(n) is Θ(g(n)) if there exist positive constants c₀, c₁ and n₀ such that:
      c₀·g(n) ≤ f(n) ≤ c₁·g(n) for all n ≥ n₀.
  – Theorem: if lim_{n→∞} f(n)/g(n) = c ≠ 0 is a constant, then f(n) = Θ(g(n)).
  – g(n) is an asymptotic tight bound for f(n).
Asymptotic Bounds and Notation
(CLRS chapter 3)
“little-oh”: o
• Theorem: if lim_{n→∞} f(n)/g(n) = 0, then f(n) = o(g(n)).
• f(n) is o(g(n)) if for any positive constant c₀ there exists n₀ s.t.:
  f(n) ≤ c₀·g(n) for all n ≥ n₀.
• g(n) is an asymptotic upper bound for f(n) (but NOT tight).
• E.g.: n = o(n²), n = o(n lg n), n² = o(n⁴), …

“little-omega”: ω
• Theorem: if lim_{n→∞} f(n)/g(n) = ∞, then f(n) = ω(g(n)).
• f(n) is ω(g(n)) if for any positive constant c₀ there exists n₀ s.t.:
  c₀·g(n) ≤ f(n) for all n ≥ n₀.
• g(n) is an asymptotic lower bound for f(n) (but NOT tight).
• E.g.: n² = ω(n), n lg n = ω(n), n³ = ω(n²), …
L’Hospital’s Rule
If lim_{n→∞} f(n) and lim_{n→∞} g(n) are both 0 or both ∞,
and if lim_{n→∞} f′(n)/g′(n) is a constant or ∞,
then:
    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)
Big-Oh Notation
[figure omitted]
Theta vs Big-Oh
• The Theta notation is more strict than the Big-Oh
  notation:
  – TRUE: n² = O(n¹⁰⁰)
  – FALSE: n² = Θ(n¹⁰⁰)
Properties of O, Ω and Θ
1. f(n) = O(g(n)) => g(n) = Ω(f(n))
2. f(n) = Ω(g(n)) => g(n) = O(f(n))
3. f(n) = Θ(g(n)) => g(n) = Θ(f(n))
4. If f(n) = O(g(n)) and f(n) = Ω(g(n)) => f(n) = Θ(g(n))
5. If f(n) = Θ(g(n)) => f(n) = O(g(n)) and f(n) = Ω(g(n))
Transitivity (proved in later slides):
6. If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
7. If f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n)).
Simplifying Big-Oh Notation
• Suppose that we are given this running time:
      f(n) = 35n² + 41n + lg(n) + 1532.
• How can we express f(n) in Big-Oh notation?
Asymptotic Notation
• Let
      f(n) = 35n² + 41n + lg(n) + 1532.
• We say that f(n) = O(n²).
• Also correct, but too detailed (do not use them):
  – f(n) = O(n² + n)
  – f(n) = O(35n² + 41n + lg(n) + 1532).
• In recurrence formulas and proofs, you may see these
  notations (see CLRS, page 49):
  – f(n) = 2n² + Θ(n)
    • There is a function h(n) in Θ(n) s.t. f(n) = 2n² + h(n).
  – 2n² + Θ(n) = Θ(n²)
    • For any function h(n) in Θ(n), there is a function g(n) in Θ(n²) s.t.
      f(n) = 2n² + h(n) = g(n).
    • For any function h(n) in Θ(n), 2n² + h(n) is in Θ(n²).
Proofs using the definition: O
• Let f(n) = 35n² + 41n + lg(n) + 1532.
  Show (using the definition) that f(n) = O(n²).
• Proof:
  Want to find n₀ and c₀ s.t., for all n ≥ n₀: f(n) ≤ c₀·n².
  Version 1:
  – Upper-bound each term by n² for large n (e.g. n ≥ 1532):
    f(n) = 35n² + 41n + lg(n) + 1532 ≤ 35n² + n² + n² + n² = 38n²
  – Use: c₀ = 38, n₀ = 1536
    f(n) = 35n² + 41n + lg(n) + 1532 ≤ 38n², for all n ≥ 1536
  Version 2:
  – You can also pick c₀ large enough to cover the coefficients of all the
    terms: c₀ = 1609 = (35 + 41 + 1 + 1532), n₀ = 1
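A quick numeric spot-check of Version 1's constants (my sketch, sampling powers of two; compile with -lm):

#include <assert.h>
#include <math.h>
int main(void) {
    for (double n = 1536; n <= 1e9; n *= 2) {
        double f = 35*n*n + 41*n + log2(n) + 1532;
        assert(f <= 38*n*n);   /* f(n) <= c0*n^2 for the sampled n >= n0 */
    }
    return 0;
}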
Proofs using the definition: Ω, Θ
• Let f(n) = 35n² + 41n + lg(n) + 1532.
  Show (using the definition) that f(n) = Ω(n²) and f(n) = Θ(n²).
• Proof of Ω:
  Want to find n₁ and c₁ s.t., for all n ≥ n₁: f(n) ≥ c₁·n².
  – Use: c₁ = 1, n₁ = 1
    f(n) = 35n² + 41n + lg(n) + 1532 ≥ n², for all n ≥ 1
• Proof of Θ:
  Version 1: We have proved f(n) = O(n²) and f(n) = Ω(n²), and so f(n) = Θ(n²)
  (property 4 from the properties slide).
  Version 2: We found c₀ = 38, n₀ = 1536 and c₁ = 1, n₁ = 1 s.t.:
    f(n) = 35n² + 41n + lg(n) + 1532 ≤ 38n², for all n ≥ 1536
    f(n) = 35n² + 41n + lg(n) + 1532 ≥ n², for all n ≥ 1
    => n² ≤ f(n) ≤ 38n², for all n ≥ 1536 => f(n) = Θ(n²)
Polynomial functions
• If f(n) is a polynomial function, then it is Θ of the dominant
  term.
• E.g. f(n) = 15n³ + 7n² + 3n + 20,
  find g(n) s.t. f(n) = Θ(g(n)):
  – find the dominant term: 15n³
  – ignore the constant, left with: n³
  – => g(n) = n³
  – => f(n) = Θ(n³)
Using Limits
• If lim_{n→∞} f(n)/g(n) = c is a non-zero constant, then g(n) = ____(f(n)).
  – In this definition, both zero and infinity are excluded.
  – In this case we can also say that f(n) = Θ(g(n)).
    This can easily be proved using the limit or the reflexivity property of Θ.
• If lim_{n→∞} f(n)/g(n) = c is a constant, then g(n) = ____(f(n)).
  – "Constant" includes zero, but not infinity.
• If lim_{n→∞} f(n)/g(n) = ∞, then g(n) = ____(f(n)).
  – f(n) grows much faster than g(n).
• If lim_{n→∞} g(n)/f(n) is a constant, then g(n) = ____(f(n)).
  – "Constant" includes zero, but does NOT include infinity.
Using Limits
• If lim_{n→∞} f(n)/g(n) = c is a non-zero constant, then f(n) = Θ(g(n)).
  – In this definition, both zero and infinity are excluded.
  – In this case we can also say that g(n) = Θ(f(n)).
    This can easily be proved using the limit or the reflexivity property of Θ.
• If lim_{n→∞} g(n)/f(n) = c is a constant, then f(n) = Ω(g(n)).
  – "Constant" includes zero, but not infinity.
• If lim_{n→∞} g(n)/f(n) = ∞, then f(n) = O(g(n)).
  – g(n) grows much faster than f(n).
• If lim_{n→∞} f(n)/g(n) is a constant, then f(n) = O(g(n)).
  – "Constant" includes zero, but does NOT include infinity.
Using Limits: Example 1
• Suppose that we are given this running time:
      f(n) = 35n² + 41n + lg(n) + 1532.
• Use the limits theorem to show that f(n) = O(n²).
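A possible solution (my addition, not on the original slide):
    lim_{n→∞} f(n)/n² = lim_{n→∞} (35n² + 41n + lg(n) + 1532)/n²
                      = 35 + 0 + 0 + 0 = 35,
a constant, so by the theorem f(n) = O(n²). Since 35 ≠ 0, the same limit
also gives f(n) = Θ(n²).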
Using Limits: Example 2
• Show that (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3) = Θ(???).
Using Limits: An Example
• Show that (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3) = Θ(n²).
• Proof: Here f(n) = (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3).
  Let g(n) = n².
  We want to show that lim_{n→∞} f(n)/g(n) = c ≠ 0, and so f(n) = Θ(g(n)).
• Solution 1:
  lim_{n→∞} f(n)/g(n)
    = lim_{n→∞} [(n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3)] · (1/n²)
    = lim_{n→∞} (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n⁵ + n³ + 3n²)
    = 1/5
• Solution 2 (L’Hospital):
  lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)
    = lim_{n→∞} (5n⁴ + 3·4n³ + 3n² + 2·2n + 1) / (5·5n⁴ + 3n² + 3·2n)
    = … = lim_{n→∞} (5·4·3·2·n) / (5·5·4·3·2·n)
    = 1/5
Big-Oh Hierarchy
• 1 = O(lg(n))
• lg(n) = O(n)
• n = O(n²)
• If 0 ≤ c ≤ d, then nᶜ = O(nᵈ).
  – Higher-order polynomials always get larger than lower-order polynomials,
    eventually.
• For any d, if c > 1, nᵈ = O(cⁿ).
  – Exponential functions always get larger than polynomial functions,
    eventually.
• You can use these facts in your assignments.
• You can apply transitivity to derive other facts, e.g., that
  lg(n) = O(n²).

O(1), O(lg(n)), O(√n), O(n), O(n lg n), O(n²), …, O(nᶜ), O(cⁿ)
Big-Oh Transitivity
• If f(n) = O(g(n)) and g(n) = O(h(n)), then
  f(n) = O(h(n)).
Proof:
Big-Oh Transitivity
• If f(n) = O(g(n)) and g(n) = O(h(n)), then
  f(n) = O(h(n)).
Proof:
We want to find c₃ and n₃ s.t. f(n) ≤ c₃·h(n), for all n ≥ n₃.
We know:
  f(n) = O(g(n)) => there exist c₁, n₁ s.t. f(n) ≤ c₁·g(n), for all n ≥ n₁
  g(n) = O(h(n)) => there exist c₂, n₂ s.t. g(n) ≤ c₂·h(n), for all n ≥ n₂
For n ≥ max(n₁, n₂), both inequalities hold, so:
  f(n) ≤ c₁·g(n) ≤ c₁·c₂·h(n), for all n ≥ max(n₁, n₂)
Use: c₃ = c₁·c₂ and n₃ = max(n₁, n₂).
Using Substitutions
• If lim_{x→∞} h(x) = ∞, then:
      f(x) = O(g(x)) ⇒ f(h(x)) = O(g(h(x))).
  (This can be proved.)
• How do we use that?
• For example, prove that lg(n) = O(√n).
Using Substitutions
• If lim_{x→∞} h(x) = ∞, then:
      f(x) = O(g(x)) ⇒ f(h(x)) = O(g(h(x))).
• How do we use that?
• For example, prove that lg(n) = O(√n).
• Use h(n) = √n. We get:
      lg(n) = O(n) ⇒ lg(√n) = O(√n).
  Since lg(√n) = (1/2)·lg(n), it follows that lg(n) = O(√n).
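A numeric spot-check (my addition): lg(n) ≤ √n fails for some small n (e.g. 4 < n < 16), but n₀ = 16 works; compile with -lm:

#include <assert.h>
#include <math.h>
int main(void) {
    for (double n = 16; n <= 1e12; n *= 2)
        assert(log2(n) <= sqrt(n));   /* lg n <= sqrt(n) for n >= 16 */
    return 0;
}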
Example Problem 1
• Is n = O(sin(n)·n²)?
• Answer: No. sin(n) oscillates between −1 and 1 and is negative for
  infinitely many n, so sin(n)·n² is negative infinitely often and no
  constant c₀ can make n ≤ c₀·sin(n)·n² hold for all large n.
Example Problem 2
• Show that max(f(n), g(n)) is Θ(f(n) + g(n)), for asymptotically
  non-negative f and g.
  – Show O: max(f(n), g(n)) ≤ f(n) + g(n), so c₀ = 1 works.
  – Show Ω: max(f(n), g(n)) ≥ (f(n) + g(n))/2, so c₁ = 1/2 works.
Asymptotic notation for two
parameters (CLRS)
f(n,m) is O(g(n,m)) if there exist constants c₀, n₀ and m₀ such
that:
    f(n,m) ≤ c₀·g(n,m) for all pairs (n,m) s.t. either n ≥ n₀ or m ≥ m₀
[figure: f(n,m) over the (n,m) plane, with thresholds n₀ and m₀]
Useful logarithm properties
• c^lg(n) = n^lg(c)
  – Proof: apply lg on both sides and you get two equal terms:
    lg(c^lg(n)) = lg(n^lg(c))  =>  lg(n)·lg(c) = lg(c)·lg(n)
• Can we also say that cⁿ = nᶜ?
  – NO!
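A numeric spot-check of the identity (my sketch; c = 3 is an arbitrary base, compile with -lm):

#include <assert.h>
#include <math.h>
int main(void) {
    double c = 3.0;
    for (double n = 2; n <= 1e6; n *= 10) {
        double lhs = pow(c, log2(n));            /* c^(lg n) */
        double rhs = pow(n, log2(c));            /* n^(lg c) */
        assert(fabs(lhs - rhs) <= 1e-9 * lhs);   /* equal up to rounding */
    }
    return 0;
}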
Summary
• Definitions
• Properties: transitivity, reflexivity, …
• Using limits
• Big-Oh hierarchy
• Substitution
• Example problems
• Asymptotic notation for two parameters
• a^log_b(n) = n^log_b(a)   (note log_b in the exponent)
• aⁿ ≠ nᵃ