Complexity
It’s not as complex as it sounds!
Learning Goals
After this unit, you should be able to...
• Define which program operations we measure in an algorithm in order to approximate its efficiency.
• Define “input size” and determine the effect (in terms of performance) that input size has on an algorithm.
• Give examples of common practical limits of problem size for each complexity class.
• Give examples of tractable, intractable, and undecidable problems.
• Given code, write a formula which measures the number of steps executed as a function of the size of the input (N).
• Compute the worst-case asymptotic complexity of an algorithm (e.g., the worst possible running time based on the size of the input (N)).
• Categorize an algorithm into one of the common complexity classes.
• Explain the differences between best-, worst-, and average-case analysis.
• Describe why best-case analysis is rarely relevant and how worst-case analysis may never be encountered in practice.
• Given two or more algorithms, rank them in terms of their time and space complexity.
Complexity Introduction
• Complexity theory addresses the issue of efficiency
in algorithms
• This allows us to compare algorithms (that solve the
same problem), which is very important!
• How do you suppose we might measure efficiency of
an algorithm?
Complexity Introduction
• Think for a moment about the efficiency measures
you just suggested.
• Can you think of any confounding factors that would
make comparison along these lines difficult?
• E.g. Using wall-clock time... does this work?
Complexity Introduction
• In reality, all measures have trade-offs.
In our class,
we’ll focus on one complexity measure:
The number of operations performed by an
algorithm on an input of a given size
• Number of operations refers to the # of instructions
executed, # of function calls, etc.
• Size refers to the # of items upon which the algorithm
is applied
• E.g. # of items to be sorted, # of data points, # of
customers to process, etc.
Complexity Introduction
• We express the complexity of an algorithm as the
number of operations performed as a function of the input
size n
• We write this as T(n)
• What if there are multiple different inputs of size n?
• E.g. sorting a set of size n: there are n! possible
orderings of the data
Complexity Introduction
• In the case of multiple inputs, we can talk about
algorithm performance in terms of...
• Worst case
• Average case
• Best case
Big-O (“Big-oh”) Notation
Let f(n) and g(n) be functions mapping Z+→R+.
f(n) is O(g(n)) if there exist constants c∈R+ and n0∈Z+
such that for all n ≥ n0:
f(n) ≤ c⋅g(n)
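A witness pair (c, n0) from this definition can be sanity-checked numerically over a finite range. The sketch below is illustrative and not from the slides; a finite check cannot prove the bound for all n, but it can refute a bad witness.

```python
# Sketch: numerically check a Big-O witness (c, n0) over a finite range.

def holds_up_to(f, g, c, n0, n_max=10_000):
    """Return True if f(n) <= c * g(n) for all n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example: f(n) = 3n + 5 is O(n) with witness c = 4, n0 = 5,
# since 3n + 5 <= 4n whenever n >= 5.
print(holds_up_to(lambda n: 3 * n + 5, lambda n: n, c=4, n0=5))  # True
```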
Using Big-O notation...
Some examples of using Big-O notation
For instance, we might say that “Algorithm A runs in
time Big-O of n lg n” and we would write that A is
O(n lg n)
What we mean is that the number of operations, as a
function of the input size n, is O(n lg n)
Thus, as the input size (n) increases, the number of
steps increases at a rate approximately proportional to
n lg n
Complexity as approximation
Complexity analysis is typically an approximation
That is, we are not dealing with precise
measurements, but rather measurements that are
good enough to give us a sense of how the algorithm
performs
Because of this, fixed constants are ignored, for the most
part...
All the same, be aware that they exist, as they can be
important when it comes to actual implementations
A note on style...
We write:
T(n) ∈ O(g(n))   or   T(n) is O(g(n))
But not:
T(n) = O(g(n))
Why?
Example 1: Linear Search (e.g. array or vector)
Always assume the worst case.
k = 1;
while (k <= n)
{
    if (A[k] == key)
        return k;
    k++;
}
return k;
If we’re measuring the number of instructions, then
by letting c = _____ and n0 = _____,
we note that T(n) is _____
We say that T(n) is “order n” or “Big-O of n” or “linear”.
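The pseudocode above can be sketched as runnable Python (0-indexed, with an added step counter to make the operation count visible; the names are illustrative):

```python
def linear_search(A, key):
    """Return (index, steps): index of key in A, or len(A) if absent,
    along with the number of loop iterations executed."""
    steps = 0
    for k in range(len(A)):
        steps += 1
        if A[k] == key:
            return k, steps
    return len(A), steps

# Worst case: the key is absent, so the loop runs all n times.
_, steps = linear_search([2, 4, 6, 8], key=5)
print(steps)  # 4
```

The step count grows in direct proportion to len(A), which is exactly the “order n” behavior the slide describes.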
Example 1: Linear Search (e.g. array or vector)
It’s useful to note that T(n) is also O(n²)
We prefer to say O(n), however
Why?
Big-O is a tool for suppressing constant factors and low
order terms. This allows us to emphasize the growth rate
of the function/algorithm.
More Examples...
Compute the following Big-O estimates, and show the
values for c and n0
Example 2: T(n) = 25n² − 16n + 7
Example 3: T(n) = 10⁶
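For Example 2, one valid witness is c = 32 and n0 = 1: for n ≥ 1 we have 7 ≤ 7n² and −16n ≤ 0, so 25n² − 16n + 7 ≤ 32n². A quick numeric sanity check of that witness (illustrative):

```python
# Sanity check: T(n) = 25n^2 - 16n + 7 is O(n^2) with witness c = 32, n0 = 1,
# since 7 <= 7n^2 and -16n <= 0 whenever n >= 1.
assert all(25 * n * n - 16 * n + 7 <= 32 * n * n for n in range(1, 100_000))
print("witness c=32, n0=1 holds up to n=100000")
```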
More Examples
Example 4: T(n) = log(n!)
Example 5: T(n) = log(n!) + log(n²)
Logs...
Note that we typically denote log2 n as lg n
Also recall that: logb a = (logc a) / (logc b)
Therefore, if T(n) = log10 n then T(n) is O(lg n). Why?

T(n) = log10 n
     = (log10 2 / log10 2) · log10 n
     = log10 2 · (log10 n / log10 2)
     = log10 2 · log2 n
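Since the change of base only introduces the constant factor log10 2, the ratio of log10 n to log2 n is the same for every n. A quick check (illustrative):

```python
import math

# log10(n) = log10(2) * log2(n), so their ratio is the constant log10(2)
# for every n > 1 -- which is why all logarithm bases land in O(lg n).
for n in (10, 1_000, 10**6):
    ratio = math.log10(n) / math.log2(n)
    assert abs(ratio - math.log10(2)) < 1e-9
print(math.log10(2))
```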
Algorithmic Bounds
At 1 MIPS (million instructions per second):
lg n is __________
n is __________
n lg n is __________
n² is __________
2ⁿ is __________
n! is __________
Common Growth Orders (Families)
O(1) constant time
O(lg n) logarithmic time
O(n) linear time
O(n lg n) n log n time
O(n²) quadratic (n-squared) time
O(n³) cubic time
O(nᵐ), m ≥ 1 polynomial time
O(mⁿ), m > 1 exponential time
O(n!) factorial time
Listed fastest to slowest. The classes up through polynomial time are tractable; exponential and factorial time are intractable!
Additional notes
• For data mining and machine learning applications, we
typically strive for at least O(n^(3/2))
• Given f(n) = 3ⁿ, it is not the case that f(n) ∈ O(2ⁿ),
because there do not exist constants c and n0 s.t.
f(n) ≤ c⋅2ⁿ, ∀n ≥ n0
• f(n) + g(n) ∈ O(max{f(n), g(n)})
• Drop fixed constants: O(n(lg n + 1))
should instead be expressed as O(n lg n)
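To see why no constant c can make 3ⁿ ≤ c⋅2ⁿ hold forever, note that 3ⁿ/2ⁿ = (3/2)ⁿ grows without bound, so it eventually exceeds any fixed c. A quick check for one large c (illustrative):

```python
# The ratio 3^n / 2^n = (3/2)^n exceeds any fixed constant eventually,
# so no witness (c, n0) exists and 3^n is not O(2^n).
c = 1_000_000
n = 1
while (3 ** n) <= c * (2 ** n):
    n += 1
print(n)  # first n where 3^n exceeds even c = 10^6 times 2^n
```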
Lower Bounds: Big-Omega (Big-Ω) Notation
Let f(n) and g(n) be functions mapping Z+→R+.
f(n) is Ω(g(n)) if there exist constants c > 0 (c∈R+) and n0∈Z+
such that for all n ≥ n0 :
f(n) ≥ c⋅g(n).
We say that:
“f(n) is at least order g(n)”
“f(n) is bounded from below by g(n)”
“f(n) is Big-Omega of g(n)”
Example: Searching for a key in an unsorted array of n
elements must take __________________ because we must
compare ___________________________ elements in the
worst-case.
Tight Upper and Lower Bounds: Big Theta (Big-Θ)
Notation
f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
We say that:“f(n) is Big-Theta of g(n)”
“f(n) is bounded above and below by g(n)”
Example: Find Θ-notation for the number of calls to “foo” as a
function of n:
for i = 1 to n
for j = 1 to i
foo(i,j)
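The loops above call foo exactly 1 + 2 + … + n = n(n+1)/2 times, which is Θ(n²). A counting sketch (with a counter standing in for foo):

```python
def count_foo_calls(n):
    """Count calls made by: for i = 1 to n, for j = 1 to i, foo(i, j)."""
    calls = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            calls += 1  # stand-in for the call foo(i, j)
    return calls

# Matches the closed form n(n+1)/2, which is Theta(n^2).
print(count_foo_calls(100))  # 5050
```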
One Approach...
An alternative approach
Example:
Suppose T(n) = log(n!). From an example a few days ago, we know that
T(n) is O(n log n). Now, show that T(n) is Ω(n log n), and therefore Θ(n
log n).
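One way to see the Ω bound: in log(n!) = Σ log i, the top half of the terms are each at least log(n/2), so log(n!) ≥ (n/2)·log(n/2), sandwiching log(n!) between two n log n expressions. A numeric check of both bounds (illustrative):

```python
import math

def log_factorial(n):
    """Compute log(n!) as a sum of logs (natural log; the base is a constant factor)."""
    return sum(math.log(i) for i in range(1, n + 1))

# Sandwich: (n/2) * log(n/2) <= log(n!) <= n * log(n) for n >= 2,
# so log(n!) is Theta(n log n).
for n in (2, 10, 1_000):
    assert (n / 2) * math.log(n / 2) <= log_factorial(n) <= n * math.log(n)
print("bounds hold")
```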
Example:
Give Θ-notation for T(n) = 2 + 4 + 8 + … + 2ⁿ
Note that the sum of a geometric series is given by the formula:
a(1 + r + r² + … + rⁿ) = a(rⁿ⁺¹ − 1) / (r − 1)
where r is the ratio between successive terms, and a, r ∈ R+.
For T(n) above, a = _____ and r = _______
Big-O part:
Example, cont:
Big-Ω Part
An alternative solution...
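Assuming the series is read off as first term 2 and ratio 2, the geometric-sum formula collapses T(n) to the closed form 2ⁿ⁺¹ − 2, which is Θ(2ⁿ). A numeric check of that closed form (illustrative):

```python
# Check the closed form: 2 + 4 + 8 + ... + 2^n == 2^(n+1) - 2,
# which grows as Theta(2^n).
for n in range(1, 20):
    assert sum(2 ** k for k in range(1, n + 1)) == 2 ** (n + 1) - 2
print("closed form verified")
```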
Example:
Example: Give the most appropriate (best) Big-O notation for the
number of times that subroutine/function W is called, as a function of n:
for x = 1 to n
for y = lg x to x
W(x, y)
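Counting the calls: for each x the inner loop runs about x − lg x + 1 times, and summing over x gives roughly n²/2, so the best simple bound is O(n²). A counting sketch (a counter stands in for W; the loop bound lg x is taken as ⌊lg x⌋, an assumption, since the slide leaves it non-integer):

```python
import math

def count_W_calls(n):
    """Count calls in: for x = 1 to n, for y = floor(lg x) to x, W(x, y)."""
    calls = 0
    for x in range(1, n + 1):
        for y in range(int(math.log2(x)), x + 1):
            calls += 1  # stand-in for the call W(x, y)
    return calls

# The count is about n^2 / 2, so the tightest simple bound is O(n^2).
for n in (100, 1_000):
    assert count_W_calls(n) <= n * n
print(count_W_calls(1_000))
```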
Useful Propositions
If T1(n) is O(f(n)) and T2(n) is O(g(n)), then:
(T1+T2)(n) is O( max(f(n),g(n)) ).
Equivalently, if T1(n) is O(f(n)) and T2(n) is O(g(n)), then:
(T1+T2)(n) is O(f(n) + g(n)).
If T1(n) is O(f(n)) and T2(n) is O(g(n)), then:
(T1T2)(n) is O(f(n) g(n)).
If T(n) is a polynomial of degree d (i.e., T(n) = a₀ + a₁n + a₂n² + … + a_d nᵈ), then:
T(n) is O(nᵈ).
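The sum rule can be sanity-checked on a concrete instance: if T1(n) = 3n² is O(n²) and T2(n) = 50n is O(n), their sum should be O(max(n², n)) = O(n²). An illustrative witness check:

```python
# Illustrative instance of the sum rule: T1(n) = 3n^2 is O(n^2) and
# T2(n) = 50n is O(n), so (T1 + T2)(n) is O(max(n^2, n)) = O(n^2).
# Witness: 3n^2 + 50n <= 53n^2 for all n >= 1, since 50n <= 50n^2.
assert all(3 * n * n + 50 * n <= 53 * n * n for n in range(1, 100_000))
print("sum-rule witness c=53, n0=1 holds")
```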