
o-notation

For a given function g(n), we denote by o(g(n))
the set of functions:
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0
such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }

f(n) becomes insignificant relative to g(n) as n approaches infinity:
lim_{n→∞} [f(n) / g(n)] = 0

We say g(n) is an upper bound for f(n) that is
not asymptotically tight.
1
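A minimal numerical sketch of the limit characterization above (plain Python; the choice f(n) = n and g(n) = n^2 is only an illustration): the ratio f(n)/g(n) keeps shrinking toward 0, consistent with n = o(n^2).

# Illustrate f(n)/g(n) -> 0 for f(n) = n and g(n) = n^2,
# the limit characterization of f(n) = o(g(n)).
f = lambda n: n          # f(n) = n
g = lambda n: n * n      # g(n) = n^2

for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n) / g(n))    # the ratio keeps shrinking toward 0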
O(*) versus o(*)
O(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0
such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
Thus o(g(n)) is a strict form of O(g(n)): it asserts an upper bound that is not asymptotically tight.
For example:
n^2 = O(n^2)
n^2 ∉ o(n^2)
n^2 = O(n^3)
n^2 = o(n^3)
2
o-notation

n^1.9999 = o(n^2)
n^2 / lg n = o(n^2)
n^2 ∉ o(n^2)   (just as 2 < 2 does not hold)
n^2 / 1000 ∉ o(n^2)
3
ω-notation

For a given function g(n), we denote by ω(g(n)) the set of functions:
ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0
such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }

f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
lim_{n→∞} [f(n) / g(n)] = ∞

We say g(n) is a lower bound for f(n) that is not
asymptotically tight.
4
ω-notation

n^2.0001 = ω(n^2)
n^2 lg n = ω(n^2)
n^2 ∉ ω(n^2)
5
Comparison of Functions

The asymptotic comparison of f(n) and g(n) behaves like the comparison of two numbers a and b:
f(n) = O(g(n))   ≈   a ≤ b
f(n) = Ω(g(n))   ≈   a ≥ b
f(n) = Θ(g(n))   ≈   a = b
f(n) = o(g(n))   ≈   a < b
f(n) = ω(g(n))   ≈   a > b
6
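A rough numerical sketch of this analogy (the cut-off thresholds and sample functions are my own, and a single ratio is a heuristic, not a proof): estimate f(n)/g(n) at one large n and map the outcome onto the cases above.

# Heuristic: compare f and g by the ratio f(n)/g(n) at a large n.
# A tiny ratio suggests f = o(g), a huge ratio suggests f = omega(g),
# and a ratio near a positive constant suggests f = Theta(g).
import math

def compare(f, g, n=10**6):
    r = f(n) / g(n)
    if r < 1e-3:
        return "f looks like o(g), hence also O(g)"
    if r > 1e3:
        return "f looks like omega(g), hence also Omega(g)"
    return "f looks like Theta(g)"

print(compare(lambda n: 100 * n, lambda n: n * n))           # o
print(compare(lambda n: n * n * math.log2(n), lambda n: n))  # omega
print(compare(lambda n: 3 * n + 7, lambda n: n))             # Theta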
Properties

Transitivity
f(n) = Θ(g(n)) & g(n) = Θ(h(n))  ⇒  f(n) = Θ(h(n))
f(n) = O(g(n)) & g(n) = O(h(n))  ⇒  f(n) = O(h(n))
f(n) = Ω(g(n)) & g(n) = Ω(h(n))  ⇒  f(n) = Ω(h(n))

Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

Transpose Symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
7
Practical Complexities

Is O(n^2) too much time? Is the algorithm practical?

Input size n    n        n log n    n^2        n^3
1000            1 μs     10 μs      1 ms       1 s
10,000          10 μs    130 μs     100 ms     17 min
10^6            1 ms     20 ms      17 min     32 years

At a CPU speed of 10^9 instructions/second
8
Impractical Complexities

Input size n    n^4              n^10                  2^n
1000            17 min           3.2 x 10^13 years     3.2 x 10^283 years
10,000          116 days         ???                   ???
10^6            3 x 10^7 years   ???                   ???

At a CPU speed of 10^9 instructions/second
9
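A small sketch that reproduces the flavor of the two tables above (plain Python; the unit boundaries and the helper name running_time are my own choices): convert an operation count into a rough human-readable time at 10^9 instructions per second.

import math

# Convert an operation count into a rough running time at 10^9 ops/second.
def running_time(ops, speed=10**9):
    seconds = ops / speed
    for unit, size in (("years", 3.15e7), ("days", 86400.0),
                       ("min", 60.0), ("s", 1.0), ("ms", 1e-3), ("μs", 1e-6)):
        if seconds >= size:
            return f"{seconds / size:.3g} {unit}"
    return f"{seconds:.3g} s"

for n in (1000, 10_000, 10**6):
    print(n,
          running_time(n),                 # O(n)
          running_time(n * math.log2(n)),  # O(n log n)
          running_time(n**2),              # O(n^2)
          running_time(n**3),              # O(n^3)
          running_time(n**4))              # O(n^4)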
Some Common Names for Complexity

O(1)                    constant time
O(log n)                logarithmic time
O(log^2 n)              log-squared time
O(n)                    linear time
O(n^2)                  quadratic time
O(n^3)                  cubic time
O(n^i), i a constant    polynomial time
O(2^n)                  exponential time
10
Growth Rates of Some Functions

Polynomial functions:
O(log n) ⊂ O(log^2 n) ⊂ O(√n) ⊂ O(n) ⊂ O(n log n) ⊂ O(n log^2 n) ⊂ O(n^1.5) ⊂ O(n^2) ⊂ O(n^3) ⊂ O(n^4)

Exponential functions:
O(2^n) ⊂ O(3^n) ⊂ O(4^n) ⊂ O(n!) ⊂ O(n^n)

O(n^c) = O(2^(c·log n)) for any constant c
11
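A sketch of how one might check this ordering numerically (my own choice of n = 200,000 and of which functions to sample): compare log10(f(n)) at a single large n, so even the enormous values stay comparable; sorting by that value reproduces the chain above.

import math

# log10 of each sampled function at one large n.
n = 200_000
L = math.log10
entries = [
    ("log n",   L(math.log2(n))),
    ("log^2 n", L(math.log2(n) ** 2)),
    ("sqrt(n)", 0.5 * L(n)),
    ("n",       L(n)),
    ("n log n", L(n) + L(math.log2(n))),
    ("n^2",     2 * L(n)),
    ("n^3",     3 * L(n)),
    ("2^n",     n * L(2)),
    ("n!",      math.lgamma(n + 1) / math.log(10)),
    ("n^n",     n * L(n)),
]
for name, lg in sorted(entries, key=lambda t: t[1]):
    print(f"{name:>8}  ~ 10^{lg:.1f}")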
Effect of Multiplicative Constant
[Plot: run time versus n for f(n) = n^2 and f(n) = 10n, with n up to 25 and run time up to 800;
the n^2 curve overtakes 10n once n exceeds 10.]
12
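A tiny sketch of the crossover the plot illustrates (the helper name crossover is mine): despite the larger constant, 10n eventually loses to n^2; constants shift the crossover point but not the asymptotic winner.

# Find the first n at which n^2 exceeds 10n.
def crossover(f, g, limit=10_000):
    for n in range(1, limit):
        if f(n) > g(n):
            return n
    return None

print(crossover(lambda n: n * n, lambda n: 10 * n))  # -> 11, i.e. n^2 > 10n for all n >= 11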
Exponential Functions

Exponential functions increase rapidly; e.g., 2^n doubles whenever n is increased by 1.

n     2^n        time at 1 μs per operation
10    ≈ 10^3     0.001 s
20    ≈ 10^6     1 s
30    ≈ 10^9     16.7 min
40    ≈ 10^12    11.6 days
50    ≈ 10^15    31.7 years
60    ≈ 10^18    31,710 years
13
Practical Complexity
[Plot: f(n) = log(n), n, n log(n), n^2, n^3, and 2^n for n = 1 … 20, with the y-axis capped at 250.]
14
Practical Complexity
[The same curves with the y-axis extended to 500.]
15
Practical Complexity
[The same curves with the y-axis extended to 1000.]
16
Practical Complexity
[The same curves with the y-axis extended to 5000.]
17
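A sketch that reproduces plots like the four above, assuming matplotlib is available (the axis limits mirror the slides): curves that look similar for small n separate dramatically as the range grows.

import math
import matplotlib.pyplot as plt  # assumes matplotlib is installed

ns = range(1, 21)
curves = {
    "log(n)":   [math.log2(n) for n in ns],
    "n":        [n for n in ns],
    "n log(n)": [n * math.log2(n) for n in ns],
    "n^2":      [n**2 for n in ns],
    "n^3":      [n**3 for n in ns],
    "2^n":      [2**n for n in ns],
}
for label, ys in curves.items():
    plt.plot(list(ns), ys, label=label)
plt.ylim(0, 250)      # raise to 500, 1000, 5000 to mimic the later slides
plt.xlabel("n")
plt.ylabel("f(n)")
plt.legend()
plt.show()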
Floors & Ceilings

For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋,
read "the floor of x".
For any real number x, we denote the least integer greater than or equal to x by ⌈x⌉,
read "the ceiling of x".
For all real x:
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1    (for example, x = 4.2 gives 3.2 < 4 ≤ 4.2 ≤ 5 < 5.2)
For any integer n:
⌈n/2⌉ + ⌊n/2⌋ = n
18
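A quick check of these floor/ceiling facts with Python's math.floor and math.ceil (the sampled values are arbitrary):

import math

x = 4.2
assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1   # 3.2 < 4 <= 4.2 <= 5 < 5.2

for n in (7, 8, -3, 0):
    assert math.ceil(n / 2) + math.floor(n / 2) == n
print("floor/ceiling identities hold for the sampled values")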
Polynomials

Given a positive integer d, a polynomial in n of degree d is a function P(n) of the form
P(n) = Σ_{i=0}^{d} a_i n^i
where a_0, a_1, …, a_d are the coefficients of the polynomial and a_d ≠ 0.

A polynomial is asymptotically positive iff a_d > 0.
For an asymptotically positive polynomial, P(n) = Θ(n^d).
19
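A small sketch (the helper name poly_eval and the example coefficients are mine): evaluate P(n) from its coefficient list with Horner's rule, then watch P(n)/n^d approach the leading coefficient a_d, which is what P(n) = Θ(n^d) reflects.

# Evaluate P(n) = a_0 + a_1*n + ... + a_d*n^d from its coefficients (Horner's rule).
def poly_eval(coeffs, n):
    """coeffs[i] is a_i; accumulate from the highest degree downward."""
    result = 0
    for a in reversed(coeffs):
        result = result * n + a
    return result

coeffs = [7, -5, 0, 3]          # P(n) = 3n^3 - 5n + 7, degree d = 3
d = len(coeffs) - 1
for n in (10, 100, 1000):
    print(n, poly_eval(coeffs, n) / n**d)   # tends to a_d = 3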
Exponents

x^0 = 1
x^1 = x
x^a · x^b = x^(a+b)
x^a / x^b = x^(a−b)
(x^a)^b = (x^b)^a = x^(ab)
x^n + x^n = 2x^n ≠ x^(2n)
x^(−1) = 1/x
20
Logarithms (1)

In computer science, all logarithms are to base 2 unless specified otherwise.

x^a = b  iff  log_x(b) = a
lg(n)   = log_2(n)
ln(n)   = log_e(n)
lg^k(n) = (lg(n))^k
log_a(b) = log_c(b) / log_c(a)   for any base c > 0, c ≠ 1
lg(ab)  = lg(a) + lg(b)
lg(a/b) = lg(a) − lg(b)
lg(a^b) = b · lg(a)
21
Logarithms (2)

a = b^(log_b(a))
a^(log_b(n)) = n^(log_b(a))
lg(1/a) = −lg(a)
log_b(a) = 1 / log_a(b)
lg(n) ≤ n for all n ≥ 1
log_a(a) = 1
lg(1) = 0,  lg(2) = 1,  lg(1024) = lg(2^10) = 10
lg(1048576) = lg(2^20) = 20
22
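A spot-check of several identities from these two slides using math.log and math.log2 (the sampled values are arbitrary):

import math

a, b, n, c = 3.0, 5.0, 64.0, 10.0
lg = math.log2

assert math.isclose(a, b ** math.log(a, b))                           # a = b^(log_b a)
assert math.isclose(a ** math.log(n, b), n ** math.log(a, b))         # a^(log_b n) = n^(log_b a)
assert math.isclose(math.log(a, b), math.log(a, c) / math.log(b, c))  # change of base
assert math.isclose(lg(a * b), lg(a) + lg(b))
assert math.isclose(lg(a ** b), b * lg(a))
assert lg(1024) == 10
print("logarithm identities check out")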
Summation

Why do we need to know this?
We need it for computing the running time of a given algorithm.

Example: Maximum Sub-vector
Given an array a[1…n] of numeric values (positive, zero, or negative), determine the
sub-vector a[i…j] (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all sub-vectors.
23
Example: Max Sub-Vectors
def max_subvector(a, n):
    # Brute force: try every sub-vector a[i..j] (0-based here) and sum it
    # element by element; the empty sub-vector counts as sum 0.
    maxsum = 0
    for i in range(n):
        for j in range(i, n):
            s = 0
            for k in range(i, j + 1):
                s += a[k]
            maxsum = max(s, maxsum)
    return maxsum
T(n) = Σ_{i=1}^{n} Σ_{j=i}^{n} Σ_{k=i}^{j} 1
24
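A brief check of the count (reusing max_subvector from the listing above; the example data are mine): the triple sum evaluates to n(n+1)(n+2)/6, i.e. Θ(n^3), which a direct count confirms.

# Count the additions the triple loop performs and compare with n(n+1)(n+2)/6.
def nested_loop_count(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            for k in range(i, j + 1):
                count += 1
    return count

for n in (5, 10, 20):
    assert nested_loop_count(n) == n * (n + 1) * (n + 2) // 6
print(max_subvector([2, -5, 8, -1, 4, -9, 3], 7))   # -> 11 (sub-vector 8, -1, 4)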
Summation

Σ_{k=1}^{n} k = 1 + 2 + … + n = n(n+1)/2 = Θ(n^2)

Σ_{k=0}^{n} x^k = 1 + x + x^2 + … + x^n = (x^(n+1) − 1)/(x − 1)

Σ_{k=1}^{n} (c·a_k + b_k) = c·Σ_{k=1}^{n} a_k + Σ_{k=1}^{n} b_k

Σ_{k=1}^{n} (a_k − a_(k−1)) = a_n − a_0,   for a_0, a_1, …, a_n   (telescoping)

Σ_{k=0}^{n−1} (a_k − a_(k+1)) = a_0 − a_n,   for a_0, a_1, …, a_n
25
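A numerical confirmation of the summation identities above for one set of values (the sequences a_k and b_k are arbitrary choices):

# Confirm the arithmetic, geometric, linearity, and telescoping sums above.
n, x, c = 20, 3, 5
a = [k * k for k in range(n + 1)]         # arbitrary sequence a_0 .. a_n
b = [2 * k + 1 for k in range(1, n + 1)]  # arbitrary sequence b_1 .. b_n

assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(x**k for k in range(n + 1)) == (x**(n + 1) - 1) // (x - 1)
assert sum(c * a[k] + b[k - 1] for k in range(1, n + 1)) == c * sum(a[1:]) + sum(b)
assert sum(a[k] - a[k - 1] for k in range(1, n + 1)) == a[n] - a[0]
print("summation identities verified for n =", n)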
Summation

Constant Series: For integers a and b with a ≤ b,
Σ_{i=a}^{b} 1 = b − a + 1

Quadratic Series: For n ≥ 0,
Σ_{i=1}^{n} i^2 = 1 + 4 + … + n^2 = (2n^3 + 3n^2 + n)/6

Linear-Geometric Series: For n ≥ 0 and c ≠ 1,
Σ_{i=1}^{n} i·c^i = c + 2c^2 + … + n·c^n = [n·c^(n+2) − (n+1)·c^(n+1) + c] / (c − 1)^2
26
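A numerical check of the three series formulas above (with c = 2 chosen so the divisions stay exact):

n_lo, n_hi, n, c = 3, 11, 15, 2

assert sum(1 for _ in range(n_lo, n_hi + 1)) == n_hi - n_lo + 1
assert sum(i * i for i in range(1, n + 1)) == (2 * n**3 + 3 * n**2 + n) // 6
assert (sum(i * c**i for i in range(1, n + 1))
        == (n * c**(n + 2) - (n + 1) * c**(n + 1) + c) // (c - 1)**2)
print("series formulas verified")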
Series
27
Proof of Geometric series
A geometric series is one in which the ratio between consecutive terms is a fixed constant c;
when |c| < 1 its partial sums approach a fixed limit as n tends to infinity.
The closed form is proved by cancellation: multiply the partial sum by c, subtract, and
almost every term cancels, as demonstrated below.
28
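The cancellation argument written out for the partial sum S = 1 + c + c^2 + … + c^n (my own rendering of the standard proof):

S       = 1 + c + c^2 + … + c^n
c·S     =     c + c^2 + … + c^n + c^(n+1)
S − c·S = 1 − c^(n+1)                          (all middle terms cancel)
S       = (1 − c^(n+1)) / (1 − c),   c ≠ 1

If |c| < 1, then c^(n+1) → 0 as n → ∞, so the sum approaches 1 / (1 − c).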
Factorials

n! ("n factorial") is defined for integers n ≥ 0 as
n! = 1              if n = 0
n! = n·(n − 1)!     if n > 0

n! = 1 · 2 · 3 · … · n

n! < n^n for n ≥ 2
29
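A direct transcription of the recursive definition, plus a check of n! < n^n for a few values of n (plain Python):

# The recursive definition of n!, and a spot-check of n! < n^n for n >= 2.
def factorial(n):
    if n == 0:
        return 1          # base case: 0! = 1
    return n * factorial(n - 1)

assert factorial(5) == 1 * 2 * 3 * 4 * 5 == 120
for n in range(2, 15):
    assert factorial(n) < n**n
print("n! < n^n holds for the sampled n")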