
CSCE 210
Data Structures and Algorithms
Prof. Amr Goneid
AUC
Part 3. Introduction to the Analysis of Algorithms
Introduction to the Analysis of Algorithms

• Algorithms
• Analysis of Algorithms
• Time Complexity
• Bounds and the Big-O
• Types of Complexities
• Rules for Big-O
• Examples of Algorithm Analysis
1. Algorithms

• The word Algorithm comes from the name of Abu Ja'afar Mohamed ibn Musa Al Khowarizmi (c. 825 A.D.)
• An Algorithm is a procedure to do a certain task
• An Algorithm is supposed to solve a general, well-specified problem
Algorithms

Example: Sorting Problem:
Input: A sequence of keys {a1, a2, …, an}
Output: A permutation (re-ordering) of the input, {a'1, a'2, …, a'n} such that a'1 ≤ a'2 ≤ … ≤ a'n

• An instance of the problem might be sorting an array of names or sorting an array of integers.
• An algorithm is supposed to solve all instances of the problem.
Example: Selection Sort Algorithm

• Solution:
"From those elements that are currently unsorted, find the smallest and place it next in the sorted list"
• Algorithm:
for each i = 0 .. n-2
    find the smallest element in sub-array a[i] to a[n-1]
    swap that element with the one at the start of the sub-array
Example: Selection Sort Algorithm

void selectsort (itemType a[ ], int n)
{
    int i, j, m;
    for (i = 0; i < n-1; i++) {
        m = i;
        for (j = i+1; j < n; j++)
            if (a[j] < a[m]) m = j;
        swap (a[i], a[m]);
    }
}
Algorithms

Example: Euclid's Algorithm for the GCD

ALGORITHM Euclid(m, n)
// Computes gcd(m, n) by Euclid's algorithm
// Input: Two nonnegative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
Algorithms

Euclid's Algorithm for the GCD (Recursive Version)

function gcd(m, n)
    if n = 0
        return m
    else
        return gcd(n, m mod n)

"The Euclidean algorithm is the granddaddy of all algorithms, because it is the oldest nontrivial algorithm that has survived to the present day."
Donald Knuth, The Art of Computer Programming, Vol. 2
Algorithms should be:

• Transparent
• Correct
• Complete
• Writeable
• Maintainable
• Easy to use
• Efficient
Algorithms

• An algorithm should have a clear and Transparent purpose.
• An Algorithm should be Correct, i.e. solve the problem correctly.
• An Algorithm should be Complete, i.e. solve all instances of the problem.
• An algorithm should be Writeable, i.e. we should be able to express its procedure in an available implementation language.
Algorithms

• An algorithm should be Maintainable, i.e. easy to debug and modify.
• An algorithm is supposed to be Easy to use.
• An Algorithm is supposed to be Efficient.
• An efficient algorithm uses a minimum of resources (space and time).
• In particular, it should solve the problem in the minimum amount of time.
2. Analysis of Algorithms

• The main goal is to determine the cost of running an algorithm and how to reduce that cost. Cost is expressed as Complexity:
  • Time Complexity
  • Space Complexity
Analysis of Algorithms

• Time Complexity depends on:
  - Machine Speed
  - Size of Data and Number of Operations needed (n)
• Space Complexity depends on:
  - Size of Data
  - Size of Program
3. Time Complexity

• Expressed as T(n) = number of operations required.
• (n) is the Problem Size: n could be the number of specific operations, or the size of data (e.g. an array), or both.
Number of Operations T(n)

Example (1): Factorial Function

int factorial (int n)
{
    int i, f = 1;
    if (n > 0)
        for (i = 1; i <= n; i++) f *= i;   // n iterations, 1 multiplication each
    return f;
}

Let T(n) = Number of multiplications. For a given n, T(n) = n (always).
Complexity of the Factorial Algorithm

Because T(n) = n always, then T(n) = Θ(n).

[Figure: T(n) vs n, growing linearly, i.e. Θ(n)]
Location of Minimum

Example (2): How many comparisons T(n) are done to find the location of the minimum in an array a[0..n-1]?

minimum (a[0..n-1], n)
{
    m = 0;
    for j = 1 to n-1               // n-1 iterations
        if (a[j] < a[m]) m = j;    // 1 comparison each
    return m;
}

T(n) = n-1 comparisons
Complexity will be T(n) = Θ(n)
Number of Operations T(n)

Example (3): Linear Search in an array a[0 .. n-1]

linSearch (a, target, n)
{
    for (i = 0 to n-1)
        if (a[i] == target) return i;
    return -1;
}

T(n) = number of array element comparisons.
Best case: T(n) = 1
Worst case: T(n) = n
Complexity of the Linear Search Algorithm

T(n) = 1 in the best case. T(n) = n in the worst case.
We write that as: T(n) = Ω(1) and T(n) = O(n).

[Figure: T(n) vs n, lying between the Ω(1) lower bound and the O(n) upper bound]
4. Bounds and the Big-O

• If an algorithm always costs T(n) = f(n) for the same (n), independent of the data, it is an Exact algorithm.
• In this case we say T(n) = Θ(f(n)), or Big Θ.
• The factorial function is an example where T(n) = Θ(n).
Bounds

• If the cost T(n) of an algorithm for a given size (n) changes with the data, it is not an exact algorithm.
• In this case, we find the Best Case (Lower Bound) T(n) = Ω(f(n)), or Big Ω, and the Worst Case (Upper Bound) T(n) = O(f(n)), or Big O.
• The linear search function is an example where T(n) = Ω(1) and T(n) = O(n).
Constants do not matter

• If T(n) = c f(n) we still say that T(n) is O(f(n)) or Ω(f(n)) or Θ(f(n)).
• Constants are always dropped.
• They can be related to machine or language properties.
• Examples:
  T(n) = 4 (best case)       then T(n) = Ω(1)
  T(n) = 6 n^2 (worst case)  then T(n) = O(n^2)
  T(n) = 3 n (always)        then T(n) = Θ(n)
5. Types of Complexities

• Constant Complexity
  - T(n) = constant, independent of (n)
  - Runs in a constant amount of time → O(1)
  - Example: cout << a[0][0];
Types of Complexities

• Logarithmic Complexity
  - log2 n = m is equivalent to n = 2^m
  - Reduces the problem to half → O(log2 n)
  - Example: Binary Search, with T(n) = O(log2 n). Much faster than Linear Search, which has T(n) = O(n).
Linear vs Logarithmic Complexities

[Figure: T(n) vs n, O(n) growing linearly vs the much slower O(log2 n)]
Types of Complexities

• Polynomial Complexity
  - T(n) = am n^m + … + a2 n^2 + a1 n + a0
  - If m = 1, then O(a1 n + a0) → O(n)
  - If m > 1, then → O(n^m), as n^m dominates
Polynomial Complexities

[Figure: log T(n) vs n for O(n), O(n^2), and O(n^3)]
Types of Complexities

• Exponential Complexity
  - Example: List all the subsets of a set of n elements. For {a,b,c}:
    {a,b,c}, {a,b}, {a,c}, {b,c}, {a}, {b}, {c}, {}
  - Number of operations T(n) = O(2^n)
  - Exponential expansion of the problem → O(a^n), where a is a constant greater than 1
Exponential vs Polynomial

[Figure: log T(n) vs n, O(2^n) overtaking O(n^3) and O(n)]
Types of Complexities

• Factorial time Algorithms
  - Example: Traveling salesperson problem (TSP): Find the best route to take in visiting n cities away from home. What is the number of possible routes? For 3 cities: (A, B, C)
Possible routes in a TSP

[Figure: tree of routes from Home through Amsterdam, NY, and Montreal in every possible order]

• Number of operations = 3! = 6. Hence T(n) = n!
• Expansion of the problem → O(n!)
Exponential vs Factorial

[Figure: log T(n) vs n for O(2^n), O(n!), and O(n^n)]
Execution Time Example

• Example: For the exponential algorithm of listing all subsets of a given set, assume the set size to be 1024 elements.
• The number of operations is 2^1024, about 1.8 × 10^308.
• If we can list a subset every nanosecond, the process will take 5.7 × 10^291 years!
Polynomial & Non-polynomial Times

• P (Polynomial) Times:
  O(1), O(log n), O((log n)^2), O(n), O(n log n), O(n^2), O(n^3), …
• NP (Non-Polynomial) Times:
  O(2^n), O(e^n), O(n!), O(n^n), …
6. Rules for Big-O

• Rule: For constant k, O(k) < O(n). Example: O(7) = O(1) < O(n)
• Rule: For constant k, O(kf) = O(f) (constants are dropped). Example: O(2n) = O(n)
• Rule: O(|f|+|g|) = O(|f|) + O(|g|) = Max(O(|f|), O(|g|)). Example: O(6n^2 + n) = O(6n^2) + O(n) = O(n^2)
• Rule: Nesting a loop O(g) within a loop O(f) gives O(f*g). Example: O(n^4 * n^2) = O(n^6)
• Rule: O(n^(m-1)) < O(n^m). Example: O(n^2) < O(n^3)
• Rule: O(log n) < O(n). Example: O(log2 n) < O(n)
Rules for Big-O

• Rule: All logarithms grow at the same rate. Example: log2 n = Θ(log3 n)
• Rule: Exponential functions grow faster than powers. Example: O(n^3) < O(2^n)
• Rule: Factorials grow faster than exponentials. Example: O(2^n) < O(n!)
Exercises

Which function has the smaller complexity?
• f = 100 n^4,           g = n^5
• f = log(log n^3),      g = log n
• f = n^2,               g = n log n
• f = 50 n^5 + n^2 + n,  g = n^5
• f = e^n,               g = n!
7. Examples of Algorithm Analysis

for (i = 1; i <= n/2; i++)
{
    O(1);
    for (j = 1; j <= n*n; j++)
        { O(1); }
}

T(n) = (n^2 + 1) * n/2 = n^3/2 + n/2
Hence T(n) = O(n^3)
Examples of Algorithm Analysis

for (i = 1; i <= n/2; i++)
    { O(1); }
for (j = 1; j <= n*n; j++)
    { O(1); }

T(n) = (n/2) + n^2
Hence T(n) = O(n^2)
while Loop

k = n;
while (k > 0)
{
    d = k % 2;
    cout << d;
    k /= 2;
}

Each iteration cuts the value of (k) by half, and the final iteration reduces k to zero. Hence the number of iterations is
T(n) = ⌊log2 n⌋ + 1
Hence T(n) = O(log2 n)
A function to reverse an array (1)

• A function to reverse an array a[ ]. Version (1): using a temporary array b[ ].

int a[N];

void reverse_array (int a[ ], int n)
{
    int b[N];
    for (int i = 0; i < n; i++) b[i] = a[n-i-1];
    for (int i = 0; i < n; i++) a[i] = b[i];
}

Consider T(n) to be the number of array accesses; then
T(n) = 2n + 2n = 4n
Hence T(n) = O(n), with extra space b[N]
A function to reverse an array (2)

• A function to reverse an array a[ ]. Version (2): using swapping.

int a[N];

void reverse_array (int a[ ], int n)
{
    int temp;
    for (int i = 0; i < n/2; i++)
        { temp = a[i]; a[i] = a[n-i-1]; a[n-i-1] = temp; }
}

Consider T(n) to be the number of array accesses; then
T(n) = 4(n/2) = 2n
Hence T(n) = O(n), without extra space
Index of the minimum element

• A function to return the index of the minimum element

int index_of_min ( int a[ ], int s, int e )
{
    int imin = s;
    for (int i = s+1; i <= e; i++)
        if (a[i] < a[imin]) imin = i;
    return imin;
}

Consider T(n) to be the number of times we compare array elements. When invoked as index_of_min(a, 0, n-1), then
T(n) = e - (s+1) + 1 = e - s = n-1
Hence T(n) = O(n)
Analysis of Selection Sort

void SelSort(int a[ ], int n)
{
    int m, temp;
    for (int i = 0; i < n-1; i++)
    {
        m = index_of_min(a, i, n-1);   // costs n-1-i comparisons
        temp = a[i]; a[i] = a[m]; a[m] = temp;
    }
}

T(n) = number of array comparisons. For a given iteration (i), Ti(n) = n-1-i, and
T(n) = T0(n) + T1(n) + … + Tn-2(n) = (n-1) + (n-2) + … + 1 = n(n-1)/2 = 0.5 n^2 - 0.5 n = O(n^2)
This cost is the same for any data of size (n) (an exact algorithm).
SelectSort vs QuickSort

[Figure: T(n) vs n, Selectsort's O(n^2) growing much faster than Quicksort's O(n log2 n)]
Analysis of Binary Search

int BinSearch( int a[ ], int n, int x)
{
    int L, M, H;
    bool found = false;
    L = 0; H = n - 1;
    while ( (L <= H) && !found )
    {
        M = (L + H)/2;                  // approximate middle
        if (x == a[M]) found = true;    // match
        else if (x > a[M]) L = M + 1;   // discard left half
        else H = M - 1;                 // discard right half
    }
    if (found) return M; else return -1;
}

In the best case, a match occurs in the first iteration, thus T(n) = Ω(1). In the worst case (not found), half of the array is discarded in each iteration.
Hence T(n) = log2 n + 1 = O(log2 n)
Linear vs Binary Search

[Figure: T(n) vs n, linear search's O(n) vs binary search's O(log2 n)]
48
Running Time
 The number of operations can be used to predict the
running time of an algorithm for a given problem size (n)
 Example:
The number of operations of a sorting algorithm is directly
proportional to (n log n). Direct time measurement gives 1
ms to sort 1000 items. Find how long it will take to sort
1,000,000 items.
Solution:
T ( N )  cN log N , c  T ( N ) / ( N log N )
6
6
10
log
10
n log n
T ( n)  cn log n 
T ( N ), T (106 )  3 10 3 T (103 )  2 sec
N log N
10 log10 10
Prof. Amr Goneid, AUC
49
Polynomial Evaluation

• A polynomial of degree n can be evaluated directly as:

P(x) = an x^n + an-1 x^(n-1) + … + ai x^i + … + a1 x + a0

x^i is computed by a function pow(x, i) using (i-1) multiplications. The direct algorithm is (consider x and a[ ] to be of type double):

double p = a[0];
for (int i = 1; i <= n; i++)
    p = p + a[i] * pow(x, i);
Polynomial Evaluation

The number of double arithmetic operations inside the loop is 2 + (i-1). Hence,

T(n) = Σ_{i=1..n} (i + 1) = 0.5 n^2 + 1.5 n = O(n^2)

We never use this method because it is quadratic. Instead, we use Horner's method.
Horner's Algorithm

William George Horner (1819) introduced a factorization in the form:

P(x) = (…(((an)x + an-1)x + an-2)x + … + a1)x + a0

The corresponding algorithm is:

double p = a[n];
for ( i = n-1; i >= 0; i--)
    p = p * x + a[i];
Horner's Algorithm

Analysis of this algorithm gives, for the number of double arithmetic operations:

T(n) = Σ_{i=0..n-1} 2 = 2n = O(n)

This is a faster, linear algorithm.
Learn on your own about:

• Proving Correctness of algorithms
• Analyzing Recursive Functions
• Standard Algorithms in C++