DESIGN AND ANALYSIS OF ALGORITHMS
Algorithm
An algorithm is any well-defined computational procedure that
takes some value, or set of values, as input and produces some value,
or set of values, as output.
An algorithm is thus a sequence of computational steps that
transform the input into the output.
An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.
Design and Analysis of Algorithms
• Analysis: predict the cost of an algorithm in terms of resources and performance.
• Design: design algorithms that minimize the cost.
Input → Algorithm → Output

Example: sorting
Input: a sequence of numbers.
Output: an ordered permutation of the input.
Informal definition
An informal definition of an algorithm is:
Algorithm: a step-by-step method for solving a problem or doing a task.
[Figures: informal definition of an algorithm used in a computer; finding the largest integer among five integers; Find Largest refined.]
THREE CONSTRUCTS
There are three constructs for building an algorithm. The idea is that a program must be made of a combination of only these three constructs: sequence, decision (selection) and repetition. It has been proven that no other constructs are needed. Using only these constructs makes a program or an algorithm easy to understand, debug or change.
[Figure: the three constructs.]
Properties of algorithms:
• Input: it requires a finite number of inputs.
• Output: it must produce at least one output.
• Uniqueness: each instruction should be clear and unambiguous.
• Correctness: the output must be correct for all possible inputs.
• Finiteness: it must terminate after a finite number of steps.
• Effectiveness: each calculation step must be simple enough to be carried out exactly.
Propositional Calculus:
• Truth values
• Notations
  o Negation
  o Conjunction
  o Disjunction
  o Conditional statements
• Tautology & contradiction
• Well-formed formulas
Set theory
• Set
• Subset
• Proper subset
• Equality of sets
• Operations on sets
Solving a problem with a computer
The following steps are required to solve a problem using a computer:
1. Statement of the problem (problem definition).
2. Development of a model.
3. Design of the algorithm.
4. Checking the correctness of the algorithm.
5. Implementation in some programming language.
6. Analysis and study of the complexity of the algorithm.
7. Program testing.
8. Preparation of documentation.
Use of Loops
An off-by-one error, in which a loop executes one more or one fewer iteration than intended, is very common.
Three aspects must be considered while constructing a loop (see the sketch below):
• The initial condition that needs to be true before the loop begins execution;
• The invariant relation that must hold before, during and after each iteration of the loop;
• The condition under which the loop must terminate.
A loop may terminate in various ways. This aspect is critical to the correctness of the algorithm.
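As a minimal illustration of these three aspects (the summation loop here is a hypothetical example, not taken from the text), consider summing the first n integers in C:

#include <stdio.h>

int main(void) {
    int n = 10;
    int i = 1;                /* initial condition: i == 1 and sum == 0 hold before the loop */
    int sum = 0;
    /* invariant: at the start of each iteration, sum == 1 + 2 + ... + (i-1) */
    while (i <= n) {          /* termination: the loop exits when i > n */
        sum = sum + i;
        i = i + 1;            /* progress toward termination */
    }
    /* on exit i == n+1, so the invariant gives sum == 1 + ... + n */
    printf("%d\n", sum);
    return 0;
}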
Loop Termination
• The simplest termination condition arises when the number of times the loop is to be iterated is known in advance (a counting loop).
• The loop may instead terminate when some condition becomes false. We cannot directly predict in advance the number of times such a loop will iterate before it terminates, and there is no assurance that a loop of this type will terminate at all. The responsibility for ensuring termination rests on the algorithm designer.
• A loop may also terminate by forcing the condition of termination within the loop body.
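A hypothetical sketch of the last case, in which the loop body itself forces termination when a sentinel value is read (the negative-number sentinel is an illustrative choice, not an example from the text):

#include <stdio.h>

int main(void) {
    int total = 0;
    for (;;) {                          /* no termination condition in the header */
        int x;
        if (scanf("%d", &x) != 1 || x < 0)
            break;                      /* termination forced from within the loop */
        total += x;
    }
    printf("total = %d\n", total);
    return 0;
}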
Control Constructs:

IF_THEN_ELSE

If <condition> then
    <statements>

If <condition> then
    <statements>
Else
    <statements>

[Flowchart: the condition is checked; if true, the 'true' statements execute, otherwise the 'false' statements execute; control then passes to the action after the IF_THEN_ELSE.]
FOR_DO
[Flowchart: set up the initial condition; test the continue condition; while it is true, execute the body of the loop and update the loop variable; when it becomes false, exit.]
CASE

Select CASE <expression>
    case value 1: <executable statements>
    case value 2: <executable statements>
    ...
    default: <executable statements>
REPEAT_UNTIL

Repeat
    <statements to be repeated>
Until <terminating condition>

[Flowchart: the statements execute, then the test condition is evaluated; while it is false the loop repeats, and when it becomes true the loop exits.]
WHILE_DO

While <condition> Do
    <statements to be repeated>

[Flowchart: the test condition is evaluated first; while it is true the statements execute, and when it becomes false the loop exits.]
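These constructs map directly onto C. A minimal sketch (the values and messages are illustrative only):

#include <stdio.h>

int main(void) {
    int n = 3;

    /* IF_THEN_ELSE */
    if (n % 2 == 0)
        printf("even\n");
    else
        printf("odd\n");

    /* FOR_DO: initial condition; continue test; loop-variable update */
    for (int i = 0; i < n; i++)
        printf("i = %d\n", i);

    /* CASE */
    switch (n) {
    case 1:  printf("one\n");  break;
    case 2:  printf("two\n");  break;
    default: printf("many\n"); break;
    }

    /* REPEAT_UNTIL: body runs at least once; loop exits when condition holds */
    int k = 0;
    do {
        k++;
    } while (!(k >= n));   /* "until k >= n" */

    /* WHILE_DO: condition tested before each iteration */
    while (k > 0)
        k--;

    return 0;
}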
Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
  - Instructions take time.
  - How fast does the algorithm perform?
  - What affects its runtime?
• Space
  - Data structures take space.
  - What kind of data structures can be used?
  - How does the choice of data structure affect the runtime?
We will focus on time:
• How to estimate the time required for an algorithm
• How to reduce the time required
The Execution Time of Algorithms
Each operation in an algorithm (or a program) has a cost: each operation takes a certain amount of time.

count = count + 1;    // takes a certain amount of time, but it is constant

A sequence of operations:

count = count + 1;    // Cost: c1
sum = sum + count;    // Cost: c2

Total Cost = c1 + c2
The Execution Time of Algorithms (cont.)
Example: Simple If-Statement

                      Cost    Times
if (n < 0)            c1      1
    absval = -n;      c2      1
else
    absval = n;       c3      1

Total Cost <= c1 + max(c2, c3)
The Execution Time of Algorithms (cont.)
Example: Simple Loop

                      Cost    Times
i = 1;                c1      1
sum = 0;              c2      1
while (i <= n) {      c3      n+1
    i = i + 1;        c4      n
    sum = sum + i;    c5      n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional to n.
The Execution Time of Algorithms (cont.)
Example: Nested Loop

                          Cost    Times
i = 1;                    c1      1
sum = 0;                  c2      1
while (i <= n) {          c3      n+1
    j = 1;                c4      n
    while (j <= n) {      c5      n*(n+1)
        sum = sum + i;    c6      n*n
        j = j + 1;        c7      n*n
    }
    i = i + 1;            c8      n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
The time required for this algorithm is proportional to n².
Efficiency of Algorithms
When implemented, every algorithm uses some of the system resources; the most relevant to efficiency are the central processing unit (CPU) time and internal memory (RAM).
There is no standard method for designing efficient algorithms, but there are a few general means of improving the efficiency of an algorithm:
• Removing redundant computations outside loops
• Referencing of array elements
• Avoiding inefficiency due to late termination
• Early detection of desired output conditions
Removing Redundant Computations Outside Loops
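As a hypothetical illustration of this technique (the function and names are not from the text), the product x * x below does not depend on the loop variable, so it is computed once outside the loop instead of on every iteration:

#include <stddef.h>

/* Fills a[0..n-1] with (x*x)*i. The loop-invariant computation x*x
   is hoisted out of the loop rather than recomputed each time. */
void fill(double a[], size_t n, double x) {
    double xsq = x * x;            /* redundant computation moved outside the loop */
    for (size_t i = 0; i < n; i++)
        a[i] = xsq * (double)i;
}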
Referencing of Array Elements
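A hypothetical sketch of this technique: when the same array element is referenced repeatedly inside a loop, copying it once into a simple variable avoids repeated index computations.

#include <stddef.h>

/* Returns the largest element of a[0..n-1] (n >= 1). The current
   element is copied into the local variable v once per iteration,
   instead of re-indexing a[i] for every reference to it. */
double largest(const double a[], size_t n) {
    double max = a[0];
    for (size_t i = 1; i < n; i++) {
        double v = a[i];           /* single array reference per iteration */
        if (v > max)
            max = v;
    }
    return max;
}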
Inefficiency due to late termination

Inefficient algorithm (for an alphabetically ordered list):
1. While Name_sought ≠ Current_Name and not EOF, get the next name from the list (EOF denotes end of file).
2. …
This loop keeps scanning even after passing the point where the name would appear.

Efficient algorithm:
1. While Name_sought > Current_Name and not EOF, get the next name from the list.
2. Test whether the current name is equal to the name sought.
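A minimal C sketch of the efficient version, assuming an array of strings sorted in ascending order (the function name and parameters are illustrative):

#include <string.h>

/* Returns the index of `sought` in the sorted array names[0..n-1],
   or -1 if it is absent. The scan stops as soon as it passes the
   place where the name would be, rather than reading the whole list. */
int find_name(const char *names[], int n, const char *sought) {
    int i = 0;
    while (i < n && strcmp(sought, names[i]) > 0)   /* Name_sought > Current_Name */
        i++;
    if (i < n && strcmp(sought, names[i]) == 0)     /* test for equality */
        return i;
    return -1;
}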
Early detection of Desired Output conditions

In the first bubble-sort version below the inner loop always runs to n-1; in the second it runs only to n-i, since after i-1 passes the last i-1 elements are already in place (a further refinement, detecting that the array is already sorted, is sketched after the pseudocode):

For i = 1 to n-1 do
    For j = 1 to n-1 do
        If (a[j] < a[j+1]) then exchange a[j] with a[j+1];

For i = 1 to n-1 do
    For j = 1 to n-i do
        If (a[j] < a[j+1]) then exchange a[j] with a[j+1];
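A hedged C sketch of early detection of the desired output condition: if a whole pass makes no exchange, the array is already sorted and the algorithm can stop. The descending order matches the pseudocode above; the flag-based early exit is a standard refinement, not spelled out in the text:

/* Bubble sort into descending order, as in the pseudocode above,
   stopping early once a full pass performs no exchange. */
void bubble_sort_desc(int a[], int n) {
    for (int i = 1; i <= n - 1; i++) {
        int swapped = 0;                    /* desired-output detector */
        for (int j = 0; j < n - i; j++) {
            if (a[j] < a[j + 1]) {          /* exchange a[j] with a[j+1] */
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped)                       /* no exchanges: already sorted */
            break;
    }
}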
Design and Analysis of Algorithms
Analysis issues of an algorithm:
1. What data structures to use (lists, queues, stacks, heaps, trees, etc.).
2. Is it correct (all of the time, or only most of the time)?
3. How efficient is it (asymptotically fixed, or does it depend on the inputs)?
4. Is there an efficient algorithm at all?
Design and Analysis of Algorithms
"Analysis of algorithms" is a field in computer science whose overall goal is an understanding of the complexity of algorithms, in terms of the time (execution time) and storage (space) requirements of an algorithm.
Suppose M is an algorithm and n is the size of the input data. The time and space used by algorithm M are the two main measures of the efficiency of M.
Time is measured by counting the number of key operations; for example, in sorting and searching algorithms the number of comparisons is the number of key operations.
Design and Analysis of Algorithms
Space is measured by counting the maximum amount of memory needed by the algorithm.
The complexity of an algorithm M is the function ƒ(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data.
Design and Analysis of Algorithms
The storage space required by an algorithm is often simply a multiple of the data size n, so the term "complexity", given without qualification, usually refers to the running time of the algorithm. There are, in general, three cases to consider when finding the complexity function ƒ(n):
1. Best case: the minimum value of ƒ(n) for any possible input.
2. Worst case: the maximum value of ƒ(n) for any possible input.
3. Average case: the value of ƒ(n) that lies between the maximum and minimum for any possible input; generally, the average case implies the expected value of ƒ(n).
Time Complexity
[Figure: running times (1–5 ms) of an algorithm on inputs A–G. The worst case is the maximum time over all inputs, the best case is the minimum, and the average case lies in between.]
Notations
There are three basic asymptotic notations used to express the running time of an algorithm as a function whose domain is the set of natural numbers N = {1, 2, 3, …}. These are:
• O (Big-Oh) notation: used to express an upper bound, i.e. the maximum number of steps required to solve a problem.
• Ω (Big-Omega) notation: used to express a lower bound, i.e. the minimum number of steps (at least) required to solve a problem.
• Θ (Theta) notation: used to express both an upper and a lower bound on the number of steps required to solve a problem, also called a tight bound.
Big Oh Notation
• A notation to describe algorithm complexity.
• For a given function g(n), we denote by O(g(n)), pronounced 'big-oh of g of n' or sometimes just 'oh of g of n', the set of functions
  O(g(n)) = { ƒ(n) : there exist positive constants c and n0 such that 0 ≤ ƒ(n) ≤ c·g(n) for all n ≥ n0 }
• O-notation gives an upper bound on a function, to within a constant factor.
Big-Oh notation
[Figure: ƒ(n) lies on or below c·g(n) for all n ≥ n0.]
We write ƒ(n) = O(g(n)) for ƒ(n) ∈ O(g(n)).
Big Oh Notation
• We say ƒA(n) = 30n + 8 is order n, or O(n): it is, at most, roughly proportional to n.
• ƒB(n) = n² + 1 is order n², or O(n²): it is, at most, roughly proportional to n².
• In general, an O(n²) algorithm will be slower than an O(n) algorithm.
• We say that n⁴ + 100n² + 10n + 50 is of the order of n⁴, or O(n⁴).
• We say that 10n³ + 2n² is O(n³).
• We say that n³ − n² is O(n³).
• We say that 10 is O(1).
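As a worked instance of the definition (the constants c = 31 and n0 = 8 are one possible choice, not the only one): 30n + 8 ≤ 30n + n = 31n for all n ≥ 8, so with c = 31 and n0 = 8 we have 0 ≤ 30n + 8 ≤ c·n for all n ≥ n0, hence 30n + 8 = O(n).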
[Figure 4-6: an O(n) algorithm.]
[Figure 4-8: another O(n²) algorithm.]
Θ Notation
• For a given function g(n), we denote by Θ(g(n)) the set of functions
  Θ(g(n)) = { ƒ(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ ƒ(n) ≤ c2·g(n) for all n ≥ n0 }
• A function ƒ(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1·g(n) and c2·g(n), for sufficiently large n.
Theta notation
Let g(n) be an asymptotically non-negative function on N. A function ƒ(n) is in Θ(g(n)) if it can be sandwiched between c1·g(n) and c2·g(n) for some constants c1, c2 > 0, for all n ≥ some n0.
[Figure: ƒ(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0.]
Examples
To show that (1/2)n² − 3n = Θ(n²), we have to determine positive constants c1, c2 and n0 such that
c1·n² ≤ (1/2)n² − 3n ≤ c2·n² for any n ≥ n0.
Dividing by n² yields: c1 ≤ 1/2 − 3/n ≤ c2.
This is satisfied for c1 = 1/14, c2 = 1/2, n0 = 7.
To show that 6n³ ≠ Θ(n²): if the above relation held, we would have to determine c2 such that 6n³ ≤ c2·n² for any n ≥ n0, i.e. n ≤ c2/6 for all n ≥ n0, and such a constant cannot exist.
Ω Notation
• For a given function g(n), we denote by Ω(g(n)), pronounced 'big-omega of g of n' or sometimes just 'omega of g of n', the set of functions
  Ω(g(n)) = { ƒ(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ ƒ(n) for all n ≥ n0 }
• For all values of n to the right of n0, the value of ƒ(n) is on or above c·g(n).
Omega notation
Let g(n) be an asymptotically non-negative function on N. If 0 ≤ c·g(n) ≤ ƒ(n) for all n ≥ n0, we say that g(n) is an asymptotic lower bound for ƒ(n).
[Figure: ƒ(n) lies on or above c·g(n) for all n ≥ n0.]
Algorithm Strategies
There are five fundamental algorithm design strategies which are used to design algorithms efficiently:
1. Divide-and-Conquer
2. Greedy method
3. Dynamic Programming
4. Backtracking
5. Branch-and-Bound
1. Divide-and-Conquer
The divide-and-conquer technique is a top-down approach to solving a problem. An algorithm that follows the divide-and-conquer technique involves three steps:
• Divide the original problem into a set of subproblems.
• Conquer (or solve) every subproblem individually, recursively.
• Combine the solutions of these subproblems to get the solution of the original problem.
Divide-and-Conquer Technique
[Diagram: a problem (instance) of size n is divided into subproblem 1 and subproblem 2, each of size n/2; a solution to each subproblem is found, and the two solutions are combined into a solution to the original problem.]
Divide-and-Conquer Examples
• Sorting: mergesort and quicksort
• Binary tree traversals
• Binary search
• Multiplication of large integers
• Matrix multiplication: Strassen's algorithm
• Closest-pair and convex-hull algorithms
Mergesort
• Split array A[0..n-1] into about equal halves and make copies of each half in arrays B and C.
• Sort arrays B and C recursively.
• Merge sorted arrays B and C into array A as follows:
  - Repeat the following until no elements remain in one of the arrays: compare the first elements in the remaining unprocessed portions of the arrays, and copy the smaller of the two into A, incrementing the index indicating the unprocessed portion of that array.
  - Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
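A minimal C sketch of this scheme (the function name is illustrative; it copies the halves into temporary arrays B and C exactly as described above, and omits malloc failure handling for brevity):

#include <stdlib.h>
#include <string.h>

/* Sort A[0..n-1] in ascending order by mergesort. */
void mergesort_ints(int A[], int n) {
    if (n <= 1) return;                    /* base case */
    int nb = n / 2, nc = n - nb;
    int *B = malloc(nb * sizeof *B);       /* copy the halves into B and C */
    int *C = malloc(nc * sizeof *C);
    memcpy(B, A, nb * sizeof *B);
    memcpy(C, A + nb, nc * sizeof *C);
    mergesort_ints(B, nb);                 /* sort each half recursively */
    mergesort_ints(C, nc);
    int i = 0, j = 0, k = 0;
    while (i < nb && j < nc)               /* merge: copy the smaller first element */
        A[k++] = (B[i] <= C[j]) ? B[i++] : C[j++];
    while (i < nb) A[k++] = B[i++];        /* copy any remaining elements */
    while (j < nc) A[k++] = C[j++];
    free(B);
    free(C);
}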
Mergesort Example
            8 3 2 9 7 1 5 4
split:   8 3 2 9   |   7 1 5 4
split:  8 3 | 2 9  |  7 1 | 5 4
split:  8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
merge:  3 8 | 2 9  |  1 7 | 4 5
merge:   2 3 8 9   |   1 4 5 7
merge:      1 2 3 4 5 7 8 9
2. Greedy technique
• The greedy technique is used to solve optimization problems.
• An optimization problem is one in which we are given a set of input values and some function of them (known as the objective function) is required to be either maximized or minimized, subject to some constraints or conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, to optimize the given objective function. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
2. Greedy technique
• The greedy algorithm does not always guarantee the optimal solution, but it generally produces solutions that are very close in value to the optimal.
• Applications of the greedy strategy:
  Optimal solutions:
  - Change making
  - Minimum Spanning Tree (MST)
  - Single-source shortest paths
  - Huffman codes
  Approximations:
  - Traveling Salesman Problem (TSP)
  - Fractional knapsack problem
An Activity Selection Problem
(Conference Scheduling Problem)
• Input: a set of activities S = {a1, …, an}.
• Each activity ai = (si, fi) has a start time si and a finish time fi.
• Two activities are compatible if and only if their intervals do not overlap.
• Output: a maximum-size subset of mutually compatible activities.
The Activity Selection Problem
• Here is a set of start and finish times.
• What is the maximum number of activities that can be completed?
• {a3, a9, a11} can be completed.
• But so can {a1, a4, a8, a11}, which is a larger set.
• But it is not unique; consider {a2, a4, a9, a11}.
Interval Representation
[Figure: the activities drawn as intervals on a timeline from 0 to 15.]
The Fractional Knapsack Problem
• Given: a set S of n items, with each item i having
  - bi, a positive benefit
  - wi, a positive weight
• Goal: choose items with maximum total benefit but with weight at most W.
• A largest-profit strategy (greedy method): always pick the object with the largest profit. If the weight of the object exceeds the remaining knapsack capacity, take a fraction of the object to fill up the knapsack.
• If we are allowed to take fractional amounts, then this is called a fractional knapsack problem.
The Fractional Knapsack Problem
• In this case, we let xi denote the amount we take of item i.
• Objective: maximize Σ_{i∈S} bi·(xi / wi)
• Constraint: Σ_{i∈S} xi ≤ W
Example 1
Given: a set S of n items, with each item i having
  - bi, a positive benefit
  - wi, a positive weight
Goal: choose items with maximum total benefit but with weight at most W (the "knapsack" holds 10 ml).

Items:              1      2      3      4      5
Weight:           4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:           $12    $32    $40    $30    $50
Value ($ per ml):    3      4     20      5     50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
Example 2
• 3 objects (n = 3), (w1, w2, w3) = (18, 15, 10), (b1, b2, b3) = (25, 24, 15), M = 20.
• B = 0, C = M = 20 /* remaining capacity */
• Put object 1 in the knapsack: since w1 < M, x1 = 1 and B = b1 = 25; C = 20 − 18 = 2.
• Pick object 2: since C < w2, x2 = C/w2 = 2/15; B = 25 + (2/15)·24 = 25 + 3.2 = 28.2.
• Since the knapsack is now full, x3 = 0.
• The feasible solution is (1, 2/15, 0).
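A hedged C sketch of the largest-profit strategy above (function and variable names are illustrative, and the fixed-size `taken` array assumes a small n); run on the data of Example 2 it reproduces B = 28.2:

#include <stdio.h>

/* Fractional knapsack, largest-profit-first greedy, as in Example 2.
   x[i] receives the fraction taken of object i; returns the benefit B. */
double knapsack_largest_profit(const double w[], const double b[],
                               double x[], int n, double M) {
    int taken[16] = {0};               /* assumes n <= 16 for this sketch */
    double C = M, B = 0.0;             /* remaining capacity, total benefit */
    for (int k = 0; k < n; k++)
        x[k] = 0.0;
    for (int k = 0; k < n && C > 0; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* pick the untaken object ... */
            if (!taken[i] && (best < 0 || b[i] > b[best]))
                best = i;              /* ... with the largest profit */
        taken[best] = 1;
        if (w[best] <= C) {            /* whole object fits */
            x[best] = 1.0;
            C -= w[best];
        } else {                       /* take a fraction to fill up */
            x[best] = C / w[best];
            C = 0.0;
        }
        B += x[best] * b[best];
    }
    return B;
}

int main(void) {
    double w[] = {18, 15, 10}, b[] = {25, 24, 15}, x[3];
    printf("B = %.1f\n", knapsack_largest_profit(w, b, x, 3, 20));  /* 28.2 */
    return 0;
}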
3. Dynamic programming
• The dynamic programming technique is similar to the divide-and-conquer approach: both solve a problem by breaking it down into several subproblems that can be solved recursively.
• The difference between the two is that in the dynamic programming approach, the results obtained from solving smaller subproblems are reused (by maintaining a table of results) in the calculation of larger subproblems.
• Thus dynamic programming is an approach that begins by solving the smallest subproblems, saving these partial results, and then reusing them to solve larger subproblems until the solution to the original problem is obtained.
• The dynamic programming approach always guarantees an optimal solution.
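As a minimal sketch of the table-reuse idea (Fibonacci numbers are used here purely as an illustration; they are not an example from the text):

#include <stdio.h>

/* Bottom-up dynamic programming: each subproblem value is computed
   once, stored in a table, and reused for larger subproblems. */
long fib(int n) {
    long table[64];                    /* table of subproblem results (n < 64) */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i-1] + table[i-2];   /* reuse smaller results */
    return table[n];
}

int main(void) {
    printf("%ld\n", fib(40));          /* prints 102334155 */
    return 0;
}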
Dynamic Programming
• Dynamic Programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.
• Example: to find a shortest path in a multi-stage graph.
[Figure: a small multi-stage graph with source S, intermediate nodes A and B, and sink T; the shortest path from S to T has length 1 + 2 + 5 = 8.]
Dynamic Programming
• Equipment Replacement Problem: consider a machine whose maintenance cost increases, and whose resale value decreases, with the period. The aim of the replacement problem is to find an optimal period for the replacement.

Equipment Replacement Problem
A machine whose purchase price is 7000 has the following data. Find the optimal period for replacement.

Year           1      2      3      4      5      6      7      8
Maint. cost    900    1200   1600   2100   2800   3100   4700   5900
Resale value   4000   2000   1200   600    500    400    400    400
[Figure: a multi-stage graph with source S, intermediate nodes A–F and sink T, with weighted edges; beneath each node is the length of the shortest path from S to that node, computed stage by stage.]
Breadth-first searching
• A breadth-first search (BFS) explores nodes nearest the root before exploring nodes further away.
• For example, after searching A, then B, then C, the search proceeds with D, E, F, G.
• Nodes are explored in the order A B C D E F G H I J K L M N O P Q.
• J will be found before N.
[Figure: a tree with root A, children B and C, and descendants D through Q, searched level by level.]
Depth-first searching
• A depth-first search (DFS) explores a path all the way to a leaf before backtracking and exploring another path.
• For example, after searching A, then B, then D, the search backtracks and tries another path from B.
• Nodes are explored in the order A B D E H L M N I O P C F G J K Q.
• N will be found before J.
[Figure: the same tree, searched path by path.]
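A hedged C sketch of the two traversal orders on a small example tree (the adjacency representation and node labels are illustrative; only the first seven nodes of the figure's tree are modeled):

#include <stdio.h>

#define N 7   /* nodes A..G */

/* children[i] lists the children of node i (-1 = none):
   A has children B, C; B has D, E; C has F, G. */
int children[N][2] = {{1,2},{3,4},{5,6},{-1,-1},{-1,-1},{-1,-1},{-1,-1}};

void bfs(int root) {                    /* explore nearest nodes first */
    int queue[N], head = 0, tail = 0;
    queue[tail++] = root;
    while (head < tail) {
        int v = queue[head++];
        printf("%c ", 'A' + v);
        for (int k = 0; k < 2; k++)
            if (children[v][k] >= 0)
                queue[tail++] = children[v][k];
    }
    printf("\n");                       /* prints A B C D E F G */
}

void dfs(int v) {                       /* follow each path to a leaf first */
    printf("%c ", 'A' + v);
    for (int k = 0; k < 2; k++)
        if (children[v][k] >= 0)
            dfs(children[v][k]);
}

int main(void) {
    bfs(0);
    dfs(0);                             /* prints A B D E C F G */
    printf("\n");
    return 0;
}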
4. Backtracking
• Suppose you have to make a series of decisions, among various choices, where:
  - you don't have enough information to know what to choose;
  - each decision leads to a new set of choices;
  - some sequence of choices (possibly more than one) may be a solution to your problem.
• Backtracking is a methodical way of trying out various sequences of decisions, until you find one that "works".
Backtracking (animation)
[Figure: starting from the root, the search repeatedly follows a branch, hits a dead end, backs up, and tries another branch until it reaches success.]
Terminology I
• A tree is composed of nodes.
• There are three kinds of nodes: the (one) root node, internal nodes, and leaf nodes.
• Backtracking can be thought of as searching a tree for a particular "goal" leaf node, as in the sketch below.
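A hedged C sketch of this tree search on the sum-of-subsets problem, which appears in the strategy table later (the set and target are illustrative): each level of the tree decides whether to include one element, and the search backs up from dead ends:

#include <stdio.h>

int w[] = {3, 5, 6, 7};   /* illustrative set */
int n = 4, target = 15;
int chosen[4];

/* Extend a partial choice: level i decides about w[i], and `sum` is
   the weight chosen so far. Returns 1 once a goal leaf is found. */
int solve(int i, int sum) {
    if (sum == target) {                 /* goal leaf reached */
        for (int k = 0; k < i; k++)
            if (chosen[k]) printf("%d ", w[k]);
        printf("\n");
        return 1;
    }
    if (i == n || sum > target)          /* dead end: back up */
        return 0;
    chosen[i] = 1;                       /* decision: include w[i] */
    if (solve(i + 1, sum + w[i])) return 1;
    chosen[i] = 0;                       /* undo, then try excluding w[i] */
    return solve(i + 1, sum);
}

int main(void) {
    if (!solve(0, 0)) printf("no solution\n");   /* prints 3 5 7 */
    return 0;
}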
5. Branch-and-Bound (B&B)
• Branch-and-Bound (B&B) is a rather general optimization technique that applies where the greedy method and dynamic programming fail.
• The B&B design strategy is very similar to backtracking in that a state-space tree is used to solve a problem.
• Branch and bound is a systematic method for solving optimization problems.
• However, it is much slower: indeed, it often leads to exponential time complexities in the worst case.
• On the other hand, if applied carefully, it can lead to algorithms that run reasonably fast on average.
5. Branch-and-Bound (B&B)
The general idea of B&B is a BFS-like search for the optimal solution, but not all nodes get expanded (i.e., their children generated). Rather, a carefully selected criterion determines which node to expand and when, and another criterion tells the algorithm when an optimal solution has been found.
Branch and Bound (B&B) is the most widely used tool for solving large-scale NP-hard combinatorial optimization problems. The following table summarizes these design strategies with some common problems that follow each of them.
Design strategy        Problems that follow it
Divide & Conquer       • Binary search
                       • Multiplication of two n-bit numbers
                       • Quick Sort
                       • Heap Sort
                       • Merge Sort
Greedy Method          • Knapsack (fractional) problem
                       • Minimum-cost spanning tree (Kruskal's algorithm, Prim's algorithm)
                       • Single-source shortest path problem (Dijkstra's algorithm)
Dynamic Programming    • All-pairs shortest paths (Floyd's algorithm)
                       • Chain matrix multiplication
                       • Longest common subsequence (LCS)
                       • 0/1 knapsack problem
                       • Traveling salesman problem (TSP)
Backtracking           • N-queens problem
                       • Sum of subsets
Branch & Bound         • Assignment problem
                       • Traveling salesman problem (TSP)
How do you look up a name in the phone book?
One possible way (a C sketch follows):

Search:
    middle page = (first page + last page) / 2
    Go to middle page;
    If (name is on middle page)
        done;                      // this is the base case
    else if (name is alphabetically before middle page)
        last page = middle page    // redefine search area to front half
        Search                     // same process on reduced number of pages
    else                           // name must be after middle page
        first page = middle page   // redefine search area to back half
        Search                     // same process on reduced number of pages
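A hedged C sketch of the same idea on a sorted array of strings (the function name and parameters are illustrative; it uses an iterative loop in place of the recursive calls above):

#include <string.h>

/* Binary search for `name` in the sorted array names[0..n-1];
   returns its index, or -1 if it is absent. */
int lookup(const char *names[], int n, const char *name) {
    int first = 0, last = n - 1;
    while (first <= last) {
        int middle = (first + last) / 2;
        int cmp = strcmp(name, names[middle]);
        if (cmp == 0)
            return middle;          /* found: the base case */
        else if (cmp < 0)
            last = middle - 1;      /* search area reduced to front half */
        else
            first = middle + 1;     /* search area reduced to back half */
    }
    return -1;
}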
Recursion
• Recursion: a definition in terms of itself.
• A recursive algorithm is one in which objects are defined in terms of other objects of the same type.
• Advantages:
  - Simplicity of code
  - Easy to understand
• Disadvantages:
  - Memory
  - Speed
  - Possibly redundant work
Recursion
• A recursive method must have at least one base, or stopping, case.
• A base case does not execute a recursive call; it stops the recursion.
• Each successive call to itself must be a "smaller version of itself": an argument that describes a smaller problem, so that a base case is eventually reached.
Key Components of a Recursive Algorithm Design
• What is the smaller identical problem (or problems)? (Decomposition)
• How are the answers to smaller problems combined to form the answer to the larger problem? (Composition)
• Which is the smallest problem that can be solved easily, without further decomposition? (Base/stopping case)
Factorial (N!)

N! = (N-1)! * N   [for N > 1]
1! = 1

3! = 2! * 3
   = (1! * 2) * 3
   = 1 * 2 * 3 = 6

Recursive design:
• Decomposition: (N-1)!
• Composition: * N
• Base case: 1!
Recursive Algorithms

long factorial(int n) {
    if (n <= 0) return 1;          /* base case */
    return n * factorial(n - 1);   /* recursive case */
}

Compute 5!. The calls wind down to the base case:
f(5) = 5 · f(4)
f(4) = 4 · f(3)
f(3) = 3 · f(2)
f(2) = 2 · f(1)
f(1) = 1 · f(0)
f(0) = 1
and the results are multiplied on the way back up:
f(1) = 1 · 1 = 1
f(2) = 2 · 1 = 2
f(3) = 3 · 2 = 6
f(4) = 4 · 6 = 24
f(5) = 5 · 24 = 120
Return 5! = 120.
Iterative vs. Recursive
• Iterative (the function does NOT call itself):
  factorial(n) = 1                                if n = 0
  factorial(n) = n × (n-1) × (n-2) × … × 2 × 1    if n > 0
• Recursive (the function calls itself):
  factorial(n) = 1                     if n = 0
  factorial(n) = n × factorial(n-1)    if n > 0
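For contrast with the recursive version above, a minimal iterative C sketch (the function name is illustrative):

/* Iterative factorial: a counting loop replaces the chain of calls. */
long factorial_iter(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++)   /* multiply 2 * 3 * ... * n */
        result *= i;
    return result;                 /* returns 1 for n <= 1 */
}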
The End