Reg. No:…………………………….
KIGALI INSTITUTE OF SCIENCE AND TECHNOLOGY
INSTITUT DES SCIENCES ET TECHNOLOGIE
Avenue de l'Armée, B.P. 3900 Kigali, Rwanda
INSTITUTE EXAMINATIONS – ACADEMIC YEAR 2013
END OF SEMESTER EXAMINATION: SUPPLEMENTARY EXAM
FACULTY OF SCIENCE
DEPARTMENT OF APPLIED MATHEMATICS
FOURTH YEAR SEMESTER 2 (Year 4 Math - Statistics, Year 4 Pure Math)
ALGORITHMS DESIGN AND ANALYSIS [MAT 3426]
DATE:
/ /2012
TIME: 3 HOURS
MAXIMUM MARKS = 60
INSTRUCTIONS
1. This paper contains Four (4) questions.
2. Answer the one compulsory question in Section A and any two (2) questions in
Section B.
3. The compulsory question carries 20 Marks and each question in Section B carries
20 Marks.
4. No written materials are allowed.
5. Do not forget to write your Registration Number.
6. Do not write any answers on this question paper.
SECTION A
a) Discuss algorithms in detail and give examples
[10 Marks]
b) Demonstrate the Asymptotic Bounds in terms of Ο-Notation and Ω-Notation
[10 Marks]
SECTION B
Question1:
[20 Marks]
Using the greedy algorithm approach, make change for a given amount using the smallest
possible number of coins.
Coins available are:
dollars (100 cents)
quarters (25 cents)
dimes (10 cents)
nickels (5 cents)
pennies (1 cent)
Question2:
Using Asymptotic analysis, demonstrate the running times of insertion sort algorithm are:
- Worst case: T(n) = Θ(n²)
- Best case: T(n) = Θ(n)
[20 Marks]
Question3:
Using Asymptotic analysis, demonstrate the running times of Merge sort algorithm are:
- Worst case: T(n) = Θ(n log n)
- Best case: T(n) = Θ(1)
[20 Marks]
Question4:
a) Discuss the Divide-and-Conquer algorithm
[10 Marks]
b) Give the pseudocode and an implementation of Bubble Sort
[10 Marks]
SUPPLEMENTARY MARKING SCHEME
ALGORITHMS DESIGN AND ANALYSIS [MAT 3426]
SECTION A
a) Discuss algorithms in detail and give examples
An algorithm is a set of rules for carrying out a calculation either by hand or on a machine.
An algorithm is a finite step-by-step procedure to achieve a required result.
An algorithm is a sequence of computational steps that transform the input into the
output.
An algorithm is a sequence of operations performed on data that have to be organized in
data structures.
An algorithm is an abstraction of a program to be executed on a physical machine (model
of Computation).
b) Demonstrate the Asymptotic Bounds in terms of Ο-Notation and Ω-Notation
[10 Marks]
Ο-Notation (Upper Bound)
This notation gives an upper bound for a function to within a constant factor. We write
f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0, the value
of f(n) always lies on or below c g(n).
In the set notation, we write as follows: For a given function g(n), the set of functions
Ο(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n
≥ n0}
We say that the function g(n) is an asymptotic upper bound for the function f(n). We use
Ο-notation to give an upper bound on a function, to within a constant factor.
Graphically, for all values of n to the right of n0, the value of the function f(n) is on or
below g(n). We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set
Ο(g(n)) i.e.
f(n) ∈ Ο(g(n))
Note that f(n) = Θ(g(n)) implies f(n) = Ο(g(n)), since Θ-notation is a stronger notation
than Ο-notation.
Example: 2n² = Ο(n³), with c = 1 and n0 = 2.
Equivalently, we may also define "f is of order g" as follows:
If f(n) and g(n) are functions defined on the positive integers, then f(n) is Ο(g(n)) if and
only if there are constants c > 0 and n0 > 0 such that
| f(n) | ≤ c | g(n) | for all n ≥ n0
Historical Note: The notation was introduced in 1892 by the German mathematician Paul
Bachmann.
Ω-Notation (Lower Bound)
This notation gives a lower bound for a function to within a constant factor. We write f(n)
= Ω(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of
f(n) always lies on or above c g(n).
In the set notation, we write as follows: For a given function g(n), the set of functions
Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n
≥ n0}
We say that the function g(n) is an asymptotic lower bound for the function f(n).
The intuition behind Ω-notation is symmetric to that of Ο-notation: to the right of n0, the
value of f(n) lies on or above c g(n).
Example: √n = Ω(lg n), with c = 1 and n0 = 16.
SECTION B
Question1:
[20 Marks]
Using the greedy algorithm approach, make change for a given amount using the smallest
possible number of coins.
Coins available are:
dollars (100 cents)
quarters (25 cents)
dimes (10 cents)
nickels (5 cents)
pennies (1 cent)
Make change for n units using the least possible number of coins.
MAKE-CHANGE (n)
    C ← {100, 25, 10, 5, 1}   // constant: the available denominations
    S ← {}                    // multiset that will hold the solution
    sum ← 0                   // sum of the items in the solution set
    WHILE sum ≠ n DO
        x ← largest item in C such that sum + x ≤ n
        IF no such item THEN
            RETURN "No Solution"
        S ← S ∪ {x}
        sum ← sum + x
    RETURN S
Example: Make change for $2.89 (289 cents); here n = 289 and the solution
contains 2 dollars, 3 quarters, 1 dime and 4 pennies. The algorithm is greedy because
at every stage it chooses the largest coin without worrying about the consequences.
Moreover, it never changes its mind in the sense that once a coin has been included in
the solution set, it remains there.
Question2:
Using Asymptotic analysis, demonstrate the running times of insertion sort algorithm are:
- Worst case: T(n) = Θ(n²)
- Best case: T(n) = Θ(n)
[20 Marks]
Algorithm: Insertion Sort
We use a procedure INSERTION_SORT. It takes as parameters an array A[1 .. n] and the length
n of the array. The array A is sorted in place: the numbers are rearranged within the array, with at
most a constant number of them stored outside the array at any time.
INSERTION_SORT (A)
1. FOR j ← 2 TO length[A]
2.     DO key ← A[j]
3.         {Put A[j] into the sorted sequence A[1 .. j − 1]}
4.         i ← j − 1
5.         WHILE i > 0 and A[i] > key
6.             DO A[i + 1] ← A[i]
7.                 i ← i − 1
8.         A[i + 1] ← key
Example: Following figure (from CLRS) shows the operation of INSERTION-SORT on the
array A= (5, 2, 4, 6, 1, 3). Each part shows what happens for a particular iteration with the value
of j indicated. j indexes the "current card" being inserted into the hand.
Read the figure row by row. Elements to the left of A[j] that are greater than A[j] move one
position to the right, and A[j] moves into the vacated position.
Analysis
Since the running time of an algorithm on a particular input is the number of steps executed, we
must define "step" independent of machine. We say that a statement that takes ci steps to execute
and executed n times contributes cin to the total running time of the algorithm. To compute the
running time, T(n), we sum the products of the cost and times column. That is, the running time
of the algorithm is the sum of running times for each statement executed. So, we have
T(n) = c1 n + c2 (n − 1) + 0·(n − 1) + c4 (n − 1) + c5 ∑2 ≤ j ≤ n ( tj ) + c6 ∑2 ≤ j ≤ n (tj − 1)
+ c7 ∑2 ≤ j ≤ n (tj − 1) + c8 (n − 1)
Here tj is the number of times the while-loop test (in line 5) is executed for that value of j; the
comment in line 3 costs nothing, hence the 0·(n − 1) term. Note that j runs from 2 to n.
Dropping the zero term, we have
T(n) = c1 n + c2 (n − 1) + c4 (n − 1) + c5 ∑2 ≤ j ≤ n ( tj ) + c6 ∑2 ≤ j ≤ n (tj − 1) + c7 ∑2 ≤ j ≤ n (tj − 1)
+ c8 (n − 1) Equation (1)
Best-Case
The best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we find that A[i] is less
than or equal to the key when i has its initial value of (j − 1). In other words, when i = j − 1, we
always find A[i] ≤ key the first time the WHILE loop test is run.
Therefore, tj = 1 for j = 2, 3, ..., n and the best-case running time can be computed using equation
(1) as follows:
T(n) = c1 n + c2 (n − 1) + c4 (n − 1) + c5 ∑2 ≤ j ≤ n (1) + c6 ∑2 ≤ j ≤ n (1 − 1) + c7 ∑2 ≤ j ≤ n (1 − 1) + c8
(n − 1)
T(n) = c1 n + c2 (n − 1) + c4 (n − 1) + c5 (n − 1) + c8 (n − 1)
T(n) = (c1 + c2 + c4 + c5 + c8) n − (c2 + c4 + c5 + c8)
This running time can be expressed as an + b for constants a and b that depend on the statement
costs ci. Therefore, T(n) is a linear function of n.
The punch line here is that the while-loop in line 5 is executed only once for each j. This happens
when the given array A is already sorted.
T(n) = an + b = Θ (n)
It is a linear function of n.
Worst-Case
The worst case occurs if the array is sorted in reverse order, i.e., in decreasing order. In
reverse order, we always find that A[i] is greater than the key in the while-loop test. So we must
compare each element A[j] with every element of the sorted subarray A[1 .. j − 1], and the loop
finally exits because i reaches 0, giving one additional test after the (j − 1) comparisons.
Therefore, tj = j for j = 2, 3, ..., n, and the worst-case running time can be computed using
equation (1) as follows:
T(n) = c1 n + c2 (n − 1) + c4 (n − 1) + c5 ∑2 ≤ j ≤ n ( j ) + c6 ∑2 ≤ j ≤ n (j − 1) + c7 ∑2 ≤ j ≤ n (j − 1)
+ c8 (n − 1)
Using the summations ∑2 ≤ j ≤ n ( j ) = n(n + 1)/2 − 1 and ∑2 ≤ j ≤ n (j − 1) = n(n − 1)/2
(CLRS, page 27), we have
T(n) = c1 n + c2 (n − 1) + c4 (n − 1) + c5 [n(n + 1)/2 − 1] + c6 [n(n − 1)/2] + c7 [n(n − 1)/2]
+ c8 (n − 1)
T(n) = (c5/2 + c6/2 + c7/2) n² + (c1 + c2 + c4 + c5/2 − c6/2 − c7/2 + c8) n − (c2 + c4 + c5 + c8)
This running time can be expressed as (an² + bn + c) for constants a, b, and c that again depend
on the statement costs ci. Therefore, T(n) is a quadratic function of n.
Here the punch line is that the worst case occurs when line 5 is executed j times for each j. This
can happen if array A starts out in reverse order.
T(n) = an² + bn + c = Θ(n²)
It is a quadratic function of n.
Question3:
Using Asymptotic analysis, demonstrate the running times of Merge sort algorithm are:
- Worst case: T(n) = Θ(n log n)
- Best case: T(n) = Θ(1)
[20 Marks]
Analyzing Merge Sort
For simplicity, assume that n is a power of 2 so that each divide step yields two subproblems,
both of size exactly n/2.
The base case occurs when n = 1.
When n ≥ 2, time for merge sort steps:
Divide: Just compute q as the average of p and r, which takes constant time i.e. Θ(1).
Conquer: Recursively solve 2 subproblems, each of size n/2, which is 2T(n/2).
Combine: MERGE on an n-element subarray takes Θ(n) time.
Summed together, the divide and combine steps give a function that is linear in n, which is Θ(n).
Therefore, the recurrence for the merge sort running time is
T(n) = Θ(1) if n = 1, and T(n) = 2T(n/2) + Θ(n) if n > 1.
Solving the Merge Sort Recurrence
By the master theorem in CLRS-Chapter 4 (page 73), we can show that this recurrence has the
solution
T(n) = Θ(n lg n).
Reminder: lg n stands for log2 n.
Compared to insertion sort [Θ(n²) worst-case time], merge sort is faster. Trading a factor of n for
a factor of lg n is a good deal. On small inputs, insertion sort may be faster. But for large enough
inputs, merge sort will always be faster, because its running time grows more slowly than
insertion sort's.
Recursion Tree
We can understand how to solve the merge-sort recurrence without the master theorem. There is
a drawing of recursion tree on page 35 in CLRS, which shows successive expansions of the
recurrence.
The following figure (Figure 2.5b in CLRS) shows that for the original problem, we have a cost
of cn, plus the two subproblems, each costing T (n/2).
The following figure (Figure 2.5c in CLRS) shows that for each of the size-n/2 subproblems, we
have a cost of cn/2, plus two subproblems, each costing T (n/4).
The following figure (Figure 2.5d in CLRS) continues expanding until the problem sizes
get down to 1.
In the above recursion tree, each level has cost cn.
The top level has cost cn.
The next level down has 2 subproblems, each contributing cost cn/2.
The next level has 4 subproblems, each contributing cost cn/4.
Each time we go down one level, the number of subproblems doubles but the cost per
subproblem halves. Therefore, cost per level stays the same.
The height of this recursion tree is lg n and there are lg n + 1 levels.
Mathematical Induction
We use induction on the size of a given subproblem n.
Base case: n = 1. The tree has 1 level, and lg 1 + 1 = 0 + 1 = 1.
Inductive Step
Our inductive hypothesis is that a tree for a problem of size 2^i has lg 2^i + 1 = i + 1 levels.
Because we assume that the problem size is a power of 2, the next problem size up after 2^i is
2^(i+1). A tree for a problem of size 2^(i+1) has one more level than the size-2^i tree, implying
i + 2 levels. Since lg 2^(i+1) + 1 = i + 2, we are done with the inductive argument.
Total cost is sum of costs at each level of the tree. Since we have lg n +1 levels, each costing cn,
the total cost is
cn lg n + cn.
Ignoring the low-order term cn and the constant coefficient c, we have
Θ(n lg n), which is the desired result.
Question4:
a) Discuss the Divide-and-Conquer algorithm
[10 Marks]
Divide-and-conquer is a top-down technique for designing algorithms that consists
of dividing the problem into smaller subproblems hoping that the solutions of the
subproblems are easier to find and then composing the partial solutions into the
solution of the original problem.
A little more formally, the divide-and-conquer paradigm consists of the following major
phases:
Breaking the problem into several sub-problems that are similar to the original
problem but smaller in size,
Solving the sub-problems recursively (successively and independently), and then
Combining these solutions to the sub-problems to create a solution to the original
problem.
b) Give the pseudocode and an implementation of Bubble Sort
[10 Marks]
Pseudocode
SEQUENTIAL BUBBLESORT (A)
    for i ← 1 to length[A] do
        for j ← length[A] downto i + 1 do
            if A[j] < A[j − 1] then
                exchange A[j] ↔ A[j − 1]
Implementation
void bubbleSort(int numbers[], int array_size)
{
    int i, j, temp;

    for (i = array_size - 1; i >= 0; i--)
    {
        /* One pass: the largest element among numbers[0..i]
           bubbles up into position i. */
        for (j = 1; j <= i; j++)
        {
            if (numbers[j - 1] > numbers[j])
            {
                /* swap adjacent out-of-order elements */
                temp = numbers[j - 1];
                numbers[j - 1] = numbers[j];
                numbers[j] = temp;
            }
        }
    }
}