DAA- Unit IV
By
Dr. A.S.Alvi
The shortest path in multistage graphs
Let c[i][j] be the minimum cost of a path from node j in stage i to the sink. Then
c[i][j] = min { cost(j,l) + c[i+1][l] } over all l in Vi+1 with (j,l) in E.
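The backward recurrence above can be sketched in Python. The stage layout and edge weights below are a small hypothetical example for illustration, not the graph from the lecture figure.

```python
INF = float("inf")

def multistage_shortest_path(stages, edges):
    """stages: list of node-lists, one per stage (last stage = [sink]).
    edges: dict mapping (u, v) -> cost, with v in the stage after u.
    Returns (min cost from each node to the sink, next-node-on-best-path)."""
    sink = stages[-1][0]
    cost = {sink: 0}
    nxt = {}
    # c[i][j] = min over edges (j, l) of cost(j, l) + c[i+1][l], filled backwards.
    for i in range(len(stages) - 2, -1, -1):
        for j in stages[i]:
            cost[j] = INF
            for l in stages[i + 1]:
                if (j, l) in edges and edges[(j, l)] + cost[l] < cost[j]:
                    cost[j] = edges[(j, l)] + cost[l]
                    nxt[j] = l
    return cost, nxt

# Hypothetical 4-stage instance (names and weights invented for illustration).
stages = [["S"], ["A", "B"], ["D", "E"], ["T"]]
edges = {("S", "A"): 4, ("S", "B"): 2, ("A", "D"): 1, ("A", "E"): 11,
         ("B", "D"): 9, ("B", "E"): 5, ("D", "T"): 18, ("E", "T"): 2}
cost, nxt = multistage_shortest_path(stages, edges)
# cost["S"] == 9, along the path S -> B -> E -> T
```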
[Figure: a multistage graph with source S, stage nodes A–F, sink T, and edge weights 1, 2, 4, 5, 9, 11, 13, 16, 18.]
The Traveling-salesman Problem:
A salesman needs to visit n cities in such a manner that every city is visited exactly once, and at the end he returns to the city he started from, with minimum cost.
Suppose the cities are x1, x2, ..., xn, where cij denotes the cost of travelling from city xi to xj. The travelling salesman problem is to find a route starting and ending at xi that takes in all the cities with minimum cost.
Greedy Strategy for the Travelling Salesman Problem
Start with an arbitrary city xi, then go to the minimum-cost city reachable from xi. Repeat this, at every step selecting the minimum-weight edge to a city not yet visited, until all the cities are visited.
Ex. A newspaper agent daily delivers newspapers in his assigned area in such a manner that he covers all the houses in that area with minimum travel cost. Compute the minimum travel cost.
The Traveling Salesman Problem
Directed Graph & its cost Matrix
[Figure: a directed graph on four vertices 1–4 and its 4×4 cost matrix, with edge costs drawn from 2, 3, 4, 5, 6, 7, 8, 9, 10.]
The Traveling-salesman Problem:
Hamilton path
A path that visits each vertex of the graph once and only once.
Hamilton circuit
A circuit that visits each vertex of the graph once and only once (at
the end, of course, the circuit must return to the starting vertex).
The Traveling Salesman Problem
Hamilton Circuit
Figure (a) shows a graph that has Hamilton
circuits. One such Hamilton circuit is A, F,
B, C, G, D, E, A. Note that once a graph has
a Hamilton circuit, it automatically has a
Hamilton path. The Hamilton circuit can
always be truncated into a Hamilton path by
dropping the last vertex of the circuit.
Algorithm 1: The Brute-Force Algorithm
Step 1. Make a list of all the possible Hamilton circuits of the
graph.
Step 2. For each Hamilton circuit calculate its total weight (add
the weights of all the edges in the circuit).
Step 3. Choose an optimal circuit (there is always more than
one optimal circuit to choose from!).
Example 1: Traveling Salesman Problem
Given n cities with known distances between each pair, find
the shortest tour that passes through all the cities exactly once
before returning to the starting city
Alternatively: Find shortest Hamiltonian circuit in a weighted
connected graph
Example: the complete graph on vertices a, b, c, d with edge weights
w(a,b) = 2, w(a,c) = 8, w(a,d) = 5, w(b,c) = 3, w(b,d) = 4, w(c,d) = 7.
TSP by Exhaustive Search

Tour                Cost
a→b→c→d→a           2+3+7+5 = 17
a→b→d→c→a           2+4+7+8 = 21
a→c→b→d→a           8+3+4+5 = 20
a→c→d→b→a           8+7+4+2 = 21
a→d→b→c→a           5+4+3+8 = 20
a→d→c→b→a           5+7+3+2 = 17

The optimal tours (cost 17) are a→b→c→d→a and its reverse a→d→c→b→a.
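The exhaustive search in this table can be written directly: fix the start at a and try every ordering of the remaining cities. The edge weights are the ones from the example above.

```python
from itertools import permutations

# Edge weights of the 4-city example (undirected).
w = {("a", "b"): 2, ("a", "c"): 8, ("a", "d"): 5,
     ("b", "c"): 3, ("b", "d"): 4, ("c", "d"): 7}

def weight(u, v):
    return w.get((u, v), w.get((v, u)))

def brute_force_tsp(cities, start):
    """Try all (n-1)! tours starting and ending at `start`."""
    best_cost, best_tour = float("inf"), None
    others = [c for c in cities if c != start]
    for perm in permutations(others):
        tour = (start,) + perm + (start,)
        cost = sum(weight(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

cost, tour = brute_force_tsp(["a", "b", "c", "d"], "a")
# cost == 17, matching tours a-b-c-d-a and a-d-c-b-a in the table
```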
Algorithm 2: The Nearest-Neighbor Algorithm
Step 1. Start at the designated starting vertex. If there is no designated starting vertex, pick any vertex.
Step 2. From the starting vertex go to its nearest neighbor (the vertex for which the corresponding edge has the smallest weight).
Step 3. From each vertex go to its nearest neighbor, choosing only among the vertices that haven't yet been visited. (If there is more than one, choose at random.) Keep doing this until all the vertices have been visited; then return to the starting vertex.
[Figure: eight areas H1–H8 connected by roads with the travel costs below.]
The Traveling-salesman Problem:

     H1  H2  H3  H4  H5  H6  H7  H8
H1    0   5   0   6   0   4   0   7
H2    5   0   2   4   3   0   0   0
H3    0   2   0   1   0   0   0   0
H4    6   4   1   0   7   0   0   0
H5    0   3   0   7   0   0   6   4
H6    4   0   0   0   0   0   3   0
H7    0   0   0   0   6   3   0   2
H8    7   0   0   0   4   0   2   0

(A 0 entry means there is no direct road between the two areas.)
The Traveling-salesman Problem:
The tour starts from area H1 and then selects the minimum-cost area reachable from H1.
The Traveling-salesman Problem:
The resulting tour is H1 → H6 → H7 → H8 → H5 → H2 → H3 → H4 → H1.
Thus, the minimum travel cost = 4 + 3 + 2 + 4 + 3 + 2 + 1 + 6 = 25.
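The nearest-neighbor walk on this cost matrix can be sketched as follows; the matrix is the H1–H8 table above, with 0 treated as "no direct road".

```python
INF = float("inf")
houses = ["H1", "H2", "H3", "H4", "H5", "H6", "H7", "H8"]
C = [
    [0, 5, 0, 6, 0, 4, 0, 7],
    [5, 0, 2, 4, 3, 0, 0, 0],
    [0, 2, 0, 1, 0, 0, 0, 0],
    [6, 4, 1, 0, 7, 0, 0, 0],
    [0, 3, 0, 7, 0, 0, 6, 4],
    [4, 0, 0, 0, 0, 0, 3, 0],
    [0, 0, 0, 0, 6, 3, 0, 2],
    [7, 0, 0, 0, 4, 0, 2, 0],
]

def nearest_neighbor(cost, start=0):
    n = len(cost)
    tour, total = [start], 0
    visited = {start}
    cur = start
    while len(visited) < n:
        # pick the cheapest reachable unvisited area (0 means no edge)
        nxt = min((j for j in range(n)
                   if j not in visited and cost[cur][j] > 0),
                  key=lambda j: cost[cur][j])
        total += cost[cur][nxt]
        visited.add(nxt)
        tour.append(nxt)
        cur = nxt
    total += cost[cur][start]   # return to the starting area
    tour.append(start)
    return tour, total

tour, total = nearest_neighbor(C)
# total == 25, tour H1 H6 H7 H8 H5 H2 H3 H4 H1, as on the slide
```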
The Traveling-salesman Problem:
Approximation-TSP
Input: a complete graph G = (V, E)
Output: a Hamiltonian cycle
1. Select a "root" vertex r ∈ V[G].
2. Use MST-Prim(G, c, r) to compute a minimum spanning tree T from r.
3. Let L be the sequence of vertices visited in a preorder tree walk of T.
4. Return the Hamiltonian cycle H that visits the vertices in the order L.
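The four steps can be sketched as below, a minimal version using Prim's algorithm and a recursive preorder walk. The distance matrix reused here is the earlier a, b, c, d example relabeled 0–3; the 2-approximation guarantee applies when the weights satisfy the triangle inequality.

```python
import heapq

def approx_tsp(n, cost, root=0):
    """MST-based approximation: Prim from `root`, then a preorder walk."""
    # --- MST-Prim ---
    in_tree = [False] * n
    children = {i: [] for i in range(n)}
    heap = [(0, root, None)]            # (edge weight, vertex, parent)
    while heap:
        d, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue                    # stale entry
        in_tree[u] = True
        if p is not None:
            children[p].append(u)
        for v in range(n):
            if not in_tree[v] and v != u:
                heapq.heappush(heap, (cost[u][v], v, u))
    # --- preorder walk of the tree gives the visiting order L ---
    order = []
    def walk(u):
        order.append(u)
        for c in children[u]:
            walk(c)
    walk(root)
    order.append(root)                  # close the Hamiltonian cycle
    total = sum(cost[order[i]][order[i + 1]] for i in range(len(order) - 1))
    return order, total

# 4-city example: vertices 0..3 stand for a, b, c, d.
cost = [[0, 2, 8, 5],
        [2, 0, 3, 4],
        [8, 3, 0, 7],
        [5, 4, 7, 0]]
order, total = approx_tsp(4, cost)
```

On this small instance the preorder tour happens to match the optimum of 17; in general the approximation is only guaranteed to be within twice the optimum.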
The Traveling-salesman Problem:
The traveling salesman problem consists of a salesman and a set
of cities. The salesman has to visit each one of the cities starting
from a certain one (e.g. the hometown) and returning to the
same city. The challenge of the problem is that the traveling
salesman wants to minimize the total length of the trip.
The traveling salesman problem can be described as follows:
TSP = {(G, f, t): G = (V, E) a complete graph,
f is a function V×V → Z,
t ∈ Z,
G is a graph that contains a traveling salesman tour with cost
that does not exceed t}.
Longest Common Sub-sequence (LCS)
Dr. A.S.Alvi
Longest Common Sub-sequence (LCS)
Sub-sequence
A subsequence of a string S is a sequence of characters that appear in left-to-right order, but not necessarily consecutively.
Example
For the string ACTTGCG:
ACT, ATTC, T, ACTTGC are all subsequences.
TTA is not a subsequence.
Longest Common Sub-sequence (LCS)
Given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn>, find the longest subsequence Z = <z1, z2, ..., zk> that is common to X and Y.
For example, if X = <A,B,C,B,D,A,B> and Y = <B,D,C,A,B,A>, then some common sub-sequences are
i. <A>  ii. <B>  iii. <C>  iv. <D>  v. <A,A>  vi. <B,B>  vii. <B,C,A>  viii. <B,C,B,A> .......
Longest Common Sub-sequence (LCS)
A common subsequence of two strings is a subsequence that appears in both strings. A longest common subsequence is a common subsequence of maximal length.
Example
S1 = AAACCGTGAGTTATTCGTTCTAGAA
S2 = CACCCCTAAGGTACCTTTGGTTC
An LCS is ACCTAGTACTTTG.
Longest Common Subsequence
Given strings A and B with |A| = n and |B| = m, LCS(A,B) is a maximum-length string that is a subsequence of A and a subsequence of B.
Example: A = adeftcg, B = cedetfg, LCS(A,B) = detg.

Three Variants
LCS length: the length of an LCS (here, 4).
LCS string: a string that is an LCS (here, "detg").
LCS embedding: the positions of an LCS in A and in B (here, ({2,3,5,7}, {3,4,5,7})).
Brute-force LCS algorithm
For every subsequence of X = <x1, ..., xm>, check whether it is a subsequence of Y = <y1, ..., yn>. Since X has 2^m subsequences, this takes exponential time.
Optimal substructure
Notation: X = <x1, ..., xm>, Y = <y1, ..., yn>.
Theorem. Let Z = <z1, ..., zk> be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm ⇒ Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn ⇒ Z is an LCS of X and Yn-1.
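The optimal substructure above leads directly to the standard dynamic-programming solution, sketched here with a backtracking step that recovers one LCS string:

```python
def lcs(X, Y):
    """c[i][j] = length of an LCS of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:           # case 1: xm = yn
                c[i][j] = c[i - 1][j - 1] + 1
            else:                              # cases 2 and 3
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Reconstruct one LCS by walking back from c[m][n].
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

result = lcs("adeftcg", "cedetfg")   # an LCS of length 4
```

Note that an LCS need not be unique; the backtrack returns one of the possibly several longest common subsequences.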
Matrix Multiplication
Dr. A.S.Alvi
Matrix-chain Multiplication
Suppose we have a sequence (chain) A1, A2, ..., An of n matrices to be multiplied; that is, we want to compute the product A1A2...An.
There are many possible ways (parenthesizations) to compute the product.
Matrix Multiplication
Example: consider the chain A1, A2, A3, A4 of 4 matrices, and let us compute the product A1A2A3A4. There are 5 possible ways:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)
Matrix chain multiplication
Consider M(6×2) · M(2×5) · M(5×20).
(M(6×2) M(2×5)) M(5×20): cost = 6×2×5 + 6×5×20 = 60 + 600 = 660
M(6×2) (M(2×5) M(5×20)): cost = 2×5×20 + 6×2×20 = 200 + 240 = 440
With different parenthesizations, costs are different.
Matrix-Chain Multiplication
To compute the number of scalar
multiplications necessary, we must know:
Algorithm to multiply two matrices
Matrix dimensions
Matrix-chain Multiplication ............... contd
Example: Consider three matrices A(10×100), B(100×5), and C(5×50).
There are 2 ways to parenthesize:
((AB)C) = D(10×5) · C(5×50)
  AB: 10·100·5 = 5,000 scalar multiplications
  DC: 10·5·50 = 2,500 scalar multiplications
  Total: 7,500
(A(BC)) = A(10×100) · E(100×50)
  BC: 100·5·50 = 25,000 scalar multiplications
  AE: 10·100·50 = 50,000 scalar multiplications
  Total: 75,000
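The minimum over all parenthesizations can be found with the standard dynamic program, sketched here; dimensions are passed as the list p where matrix Ai is p[i-1] × p[i].

```python
def matrix_chain_cost(p):
    """Minimum number of scalar multiplications to compute A1 A2 ... An,
    where Ai has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between i and j
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

cost1 = matrix_chain_cost([10, 100, 5, 50])   # 7,500 — the ((AB)C) order
cost2 = matrix_chain_cost([6, 2, 5, 20])      # 440 — the M1(M2 M3) order
```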
Single Source Shortest Path
Dr. A.S.Alvi
Reminding single source shortest path
Given a (directed or undirected) graph G = (V, E, w) and a source node s, find for each vertex t ∈ V a path in G from s to t with minimum weight. Negative edges are not allowed.
Single-source shortest-paths problem
Given source node s to all nodes from V
Single-destination shortest-paths problem
From all nodes in V to a destination u
Single-pair shortest-path problem
Shortest path between u and v
All-pairs shortest-paths problem
Shortest paths between all pairs of nodes
Shortest-Path Variants
Single-source shortest-paths problem: Find the shortest path from
s to each vertex v.
Single-destination shortest-paths problem: Find a shortest path to
a given destination vertex t from each vertex v.
Single-pair shortest-path problem: Find a shortest path from u to
v for given vertices u and v.
All-pairs shortest-paths problem: Find a shortest path from u to v
for every pair of vertices u and v.
Optimal Substructure Property
Theorem: Subpaths of shortest paths are also shortest paths.
Let P1k = <v1, ..., vk> be a shortest path from v1 to vk.
Let Pij = <vi, ..., vj> be a subpath of P1k from vi to vj, for any i, j.
Then Pij is a shortest path from vi to vj.
Shortest-Path Problems
1. Dijkstra's Algorithm: non-negative weights.
2. Bellman-Ford Algorithm: negative weights are allowed; detects negative cycles.
Example of Bellman-Ford
[Figure: a worked Bellman-Ford run on a graph with source s and vertices a–e; edge weights include 8, 4, 7, 1, 3, 9, 5 and two −2 edges. Successive relaxation passes are shown. Note: the distance values decrease monotonically.]
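A minimal Bellman-Ford sketch follows: relax every edge |V| − 1 times, then make one extra pass to detect a negative cycle. The graph at the end is a hypothetical example, not the one from the figure above.

```python
def bellman_ford(vertices, edges, s):
    """Returns (dist, has_negative_cycle). Negative edge weights allowed."""
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    neg = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, neg

V = ["s", "a", "b", "c"]
E = [("s", "a", 4), ("s", "b", 8), ("a", "b", -2), ("b", "c", 3)]
dist, neg = bellman_ford(V, E, "s")
# dist == {"s": 0, "a": 4, "b": 2, "c": 5}, no negative cycle
```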
Shortest-paths Tree
Predecessor subgraph Gpred = (Vpred, Epred) for G = (V, E):
Vpred = { v ∈ V | pred(v) ≠ NIL } ∪ { s }
Epred = { (pred(v), v) ∈ E | v ∈ Vpred − { s } }
Relaxation method: in this method a stricter upper bound d(v) on the weight of the shortest path from s to v is achieved, maintaining d(v) and pred(v) for each vertex v.
Relaxation
• Relaxing an edge (u,v) means testing whether we can improve the shortest path to v found so far by going through u.
[Figure: two cases with d[u] = 5 and w(u,v) = 2. If d[v] = 9, Relax(u,v) lowers it to 7; if d[v] = 6, Relax(u,v) leaves it unchanged.]
Relaxation
RELAX(u, v, w)
  if d[v] > d[u] + w(u,v) then
    d[v] ← d[u] + w(u,v)
    pred(v) ← u
  end
Weight of path p = <v1, v2, ..., vk> is
w(p) = Σ (from i = 1 to k−1) w(vi, vi+1)
Shortest Path Problem
Given a weighted, directed graph G = <V, E> with edge weights w, and a path p = <v1, v2, ..., vk> with weight w(p), find the shortest-path weight from vertex u to vertex v:
δ(u,v) = min{ w(p) : u ⇝p v }   if there is a path from u to v,
δ(u,v) = ∞                      otherwise.
Basic Operation: Relaxation
Maintain a shortest-path estimate d[v] for each node v, initialized to d[s] = 0 and d[v] = ∞ otherwise.
Intuition: do we get a shorter path to v if we use edge (u,v)?
Algorithms will repeatedly apply Relax; they differ in the order of Relax operations.
All-Pair Shortest Path
Shortest paths between all pairs of nodes
The algorithms use an adjacency matrix W to represent the graph, where
wij = 0                                  if i = j,
wij = weight of the directed edge (i,j)  if i ≠ j and (i,j) ∈ E,
wij = ∞                                  if i ≠ j and (i,j) ∉ E.
All-Pair Shortest Path cont........
Distance matrix D: dij is the weight of the shortest path from i to j.
Predecessor matrix Π: πij is NIL if either i = j or there is no path from i to j; otherwise πij is the predecessor of j on the shortest path from i.
Predecessor graph: from the predecessor matrix we can derive the predecessor graph GΠ,i = (VΠ,i, EΠ,i) for each vertex i ∈ V as
VΠ,i = { j ∈ V | πij ≠ NIL } ∪ { i }
EΠ,i = { (πij, j) | j ∈ VΠ,i − { i } }
All-Pair Shortest Path cont........
[Figure: a four-vertex example graph on a, b, c, d with edge weights 1, 2, 3, shown together with its weight matrix W, distance matrix D, predecessor matrix Π, and the predecessor graphs GΠ,i.]
The structure of a shortest path:
Consider a shortest path p from vertex i to vertex j, and suppose that p contains at most m edges. Assuming that there are no negative-weight cycles, m is finite. If i = j, then p has weight 0 and no edges. If vertices i and j are distinct, then we decompose path p into i ⇝p′ k → j, where path p′ now contains at most m−1 edges. Moreover, p′ is a shortest path from i to k. Thus, we have δ(i,j) = δ(i,k) + wkj.
Recursive Solution:
Let dij(m) = the minimum weight of any path from i to j containing at most m edges. Then
dij(0) = 0   if i = j,   and   ∞   if i ≠ j;
dij(m) = min over 1 ≤ k ≤ n of { dik(m−1) + wkj }.
Let D(i) be the distance matrix after considering paths of length ≤ i.
Example:
[Figure: a small weighted graph on four vertices together with its distance matrices D(1), D(2), and D(3); the entries shrink as paths with more edges are considered.]
Example:
Figure 1: a directed graph on vertices 1–5 with edges 1→2 (3), 1→3 (8), 1→5 (−4), 2→4 (1), 2→5 (7), 3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).

D(1) =   0    3    8    ∞   −4
         ∞    0    ∞    1    7
         ∞    4    0    ∞    ∞
         2    ∞   −5    0    ∞
         ∞    ∞    ∞    6    0

D(2) =   0    3    8    2   −4
         3    0   −4    1    7
         ∞    4    0    5   11
         2   −1   −5    0   −2
         8    ∞    1    6    0

D(3) =   0    3   −3    2   −4
         3    0   −4    1   −1
         7    4    0    5   11
         2   −1   −5    0   −2
         8    5    1    6    0

D(4) =   0    1   −3    2   −4
         3    0   −4    1   −1
         7    4    0    5    3
         2   −1   −5    0   −2
         8    5    1    6    0
SLOW-ALL-PAIRS-SHORTEST-PATHS(W)
  n ← rows[W]
  D(1) ← W
  for m ← 2 to n−1 do
    D(m) ← EXTEND-SHORTEST-PATHS(D(m−1), W)
  end
  return D(n−1)
EXTEND-SHORTEST-PATHS(D, W)
  n ← rows(D)
  let D′ = (d′ij) be a new n × n matrix
  for i ← 1 to n do
    for j ← 1 to n do
      d′ij ← ∞
      for k ← 1 to n do
        d′ij ← min(d′ij, dik + wkj)
      end
    end
  end
  return D′
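The extension step and its repeated application can be sketched directly; the weight matrix below is the five-vertex graph of Figure 1.

```python
INF = float("inf")

def extend_shortest_paths(D, W):
    """One 'min-plus product' step: allow paths with one more edge."""
    n = len(D)
    Dp = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Dp[i][j] = min(Dp[i][j], D[i][k] + W[k][j])
    return Dp

def slow_all_pairs_shortest_paths(W):
    n = len(W)
    D = W
    for _ in range(2, n):        # m = 2 .. n-1
        D = extend_shortest_paths(D, W)
    return D

# Weight matrix of the five-vertex example graph (Figure 1).
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,    7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,    0]]
D = slow_all_pairs_shortest_paths(W)
# D[0] == [0, 1, -3, 2, -4]
```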
Floyd-Warshall algorithm:
The Floyd-Warshall (FW) algorithm works by successively enlarging the set of vertices that may occur as intermediate vertices on a shortest path and its sub-paths.
An intermediate vertex of a simple path p = <v1, v2, ..., vl> is any vertex of p other than v1 or vl.
A recursive solution to the all-pairs shortest paths problem:
Let dij(k) be the weight of a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, ..., k}. A recursive definition is given by
dij(k) = wij                                      if k = 0,
dij(k) = min( dij(k−1), dik(k−1) + dkj(k−1) )     if k ≥ 1.
Computing the shortest-path weights bottom up:
FLOYD-WARSHALL(W)
  n ← rows[W]
  D(0) ← W
  for k ← 1 to n do
    for i ← 1 to n do
      for j ← 1 to n do
        dij(k) ← min( dij(k−1), dik(k−1) + dkj(k−1) )
  return D(n)
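The recurrence translates into three nested loops; the sketch below runs on the weight matrix of the five-vertex graph from Figure 1.

```python
INF = float("inf")

def floyd_warshall(W):
    """d_ij^(k) = min(d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1)), computed
    on a copy of W, one allowed intermediate vertex k at a time."""
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Weight matrix of the five-vertex example graph (Figure 1).
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,    7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,    0]]
D = floyd_warshall(W)
# D[0] == [0, 1, -3, 2, -4]
```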
Floyd Warshall Algorithm - Example
Original weights.
Consider Vertex 1:
D(3,2) = D(3,1) + D(1,2)
Consider Vertex 2:
D(1,3) = D(1,2) + D(2,3)
Consider Vertex 3:
Nothing changes.
Example:
Figure 3: the same five-vertex graph as Figure 1, with edges 1→2 (3), 1→3 (8), 1→5 (−4), 2→4 (1), 2→5 (7), 3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).

D(0) =   0    3    8    ∞   −4      Π(0) =  NIL   1    1   NIL   1
         ∞    0    ∞    1    7             NIL  NIL  NIL   2    2
         ∞    4    0    ∞    ∞             NIL   3   NIL  NIL  NIL
         2    ∞   −5    0    ∞              4   NIL   4   NIL  NIL
         ∞    ∞    ∞    6    0             NIL  NIL  NIL   5   NIL

D(1) =   0    3    8    ∞   −4      Π(1) =  NIL   1    1   NIL   1
         ∞    0    ∞    1    7             NIL  NIL  NIL   2    2
         ∞    4    0    ∞    ∞             NIL   3   NIL  NIL  NIL
         2    5   −5    0   −2              4    1    4   NIL   1
         ∞    ∞    ∞    6    0             NIL  NIL  NIL   5   NIL

D(2) =   0    3    8    4   −4      Π(2) =  NIL   1    1    2    1
         ∞    0    ∞    1    7             NIL  NIL  NIL   2    2
         ∞    4    0    5   11             NIL   3   NIL   2    2
         2    5   −5    0   −2              4    1    4   NIL   1
         ∞    ∞    ∞    6    0             NIL  NIL  NIL   5   NIL

D(3) =   0    3    8    4   −4      Π(3) =  NIL   1    1    2    1
         ∞    0    ∞    1    7             NIL  NIL  NIL   2    2
         ∞    4    0    5   11             NIL   3   NIL   2    2
         2   −1   −5    0   −2              4    3    4   NIL   1
         ∞    ∞    ∞    6    0             NIL  NIL  NIL   5   NIL

D(4) =   0    3   −1    4   −4      Π(4) =  NIL   1    4    2    1
         3    0   −4    1   −1              4   NIL   4    2    1
         7    4    0    5    3              4    3   NIL   2    1
         2   −1   −5    0   −2              4    3    4   NIL   1
         8    5    1    6    0              4    3    4    5   NIL

D(5) =   0    1   −3    2   −4      Π(5) =  NIL   3    4    5    1
         3    0   −4    1   −1              4   NIL   4    2    1
         7    4    0    5    3              4    3   NIL   2    1
         2   −1   −5    0   −2              4    3    4   NIL   1
         8    5    1    6    0              4    3    4    5   NIL
Optimal Polygon Triangulation
Dr. A.S.Alvi
Optimal Polygon Triangulation
A polygon is a piecewise-linear, closed curve in the plane. That is, it is a curve ending on itself that is formed by a sequence of straight-line segments, called the sides of the polygon. A point joining two consecutive sides is called a vertex of the polygon. If the polygon is simple, as we shall generally assume, it does not cross itself. The set of points in the plane enclosed by a simple polygon forms the interior of the polygon, the set of points on the polygon itself forms its boundary, and the set of points surrounding the polygon forms its exterior.
Optimal Polygon Triangulation cont…….
A simple polygon is convex if, given any two points on its
boundary or in its interior, all points on the line segment drawn
between them are contained in the polygon's boundary or
interior.
We can represent a convex polygon by listing its vertices in counterclockwise order. That is, if P = <v0, v1, ..., vn−1> is a convex polygon, it has n sides v0v1, v1v2, ..., vn−1vn, where we interpret vn as v0.
Optimal Polygon Triangulation cont.......
Given two nonadjacent vertices vi and vj, the segment vivj is a chord of the polygon. A chord vivj divides the polygon into two polygons: <vi, vi+1, ..., vj> and <vj, vj+1, ..., vi>.
A triangulation of a polygon is a set T of chords of the polygon that divide the polygon into disjoint triangles.
In the optimal (polygon) triangulation problem, we are given a convex polygon P = <v0, v1, ..., vn−1> and a weight function w defined on triangles formed by sides and chords of P. The problem is to find a triangulation that minimizes the sum of the weights of the triangles in the triangulation.
Optimal Polygon Triangulation cont.......
A polygon is described by P = <v0, v1, v2, ..., vn−1>.
A polygon is convex if the line segment between any two points lies on the boundary or in the interior.
If v[i] and v[j] are not adjacent (i.e. there is no edge between them in the polygon), segment vivj is a chord.
Optimal Polygon Triangulation cont.......
Problem:
1. Given P = <v0, v1, v2, ..., vn−1> and a weight function w on the triangles formed by the sides and chords of P.
2. Find a triangulation T that minimizes the sum of the weights of its triangles.
3. This problem has similarities with matrix chaining.
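The matrix-chain-style dynamic program can be sketched as below. The weight function used here (the triangle's perimeter) is one common choice assumed for illustration; any w(vi, vk, vj) can be plugged in.

```python
def perimeter(a, b, c):
    """Weight of a triangle = sum of its side lengths (assumed choice)."""
    d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return d(a, b) + d(b, c) + d(c, a)

def optimal_triangulation(pts, w=perimeter):
    """pts: vertices v0..vn-1 of a convex polygon in counterclockwise order.
    dp[i][j] = min weight of triangulating the sub-polygon vi, ..., vj."""
    n = len(pts)
    dp = [[0.0] * n for _ in range(n)]
    for gap in range(2, n):                  # size of the sub-polygon
        for i in range(n - gap):
            j = i + gap
            # vertex vk forms a triangle with the chord (or side) vi vj
            dp[i][j] = min(dp[i][k] + dp[k][j] + w(pts[i], pts[k], pts[j])
                           for k in range(i + 1, j))
    return dp[0][n - 1]

# Unit square: both diagonals give the same total weight 4 + 2*sqrt(2).
total = optimal_triangulation([(0, 0), (1, 0), (1, 1), (0, 1)])
```

The recurrence mirrors matrix-chain multiplication: splitting at vk plays the role of choosing the split point k of the chain.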
The End: