Linear Programming
Jie Wang
University of Massachusetts Lowell
Department of Computer Science
J. Wang (UMass Lowell)
Linear Programming
1 / 47
Linear function:

    f(x1, x2, . . . , xn) = a1 x1 + a2 x2 + · · · + an xn,

where a1, a2, . . . , an are real numbers and x1, x2, . . . , xn are variables.
Linear equality: f (x1 , x2 , . . . , xn ) = b.
Linear inequalities:
f (x1 , x2 , . . . , xn ) > b.
f (x1 , x2 , . . . , xn ) ≥ b.
f (x1 , x2 , . . . , xn ) < b.
f (x1 , x2 , . . . , xn ) ≤ b.
Linear inequalities and linear equalities are often referred to as linear
constraints.
LP problems are to maximize (or minimize) a linear objective function
with linear constraints.
LP Standard Form
Standard form: Maximization + linear inequalities.
    maximize     x1 +  x2
    subject to  4x1 −  x2 ≤  8
                2x1 +  x2 ≤ 10
                5x1 − 2x2 ≥ −2
                x1, x2 ≥ 0.
A minimization problem can be converted to an equivalent
maximization problem, and vice versa.
For example, maximizing f (x1 , x2 , . . . , xn ) under a set of constraints is
equivalent to minimizing −f (x1 , x2 , . . . , xn ) with the same set of
constraints.
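As a concrete check of this conversion, the standard-form example above can be handed to an off-the-shelf solver. This is a sketch assuming SciPy is available; scipy.optimize.linprog minimizes, so we maximize x1 + x2 by minimizing its negation, and the "≥ −2" row is first negated into a "≤ 2" row:

```python
# Solve the standard-form example above. linprog minimizes, so the
# objective "maximize x1 + x2" becomes "minimize -(x1 + x2)".
from scipy.optimize import linprog

c = [-1, -1]                        # minimize -(x1 + x2)
A_ub = [[4, -1],                    #  4x1 -  x2 <=  8
        [2, 1],                     #  2x1 +  x2 <= 10
        [-5, 2]]                    # -5x1 + 2x2 <=  2  (the ">= -2" row, negated)
b_ub = [8, 10, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(-res.fun, res.x)              # optimum 8.0 at (x1, x2) = (2, 6)
```

Negating the result of the minimization recovers the maximum, illustrating the equivalence stated above.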
LP Slack Form
Maximization + linear equalities with slack variables.
    maximize     x1 + x2
    subject to  x3 =  8 − 4x1 +  x2
                x4 = 10 − 2x1 −  x2
                x5 =  2 + 5x1 − 2x2
                xi ≥ 0 for i = 1, 2, 3, 4, 5.
Variables on the left-hand side of the equalities are basic variables
and on the right-hand side nonbasic variables.
Initially, basic variables are slack variables, and nonbasic variables are
the original variables. These can be changed during the simplex
procedure.
The slack form turns an LP problem in a lower dimension with a set
of inequality constraints into an equivalent LP problem in a higher
dimension with a set of equality constraints. Equalities are much
easier to handle than inequalities.
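The conversion can be done mechanically. The sketch below (the helper name to_slack_form is my own) appends one slack variable per "≤" row, giving the equality system [A | I]·(x, s) = b with s ≥ 0, which is the matrix view of writing each slack variable on the left-hand side:

```python
import numpy as np

def to_slack_form(A, b):
    """Turn A x <= b into the equality system [A | I] (x, s) = b, s >= 0."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]                       # one slack variable per constraint
    return np.hstack([A, np.eye(m)]), np.asarray(b, dtype=float)

# The running example, with 5x1 - 2x2 >= -2 first negated to -5x1 + 2x2 <= 2:
A_eq, b_eq = to_slack_form([[4, -1], [2, 1], [-5, 2]], [8, 10, 2])
# At the feasible point (x1, x2) = (1, 1) the slacks are b - A x = (5, 7, 5):
x = np.array([1, 1, 5, 7, 5], dtype=float)
print(A_eq.shape, np.allclose(A_eq @ x, b_eq))   # (3, 5) True
```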
LP Applications
An LP formulation for the single-pair shortest path between s and t:

    maximize    dt
    subject to  dv ≤ du + w(u, v)   for each edge u → v ∈ E,
                ds = 0.
Note: it’s maximization, not minimization.
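To see the (perhaps surprising) maximization at work, here is a sketch on a three-node toy graph of my own, assuming SciPy is available; the LP pushes d_t upward until the edge constraints along the shortest path become tight:

```python
# Tiny instance: edges s->a (w=1), a->t (w=2), s->t (w=4); distance(s,t) = 3.
from scipy.optimize import linprog

c = [0, 0, -1]                      # variables (d_s, d_a, d_t); maximize d_t
A_ub = [[-1, 1, 0],                 # d_a - d_s <= 1   (edge s->a)
        [0, -1, 1],                 # d_t - d_a <= 2   (edge a->t)
        [-1, 0, 1]]                 # d_t - d_s <= 4   (edge s->t)
b_ub = [1, 2, 4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[1, 0, 0]], b_eq=[0],
              bounds=[(None, None)] * 3)   # the d_v are free variables
print(res.x[2])                     # 3.0, the shortest-path weight
```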
An LP formulation for maximum flow:

    maximize    ∑_{v∈V} fsv − ∑_{v∈V} fvs
    subject to  fuv ≤ c(u, v)                  for each u, v ∈ V
                ∑_{v∈V} fvu = ∑_{v∈V} fuv      for each u ∈ V − {s, t}
                fuv ≥ 0                        for each u, v ∈ V.
Minimum-Cost Flow
Each edge u → v in a flow network has a capacity c(u, v ) and a cost
a(u, v ).
Want to send d units of flow from s to t while minimizing the total
cost.
    minimize    ∑_{u→v∈E} a(u, v) fuv
    subject to  fuv ≤ c(u, v)                  for each u, v ∈ V
                ∑_{v∈V} fvu = ∑_{v∈V} fuv      for each u ∈ V − {s, t}
                ∑_{v∈V} fsv − ∑_{v∈V} fvs = d
                fuv ≥ 0                        for each u, v ∈ V.
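A sketch on a toy instance (the network and its numbers are mine, and SciPy is assumed to be available): four edges with demand d = 3; the cheap path s→a→t is filled to capacity first, and the remaining unit takes the expensive path.

```python
# Edges (capacity, cost): s->a (2, 1), s->b (2, 2), a->t (2, 1), b->t (2, 2).
from scipy.optimize import linprog

cost = [1, 2, 1, 2]                 # variables (f_sa, f_sb, f_at, f_bt)
A_eq = [[1, 0, -1, 0],              # conservation at a: f_sa = f_at
        [0, 1, 0, -1],              # conservation at b: f_sb = f_bt
        [1, 1, 0, 0]]               # demand leaving s:  f_sa + f_sb = 3
b_eq = [0, 0, 3]
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 2)] * 4)
print(res.fun)                      # 8.0: two units at cost 2 each, one at cost 4
```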
Multicommodity Flow
Given a digraph G = (V , E ) with k different commodities
K1 , K2 , . . . , Kk , where
Each edge u → v ∈ E has a capacity c(u, v ) ≥ 0.
G has no anti-parallel edges.
Ki = (si , ti , di ), representing the source, sink, and demand of
commodity i.
Let fi denote the flow for commodity i, where fiuv is the flow of
commodity i from u to v .
Let fuv = ∑_{i=1}^{k} fiuv denote the aggregate flow.
Want to determine if such a flow exists. That is, there is no objective
function to optimize.
Multicommodity Flow Continued
    minimize    0
    subject to  ∑_{i=1}^{k} fiuv ≤ c(u, v)              for each u, v ∈ V
                ∑_{v∈V} fiuv − ∑_{v∈V} fivu = 0         for each u ∈ V − {si, ti}
                ∑_{v∈V} fi,si,v − ∑_{v∈V} fi,v,si = di
                fiuv ≥ 0                                for each u, v ∈ V and
                                                        i = 1, 2, . . . , k.
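Because there is nothing to optimize, a solver can be asked to minimize the constant 0 and report only whether the constraints are satisfiable. The instance below is a toy example of my own (assuming SciPy is available): a diamond network s→a, s→b, a→t, b→t with all capacities 1, and two commodities K1 = (s, t, 1) and K2 = (s, t, 1) that together exactly saturate it.

```python
from scipy.optimize import linprog

E = ["sa", "sb", "at", "bt"]              # edge order, same for both commodities
n = len(E)                                # variables: f1 (first 4), f2 (last 4)

# Aggregate capacity: f1_uv + f2_uv <= 1 for every edge.
A_ub = [[1 if k % n == e else 0 for k in range(2 * n)] for e in range(n)]
b_ub = [1] * n

A_eq, b_eq = [], []
for off in (0, n):                        # one block of constraints per commodity
    conserve_a = [0] * (2 * n)            # flow through a: f_sa - f_at = 0
    conserve_a[off + 0], conserve_a[off + 2] = 1, -1
    conserve_b = [0] * (2 * n)            # flow through b: f_sb - f_bt = 0
    conserve_b[off + 1], conserve_b[off + 3] = 1, -1
    demand = [0] * (2 * n)                # net flow out of the source s is d_i = 1
    demand[off + 0] = demand[off + 1] = 1
    A_eq += [conserve_a, conserve_b, demand]
    b_eq += [0, 0, 1]

res = linprog([0] * (2 * n), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.status)                         # 0: a feasible multicommodity flow exists
```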
Feasible Solutions and Feasible Regions
Feasible solutions are values of the variables that satisfy the
constraints.
Feasible region (a.k.a. simplex) is the set of feasible solutions,
which is a convex polytope.
A convex polytope is a geometric shape in d-dimensional space without
any dents.
LP Algorithms
The optimal solution occurs at the boundary of the simplex.
Could be at exactly one vertex. In this case there is only one optimal
solution.
Could be at a line segment. In this case there are multiple optimal
solutions.
The simplex algorithm, while having exponential runtime in the worst
case, is very efficient in practice.
Other algorithms include (1) the ellipsoid method, the first known
polynomial-time algorithm for LP; and (2) the interior-point method, which
walks through the interior of the simplex instead of along its vertices.
The Simplex Algorithm
The simplex algorithm finds a basic solution from the slack form iteratively
as follows:
1. Set each nonbasic variable to zero.
2. Compute the values of the basic variables from the equality
   constraints. If a nonbasic variable in a constraint causes a basic
   variable in the constraint to become 0, then the constraint is tight.
3. The goal of an iteration is to reformulate the linear program so that
   the basic solution (when the nonbasic variables are 0) has a greater
   objective value. This is done by performing a pivot: select a nonbasic
   variable xe (a.k.a. the entering variable) and maximize it such that all
   constraints are satisfied. The variable xe becomes basic and another
   variable xl (a.k.a. the leaving variable) becomes nonbasic.
4. The simplex algorithm terminates when all of the coefficients
   appearing in the objective function become nonpositive.
An Example
    maximize    3x1 +  x2 + 2x3
    subject to   x1 +  x2 + 3x3 ≤ 30
                2x1 + 2x2 + 5x3 ≤ 24
                4x1 +  x2 + 2x3 ≤ 36
                x1, x2, x3 ≥ 0.
Slack form:
maximize 3x1 + x2 + 2x3
subject to x4 = 30 − x1 − x2 − 3x3
x5 = 24 − 2x1 − 2x2 − 5x3
x6 = 36 − 4x1 − x2 − 2x3
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0.
Simplex Algorithm Example Continued
Set x1 = 0, x2 = 0, x3 = 0 and get x4 = 30, x5 = 24, and x6 = 36.
Select entering variable x1 and leaving variable x6 .
Solve the corresponding equation for x1 to get
    x1 = 9 − x2/4 − x3/2 − x6/4.
Maximize it to get x1 = 9.
Substitute the right-hand side of x1 into each remaining equation:
    z  = 27 + x2/4 + x3/2 − 3x6/4
    x1 =  9 − x2/4 − x3/2 − x6/4
    x4 = 21 − 3x2/4 − 5x3/2 + x6/4
    x5 =  6 − 3x2/2 − 4x3 + x6/2.
Simplex Algorithm Example Continued
Set x2 , x3 , and x6 to 0 to get z = 27, x1 = 9, x4 = 21, and x5 = 6.
Select entering variable x3 and leaving variable x5 . Note that x6
cannot be chosen for it will decrease the value of z.
Solve for x3 to get
    x3 = 3/2 − 3x2/8 + x6/8 − x5/4.
The maximum value for x3 is 3/2.
Substitute the right-hand side of x3 into the rest of the equations to obtain
    z  = 111/4 + x2/16 − x5/8 − 11x6/16
    x1 = 33/4 − x2/16 + x5/8 − 5x6/16
    x3 = 3/2 − 3x2/8 − x5/4 + x6/8
    x4 = 69/4 + 3x2/16 + 5x5/8 − x6/16
Simplex Algorithm Example Continued
Set x2, x5, and x6 to 0 to get z = 111/4, x1 = 33/4, x3 = 3/2, and x4 = 69/4.
Select entering variable x2 ; maximize it to get x2 = 4.
The leaving variable is x3 :
    x2 = 4 − 8x3/3 − 2x5/3 + x6/3.
Substitute the right-hand side of x2 into the rest of the equations to obtain:
    z  = 28 − x3/6 − x5/6 − 2x6/3
    x1 =  8 + x3/6 + x5/6 − x6/3
    x2 =  4 − 8x3/3 − 2x5/3 + x6/3
    x4 = 18 − x3/2 + x5/2
Simplex Algorithm Example Continued
Now all the coefficients are negative, meaning that there is no more
room for improvement.
The algorithm terminates with z = 28 by setting x3 , x5 , x6 to 0.
The basic solution is
x1 = 8, x4 = 18,
x2 = 4, x5 = 0,
x3 = 0, x6 = 0.
Note: When ties are present, choose the variable with the smallest
index (Bland’s Rule).
Pivoting
Pivoting takes as input a slack form.
Let N denote the set of indices of nonbasic variables and B the set of
indices of basic variables in the slack form.
Let the slack form be given by (N, B, A, b, c, v ), where v is the
objective value.
Let l denote the index of the leaving variable xl and e the index of the
entering variable xe.
Pivoting returns (N̂, B̂, Â, b̂, ĉ, v̂ ) for the new slack form.
Pivoting Continued
[The Pivot(N, B, A, b, c, v, l, e) pseudocode appears here as a figure in
the original slides and did not survive extraction; its lines 3 and 9 are
cited in Lemma 29.1 below.]
How Will Variables Change in Pivoting?
Lemma 29.1 In a call to Pivot(N, B, A, b, c, v, l, e) with ale ≠ 0, let x̄
denote the basic solution after this call. Then
1. x̄j = 0 for each j ∈ N̂.
2. x̄e = bl/ale.
3. x̄i = bi − aie b̂e for each i ∈ B̂ − {e}.
Proof. The first statement is true because the basic solution always sets
nonbasic variables to 0. Setting each nonbasic variable to 0 leaves
x̄i = b̂i for i ∈ B̂, since xi = b̂i − ∑_{j∈N̂} âij xj.
Since e ∈ B̂, line 3 of Pivot gives
    x̄e = b̂e = bl/ale.
For i ∈ B̂ − {e}, line 9 gives
    x̄i = b̂i = bi − aie b̂e.
This completes the proof.
Simplex Code
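The original slide shows the Simplex pseudocode as a figure that did not survive text extraction. Below is a minimal Python sketch of the same iterative scheme, not the slide's exact code: the function names and data layout are mine, it assumes the initial basic solution is feasible (all bi ≥ 0, as in the running example), and it uses Bland's smallest-index rule for both the entering and the leaving variable, which also rules out cycling.

```python
def pivot(N, B, A, b, c, v, l, e):
    """One pivot step: entering variable x_e replaces leaving variable x_l."""
    A_new = {i: {} for i in (B - {l}) | {e}}
    b_new = {}
    # Rewrite the tight constraint to solve for x_e.
    b_new[e] = b[l] / A[l][e]
    for j in N - {e}:
        A_new[e][j] = A[l][j] / A[l][e]
    A_new[e][l] = 1 / A[l][e]
    # Substitute x_e's right-hand side into the remaining constraints.
    for i in B - {l}:
        b_new[i] = b[i] - A[i][e] * b_new[e]
        for j in N - {e}:
            A_new[i][j] = A[i][j] - A[i][e] * A_new[e][j]
        A_new[i][l] = -A[i][e] * A_new[e][l]
    # Substitute x_e's right-hand side into the objective function.
    v_new = v + c[e] * b_new[e]
    c_new = {}
    for j in N - {e}:
        c_new[j] = c[j] - c[e] * A_new[e][j]
    c_new[l] = -c[e] * A_new[e][l]
    return (N - {e}) | {l}, (B - {l}) | {e}, A_new, b_new, c_new, v_new

def simplex(A, b, c):
    """Maximize c.x subject to A x <= b, x >= 0 (all b[i] assumed >= 0)."""
    n, m = len(c), len(b)
    N = set(range(1, n + 1))                    # nonbasic: original variables
    B = set(range(n + 1, n + m + 1))            # basic: slack variables
    A = {n + 1 + i: {j + 1: A[i][j] for j in range(n)} for i in range(m)}
    b = {n + 1 + i: b[i] for i in range(m)}
    c = {j + 1: c[j] for j in range(n)}
    v = 0
    while True:
        # Bland's rule: smallest-index entering variable with c[e] > 0.
        candidates = sorted(j for j in N if c[j] > 0)
        if not candidates:
            break                               # all coefficients nonpositive
        e = candidates[0]
        # Leaving variable: tightest ratio b[i] / a[i][e] over a[i][e] > 0,
        # ties broken toward the smallest index.
        ratios = [(b[i] / A[i][e], i) for i in sorted(B) if A[i][e] > 0]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, l = min(ratios)
        N, B, A, b, c, v = pivot(N, B, A, b, c, v, l, e)
    x = {j: 0 for j in N}
    x.update({i: b[i] for i in B})
    return v, [x[j] for j in range(1, n + 1)]

# The running example from these slides:
v, x = simplex([[1, 1, 3], [2, 2, 5], [4, 1, 2]], [30, 24, 36], [3, 1, 2])
print(v, x)   # 28.0 [8.0, 4.0, 0]
```

Running it on the slides' example reproduces the optimum z = 28 at (x1, x2, x3) = (8, 4, 0), although the pivot sequence differs from the one shown on the slides because Bland's rule always picks the smallest eligible index.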
Degeneracy
Each iteration of the simplex algorithm either increases the objective
value or leaves the objective value unchanged. The latter
phenomenon is degeneracy.
Example of degeneracy:
    z  =     x1 + x2 + x3
    x4 = 8 − x1 − x2
    x5 =          x2 − x3.
Suppose that x1 is chosen to be the entering variable and x4 the
leaving variable. After pivoting:
    z  = 8      + x3 − x4
    x1 = 8 − x2      − x4
    x5 =     x2 − x3.
Degeneracy Continued
Now x3 is the only option for being the entering variable and x5
leaving. After pivoting:
    z  = 8 + x2 − x4 − x5
    x1 = 8 − x2 − x4
    x3 =     x2      − x5.
The objective value of 8 has not changed, and degeneracy occurs.
Fortunately, x2 can now be chosen to be entering and x1 leaving.
After pivoting:
    z  = 16 − x1 − 2x4 − x5
    x2 =  8 − x1 −  x4
    x3 =  8 − x1 −  x4 − x5.
Cycling
Degeneracy may lead to cycling: The slack forms at two different
iterations are identical.
If we instead choose x2 entering and x3 leaving, then after pivoting:
    z  = 8      + x3 − x4
    x1 = 8 − x3 − x4 − x5
    x2 =     x3      + x5.

This is cycling.
Simplex Runtime
Cycling is the only reason that the simplex algorithm might not
terminate.
By choosing the variable with the smallest index, the simplex
algorithm always terminates.
The simplex algorithm terminates in at most C(n + m, m) iterations.
Proof. This is because |B| = m and there are n + m variables,
implying that there are at most C(n + m, m) ways to choose B.
In practice, the simplex algorithm runs extremely fast.
Duality
The dual of a maximization LP problem (a.k.a. primal) is a minimization
LP, and vice versa.
Primal:

    maximize    ∑_{j=1}^{n} cj xj
    subject to  ∑_{j=1}^{n} aij xj ≤ bi    for i = 1, 2, . . . , m
                xj ≥ 0                     for j = 1, 2, . . . , n.

Dual:

    minimize    ∑_{i=1}^{m} bi yi
    subject to  ∑_{i=1}^{m} aij yi ≥ cj    for j = 1, 2, . . . , n
                yi ≥ 0                     for i = 1, 2, . . . , m.
Primal-Dual Example
Primal:

    maximize    3x1 +  x2 + 2x3
    subject to   x1 +  x2 + 3x3 ≤ 30
                2x1 + 2x2 + 5x3 ≤ 24
                4x1 +  x2 + 2x3 ≤ 36
                x1, x2, x3 ≥ 0.

Dual:

    minimize    30y1 + 24y2 + 36y3
    subject to    y1 + 2y2 + 4y3 ≥ 3
                  y1 + 2y2 +  y3 ≥ 1
                 3y1 + 5y2 + 2y3 ≥ 2
                y1, y2, y3 ≥ 0.
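A numerical sanity check, sketched under the assumption that SciPy is available: solving both programs shows their optimal values coincide at 28, as LP duality predicts.

```python
from scipy.optimize import linprog

# Primal: maximize 3x1 + x2 + 2x3 (negated, since linprog minimizes).
primal = linprog([-3, -1, -2],
                 A_ub=[[1, 1, 3], [2, 2, 5], [4, 1, 2]],
                 b_ub=[30, 24, 36])             # default bounds give x >= 0
# Dual: minimize 30y1 + 24y2 + 36y3 with the ">=" rows negated into "<=" rows.
dual = linprog([30, 24, 36],
               A_ub=[[-1, -2, -4], [-1, -2, -1], [-3, -5, -2]],
               b_ub=[-3, -1, -2])               # default bounds give y >= 0
print(-primal.fun, dual.fun)                    # 28.0 28.0
```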
Weak LP Lemma
Lemma 29.8 Let x̄ be any feasible solution to the primal LP and ȳ be any
feasible solution to the dual LP. Then

    ∑_{j=1}^{n} cj x̄j ≤ ∑_{i=1}^{m} bi ȳi.

Proof. We have

    ∑_{j=1}^{n} cj x̄j ≤ ∑_{j=1}^{n} ( ∑_{i=1}^{m} aij ȳi ) x̄j    (definition of cj from the dual)
                      = ∑_{i=1}^{m} ( ∑_{j=1}^{n} aij x̄j ) ȳi
                      ≤ ∑_{i=1}^{m} bi ȳi.                       (definition of bi from the primal)
Equality of Primal and Dual
Corollary 29.9. Let x̄ be a feasible solution to a primal linear program
(A, b, c), and let ȳ be a feasible solution to the corresponding dual linear
program. If

    ∑_{j=1}^{n} cj x̄j = ∑_{i=1}^{m} bi ȳi,

then x̄ and ȳ are optimal solutions to the primal LP and dual LP,
respectively.
Proof. It follows from the Weak LP Lemma.
LP Duality
Theorem 29.10. Suppose that the simplex algorithm returns values
x̄ = (x̄1, x̄2, . . . , x̄n) for the primal LP (A, b, c). Let N and B denote the
nonbasic and basic variables for the final slack form, let c′ denote the
coefficients in the final slack form, and let ȳ = (ȳ1, ȳ2, . . . , ȳm) be defined
as

    ȳi = −c′n+i   if (n + i) ∈ N,
    ȳi = 0        otherwise.

Then x̄ is an optimal solution to the primal LP, ȳ is an optimal solution to
the dual LP, and

    ∑_{j=1}^{n} cj x̄j = ∑_{i=1}^{m} bi ȳi.
Proof
It suffices to show that

    ∑_{j=1}^{n} cj x̄j = ∑_{i=1}^{m} bi ȳi.

Suppose the simplex algorithm terminates with a final slack form whose
objective function is

    z = v′ + ∑_{j∈N} c′j xj,

where c′j ≤ 0 for all j ∈ N.
For the basic variables in the final slack form we can set their
coefficients to 0. Namely, let c′j = 0 for all j ∈ B.
Proof Continued
We have

    z = v′ + ∑_{j∈N} c′j xj
      = v′ + ∑_{j∈N} c′j xj + ∑_{j∈B} c′j xj    (since c′j = 0 for j ∈ B)
      = v′ + ∑_{j=1}^{n+m} c′j xj               (since N ∪ B = {1, . . . , n + m}).

Since x̄j = 0 for j ∈ N, the basic solution has objective value z = v′.
Since each step in the simplex algorithm is an elementary operation, the
original objective function evaluated at x̄ must equal that of the final
slack form. Namely,

    ∑_{j=1}^{n} cj x̄j = v′ + ∑_{j=1}^{n+m} c′j x̄j
                      = v′ + ∑_{j∈N} c′j x̄j + ∑_{j∈B} c′j x̄j
                      = v′.
Proof Continued
We now show that ȳ is a feasible solution to the dual LP and that

    ∑_{i=1}^{m} bi ȳi = ∑_{j=1}^{n} cj x̄j.

We have

    ∑_{j=1}^{n} cj x̄j = v′ + ∑_{j=1}^{n+m} c′j x̄j
                      = v′ + ∑_{j=1}^{n} c′j x̄j + ∑_{j=n+1}^{n+m} c′j x̄j
                      = v′ + ∑_{j=1}^{n} c′j x̄j + ∑_{i=1}^{m} c′n+i x̄n+i
                      = v′ + ∑_{j=1}^{n} c′j x̄j + ∑_{i=1}^{m} (−ȳi) x̄n+i    (by definition of ȳi)
Proof Continued
    = v′ + ∑_{j=1}^{n} c′j x̄j + ∑_{i=1}^{m} (−ȳi) ( bi − ∑_{j=1}^{n} aij x̄j )    (by the slack form)
    = v′ + ∑_{j=1}^{n} c′j x̄j − ∑_{i=1}^{m} bi ȳi + ∑_{i=1}^{m} ∑_{j=1}^{n} (aij x̄j) ȳi
    = v′ + ∑_{j=1}^{n} c′j x̄j − ∑_{i=1}^{m} bi ȳi + ∑_{j=1}^{n} ∑_{i=1}^{m} (aij ȳi) x̄j
    = ( v′ − ∑_{i=1}^{m} bi ȳi ) + ∑_{j=1}^{n} ( c′j + ∑_{i=1}^{m} aij ȳi ) x̄j
Proof Continued
This implies that

    v′ − ∑_{i=1}^{m} bi ȳi = 0,
    c′j + ∑_{i=1}^{m} aij ȳi = cj    for j = 1, 2, . . . , n.

Thus, ∑_{i=1}^{m} bi ȳi = v′ = z.
Now we show that ȳ is feasible for the dual. Since c′j ≤ 0, we have

    cj = c′j + ∑_{i=1}^{m} aij ȳi ≤ ∑_{i=1}^{m} aij ȳi.

This completes the proof.
Revisit: LP Formulation for Maximum Flow
We have seen an LP formulation for maximum flow, where the
objective function ∑_{(s,u)∈E} fsu has a number of variables fsu for
(s, u) ∈ E.
In the formulation, variable fuv is the value of the flow from u to v for
(u, v) ∈ E.
Recall that the flow network does not have an edge from t to s (the
residual network may change this, but we don't deal with residual
networks in the LP formulation).
To make the LP dual simpler, we reformulate the LP for maximum flow
with only one variable in the objective function. To do so, we add an
edge (t, s) to E with capacity c(t, s) = +∞.
LP Formulation for Maximum Flow
The following LP models the maximum flow problem (Note: we use
indexing i and j instead of u and v to place ourselves in our comfort
zone):
    Maximize    fts                                            (only one variable)
    Subject to  ∀(i, j) ∈ E − {(t, s)}: fij ≤ c(i, j)           (capacity constraint)
                ∀i ∈ V: ∑_{(i,j)∈E} fij − ∑_{(j,i)∈E} fji = 0   (flow conservation)
                ∀(i, j) ∈ E: fij ≥ 0
Note that there is no need to include constraint fts ≤ c(t, s).
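A sketch of this one-variable formulation on a toy network of my own (assuming SciPy is available). With the artificial edge (t, s) in place, flow conservation holds at every node, including s and t:

```python
# Toy network: s->a (3), s->b (2), a->b (1), a->t (2), b->t (3),
# plus the artificial t->s edge with unbounded capacity; its max flow is 5.
from scipy.optimize import linprog

edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1),
         ("a", "t", 2), ("b", "t", 3), ("t", "s", None)]  # None: no cap on t->s
nodes = ["s", "a", "b", "t"]

c = [0] * len(edges)
c[-1] = -1                                    # maximize f_ts (negated)
A_eq = []
for u in nodes:                               # conservation at EVERY node
    row = [0] * len(edges)
    for k, (i, j, _) in enumerate(edges):
        if i == u:
            row[k] += 1                       # flow out of u
        if j == u:
            row[k] -= 1                       # flow into u
    A_eq.append(row)
res = linprog(c, A_eq=A_eq, b_eq=[0] * len(nodes),
              bounds=[(0, cap) for (_, _, cap) in edges])
print(-res.fun)                               # 5.0
```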
Formulation of the Dual
Each primal constraint corresponds to a dual variable.
Let yij be the variable for constraint
fij ≤ c(i, j), for (i, j) ∈ E − {(t, s)}.
Let yi be the variable for the constraint

    ∑_{(i,j)∈E} fij − ∑_{(j,i)∈E} fji = 0    for i ∈ V.
For the objective function we look at the constraints in the primal
with nonzero right-hand side and identify dual variables for these
constraints. The following is the objective of the dual:
    Minimize    ∑_{(i,j)∈E−{(t,s)}} c(i, j) yij.
Formulation of the Dual Continued
Each primal variable x must correspond to a dual constraint. To form
a dual constraint, follow the steps below:
1. Identify all the primal constraints containing x.
2. For each primal constraint containing x, let a be the coefficient of x
   and y be the dual variable for this constraint.
3. Sum up all the ay's to form the left-hand side of the dual constraint
   for x.
4. The value of the right-hand side of the dual constraint is the
   coefficient of x in the primal objective function.
Formulation of the Dual Continued
For the primal variable fij with i ≠ t and j ≠ s, we observe that (1) its
coefficient in the primal objective is 0; and (2) fij appears only in the
following three primal constraints:

    fij ≤ c(i, j)                                 (yij)
    ∑_{k:(i,k)∈E} fik − ∑_{k:(k,i)∈E} fki = 0     (yi)
    ∑_{k:(j,k)∈E} fjk − ∑_{k:(k,j)∈E} fkj = 0     (yj)

Thus, for the primal variable fij with i ≠ t and j ≠ s, we have the
following dual constraint:

    yij + yi − yj ≥ 0.

The "≥" sign is used because the variable appears in one constraint
with the "≤" sign and in the rest with the "=" sign.
Formulation of the Dual Continued
For the primal variable fts, we note that (1) its coefficient in the
primal objective is 1 and (2) it appears in the following two
constraints:

    ∑_{j:(t,j)∈E} ftj − ∑_{j:(j,t)∈E} fjt = 0     (yt)
    ∑_{j:(s,j)∈E} fsj − ∑_{j:(j,s)∈E} fjs = 0.    (ys)

Thus, the dual constraint w.r.t. variable fts is

    yt − ys = 1.

The "=" sign is used because the primal constraints involving variable
fts are equations.
The Dual and the Minimum Cut
Finally, the following is the dual of the maximum-flow LP:

    Minimize    ∑_{(i,j)∈E−{(t,s)}} c(i, j) yij
    Subject to  ∀(i, j) ∈ E − {(t, s)}: yij + yi − yj ≥ 0
                yt − ys = 1
                yij, yi ≥ 0

We know that the maximum flow is equal to the capacity of a minimum
cut, and any feasible solution to the primal is bounded above by any
feasible solution to the dual. Thus, the optimal solution to the dual
corresponds to a minimum cut.
Exercise
As an exercise let’s consider the previous dual as primal and obtain its
dual.
Let xij be the dual variable for the following primal constraint:
yij + yi − yj ≥ 0 for (i, j) ∈ E − {(t, s)}
Let xts denote the dual variable for the following primal constraint:
yt − ys = 1.
To form the dual objective, we note that only the primal constraint
yt − ys = 1 has a nonzero right-hand side. Thus, the objective for the
dual is

    Maximize    xts.
Exercise Continued
To form the dual constraints, we note that each primal variable yij
with (i, j) ≠ (t, s) appears in only one primal constraint,

    yij + yi − yj ≥ 0,

and it has coefficient c(i, j) in the primal objective function. Thus, for
this variable we have a dual constraint

    xij ≤ c(i, j)    for (i, j) ≠ (t, s).

Each primal variable yi has coefficient 0 in the primal objective
function. For i ∉ {s, t}, yi appears in the following constraints:

    yij + yi − yj ≥ 0    for (i, j) ∈ E − {(t, s)}
    yji + yj − yi ≥ 0    for (j, i) ∈ E − {(t, s)}
Exercise Continued
Thus, the dual constraint for the primal variable yi with i ∉ {s, t} is

    ∑_{j:(i,j)∈E−{(t,s)}} xij − ∑_{j:(j,i)∈E−{(t,s)}} xji ≤ 0.

The primal variable ys appears in the constraints

    ysj + ys − yj ≥ 0    for (s, j) ∈ E

and in yt − ys = 1. Thus, the dual constraint for ys is

    ∑_{j:(s,j)∈E} xsj − xts ≤ 0.

Likewise, the dual constraint for yt is

    xts − ∑_{j:(j,t)∈E} xjt ≤ 0.
Exercise Continued
View xij as a flow from node i to node j. Because for each i ∈ V the
total flow entering node i is at least the total flow going out of i, and
there is an edge from t to s, all the inequalities in the dual constraints
must be equalities; namely, for each i ∈ V,

    ∑_{j:(i,j)∈E} xij − ∑_{j:(j,i)∈E} xji = 0.

Finally, the dual problem is

    Maximize    xts
    Subject to  ∀(i, j) ∈ E − {(t, s)}: xij ≤ c(i, j)
                ∀i ∈ V: ∑_{j:(i,j)∈E} xij − ∑_{j:(j,i)∈E} xji = 0
                ∀(i, j) ∈ E: xij ≥ 0.

This is exactly the same as the maximum-flow LP formulation.
A Sample Multiple-Choice Question
Unlike problems in quizzes, the final exam consists of only multiple-choice
(including true/false) questions. Example:
1. Which is the correct dual for the following primal LP:

    Minimize    −3x1 + 8x2 + 4x3
    Subject to    x1 + 5x2 − 3x4 ≥ 7
                −3x1 + 10x3 + 5x4 ≥ 1
                x1, x2, x3, x4 ≥ 0.
(a)
    Maximize    7y1 + y2
    Subject to    y1 − 3y2 ≤ −3
                 5y1       ≤  8
                      10y2 ≤  4
                −3y1 + 5y2 ≤  0
                y1, y2, y3, y4 ≥ 0.
Sample Multiple-Choice Questions Continued
(b)
    Maximize    7y1 + y2
    Subject to    y1 − 3y2 ≤ −3
                −3y1 + 5y2 ≤  0
                y1, y2, y3, y4 ≥ 0.
(c)
    Maximize    3y1 + 8y2 − 4y3
    Subject to    y1 − 3y2 ≤ −3
                 5y1       ≤  8
                      10y2 ≤  4
                −3y1 + 5y2 ≤  0
                y1, y2, y3, y4 ≥ 0.
(d) None of the above.