Chapter 6
Sensitivity Analysis and Duality
to accompany
Introduction to Mathematical Programming: Operations Research, Volume 1
4th edition, by Wayne L. Winston and Munirpallam Venkataramanan
Presentation: H. Sarper
Copyright (c) 2003 Brooks/Cole, a division of Thomson Learning, Inc.
6.1 – A Graphical Introduction to Sensitivity Analysis
Sensitivity analysis is concerned with how changes in an LP’s
parameters affect the optimal solution.
Reconsider the Giapetto problem from Chapter 3:

max z = 3x1 + 2x2
s.t.  2x1 + x2 ≤ 100  (finishing constraint)
       x1 + x2 ≤ 80   (carpentry constraint)
       x1       ≤ 40   (demand constraint)
       x1, x2 ≥ 0     (sign restriction)

where
x1 = number of soldiers produced each week
x2 = number of trains produced each week.
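The Giapetto LP is small enough to check numerically. The sketch below (an addition to these slides, not part of them) enumerates the corner points of the feasible region with NumPy and picks the best one:

```python
import itertools
import numpy as np

# Giapetto LP: max 3x1 + 2x2, with every constraint written in the form A x <= b
# (the last two rows encode the sign restrictions x1 >= 0, x2 >= 0).
A = np.array([[2.0, 1.0],    # finishing:  2x1 + x2 <= 100
              [1.0, 1.0],    # carpentry:   x1 + x2 <= 80
              [1.0, 0.0],    # demand:      x1      <= 40
              [-1.0, 0.0],   # sign:       -x1      <= 0
              [0.0, -1.0]])  # sign:       -x2      <= 0
b = np.array([100.0, 80.0, 40.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

best_z, best_x = -np.inf, None
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:        # parallel lines define no corner point
        continue
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9) and c @ x > best_z:
        best_z, best_x = c @ x, x           # keep the best feasible corner

print(best_x, best_z)   # Point B: x1 = 20, x2 = 60 with z = 180
```

Vertex enumeration is only practical for tiny LPs; the point here is just to confirm that z = 180 at (20, 60), as stated on the next slide.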
[Figure: the Giapetto problem. The feasible region is bounded by the finishing constraint (slope -2), the carpentry constraint (slope -1), and the demand constraint x1 = 40; corner points A, B, C, and D are marked, and the isoprofit line z = 120 has slope -3/2.]

The optimal solution to the Giapetto problem is z = 180, x1 = 20, x2 = 60 (Point B in the figure), and it has x1, x2, and s3 (the slack variable for the demand constraint) as basic variables. How would changes in the problem's objective function coefficients or the constraints' right-hand sides change this optimal solution?
Graphical Analysis of the Effect of a Change in an Objective
Function Coefficient
Recall from the Giapetto problem that Point A(0,80) is optimal if the isoprofit line is flatter than the carpentry constraint. Point B(20,60) is optimal if the isoprofit line is steeper than the carpentry constraint but flatter than the finishing constraint. Finally, Point C(40,20) is optimal if the isoprofit line is steeper than the finishing constraint. Since a typical isoprofit line is c1x1 + 2x2 = k, its slope is just -c1/2. In summary:
1. Point A is optimal if -c1/2 ≥ -1 or 0 ≤ c1 ≤ 2 ( -1 is the carpentry
constraint slope).
2. Point B is optimal if -2 ≤ -c1/2 ≤ -1 or 2 ≤ c1 ≤ 4 (between the
slopes of the carpentry and finishing constraint slopes).
3. Point C is optimal if -c1/2 ≤ -2 or c1 ≥ 4 ( -2 is the finishing
constraint slope).
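The three cases can be checked by evaluating z = c1x1 + 2x2 at the candidate corners A(0, 80), B(20, 60), and C(40, 20). This small sketch (not from the slides) picks the best corner for one value of c1 from each range:

```python
# Corner points of the Giapetto feasible region, named as in the figure.
corners = {"A": (0, 80), "B": (20, 60), "C": (40, 20)}

def best_corner(c1):
    """Corner maximizing z = c1*x1 + 2*x2 (c2 held fixed at 2)."""
    return max(corners, key=lambda p: c1 * corners[p][0] + 2 * corners[p][1])

# c1 = 1.5 lies in [0, 2], c1 = 3 lies in [2, 4], c1 = 5 lies in [4, infinity):
print(best_corner(1.5), best_corner(3.0), best_corner(5.0))   # A B C
```

At the boundary values c1 = 2 and c1 = 4 two corners tie, which is exactly where the optimal basis changes.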
[Figure: the Giapetto feasible region with the finishing constraint drawn for b1 = 80, 100, and 120, together with the carpentry and demand constraints and the isoprofit line z = 120.]

A graphical analysis can also be used to determine whether a change in the rhs of a constraint will make the basis no longer optimal. Letting b1 = number of finishing hours in the Giapetto LP, we see that a change in b1 shifts the finishing constraint parallel to its current position. The current optimal point (Point B) is where the carpentry and finishing constraints are binding.
If we change the value of b1, then as long as the point where the finishing and carpentry constraints are both binding remains feasible, the optimal solution will still occur at that point.
We see that if b1 > 120, x1 will be greater than 40, violating the demand constraint. If b1 < 80, x1 will be less than 0, violating the nonnegativity constraint for x1.
Therefore: 80 ≤ b1 ≤ 120
The current basis remains optimal for 80 ≤ b1 ≤ 120, but the decision
variable values and z-value will change.
For a constraint with positive slack (or positive excess) in an LP's optimal solution, if we change the rhs of the constraint to a value within the range where the basis remains optimal, the optimal solution to the LP remains the same.
Shadow Prices - It is important to determine how a change in a constraint's rhs changes the optimal z-value. We define:
The shadow price of the i th constraint of an LP is the amount by which the optimal z-value is improved (increased in a max problem or decreased in a min problem) if the rhs of the i th constraint is increased by one. This definition applies only if the change in the rhs of constraint i leaves the current basis optimal.
Using the finishing constraint as an example, suppose 100 + Δ finishing hours are available (assuming the current basis remains optimal). The LP's optimal solution is then x1 = 20 + Δ and x2 = 60 - Δ with z = 3x1 + 2x2 = 3(20 + Δ) + 2(60 - Δ) = 180 + Δ. Thus, as long as the current basis remains optimal, a one-unit increase in the number of finishing hours increases the optimal z-value by $1. So the shadow price of the first (finishing hours) constraint is $1.
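For 80 ≤ b1 ≤ 120 the optimum stays where the finishing and carpentry constraints intersect, so in closed form x1 = b1 - 80 and x2 = 160 - b1. A quick sketch (added here, not in the slides) confirms the $1-per-hour shadow price:

```python
def giapetto_opt(b1):
    """Optimal z when b1 finishing hours are available; valid for 80 <= b1 <= 120."""
    x1, x2 = b1 - 80, 160 - b1          # intersection of 2x1 + x2 = b1 and x1 + x2 = 80
    assert 0 <= x1 <= 40 and x2 >= 0    # the current basis is feasible only in this range
    return 3 * x1 + 2 * x2

print(giapetto_opt(100))                       # 180, the original optimum
print(giapetto_opt(101) - giapetto_opt(100))   # 1: the shadow price of finishing hours
```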
The Importance of Sensitivity Analysis
Sensitivity analysis is important because:
1. Values of LP parameters might change. If a parameter changes, sensitivity analysis often makes it unnecessary to solve the problem again. For example, in the Giapetto problem, if the profit contribution of a soldier changes to $3.50, sensitivity analysis shows the current solution remains optimal.
2. Uncertainty about LP parameters. In the Giapetto problem for
example, if the weekly demand for soldiers is at least 20, the
optimal solution remains 20 soldiers and 60 trains. Thus, even
if demand for soldiers is uncertain, the company can be fairly
confident that it is still optimal to produce 20 soldiers and 60
trains.
6.2 – Some Important Formulas
An LP’s optimal tableau can be expressed in terms of the LP’s
parameters. The formulas developed in this section are used in the
study of sensitivity analysis, duality, and advanced LP topics.
Assume that we are solving a max problem that has been prepared for solution by the Big M method, with the LP having m constraints and n variables. Although some of the variables may be slack, excess, or artificial, we choose to label them x1, x2, …, xn. The LP may then be written as:

max z = c1x1 + c2x2 + … + cnxn
s.t.  a11x1 + a12x2 + … + a1nxn = b1
      a21x1 + a22x2 + … + a2nxn = b2
      …
      am1x1 + am2x2 + … + amnxn = bm
      xi ≥ 0 (i = 1, 2, …, n)
As an example, consider the Dakota Furniture problem from Section 4.3 (without the x2 ≤ 5 constraint). In row form the initial LP is:

z - 60x1 -  30x2 -  20x3                = 0
    8x1 +   6x2 +    x3 + s1            = 48
    4x1 +   2x2 + 1.5x3      + s2       = 20
    2x1 + 1.5x2 + 0.5x3           + s3  = 8

Suppose we have found the optimal solution to the LP, with the optimal tableau:

z +    5x2            + 10s2  + 10s3  = 280
    -  2x2       + s1 +  2s2  -  8s3  = 24
    -  2x2 + x3       +  2s2  -  4s3  = 8
x1 + 1.25x2           - 0.5s2 + 1.5s3 = 2
Define:
BV = {BV1, BV2, …, BVm} to be the set of basic variables in the optimal tableau.
NBV = {NBV1, NBV2, …, NBVn-m} to be the set of nonbasic variables in the optimal tableau.
xBV = vector listing the basic variables in the optimal tableau.
xNBV = vector listing the nonbasic variables in the optimal tableau.
cBV = row vector of the initial objective coefficients for the optimal tableau's basic variables.
cNBV = row vector of the initial objective coefficients for the optimal tableau's nonbasic variables.

For the Dakota problem, xBV = (s1, x3, x1) and xNBV = (x2, s2, s3).
Since BV = {s1, x3, x1}, cBV = (0 20 60).
Since NBV = {x2, s2, s3}, cNBV = (30 0 0).
Define:
B is the m x m matrix whose j th column is the column for BVj in the initial tableau.
aj is the column (in the constraints) for the variable xj.
N is the m x (n - m) matrix whose columns are the columns for the nonbasic variables (in NBV order) in the initial tableau.

For the Dakota problem, with BV = {s1, x3, x1} and NBV = {x2, s2, s3}:

B = | 1   1   8 |      a2 = |  6  |      N = |  6   0   0 |
    | 0  1.5  4 |           |  2  |          |  2   1   0 |
    | 0  0.5  2 |           | 1.5 |          | 1.5  0   1 |
Define:
The m x 1 column vector b is the right-hand side of the constraints in the initial tableau. For the Dakota problem,

b = | 48 |
    | 20 |
    |  8 |

We can now use matrix algebra to determine how an LP's optimal tableau (with the set of basic variables BV) is related to the original LP. We observe that the Dakota LP may be written as:

max z = cBVxBV + cNBVxNBV
s.t.   BxBV + NxNBV = b
       xBV, xNBV ≥ 0
Using the format on the previous slide, the Dakota problem is written:

max z = ( 0  20  60 ) | s1 |  +  ( 30  0  0 ) | x2 |
                      | x3 |                  | s2 |
                      | x1 |                  | s3 |

s.t.  | 1   1   8 | | s1 |     |  6   0   0 | | x2 |     | 48 |
      | 0  1.5  4 | | x3 |  +  |  2   1   0 | | s2 |  =  | 20 |
      | 0  0.5  2 | | x1 |     | 1.5  0   1 | | s3 |     |  8 |

      | s1 |   | 0 |         | x2 |   | 0 |
      | x3 | ≥ | 0 |   and   | s2 | ≥ | 0 |
      | x1 |   | 0 |         | s3 |   | 0 |
Multiplying the constraints through by B-1 yields:

B-1BxBV + B-1NxNBV = B-1b,  or  xBV + B-1NxNBV = B-1b

Using the Gauss-Jordan method for the Dakota problem we find:

B-1 = | 1   2    -8  |
      | 0   2    -4  |
      | 0  -0.5  1.5 |

Substituting into xBV + B-1NxNBV = B-1b yields:

| s1 |   | -2     2    -8  | | x2 |   | 24 |
| x3 | + | -2     2    -4  | | s2 | = |  8 |
| x1 |   | 1.25  -0.5  1.5 | | s3 |   |  2 |
Conclusions:

Column for xj in optimal tableau's constraints = B-1aj

Example: the x2 column in the Dakota optimal tableau is

B-1a2 = | 1   2    -8  | |  6  |   | -2   |
        | 0   2    -4  | |  2  | = | -2   |
        | 0  -0.5  1.5 | | 1.5 |   | 1.25 |

Right-hand side of optimal tableau's constraints = B-1b

Example:

B-1b = | 1   2    -8  | | 48 |   | 24 |
       | 0   2    -4  | | 20 | = |  8 |
       | 0  -0.5  1.5 | |  8 |   |  2 |
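Both conclusions are easy to verify with NumPy using the Dakota data from the earlier slides (B built from BV = {s1, x3, x1}, a2 the initial x2 column, b the initial rhs). A sketch:

```python
import numpy as np

# Dakota data: B holds the initial-tableau columns of s1, x3, x1 (in BV order).
B = np.array([[1.0, 1.0, 8.0],
              [0.0, 1.5, 4.0],
              [0.0, 0.5, 2.0]])
a2 = np.array([6.0, 2.0, 1.5])   # initial column of x2
b = np.array([48.0, 20.0, 8.0])  # initial right-hand sides

Binv = np.linalg.inv(B)
print(Binv @ a2)   # x2 column in the optimal tableau: [-2, -2, 1.25]
print(Binv @ b)    # optimal rhs: [24, 8, 2]
```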
Determining the Optimal Tableau's Row 0 in Terms of BV and the Initial LP

We multiply the constraints BxBV + NxNBV = b through by the vector cBVB-1:

cBVxBV + cBVB-1NxNBV = cBVB-1b

We know the original objective function:

z - cBVxBV - cNBVxNBV = 0

Adding the two equations together eliminates the optimal tableau's basic variables, and we obtain:

z + (cBVB-1N - cNBV) xNBV = cBVB-1b
The coefficient of xj in row 0 is

cBVB-1(column of N for xj) - (coefficient of xj in cNBV) = cBVB-1aj - cj

and the rhs of row 0 is cBVB-1b.

Letting c̄j be the coefficient of xj in the optimal tableau's row 0, we have shown

c̄j = cBVB-1aj - cj

and the rhs of the optimal tableau's row 0 = cBVB-1b.
Formulas for computing the optimal tableau from the initial LP:

xj column in optimal tableau's constraints = B-1aj
Right-hand side of optimal tableau's constraints = B-1b
c̄j = cBVB-1aj - cj
Coefficient of slack variable si in optimal row 0 = i th element of cBVB-1
Coefficient of excess variable ei in optimal row 0 = -(i th element of cBVB-1)
Coefficient of artificial variable ai in optimal row 0 = (i th element of cBVB-1) + M (max problem)
Right-hand side of optimal row 0 = cBVB-1b
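These row-0 formulas can be checked numerically for the Dakota problem; cBVB-1 also gives the slack-variable coefficients (0, 10, 10) seen in the optimal row 0. A sketch using the slide data:

```python
import numpy as np

# Dakota data: B from BV = {s1, x3, x1}, cBV their initial objective coefficients.
B = np.array([[1.0, 1.0, 8.0], [0.0, 1.5, 4.0], [0.0, 0.5, 2.0]])
cBV = np.array([0.0, 20.0, 60.0])
a2, c2 = np.array([6.0, 2.0, 1.5]), 30.0   # initial x2 column and coefficient
b = np.array([48.0, 20.0, 8.0])

y = cBV @ np.linalg.inv(B)   # cBV B^-1: the slack coefficients in row 0
print(y)                     # [0, 10, 10]
print(y @ a2 - c2)           # c-bar_2, the coefficient of x2 in row 0: 5
print(y @ b)                 # rhs of row 0, the optimal z-value: 280
```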
6.3 – Sensitivity Analysis
How do changes in an LP's parameters (objective function coefficients, right-hand sides, and technological coefficients) change the optimal solution? Let BV be the set of basic variables in the optimal tableau. Given a change in an LP, determine whether BV remains optimal. From Chapter 4 we know the simplex tableau (for a max problem) for a set of basic variables is optimal if and only if each constraint has a nonnegative rhs and each variable has a nonnegative coefficient in row 0.

Whether a tableau is feasible and optimal depends only upon the rhs of the constraints and the coefficients of each variable in row 0. For example, if an LP has variables x1, x2, …, x6, a tableau of the form

z + x2 + x4 + x6 = 6
  (omitted entries) = 1
  (omitted entries) = 2
  (omitted entries) = 3

would be optimal. This tableau's optimality is not affected by the parts of the tableau that are omitted.
Suppose we have solved an LP and found that BV is an optimal basis. Use the following procedure to determine whether any change in the LP will cause BV to no longer be optimal.

Step 1: Using the formulas of Section 6.2, determine how the changes in the LP's parameters change the right-hand sides and row 0 of the optimal tableau (the tableau having BV as the set of basic variables).

Step 2: If each variable in row 0 has a nonnegative coefficient and each constraint has a nonnegative rhs, BV is still optimal. Otherwise, BV is no longer optimal.

If BV is no longer optimal, find the new optimal solution by using the Section 6.2 formulas to re-create the entire tableau for BV and then continuing the simplex algorithm with the BV tableau as the starting tableau.
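As a sketch of Steps 1-2 for one kind of change (the objective coefficient of the nonbasic variable x2 in the Dakota problem), recompute x2's row-0 coefficient c̄2 = cBVB-1a2 - c2 for a couple of trial values of c2:

```python
import numpy as np

# Dakota data from Section 6.2: change c2 and recheck the sign of x2's
# row-0 coefficient; BV stays optimal as long as c-bar_2 = 35 - c2 >= 0.
B = np.array([[1.0, 1.0, 8.0], [0.0, 1.5, 4.0], [0.0, 0.5, 2.0]])
cBV = np.array([0.0, 20.0, 60.0])
a2 = np.array([6.0, 2.0, 1.5])
y = cBV @ np.linalg.inv(B)                  # cBV B^-1 = (0, 10, 10)

for c2 in (30.0, 40.0):
    cbar2 = y @ a2 - c2                     # = 35 - c2
    print(c2, cbar2, "BV still optimal" if cbar2 >= 0 else "BV no longer optimal")
```

With c2 = 30 (the original value) c̄2 = 5 ≥ 0, so BV stays optimal; raising c2 to 40 makes c̄2 negative and x2 would be pivoted in.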
There can be two reasons why a change in an LP's parameters causes BV to no longer be optimal:

1. A variable (or variables) in row 0 may have a negative coefficient. In this case, a better (larger z-value) bfs can be obtained by pivoting in a nonbasic variable with a negative coefficient in row 0. If this occurs, we say the BV is now a suboptimal basis.

2. A constraint (or constraints) may now have a negative rhs. In this case, at least one member of BV will now be negative and BV will no longer yield a bfs. If this occurs, we say the BV is now an infeasible basis.
Six types of changes in an LP's parameters may change the optimal solution:
1. Changing the objective function coefficient of a
nonbasic variable.
2. Changing the objective function coefficient of a basic
variable.
3. Changing the right-hand side of a constraint.
4. Changing the column of a nonbasic variable.
5. Adding a new variable or activity.
6. Adding a new constraint.
6.4 – The 100% Rule
100% Rule for Changing Objective Function Coefficients
Depending on whether the objective function coefficient of any variable
with a zero reduced cost in the optimal tableau is changed, there are
two cases to consider:
Case 1 – All variables whose objective function coefficients are changed
have nonzero reduced costs in the optimal row 0. In Case 1, the
current basis remains optimal if and only if the objective function
coefficient for each variable remains within the allowable range given
on the LINDO printout. If the current basis remains optimal, both the
values of the decision variables and objective function remain
unchanged. If the objective coefficient for any variable is outside the
allowable range, the current basis is no longer optimal.
Case 2 – At least one variable whose objective function coefficient is changed has a reduced cost of zero. In Case 2, compute for each changed coefficient the ratio of its change to the corresponding allowable increase (if raised) or allowable decrease (if lowered). If the sum of these ratios is at most 100%, the current basis remains optimal; if it exceeds 100%, the current basis may or may not remain optimal.
The 100% Rule for Changing Right-Hand Sides
Case 1 – All constraints whose right-hand sides are being modified are nonbinding constraints. In Case 1, the current basis remains optimal if and only if each right-hand side remains within its allowable range. Then the values of the decision variables and optimal objective function remain unchanged. If the right-hand side for any constraint is outside its allowable range, the current basis is no longer optimal.
Case 2 – At least one of the constraints whose right-hand side is being modified is a binding constraint (that is, has zero slack or excess). In Case 2, compute for each modified right-hand side the ratio of its change to the corresponding allowable increase or decrease. If the sum of these ratios is at most 100%, the current basis remains optimal (although the variable values and z-value may change); otherwise it may or may not remain optimal.
6.5 – Finding the Dual of an LP
Associated with any LP is another LP called the dual.
Knowledge of the dual provides interesting economic and
sensitivity analysis insights.
When taking the dual of any LP, the given LP is referred to as the primal. If the primal is a max problem, the dual will be a min problem, and vice versa.
Define the variables for a max problem to be z, x1, x2, …, xn and the variables for a min problem to be w, y1, y2, …, ym.
Finding the dual of a max problem in which all the variables are required to be nonnegative and all the constraints are ≤ constraints (called a normal max problem) is shown on the next slide.
Normal max problem:
max z = c1x1 + c2x2 + … + cnxn
s.t.  a11x1 + a12x2 + … + a1nxn ≤ b1
      a21x1 + a22x2 + … + a2nxn ≤ b2
      …
      am1x1 + am2x2 + … + amnxn ≤ bm
      xj ≥ 0 (j = 1, 2, …, n)

Its dual:
min w = b1y1 + b2y2 + … + bmym
s.t.  a11y1 + a21y2 + … + am1ym ≥ c1
      a12y1 + a22y2 + … + am2ym ≥ c2
      …
      a1ny1 + a2ny2 + … + amnym ≥ cn
      yi ≥ 0 (i = 1, 2, …, m)

The dual is a normal min problem, and each problem is the dual of the other.
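Mechanically, the dual of a normal max problem is obtained by transposing the constraint matrix and swapping the roles of b and c. A minimal sketch (the function name is ours, not the book's), shown on the Dakota data used in the next section:

```python
import numpy as np

def dual_of_normal_max(c, A, b):
    """Dual of max cx s.t. Ax <= b, x >= 0: min by s.t. (A^T)y >= c, y >= 0."""
    return np.asarray(b, float), np.asarray(A, float).T, np.asarray(c, float)

# Dakota primal data:
c = [60, 30, 20]
A = [[8, 6, 1], [4, 2, 1.5], [2, 1.5, 0.5]]
b = [48, 20, 8]

obj, G, h = dual_of_normal_max(c, A, b)
print(obj)   # dual objective coefficients: [48, 20, 8]
print(G)     # dual constraint rows: [8 4 2], [6 2 1.5], [1 1.5 0.5]
print(h)     # dual constraint right-hand sides: [60, 30, 20]
```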
6.5 – Economic Interpretation of the Dual Problem
Interpreting the Dual of the Dakota (Max) Problem

The primal is:
max z = 60x1 + 30x2 + 20x3
s.t.  8x1 +   6x2 +    x3 ≤ 48  (Lumber constraint)
      4x1 +   2x2 + 1.5x3 ≤ 20  (Finishing constraint)
      2x1 + 1.5x2 + 0.5x3 ≤ 8   (Carpentry constraint)
      x1, x2, x3 ≥ 0

The dual is:
min w = 48y1 + 20y2 + 8y3
s.t.  8y1 +   4y2 +   2y3 ≥ 60  (Desk constraint)
      6y1 +   2y2 + 1.5y3 ≥ 30  (Table constraint)
       y1 + 1.5y2 + 0.5y3 ≥ 20  (Chair constraint)
      y1, y2, y3 ≥ 0
Relevant information about the Dakota problem is shown below.

Resource        Desk        Table       Chair       Availability
Lumber          8 board ft  6 board ft  1 board ft  48 board ft
Finishing       4 hours     2 hours     1.5 hours   20 hours
Carpentry       2 hours     1.5 hours   0.5 hours   8 hours
Selling price   $60         $30         $20
The first dual constraint is associated with desks, the second with
tables, and the third with chairs. Decision variable y1 is associated with
lumber, y2 with finishing hours, and y3 with carpentry hours.
Suppose an entrepreneur wants to purchase all of Dakota’s resources.
The entrepreneur must determine the price he or she is willing to pay
for a unit of each of Dakota’s resources.
To determine these prices we define:
y1 = price paid for 1 board ft of lumber
y2 = price paid for 1 finishing hour
y3 = price paid for 1 carpentry hour
The resource prices y1, y2, and y3 should be determined by solving the Dakota dual.
The total price that should be paid for these resources is 48y1 + 20y2 + 8y3. Since the cost of purchasing the resources is to be minimized,
min w = 48y1 + 20y2 + 8y3
is the objective function for the Dakota dual.
In setting resource prices, the prices must be high enough to induce Dakota to sell. For example, the entrepreneur must offer Dakota at least $60 for a combination of resources that includes 8 board feet of lumber, 4 finishing hours, and 2 carpentry hours, because Dakota could, if it wished, use those resources to produce a desk that can be sold for $60. Since the entrepreneur is offering 8y1 + 4y2 + 2y3 for the resources used to produce a desk, he or she must choose y1, y2, and y3 to satisfy:
8y1 + 4y2 + 2y3 ≥ 60
Similar reasoning shows that at least $30 must be paid for the resources used to produce a table. Thus y1, y2, and y3 must satisfy:
6y1 + 2y2 + 1.5y3 ≥ 30
Likewise, at least $20 must be paid for the combination of resources used to produce one chair. Thus y1, y2, and y3 must satisfy:
y1 + 1.5y2 + 0.5y3 ≥ 20
The solution to the Dakota dual yields prices for lumber, finishing hours,
and carpentry hours.
In summary, when the primal is a normal max problem, the dual
variables are related to the value of resources available to the decision
maker. For this reason, dual variables are often referred to as resource
shadow prices.
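Strong duality ties this together: the Dakota primal optimum (x1 = 2, x3 = 8, z = 280, from the Section 6.2 tableau) and the dual prices y = cBVB-1 = (0, 10, 10) are each feasible and give equal objective values, which certifies that both are optimal. A sketch verifying this:

```python
import numpy as np

# Dakota primal data.
A = np.array([[8.0, 6.0, 1.0], [4.0, 2.0, 1.5], [2.0, 1.5, 0.5]])
b = np.array([48.0, 20.0, 8.0])
c = np.array([60.0, 30.0, 20.0])

x = np.array([2.0, 0.0, 8.0])    # optimal primal solution: 2 desks, 8 chairs
y = np.array([0.0, 10.0, 10.0])  # optimal dual solution: the resource prices

assert np.all(A @ x <= b + 1e-9) and np.all(x >= 0)      # primal feasible
assert np.all(A.T @ y >= c - 1e-9) and np.all(y >= 0)    # dual feasible
print(c @ x, b @ y)              # both 280: equal objectives certify optimality
```

Note that y2 = $10 and y3 = $10 are exactly the shadow prices of the finishing and carpentry constraints, while lumber, which has slack at the optimum, is priced at $0.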