Duality and Non-Simplex method
Sang Park
12/04/06
Review from Simplex Method
• The basic idea of simplex algorithm is to move from vertex to vertex of
the feasible set until an optimal basic feasible solution is found.
• The algorithm starts from an initial tableau and proceeds in five steps.
Duality
• Every linear programming problem is associated with its corresponding
dual linear programming problem.
• The meaning of duality can easily be understood if we think about a
manufacturing problem.
• When scarce resources are allocated in a way that maximizes profit, the
associated minimization problem is one that seeks to minimize cost.
When the primal problem is

  Maximize z = c^T x
  subject to Ax ≤ b,
             x ≥ 0,

the dual problem is

  Minimize z' = b^T λ
  subject to A^T λ ≥ c,
             λ ≥ 0,

where A is an m × n matrix, c and x are n × 1 column vectors, and b and λ are m × 1 column vectors.
Example: If the primal problem is

  Maximize z = x1 + x2 + x3
  subject to 2x1 + x2 + 2x3 ≤ 2
             4x1 + 2x2 + x3 ≤ 2
             xi ≥ 0, i = 1, 2, 3,

then the dual problem is

  Minimize z' = 2λ1 + 2λ2
  subject to 2λ1 + 4λ2 ≥ 1
             λ1 + 2λ2 ≥ 1
             2λ1 + λ2 ≥ 1
             λj ≥ 0, j = 1, 2.
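For a primal in the form "maximize c^T x subject to Ax ≤ b, x ≥ 0", forming the dual is just a transposition. The sketch below (the function name `build_dual` is illustrative, not from the lecture) builds the dual data for the example above:

```python
# A minimal sketch: the dual of "maximize c^T x s.t. Ax <= b, x >= 0"
# is "minimize b^T w s.t. A^T w >= c, w >= 0".

def build_dual(A, b, c):
    """Return (A_dual, b_dual, c_dual) describing min c_dual^T w
    subject to A_dual w >= b_dual, w >= 0."""
    A_T = [list(col) for col in zip(*A)]  # transpose A for the dual constraints
    return A_T, c, b  # dual RHS is the primal objective c; dual objective is b

# Primal from the example: max x1 + x2 + x3
# s.t. 2x1 + x2 + 2x3 <= 2, 4x1 + 2x2 + x3 <= 2.
A = [[2, 1, 2], [4, 2, 1]]
b = [2, 2]
c = [1, 1, 1]

A_dual, b_dual, c_dual = build_dual(A, b, c)
# Dual: minimize 2w1 + 2w2 subject to
#   2w1 + 4w2 >= 1, w1 + 2w2 >= 1, 2w1 + w2 >= 1.
```

The returned triple matches the dual written out above.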
The Differences Between the Primal and Dual Problems

  Primal Problem                           Dual Problem
  ---------------------------------------  ---------------------------------------
  Maximization                             Minimization
  Coefficients of the objective function   Right-hand sides of the constraints
  Coefficients of the i-th constraint      Coefficients of the i-th variable,
                                           one in each constraint
  i-th constraint is an inequality (≤)     i-th variable is ≥ 0
  i-th constraint is an equality           i-th variable is unrestricted
  j-th variable is unrestricted            j-th constraint is an equality
  j-th variable is ≥ 0                     j-th constraint is an inequality (≥)
  Number of variables                      Number of constraints
Example: If the primal problem is

  Maximize z' = 16λ1 + 12λ2 + 24λ3
  subject to 2λ1 + λ2 + 4λ3 = 10
             3λ1 + λ2 + 3λ3 ≤ 15
             λ1 ≥ 0, λ2 unrestricted, λ3 ≤ 0,

then the dual problem is

  Minimize z = 10x1 + 15x2
  subject to 2x1 + 3x2 ≥ 16
             x1 + x2 = 12
             4x1 + 3x2 ≤ 24
             x1 unrestricted, x2 ≥ 0.
Five Steps of the Simplex Method:
Step 1: Set up the initial tableau.
Step 2: Apply the optimality test: if the objective row has no negative entries, then the indicated
solution is optimal. Stop computation.
Step 3: Find the pivotal column by determining the column with the most negative entry in the
objective row.
Step 4: Find the pivotal row by using the θ-ratios, formed by dividing the entries of the c column
(the right-hand sides) by the corresponding positive entries of the pivotal column. The pivotal
row is the row for which the minimum ratio occurs. If none of the entries in the pivotal column
is positive, the problem has no finite optimal solution; in this case, stop computation.
Step 5: Obtain a new tableau by pivoting, and then return to Step 2.
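The five steps above can be sketched in pure Python for problems of the form "maximize c^T x subject to Ax ≤ b, x ≥ 0" with b ≥ 0 (so the slack variables give an immediate starting basis). The function name and structure are illustrative, not from the lecture:

```python
# A pure-Python sketch of the five simplex steps, using slack variables
# to form the initial tableau [A | I | b] with objective row [-c | 0 | 0].

def simplex(A, b, c):
    m, n = len(A), len(c)
    # Step 1: initial tableau.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    obj = [-ci for ci in c] + [0.0] * (m + 1)
    while True:
        # Step 2: optimality test - stop when no objective entry is negative.
        if min(obj[:-1]) >= 0:
            return obj[-1]  # optimal objective value
        # Step 3: pivotal column = most negative objective entry.
        p = obj.index(min(obj[:-1]))
        # Step 4: pivotal row = smallest theta-ratio over positive entries.
        ratios = [(T[i][-1] / T[i][p], i) for i in range(m) if T[i][p] > 0]
        if not ratios:
            raise ValueError("no finite optimal solution")
        _, r = min(ratios)
        # Step 5: pivot, then return to Step 2.
        T[r] = [v / T[r][p] for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][p]
                T[i] = [a - f * s for a, s in zip(T[i], T[r])]
        f = obj[p]
        obj = [a - f * s for a, s in zip(obj, T[r])]

# Example: maximize 5w1 + 3w2 s.t. w1 + 2w2 <= 4, 3w1 + w2 <= 6.
z = simplex([[1.0, 2.0], [3.0, 1.0]], [4.0, 6.0], [5.0, 3.0])
# z == 58/5
```

Tracing this by hand reproduces exactly the sequence of tableaux shown for the worked dual problem later in these notes.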
Complementary Slackness Theorem
• In order to solve a primal-dual pair of problems, we need the
complementary slackness theorem.
• For any pair of optimal solutions to the primal problem and its dual, the
product of the i-th slack variable of the primal problem and the i-th dual
variable is zero. That is,

  x_{n+1} λ1 + x_{n+2} λ2 + ... + x_{n+m} λm = 0,

where each term is non-negative.
Example: For the primal problem

  Minimize z = 4x + 6y
  subject to x + 3y ≥ 5
             2x + y ≥ 3
             x ≥ 0, y ≥ 0,

introducing slack variables gives

  Minimize z = 4x + 6y + 0u + 0v
  subject to x + 3y - u = 5
             2x + y - v = 3
             x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0.

Now its dual is

  Maximize z' = 5λ1 + 3λ2
  subject to λ1 + 2λ2 ≤ 4
             3λ1 + λ2 ≤ 6
             λ1 ≥ 0, λ2 ≥ 0,
and introducing slack variables into the dual, we have

  Maximize z' = 5λ1 + 3λ2 + 0λ3 + 0λ4
  subject to λ1 + 2λ2 + λ3 = 4
             3λ1 + λ2 + λ4 = 6
             λ1 ≥ 0, λ2 ≥ 0, λ3 ≥ 0, λ4 ≥ 0.

To solve the dual problem using the simplex algorithm, we set up the tableaux as follows.

The first tableau is:

  Basic Variables | λ1  | λ2 | λ3 | λ4 | c | θ-ratio
  λ3              | 1   | 2  | 1  | 0  | 4 | 4
  λ4              | (3) | 1  | 0  | 1  | 6 | 2
  Objective Row   | -5  | -3 | 0  | 0  | 0 |
The second tableau is:

  Basic Variables | λ1 | λ2    | λ3 | λ4   | c  | θ-ratio
  λ3              | 0  | (5/3) | 1  | -1/3 | 2  | 6/5
  λ1              | 1  | 1/3   | 0  | 1/3  | 2  | 6
  Objective Row   | 0  | -4/3  | 0  | 5/3  | 10 |
There is still one negative entry in the objective row, so we form the third tableau:

  Basic Variables | λ1 | λ2 | λ3   | λ4   | c    | θ-ratio
  λ2              | 0  | 1  | 3/5  | -1/5 | 6/5  |
  λ1              | 1  | 0  | -1/5 | 2/5  | 8/5  |
  Objective Row   | 0  | 0  | 4/5  | 7/5  | 58/5 |

Now all entries in the objective row are non-negative, so the indicated solution is optimal.
So λ0 = (8/5, 6/5, 0, 0)^T, and since b^T = (5, 3, 0, 0), we have

  b^T λ0 = 5(8/5) + 3(6/5) + 0 + 0 = 58/5.

Using complementary slackness,

  λ3 x + λ4 y = 0  and  λ1 u + λ2 v = 0.

Since λ1 = 8/5 ≠ 0, we have u = 0, and since λ2 = 6/5 ≠ 0, we have v = 0.

Then solving the simultaneous equations x + 3y = 5 and 2x + y = 3 yields x = 4/5, y = 7/5, so x0 = (4/5, 7/5)^T and

  x0^T c = 4(4/5) + 6(7/5) = 58/5.

From this result, we see that λ0^T b = x0^T c = 58/5, so the duality theorem is satisfied.
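The complementary slackness conditions for this example can be checked numerically in a few lines. This is an illustrative verification of the worked example, not part of the lecture:

```python
# Numeric check of complementary slackness for the worked example:
# primal optimum (x, y) = (4/5, 7/5), dual optimum (w1, w2) = (8/5, 6/5).

x, y = 4/5, 7/5
w1, w2 = 8/5, 6/5

# Primal slacks u, v from x + 3y - u = 5 and 2x + y - v = 3.
u = x + 3*y - 5
v = 2*x + y - 3
# Dual slacks w3, w4 from w1 + 2w2 + w3 = 4 and 3w1 + w2 + w4 = 6.
w3 = 4 - (w1 + 2*w2)
w4 = 6 - (3*w1 + w2)

# Each product of a slack with its paired variable should be zero.
products = [w1 * u, w2 * v, w3 * x, w4 * y]
# Duality: both objective values should equal 58/5.
z_primal = 4*x + 6*y
z_dual = 5*w1 + 3*w2
```

All four products vanish and both objectives equal 58/5, as the duality theorem requires.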
Non-Simplex Method
The Differences between Simplex Method and Non-Simplex Method
1. Simplex Method

The simplex method jumps along the edges from vertex to vertex of the
feasible set seeking an optimal vertex.

[Figure: feasible region bounded by 2x + 2y ≤ 8 and 5x + 3y ≤ 15, with the vertex (3/2, 5/2) marked]

  Extreme Point (x, y) | Value of z = 120x + 100y
  (0, 0)               | 0
  (3, 0)               | 360
  (0, 4)               | 400
  (3/2, 5/2)           | 430

2. Non-Simplex Method

A non-simplex method is an interior path method: it burrows through the
interior of the feasible set to find an optimal solution.

[Figure: interior path converging to the optimal solution (26/3, 14/3), z = 940/3]
Khachiyan’s Method
• An interior path method: it burrows through the interior of the feasible
set to find an optimal solution.
• Also called the ellipsoid algorithm; it is still not a practical alternative
to the simplex method.
• Designed to find a point that strictly satisfies a system of linear
inequalities; that is, it tries to find a point x such that Ax < b.

Suppose we are given a primal linear programming problem in canonical form

  Minimize z = c^T x
  subject to Ax ≥ b
             x ≥ 0,

together with its dual program

  Maximize z' = λ^T b
  subject to λ^T A ≤ c^T
             λ ≥ 0.
By the duality theorem, at the optimal solutions of both problems the two objective values
are equal, i.e., c^T x = λ^T b, and the constraints of both problems are satisfied:

  c^T x = λ^T b
  Ax ≥ b
  A^T λ ≤ c
  x ≥ 0, λ ≥ 0.

And c^T x = λ^T b is equivalent to the two inequalities

  c^T x - λ^T b ≤ 0  and  -c^T x + λ^T b ≤ 0.
Then we can write the previous set of relations as a single system of inequalities:

  c^T x - b^T λ ≤ 0
  -c^T x + b^T λ ≤ 0
  -Ax ≤ -b
  -x ≤ 0
  A^T λ ≤ c
  -λ ≤ 0.

Therefore, we have reduced the problem of finding an optimal solution to
the primal-dual pair into one of finding a vector [x^T, λ^T]^T. If we find a
solution to the above system of inequalities, then it is an optimal solution to the
primal-dual problem. The above system can also be written as Pz ≤ q,
where

  P = [  c^T  -b^T ]        [  0 ]
      [ -c^T   b^T ]        [  0 ]
      [ -A     0   ]        [ -b ]              [ x ]
      [ -I_n   0   ] ,  q = [  0 ] ,  and  z =  [ λ ] .
      [  0     A^T ]        [  c ]
      [  0    -I_m ]        [  0 ]
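The reduction can be made concrete in a short sketch. The block layout assumed below is one consistent choice for the inequalities above (the helper name `build_system` is illustrative); it builds P and q for "min c^T x s.t. Ax ≥ b, x ≥ 0" and checks that the known optimal primal-dual pair from the earlier example satisfies Pz ≤ q:

```python
# Build P and q so that Pz <= q encodes optimality of the primal-dual pair
# (assumed row order: objective equality as two inequalities, then -Ax <= -b,
# -x <= 0, A^T w <= c, -w <= 0).

def build_system(A, b, c):
    m, n = len(A), len(c)
    P, q = [], []
    P.append(c + [-bi for bi in b]); q.append(0.0)         # c^T x - b^T w <= 0
    P.append([-ci for ci in c] + b); q.append(0.0)         # -c^T x + b^T w <= 0
    for i in range(m):                                     # -Ax <= -b
        P.append([-a for a in A[i]] + [0.0] * m); q.append(-b[i])
    for i in range(n):                                     # -x <= 0
        P.append([-1.0 if j == i else 0.0 for j in range(n)] + [0.0] * m)
        q.append(0.0)
    for j in range(n):                                     # A^T w <= c
        P.append([0.0] * n + [A[i][j] for i in range(m)]); q.append(c[j])
    for i in range(m):                                     # -w <= 0
        P.append([0.0] * n + [-1.0 if k == i else 0.0 for k in range(m)])
        q.append(0.0)
    return P, q

A = [[1.0, 3.0], [2.0, 1.0]]   # min 4x + 6y s.t. x + 3y >= 5, 2x + y >= 3
b = [5.0, 3.0]
c = [4.0, 6.0]
P, q = build_system(A, b, c)
z = [4/5, 7/5, 8/5, 6/5]       # optimal [x; w] found earlier
ok = all(sum(p_i * z_i for p_i, z_i in zip(row, z)) <= q_i + 1e-9
         for row, q_i in zip(P, q))
```

With the optimal pair from the complementary slackness example, every row of the system holds, confirming the reduction.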
Ellipsoid

[Figure: an ellipsoid enclosing part of the feasible region]

As shown in this figure, Khachiyan’s Method is also called the
ellipsoid algorithm.
Affine Scaling Method
• The simplest of the interior point methods.
• Starts inside the feasible set and moves within it toward an optimal vertex.
• Also effective at solving large-scale problems.

[Figure: level sets of c^T x and an interior path toward the optimal point x*]

The canonical form of the linear programming problem is

  Minimize c^T x
  subject to Ax = b
             x ≥ 0.

The algorithm starts with a feasible point x^(0) that is strictly interior.
A step originating from a center point is more effective and yields a lower cost value
for the new point than a step originating from a non-center point.
When we are given a point x^(0) that is feasible but is not a center point,
we can transform the point to the center by using the affine scaling method.
Then we need the new coordinate system. The new linear programming problem in the new coordinate system becomes

  Minimize c0^T x
  subject to A0 x = b
             x ≥ 0,

where c0 = D0 c and A0 = A D0.
The first iteration of the affine scaling method is

  x^(1) = x^(0) + α0 d^(0),

where x^(0) is given, d^(0) = -D0 P0 D0 c, and

  D0 = diag(x1^(0), ..., xn^(0)),

  P0 = I_n - A0^T (A0 A0^T)^{-1} A0,  where A0 = A D0.

Since A0 = A D0, we have

  P0 = I_n - (A D0)^T (A D0 (A D0)^T)^{-1} A D0
     = I_n - D0 A^T (A D0^2 A^T)^{-1} A D0.

Then

  d^(0) = -D0 [I_n - D0 A^T (A D0^2 A^T)^{-1} A D0] D0 c
        = -[D0^2 - D0^2 A^T (A D0^2 A^T)^{-1} A D0^2] c.

One more value needed for the iteration is α0 = α r0, where

  r0 = min over {i : di^(0) < 0} of (-xi^(0) / di^(0)).

Then we finally obtain x^(1) = x^(0) + α0 d^(0).
Example of the Affine Scaling Algorithm

For the linear programming problem with inputs

  x^(0) = (2, 3, 2, 3, 3)^T

and A, b, c given below, where x^(0) is a strictly feasible initial point, the problem is written as

  Maximize z = 2x1 + 5x2
  subject to x1 ≤ 4
             x2 ≤ 6
             x1 + x2 ≤ 8
             x1 ≥ 0, x2 ≥ 0.

Introducing slack variables (and negating the objective to obtain a minimization),

  Minimize z = -2x1 - 5x2 + 0x3 + 0x4 + 0x5
  subject to x1 + x3 = 4
             x2 + x4 = 6
             x1 + x2 + x5 = 8
             x1, ..., x5 ≥ 0.

Then we have

  A = [ 1 0 1 0 0 ]        [ 4 ]
      [ 0 1 0 1 0 ] ,  b = [ 6 ] ,  c = (-2, -5, 0, 0, 0)^T,
      [ 1 1 0 0 1 ]        [ 8 ]

and D0 = diag(2, 3, 2, 3, 3), which is a diagonal matrix.
Since A0 = A D0,

  P0 = I_n - D0 A^T (A D0^2 A^T)^{-1} A D0.

Calculating the numerical value of P0 by hand is very difficult and complicated,
but it can be calculated very easily by using MATLAB:

  P0 = [  0.4355  -0.0968  -0.4355   0.0968  -0.1935 ]
       [ -0.0968   0.3548   0.0968  -0.3548  -0.2903 ]
       [ -0.4355   0.0968   0.4355  -0.0968   0.1935 ]
       [  0.0968  -0.3548  -0.0968   0.3548   0.2903 ]
       [ -0.1935  -0.2903   0.1935   0.2903   0.4194 ]

Since d^(0) = -D0 P0 D0 c, again by using MATLAB, we have

  d^(0) = (0.5806, 14.8065, -0.5806, -14.8065, -15.3871)^T.

For k = 0, we have

  r0 = min over {i : di^(0) < 0} of (-xi^(0) / di^(0))
     = min{2/0.5806, 3/14.8065, 3/15.3871}
     = min{3.4447, 0.2026, 0.1950} = 0.1950.

And α0 = α r0, where α = 0.99 by definition. Then α0 = (0.99)(0.1950) = 0.1930.
Finally,

  x^(1) = x^(0) + α0 d^(0)
        = (2, 3, 2, 3, 3)^T + (0.1930)(0.5806, 14.8065, -0.5806, -14.8065, -15.3871)^T
        = (2.1121, 5.8576, 1.8879, 0.1424, 0.0303)^T.
As we can see from above, we have used the formula x^(1) = x^(0) + α0 d^(0).
This is the same type of formula that we learned in the gradient method.
This completes one iteration of the affine scaling method, and we have seen that
Khachiyan’s method and the affine scaling method are non-simplex methods: the
iteration in these methods starts from inside the feasible set and moves directly
within it toward an optimal vertex.
Karmarkar’s Method
• An interior point method, because it starts from the interior of the
feasible set and moves directly toward the optimal solution.
• Just like the affine scaling method, Karmarkar’s method also iterates to move the
feasible solution closer to the optimal solution.
• The iteration formula is x^(1) = x^(0) + Δx, the same type of equation
that we used in the affine scaling method, and also the same type of equation that
we learned in the gradient methods.
• We need to transform the problem into Karmarkar’s canonical form.

The canonical form is written as

  Minimize z = c^T x
  subject to Ax = 0
             x ≥ 0.

From this canonical form, we have the matrix A and the vector c, and

  Pc = c - A^T (A A^T)^{-1} A c.

Now, based on these values, we can find the first iteration with the formula

  x^(1) = x^(0) - 0.98 s (Pc / ||Pc||).
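The key computation is the projected cost vector Pc = c - A^T (A A^T)^{-1} A c, which projects c onto the null space of A (so that A·Pc = 0 and a step along Pc preserves Ax = 0). The sketch below uses made-up illustrative data for A and c, not values from the lecture:

```python
# Projection of the cost vector onto the null space of A:
#   Pc = c - A^T (A A^T)^{-1} A c, so that A @ Pc = 0.

def solve(M, rhs):
    # Gauss-Jordan elimination for a small square system M y = rhs.
    n = len(M)
    aug = [M[i][:] + [rhs[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * s for a, s in zip(aug[r], aug[col])]
    return [aug[i][n] for i in range(n)]

def project_cost(A, c):
    m, n = len(A), len(c)
    Ac = [sum(A[i][j] * c[j] for j in range(n)) for i in range(m)]
    AAt = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
           for i in range(m)]
    y = solve(AAt, Ac)                        # (A A^T)^{-1} A c
    return [c[j] - sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]

A = [[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]]      # hypothetical constraint matrix
c = [3.0, 1.0, 2.0]
Pc = project_cost(A, c)
residual = [sum(A[i][j] * Pc[j] for j in range(3)) for i in range(2)]  # ~ [0, 0]
```

Because the residual A·Pc is zero, moving from x^(0) in the direction of -Pc keeps the iterate in the constraint null space, which is what the iteration formula above relies on.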
[Figure: interior path converging to the optimal solution (26/3, 14/3), z = 940/3]

As we can see, just like the affine scaling method, Karmarkar’s
algorithm also starts from an interior point and goes directly
toward the optimal solution.