
A MODIFICATION OF REVISED SIMPLEX
METHOD
Marko D. Petković¹, Predrag S. Stanimirović¹∗ and Nebojša V. Stojković²
¹ University of Niš, Department of Mathematics, Faculty of Science, Višegradska 33, 18000 Niš, Yugoslavia
² University of Niš, Faculty of Economics, Trg Kralja Aleksandra 11, 18000 Niš, Yugoslavia
E-mail: dexter of [email protected], [email protected], [email protected]
Abstract
We introduce a modification of the revised simplex method, which accelerates the process of finding the first basic feasible solution in the two-phase revised simplex method. We report computational results on numerical examples from the Netlib test set.
AMS Subj. Class.: 90C05
Key words: Linear programming, Revised simplex method, MATHEMATICA.
1 Introduction
Consider the linear program in the general form
Maximize
    f(x) = f((x_N)_1, \ldots, (x_N)_{n_1}) = \sum_{i=1}^{n_1} c_i (x_N)_i + d                    (1.1)
subject to
    N_i^{(1)}:  \sum_{j=1}^{n_1} a_{ij} (x_N)_j \le b_i,    i = 1, \ldots, r,
    N_i^{(2)}:  \sum_{j=1}^{n_1} a_{ij} (x_N)_j \ge b_i,    i = r + 1, \ldots, s,
    J_i:        \sum_{j=1}^{n_1} a_{ij} (x_N)_j = b_i,      i = s + 1, \ldots, m,
                (x_N)_j \ge 0,                              j = 1, \ldots, n_1.

∗ Corresponding author.
Every inequality of the form N_i^{(1)} (LE constraint) is changed into an equality by adding a slack variable (x_B)_i:
    N_i^{(1)}:  \sum_{j=1}^{n_1} a_{ij} (x_N)_j + (x_B)_i = b_i,    i = 1, \ldots, r.
Also, every inequality of the form N_i^{(2)} (GE constraint) is transformed into an equality by subtracting a surplus variable (x_B)_i:
    N_i^{(2)}:  \sum_{j=1}^{n_1} a_{ij} (x_N)_j - (x_B)_i = b_i,    i = r + 1, \ldots, s.
After these transformations we obtain an equivalent linear program in the standard form

    Maximize     c_1 x_1 + \cdots + c_n x_n + d = c^T x + d,    c_i = 0, i > n_1
    subject to   Ax = b,    b = (b_1, \ldots, b_m),                               (1.2)
                 x = ((x_N)_1, \ldots, (x_N)_{n_1}, (x_B)_1, \ldots, (x_B)_s),
                 (x_N)_j \ge 0, j = 1, \ldots, n_1,    (x_B)_i \ge 0, i = 1, \ldots, s,

where the real matrix A is of the order m × n, for n = n_1 + s.
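As a small illustration (our own example, not one of the test problems considered later), consider the program

    Maximize     x_1 + 2x_2
    subject to   x_1 + x_2 \le 4,    x_1 - x_2 \ge 1,    x_1 + 3x_2 = 6,    x_1, x_2 \ge 0.

Here n_1 = 2, r = 1, s = 2 and m = 3. Adding a slack variable (x_B)_1 to the first constraint and subtracting a surplus variable (x_B)_2 from the second gives the standard form

    Maximize     x_1 + 2x_2
    subject to   x_1 + x_2 + (x_B)_1 = 4,
                 x_1 - x_2 - (x_B)_2 = 1,
                 x_1 + 3x_2 = 6,
                 x_1, x_2, (x_B)_1, (x_B)_2 \ge 0,

with n = n_1 + s = 4.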
The paper is organized as follows. In the second section we describe the two-phase revised simplex method, which is analogous to the corresponding two-phase simplex method from [3], [4], [6] and [7]. In the third section we describe an algorithm for finding the first basic feasible solution. In the last section we report results on several Netlib test problems.
2 The revised simplex method
By A_i we denote the ith column of A, i = 1, \ldots, n. We suppose that the matrix A is of full rank (rank(A) = m), so there exists a basis \{A_{i_k}, k = 1, \ldots, m\} of R^m. The matrix B = [A_{i_1}, \ldots, A_{i_m}] containing the corresponding columns of A is called the basic matrix. The variables x_{i_1}, \ldots, x_{i_m} are called basic variables, and the vector containing the basic variables is denoted by x_B. The m × (n − m) matrix containing the columns of A corresponding to the nonbasic variables is denoted by N, and the (n − m)-vector containing the nonbasic variables is denoted by x_N. The columns T_i of the Tucker table T are the representations of the vectors A_i in the basis B, i.e. T = B^{-1}N.
Here the target function is represented by the equality c_1 x_1 + \cdots + c_n x_n + d = -x_c, where x_c is a basic variable representing the value of the goal function. After appropriate modifications of the corresponding algorithms from [3], [4], [6], [7], we state the following two-phase revised simplex algorithm. In this algorithm the notation A^i means the ith row of the matrix A.
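As an illustration of these objects, the following MATHEMATICA sketch builds B, N and the Tucker table T from a matrix A and a list of basic column indices; the name basisData is ours and is not taken from RevMarPlex, and LinearSolve is used instead of an explicit inverse.

    basisData[A_, basis_] :=
     Module[{n = Dimensions[A][[2]], B, Nmat, T},
      B = A[[All, basis]];                             (* basic matrix B *)
      Nmat = A[[All, Complement[Range[n], basis]]];    (* nonbasic columns *)
      T = LinearSolve[B, Nmat];                        (* Tucker table T = B^-1 N *)
      {B, Nmat, T}]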
Algorithm 1. (Revised simplex method for basic feasible solution.)
Step S1A. Reconstruct the vector b_T = B^{-1} b, the Tucker table T = B^{-1} N and the target function c_T = T^{n+1}.
Step S1B. If (c_T)_1, \ldots, (c_T)_n \le 0, then the basic solution is an optimal solution.
Step S1C. Choose an arbitrary (c_T)_j > 0.
Step S1D. If t_{1j}, \ldots, t_{mj} \le 0, stop the algorithm. The maximum is +\infty.
Otherwise, go to the next step.
Step S1E. Compute
    \min_{1 \le i \le m} \left\{ \frac{(b_T)_i}{t_{ij}} \;\Big|\; t_{ij} > 0 \right\} = \frac{(b_T)_p}{t_{pj}},
replace the nonbasic and basic variables (x_N)_j and (x_B)_p, respectively, and go to Step S1B.
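A minimal MATHEMATICA sketch of one pass through Steps S1A-S1E is given below. It assumes the data A, b, c of the standard form (1.2) and a list of basic column indices whose basic solution is feasible; the name simplexStep and the returned tags are illustrative and are not taken from RevMarPlex.

    simplexStep[A_, b_, c_, basis_] :=
     Module[{m, n, nonbasis, B, bT, y, cT, j, Tj, ratios, p},
      {m, n} = Dimensions[A];
      nonbasis = Complement[Range[n], basis];
      B = A[[All, basis]];
      bT = LinearSolve[B, b];                                 (* b_T = B^-1 b *)
      y = LinearSolve[Transpose[B], c[[basis]]];              (* simplex multipliers *)
      cT = c[[nonbasis]] - Transpose[A[[All, nonbasis]]] . y; (* reduced costs, Step S1A *)
      If[Max[cT] <= 0, Return[{"optimal", basis, bT}]];       (* Step S1B *)
      j = Position[cT, _?Positive][[1, 1]];                   (* Step S1C *)
      Tj = LinearSolve[B, A[[All, nonbasis[[j]]]]];           (* column T_j *)
      If[Max[Tj] <= 0, Return[{"unbounded"}]];                (* Step S1D *)
      ratios = Table[If[Tj[[i]] > 0, bT[[i]]/Tj[[i]], Infinity], {i, m}];
      p = First[Ordering[ratios, 1]];                         (* Step S1E *)
      {"pivot", ReplacePart[basis, p -> nonbasis[[j]]]}]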
If the condition (b_T)_1, \ldots, (b_T)_m \ge 0 is not satisfied, we use the following algorithm to search for the first basic feasible solution. We have not found this version of the algorithm in the literature.
Algorithm 2. (Find the first basic feasible solution).
Step S1. Reconstruct the vector b_T = B^{-1} b and the Tucker table T = B^{-1} N.
Step S2. Select the last (b_T)_i < 0.
Step S3. If t_{i1}, \ldots, t_{in} \ge 0 then STOP. The linear program cannot be solved.
Step S4. Otherwise, find t_{ij} < 0 and compute
    \min_{k > i} \left( \left\{ \frac{(b_T)_i}{t_{ij}} \right\} \cup \left\{ \frac{(b_T)_k}{t_{kj}} \;\Big|\; t_{kj} > 0 \right\} \right) = \frac{(b_T)_p}{t_{pj}},
replace the nonbasic and basic variables (x_N)_j and (x_B)_p, respectively, and go to Step S1. We use the last t_{ij} < 0.
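The choice of the pivot row in Step S4 may be sketched in MATHEMATICA as follows, assuming bT and the jth Tucker column Tj have already been reconstructed and i is the index selected in Step S2; the name pivotRowS4 is ours and is not taken from RevMarPlex.

    pivotRowS4[bT_, Tj_, i_] :=
     Module[{cand},
      (* candidates: row i itself, and every row k > i with t_kj > 0 *)
      cand = Join[{{bT[[i]]/Tj[[i]], i}},
        Table[If[k > i && Tj[[k]] > 0, {bT[[k]]/Tj[[k]], k}, Nothing], {k, Length[bT]}]];
      Last[First[Sort[cand]]]]    (* row p attaining the minimum ratio *)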
3 Modification of the algorithm for finding the first basic solution
The problem of the replacement of a basic and a nonbasic variable in the general
revised simplex method is contained in Step S4. We observed two drawbacks
of Step S4.
1. If p = i and if there exists an index y < i = p such that
    \frac{(b_T)_y}{t_{yj}} < \frac{(b_T)_p}{t_{pj}},    (b_T)_y > 0,  t_{yj} > 0,
then in the next iteration (x_B)_y becomes negative:
    (x_B)^1_y = (b_T)^1_y = (b_T)_y - \frac{(b_T)_p}{t_{pj}} t_{yj} < (b_T)_y - \frac{(b_T)_y}{t_{yj}} t_{yj} = 0.
2. If p > i, then in the next iteration (b_T)^1_i is negative:
    \frac{(b_T)_p}{t_{pj}} < \frac{(b_T)_i}{t_{ij}}  \Rightarrow  (b_T)^1_i = (b_T)_i - \frac{(b_T)_p}{t_{pj}} t_{ij} < 0.
We can resolve this problem in two ways.
In the first one, for fixed (b_T)_i we try to find t_{ij} < 0 such that
    \min_k \left( \left\{ \frac{(b_T)_i}{t_{ij}} \;\Big|\; t_{ij} < 0 \right\} \cup \left\{ \frac{(b_T)_k}{t_{kj}} \;\Big|\; t_{kj} > 0,  (b_T)_k > 0 \right\} \right) = \frac{(b_T)_i}{t_{ij}}.                    (3.1)
In this case, it is possible to choose t_{ij} for the pivot element and obtain a new solution with a smaller number of negative (b_T)_i.
This way is suitable for the standard simplex method but not for the revised simplex method, because we would need to reconstruct the columns corresponding to all t_{ij} < 0.
In the second one, for fixed t_{ij} < 0 with (b_T)_i < 0, the main idea is to find a minimum
    \frac{(b_T)_p}{t_{pj}} < \min \left\{ \frac{(b_T)_k}{t_{kj}} \;\Big|\; (b_T)_k < 0,  t_{kj} < 0,  k = 1, \ldots, m \right\}
and, if relation (3.1) is valid, then choosing (b_T)_p > 0 as the pivot element we obtain a new solution with a smaller number of negative (b_T)_i.
For this purpose, we propose a modification of Step S4. This modification follows from the following lemma.
Lemma 3.1 Let the problem (1.2) be feasible and let x be a basic infeasible solution with q negative coordinates. Then there exists t_{ij} < 0 and, in the following two cases:
a) q = m, and
b) q < m and
    Neg = \left\{ \frac{(b_T)_k}{t_{kj}} \;\Big|\; (b_T)_k < 0,  t_{kj} < 0,  k = 1, \ldots, m \right\},
    Pos = \left\{ \frac{(b_T)_k}{t_{kj}} \;\Big|\; (b_T)_k > 0,  t_{kj} > 0,  k = 1, \ldots, m \right\},
    \min (Neg \cup Pos) = \frac{(b_T)_r}{t_{rj}} \in Neg,                    (3.2)
it is possible to produce a new basic solution with at most q − 1 negative coordinates in only one iterative step of the revised simplex method, if we choose t_{rj} for the pivot element, i.e. replace the nonbasic variable (x_N)_j with the basic variable (x_B)_r.
Proof. For every (b_T)_i < 0, if all t_{ij} > 0 then -(x_B)_i = t_{i1}(x_N)_1 + \cdots + t_{im}(x_N)_m - (b_T)_i > 0. That is a contradiction, because then (x_B)_i < 0 for all (x_N)_k > 0, while the problem (1.2) is feasible.
a) If q = m, then for the pivot element t_{ij} < 0 we get a new solution with at least one positive coordinate:
    (x_B)^1_i = (b_T)^1_i = \frac{(b_T)_i}{t_{ij}} > 0.
b) Assume now that the conditions q < m and (3.2) are satisfied. Choose t_{rj} for the pivot element.
For (b_T)_k > 0 and t_{kj} < 0 it is obvious that
    (b_T)^1_k = (b_T)_k - \frac{(b_T)_r}{t_{rj}} t_{kj} > (b_T)_k \ge 0.
For (b_T)_k > 0 and t_{kj} > 0, using \frac{(b_T)_k}{t_{kj}} \ge \frac{(b_T)_r}{t_{rj}}, we conclude immediately
    (b_T)^1_k = (b_T)_k - \frac{(b_T)_r}{t_{rj}} t_{kj} \ge 0.
Hence, every positive (b_T)_k remains positive. Moreover, for (b_T)_r < 0 we get
    (b_T)^1_r = \frac{(b_T)_r}{t_{rj}} \ge 0,
which completes the proof. □
Remark 3.1 Let the condition (3.2) not be satisfied, i.e. \min (Neg \cup Pos) = \frac{(b_T)_r}{t_{rj}} \notin Neg. If we choose t_{rj} for the pivot element we will also have
    (b_T)^1_k = (b_T)_k - \frac{(b_T)_r}{t_{rj}} t_{kj} \ge 0
for (b_T)_k > 0 and t_{kj} both positive or negative. But for negative (b_T)_k we can prove similarly that
    (b_T)^1_k = (b_T)_k - \frac{(b_T)_r}{t_{rj}} t_{kj} < 0
for t_{kj} both positive or negative. Finally, the new solution has the same number q of negative coordinates.
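For illustration (a small numerical example of our own, not taken from the test problems), let m = 2, b_T = (−2, 3) and let the chosen column satisfy t_{1j} = −2, t_{2j} = 1. Then Neg = {(−2)/(−2)} = {1}, Pos = {3/1} = {3} and min(Neg ∪ Pos) = 1 ∈ Neg, so condition (3.2) holds with r = 1. Pivoting on t_{1j} gives (b_T)^1_1 = (b_T)_1/t_{1j} = 1 ≥ 0 and (b_T)^1_2 = 3 − ((−2)/(−2)) · 1 = 2 ≥ 0, so the number of negative coordinates drops from q = 1 to 0, as Lemma 3.1 predicts.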
Algorithm 3. (Modification of Algorithm 2 ).
Step SM1. Let B be the basic and N the non-basic matrix. Reconstruct the vector b_T.
Step SM2. Construct the set
    B = \{ b_{i_1}, \ldots, b_{i_q} \} = \{ b_{i_k} \mid b_{i_k} < 0,  k = 1, \ldots, q \}.
Step SM3. Select b_{i_s} < 0.
Step SM4. Reconstruct the i_s-th row. If t_{i_s,1}, \ldots, t_{i_s,n} \ge 0 then STOP. The linear program is not solvable.
Otherwise, choose the first t_{i_s,j} < 0.
Step SM5. Reconstruct the jth column T_j. Compute
    \min_{1 \le i \le m} \left\{ \frac{(b_T)_i}{t_{ij}} \;\Big|\; (b_T)_i t_{ij} > 0 \right\} = \frac{(b_T)_p}{t_{pj}},
replace the nonbasic and basic variables (x_N)_j and (x_B)_p, respectively, and go to Step SM2.
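The minimum in Step SM5 can be sketched in MATHEMATICA as follows, assuming bT and the jth Tucker column Tj are available; only rows in which (b_T)_i and t_{ij} have the same sign, i.e. the set Neg ∪ Pos of Lemma 3.1, enter the competition. The name pivotRowSM5 is ours, not taken from RevMarPlex.

    pivotRowSM5[bT_, Tj_] :=
     Module[{cand},
      cand = Table[If[bT[[k]] Tj[[k]] > 0, {bT[[k]]/Tj[[k]], k}, Nothing],
        {k, Length[bT]}];
      If[cand === {}, None, Last[First[Sort[cand]]]]]   (* row p attaining min(Neg ∪ Pos) *)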
4 Implementation and numerical experience
In the previous algorithms the reconstruction of the Tucker table is based on matrix inversion. Moreover, we require only a few rows and columns of the Tucker table. Next we show how to reconstruct a single row or column of the Tucker table using a linear equation solver. It is well known that solving a linear system is much more efficient than inverting a matrix [2]. The reconstruction of the ith column T_i reduces to solving B T_i = A_i. To find the ith row T^i we need to know (B^{-1})^i, since
    T^i = (B^{-1})^i N.
The vector (B^{-1})^i can be found from B^{-1} B = I, which implies (B^{-1})^i B = I^i. Applying transposition to both sides we get B^T ((B^{-1})^i)^T = (I^i)^T. This method is implemented in the following algorithm:
Algorithm 4. (Reconstruction of the row T^i of the Tucker table.)
Step 1. Let B be the basic matrix, N the non-basic matrix, and I the identity matrix. Solve the system
    B^T ((B^{-1})^i)^T = (I^i)^T.
Step 2. Compute and return T^i = (B^{-1})^i N.
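A minimal MATHEMATICA sketch of Algorithm 4 is given below; it relies on the built-in LinearSolve mentioned later in this section, while the names tuckerRow, tuckerColumn and Nmat are ours (N itself is a reserved symbol in MATHEMATICA).

    tuckerRow[B_, Nmat_, i_] :=
     Module[{yi},
      (* Step 1: solve B^T y = (I^i)^T, so that y = ((B^-1)^i)^T *)
      yi = LinearSolve[Transpose[B], UnitVector[Length[B], i]];
      (* Step 2: the ith row of the Tucker table is (B^-1)^i N *)
      yi . Nmat]

    (* a column is obtained directly from B T_j = A_j *)
    tuckerColumn[B_, Aj_] := LinearSolve[B, Aj]

For example, tuckerRow[B, Nmat, is] reconstructs the row needed in Step SM4 of Algorithm 3.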
The revised simplex method and the proposed modification are implemented in the program RevMarPlex, written in the programming language MATHEMATICA [8], and tested on some Netlib test problems. The results are compared with the robust LP solver PCx.
From columns two and three in Table 1 we can see that our results are in accordance with the results obtained by PCx. Table 1 also gives the number of iterations needed for the construction of the first basic feasible solution, using Algorithm 3 and Algorithm 2, respectively.
Problem      PCx                     RevMarPlex
Adlittle      2.25494963×10^5         225494.963162
Afiro        -4.64753143×10^2        -464.753142
Agg          -3.59917673×10^7        -35991767.286576
Agg2         -2.0239251×10^7         -20239252.3559776
Blend        -3.08121498×10^1        -30.812150
Sc105        -5.2202061212×10^1      -52.202061
Sc205        -5.22020612×10^1        -52.202061
Sc50a        -6.4575077059×10^1      -64.575077
Sc50b        -7.000000000×10^1       -70
Scagr25      -1.47534331×10^7        -14753433.060769
Scagr7       -2.33138982×10^6        -2331389.824330
Stocfor1     -4.1131976219×10^4      -41131.976219
LitVera       1.999992×10^-2          0
Kb2          -1.7499×10^3            -1749.9001299062
Recipe       -2.66616×10^2           -266.6160
Share1B      -7.65893186×10^4        -76589.3185791859

Iterations to the first basic feasible solution (for the problems that required them):
Alg3:   32    2   151    75    520     55   14   13    498
Alg2:   39   17   151   129  >1500     74   17   15  >1500

Table 1.
From these iteration counts it is easy to see that Algorithm 3 is faster than Algorithm 2 in all cases. A dash in the corresponding position means that no iterative steps are needed before the first basic feasible solution.
Also, RevMarPlex solved examples taken from [1] and [5] which PCx is unable to solve (it terminates with UNKNOWN status). The results are presented in Table 2. In all cases the numbers of iterations of Algorithm 2 and Algorithm 3 were the same.
Problem      RevMarPlex
07-20-02      0
15-30-03      2.16968×10^-5
15-30-04      2.16968×10^-5
15-30-06      4.90359×10^-6
15-30-07      3.2807×10^-4
15-60-07     -6.29305×10^-7
15-60-09     -6.29305×10^-7
20-40-05      1.63948×10^-6
30-60-05      1.64567×10^-11
30-60-08      5.22527×10^-5
LitVera       0

Alg3 – Alg2 iterations:   3   2   2   2   3   3   3   3   3   -

Table 2.
RevMarPlex (the LinearSolve function) indicates that the matrices of these examples are badly conditioned (with condition numbers between 10^15 and 10^20). Nevertheless, RevMarPlex solved these examples, giving solutions close to optimal.
References
[1] M.D. Ašić and V.V. Kovačević-Vujčić, Ill-conditionedness and interior-point methods, Univ. Beograd Publ. Elektrotehn. Fak., 11 (2000), 53–58.
[2] M.A. Bhatti, Practical optimization with MATHEMATICA applications, Springer
Verlag Telos, 2000.
[3] B.D. Bounday, Basic linear programming, Edward Arnold, Baltimore, 1984.
[4] J.P. Ignizio, Linear programming in single-multiple-objective systems, Prentice Hall, Englewood Cliffs, 1982.
[5] V.V. Kovačević-Vujčić, and M.D. Ašić, Stabilization of interior-point methods for
linear programming, Computational Optimization and Applications, 14 (1999),
1–16.
[6] M. Sakarovitch, Linear programming, Springer-Verlag, New York, 1983.
[7] S. Vukadinović, S. Cvejić, Mathematical programming, Univerzitet u Prištini, Priština, 1996. (In Serbian)
[8] S. Wolfram, The Mathematica Book, Version 3.0, Wolfram Media and Cambridge University Press, 1996.