Linear Fractional Programming
What is LFP?

Minimize (p^t x + α)/(q^t x + β)
Subject to Ax ≤ b
           x ≥ 0

where p, q are n-vectors, b is an m-vector, A is an m×n matrix, and α, β are scalars.
Lemma 11.4.1

Let f(x) = (p^t x + α)/(q^t x + β), and let S be a convex set such that q^t x + β ≠ 0 over S. Then f is both pseudoconvex and pseudoconcave over S.
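The quotient-rule gradient of f, which is used in the tableau computations below but not written out on the slides, is

\nabla f(x) \;=\; \frac{(q^{t}x+\beta)\,p \;-\; (p^{t}x+\alpha)\,q}{(q^{t}x+\beta)^{2}} .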
Implications of Lemma 11.4.1

Since f is both pseudoconvex and pseudoconcave over S, then by Theorem 3.5.11 it is also quasiconvex, quasiconcave, strictly quasiconvex, and strictly quasiconcave.

Since f is both pseudoconvex and pseudoconcave, then by Theorem 4.3.7 a point satisfying the Kuhn-Tucker conditions for a minimization problem is also a global minimum over S. Likewise, a point satisfying the Kuhn-Tucker conditions for a maximization problem is also a global maximum over S.
Implications of Lemma 11.4.1 (cont.)

Since f is strictly quasiconvex and strictly quasiconcave, then by Theorem 3.5.6 a local minimum is also a global minimum over S. Likewise, a local maximum is also a global maximum over S.

Since f is quasiconvex and quasiconcave, if the feasible region is bounded, then by Theorem 3.5.3 f attains a minimum at an extreme point of the feasible region and also attains a maximum at an extreme point of the feasible region.
Solution Approach

From the implications:
- Search the extreme points until a Kuhn-Tucker point is reached.
- Direction: r_N^t = ∇_N f(x)^t - ∇_B f(x)^t B⁻¹N (a numeric sketch of this computation follows the list).
- If r_N ≥ 0, a Kuhn-Tucker point has been reached; stop.
- Otherwise, let -r_j = max{-r_i : r_i ≤ 0}.
- Increase the nonbasic variable x_j and adjust the basic variables.
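A minimal numeric sketch of the direction computation above (my own illustration; the name reduced_costs and its arguments are hypothetical, and the constraints are assumed to be in equality form with the slack columns included in A):

import numpy as np

def reduced_costs(grad, A, basis):
    # grad: gradient of f at the current basic feasible solution (numpy array, length n)
    # A: constraint matrix including slack columns (m x n)
    # basis: indices of the m basic variables
    n = A.shape[1]
    nonbasis = [j for j in range(n) if j not in basis]
    B, N = A[:, basis], A[:, nonbasis]
    # r_N^t = grad_N f(x)^t - grad_B f(x)^t B^{-1} N
    r_N = grad[nonbasis] - grad[basis] @ np.linalg.solve(B, N)
    return nonbasis, r_N

If every entry of r_N is nonnegative, the current extreme point is a Kuhn-Tucker point; otherwise the most negative entry selects the entering variable x_j.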
Two solution approaches:
- Gilmore and Gomory (1963)
- Charnes and Cooper (1962)

Gilmore and Gomory (1963)
Initialization Step:
- Find a starting basic feasible solution x_1 and form the corresponding tableau.

Main Step
1. Compute r_N^t = ∇_N f(x_k)^t - ∇_B f(x_k)^t B⁻¹N.
   - If r_N ≥ 0, stop; the current x_k is an optimal solution.
   - Otherwise, go to Step 2.
Gilmore and Gomory
2. Let -r_j = max{-r_i : r_i ≤ 0}, where r_j is the jth component of r_N.
   Determine the basic variable x_{B_r} to leave the basis by the minimum ratio test:
   b_r / y_rj = min{ b_i / y_ij : y_ij > 0, 1 ≤ i ≤ m }
Gilmore and Gomory
3. Replace the variable x_{B_r} by the variable x_j. Update the tableau correspondingly by pivoting at y_rj. Let the current solution be x_{k+1}. Replace k by k+1, and go to Step 1.
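Putting the three steps together, here is a sketch of the whole procedure in Python (my own illustration, not the authors' code; lfp_simplex is a hypothetical name, b ≥ 0 is assumed so that the all-slack basis is feasible, and the problem is assumed bounded with q^t x + β > 0 over the feasible region):

import numpy as np

def lfp_simplex(p, q, alpha, beta, A, b):
    # min (p'x + alpha)/(q'x + beta)  s.t.  Ax <= b, x >= 0
    m, n0 = A.shape
    T = np.hstack([A.astype(float), np.eye(m), np.asarray(b, float).reshape(-1, 1)])
    p = np.concatenate([p, np.zeros(m)])      # extend p, q with zeros for the slacks
    q = np.concatenate([q, np.zeros(m)])
    basis = list(range(n0, n0 + m))           # start from the all-slack basis (x = 0)
    n = n0 + m
    while True:
        x = np.zeros(n)
        x[basis] = T[:, -1]                   # current basic feasible solution
        num, den = p @ x + alpha, q @ x + beta
        grad = (den * p - num * q) / den**2   # gradient of the ratio at x
        nonbasis = [j for j in range(n) if j not in basis]
        # Step 1: reduced costs; B^{-1}N is already stored in the updated tableau
        r = grad[nonbasis] - grad[basis] @ T[:, nonbasis]
        if np.all(r >= -1e-10):               # Kuhn-Tucker point -> global optimum
            return x[:n0], num / den
        j = nonbasis[int(np.argmin(r))]       # Step 2: most negative r_j enters
        col = T[:, j]
        ratios = [T[i, -1] / col[i] if col[i] > 1e-10 else np.inf for i in range(m)]
        row = int(np.argmin(ratios))          # minimum ratio test: leaving row
        T[row] /= T[row, j]                   # Step 3: pivot at y_rj
        for i in range(m):
            if i != row:
                T[i] -= T[i, j] * T[row]
        basis[row] = j

# Applied to the data of the example on the next slide:
p = np.array([-2.0, 1.0]); q = np.array([1.0, 3.0])
A = np.array([[-1.0, 1.0], [0.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0, 14.0])
print(lfp_simplex(p, q, 2.0, 4.0, A, b))      # expected: x = (7, 0), value -12/11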
Example: Gilmore and Gomory

min (-2x_1 + x_2 + 2)/(x_1 + 3x_2 + 4)
s.t. -x_1 + x_2 ≤ 4
     x_2 ≤ 6
     2x_1 + x_2 ≤ 14
     x_1, x_2 ≥ 0

[Figure: feasible region in the (x_1, x_2) plane with extreme points (0,0), (7,0), (4,6), (2,6), and (0,4).]
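A quick sanity check of the extreme-point property (my own, not on the slides): evaluate the ratio objective at the five extreme points of the region.

def f(x1, x2):
    # objective of the example: (-2*x1 + x2 + 2)/(x1 + 3*x2 + 4)
    return (-2 * x1 + x2 + 2) / (x1 + 3 * x2 + 4)

for v in [(0, 0), (7, 0), (4, 6), (2, 6), (0, 4)]:
    print(v, round(f(*v), 4))
# minimum -1.0909 (= -12/11) at (7, 0); maximum 0.5 at (0, 0)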
Iteration 1

            x_1      x_2     x_3   x_4   x_5   RHS
∇f(x_1)^t  -10/16   -2/16    0     0     0     -
x_3          -1        1     1     0     0     4
x_4           0        1     0     1     0     6
x_5           2        1     0     0     1    14
r          -10/16   -2/16    0     0     0     -
Computation of Iteration 1

q^t x_1 + β = 4
p^t x_1 + α = 2
∇f(x_1)^t = (-10/16, -2/16, 0, 0, 0)
∇_N f(x_1)^t = (-10/16, -2/16), ∇_B f(x_1)^t = (0, 0, 0)
r_N^t = (r_1, r_2) = ∇_N f(x_1)^t - ∇_B f(x_1)^t B⁻¹N
      = (-10/16, -2/16) - (0, 0, 0) [-1 1; 0 1; 2 1] = (-10/16, -2/16)
r_B^t = (r_3, r_4, r_5) = (0, 0, 0)
r_N is not ≥ 0, so continue.
max{-r_1, -r_2, -r_3, -r_4, -r_5} = -r_1 = 10/16, so x_1 enters.
min{ b_i / y_i1 : y_i1 > 0 } = 14/2 = 7 (rows 1 and 2 are excluded since y_11 = -1 and y_21 = 0), so x_5 leaves.
Iteration 2

            x_1      x_2      x_3   x_4   x_5      RHS
∇f(x_2)^t    -10/121  47/121   0     0     0        -
x_3           0       3/2      1     0     1/2     11
x_4           0       1        0     1     0        6
x_1           1       1/2      0     0     1/2      7
r             0       52/121   0     0     5/121    -
Computation of Iteration 2

q^t x_2 + β = 11
p^t x_2 + α = -12
∇f(x_2)^t = (-10/121, 47/121, 0, 0, 0)
∇_N f(x_2)^t = (47/121, 0), ∇_B f(x_2)^t = (0, 0, -10/121)
r_N^t = (r_2, r_5) = ∇_N f(x_2)^t - ∇_B f(x_2)^t B⁻¹N
      = (47/121, 0) - (0, 0, -10/121) [3/2 1/2; 1 0; 1/2 1/2] = (52/121, 5/121)
r_N ≥ 0, stop.
Optimal solution: x_1 = 7, x_2 = 0, minimum value = -12/11 ≈ -1.09.
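Since ∇_B f is no longer zero at x_2, the r_N values above are worth checking numerically (again a sketch of my own):

import numpy as np

p = np.array([-2.0, 1.0, 0.0, 0.0, 0.0]); q = np.array([1.0, 3.0, 0.0, 0.0, 0.0])
x2 = np.array([7.0, 0.0, 11.0, 6.0, 0.0])               # basic variables x3, x4, x1
num, den = p @ x2 + 2.0, q @ x2 + 4.0                   # -12 and 11
grad = (den * p - num * q) / den**2                     # (-10/121, 47/121, 0, 0, 0)
BinvN = np.array([[1.5, 0.5], [1.0, 0.0], [0.5, 0.5]])  # x2 and x5 columns of the tableau
r_N = grad[[1, 4]] - grad[[2, 3, 0]] @ BinvN            # (52/121, 5/121) >= 0, so stop
print(grad, r_N)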
Charnes and Cooper

Original problem:
  Minimize (p^t x + α)/(q^t x + β)
  Subject to Ax ≤ b
             x ≥ 0

Transformed problem (let z = 1/(q^t x + β) and y = zx):
  Minimize p^t y + αz
  Subject to Ay - bz ≤ 0
             q^t y + βz = 1
             y ≥ 0, z ≥ 0
Example: Charnes and Cooper

Min -2y_1 + y_2 + 2z
s.t. -y_1 + y_2 - 4z ≤ 0
     y_2 - 6z ≤ 0
     2y_1 + y_2 - 14z ≤ 0
     y_1 + 3y_2 + 4z = 1
     y_1, y_2, z ≥ 0
Solved by Lingo

Global optimal solution found at iteration: 6
Objective value: -1.090909

Variable   Value           Reduced Cost
Y1         0.6363636       0.000000
Y2         0.000000        4.727273
Z          0.9090909E-01   0.000000

Recovering the original variables: x_1 = y_1/z = 7, x_2 = y_2/z = 0.