The Method of Cost Evaluated Distance Variables for
Solving Piecewise Linear Programming Models
Milan Houška, Helena Brožová
Department of Operational and System Analysis
Faculty of Economics and Management
Czech University of Agriculture in Prague
phone 02/24382351
e-mail: [email protected]
Abstract
The article introduces a new method for solving piecewise linear programming models. Reference [1] offers an algorithm that can be applied only to convex piecewise linear programming models, and an algorithm that solves models with linear constraints and a separable piecewise linear objective function by transforming them into a bivalent programming model. This research aims at finding an easier way to solve such models. The method published in this article is applicable to convex optimization models; future research will focus on its generalization to non-convex models.
Keywords
The Method of Cost Evaluated Distance Variables, Piecewise Linear Programming, Convex
Optimization Models
1 Introduction
The objective function of many practical problems can be expressed as an additively separable function, i.e. a sum of functions each of which depends on one variable only. Such problems are referred to in the literature [1] as "separable tasks", and the whole class of standard linear programming models can serve as an example.
The standard linear programming model is generally formulated as follows

max { Z(x) | ∑j=1..n aijxj ≤ bi, i = 1, 2, ..., m; xj ≥ 0, j = 1, 2, ..., n },  (1)

where

Z(x) = c1x1 + c2x2 + … + cnxn.  (2)
The vector c is the vector of objective function coefficients; the objective function is maximized over the set of feasible solutions determined by the linear inequalities and by the non-negativity constraints on all variables included in the model. We can suppose that the objective function can be written as an additively separable function

Z(x) = Z1(x1) + Z2(x2) + … + Zn(xn),  (3)

where

Zj(xj) = cjxj for j = 1, 2, … , n.  (4)

Each variable is characterized by a single coefficient cj in the objective function over its whole definition range.
The article is focused on the model which can be written as follows

max { Z(x) | ∑j=1..n aijxj ≤ bi, i = 1, 2, ..., m; xj ≥ 0, j = 1, 2, ..., n },  (5)

where

Z(x) = Z1(x1) + Z2(x2) + … + Zj(xj) + … + Zn(xn),  (6)

and

Zj(xj) = Zj1(xj) for xj ∈ 〈0 ; kj1〉,
Zj(xj) = Zj2(xj) for xj ∈ 〈kj1 ; kj2〉,
…
Zj(xj) = Zj(pj+1)(xj) for xj ∈ 〈kjpj ; ∞), for j = 1, 2, … , n.  (7)
The objective function is expressed as an additively separable function, and each part of this function is piecewise linear. Each partial objective function is divided into linear functions on pj + 1 intervals by pj boundary points kj1, kj2, …, kjpj. An example of this situation is displayed in figure 1, where two partial objective functions are shown: function Z1SLP is a standard linear function and function Z1PLP is a piecewise linear function.
Figure 1: Example of the courses of two objective functions (Z1 plotted against x1 on 〈0 ; 14〉)
Note: Z1SLP = 2x1 for x1 ∈ 〈0, ∞); Z1PLP = x1 for x1 ∈ 〈0, 6〉 and Z1PLP = 3x1 – 12 for x1 ∈ 〈6, ∞).
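The two function courses from Figure 1 can be reproduced numerically. A minimal sketch (the function definitions are taken from the note above; the helper names are illustrative):

```python
def z1_slp(x1):
    # Standard linear partial objective function: slope 2 everywhere.
    return 2 * x1

def z1_plp(x1):
    # Piecewise linear partial objective function from the note:
    # slope 1 on <0, 6>, slope 3 on <6, inf); both pieces meet at x1 = 6.
    return x1 if x1 <= 6 else 3 * x1 - 12

print(z1_slp(6), z1_plp(6), z1_plp(10))  # -> 12 6 18
```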
Reference [1] describes a way to transform the piecewise linear programming model into a standard linear programming model. The following model is then solved
max { ∑j=1..n ∑k=1..pj sjkxjk | ∑j=1..n aij ∑k=1..pj xjk ≤ bi, i = 1, 2, ..., m; 0 ≤ xjk ≤ kjk, k = 1, 2, ..., pj, j = 1, 2, ..., n },  (8)

where sjk is the slope of the k-th linear segment of the j-th variable and
kjk is the k-th dividing point of the j-th variable.
This model can be solved using the simplex algorithm. The final value of the j-th variable is equal to the sum of all its partial variables xjk, i.e. xj = xj1 + xj2 + ... + xjpj. Because the model maximizes the objective function, the objective function has to be concave on the whole definition range. Then the case xjl = 0 and xjk > 0 for k > l cannot occur, because the variables xjk are defined as the lengths of the intervals 〈kjk , kjk+1〉 ∩ 〈0 , xj〉.
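Transformation (8) can be sketched in code. A minimal one-variable example (the model and all its numbers are illustrative, not from the article), using scipy.optimize.linprog: a concave objective with slopes s11 = 2 on 〈0, 6〉 and s12 = 1 afterwards, subject to the single constraint x1 ≤ 10, split into segment variables x1 = x11 + x12.

```python
from scipy.optimize import linprog

# Hypothetical model: maximize 2*x11 + 1*x12 (concave piecewise objective)
# subject to x11 + x12 <= 10 and 0 <= x11 <= 6.
c = [-2.0, -1.0]              # scipy minimizes, so the slopes are negated
A_ub = [[1.0, 1.0]]           # x11 + x12 <= 10 (the original constraint)
b_ub = [10.0]
bounds = [(0, 6), (0, None)]  # segment bounds

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x11, x12 = res.x
print(x11 + x12, -res.fun)    # x1 = 10.0, Z = 2*6 + 1*4 = 16.0
```

Because the objective is concave, the better-rewarded segment variable x11 fills up first (x11 = 6 before x12 becomes positive), which is exactly the ordering argument in the text.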
2 The method of cost evaluated distance variables
The method of cost evaluated distance variables is based on ideas of goal programming. Special types of variables (distance variables) and constraints (boundary point constraints) are included in the original model to describe the course of each part of each piecewise linear objective function.
Each piecewise linear function consists of linear functions on pj + 1 intervals separated by pj boundary points kj1, kj2, …, kjpj; the slope of the linear function differs on each of these intervals. The actual value of the j-th variable can be expressed through its right and left distances from each boundary point. The actual value of the piecewise linear function depending on this variable can then be expressed as a proper linear function of the variable xj and the corresponding right distance variables. This idea is shown in figure 2 and explained in the following text.
The method consists of the four following steps. At the end of this process, a transformed model solvable by the simplex algorithm is obtained.
Step 1
For each partial objective function, determine all boundary points kj1, kj2, …, kjpj at which the slope of the piecewise linear function changes, and determine the corresponding function values.
When the model is piecewise linear, all these boundary points are known. When the piecewise linear programming model is used as a piecewise approximation of a model with a non-linear but separable objective function, this step is more difficult [3], because we have to find the best piecewise linear approximation of the original partial non-linear objective functions.
Step 2
Add the boundary point constraints with distance variables describing the partial piecewise linear objective functions.
This is the most important step of the whole algorithm. The principle of the boundary point constraints is based on the same idea as the goal programming algorithm: in such models the decision maker sets goals for each objective function and minimizes the distance between the actual values of the objective functions and the target values.
Let two distance variables denoted rj1 and rj2 determine the difference between the actual value of the variable xj = xj* and the boundary point kj1. The value of the variable xj changes during the optimization, but only one of the following cases may occur at any moment:
xj* < kj1: then rj1 = kj1 – xj* and rj2 = 0;
xj* > kj1: then rj1 = 0 and rj2 = xj* – kj1;
xj* = kj1: then rj1 = 0 and rj2 = 0.
See figure 2.
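The three cases above can be condensed into one rule: rj1 and rj2 are the non-negative parts of kj1 − xj* and xj* − kj1, respectively. A minimal sketch (the helper name is illustrative, not part of the article's notation):

```python
def distance_vars(x_star, k):
    """Distances of the actual value x* from the boundary point k.

    At most one of the pair is positive, matching the three cases above.
    """
    r1 = max(k - x_star, 0)   # underachievement: x* below the boundary point
    r2 = max(x_star - k, 0)   # overachievement: x* above the boundary point
    return r1, r2

# The three cases with k_j1 = 6:
print(distance_vars(4, 6))   # x* < k_j1 -> (2, 0)
print(distance_vars(8, 6))   # x* > k_j1 -> (0, 2)
print(distance_vars(6, 6))   # x* = k_j1 -> (0, 0)
```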
Figure 2: Values of the variables xj, rj1, rj2, rj3 and rj4 for the actual value of the variable xj = xj* (the figure plots Zj against xj with the boundary points kj1 and kj2; for kj1 < xj* < kj2 the distance variables rj1 and rj4 are zero, rj2 = xj* − kj1 and rj3 = kj2 − xj* are positive, and the function value Zj(xj*) is composed of the terms cjxj and cj2rj2)
These distance variables have the character of slack variables, and they are put into the newly added constraints in pairs (one of them as overachievement and the other as underachievement). In this way the feasible solution set cannot be changed.
The general formulation of the added boundary point constraints is as follows

xj + rj(2k-1) – rj(2k) = kjk .  (9)

Formally, a non-linear constraint is required for each pair of distance variables

rj(2k-1) × rj(2k) = 0 .  (10)

As in goal programming models, no special algorithm is needed to treat these constraints when a convex optimization model is solved: violating them lowers the objective function value. The proof of this fact is the same as in goal programming models [1]. Thanks to this formulation, the value of the variable xj can lie in only one interval and only one distance variable from each pair can enter the basis.
It is easy to show that if there are d dividing points in the whole model, the model grows by d constraints and 2d distance variables.
The transformed model is shown in the following figure.
Figure 3: The transformed model (the simplex tableau). The objective row contains the cost coefficients c1, c11, c12, ..., c1(p1) for the variables x1, r11, r12, ..., r1(p1), and similarly cq, cq1, cq2, ..., cq(pq) for xq, rq1, rq2, ..., rq(pq), together with cq+1, ..., cn for the variables xq+1, ..., xn. Below the objective row come the original constraint rows with coefficients a11, ..., amn and right-hand sides b1, ..., bm (inequalities ≤), followed by the added boundary point constraint rows, each with a coefficient 1 for the corresponding variable xj, a pair of coefficients +1 and –1 for its pair of distance variables, and a right-hand side kjk (equalities =).
Note: Variables x1, x2, ..., xq are the variables with partial piecewise linear objective functions and variables xq+1, xq+2, ..., xn are the variables with partial linear objective functions. The cost coefficients c are computed later, in step 3.
When the algorithm is used for the approximation of a non-linear function, it can be shown that two, three or four dividing points per partial objective function are sufficient for good computational accuracy [4].
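That claim can be probed numerically. A small sketch (assuming f(x) = sqrt(x) on 〈0, 10〉 with evenly spaced dividing points, both purely illustrative; np.interp evaluates the piecewise linear interpolant through the chosen nodes):

```python
import numpy as np

def max_approx_error(f, nodes, hi):
    # Largest gap between f and its piecewise linear interpolant on <0, hi>.
    xs = np.linspace(0.0, hi, 2001)
    return float(np.max(np.abs(f(xs) - np.interp(xs, nodes, f(nodes)))))

# p dividing points plus the endpoints 0 and 10 as interpolation nodes:
for p in (2, 3, 4):
    nodes = np.linspace(0.0, 10.0, p + 2)
    print(p, round(max_approx_error(np.sqrt, nodes, 10.0), 3))
```

The maximal error shrinks as dividing points are added, but only slowly, which is consistent with the observation that a handful of well-placed dividing points already gives acceptable accuracy.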
Step 3
In this step the new cost coefficients of both the structure variables and the distance variables are computed. The method is based on solving a system of equations constructed from the function values Zj(0), Zj(kjk) for k = 1, ..., pj, and Zj(kjpj + 1), so that the value of the original partial objective function is reproduced. Because the actual value of the variable xj depends on the right and left distance variables rj1, rj2, ..., rj(2pj), and the actual value of the piecewise linear function depends on these variables, this system has pj + 2 equations and 2pj + 1 unknowns; we can therefore set cj3 = cj5 = ... = cj(2pj-1) = 0.
The general system of equations for computing the cost coefficients of the new objective function for the j-th structure variable and all its distance variables, in the case that the j-th partial piecewise linear function has pj dividing points with function values Zj(kjk), k = 1, 2, …, pj, is as follows

0cj + kj1cj1 + 0cj2 + 0cj4 + ... + 0cj(2k) + ... + 0cj(2pj) = Zj(0)
kj1cj + 0cj1 + 0cj2 + 0cj4 + … + 0cj(2k) + … + 0cj(2pj) = Zj(kj1)
kj2cj + 0cj1 + (kj2 - kj1)cj2 + 0cj4 + … + 0cj(2k) + … + 0cj(2pj) = Zj(kj2)
…
kjkcj + 0cj1 + (kjk - kj1)cj2 + (kjk - kj2)cj4 + … + 0cj(2k) + … + 0cj(2pj) = Zj(kjk)  (11)
…
kjpjcj + 0cj1 + (kjpj - kj1)cj2 + (kjpj - kj2)cj4 + … + (kjpj - kjk)cj(2k) + … + 0cj(2pj) = Zj(kjpj)
(kjpj + 1)cj + 0cj1 + (kjpj - kj1 + 1)cj2 + (kjpj - kj2 + 1)cj4 + … + (kjpj - kjk + 1)cj(2k) + … + 1cj(2pj) = Zj(kjpj + 1)
For example, suppose the following piecewise linear objective function: the partial objective function for the variable xj is divided into two intervals by the boundary point kj1 = 6. On the first interval 〈0 , 6〉 the partial objective function is Zj1 = xj and on the other interval 〈6 , ∞) it is Zj2 = 2xj – 6 (figure 4).
Figure 4: Example of the piecewise linear partial objective function and the values of the distance variables rj1 and rj2 (Zj plotted against xj on 〈0 ; 14〉; to the left of the boundary point kj1 = 6 only rj1 is positive, to the right of it only rj2 is positive and the function value is composed of the terms cjxj and cj2rj2)
The system of equations for calculating the cost coefficients of the structure variable xj and the distance variables rj1 and rj2 is the following

0cj + 6cj1 + 0cj2 = 0
6cj + 0cj1 + 0cj2 = 6
7cj + 0cj1 + 1cj2 = 8.  (12)

The solution of this system is cj = 1, cj1 = 0 and cj2 = 1. The interpretation is not difficult. On the first interval 〈0 , 6〉, increasing the value of the variable xj by one unit induces the same increase of the partial objective function value. On the other interval 〈6 , ∞), increasing the value of the variable xj by one unit increases the value of the partial objective function by two units: one unit through the variable xj and one unit through the variable rj2.
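System (11), and thus also (12), can be assembled and solved mechanically: at the evaluation point x, the coefficient of cj is x, the coefficient of cj1 is max(kj1 − x, 0) and the coefficient of each cj(2i) is max(x − kji, 0). A sketch (the helper name is illustrative), checked on the example with kj1 = 6 and the function values Zj(0) = 0, Zj(6) = 6, Zj(7) = 2·7 − 6 = 8:

```python
import numpy as np

def cost_coefficients(k, z_values):
    """Solve system (11) for the unknowns c_j, c_j1, c_j2, c_j4, ..., c_j(2p).

    k        -- boundary points k_j1 < ... < k_jp
    z_values -- values of Z_j at the points 0, k_j1, ..., k_jp, k_jp + 1
    """
    xs = [0.0] + list(k) + [k[-1] + 1.0]            # p + 2 evaluation points
    A = [[x, max(k[0] - x, 0.0)] + [max(x - ki, 0.0) for ki in k]
         for x in xs]
    return np.linalg.solve(np.array(A), np.array(z_values, dtype=float))

cj, cj1, cj2 = cost_coefficients([6.0], [0.0, 6.0, 8.0])
```

The result cj = 1, cj1 = 0, cj2 = 1 agrees with the solution of (12) given above.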
Step 4
Finally, the transformed model has the following form

max { Z(x, r) | ∑j=1..n aijxj ≤ bi, i = 1, 2, ..., m; xj + rj(2k−1) − rj(2k) = kjk, j = 1, 2, ..., n, k = 1, 2, ..., pj; xj ≥ 0, j = 1, 2, ..., n },  (13)

where
Z(x, r) = ∑j=1..n (cjxj + ∑k=1..pj cjkrjk) is the linear objective function,
∑j=1..n aijxj ≤ bi, i = 1, 2, ..., m, are the original constraints,
xj + rj(2k−1) − rj(2k) = kjk, j = 1, 2, ..., n, k = 1, 2, ..., pj, are the added boundary point constraints,
cj and cjk are the new cost coefficients,
xj ≥ 0, j = 1, ..., n, are the non-negativity constraints for the original structure variables and
rjk ≥ 0, j = 1, ..., n, k = 1, ..., pj, are the non-negativity constraints for the distance variables.
This model can be solved using the standard simplex method.
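The whole method can be exercised end to end on a tiny model. In this sketch all numbers are illustrative, not from the article: the partial objective is the concave function Zj = 2xj on 〈0, 6〉 and Zj = xj + 6 on 〈6, ∞), the only original constraint is xj ≤ 10, and solving system (11) for these data gives cj = 2, cj1 = 0, cj2 = −1. The transformed model (13) is then an ordinary LP:

```python
from scipy.optimize import linprog

# Variables: x, r1, r2.  New objective: maximize 2x + 0*r1 - 1*r2.
c = [-2.0, 0.0, 1.0]            # scipy minimizes, so signs are flipped
A_ub = [[1.0, 0.0, 0.0]]        # original constraint: x <= 10
b_ub = [10.0]
A_eq = [[1.0, 1.0, -1.0]]       # boundary point constraint: x + r1 - r2 = 6
b_eq = [6.0]
bounds = [(0, None)] * 3        # non-negativity of x, r1, r2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, r1, r2 = res.x
print(x, r1, r2, -res.fun)      # x = 10, r1 = 0, r2 = 4, Z = 2*10 - 4 = 16
```

Because rj2 carries a negative cost coefficient, the complementarity condition (10) holds automatically at the optimum; the optimal value Z = 16 equals Zj(10) = 10 + 6 of the original piecewise function, and the value of x is read off directly, with no back transformation.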
3 Conclusion
A new method for solving piecewise linear programming models has been introduced. The method transforms the original model into a form solvable by the standard simplex method. The transformation consists in adding new variables and constraints describing all piecewise linear partial objective functions.
The main advantages of this method are:
• simple transformation of the model,
• realization of the necessary computations in a spreadsheet, especially in the case when the number of intervals is the same for each partial objective function [4],
• the values of the original variables xj are obtained directly as the result of the simplex method; no back transformation is required.
The main disadvantage of the method is that it cannot be used for non-convex optimization models.
References
[1] Laščiak, A. a kol.: Optimálne programovanie. Alfa, Bratislava, 1983, ISBN 63-560-83
[2] Churchman, C. W., Ackoff, R. L., Arnoff, E. L.: Introduction to Operations Research. John Wiley
and Sons, Inc., London, 1957
[3] Houška, M.: Aproximace degresivně–progresivní nákladové funkce pro potřeby modelů
lineárního programování. In Sborník konference MENDELNET, PEF MZLU Brno, 2001, ISBN
80-7302-023-8
[4] Houška, M.: Systémové aspekty aproximačních postupů pro modely lineárního programování. In
Sborník konference Systémové přístupy, VŠE v Praze, 2001, ISBN 80-245-0253-4