
Optimization Letters manuscript No.
(will be inserted by the editor)
On approximation of the best case optimal value in
interval linear programming
Milan Hladı́k
Received: date / Accepted: date
Abstract Interval linear programming addresses problems with uncertain coefficients, where the only information available is that the true values lie somewhere in the prescribed intervals. For the inequality constrained problem, computing the worst case scenario and the corresponding optimal value is an easy task, but computing the best case optimal value is known to be NP-hard. In this paper, we discuss lower and upper bound approximations for the best case optimal value, and propose suitable methods for both of them. We also propose a not a priori exponential algorithm for computing the best case optimal value. The presented techniques are tested on randomly generated data, and also applied in a simple data classification problem.
Keywords linear programming · interval linear systems · interval analysis
1 Introduction
Uncertainty is a real-life phenomenon that must be taken into account in (not
only) optimization models to obtain reliable results. There exist many ways to
tackle uncertainties such as stochastic, interval, robust or fuzzy programming,
in which interval approach is often useful because of its simplicity. All we need
are the lower and upper limits of the uncertain quantities, and interval methods compute guaranteed bounds of the optimal values and optimal solutions,
among others.
In this paper, we consider the problem of computing the range of all possible optimal values. This problem has been studied for several decades, see e.g. the surveys [4,9], but little has been done on approximating the intractable cases. We propose methods for both lower and upper approximation of the computationally difficult extremal values of the optimal value range.

M. Hladı́k
Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 118 00, Prague, Czech Republic,
E-mail: [email protected]
Let us introduce some notation first. An interval matrix is defined as

A = {A ∈ R^{m×n} ; A̲ ≤ A ≤ Ā},

where A̲ ≤ Ā are given matrices. By

Ac := ½(A̲ + Ā),   A∆ := ½(Ā − A̲)

we denote the center and the radius of A, respectively. The set of all m-by-n interval matrices is denoted by IR^{m×n}. Interval vectors are defined analogously. Interval arithmetic is defined e.g. in the books [12,14]. Next, Dv stands for the diagonal matrix with entries v1, ..., vn, and sgn(r) denotes the sign of r ∈ R (for vectors it is applied entrywise), i.e., sgn(r) = 1 if r ≥ 0 and sgn(r) = −1 otherwise.
An interval linear system of equations is a family of systems
Ax = b,
A ∈ A, b ∈ b,
where A ∈ IR^{m×n} and b ∈ IR^m are given. The solution set is defined as the union of all solutions, i.e.,

{x ∈ R^n ; ∃A ∈ A, ∃b ∈ b : Ax = b},

and is characterized by the Oettli–Prager theorem [4] as

{x ∈ R^n ; |Ac x − bc| ≤ A∆|x| + b∆}.     (1)
Analogously, an interval system of linear inequalities is defined as the family

Ax ≤ b,  A ∈ A, b ∈ b,

and its solution set

F := {x ∈ R^n ; ∃A ∈ A, ∃b ∈ b : Ax ≤ b}

is described by Gerlach's theorem [4] as

F = {x ∈ R^n ; Ac x ≤ A∆|x| + b̄}.     (2)
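Condition (2) involves only the interval bounds, so membership in F can be tested directly. The sketch below is a pure Python illustration (the helper name `in_solution_set` and the argument convention, with `Ac`, `Ad`, `b_up` standing for Ac, A∆ and the upper bound of b, are ours):

```python
def in_solution_set(Ac, Ad, b_up, x, tol=1e-9):
    """Weak feasibility test via Gerlach's theorem:
    x belongs to F iff Ac x <= Ad |x| + b_up holds componentwise."""
    n = len(x)
    for i in range(len(Ac)):
        lhs = sum(Ac[i][j] * x[j] for j in range(n))
        rhs = sum(Ad[i][j] * abs(x[j]) for j in range(n)) + b_up[i]
        if lhs > rhs + tol:
            return False
    return True

# A small system with one interval coefficient:
# [-1,1]x + y <= -1,  y <= 0,  -y <= 0
Ac = [[0.0, 1.0], [0.0, 1.0], [0.0, -1.0]]
Ad = [[1.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
b_up = [-1.0, 0.0, 0.0]

print(in_solution_set(Ac, Ad, b_up, [-2.0, 0.0]))  # True
print(in_solution_set(Ac, Ad, b_up, [0.0, 0.0]))   # False
```

The two test points show that the solution set of this system is not an interval around the origin: (−2, 0) is weakly feasible while (0, 0) is not.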
Algorithm 1 The worst case optimal value f̄
1: compute
      ϕ = sup b̲^T y subject to Ā^T y ≤ c̄, −A̲^T y ≤ −c̲, y ≤ 0
2: if ϕ = ∞ then
3:    put f̄ := ∞
4:    return
5: end if
6: if the system
      Āx^1 − A̲x^2 ≤ b̲,  x^1 ≥ 0, x^2 ≥ 0     (A-1)
   is feasible then
7:    put f̄ := ϕ
8: else
9:    put f̄ := ∞
10: end if
11: return
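Algorithm 1 can be sketched with `scipy.optimize.linprog`; the placement of the lower/upper bounds of A, b, c below is our reading of where the extraction lost the over/underlines, and the helper name `worst_case_optimal_value` is ours:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_optimal_value(A_lo, A_up, b_lo, c_lo, c_up):
    """Sketch of Algorithm 1 for min c^T x s.t. Ax <= b with
    A in [A_lo, A_up], b_lo the lower bound of b, c in [c_lo, c_up]."""
    m, n = A_lo.shape
    # step 1: phi = sup b_lo^T y  s.t.  A_up^T y <= c_up, -A_lo^T y <= -c_lo, y <= 0
    res = linprog(-b_lo,
                  A_ub=np.vstack([A_up.T, -A_lo.T]),
                  b_ub=np.concatenate([c_up, -c_lo]),
                  bounds=[(None, 0)] * m)
    if res.status == 3:          # dual unbounded  =>  worst case value is +inf
        return np.inf
    if res.status != 0:          # dual infeasible: not covered by this sketch
        raise ValueError("every scenario is unbounded or infeasible")
    phi = -res.fun
    # step 6: strong feasibility check (A-1): A_up x1 - A_lo x2 <= b_lo, x1, x2 >= 0
    feas = linprog(np.zeros(2 * n),
                   A_ub=np.hstack([A_up, -A_lo]),
                   b_ub=b_lo,
                   bounds=[(0, None)] * (2 * n))
    return phi if feas.status == 0 else np.inf
```

For point intervals the routine reduces to ordinary LP duality; e.g. for min x subject to −x ≤ −1 it returns 1.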
2 Problem statement
Consider a linear programming problem in the inequality form
f(A, b, c) := min c^T x subject to Ax ≤ b.     (3)

Let A ∈ IR^{m×n}, b ∈ IR^m and c ∈ IR^n be given. By an interval linear programming problem we understand the family of linear programs (3) with A ∈ A, b ∈ b and c ∈ c. A scenario means a concrete setting of (3).
There are diverse problems studied in interval linear programming. Two main problems are the optimal value range calculation [3,4,8,9] and determining or approximating the optimal solution set [1,8,9,11]. In the former, we have to determine the best and the worst optimal values, that is,

f̲ := min f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c,
f̄ := max f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c.

It is known [8,9] that computing f̄ is cheap: it reduces to solving two suitable linear programs; see Algorithm 1. Therein, feasibility of (A-1) ensures that each scenario Ax ≤ b is feasible, for every A ∈ A and b ∈ b; see [4].
On the other hand, determining f̲ is known to be strongly NP-hard [5]. Both values can be easily determined under the so-called basis stability, when there is a basis that is optimal for each scenario [7]. So far, the only way to compute f̲ in general was to reduce the problem to solving 2^n ordinary linear programs. Due to the exponential number, this approach is applicable only in very small dimensions, which motivates us to derive lower and upper bound approximations of f̲.
Remark 1 By duality in linear programming, the proposed methods will work similarly also for the worst case approximation of equality constrained interval linear programs

min c^T x subject to Ax = b, x ≥ 0,

where A ∈ IR^{m×n}, b ∈ IR^m and c ∈ IR^n.

Before moving to approximation, notice that Ax ≤ b implies Ax ≤ b̄ for any x ∈ R^n, A ∈ A and b ∈ b. This means that f̲ is attained for b := b̄, and in the following we assume without loss of generality that b = b̄ is a point interval vector.
3 Exact computation of f̲
The feasible set F from (2) is the union of all feasible sets over all scenarios. It is not convex in general, but it becomes convex when restricted to any orthant. Let s ∈ {±1}^n; then the corresponding orthant is described by Ds x ≥ 0, and its intersection with F reads

(Ac − A∆Ds)x ≤ b,  Ds x ≥ 0.     (4)
Thus, the smallest optimal value f̲ can be calculated by solving 2^n ordinary linear programs (cf. [3,9])

f̲ = min { f̲s ; s ∈ {±1}^n },     (5)

where

f̲s = min (cc − Ds c∆)^T x subject to (Ac − A∆Ds)x ≤ b, Ds x ≥ 0.     (6)

Moreover, we also obtain a scenario for which the best case optimal value is attained: if s∗ is a minimizer in (5), then f̲ is attained for

A := Ac − A∆Ds∗,  c := cc − Ds∗ c∆.
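The decomposition (5)–(6) is straightforward to implement; the sketch below (our naming; `scipy.optimize.linprog` assumed available) enumerates all 2^n sign vectors:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def best_case_optimal_value(A_lo, A_up, b_up, c_lo, c_up):
    """Evaluate (5)-(6): one LP per orthant s in {+-1}^n.
    b is fixed at its upper bound b_up, as justified in Section 2."""
    Ac, Ad = (A_up + A_lo) / 2, (A_up - A_lo) / 2   # midpoint and radius
    cc, cd = (c_up + c_lo) / 2, (c_up - c_lo) / 2
    n = A_lo.shape[1]
    best = np.inf
    for s in itertools.product([1.0, -1.0], repeat=n):
        Ds = np.diag(s)
        # (6): min (cc - Ds cd)^T x  s.t.  (Ac - Ad Ds) x <= b_up, Ds x >= 0
        res = linprog(cc - Ds @ cd,
                      A_ub=np.vstack([Ac - Ad @ Ds, -Ds]),
                      b_ub=np.concatenate([b_up, np.zeros(n)]),
                      bounds=[(None, None)] * n)
        if res.status == 3:      # an unbounded orthant LP => best case is -inf
            return -np.inf
        if res.status == 0:      # infeasible orthants are simply skipped
            best = min(best, res.fun)
    return best
```

As a tiny check, for min −x subject to [1, 2]x ≤ 2 the best scenario is x ≤ 2, with optimal value −2.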
Since the exponential number 2^n of linear programs is intractable, we try to decrease it. First, we observe that the infeasible linear programs needn't be considered (their optimal value is ∞). If we are able to compute an interval (or any other) enclosure x of the solution set F, then it is sufficient to inspect only those orthants having a nonempty intersection with x.
Provided that F is connected, it is sufficient to start with the orthant corresponding to f(Ac, b, cc) (as in Section 5), and then check the neighboring connected orthants. This search needn't pass through all orthants, but it inspects all orthants containing at least one feasible point of F.
The bad news is that F can be disconnected. For instance, the solution set of the interval linear system

[−1, 1]x + y ≤ −1,  y ≤ 0,  −y ≤ 0

consists of the two disjoint sets (−∞, −1] × {0} and [1, ∞) × {0}. It might seem that disconnectivity is caused by the interval containing zero, but it is not hard to find another example of a disconnected solution set without such an interval:

5x + 2y = 4,  [2, 3]x + y ≤ 1.
Below, we propose some sufficient conditions for connectivity.
Proposition 1 If b ≥ 0, then F is connected.
Proof The condition b ≥ 0 implies 0 ∈ F. Since F is connected in each orthant, it is connected as a whole via the origin. ⊓⊔
This condition is very cheap to check, but not very strong in general. The following condition is stronger; consider e.g. the interval system −x ≤ −1 with degenerate intervals.

Proposition 2 If the linear system of inequalities

Āu − A̲v ≤ b,  u, v ≥ 0     (7)

is feasible, then F is connected.
Proof By [4], if (u, v) solves (7), then x∗ := u − v is a solution to Ax ≤ b for every A ∈ A (the converse implication holds, too). Thus, every two points in F are connected via x∗. ⊓⊔
Proposition 2 gives a sufficient condition for connectivity, but not a necessary one in general. For example, consider the interval linear system

−x ≤ −1,  [1, 2]x ≤ 1.

Here, F = {1} is connected, but neither of the sufficient conditions holds.
Remark 2 Provided the solution set F is disconnected, we can still think of inspecting all its connectivity components. However, such a search may be very expensive. We now show that one additional inequality may split a connected solution set into an exponential number of components.
Consider the interval linear inequalities

−K ≤ x_i ≤ K,     i = 1, ..., n,
x_i + Σ_{j≠i} [−1/(n−1), 1/(n−1)] x_j ≤ 0,     i = 1, ..., n,

where K > 0 is large enough. Due to the symmetry, it is sufficient to investigate the non-negative orthant only. In this orthant, the restricted solution set is

0 ≤ x_i ≤ K,     i = 1, ..., n,
x_i − Σ_{j≠i} (1/(n−1)) x_j ≤ 0,     i = 1, ..., n,
Algorithm 2 The best case optimal value f̲
1: if F is not verified to be connected then
2:    use (5)
3:    return
4: end if
5: compute f̲ := f(Ac, b, cc) and let x∗ be the corresponding optimal solution
6: put s := sgn(x∗), L := ∅, R := {s}
7: for i = 1, ..., n do
8:    put q := s, q_i := −q_i, L := L ∪ {q}
9: end for
10: while L ≠ ∅ do
11:    take s ∈ L, remove it from L, and put R := R ∪ {s}
12:    if (4) is feasible then
13:       put f̲ := min{f̲, f(Ac − A∆Ds, b, cc − Ds c∆)}
14:       for i = 1, ..., n do
15:          put q := s, q_i := −q_i
16:          if q ∉ R then
17:             put L := L ∪ {q}
18:          end if
19:       end for
20:    end if
21: end while
which describes the segment joining the origin and the point (K, ..., K)^T. Thus, the solution set F is connected. Now, consider an additional constraint

Σ_{i=1}^{n} [−1, 1] x_i ≥ 1.
In each orthant, it cuts off the closer-to-the-origin part of the segment. For instance, in the non-negative orthant, the restricted solution set will be the segment joining the points (1/n, ..., 1/n)^T and (K, ..., K)^T. Hence the resulting solution set will consist of 2^n components. Notice again that this exponential increase in the number of components is not caused by zero-containing intervals in the additional inequality. If the inequality reads

Σ_{i=1}^{n} [ε, 1] x_i ≥ 1,

where ε > 0 is sufficiently small, then the solution set splits into 2^n − 1 connectivity components.
The resulting method is summarized in Algorithm 2. Therein, L denotes the list of orthants to be visited, and R lists the already visited orthants.
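The bookkeeping with the lists L and R amounts to a graph search over sign vectors; a minimal pure Python shell (our naming, with the feasibility test for (4) abstracted into a callback) might look as follows:

```python
from collections import deque

def orthant_search(n, start, is_feasible):
    """Breadth-first search over sign vectors in {+-1}^n, expanding only from
    orthants whose restricted system (4) is feasible.  `is_feasible` is a
    user-supplied callback, e.g. an LP feasibility test for (4)."""
    visited = {start}            # the set R of already scheduled orthants
    queue = deque([start])       # the list L of orthants to visit
    inspected = []
    while queue:
        s = queue.popleft()
        if not is_feasible(s):
            continue             # infeasible orthants are dead ends
        inspected.append(s)
        for i in range(n):       # flipping one sign gives a neighboring orthant
            q = s[:i] + (-s[i],) + s[i + 1:]
            if q not in visited:
                visited.add(q)
                queue.append(q)
    return inspected

# With every orthant feasible, all 2^n orthants are reached:
print(len(orthant_search(2, (1, 1), lambda s: True)))  # 4
```

In Algorithm 2 the callback would be an LP solver; here the shell only demonstrates that the search never leaves the connected feasible region.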
4 Upper bound approximation
Herein, we focus directly on an upper bound on f̲, that is, a value f^U satisfying f^U ≥ f̲. First, we solve (3) with A := Ac and c := cc, giving the optimal
Algorithm 3 Upper bound f^U on f̲
1: compute f∗ := f(Ac, b, cc) and let x∗ be the corresponding optimal solution
2: repeat
3:    put f^U := f∗
4:    put s := sgn(x∗)
5:    compute the optimal value f∗ and the optimal solution x∗ of (8)
6: until f∗ ≥ f^U or s = sgn(x∗)
7: return f^U := min{f^U, f∗}
solution x∗ and the initial bound f^U := f(Ac, b, cc). Then, we run an iterative local improvement method to find a scenario with as small an optimal value as possible.
Put s := sgn(x∗). The best case optimal value for the feasible set restricted to the orthant Ds x ≥ 0 is calculated by the linear program (6). This motivates us to choose the following scenario of (3) as a promising one for achieving the lowest optimal value:

f^s := min (cc − Ds c∆)^T x subject to (Ac − A∆Ds)x ≤ b.     (8)
We update the upper bound f^U := min(f^U, f^s). Then, we move to the most promising orthant by putting s := sgn(x^s), where x^s is an optimal solution of (8). Now, s corresponds to a new orthant, and the process is repeated until no improvement happens.
If, in addition, we limit the number of iterations by a polynomial function, we obtain a polynomial-time procedure for computing f^U. Nevertheless, the proposed iterations needn't yield f̲ in general. Moreover, due to the NP-hardness of computing f̲ and of its tight approximation, the estimate f^U may be far from f̲ in some pathological situations. However, our numerical experiments (Section 7) show that in practice the method behaves well.
Algorithm 3 describes the iterations in pseudocode.
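A sketch of the local improvement heuristic (again with our names and `scipy.optimize.linprog`; the midpoint problem is assumed feasible) follows:

```python
import numpy as np
from scipy.optimize import linprog

def upper_bound_best_case(A_lo, A_up, b_up, c_lo, c_up, max_iter=50):
    """Local improvement over orthants; returns an upper bound on the
    best case optimal value."""
    Ac, Ad = (A_up + A_lo) / 2, (A_up - A_lo) / 2
    cc, cd = (c_up + c_lo) / 2, (c_up - c_lo) / 2
    n = Ac.shape[1]
    res = linprog(cc, A_ub=Ac, b_ub=b_up, bounds=[(None, None)] * n)
    if res.status == 3:
        return -np.inf           # the midpoint scenario itself is unbounded
    f_star, x_star = res.fun, res.x
    f_U = np.inf
    for _ in range(max_iter):
        f_U = min(f_U, f_star)
        s = np.where(x_star >= 0, 1.0, -1.0)
        Ds = np.diag(s)
        # (8): the scenario suggested by the current orthant (no sign restriction)
        res = linprog(cc - Ds @ cd, A_ub=Ac - Ad @ Ds, b_ub=b_up,
                      bounds=[(None, None)] * n)
        if res.status == 3:
            return -np.inf       # (8) is itself a valid scenario of (3)
        f_star, x_star = res.fun, res.x
        if f_star >= f_U or np.array_equal(np.where(x_star >= 0, 1.0, -1.0), s):
            break
    return min(f_U, f_star)
```

On the toy instance min −x subject to [1, 2]x ≤ 2, the heuristic starts from the midpoint value −4/3 and one step of (8) already reaches the exact best case value −2.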
5 Lower bound approximation
In this section, we are concerned with the problem of computing a lower bound f^L ≤ f̲. Let B be an optimal basis corresponding to f(Ac, b, cc). Consider the interval linear system of equations

A_B^T y = c,  A_B ∈ A_B, c ∈ c.     (9)
Even though the solution set of this system, which is described by (1), is hard to determine and deal with, various methods exist to calculate its interval enclosure [14,16], that is, an interval vector y ∈ IR^n containing the solution set. Computing the best interval enclosure is an NP-hard problem, too, but there are many efficient algorithms yielding sufficiently tight enclosures.
Suppose that y is such an enclosure. If it lies in the non-positive orthant, i.e. y ≤ 0, then B is a feasible basis of the dual problem to (3) for each scenario. By the duality theory of linear programming, the objective value of any dual feasible point is a lower bound on the primal optimal value. Taking the lowest possible one, we get a lower bound on f̲. The minimum of b_B^T y over y ∈ y is simply calculated as

f^L := b_B^T y∗,

where y∗_i = y̲_i if (b_B)_i ≥ 0, and y∗_i = ȳ_i otherwise. Using interval arithmetic, we can express it as f^L := inf(b_B^T y).
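The formula picks, for each component, the endpoint of y_i that minimizes the product with (b_B)_i; a pure Python sketch (hypothetical helper name; the enclosure [y_lo, y_up] would come from a verified method such as Hansen–Bliek–Rohn, not computed here):

```python
def dual_lower_bound(bB, y_lo, y_up):
    """Minimum of bB^T y over the box y_lo <= y <= y_up: take the lower
    endpoint where the coefficient is nonnegative, the upper endpoint
    otherwise."""
    return sum(b * (lo if b >= 0 else up)
               for b, lo, up in zip(bB, y_lo, y_up))
```

For instance, `dual_lower_bound([2, -1], [-3, -2], [-1, -0.5])` evaluates 2·(−3) + (−1)·(−0.5) = −5.5.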
A better result can be obtained by using the so-called right preconditioning [6,13]. This technique computes an enclosure of the solution set of (9) in the form Rz, where R ∈ R^{n×n} and z ∈ IR^n. Then the lower bound on f̲ calculated by

f^ℓ := inf((b_B^T R)z)

is usually tighter than f^L.
Notice that we can also employ the basis computed in the previous section as a promising candidate for the best case optimal basis. Replacing the optimal basis corresponding to f(Ac, b, cc) by this one has a two-fold effect: it may tighten the lower bound, but one can expect that the condition y ≤ 0 fails more frequently.
6 Extensions
Each linear program can be formulated in the form (3). However, this is not the case in interval linear programming, since a transformation to the basic forms may cause dependencies between interval coefficients. That is why the different forms are studied separately, and the complexity of handling the various forms differs, too; cf. [9]. We consider an extension of (3) to the most general linear programming form
min c^T x + d^T y subject to Ax + By = a, Cx + Dy ≤ b, y ≥ 0,     (10)

where A ∈ A, B ∈ B, C ∈ C, D ∈ D, a ∈ a, b ∈ b, c ∈ c, and d ∈ d. By [10], the set of all feasible solutions is described by

|Ac x + Bc y − ac| ≤ A∆|x| + B∆ y + a∆,
Cc x + D̲y ≤ C∆|x| + b̄,
y ≥ 0.
In any orthant Ds x ≥ 0, s ∈ {±1}^n (note that y ≥ 0), the description becomes linear:

(Ac − A∆Ds)x + B̲y ≤ ā,     (11a)
(Ac + A∆Ds)x + B̄y ≥ a̲,     (11b)
(Cc − C∆Ds)x + D̲y ≤ b̄,     (11c)
y ≥ 0,     (11d)
Ds x ≥ 0.     (11e)
From this description we see that we can fix D := D̲ and b := b̄. In addition, we also fix d := d̲, since the lowest optimal value is attained in this setting.
The results developed in the previous sections are easily adapted to the general case. Instead of the linear programs (6) and (8), we solve the linear programs

min (cc − Ds c∆)^T x + d̲^T y subject to (11a)–(11e)

and

min (cc − Ds c∆)^T x + d̲^T y subject to (11a)–(11d),
respectively. For the lower bound approximation, consider (10) with the coefficients set to the midpoints of the given intervals. Let x∗, y∗ be an optimal solution and (B1, B2) an optimal basis. That is, B1 indexes the entries of y∗ that are positive, and B2 indexes the inequalities of Cx∗ + Dy∗ ≤ b that hold as equations. Now, consider the interval linear system of equations

A^T u + C_{B2}^T v = c,  B_{B1}^T u + D_{B1B2}^T v = d̲_{B1},

where A ∈ A, B_{B1} ∈ B_{B1}, C_{B2} ∈ C_{B2} and c ∈ c. Herein, C_{B2}^T denotes the restriction of C^T to the columns indexed by B2, B_{B1}^T the restriction of B^T to the rows indexed by B1, and D_{B1B2}^T denotes the restriction of D^T to the columns indexed by B2 and rows indexed by B1. Let (u, v) be an enclosure of the solution set of this interval system. If v ≤ 0, then the lower endpoint of

a^T u + b̄_{B2}^T v,

evaluated by interval arithmetic, gives a lower bound on f̲.
7 Examples
Example 1 Consider an interval linear programming problem (3) with

c = ( [2, 3], [6, 7] )^T,

A = ( −[4, 5]  −[2, 3]
       [4, 5]  −[1, 2]
       [2, 3]   [5, 6] ),

b = ( −[11, 12], [26, 28], [43, 45] )^T.
Fig. 1 (Example 1) The feasible set in light gray; the intersection of all feasible sets is dark gray.
The union and the intersection of all feasible sets are illustrated in Figure 1. The best case optimal value is f̲ = −41.3846. It is calculated by the decomposition method from Section 3. Notice that F is connected by Proposition 2, so only three orthants have to be inspected (which is not a great saving in this low-dimensional example).
The upper bound heuristic from Section 4 proceeds as follows. The linear program (3) with A := Ac, c := cc has the optimal solution and optimal value

x∗ = (4.8056, −4.2500)^T,  f^U = −15.6111.

Since x∗ lies in the orthant corresponding to the sign vector s = (1, −1), we solve the linear program (8). Its optimal solution and optimal value are

x^s = (5.1538, −7.3846)^T,  f^s = −41.3846.

We update the upper bound f^U := −41.3846. We arrived at the same orthant, so we terminate. The computed upper bound f^U is the optimal one.
Now, we compute a lower bound on f̲ according to Section 5. We call the Hansen–Bliek–Rohn method [4,15] to compute an enclosure

y = ([−2.9058, −1.1285], [−2.5290, −0.4999])^T

of the interval system (9). Since the assumption y ≤ 0 is valid, we get a lower bound

f^L := inf(b_B^T y) = −58.3973.

By using the right preconditioning, we obtain

R = ( −0.0833  −0.2500
       0.1389  −0.2500 ),  z = ([−0.3914, 6.2609], [4.3902, 10.2609])^T,
10
9
8
7
6
5
4
3
2
1
0
0
2
4
6
8
10
Fig. 2 (Example 2) Classification problem of two interval data sets.
yielding a tighter lower bound

f^ℓ := inf((b_B^T R)z) = −45.4891.
Example 2 Consider a classification problem, in which we want to find a separating hyperplane a^T x = b for two sets of points {x_1, ..., x_m} ⊂ R^n and {y_1, ..., y_k} ⊂ R^n. By [2], this can be formulated as a linear program

min 1^T u + 1^T v
subject to a^T x_i − b ≥ 1 − u_i,     i = 1, ..., m,
           a^T y_j − b ≤ −(1 − v_j),  j = 1, ..., k,
           u, v ≥ 0.
If the optimal value is zero, then the points can be separated and the optimal solution gives the separating hyperplane. If the optimal value is positive,
then the points cannot be separated, but the optimal value approximates the
minimum number of misclassified points and the optimal solution gives the
corresponding hyperplane.
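The linear program above can be assembled mechanically; a sketch for the crisp, non-interval case (our naming; `scipy.optimize.linprog` assumed, points given as rows of numpy arrays):

```python
import numpy as np
from scipy.optimize import linprog

def separation_lp(X, Y):
    """Solve the separation LP for point sets X (rows x_i) and Y (rows y_j).
    Variable vector is (a, b, u, v); returns (optimal value, a, b)."""
    m, n = X.shape
    k = Y.shape[0]
    # objective: 1^T u + 1^T v  (a and b do not enter the objective)
    c = np.concatenate([np.zeros(n + 1), np.ones(m + k)])
    # a^T x_i - b >= 1 - u_i   <=>   -x_i^T a + b - u_i <= -1
    top = np.hstack([-X, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    # a^T y_j - b <= -(1 - v_j)   <=>   y_j^T a - b - v_j <= -1
    bot = np.hstack([Y, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
    res = linprog(c, A_ub=np.vstack([top, bot]), b_ub=-np.ones(m + k),
                  bounds=bounds)
    return res.fun, res.x[:n], res.x[n]
```

If the optimal value is (numerically) zero, the returned pair (a, b) defines a separating hyperplane; a positive value indicates misclassified points, as described above.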
Now, suppose that there is uncertainty in measuring the points, and the only information that we have is that the true points lie in the interval vectors x_1, ..., x_m ∈ IR^n and y_1, ..., y_k ∈ IR^n. Thus, f̲ and f̄ give us approximately the lowest and the highest number of misclassified points.
For concreteness, consider a randomly generated problem in R2 with two
sets of 30 and 35 interval vectors; see Figure 2. For the midpoint values, the
optimal value of the linear program is 3.15, saying that approximately three
points violate the computed separating hyperplane. The best case optimal
value is zero, so there exists a separating hyperplane for a suitable realization
Table 1 (Example 3) Randomly generated data.

        input       |           f̲             |          f^U           |       f^L
  m    n    δ       |  time    orth all  orth ∩ |  time     opt     iter |  time     opt
  10   3    1       | 0.0580       8      5.99  | 0.00805  0.0630   2.1  | 0.00171   4.403
  10   3    0.1     | 0.0589       8      5.58  | 0.00789  0.00913  2.04 | 0.00173   0.176
  10   3    0.01    | 0.0612       8      5.56  | 0.00800  0        2    | 0.00177   0.0118
  15   5    1       | 0.2153      32      26.9  | 0.0117   0.6926   2.2  | 0.00177   19.9
  15   5    0.1     | 0.2076      32      26.1  | 0.0110   0.00052  2.08 | 0.00176   0.192
  15   5    0.01    | 0.2077      32      25.9  | 0.0106   0        2    | 0.00176   0.0239
  50   10   1       | 14.31     1024      919   | 0.0376   0.132    2.9  |    –       –
  50   10   0.1     | 12.77     1024      749   | 0.0262   0.613    2.2  | 0.00188   0.192
  50   10   0.01    | 12.61     1024      729   | 0.0233   0        2    | 0.00187   0.0239
  100  15   1       | 997.8    32768      31015 | 0.08303  0.1587   3.32 |    –       –
  100  15   0.1     | 936.2    32768      22656 | 0.05426  0.00143  2.38 | 0.00199   1.986
  100  15   0.01    | 892.4    32768      22694 | 0.04519  0.00003  2.08 | 0.00199   0.0806
of the intervals. The heuristic from Section 4 finds the best case value, too,
using only 2 linear programs instead of 8. The lower bound method (Section 5)
fails in this example, but we do not need it, since the heuristic has already found the best case optimal value. For completeness, the worst case optimal value is
8.20, meaning that for a bad realization of intervals, we may expect about 8
misclassified points.
Example 3 This example shows results for randomly generated data. For given
dimensions m, n, and a radius parameter δ > 0, we generate the entries of
Ac ∈ Rm×n randomly in [−10, 10] with uniform distribution. The radii of A
are equal to δ. The right-hand side b is constructed as b := Ac e + nr, where
e = (1, . . . , 1)T is the vector of ones, and r ∈ Rm is taken randomly in [0, 10].
Similarly, the entries of cc ∈ Rn come randomly from [−10, 10] and c∆ := δe.
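The generator can be reproduced as follows (pure Python sketch; we read "b := Ac e + nr" as b = Ac·e + n·r, which keeps e = (1, ..., 1)^T feasible for the midpoint constraints — this interpretation is ours):

```python
import random

def random_instance(m, n, delta):
    """Random data as in Example 3: entries of Ac and cc uniform in [-10, 10],
    all radii equal to delta, b = Ac e + n r with r uniform in [0, 10]^m."""
    Ac = [[random.uniform(-10, 10) for _ in range(n)] for _ in range(m)]
    cc = [random.uniform(-10, 10) for _ in range(n)]
    b = [sum(row) + n * random.uniform(0, 10) for row in Ac]   # Ac e + n r
    A_lo = [[a - delta for a in row] for row in Ac]
    A_up = [[a + delta for a in row] for row in Ac]
    c_lo = [c - delta for c in cc]
    c_up = [c + delta for c in cc]
    return A_lo, A_up, b, c_lo, c_up
```

Since r ≥ 0, the point e satisfies Ac e ≤ b, so the midpoint scenario is always feasible.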
Table 1 gives the results; each row is an average of 100 runs. Concerning f̲, we display the running time and the number of orthants (= 2^n) when using formula (5). Next, we show the number of orthants intersecting F, which roughly approximates the cost of Algorithm 2. We do not present the running time of this algorithm, since it makes little sense for random data: it heavily depends on how A, b are constructed – we can easily generate systems for which all orthants must be inspected, and also systems for which the whole feasible set F lies in only one orthant.
Concerning the upper bound f^U, we display the running time, the relative deviation from f̲ given by opt := |f̲ − f^U|/|f̲|, and the number of iterations iter (i.e., how many linear programs we solved). Eventually, for the lower bound f^L we present the running time and the relative deviation from f̲ given by opt := |f̲ − f^L|/|f̲|. If these values are missing, the method failed due to singularity of A_B.
From the numerical tests we can see that for narrow enough intervals, f^L is a good estimate of f̲, and f^U = f̲ almost always. On the other hand, as the input intervals become wider, f^L becomes a much poorer estimate or we fail to compute it at all, whereas f^U is still a quite reasonable bound. The running times of f^U, and especially of f^L, are substantially smaller than that of f̲, particularly for nontrivial dimensions with n ≥ 10.
8 Conclusion
We presented exact and approximation methods for the best case optimal value f̲ of inequality constrained interval linear programs (by duality in linear programming, the methods work the same way for the worst case approximation of equality constrained interval linear programs). The exponential number of steps in the computation of f̲ can often be decreased by inspecting fewer orthants. Moreover, the proposed local improvement heuristic seems promising for finding a cheap but tight upper approximation of f̲. In contrast, the presented lower bound method for f̲ needn't be very tight, so there is open space for further development.
Acknowledgements The author was supported by the Czech Science Foundation Grant
P402-13-10660S.
References
1. Allahdadi, M., Nehi, H.M.: The optimal solution set of the interval linear programming problems. Optim. Lett. To appear, DOI 10.1007/s11590-012-0530-4
2. Boyd, S., Vandenberghe, L.: Convex optimization. Cambridge University Press (2004)
3. Chinneck, J.W., Ramadan, K.: Linear programming with interval coefficients. J. Oper.
Res. Soc. 51(2), 209–220 (2000)
4. Fiedler, M., Nedoma, J., Ramı́k, J., Rohn, J., Zimmermann, K.: Linear optimization
problems with inexact data. Springer, New York (2006)
5. Gabrel, V., Murat, C., Remli, N.: Linear programming with interval right hand sides.
Int. Trans. Oper. Res. 17(3), 397–408 (2010)
6. Goldsztejn, A.: A right-preconditioning process for the formal-algebraic approach to
inner and outer estimation of AE-solution sets. Reliab. Comput. 11(6), 443–478 (2005)
7. Hladı́k, M.: How to determine basis stability in interval linear programming. Optim.
Lett. To appear, DOI: 10.1007/s11590-012-0589-y
8. Hladı́k, M.: Optimal value range in interval linear programming. Fuzzy Optim. Decis.
Mak. 8(3), 283–294 (2009)
9. Hladı́k, M.: Interval linear programming: A survey. In: Z.A. Mann (ed.) Linear Programming - New Frontiers in Theory and Applications, chap. 2, pp. 85–120. Nova Science
Publishers, New York (2012)
10. Hladı́k, M.: Weak and strong solvability of interval linear systems of equations and
inequalities. Linear Algebra Appl. 438(11), 4156–4165 (2013)
11. Luo, J., Li, W.: Strong optimal solutions of interval linear programming. Linear Algebra
Appl. 439(8), 2479–2493 (2013)
12. Moore, R.E., Kearfott, R.B., Cloud, M.J.: Introduction to interval analysis. SIAM,
Philadelphia, PA (2009)
13. Neumaier, A.: Overestimation in linear interval equations. SIAM J. Numer. Anal. 24(1),
207–214 (1987)
14. Neumaier, A.: Interval methods for systems of equations. Cambridge University Press,
Cambridge (1990)
15. Neumaier, A.: A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure
for linear interval equations. Reliab. Comput. 5(2), 131–136 (1999)
16. Rohn, J.: A handbook of results on interval linear problems. Technical report No. 1164, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague (2012)