On stable least squares solution to the system of linear inequalities

DOI: 10.2478/s11533-007-0003-7
Research article
CEJM 5(2) 2007 373–385
Evald Übi∗
Department of Economics,
Tallinn University of Technology,
11712 Tallinn, Estonia
Received 8 May 2006; accepted 30 December 2006
Abstract: The system of inequalities is transformed to a least squares problem on the positive orthant.
This problem is solved using orthogonal transformations, which are memorized as products. The author's
previous paper presented a method in which at each step all the coefficients of the system were transformed.
This paper describes a method applicable also to large matrices. As in the revised simplex method, an
auxiliary matrix is used for the computations. The algorithm is suitable primarily for unstable and
degenerate problems.
© Versita Warsaw and Springer-Verlag Berlin Heidelberg. All rights reserved.
Keywords: System of linear inequalities, method of least squares, Householder transformation, successive
projection
MSC (2000): 90C05, 65K05
1 Introduction
During the last 50 years the simplex method, based on Gaussian elimination, has been used to
solve most linear programming and related problems. For some problems, however, the simplex
method gives poor results; a more thorough investigation of such problems was started about
30 years ago. The simplex method and the polynomial-time interior point method give poor
results in solving unstable linear and quadratic programming problems. For such problems the
least squares method based on orthogonal transformations is preferable, because it does not
change the norms of vectors.
The purpose of this paper is to solve a system of inequalities using the highly developed
least squares technique. This method is used not only in mathematics but also in statistics,
physics, etc., where mainly nonlinear problems are solved.
∗ E-mail: [email protected]
To do this, a certain number of similar linear least squares problems are composed, differing
in a variable or a constraint. In this paper it is proved that such a methodology is usable
also for solving linear inequalities and mathematical programming problems. The least squares
technique and its applications to mathematical programming are described thoroughly in [1, 2].
Let us
have a system of inequalities
li(x) = ai1 x1 + ... + ain xn ≤ bi,   i = 1, ..., m.      (1)
As proved in [2], such a system is equivalent to the least squares problem
AT u = 0,   (b, u) = −1,   u ≥ 0      (2)
or
min{ϕ(u) = ‖AT u‖² + (1 + (b, u))²,  u ≥ 0},      (3)
where A is an m × n matrix, and b and u are m-vectors. A finite orthogonal method for solving
this least squares problem is given in paper [2]. There, as in the simplex method, at each step
all the coefficients of the system (2) are transformed, so that algorithm is usable for problems
of small dimension only. In this paper a new method, similar to the revised simplex method, is
presented; it can also be used for large matrices. The proposed method corresponds to the first
version of the second method of [1, Ch. 24]. The matrix of the system (2) is not transformed in
the process of computation. For this matrix the QR-decomposition is used, where Q is an
orthogonal and R an upper-triangular matrix. The least squares solution is found from an
upper-triangular system.
Instead of storing the zero elements in the subdiagonal part of R, the normals of the
Householder reflections are memorized there. At first all the variables u are passive,
uj = 0, j = 1, ..., m.
As in paper [2], at each step one variable uj is activated (and a column is added to the
triangular matrix R) or one variable uj ≤ 0 (and its corresponding column in R) is removed.
In the latter case all the columns of R corresponding to this and to the following variables
are replaced with the originals from the system (2). Then to the replaced columns of R those
orthogonal transformations are applied which were performed before the variable uj (uj ≤ 0)
was activated. Finally the replaced part is transformed to triangular form, keeping the
reflection normals in the subdiagonal part [1]. Only orthogonal transformations, namely
Householder reflections, are used; see Example 4.4.
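The storage scheme just described (R on and above the diagonal, reflection normals below it, their first components in an array w) can be sketched in NumPy as follows. This is a minimal illustration in the Lawson-Hanson style, not the author's code; all function names are ours.

```python
import numpy as np

def householder_qr_compact(A):
    """QR factorization of A (m x n, m >= n) by Householder reflections.

    Returns the factored array (R on and above the diagonal, reflection
    normals below it) and the array w with the first normal components.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    w = np.zeros(n)
    for k in range(n):
        x = A[k:, k].copy()
        s = np.linalg.norm(x)
        if s == 0.0:
            continue
        if x[0] > 0:
            s = -s                        # choose the sign to avoid cancellation
        v = x
        v[0] -= s                         # reflection normal v = x - s*e1
        # Apply H = I - 2 v v^T/(v, v) to the remaining columns.
        A[k:, k:] -= np.outer(v, (2.0 / (v @ v)) * (v @ A[k:, k:]))
        A[k, k] = s                       # diagonal element of R
        A[k + 1:, k] = v[1:]              # keep the normal below the diagonal
        w[k] = v[0]
    return A, w

def apply_stored_reflections(Afac, w, b):
    """Replay the memorized reflections on b, i.e. form Q^T b as a product;
    the matrix Q itself is never built."""
    b = np.array(b, dtype=float)
    n = Afac.shape[1]
    for k in range(n):
        v = np.concatenate(([w[k]], Afac[k + 1:, k]))
        vv = v @ v
        if vv > 0.0:
            b[k:] -= (2.0 / vv) * (v @ b[k:]) * v
    return b

# Least squares via the compact factorization, checked against lstsq.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
Afac, w = householder_qr_compact(A)
x = np.linalg.solve(np.triu(Afac)[:3, :3],
                    apply_stored_reflections(Afac, w, b)[:3])
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```

Replaying the stored normals on any vector produces Q^T times that vector, which is exactly how the paper avoids ever forming Q.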
By its nature, the process of solving a system of linear inequalities has the same complexity
as solving a general linear programming problem, see [5]. The first solution method for a
system of linear inequalities was described by Fourier [8]. This method is very laborious;
paper [7] proves that the number of iterations needed to reach the solution can grow
exponentially. Paper [9] devises a method of finding a fundamental system of solutions to a
system of linear inequalities. The method of ellipsoids and the interior-point method are the
noteworthy results of the latest works. In Section 6 we consider the idea of D. Gale [10]
concerning the use of the simplex method to solve a system of linear inequalities. In that
section we additionally consider the solution of a system of linear inequalities with the
simplex method in the case of some free variables. At each iteration the algorithm presented
in this paper solves a system of linear inequalities whose number of constraints increases by
one at each step. The constraint inserted is "maximally" linearly independent of the preceding
constraints; this guarantees the stability of the algorithm. The presented algorithm is based
on three theorems given in Sections 2 and 4.
In Section 3 triangular systems differing by one column are solved.
In Section 4 the algorithm INE is described and an example is given.
In Section 5 unstable and degenerate problems are solved.
2 Application of the least squares method to the system of linear inequalities
In papers [2, 3] the following theorems are proved.
Theorem 2.1. Let the system (1) have a solution and let the number of linearly independent
functions li(x) be r, 1 ≤ r ≤ m. Then among the m functions one can find r linearly
independent functions such that every solution to the system
li1(x) = bi1, ..., lir(x) = bir      (4)
is also a solution to the system (1).
If the solution of minimal norm x̂ to the system (1) is an extreme point of the set
Q = {x : Ax ≤ b}, then according to Theorem 2.1 the solution is determined by n linearly
independent constraints li(x) = bi. If x̂ is not an extreme point, it is determined by
r < n linearly independent functions li(x) of the system (1). The application of the least
squares method is based on Theorem 2.2, proved in [2]. Solving the problem (2) by Householder
reflections, the solution x̂ of minimum norm is given by formula (5).
Theorem 2.2 (of the alternative). For any matrix A and vector b exactly one of the two
following statements is valid:
1) the inequality Ax ≤ b has no solution and the equations
AT u = 0,   (b, u) = −1
have a nonnegative solution;
2) the inequality Ax ≤ b has a solution, and the solution of minimum norm is
x̂ = −AT û / (1 + (b, û)),      (5)
where û is the least squares solution to the problem
AT u = 0,   (b, u) = −1,   u ≥ 0.
Example 2.3.
x1 + 2x2 ≤ 4
−2x1 − 4x2 ≤ −10
The corresponding least squares problem
u1 − 2u2 = 0
2u1 − 4u2 = 0
4u1 − 10u2 = −1
has a nonnegative solution û = (1, 1/2)T. According to Theorem 2.2 the system of
inequalities has no solution.
Example 2.4.
x1 + 2x2 ≤ 4
−2x1 − 4x2 ≤ −8
The nonnegative least squares solution is found from the system of normal equations
21u1 − 42u2 = −4
−42u1 + 84u2 = 8.
For any c ≥ 0 the solution is û = (c, c/2 + 2/21)T. For the system of inequalities the
solution of minimum norm is
x̂ = −(−4/21, −8/21)T / (5/21) = (4/5, 8/5)T.
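Both examples can be checked numerically. The sketch below builds the matrix of problem (2) and solves it with a generic nonnegative least squares routine (scipy.optimize.nnls, standing in for the paper's orthogonal algorithm); the function name is ours.

```python
import numpy as np
from scipy.optimize import nnls

def min_norm_solution(A, b):
    """Minimum-norm solution of Ax <= b via the nonnegative least squares
    problem (2)-(3) and formula (5); returns None when alternative 1) holds."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    D = np.vstack([A.T, b])            # rows: A^T, last row: b^T
    f = np.zeros(n + 1)
    f[-1] = -1.0
    u, _ = nnls(D, f)                  # least squares on the positive orthant
    denom = 1.0 + b @ u
    if abs(denom) < 1e-10:             # exact solution of (2): Ax <= b unsolvable
        return None
    return -(A.T @ u) / denom          # formula (5)

print(min_norm_solution([[1, 2], [-2, -4]], [4, -8]))    # Example 2.4: ~[0.8 1.6]
print(min_norm_solution([[1, 2], [-2, -4]], [4, -10]))   # Example 2.3: None
```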
3 Solving the triangular systems
Let us have triangular systems differing in one column only. It can be shown how to use
the computations from previous steps. In the system
R(k)u(k) = y(k),      (6)
R is an upper-triangular k × k matrix. Its inverse matrix P = R−1 is upper-triangular too.
The elements are calculated moving up from the main diagonal:
pii = 1/rii,   pij = −( Σ l=i+1,...,j  ril plj ) / rii,   i, j = 1, ..., k.      (7)
This means that when a column is added to the matrix R, the previously calculated columns of
the inverse matrix do not change; only the additional column Pk+1(k + 1) has to be calculated.
The recurrence formula (8) holds:
u(k + 1) = u(k) + Pk+1(k + 1) yk+1(k + 1),   k = 1, 2, ...      (8)
This means that a new least squares solution u(k + 1) is calculated using the previous
solution u(k), the new column of the inverse matrix and the transformed right-hand sides.
Therefore, when a new variable is activated, first we have to find the new column of the
matrix R and the right-hand sides of the system (6). Then, according to (7), the new column
of the inverse matrix P is calculated, and according to (8) the solution to the expanded
system is found. There is no need to memorize the inverse matrix; only its last column is
saved. The reflection normals are kept in the subdiagonal part instead of the zeros, their
first components in the array w.
In Section 4 the calculations needed in case of removing a variable uj ≤ 0 are discussed.
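Formulas (7) and (8) can be sketched directly: when a column is appended to R, only one new column of the inverse is computed, and the previous solution is reused. The function names below are ours.

```python
import numpy as np

def append_inverse_column(R, P):
    """Given the k x k upper-triangular R (last column newly appended) and the
    (k-1) x (k-1) inverse P of its leading block, compute R^{-1} by adding one
    column via formula (7); the earlier columns of the inverse are unchanged."""
    k = R.shape[0]
    p = np.zeros(k)
    p[k - 1] = 1.0 / R[k - 1, k - 1]
    for i in range(k - 2, -1, -1):                 # move up from the diagonal
        p[i] = -(R[i, i + 1:k] @ p[i + 1:k]) / R[i, i]
    P_new = np.zeros((k, k))
    P_new[:k - 1, :k - 1] = P
    P_new[:, k - 1] = p
    return P_new

def extend_solution(u, P_new, y):
    """Recurrence (8): u(k+1) = u(k) + P_{k+1}(k+1) * y_{k+1}(k+1)."""
    k1 = P_new.shape[0]
    u_new = np.zeros(k1)
    u_new[:k1 - 1] = u
    return u_new + P_new[:, k1 - 1] * y[k1 - 1]

# Grow a 3 x 3 triangular system one column at a time.
rng = np.random.default_rng(0)
R3 = np.triu(rng.uniform(1, 2, (3, 3)))
y = rng.uniform(-1, 1, 3)
P = np.array([[1.0 / R3[0, 0]]])
u = P[:, 0] * y[0]
for k in (2, 3):
    P = append_inverse_column(R3[:k, :k], P)
    u = extend_solution(u, P, y[:k])
print(np.allclose(R3 @ u, y))                       # True
```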
4 Description of the algorithm INE
We describe the algorithm INE for solving the system (1). Let us write the least squares
problem (2) in the form
Du = f,   u ≥ 0,      (9)
where D is an (n + 1) × m matrix, u is an m-vector, f is an (n + 1)-vector,
f = (0, 0, ..., 0, −1)T, Dij = Aji, i = 1, ..., n, Dn+1,j = bj, j = 1, ..., m, m ≥ 2.
Algorithm INE(D, f, R, IJ, F, w, v, x, u, m, n).
(1) Check the inequalities Dn+1,j ≥ 0 for j = 1, ..., m. If all of them are valid, then
x̂ = 0 is the solution. Stop.
(2) For j = 1, ..., m compute the normalized Dij = Dij/Sj, where Sj = ( Σ i=1,...,n+1 Dij² )^(1/2).
(3) Initiate the number of active variables k = 0 and u = 0.
(4) Do the loop, steps 5 - 23.
(5) Compute vi = −Σ Dij uj, i = 1, ..., n + 1, vn+1 = vn+1 − 1, where the summation is
performed over all k active variables j ∈ IJ (on the first iteration the set IJ is
empty, v = f).
(6) Compute the products Fj = Σ i=1,...,n+1 Dij vi for the passive variables j.
(7) Determine the next active variable u(j0) by solving the problem max Fj = Fj0 = Re,
where the maximum is found over all passive (i.e. uj = 0) variables.
(8) If Re > 0, then go to step 10.
(9) Go to step 24.
(10) Increase the number of active variables, k = k + 1, and write the index j0 into the
array IJ.
(11) Write the new active column into the array R:
Rik = Dij0, i = 1, ..., n + 1, w(k) = R(k, k).
(12) If k ≥ 2, apply to the k-th column of R the previous k − 1 Householder transformations.
(13) Compute a new pivot element Rkk.
(14) If k < n + 1, apply the Householder transformation with the column Rk to the right-hand
side Rm+1(k).
(15) Solve the triangular system R(k)u(k) = Rm+1(k) of order k to determine the active
variables uj(k).
(16) Check the inequalities uj(k) > 0, j ∈ IJ. If all of them hold, go to step 23.
(17) If uj(k) ≤ 0 for j = IJ(L), delete the active index j from IJ and set uj(k) = 0.
(18) Substitute the columns RL+1, ..., Rk−1 with the corresponding columns of D, and Rm+1
with f.
(19) Apply to the columns RL+1, ..., Rk−1 and to the right-hand side Rm+1 the previous L − 1
Householder transformations, determined by the subdiagonal parts of the columns of R and
the array w.
(20) Transform the active columns of D into triangular form by Householder reflections,
memorizing the normals of these reflections in the subdiagonal part of R and in the
array w.
(21) After the elimination of the L-th element set IJ(t) = IJ(t + 1), t = L, ..., k − 1.
(22) Decrease the number of active variables, k = k − 1, and go to step 15.
(23) If k ≤ m then go to step 4.
(24) Find the (n + 1)-vector v with coordinates vi = Σ j∈IJ Dij uj, i = 1, ..., n + 1,
vn+1 = vn+1 + 1.
(25) If vn+1 = 0 then the problem has no solution. Stop.
(26) Compute the least squares solution x̂: x̂i = −vi/vn+1, i = 1, ..., n.
(27) The problem is solved.
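The loop above is, in essence, a Lawson-Hanson style active-set iteration. The sketch below follows steps 1-26 but replaces the triangular QR bookkeeping of steps 10-22 with a plain np.linalg.lstsq call on the active columns; names, tolerances and the iteration cap are ours.

```python
import numpy as np

def ine_sketch(A, b, tol=1e-10, max_iter=500):
    """Active-set sketch of algorithm INE for Ax <= b (returns the
    minimum-norm solution, or None when the system has no solution)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    if np.all(b >= 0):
        return np.zeros(n)                          # step 1
    D = np.vstack([A.T, b])
    D = D / np.linalg.norm(D, axis=0)               # step 2: normalize columns
    f = np.zeros(n + 1)
    f[-1] = -1.0
    u = np.zeros(m)
    active = []                                     # the index set IJ
    for _ in range(max_iter):
        F = D.T @ (f - D @ u)                       # steps 5-6
        F[active] = -np.inf                         # only passive variables compete
        j0 = int(np.argmax(F))                      # step 7
        if F[j0] <= tol:                            # steps 8-9
            break
        active.append(j0)                           # steps 10-11
        while True:
            sub, *_ = np.linalg.lstsq(D[:, active], f, rcond=None)  # steps 12-15
            if np.all(sub > tol):                   # step 16
                break
            active.pop(int(np.argmin(sub)))         # steps 17-22: drop u_j <= 0
        u = np.zeros(m)
        u[np.array(active, dtype=int)] = sub
    v = D @ u - f                                   # step 24
    if abs(v[-1]) < tol:
        return None                                 # step 25
    return -v[:n] / v[-1]                           # step 26

print(ine_sketch([[1, -1, 2], [3, 1, -1], [-4, -3, 4], [-1, 0, 0]],
                 [-3, -1, 0, -4]))                  # Example 4.4: ~[4 -36 -23]
```

The residual v is unique at the optimum, so the final x̂ does not depend on which active set the iteration happens to visit.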
Remark 4.1. We show that the first variable to become active is the uj0 whose corresponding
column Dj0 forms a minimal angle with the right-hand side f. The value of the variable uj0 is
determined by the normal equation
(Dj0, Dj0)uj0 = (Dj0, f),
Table 1.
        u4      u2      u1      u3    Rm+1
     1.000   0.070   0.689   0.151   0.970
     0      -0.997  -0.176   0.869  -0.221
     0      -0.289  -0.703  -0.470  -0.096
    -0.970  -0.910  -0.497  -0.012  -0.026

Table 2.
        u4      u2      u3    Rm+1
     1.000   0.070   0.151   0.970
     0      -0.997   0.869  -0.221
     0      -0.289  -0.470  -0.096
    -0.970  -0.910  -0.341  -0.023
where (Dj0, Dj0) = 1 (step 2). Calculating the sum of squares
ϕ(0, ..., uj0, ..., 0) = ((Dj0, f)Dj0 − f, (Dj0, f)Dj0 − f) = ‖f‖² − (Dj0, f)² ≤ ‖f‖² − (Dj, f)²,
j = 1, ..., m,
where the last inequality follows from the criterion Fj → max (step 7). Therefore at each
step of the INE algorithm there is movement along the axis where ϕ decreases most.
Remark 4.2. At steps 24-26 the formula ‖v‖² = ‖Du − f‖² = vn+1 [2] is used. When the system
Du = f has an exact solution, then vn+1 = 0 and the system of inequalities has no solution
(Example 2.3).
Remark 4.3. During the actual solution process the inequalities uj(k) > 0 hold almost always.
Example 4.4.
x1 − x2 + 2x3 ≤ −3
3x1 + x2 − x3 ≤ −1
−4x1 − 3x2 + 4x3 ≤ 0
−x1 ≤ −4
Consider the least squares problem (2):
u1 + 3u2 − 4u3 − u4 = 0
−u1 + u2 − 3u3 = 0
2u1 − u2 + 4u3 = 0
−3u1 − u2 − 4u4 = −1
u ≥ 0.
At the first step the variable u4 = 0.970 is activated; at the second step u2 = 0.221
becomes active and u4 = 0.954. These values are found from the triangular system
u4 + 0.070u2 = 0.970
−0.997u2 = −0.221,
which is put together using the first two rows and columns of Table 1. At the third step the
variable u1 is activated, at the fourth step the variable u3. The first four rows and columns
of Table 1 give the solution to the triangular system u = (−1.291, 2.309, 2.134, 1.374)T,
which is not positive. The third column of the matrix R, corresponding to the variable u1,
has to be removed and replaced with the column of the initial matrix D corresponding to the
variable u3. At the same time the right-hand side of the system is replaced by the vector f.
The two reflections made at the activation of these variables are applied to the corresponding
columns. The system is transformed to triangular form by applying reflections to the third
column and the right-hand side (Table 2). Solving the triangular system, a non-negative least
squares solution u = (0, 0.400, 0.205, 0.911)T is found. According to the formulas at steps
24-26 of the algorithm we find the solution of minimum norm to the system of inequalities,
x̂i = −vi/vn+1,   i = 1, ..., n,
where v = Du − f, and x̂ = (4, −36, −23)T (see Example 2.4 and [1, 2]).
Example 4.5.
x1 ≤ 3
x2 ≤ 1
−x1 − x2 ≤ −4.
From the system of normal equations for the problem (2) the least squares solution
û = (0, 2/11, 3/11)T,   v = (−3/11, −1/11, 1/11)T,   x̂ = (3, 1)T
is found. Replacing the last inequality by −x1 − x2 ≤ −5 we get an exact solution
û = (1, 1, 1)T, v = Du − f = 0, and the system of inequalities has no solution.
Theorem 4.6. The algorithm INE solves the problem (3) in a finite number of iterations, and
the values ϕ(us) strictly decrease.
Proof. Let us, s = 1, 2, ..., be the sequence of points computed by the algorithm INE. The
inequality ϕ(us) > ϕ(us+1) obviously holds in the case when a new point us+1 is obtained from
us by adding one component and all active variables remain positive. Let us assume that, at
the activation of a new variable, a certain active variable uj ≤ 0. Then, because of the
convexity of ϕ(u), its value at the point obtained as a result of the elimination of the
variable uj from the set of active variables is strictly smaller than ϕ(us). The strict
inequality results from the fact that at the point us all active variables are positive.
Analogously the function ϕ(u) decreases at the elimination of several negative variables.
At every step we minimize ϕ(u) in a certain subspace of Rm. The finite number of steps of
INE results from the finite total number of those subspaces.
5 Solving large, unstable and degenerate problems
The method for solving the least squares problem (9) is based on the QR-decomposition of the
matrix D, where Q is an orthogonal and R an upper-triangular matrix. It is important to
stress that the matrix Q is neither memorized nor computed; the orthogonal transformations
performed are stored as products, see [1]. In the least squares solution u there are k
components ui = 0 if there are k strict inequalities (ai, x̂) < bi, where x̂ is the solution
of minimum norm. The solution u is found from the triangular system with the matrix R, whose
order at the first step is one, at the second step two, etc., up to m if the number of
constraints m ≤ n. If m > n, then the maximal order of the matrix R is n, because every
solution to the system of linear inequalities is determined by no more than n equations.
Consequently the order of the matrix R is not bigger than min(m, n). The INE algorithm is
usable for solving large and sparse systems of linear inequalities, since in addition to the
matrix R only some additional vectors are used. In this paper we do not study sparse systems
of linear inequalities closely; solving sparse least squares problems is described in papers
[11, 12].
The INE algorithm is effective in the case of random A and b if m ≤ 220, n ≤ 220. In
problems where the number of constraints is significantly bigger than the number of
variables, in most cases the system turned out to have no solution; the number of iterations
did not exceed 1.5n. Conversely, if m < n, the number of iterations was considerably smaller
than the number of constraints m. Generally, the algorithm INE is the more effective the more
passive constraints (ai, x̂) < bi there are, where x̂ is the solution of minimum norm. For
instance, the following problem was solved with one iteration.
Example 5.1.
max {z = (c, x) = x1 + x2 + x3 + x4}
(1 + r)x1 + x2 + x3 + x4 ≤ 4 + r
x1 + x3 + x4 ≤ 3
x1 + x4 ≤ 2
x ≥ 0, r > 0.
The maximum value of the objective function is zmax = 4 + r; the basic variables are x2, or
x2, x4, or x1, x2, x3, or x1, x2, x3, x4. The problem is transformed to a system of linear
inequalities using the shifts x̄j = xj − cj t = xj − t. In paper [2] it is proved that for a
sufficiently large shifting parameter t the linear programming problem is equivalent to the
following system of linear inequalities:
(1 + r)x1 + x2 + x3 + x4 ≤ (4 + r)(1 − t)
x1 + x3 + x4 ≤ 3(1 − t)
x1 + x4 ≤ 2(1 − t)
x ≥ 0.
With one iteration, for r = 0.000001 and t = 101, the INE algorithm found the least squares
solution u = (0.999, 0, 0)T and x1 = x̄1 + 101 = 0.999925, x2 = x3 = x4 = 1.000025.
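For illustration, the shifted system of the three main constraints can be solved with a generic NNLS routine (scipy.optimize.nnls here, standing in for INE); the values of r and t and the expected solution are taken from the text, and we assume the three shifted inequalities are the system actually handed to the solver.

```python
import numpy as np
from scipy.optimize import nnls

r, t = 1e-6, 101.0
# Shifted system of Example 5.1 (right-hand sides multiplied by 1 - t).
A = np.array([[1 + r, 1.0, 1.0, 1.0],
              [1.0,   0.0, 1.0, 1.0],
              [1.0,   0.0, 0.0, 1.0]])
b = np.array([4 + r, 3.0, 2.0]) * (1 - t)
D = np.vstack([A.T, b])                    # problem (2) for the shifted system
f = np.zeros(5)
f[-1] = -1.0
u, _ = nnls(D, f)
x_shifted = -(A.T @ u) / (1 + b @ u)       # formula (5)
x = x_shifted + t                          # undo the shift x_j = xbar_j + t
print(np.round(x, 6))                      # [0.999925 1.000025 1.000025 1.000025]
```

The recovered point sums to approximately 4, in agreement with zmax = 4 + r.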
Now we consider the solution of degenerate problems. If an LP has many degenerate basic
feasible solutions, the simplex algorithm is often very inefficient: the objective function
may not grow in the case of a degenerate basis. In the case of the least squares method (3),
the objective function ϕ(u) decreases at every iteration (Theorem 4.6), and the solution of
minimum norm û is always unique. Let us take a classical problem with a degenerate basis.
Example 5.2.
max {z = 0.75x1 − 1.5x2 + 0.02x3 − 0.06x4}
0.25x1 − 0.6x2 − 0.04x3 + 0.09x4 ≤ 0
0.5x1 − 0.9x2 − 0.02x3 + 0.03x4 ≤ 0
x3 ≤ 1
x ≥ 0.
Similarly to the previous example, we make a coordinate shift. For t = 100 the algorithm INE
found in four steps that u = (0, 0.9173, 0.0002, 0.0802, 0.0023) and
x = (0.0400000009, 0, 1.0000000007, 0)T.
The following example is an unstable problem with a Hilbert-type matrix.
Example 5.3. aij = 1/(i + j), bi = Σj aij, i = 1, ..., m, j = 1, ..., n, z = Σj cj xj,
cj = bj + 1/(j + 1), x ≥ 0.
Well-known programs solve this LP problem only for m ≤ 8. The algorithm INE with shifting
parameter t = 100 found a solution to this problem for m ≤ 12.
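The instability can be seen from the condition number of the coefficient matrix. The construction below follows our reading of the (partly garbled) formulas of Example 5.3; the function name is ours, and n ≤ m is assumed so that cj is defined.

```python
import numpy as np

def example_53_data(m, n):
    """Data of Example 5.3: a_ij = 1/(i+j) with 1-based indices,
    b_i = sum_j a_ij, c_j = b_j + 1/(j+1); assumes n <= m."""
    i = np.arange(1, m + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    A = 1.0 / (i + j)
    b = A.sum(axis=1)
    c = b[:n] + 1.0 / np.arange(2, n + 2)
    return A, b, c

# The condition number explodes long before m = 12.
for m in (4, 8, 12):
    A, b, c = example_53_data(m, m)
    print(m, f"{np.linalg.cond(A):.1e}")
```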
The use of the simplex method for solving systems of linear inequalities is a wide-ranging
topic. Regarding the solution of practical problems, a suitable variant of the simplex method
is chosen depending on the dimensions of the system (m ≤ n or m > n), on the number of
constraints with bi < 0, and on the bounds on the variables (x ≥ 0, −∞ < xj < ∞,
lj ≤ xj ≤ uj). Sometimes it is practical to introduce free variables xj as differences
xj = uj − vj, uj ≥ 0, vj ≥ 0. If the non-negativity conditions x ≥ 0 belong to the system of
inequalities Ax ≤ b, the simplex method treats them as important information, whereas the
algorithm INE presented in this paper takes the inequalities x ≥ 0 into consideration as
general linear constraints. Consequently, for large systems
of inequalities with random coefficients aij, bi the simplex method was able to find a
solution 1.5 to 3 times faster than the algorithm INE. While considering this fact, we should
also keep in mind that the simplex method has been perfected during the last 50 years, and
that the Householder transformations, used here in order to achieve greater accuracy, are
twice as time-consuming as Gaussian eliminations.
We transform a system of linear inequalities into an LP problem in two ways. First, we use
the idea of D. Gale [10], solving a problem similar to the dual problem by the simplex
method:
max{w = ym+1}
AT y = 0
(b, y) − ym+1 = −1      (10)
y ≥ 0.
If w = 0, the system of linear inequalities has no solution (Theorem 2.2). Otherwise, if
w > 0, we obtain the optimal values of the dual variables x∗ and r∗ from the optimal simplex
table; they satisfy the inequalities
Ax∗ ≤ br∗,   0 < r∗ ≤ 1.
Hence a solution of the system (1) is x∗/r∗. Formulating the dual problem (10) for Example
4.4 and solving it by the two-phase simplex method, one can see that the basic variables are
y2 = 0, y3 = 0, y4 = 0, y5 = 1. We used Bland's rule [5], as we are dealing with degenerate
basic solutions. The reduced costs are x∗ = (4, −36, −23)T, r∗ = 1.
Another well-known variant of the simplex method uses only one artificial variable and a
number of free variables, if there are no specific constraints lj ≤ xj ≤ uj on the variables.
This method is suitable in the case of a large number of variables n. Let us explain it using
Example 4.4. The slack variables x4, x5, x6, x7 are added to the constraints, and the
artificial variable is subtracted from all the equations that have a negative right-hand
side. Subsequently, the last equation (corresponding to the minimal bi) is divided by minus
one and then added to the other equations with negative right-hand sides. We then obtain the
linear programming problem
min{z = x8}
2x1 − x2 + 2x3 + x4 = 1
4x1 + x2 − x3 + x5 = 3
−4x1 − 3x2 + 4x3 + x6 = 0
x1 − x7 + x8 = 4
−∞ < x1, x2, x3 < ∞
x4, x5, x6, x7, x8 ≥ 0,
with the solution x∗ = (4, −36, −23, 3, 0, 0, 0, 0)T.
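This construction can be checked with any LP solver; below scipy.optimize.linprog stands in for the two-phase simplex code, and the assembled equations follow our reading of the system above. Driving the artificial x8 to zero yields a point (x1, x2, x3) feasible for the original inequalities (the solver may return a different optimal vertex than the one quoted in the text).

```python
import numpy as np
from scipy.optimize import linprog

# Phase-one problem assembled from Example 4.4 (x8 is the artificial variable).
A_eq = np.array([[ 2, -1,  2, 1, 0, 0,  0, 0],
                 [ 4,  1, -1, 0, 1, 0,  0, 0],
                 [-4, -3,  4, 0, 0, 1,  0, 0],
                 [ 1,  0,  0, 0, 0, 0, -1, 1]], dtype=float)
b_eq = np.array([1.0, 3.0, 0.0, 4.0])
c = np.zeros(8)
c[7] = 1.0                                      # min z = x8
bounds = [(None, None)] * 3 + [(0, None)] * 5   # x1..x3 free, x4..x8 >= 0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

# z = 0 means the artificial variable leaves: (x1, x2, x3) solves Ax <= b.
A = np.array([[1, -1, 2], [3, 1, -1], [-4, -3, 4], [-1, 0, 0]], dtype=float)
b = np.array([-3.0, -1.0, 0.0, -4.0])
x = res.x[:3]
print(res.fun, np.all(A @ x <= b + 1e-7))
```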
6 Conclusion
In this paper a stable method for finding a minimum norm solution to a system of linear
inequalities is given. Only orthogonal transformations are used, and they are memorized as
products.
The method has a simple geometrical interpretation. In Example 4.4 the variable u4 is
activated at the first iteration. This corresponds to the minimum norm element x = (4, 0, 0)T
of the half-space determined by the fourth constraint, which is found using the formulas at
steps 24-26 of the algorithm. The next iteration gives the element x = (4, −6.5, 6.5)T
determined by the second and the fourth constraints, et cetera. This is the method of
successive projection for finding the common point of convex sets [13].
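The successive-projection picture can be reproduced directly: cyclically projecting a point onto the half-spaces of Example 4.4 yields a common point of the four sets. This illustrates the projection idea of [13], not algorithm INE itself (and, unlike INE, the limit point is in general not the minimum-norm element); the names and the iteration count are ours.

```python
import numpy as np

def project_onto_halfspace(x, a, beta):
    """Orthogonal projection of x onto the half-space {y : (a, y) <= beta}."""
    violation = a @ x - beta
    if violation <= 0:
        return x
    return x - violation * a / (a @ a)

A = np.array([[1, -1, 2], [3, 1, -1], [-4, -3, 4], [-1, 0, 0]], dtype=float)
b = np.array([-3.0, -1.0, 0.0, -4.0])
x = np.zeros(3)
for _ in range(50000):                   # cyclic projections, starting from 0
    for a_i, b_i in zip(A, b):
        x = project_onto_halfspace(x, a_i, b_i)
print(np.all(A @ x <= b + 1e-6))         # a common point has been reached
```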
In the examples solved where the number of constraints m and the number of variables n were
approximately equal, constraints had to be removed from the active set. When m and n differ
considerably, there is no removal of constraints. In the examples solved there was no case of
a constraint being reactivated after its removal.
Finally, we discuss the use of the algorithm INE for solving linear programming problems. One
possibility is described in paper [2], where the linear programming problem is reduced to
finding a minimum norm solution to a system of linear inequalities using a shift of
coordinates. The examples in Section 5 affirm the accuracy of this method, which is
substantially simpler than the methods recommended by L. Khachiyan and C. Papadimitriou
[5, 6].
In order to simplify the process of solving an LP problem we may apply the concept of goal
programming. Let us substitute for the objective function z = (c, x) → max an inequality
(c, x) ≥ z0, where z0 is a "sufficiently large" value of the objective function:
(c, x) ≥ z0
Ax ≤ b
x ≥ 0.
This problem can be solved using the INE algorithm. The problem of finding a better initial
solution for the simplex method needs additional examination.
Acknowledgment
The author is grateful to the anonymous referees for carefully reading the first version of
the manuscript and for their many constructive comments and suggestions.
References
[1] C. Lawson and R. Hanson: Solving Least Squares Problems, Prentice-Hall, New Jersey, 1974.
[2] E. Übi: “Exact and Stable Least Squares Solution to the Linear Programming Problem”, Centr. Eur. J. Math., Vol. 3(2), (2005), pp. 228–241.
[3] Ky Fan: "On Systems of Linear Inequalities", In: H. Kuhn and A. Tucker (Eds.):
Linear Inequalities and Related Systems, Princeton, 1956.
[4] E. Übi: “Finding Non-negative Solution of Overdetermined or Underdetermined System of Linear Equations by Method of Least Squares”, Transactions of Tallinn TU,
Vol. 738, (1994), pp. 61–68.
[5] C. Papadimitriou and K. Steiglitz: Combinatorial Optimization: Algorithms and
Complexity, Prentice-Hall, New Jersey, 1982.
[6] L. Khachiyan: "A Polynomial Algorithm in Linear Programming", Soviet Mathematics
Doklady, Vol. 20, (1979), pp. 191–194.
[7] L. Khachiyan: "Fourier-Motzkin Elimination Method", Encyclopedia of Optimization, Vol. 2, (2001), pp. 155–159.
[8] G. Dantzig: Linear Programming and Extensions, Princeton University Press, 1963.
[9] S. Chernikov: Lineare Ungleichungen, Deutscher Verlag der Wissenschaften, Berlin,
1971.
[10] D. Gale: The Theory of Linear Economic Models, McGraw-Hill Book Company, 1960.
[11] A. Björck: “Generalized and Sparse Least Squares Problems”, NATO ASI Series C,
Vol. 434, (1994), pp. 37–80.
[12] M. Heath: "Some Extensions of an Algorithm for Sparse Linear Least Squares Problems", SIAM J. Sci. Statist. Comput., Vol. 3, (1982), pp. 223–237.
[13] L. Bregman: "The Method of Successive Projection for Finding the Common Point
of Convex Sets", Soviet Math. Dokl., Vol. 6, (1965), pp. 688–692.