Yugoslav Journal of Operations Research
25 (2015) Number 2, 233-250
DOI: 10.2298/YJOR120904006K
A POLYNOMIAL-TIME ALGORITHM FOR LINEAR
OPTIMIZATION BASED ON A NEW KERNEL FUNCTION
WITH TRIGONOMETRIC BARRIER TERM
B. KHEIRFAM¹, M. MOSLEMI²
Department of Mathematics
Azarbaijan Shahid Madani University
Tabriz, Iran
Received: November 2013 / Accepted: March 2014
Abstract: In this paper, we propose a large-update interior-point algorithm for linear
optimization based on a new kernel function. New search directions and a proximity
measure are defined based on this kernel function. We show that if a strictly feasible
starting point is available, then the new algorithm has O(n^{3/4} log(n/ε)) iteration complexity.
Keywords: Kernel function, Interior-point algorithm, Linear optimization, Polynomial complexity,
Primal-dual method.
MSC: 90C05, 90C51.
1. INTRODUCTION
In this paper, we consider the linear optimization (LO) problem in the standard form

(P)    min  c^T x
       s.t. Ax = b,
            x ≥ 0,

where A ∈ R^{m×n} is a real matrix of rank m, and b ∈ R^m, c ∈ R^n. The dual
problem of (P) is given by

(D)    max  b^T y
       s.t. A^T y + s = c,
            s ≥ 0,

with y ∈ R^m and s ∈ R^n.

¹ Corresponding author: [email protected]
² [email protected]

In 1984, Karmarkar [12] proposed a polynomial-time
algorithm, the so-called interior-point method (IPM), for LO. This
method and its variants are frequently extended for solving wide classes of optimization
problems, for example, quadratic optimization problem (QOP), semidefinite optimization
(SDO) problem, second-order cone optimization (SOCO) problem, P∗(κ) linear
complementarity problems (LCPs), and convex optimization problems (CP). From a
computational point of view, IPMs are among the most efficient methods, and their strong
performance on large-scale linear programming problems has led to widespread practical
use. Usually, if the barrier update parameter θ in such a method is a constant
independent of the dimension n of the problem, then the algorithm is called a large-update
method; if θ depends on the dimension, the algorithm is said to be a small-update
method. At present, the best known theoretical iteration bound for small-update IPMs is
better than the one for large-update IPMs, but in practice large-update IPMs are much more
efficient than small-update IPMs [2, 3, 4, 5, 6].
Most IPM algorithms for LO are based on the logarithmic barrier function [1,
10, 11]. In 2002, Peng et al. proposed new variants of IPMs based on a specific self-regular
barrier function. Such a function is strongly convex and smooth coercive on its
domain, the positive real axis. They obtained the best known complexity results for large-
and small-update methods, namely O(√n log n log(n/ε)) complexity for large-update and
O(√n log(n/ε)) complexity for small-update methods [18, 19], and extended the results to
LO, second-order cone optimization (SOCO), semidefinite optimization (SDO) and
NCPs. Later, Bai et al. [2] introduced eligible kernel functions and gave a comprehensive
complexity analysis. Cho [7] presented another barrier function, which has O(√n log n log(n/ε))
complexity for large-update methods and O(√n log(n/ε)) complexity for small-update
methods. This kernel function does not belong to the family of self-regular functions. Kim
et al. [16] defined new kernel functions that are both self-regular and eligible,
showed their properties, and identified the relation between the classes of eligible
and self-regular kernel functions. Kheirfam and Hasani [14] presented a large-update
primal-dual interior-point algorithm for convex quadratic semidefinite optimization
problems based on a new parametric kernel function; they investigated this kernel
function and showed that their algorithm has the best known complexity bound, i.e.,
O(√n log n log(n/ε)).
In 2012, El Ghami et al. [9] proposed a new primal-dual IPM for LO problems
based on a kernel function with a trigonometric barrier term, and Kheirfam defined
another trigonometric barrier function and presented a new algorithm for semidefinite
optimization [13]. They obtained the O(n^{3/4} log(n/ε)) iteration bound for large-update and
O(√n log(n/ε)) for small-update methods, respectively. El Ghami generalized the analysis
presented in the above paper to P∗(κ)-LCPs [8]. Recently, Kheirfam [15] proposed a
new kernel function with trigonometric barrier term which yields the complexity bound
O(√n log n log(n/ε)) for large-update methods, currently the best known bound for
such methods. Some examples of kernel functions that have been analyzed in earlier
papers can be found in [8, 14, 16, 17].
In this paper, we define a new kernel function, which has a trigonometric barrier
term, and propose a primal-dual interior-point algorithm for LO based on this function.
We analyze the complexity of the large-update method based on three conditions on the
kernel function. The algorithm has the O(n^{3/4} log(n/ε)) complexity bound for large-update
methods, similar to the complexity obtained in [9, 13].
The paper is organized as follows. In Section 2, we recall the generic path-following
IPM. In Section 3, we define the new kernel function and give its properties,
which are essential for the complexity analysis. In Section 4, we derive the complexity
result for the large-update method and obtain an upper bound for the decrease of the barrier
function during an inner iteration. In the final section, we conclude with some remarks.
2. THE PRIMAL-DUAL ALGORITHM
In this section, we recall some basic concepts and the generic path-following IPM.
Without loss of generality, we assume that a strictly feasible pair of (P) and (D) exists, i.e.,
there exists (x^0, y^0, s^0) such that

Ax^0 = b,  A^T y^0 + s^0 = c,  x^0 > 0,  s^0 > 0.

This assumption is called the interior-point condition (IPC) [20]. The IPC ensures
the existence of an optimal primal-dual pair (x*, y*, s*) with zero duality gap, i.e.,

c^T x* − b^T y* = (x*)^T s* = 0.
It is well known that finding an optimal solution of (P) and (D) is equivalent to
solving the following system:

Ax = b,  x ≥ 0,
A^T y + s = c,  s ≥ 0,    (1)
xs = 0.

The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called
complementarity condition for (P) and (D), by the parameterized equation xs = μe
with μ > 0, where e denotes the all-one vector (1, 1, …, 1)^T and xs the componentwise
product of the vectors x and s. Thus, we have the following parameterized system:

Ax = b,  x ≥ 0,
A^T y + s = c,  s ≥ 0,    (2)
xs = μe.
For each μ > 0, the parameterized system (2) has a unique solution
(x(μ), y(μ), s(μ)) (see [20]), which is called the μ-center of (P) and (D). The set of μ-centers
(with μ running through all positive real numbers) gives a parameterized curve,
which is called the central path of (P) and (D). If μ → 0, then the limit of the central path
exists and, since the limit point satisfies the complementarity condition, the limit yields
optimal solutions for (P) and (D) [20].
A natural way to define a search direction is to follow the Newton approach and to
linearize the third equation in (2), replacing x, y and s with x_+ = x + Δx,
y_+ = y + Δy and s_+ = s + Δs, respectively. This leads to the following system:

AΔx = 0,
A^T Δy + Δs = 0,    (3)
sΔx + xΔs = μe − xs.
Since A has full row rank, the system (3) uniquely defines a search direction
(Δx, Δy, Δs) for any x > 0 and s > 0 [20]. We define the vector

v := √(xs/μ),    (4)

with i-th component v_i := √(x_i s_i / μ), and introduce the scaled search directions as follows:

d_x := vΔx/x,  d_s := vΔs/s.    (5)
Using (5), we can rewrite the system (3) as follows:

Ā d_x = 0,
Ā^T Δy + d_s = 0,    (6)
d_x + d_s = v^{−1} − v,

where

Ā := (1/μ) A V^{−1} X,  V := diag(v),  X := diag(x).
A crucial observation is that the right-hand side of the third equation in (6) is the
negative gradient of the classical logarithmic barrier function Ψ_c(v), that is,

d_x + d_s = −∇Ψ_c(v),    (7)

where

Ψ_c(v) := Σ_{i=1}^n ψ_c(v_i),  ψ_c(t) = (t^2 − 1)/2 − log(t).

One may easily verify that ψ_c(t) satisfies

ψ_c(1) = ψ_c'(1) = 0,  ψ_c''(t) > 0 for t > 0,  lim_{t→0^+} ψ_c(t) = lim_{t→∞} ψ_c(t) = +∞.    (8)

This shows that Ψ_c(v) is strictly convex, and attains its minimal value at v = e
with Ψ_c(e) = 0. Thus,

Ψ_c(v) = 0 ⇔ ∇Ψ_c(v) = 0 ⇔ v = e ⇔ xs = μe.
In this paper, we replace the right-hand side of the third equation in (6) by
−∇Ψ(v), where Ψ is the barrier function induced by the new kernel function ψ(t) defined
in (12). Thus, system (6) is reformulated as follows:

Ā d_x = 0,
Ā^T Δy + d_s = 0,    (9)
d_x + d_s = −∇Ψ(v).

The new search direction (d_x, Δy, d_s) is obtained by solving (9), after which
(Δx, Δy, Δs) is computed via (5). By taking a step along the search direction determined
by (9), with a step size α defined by some line search rule, a new triple (x_+, y_+, s_+) is
constructed according to

x_+ = x + αΔx,  y_+ = y + αΔy,  s_+ = s + αΔs.    (10)

Since d_x and d_s are orthogonal, we have

Ψ(v) = 0 ⇔ ∇Ψ(v) = 0 ⇔ v = e ⇔ d_x = d_s = 0 ⇔ xs = μe.
We use Ψ(v) as the proximity function to measure the distance between the
current iterate and the μ-center for given μ > 0. We also define the norm-based
proximity measure δ(v) as follows:

δ(v) := (1/2) ∥∇Ψ(v)∥ = (1/2) √(Σ_{i=1}^n (ψ'(v_i))^2),  v ∈ R^n_{++}.    (11)
We assume that (P) and (D) are strictly feasible, and that the starting point (x^0, y^0, s^0)
is strictly feasible for (P) and (D). Choose this initial point and μ^0 > 0
such that Ψ(v^0) ≤ τ, where τ is a threshold value. We then decrease μ to μ := (1 − θ)μ,
for some θ ∈ (0, 1). In general, this will increase the value of Ψ(v) above τ. To get this
value small again, and hence come closer to the current μ-center, we solve for the scaled search
directions from (9), and unscale these directions by using (5). By choosing an
appropriate step size α, we move along the search direction and construct a new triple
(x_+, y_+, s_+) given by (10). If necessary, we repeat the procedure until we find iterates
such that Ψ(v) no longer exceeds the threshold value τ, which means that the iterates are
in a small enough neighborhood of (x(μ), y(μ), s(μ)). Then μ is again reduced by the
factor 1 − θ, and we apply the same procedure targeting the new μ-center. This
process is repeated until μ is small enough, say nμ ≤ ε for a certain accuracy parameter
ε; at this stage we have found an ε-approximate solution of (P) and (D). The generic IPM
outlined above is summarized in Algorithm 1.
Algorithm 1: Primal-Dual Algorithm for LO

Input:
  accuracy parameter ε > 0;
  barrier update parameter θ, 0 < θ < 1;
  threshold parameter τ ≥ 1;
  a strictly feasible (x^0, y^0, s^0) and μ^0 = 1 such that Ψ(x^0, s^0; μ^0) ≤ τ.
begin
  x := x^0; y := y^0; s := s^0; μ := μ^0;
  while nμ ≥ ε do
  begin
    μ-update: μ := (1 − θ)μ;
    while Ψ(x, s; μ) > τ do
    begin
      solve the system (9) and use (5) to obtain (Δx, Δy, Δs);
      determine a step size α;
      x := x + αΔx;
      y := y + αΔy;
      s := s + αΔs;
    end
  end
end
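As an illustration of Algorithm 1, the following minimal Python sketch implements the scheme for a tiny hypothetical LP (min x1 + 2x2 subject to x1 + x2 = 1, x ≥ 0). It uses the kernel (12) to form the search direction (9), but replaces the default step size analyzed later in the paper with a simple backtracking line search, so it is a toy illustration under our own parameter choices, not the analyzed method:

```python
import numpy as np

def h(t):
    return np.pi * t / (1.0 + t)

def psi(t):
    # kernel (12): psi(t) = t^2 - 2t + 1/sin(h(t))
    return t**2 - 2.0 * t + 1.0 / np.sin(h(t))

def dpsi(t):
    # first derivative (13)
    return 2.0 * t - 2.0 - np.pi * np.cos(h(t)) / ((1.0 + t)**2 * np.sin(h(t))**2)

def Psi(v):
    return np.sum(psi(v))

def direction(A, x, s, mu):
    # Newton system (9) pulled back to the original variables:
    # A dx = 0,  A^T dy + ds = 0,  s dx + x ds = -mu * v * psi'(v)
    v = np.sqrt(x * s / mu)
    r = -mu * v * dpsi(v)
    M = (A * (x / s)) @ A.T                  # A diag(x/s) A^T
    dy = np.linalg.solve(M, -A @ (r / s))
    ds = -A.T @ dy
    dx = (r - x * ds) / s
    return dx, dy, ds

def ipm(A, b, c, x, y, s, theta=0.5, tau=1.0, eps=1e-6):
    n = len(x)
    mu = x @ s / n
    while n * mu >= eps:
        mu *= 1.0 - theta                    # mu-update
        for _ in range(100):                 # inner iterations
            v = np.sqrt(x * s / mu)
            if Psi(v) <= tau:
                break
            dx, dy, ds = direction(A, x, s, mu)
            alpha, cur = 1.0, Psi(v)
            while alpha > 1e-12:             # backtracking line search
                xn, sn = x + alpha * dx, s + alpha * ds
                if xn.min() > 0 and sn.min() > 0 and Psi(np.sqrt(xn * sn / mu)) < cur:
                    break
                alpha *= 0.5
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = ipm(A, b, c, np.array([0.5, 0.5]), np.array([0.0]), np.array([1.0, 2.0]))
```

On this example the iterates stay primal and dual feasible and approach the optimal vertex x* = (1, 0) as μ → 0.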
A crucial question is how to choose the parameters τ, θ, and the step size α
so as to minimize the iteration complexity of the algorithm.
3. THE NEW KERNEL FUNCTION
In this section, we define the new kernel function and give the properties needed for
the complexity analysis. The new kernel function ψ(t) is defined as follows:

ψ(t) := t^2 − 2t + 1/sin(h(t)),  t > 0,    (12)

where h(t) := πt/(1+t). Then we have the first three derivatives of ψ(t) as follows:

ψ'(t) = 2t − 2 − π cos(h(t)) / ((1+t)^2 sin^2(h(t))),    (13)

ψ''(t) = 2 + 2π cos(h(t)) / ((1+t)^3 sin^2(h(t))) + 2π^2 cos^2(h(t)) / ((1+t)^4 sin^3(h(t))) + π^2 / ((1+t)^4 sin(h(t))),    (14)

ψ'''(t) = −(1/((1+t)^6 sin^4(h(t)))) (6π(1+t)^2 sin^2(h(t))cos(h(t)) + 6π^2(1+t) sin^3(h(t))
          + 12π^2(1+t) sin(h(t))cos^2(h(t)) + 5π^3 sin^2(h(t))cos(h(t)) + 6π^3 cos^3(h(t))).    (15)
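As a sanity check on the formulas (12)-(14), the closed-form derivatives can be compared against central finite differences; the short Python script below is an illustration of ours, not part of the paper:

```python
import numpy as np

PI = np.pi

def h(t):
    return PI * t / (1.0 + t)

def psi(t):                    # kernel (12)
    return t**2 - 2.0 * t + 1.0 / np.sin(h(t))

def dpsi(t):                   # first derivative (13)
    return 2.0 * t - 2.0 - PI * np.cos(h(t)) / ((1.0 + t)**2 * np.sin(h(t))**2)

def d2psi(t):                  # second derivative (14)
    u = h(t)
    return (2.0
            + 2.0 * PI * np.cos(u) / ((1.0 + t)**3 * np.sin(u)**2)
            + 2.0 * PI**2 * np.cos(u)**2 / ((1.0 + t)**4 * np.sin(u)**3)
            + PI**2 / ((1.0 + t)**4 * np.sin(u)))

t = np.linspace(0.2, 5.0, 193)
fd1 = (psi(t + 1e-6) - psi(t - 1e-6)) / 2e-6      # central difference for psi'
fd2 = (dpsi(t + 1e-6) - dpsi(t - 1e-6)) / 2e-6    # central difference for psi''
```

The checks ψ(1) = ψ'(1) = 0 and the agreement of (13)-(14) with the finite differences confirm the formulas; on the same grid one also observes ψ''(t) > 2, in line with Lemma 2 below.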
Lemma 1. For the function h(t) defined in (12), one has
1. −π cos(h(t)) < (1+t) sin(h(t)) < π, t > 0;
2. πt cos(h(t)) < (1+t) sin(h(t)) < πt, t > 0;
3. (1+t) sin(h(t)) < 2πt cos(h(t)), 0 < t ≤ 1/2.
Proof. For x ∈ (0, π), define f(x) = x − sin(x), and for x ∈ (0, π/2), define g(x) = x − tan(x).
We have

f'(x) = 1 − cos(x) > 0,  g'(x) = −tan^2(x) < 0.

Thus f is strictly increasing and g is strictly decreasing; since f(0) = g(0) = 0, it follows
that f(x) > 0 and g(x) < 0, i.e.,

sin(x) < x for 0 < x < π,  x < tan(x) for 0 < x < π/2.

Letting x = π/(1+t), and noting that sin(π/(1+t)) = sin(h(t)) and cos(π/(1+t)) = −cos(h(t)),
the inequality sin(x) < x gives (1+t)sin(h(t)) < π, while x < tan(x) gives
−π cos(h(t)) < (1+t)sin(h(t)) whenever π/(1+t) < π/2, i.e., t > 1; for 0 < t ≤ 1 the latter
inequality holds trivially because then cos(h(t)) ≥ 0. This proves the first part. Letting
x = h(t) = πt/(1+t) in the same two inequalities proves the second part: sin(h(t)) < h(t)
gives (1+t)sin(h(t)) < πt, and h(t) < tan(h(t)) for 0 < h(t) < π/2 (i.e., 0 < t < 1) gives
πt cos(h(t)) < (1+t)sin(h(t)), the latter being trivial for t ≥ 1 since then cos(h(t)) ≤ 0.
For the third part, for 0 < t ≤ 1/2 define

k(t) = 2πt cos(h(t)) − (1+t) sin(h(t)).

Then

k'(t) = 2π cos(h(t)) − 2π^2 t sin(h(t))/(1+t)^2 − sin(h(t)) − π cos(h(t))/(1+t),

and one checks that k''(t) < 0 on (0, 1/2], using sin(h(t)) > 0 and cos(h(t)) > 0 there;
hence k is strictly concave on this interval. Since k(0) = 0 and
k(1/2) = π/2 − (3/2)sin(π/3) > 0, concavity yields k(t) > 0 for 0 < t ≤ 1/2, which is the
third part. This completes the proof of the lemma. ∎
The next lemma is fundamental in the analysis of the algorithm based on the kernel
function (12).
Lemma 2. For ψ(t) defined in (12), we have

ψ''(t) > 2,    (16)

tψ''(t) + ψ'(t) > 0,  t < 1,    (17)

tψ''(t) − ψ'(t) > 0,    (18)

ψ'''(t) < 0.    (19)
Proof. From (14), using h'(t) = π/(1+t)^2, we can write

ψ''(t) = (π/((1+t)^4 sin^3(h(t)))) (2(1+t)sin(h(t))cos(h(t)) + π(1 + cos^2(h(t)))) + 2.

Case 1: 0 < t ≤ 1. In this case sin(h(t)) > 0 and cos(h(t)) ≥ 0, so the first term is
nonnegative and (16) is obvious.

Case 2: t > 1. Now cos(h(t)) < 0. Multiplying the right inequality of the first part of
Lemma 1, (1+t)sin(h(t)) < π, by 2cos(h(t)) < 0 gives 2(1+t)sin(h(t))cos(h(t)) > 2π cos(h(t)).
Hence

ψ''(t) ≥ (π^2/((1+t)^4 sin^3(h(t)))) (1 + 2cos(h(t)) + cos^2(h(t))) + 2
       = (π^2/((1+t)^4 sin^3(h(t)))) (1 + cos(h(t)))^2 + 2 > 2.

This proves (16). By using (13) and (14), we have

tψ''(t) + ψ'(t) = (1/((1+t)^4 sin^3(h(t)))) ĥ(t),    (20)

where

ĥ(t) := π^2 t sin^2(h(t)) + 2πt(1+t)sin(h(t))cos(h(t)) − π(1+t)^2 sin(h(t))cos(h(t))
        + 2π^2 t cos^2(h(t)) + (4t − 2)(1+t)^4 sin^3(h(t)).

Note that the two mixed terms combine to π(1+t)(t−1)sin(h(t))cos(h(t)).

Case 1: 0 < t ≤ 1/2. Here t − 1 < 0 and 4t − 2 ≤ 0. Using the third part of Lemma 1,
sin(h(t)) < 2πt cos(h(t))/(1+t), in the combined mixed term, and sin(h(t)) ≤ h(t) = πt/(1+t)
in the last term, we obtain

ĥ(t) > π^2 t sin^2(h(t)) + 2π^2 t cos^2(h(t)) + 2π^2 t(t−1)cos^2(h(t)) + (4t−2)πt(1+t)^3 sin^2(h(t))
     = πt sin^2(h(t)) (π + (4t−2)(1+t)^3) + 2π^2 t^2 cos^2(h(t)) > 0,

since π + (4t−2)(1+t)^3 = (π − 3) + 4t^4 + 10t^3 + 6t^2 + (1 − 2t) > 0 for 0 < t ≤ 1/2.

Case 2: 1/2 < t < 1. Now 4t − 2 > 0, so the last term of ĥ(t) is nonnegative. By the
second part of Lemma 1, cos(h(t)) < (1+t)sin(h(t))/(πt); using this in the combined mixed
term, whose coefficient π(1+t)(t−1) is negative, gives

π(1+t)(t−1)sin(h(t))cos(h(t)) > ((1+t)^2 (t−1)/t) sin^2(h(t)),

and hence

ĥ(t) > (sin^2(h(t))/t)(π^2 t^2 + (1+t)^2 (t−1)) + 2π^2 t cos^2(h(t)) > 0,

because π^2 t^2 + (1+t)^2(t−1) is increasing in t and equals π^2/4 − 9/8 > 0 at t = 1/2.

The two cases together prove (17). To prove (18), considering the first two derivatives of
ψ(t), we have

tψ''(t) − ψ'(t) = (1/((1+t)^4 sin^3(h(t)))) (π^2 t sin^2(h(t)) + 2πt(1+t)sin(h(t))cos(h(t))
                  + π(1+t)^2 sin(h(t))cos(h(t)) + 2π^2 t cos^2(h(t)) + 2(1+t)^4 sin^3(h(t))).

Case 1: 0 < t ≤ 1. In this case 0 < h(t) ≤ π/2, so sin(h(t)) > 0 and cos(h(t)) ≥ 0; every
term is nonnegative and the last one is positive, hence tψ''(t) − ψ'(t) > 0.

Case 2: t > 1. Now cos(h(t)) < 0 and sin(h(t)) > 0. By the left inequality of the first part
of Lemma 1, −cos(h(t)) < (1+t)sin(h(t))/π, so

2πt(1+t)sin(h(t))cos(h(t)) > −2t(1+t)^2 sin^2(h(t)),
π(1+t)^2 sin(h(t))cos(h(t)) > −(1+t)^3 sin^2(h(t)).

Moreover sin(h(t)) = sin(π/(1+t)) ≥ 2/(1+t) for t ≥ 1, so 2(1+t)^4 sin^3(h(t)) ≥ 4(1+t)^3 sin^2(h(t)).
Dropping the nonnegative term 2π^2 t cos^2(h(t)), we get

tψ''(t) − ψ'(t) > (sin^2(h(t))/((1+t)^4 sin^3(h(t)))) (π^2 t − 2t(1+t)^2 + 3(1+t)^3) > 0,

since 3(1+t)^3 − 2t(1+t)^2 = (1+t)^2 (t + 3) > 0.

The two cases together prove (18). To prove (19), we use the form of the third derivative
given in (15):

ψ'''(t) = − h̄(t) / ((1+t)^6 sin^4(h(t))),

where h̄(t) denotes the sum in parentheses in (15).

Case 1: 0 < t ≤ 1. In this case 0 < h(t) ≤ π/2, so sin(h(t)) > 0 and cos(h(t)) ≥ 0, which
implies h̄(t) > 0.

Case 2: t > 1. Now cos(h(t)) < 0 and sin(h(t)) > 0, and the two positive terms
6π^2(1+t)sin^3(h(t)) and 12π^2(1+t)sin(h(t))cos^2(h(t)) dominate the three negative ones;
estimating the latter by means of the first two parts of Lemma 1, along the same lines as in
the proof of (17), one verifies that h̄(t) > 0 in this case as well.

In both cases h̄(t) > 0, so ψ'''(t) < 0, which proves (19). This completes the proof of the
lemma. ∎
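The four conditions of Lemma 2 can also be spot-checked numerically on a grid, with ψ''' taken from (15); the sketch below is our illustration, not a proof:

```python
import numpy as np

PI = np.pi

def h(t):
    return PI * t / (1.0 + t)

def dpsi(t):                   # (13)
    return 2.0 * t - 2.0 - PI * np.cos(h(t)) / ((1.0 + t)**2 * np.sin(h(t))**2)

def d2psi(t):                  # (14)
    u = h(t)
    return (2.0
            + 2.0 * PI * np.cos(u) / ((1.0 + t)**3 * np.sin(u)**2)
            + 2.0 * PI**2 * np.cos(u)**2 / ((1.0 + t)**4 * np.sin(u)**3)
            + PI**2 / ((1.0 + t)**4 * np.sin(u)))

def d3psi(t):                  # (15)
    s, c = np.sin(h(t)), np.cos(h(t))
    hbar = (6.0 * PI * (1.0 + t)**2 * s**2 * c
            + 6.0 * PI**2 * (1.0 + t) * s**3
            + 12.0 * PI**2 * (1.0 + t) * s * c**2
            + 5.0 * PI**3 * s**2 * c
            + 6.0 * PI**3 * c**3)
    return -hbar / ((1.0 + t)**6 * s**4)

t = np.linspace(0.2, 8.0, 391)
fd3 = (d2psi(t + 1e-5) - d2psi(t - 1e-5)) / 2e-5   # finite-difference psi'''
```

The finite-difference comparison also validates the closed form (15) against (14).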
Lemma 3. For t ≥ 1, one has
1. cos(h(t)) ≥ π(1 − t)/(2(1+t));
2. 1/sin(h(t)) ≤ t^2 − 2t + 2.

Proof. For t ≥ 1, we have π/2 ≤ h(t) < π and sin(h(t)) ≤ 1. Define
Φ(t) := cos(h(t)) − π(1 − t)/(2(1+t)). Then

Φ'(t) = −π sin(h(t))/(1+t)^2 + π/(1+t)^2 = π(1 − sin(h(t)))/(1+t)^2 ≥ 0.

Thus Φ(t) is increasing for t ≥ 1, and hence Φ(t) ≥ Φ(1) = 0. This implies the
first inequality. For the second part, defining

q(t) := t^2 − 2t + 2 − 1/sin(h(t)),

we have

q'(t) = 2t − 2 + π cos(h(t))/((1+t)^2 sin^2(h(t)))
      ≥ 2(t − 1) − π^2 (t − 1)/(2(1+t)^3 sin^2(h(t)))
      ≥ (t − 1)(2 − π^2/16) ≥ 0.

The first inequality follows from the first part, and the second from
(1+t)sin(h(t)) ≥ 2 for t ≥ 1 (a consequence of sin(x) ≥ 2x/π for 0 < x ≤ π/2 with
x = π/(1+t)), which gives 2(1+t)^3 sin^2(h(t)) ≥ 16. Therefore q(t) is increasing, and
hence q(t) ≥ q(1) = 0. This completes the proof. ∎
Note that ψ(1) = ψ'(1) = 0 and (16) imply that ψ(t) is a nonnegative,
strictly convex function which attains its minimum at t = 1, with ψ(1) = 0. Since ψ(t) is
twice differentiable, it is completely determined by its second derivative:

ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ.    (21)

The next lemma is very useful in the analysis of interior-point algorithms based on
kernel functions (see, for example, [2, 17]).
Lemma 4 (Lemma 2.1.2 in [19]) Let ψ(t) be a twice differentiable function for t > 0.
Then the following three properties are equivalent:
1. ψ(√(t_1 t_2)) ≤ (1/2)(ψ(t_1) + ψ(t_2)) for t_1, t_2 > 0;
2. tψ''(t) + ψ'(t) ≥ 0, t > 0;
3. ψ(e^ξ) is convex in ξ.

Following [19], the property described in Lemma 4 is called exponential
convexity, or shortly e-convexity. Lemma 4 and (17) therefore show that our new
kernel function (12) is e-convex for t > 0 (for t ≥ 1 the condition tψ''(t) + ψ'(t) ≥ 0
holds trivially, since then ψ'(t) ≥ 0 and ψ''(t) > 0).
Lemma 5 (Lemmas 7 and 8 in [13]) For ψ(t) defined in (12), one has
1. ψ(t) < (1/2) ψ''(1)(t − 1)^2, t > 1;
2. ψ(t) ≤ t ψ'(t), t ≥ 1.

The next theorem gives a lower bound on δ(v) in terms of Ψ(v). This is due to
the fact that ψ(t) satisfies (19).
Theorem 6 (Theorem 4.9 in [2]) Let ϱ: [0, ∞) → [1, ∞) be the inverse function of ψ(t)
for t ≥ 1. One has

δ(v) ≥ (1/2) ψ'(ϱ(Ψ(v))).

Lemma 7. If Ψ(v) ≥ 1, then

δ(v) ≥ (1/4) √Ψ(v).

Proof. Using (21) and (16), we have

s = ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ ≥ ∫_1^t ∫_1^ξ 2 dζ dξ = (t − 1)^2,

which implies t = ϱ(s) ≤ 1 + √s. Using Theorem 6 and the second part of Lemma 5, we have

δ(v) ≥ (1/2) ψ'(ϱ(Ψ(v))) ≥ Ψ(v)/(2ϱ(Ψ(v))) ≥ Ψ(v)/(2(1 + √Ψ(v))) ≥ Ψ(v)/(4√Ψ(v)) = (1/4) √Ψ(v),

where the last inequality uses 1 + √Ψ(v) ≤ 2√Ψ(v), valid since Ψ(v) ≥ 1.
The proof of the statement is completed. ∎
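Lemma 7 can be illustrated numerically by sampling random positive vectors v with Ψ(v) ≥ 1 and comparing δ(v) = (1/2)∥∇Ψ(v)∥ from (11) with (1/4)√Ψ(v); the sampling range and dimension below are our own illustrative choices:

```python
import numpy as np

PI = np.pi

def h(t):
    return PI * t / (1.0 + t)

def psi(t):                    # kernel (12)
    return t**2 - 2.0 * t + 1.0 / np.sin(h(t))

def dpsi(t):                   # (13)
    return 2.0 * t - 2.0 - PI * np.cos(h(t)) / ((1.0 + t)**2 * np.sin(h(t))**2)

rng = np.random.default_rng(0)
ratios = []
for _ in range(1000):
    v = rng.uniform(0.3, 3.0, size=6)          # random positive vector
    Psi_v = float(np.sum(psi(v)))
    if Psi_v < 1.0:                            # Lemma 7 assumes Psi(v) >= 1
        continue
    delta_v = 0.5 * float(np.linalg.norm(dpsi(v)))   # proximity measure (11)
    ratios.append(delta_v / (0.25 * np.sqrt(Psi_v)))
```

Every sampled ratio exceeds 1, consistent with the lemma (with a comfortable margin, since the bound is not tight).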
At the start of each outer iteration, just before the update of μ by the factor
1 − θ, we have Ψ(v) ≤ τ. Due to the update of μ, the vector v defined in (4) is divided
by the factor √(1 − θ), with 0 < θ < 1, which in general leads to an increase in the value of Ψ(v).
Then, during the inner iterations, Ψ(v) decreases until it passes the threshold τ again.
Hence, during the course of the algorithm, the largest values of Ψ(v) occur just after the
updates of μ. In the rest of this section, we derive an estimate for the effect of a μ-update
on the value of Ψ(v). We start with an important theorem. This is due to the fact that
ψ(t) satisfies (16) and (18).
Theorem 8 (Theorem 3.2 in [2]) Let ϱ be the inverse function of ψ(t) for t ≥ 1. Then,
for any positive vector v and any β ≥ 1,

Ψ(βv) ≤ n ψ(β ϱ(Ψ(v)/n)).

Corollary 9. Let 0 ≤ θ < 1 and v_+ = v/√(1 − θ). If Ψ(v) ≤ τ, then

Ψ(v_+) ≤ n ψ(ϱ(τ/n)/√(1 − θ)).

Suppose that the barrier update parameter θ and the threshold value τ are given.
According to the algorithm, at the start of each outer iteration we have Ψ(v) ≤ τ. Define

L = L(n, θ, τ) := n ψ(ϱ(τ/n)/√(1 − θ)).    (22)

According to Corollary 9, L is an upper bound for Ψ(v_+), the value of Ψ(v) after
the μ-update.
4. ANALYSIS OF THE ALGORITHM
In this section, we determine a default step size and obtain an upper bound for the
decrease of the barrier function Ψ(v) during an inner iteration.

4.1. Decrease of the value of Ψ(v) and choice of a default step size
In each iteration, the search directions Δx, Δy and Δs are obtained by solving the
system (9) and using (5). After a step of size α, by (5), the new iterates are

x_+ = x + αΔx = (x/v)(v + αd_x),  s_+ = s + αΔs = (s/v)(v + αd_s).

Thus we have

v_+^2 = x_+ s_+ / μ = (v + αd_x)(v + αd_s).

Since the proximity after one step is defined by Ψ(v_+), it follows that

Ψ(v_+) = Ψ(√((v + αd_x)(v + αd_s))).

Hence, by Lemma 4,

Ψ(v_+) ≤ (1/2)(Ψ(v + αd_x) + Ψ(v + αd_s)).

Let us denote the difference between the proximity before and after one step by a
function of the step size, that is,

f(α) := Ψ(v_+) − Ψ(v).

Then f(α) ≤ f_1(α), where

f_1(α) := (1/2)(Ψ(v + αd_x) + Ψ(v + αd_s)) − Ψ(v).

Obviously, f(0) = f_1(0) = 0.
Taking the derivative with respect to α, we obtain

f_1'(α) = (1/2) Σ_{i=1}^n ( ψ'(v_i + α(d_x)_i)(d_x)_i + ψ'(v_i + α(d_s)_i)(d_s)_i ),    (23)

and

f_1''(α) = (1/2) Σ_{i=1}^n ( ψ''(v_i + α(d_x)_i)(d_x)_i^2 + ψ''(v_i + α(d_s)_i)(d_s)_i^2 ),    (24)

where (d_x)_i and (d_s)_i denote the i-th components of the vectors d_x and d_s, respectively.
From (23), using (11) and the third equation of (9), we obtain

f_1'(0) = (1/2) ∇Ψ(v)^T (d_x + d_s) = −(1/2) ∇Ψ(v)^T ∇Ψ(v) = −2δ(v)^2.    (25)
In what follows, we use the short notation δ := δ(v) and v_min := min_i v_i, and state four
important lemmas without proofs. They rely on the fact that ψ''(t) is monotonically
decreasing, which follows from (19).

Lemma 10 (Lemma 4.1 in [2]) One has

f_1''(α) ≤ 2δ^2 ψ''(v_min − 2αδ).

Lemma 11 (Lemma 4.2 in [2]) If the step size α satisfies

−ψ'(v_min − 2αδ) + ψ'(v_min) ≤ 2δ,    (26)

then f_1'(α) ≤ 0.

Lemma 12 (Lemma 4.3 in [2]) Let ρ: [0, ∞) → (0, 1] denote the inverse function of the
restriction of −(1/2)ψ'(t) to the interval (0, 1]. Then the largest possible value of the step
size α satisfying (26) is given by

ᾱ := (1/(2δ)) ( ρ(δ) − ρ(2δ) ).

Lemma 13 (Lemma 4.4 in [2]) Let ρ and ᾱ be as defined in Lemma 12. Then

ᾱ ≥ 1/ψ''(ρ(2δ)).
For the purpose of finding an upper bound for f(α), we need a default step size α̃ that
is a lower bound for ᾱ and is expressed in terms of δ.

Lemma 14. Let ρ be as defined in Lemma 12, and let δ ≥ 1. Then

1/ψ''(ρ(2δ)) ≥ 1/(16 + 24√6 δ^{3/2}).

Proof. Put t = ρ(2δ), so that 0 < t ≤ 1 and −(1/2)ψ'(t) = 2δ. By (13), this means

−ψ'(t) = 2 − 2t + π cos(h(t))/((1+t)^2 sin^2(h(t))) = 4δ,

which implies

π cos(h(t))/((1+t)^2 sin^2(h(t))) = 4δ + 2t − 2 ≤ 4δ + 2.    (27)

Estimating each term of ψ''(t) in (14) by means of the second part of Lemma 1
((1+t)sin(h(t)) ≤ πt ≤ π for 0 < t ≤ 1) together with (27), a somewhat lengthy but
straightforward computation yields

ψ''(ρ(2δ)) ≤ 16 + 24√6 δ^{3/2},

where 4δ + 2 ≤ 6δ, valid for δ ≥ 1, is used in the final step. The claim now follows from
Lemma 13. This completes the proof. ∎

In the sequel, we use the notation

α̃ := 1/(16 + 24√6 δ^{3/2})    (28)

for the default step size. By Lemmas 13 and 14, ᾱ ≥ α̃.
Lemma 15 (Lemma 4.5 in [2]) If the step size α satisfies α ≤ ᾱ, then

f(α) ≤ −αδ^2.

Theorem 16. Let α̃ be the default step size as given by (28). Then

f(α̃) ≤ −Ψ(v)^{1/4}/(32 + 48√6).

Proof. Using Lemma 15 with α = α̃ ≤ ᾱ and (28), we have

f(α̃) ≤ −α̃δ^2 = −δ^2/(16 + 24√6 δ^{3/2}) ≤ −√δ/(16 + 24√6),

where the last inequality uses δ ≥ 1. Using Lemma 7, δ ≥ (1/4)√Ψ(v), so
√δ ≥ (1/2)Ψ(v)^{1/4}, and hence

f(α̃) ≤ −Ψ(v)^{1/4}/(32 + 48√6).

This proves the theorem. ∎
4.2. Iteration Bound

We need to count how many inner iterations are required to return to the situation
where Ψ(v) ≤ τ after a μ-update. We denote the value of Ψ(v) just after a μ-update by Ψ_0, and
the subsequent values in the same outer iteration by Ψ_k, k = 1, 2, …, K, where
K denotes the total number of inner iterations in that outer iteration. According to
Theorem 16, for k = 0, 1, …, K − 1, we obtain

Ψ_{k+1} ≤ Ψ_k − Ψ_k^{1/4}/(32 + 48√6).    (29)
Lemma 17 (Lemma 14 in [18]) Let t_0, t_1, …, t_K be a sequence of positive numbers
such that

t_{k+1} ≤ t_k − κ t_k^{1−γ},  k = 0, 1, …, K − 1,

where κ > 0 and 0 < γ ≤ 1. Then K ≤ t_0^γ/(κγ).
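Lemma 17 can be checked on a worst-case sequence by iterating t_{k+1} = t_k − κ t_k^{1−γ} with equality; the snippet below is our illustration, using κ = 1/(32 + 48√6) and γ = 3/4 as in the analysis of this section, a hypothetical starting value t_0, and threshold 1:

```python
import math

kappa = 1.0 / (32.0 + 48.0 * math.sqrt(6.0))
gamma = 0.75
t0 = 250.0          # hypothetical initial value (e.g. Psi_0 after a mu-update)

t, K = t0, 0
while t > 1.0:      # iterate until the threshold tau = 1 is passed
    t -= kappa * t**(1.0 - gamma)
    K += 1

bound = t0**gamma / (kappa * gamma)   # Lemma 17's upper bound on K
```

The observed iteration count K stays below the bound t_0^γ/(κγ), as the lemma guarantees.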
Letting t_k = Ψ_k, κ = 1/(32 + 48√6) and γ = 3/4, we can get the following theorem
from Lemma 17.

Theorem 18. Let K be the total number of inner iterations in an outer iteration. Then
we have

K ≤ (4/3)(32 + 48√6) Ψ_0^{3/4},

where Ψ_0 is the value of Ψ(v) after the μ-update in that outer iteration.
According to Lemma 3, it is clear that ψ(t) ≤ 2(t − 1)^2 for t ≥ 1. Applying Corollary 9
and the proof of Lemma 7, we obtain

Ψ_0 ≤ n ψ(ϱ(τ/n)/√(1 − θ)) ≤ 2n (ϱ(τ/n)/√(1 − θ) − 1)^2
    ≤ 2n ((1 + √(τ/n))/√(1 − θ) − 1)^2 ≤ (2/(1 − θ)) (√τ + θ√n)^2,

where the last inequality holds because 1 − √(1 − θ) ≤ θ for 0 < θ ≤ 1. The number of
outer iterations is bounded above by (1/θ) log(n/ε) (Lemma II.17 in [20]). By multiplying the
number of outer iterations and the number of inner iterations, we get an upper bound for
the total number of iterations, namely

(4(32 + 48√6)/(3θ)) ((2/(1 − θ))(√τ + θ√n)^2)^{3/4} log(n/ε).

Large-update methods use θ = Θ(1) and τ = O(n), so the iteration bound becomes

O(n^{3/4} log(n/ε)).
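For concreteness, the total-iteration bound can be evaluated numerically; the helper below is an illustration with hypothetical parameter choices (τ = n, θ = 1/2) and shows the n^{3/4} growth:

```python
import math

def iteration_bound(n, theta, tau, eps):
    # (4(32 + 48*sqrt(6)) / (3*theta)) * ((2/(1-theta)) * (sqrt(tau) + theta*sqrt(n))**2)**(3/4) * log(n/eps)
    lead = 4.0 * (32.0 + 48.0 * math.sqrt(6.0)) / (3.0 * theta)
    psi0 = (2.0 / (1.0 - theta)) * (math.sqrt(tau) + theta * math.sqrt(n))**2
    return lead * psi0**0.75 * math.log(n / eps)

b1 = iteration_bound(10**3, 0.5, 10**3, 1e-6)
b2 = iteration_bound(16 * 10**3, 0.5, 16 * 10**3, 1e-6)
```

With τ = n, multiplying n by 16 multiplies the bound by roughly 16^{3/4} = 8, times a slowly growing logarithmic factor.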
5. CONCLUSION

In this paper, we have proposed a new barrier function and a large-update
version of a primal-dual interior-point algorithm. The kernel function (12) has a new
trigonometric barrier term with the properties described in Section 3. We have shown
that the large-update IPM based on this kernel function has the O(n^{3/4} log(n/ε)) iteration
bound, which improves the classical O(n log(n/ε)) iteration complexity for large-update
methods.
REFERENCES
[1] Andersen, E.D., Gondzio, J., Mészaros, Cs., and Xu, X., “Implementation of interior point
methods for large scale linear programming", in: T. Terlaky (Ed.), Interior Point Methods of
Mathematical Programming, Kluwer Academic Publishers, The Netherlands, 1996.
[2] Bai, Y.Q., El Ghami, M., and Roos, C., "A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization", SIAM Journal on Optimization, 15 (1)
(2004) 101-128.
[3] Bai, Y.Q., Guo, J., and Roos, C., “A new kernel function yielding the best known iteration
bounds for primal-dual interior-point algorithms”, Acta Mathematica Sinica, English Series,
25 (12) (2009) 2169-2178.
[4] Bai, Y.Q., and Roos, C., "A polynomial-time algorithm for linear optimization based on a new
simple kernel function”, Optimization Methods and Software, 18 (6) (2003) 631-646.
[5] Bai, Y. Q., Lesaja, G., Roos, C., Wang, G.Q., and El Ghami, M., “A class of large-update and
small-update primal-dual interior-point algorithms for linear optimization”, Journal of
Optimization Theory and Applications, 138 (3) (2008) 341-359.
[6] Bai, Y.Q., El Ghami, M., and Roos, C., "A new efficient large-update primal-dual interior-point method based on a finite barrier", SIAM Journal on Optimization, 13 (3) (2003) 766-782.
[7] Cho, G. M., “An interior-point algorithm for linear optimization based on a new barrier
function”, Journal of Applied Mathematics and Computing, 218 (2011) 386-395.
[8] El Ghami, M., “Primal-dual algorithms for P∗ (κ) linear complementarity problems based on
kernel-function with trigonometric barrier term”, Springer Proceedings in Mathematics and
Statistics, 31 (2013) 331-349.
[9] El Ghami, M., Guennoun, Z. A., Bouali, S., and Steihaug, T., “Interior-point methods for
linear optimization based on a kernel function with trigonometric barrier term", Journal of
Computational and Applied Mathematics, 236(15) (2012) 3613-3623.
[10] Gonzaga, C.C., “Path following methods for linear programming”, SIAM Review, 34 (2)
(1992) 167-227.
[11] Hertog, D. den, Interior Point Approach to Linear, Quadratic and Convex Programming,
Mathematics and its Applications, 277 Kluwer Academic Publishers, Dordrecht, 1994.
[12] Karmarkar, N.K., “A new polynomial-time algorithm for linear programming”,
Combinatorica, 4 (1984) 375-395.
[13] Kheirfam, B., “Primal-dual interior-point algorithm for semidefinite optimization based on a
new kernel function with trigonometric barrier term”, Numerical Algorithms, 61(4) (2012)
659-680.
[14] Kheirfam, B., and Hasani, F., “A large-update feasible interior-point algorithm for convex
quadratic semi-definite optimization based on a new kernel function”, Journal of the
Operations Research Society of China, 1 (3) (2013) 359-376.
[15] Kheirfam, B., “A generic interior-point algorithm for monotone symmetric cone linear
complementarity problems based on a new kernel function”, Journal of Mathematical
Modelling and Algorithms in Operations Research, (2013) DOI 10.1007/s10852-013-9240-x.
[16] Kim, M.K., Cho, Y.Y., and Cho, G.M., “New path-following interior-point algorithms for
P∗ (κ)-nonlinear complementarity problems”, Nonlinear Analysis: Theory, Methods and
Applications, 14 (2013) 718-733.
[17] Lee, Y.H., Cho, Y.Y., and Cho, G.M., “Interior-point algorithms for P∗ (κ)-LCP based on a
new class of kernel functions", Journal of Global Optimization, (2013) DOI 10.1007/s10898-013-0072-z.
[18] Peng, J., Roos, C., and Terlaky, T., “Self-regular functions and new search directions for
linear and semidefinite optimization”, Mathematical Programming, 93 (1) (2002) 129-171.
[19] Peng, J., Roos, C., and Terlaky, T., Self-regularity: A New Paradigm for Primal-Dual
Interior-Point Algorithms, Princeton University Press, 2002.
[20] Roos, C., Terlaky, T., and Vial, J-Ph., Theory and Algorithms for Linear Optimization. An
Interior-Point Approach, John Wiley and Sons, Chichester, UK (1997).