
On the Computer Implementation of Feasible Direction
Interior Point Algorithms For Nonlinear Optimization
Jose Herskovits and Gines Santos
COPPE, Federal University of Rio de Janeiro
Caixa Postal 68503, 21945-970 Rio de Janeiro, BRAZIL
[email protected]
Abstract
We discuss the computer implementation of a class of interior point algorithms for
the minimization of a nonlinear function with equality and inequality constraints. These
algorithms use fixed point iterations to solve the Karush-Kuhn-Tucker first order optimality
conditions. At each iteration a descent direction is defined by solving a linear system.
In a second stage, the linear system is perturbed in such a way as to deflect the descent
direction and obtain a feasible descent direction. A line search is then performed to get a
new interior point with a lower objective. Newton, quasi-Newton or first order versions
of the algorithm can be obtained. This paper is mainly concerned with the solution of the
internal linear systems, the algorithms that we employ for the constrained line search and
also with the quasi-Newton matrix updating. We also present some numerical results
obtained with a quasi-Newton algorithm of the family. A set of test problems was solved
very efficiently with the same set of values of the internal parameters.
1 Introduction
This paper is concerned with the problem of minimizing a function subject to a set of equality
and inequality constraints, when nonlinear smooth functions are involved. Engineering design
is a natural application for this problem, since designers want to find the best design that
satisfies all the requirements of feasibility. Calling $x = [x_1, x_2, \ldots, x_n]$ the design variables, $f(x)$ the
objective function, $g_i(x),\ i = 1, 2, \ldots, m$ the inequality constraints and $h_i(x),\ i = 1, 2, \ldots, p$ the
equality constraints, the optimization problem can be denoted

$$\left.\begin{array}{ll} \mbox{minimize} & f(x) \\ \mbox{subject to} & g(x) \le 0 \\ \mbox{and} & h(x) = 0, \end{array}\right\} \qquad (1.1)$$

where $g \in R^m$ and $h \in R^p$ are functions in $R^n$.
Research partially developed at INRIA, Institut National de Recherche en Informatique et en Automatique,
Rocquencourt, France.
This problem is said to be a Mathematical Program and the discipline that studies the
numerical techniques to solve it, Mathematical Programming [15]. Even if mathematical programs arise naturally in optimization problems related to a wide set of disciplines that employ
mathematical models, several physical phenomena can also be modeled by means of mathematical programs. This is the case when "the equilibrium" is attained at the minimum of an
energy function.
Feasible direction algorithms are an important class of methods for solving constrained
optimization problems. At each iteration, the search direction is a feasible direction of the
inequality constraints and, at the same time, a descent direction of the objective or another
appropriate function. A constrained line search is then performed to obtain a satisfactory
reduction of the function, without losing feasibility.
The fact of generating interior points makes feasible direction algorithms very efficient in engineering design, where function evaluations are in general very expensive. Since any intermediate
design can be employed, the iterations can be stopped when the cost reduction per iteration
becomes small enough.
There are also several examples that deal with an objective function, or constraints, that are
not defined at infeasible points. This is the case of size or shape constraints in some examples
of structural optimization. When applying feasible direction algorithms to real time problems,
as feasibility is maintained and the cost reduced, the controls can be activated at each iteration.
In this paper we discuss the numerical implementation of a class of feasible direction algorithms that uses fixed point iterations to solve, in the primal and the
dual variables, the nonlinear equations given by the equalities included in the Karush-Kuhn-Tucker optimality conditions.
With the object of ensuring convergence to Karush-Kuhn-Tucker points, the system is solved
in such a way as to have the inequalities in the Karush-Kuhn-Tucker conditions satisfied at each
iteration. Based on the present general technique, first order, quasi-Newton or Newton algorithms can be obtained; in particular, the algorithms described in [10, 12] and the quasi-Newton
one presented in [11].
The present algorithm is simple to code, robust and efficient. It does not involve penalty
functions, active set strategies or quadratic programming subproblems. It merely requires the
solution of two internal linear systems with the same matrix at each iteration, followed by an
inexact line search. In practical applications, advantage can be taken of the structure of the
problem and of particularities of its functions to improve the computational efficiency. This
is the case of the Newton algorithm for linear and quadratic programming presented in [29],
the Newton algorithm for limit analysis of solids ([32]) and the Newton algorithm for stress
analysis of linear elastic solids in contact ([1, 31]).
Several problems in Engineering Optimization were solved using the present method. We
can mention applications in structural optimization ([13, 28]), fluid mechanics ([2, 4, 5]) and
multidisciplinary optimization with aerodynamics and electromagnetism ([2, 3]).
In the next section we discuss the basis of the method and then an algorithm for inequality
constraints is presented. The following sections are devoted to the implementation of first order,
quasi-Newton and Newton versions of the algorithm, to the solution of the internal systems
of equations and to the line search criteria and algorithms. Finally, an algorithm for equality
and inequality constraints is introduced and numerical results on a set of test problems are
shown.
2 The Feasible Direction Interior Point Method
To describe the basic ideas of this technique, we consider the inequality constrained optimization problem

$$\left.\begin{array}{ll} \mbox{minimize} & f(x) \\ \mbox{subject to} & g(x) \le 0. \end{array}\right\} \qquad (2.1)$$

We denote by $\nabla g(x) \in R^{n \times m}$ the matrix of derivatives of $g$ and call $\lambda \in R^m$ the vector of dual
variables, $L(x, \lambda) = f(x) + \lambda^t g(x)$ the Lagrangian and $H(x, \lambda) = \nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x)$ its
Hessian. $G(x)$ denotes a diagonal matrix such that $G_{ii}(x) = g_i(x)$.

The corresponding Karush-Kuhn-Tucker first order optimality conditions can be expressed
as

$$\nabla f(x) + \nabla g(x)\lambda = 0, \qquad (2.2)$$
$$G(x)\lambda = 0, \qquad (2.3)$$
$$\lambda \ge 0, \qquad (2.4)$$
$$g(x) \le 0. \qquad (2.5)$$
A Newton iteration to solve the nonlinear system of equations (2.2)-(2.3) in $(x, \lambda)$ is stated as

$$\left[\begin{array}{cc} B & \nabla g(x^k) \\ \Lambda^k \nabla g^t(x^k) & G(x^k) \end{array}\right] \left[\begin{array}{c} x_0^{k+1} - x^k \\ \lambda_0^{k+1} - \lambda^k \end{array}\right] = -\left[\begin{array}{c} \nabla f(x^k) + \nabla g(x^k)\lambda^k \\ G(x^k)\lambda^k \end{array}\right], \qquad (2.6)$$

where $(x^k, \lambda^k)$ is the starting point of the iteration, $(x_0^{k+1}, \lambda_0^{k+1})$ is the new estimate, $B = H(x^k, \lambda^k)$ and $\Lambda^k$ is a diagonal matrix with $\Lambda_{ii}^k \equiv \lambda_i^k$.

In our approach, we can also take $B$ equal to a quasi-Newton estimate of $H(x, \lambda)$ or to the
identity matrix. Depending on the way in which the symmetric matrix $B \in R^{n \times n}$ is defined,
(2.6) represents a second order, a quasi-Newton or a first order iteration. However, to have
global convergence, $B$ must be positive definite (see [14]).
Iterations (2.6) are modified in a way to get, for a given interior pair $(x^k, \lambda^k)$, a new interior
estimate with a better objective. With this purpose, a direction $d_0^k = x_0^{k+1} - x^k$ in the primal
space is defined. Then, from (2.6), we have the linear system in $(d_0^k, \lambda_0^{k+1})$

$$B d_0^k + \nabla g(x^k)\lambda_0^{k+1} = -\nabla f(x^k), \qquad (2.7)$$
$$\Lambda^k \nabla g^t(x^k) d_0^k + G(x^k)\lambda_0^{k+1} = 0. \qquad (2.8)$$

It can be proved that $d_0^k$ is a descent direction of $f$. However, $d_0^k$ is not useful as a search
direction since it is not necessarily feasible. This is due to the fact that, as any constraint goes
to zero, (2.8) forces $d_0^k$ to tend to a direction tangent to the feasible set (see [14]).
To obtain a feasible direction, we add a negative vector to the right-hand side of (2.8) and
define the new linear system in $d^k$ and $\lambda^{k+1}$:

$$B d^k + \nabla g(x^k)\lambda^{k+1} = -\nabla f(x^k), \qquad (2.9)$$
$$\Lambda^k \nabla g^t(x^k) d^k + G(x^k)\lambda^{k+1} = -\rho^k \lambda^k, \qquad (2.10)$$

where $\rho^k > 0$, $d^k$ is the new direction and $\lambda^{k+1}$ is the new estimate of $\lambda$. We have now that $d^k$ is a feasible direction, since
$\nabla g_i^t(x^k) d^k = -\rho^k < 0$ for the active constraints.

[Figure 1: Search direction]

The inclusion of a negative number in the right-hand side of (2.8) produces a deflection of
$d_0^k$, proportional to $\rho^k$, towards the interior of the feasible region. To ensure that $d^k$ is also a
descent direction, we establish an upper bound on $\rho^k$ in order to ensure that

$$(d^k)^t \nabla f(x^k) \le \alpha\, (d_0^k)^t \nabla f(x^k), \quad \alpha \in (0, 1), \qquad (2.11)$$

which implies $(d^k)^t \nabla f(x^k) < 0$. Thus, $d^k$ is a feasible descent direction.

In general, the rate of descent of $f$ along $d^k$ will be smaller than along $d_0^k$. This is a price
that we pay for obtaining a feasible descent direction.
To obtain the upper bound on $\rho^k$, we solve the linear system in $(d_1^k, \lambda_1^{k+1})$

$$B d_1^k + \nabla g(x^k)\lambda_1^{k+1} = 0, \qquad (2.12)$$
$$\Lambda^k \nabla g^t(x^k) d_1^k + G(x^k)\lambda_1^{k+1} = -\lambda^k. \qquad (2.13)$$

Since $d^k = d_0^k + \rho^k d_1^k$, we have that, if

$$\rho^k < (\alpha - 1)\,\frac{(d_0^k)^t \nabla f(x^k)}{(d_1^k)^t \nabla f(x^k)}, \qquad (2.14)$$

then (2.11) holds.
To determine a new primal point, an inexact line search along $d^k$ is done, looking for a new
interior point $x^k + t^k d^k$ with a satisfactory decrease of the function. Different rules can be
employed to define the new positive dual variables $\lambda^{k+1}$.

In Figure 1, the search direction of an optimization problem with two design variables and
one constraint is illustrated. At $x^k$ on the boundary, the descent direction $d_0$ is tangent to the
constraint. Even if in this example we can prove that $d_1$ is orthogonal to $d_0$, in general we can
only say that $d_1$ is in the subspace orthogonal to the active constraints at $x^k$. Since $d_1$ points
to the interior of the feasible domain, it improves feasibility.
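The deflection can be reproduced in a few lines. The sketch below, in Python with NumPy, is our own toy illustration (the quadratic constraint, the point $x = (0.9, 0)$ and the values $\alpha = 0.7$, $\varphi = 1$ are choices made here, not taken from the paper): it solves (2.7)-(2.8) and (2.12)-(2.13) with $B = I$ and checks that the deflected direction still satisfies (2.11).

    # Toy sketch of the deflection: minimize f(x) = -x1 on the unit disk
    # g(x) = x1^2 + x2^2 - 1 <= 0, at the interior point x = (0.9, 0).
    import numpy as np

    x, lam = np.array([0.9, 0.0]), np.array([1.0])
    grad_f = np.array([-1.0, 0.0])                 # gradient of f(x) = -x1
    A = np.array([[2 * x[0]], [2 * x[1]]])         # n x m matrix grad g(x)
    G = np.diag([x[0]**2 + x[1]**2 - 1.0])         # G(x): diagonal of g(x)
    B = np.eye(2)                                  # first order choice B = I

    # The systems (2.7)-(2.8) and (2.12)-(2.13) share the same matrix.
    M = np.block([[B, A], [np.diag(lam) @ A.T, G]])
    d0 = np.linalg.solve(M, np.r_[-grad_f, np.zeros(1)])[:2]   # descent direction
    d1 = np.linalg.solve(M, np.r_[np.zeros(2), -lam])[:2]      # deflection direction

    alpha, phi = 0.7, 1.0                          # our parameter choices
    rho = phi * (d0 @ d0)                          # cap used later in (3.5)-(3.6)
    if d1 @ grad_f > 0:                            # bound (2.14)
        rho = min(rho, (alpha - 1) * (d0 @ grad_f) / (d1 @ grad_f))
    d = d0 + rho * d1

    print("descent (2.11):", d @ grad_f, "<=", alpha * (d0 @ grad_f))
    print("deflection:", A.T @ d, "<", A.T @ d0)   # d tilted toward the interior

Since $\nabla g^t(x) d_1 < 0$ here, the deflection decreases $\nabla g^t(x) d$, tilting the search direction away from the constraint boundary.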
3 A Basic Algorithm
Based on the ideas presented above, we state the following algorithm for inequality constrained
optimization:
FEASIBLE DIRECTION INTERIOR POINT ALGORITHM
Inequality Constraints
Parameters. $\alpha \in (0, 1)$ and positive $\varphi$, $\varepsilon$, $\bar\varepsilon$ and $\lambda^I$.

Data. Initialize $x$ such that $g(x) < 0$, $\lambda > 0$ and $B \in R^{n \times n}$ symmetric and positive definite.

Step 1. Computation of the search direction $d$.

i) Solve the linear system for $(d_0, \lambda_0)$:

$$B d_0 + \nabla g(x)\lambda_0 = -\nabla f(x), \qquad (3.1)$$
$$\Lambda \nabla g^t(x) d_0 + G(x)\lambda_0 = 0. \qquad (3.2)$$

If $d_0 = 0$, stop.

ii) Solve the linear system for $(d_1, \lambda_1)$:

$$B d_1 + \nabla g(x)\lambda_1 = 0, \qquad (3.3)$$
$$\Lambda \nabla g^t(x) d_1 + G(x)\lambda_1 = -\lambda. \qquad (3.4)$$

iii) If $d_1^t \nabla f(x) > 0$, set

$$\rho = \inf\,[\varphi \| d_0 \|^2,\ (\alpha - 1)\, d_0^t \nabla f(x) / d_1^t \nabla f(x)]. \qquad (3.5)$$

Otherwise, set

$$\rho = \varphi \| d_0 \|^2. \qquad (3.6)$$

iv) Compute the search direction

$$d = d_0 + \rho d_1. \qquad (3.7)$$
Step 2. Line search.

Find a step length $t$ satisfying a given constrained line search criterion on the objective
function $f$ and such that $g(x + td) < 0$.

Step 3. Updates.

i) Define a new $x$:
$$x = x + td.$$

ii) Define a new $\lambda$:
Set, for $i = 1, \ldots, m$,
$$\lambda_i := \sup\,[\lambda_{0i},\ \varepsilon \| d_0 \|^2]. \qquad (3.8)$$
If $g_i(x) \ge -\bar\varepsilon$ and $\lambda_i < \lambda^I$, set $\lambda_i = \lambda^I$.

iii) Update $B$ symmetric and positive definite.

iv) Go back to Step 1.
We assume that the updating rules for $B$ are stated in a way that:

Assumption. There are positive $\sigma_1$ and $\sigma_2$ such that
$$\sigma_1 \| d \|^2 \le d^t B d \le \sigma_2 \| d \|^2 \quad \mbox{for any } d \in R^n.$$
The new components of $\lambda$ are a second order perturbation of the components of $\lambda_0$ given
by Newton's iteration (2.7). If $\bar\varepsilon$ and $\lambda^I$ are taken small enough, then after a finite number
of iterations $\lambda_i$ becomes equal to $\lambda_{0i}$ for the active constraints. In Step 1, $\rho$ is bounded as in
(2.14) and, in addition, we do not allow it to grow faster than $\| d_0 \|^2$.

The theoretical analysis in [14] includes first a proof that the solutions of the linear systems
(3.1)-(3.2) and (3.3)-(3.4) are unique, provided that the vectors $\nabla h_i(x)$, for $i = 1, 2, \ldots, p$,
and $\nabla g_i(x)$, for $i \in I(x)$, are linearly independent. Then, it is shown that any sequence $\{x^k\}$
generated by the algorithm converges to a Karush-Kuhn-Tucker point of the problem, for any
way of updating $B$, if the previous assumption is true. We also have that $(x^k, \lambda^k)$ converges to
a Karush-Kuhn-Tucker pair $(x^*, \lambda^*)$. Updating rules for $B$ will be discussed in the next section.
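As an illustration only, the following Python sketch runs the algorithm with the first order choice $B = I$ on a small problem of our own (minimize $x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \le 0$); the parameter values and the simple Armijo-type backtracking used for Step 2 are assumptions made here, and the $\lambda^I$ safeguard of Step 3 is omitted for brevity.

    import numpy as np

    f      = lambda x: x[0]**2 + x[1]**2
    grad_f = lambda x: 2.0 * x
    g      = lambda x: np.array([1.0 - x[0] - x[1]])        # g(x) <= 0
    grad_g = lambda x: np.array([[-1.0], [-1.0]])           # n x m

    n, m = 2, 1
    alpha, phi, eps, nu, eta1 = 0.7, 1.0, 0.1, 0.5, 0.1     # our choices
    x, lam, B = np.array([2.0, 1.0]), np.ones(m), np.eye(n)  # interior start

    for it in range(100):
        A = grad_g(x)
        M = np.block([[B, A], [np.diag(lam) @ A.T, np.diag(g(x))]])
        s = np.linalg.solve(M, np.r_[-grad_f(x), np.zeros(m)])
        d0, lam0 = s[:n], s[n:]
        if d0 @ d0 < 1e-16:                                 # d0 = 0: stop (Step 1 i)
            break
        d1 = np.linalg.solve(M, np.r_[np.zeros(n), -lam])[:n]
        rho = phi * (d0 @ d0)                               # rules (3.5)-(3.6)
        if d1 @ grad_f(x) > 0:
            rho = min(rho, (alpha - 1) * (d0 @ grad_f(x)) / (d1 @ grad_f(x)))
        d = d0 + rho * d1                                   # (3.7)
        t = 1.0                                             # Step 2: feasible descent step
        while (g(x + t * d) >= 0).any() or f(x + t * d) > f(x) + t * eta1 * (grad_f(x) @ d):
            t *= nu
        x = x + t * d
        lam = np.maximum(lam0, eps * (d0 @ d0))             # update (3.8)

    print(it, x, lam)    # expect x near (0.5, 0.5) and lam near 1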
4 Different Ways of Computing B
In the present method $B$ can be equal to the second derivative of the Lagrangian, to a quasi-Newton estimate or to the identity matrix. In this section, these alternatives will be discussed.
FIRST ORDER ALGORITHMS.
Taking
$$B \equiv I,$$
we have an extension of the gradient method. In particular, when all the constraints are active, the
search direction becomes the same as that of the projected gradient method [15]. There are no theoretical
results about the speed of convergence, but the rate is probably no better than linear. Since the
computational effort and memory storage are smaller, this algorithm can be efficient in engineering
applications that do not need a very precise solution.
QUASI-NEWTON ALGORITHMS.
In quasi-Newton methods for constrained optimization, $B$ is an approximation of the Hessian
of the Lagrangian $H(x, \lambda)$. Then, it should be possible to obtain $B$ with the same updating
rules as in unconstrained optimization, but taking $\nabla_x L(x, \lambda)$ instead of $\nabla f(x)$. However, since
$H(x, \lambda)$ is not necessarily positive definite at a K-K-T point, it is not always possible to get $B$
positive definite, as required by the present technique. To overcome this difficulty, we employ
the BFGS updating rule as modified by Powell [27]:

Take
$$\delta = x^{k+1} - x^k$$
and
$$\gamma = \nabla_x L(x^{k+1}, \lambda_0^k) - \nabla_x L(x^k, \lambda_0^k).$$
If
$$\delta^t \gamma < 0.2\, \delta^t B^k \delta,$$
then compute
$$\theta = \frac{0.8\, \delta^t B^k \delta}{\delta^t B^k \delta - \delta^t \gamma}$$
and take
$$\gamma := \theta \gamma + (1 - \theta) B^k \delta.$$
Set
$$B^{k+1} := B^k + \frac{\gamma \gamma^t}{\delta^t \gamma} - \frac{B^k \delta \delta^t B^k}{\delta^t B^k \delta}.$$

In [14] it was proved that the convergence is two-step superlinear, provided that a unit step
length is obtained after a finite number of iterations.
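A direct transcription of this update in Python (a sketch; grad_L, a user-supplied gradient of the Lagrangian, is a name assumed here) could read:

    import numpy as np

    def damped_bfgs(B, x_old, x_new, lam0, grad_L):
        """Powell's damped BFGS update; keeps B positive definite."""
        delta = x_new - x_old
        gamma = grad_L(x_new, lam0) - grad_L(x_old, lam0)
        dBd = delta @ B @ delta
        if delta @ gamma < 0.2 * dBd:                  # weak curvature: damp gamma
            theta = 0.8 * dBd / (dBd - delta @ gamma)
            gamma = theta * gamma + (1.0 - theta) * (B @ delta)
        Bd = B @ delta
        return B + np.outer(gamma, gamma) / (delta @ gamma) \
                 - np.outer(Bd, Bd) / dBd

After damping, $\delta^t \gamma = 0.2\, \delta^t B^k \delta > 0$, so the updated matrix remains positive definite.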
NEWTON ALGORITHMS.
To have a Newton algorithm we take $B = H(x^k, \lambda^k)$ as in iteration (2.6). However, as
mentioned above, $B$ must be positive definite. Since this is in general not true, Newton versions
of the present method can be obtained for particular applications only. This is the case of the
algorithm for linear stress analysis with contact described in [1].
5 The Internal Linear Systems
The matrices corresponding to the linear systems (3.1)-(3.2) and (3.3)-(3.4) solved in Step 1
are neither symmetric nor definite. However, in [25] it was proved that the solution is unique,
provided the following regularity assumption is true at $x$:

Assumption (Regularity Condition). The vectors $\nabla g_i(x)$, for $i$ such that
$g_i(x) = 0$, are linearly independent.

This condition must be checked in practical applications. There are several examples where
the previous assumption is not true. In the case when there are sets of constraints that take
the same values due to symmetries of the problem, only one constraint of each set must be
considered.
It follows from (3.1) that
$$d_0 = -B^{-1}[\nabla f(x) + \nabla g(x)\lambda_0] \qquad (5.1)$$
and, by substitution in (3.2),
$$[\nabla g^t(x) B^{-1} \nabla g(x) - \Lambda^{-1} G(x)]\lambda_0 = -\nabla g^t(x) B^{-1} \nabla f(x). \qquad (5.2)$$

Then, instead of (3.1)-(3.2), we can get $\lambda_0$ by solving (5.2), and $d_0$ by substitution in (5.1).
In a similar way, $\lambda_1$ can be obtained by solving
$$[\nabla g^t(x) B^{-1} \nabla g(x) - \Lambda^{-1} G(x)]\lambda_1 = e, \qquad (5.3)$$
where $e = [1, 1, \ldots, 1]^t$, and $d_1$ follows from
$$d_1 = -B^{-1} \nabla g(x)\lambda_1. \qquad (5.4)$$

Both systems (5.2) and (5.3), with $m$ equations, have the same matrix. This matrix is symmetric
and positive definite when the regularity condition is true, [11] and [12]. However, the condition
number becomes worse as some of the components of $\lambda$ grow. To compute the matrices we
need $B^{-1}$, or the products $B^{-1}\nabla g(x)$ and $B^{-1}\nabla f(x)$. In quasi-Newton algorithms, these can
be easily obtained by working with an approximation of $H^{-1}(x)$.
On the other hand, it follows from (3.2) that
$$\lambda_0 = -G^{-1}(x) \Lambda \nabla g^t(x) d_0 \qquad (5.5)$$
and, by substitution in (3.1), we get a system in $d_0$:
$$[B - \nabla g(x) G^{-1}(x) \Lambda \nabla g^t(x)] d_0 = -\nabla f(x). \qquad (5.6)$$

Once $d_0$ is computed, $\lambda_0$ follows from (5.5). In a similar way, we have
$$\lambda_1 = -G^{-1}(x) \Lambda \nabla g^t(x) d_1 - G^{-1}(x)\lambda \qquad (5.7)$$
and
$$[B - \nabla g(x) G^{-1}(x) \Lambda \nabla g^t(x)] d_1 = \nabla g(x) G^{-1}(x)\lambda. \qquad (5.8)$$

The linear systems (5.6) and (5.8) have the size $n$ of the number of variables, and they are
symmetric and positive definite. This is true since $B$ is positive definite and $\nabla g(x) G^{-1}(x) \Lambda \nabla g^t(x)$
is negative semidefinite. When some constraint goes to zero, however, the systems become strongly
ill-conditioned.
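A sketch of the dual-space route (5.1)-(5.4) in Python, assuming a quasi-Newton setting where an approximation Binv of the inverse Hessian is available (the function and argument names are ours); note that the two systems share one Cholesky factorization:

    import numpy as np

    def directions_dual(Binv, grad_f, A, gvals, lam):
        """A: n x m matrix grad g(x); gvals: g(x) < 0; lam: lambda > 0."""
        BiA = Binv @ A
        W = A.T @ BiA - np.diag(gvals / lam)        # matrix of (5.2) and (5.3)
        C = np.linalg.cholesky(W)                   # SPD under the regularity condition
        solve = lambda b: np.linalg.solve(C.T, np.linalg.solve(C, b))
        lam0 = solve(-A.T @ (Binv @ grad_f))        # (5.2)
        d0 = -Binv @ grad_f - BiA @ lam0            # (5.1)
        lam1 = solve(np.ones(len(lam)))             # (5.3), right-hand side e
        d1 = -BiA @ lam1                            # (5.4)
        return d0, lam0, d1, lam1

Only products with Binv appear, which is why working with an approximation of $H^{-1}$ is convenient; the primal-space alternative (5.6), (5.8) is preferable when the number of constraints exceeds the number of variables.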
6 Line Search Procedures for Interior Point Algorithms
Interior point algorithms need a constrained line search. In the present method, once a search
direction is obtained, the first idea consists of finding $t$ that minimizes $f(x^k + td^k)$ subject to
$g(x^k + td^k) < 0$. Instead of performing an exact minimization on $t$, it is much more efficient to
employ inexact line search techniques. For that, we have to state a criterion to know whether
a step length is acceptable or not, and to define an iterative algorithm to obtain an acceptable step length.
In [15] we introduced inexact line search criteria based on Armijo's and on Wolfe's criteria for
unconstrained optimization.
ARMIJO'S CONSTRAINED LINE SEARCH

Define the step length $t$ as the first number of the sequence $\{1, \nu, \nu^2, \nu^3, \ldots\}$ satisfying
$$f(x + td) \le f(x) + t \eta_1 \nabla f^t(x) d \qquad (6.1)$$
and
$$g(x + td) < 0, \qquad (6.2)$$
where $\eta_1 \in (0, 1)$ and $\nu \in (0, 1)$ also.
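In Python, with f, g and grad_f as callables returning a float and NumPy arrays (names assumed here), this criterion becomes a short backtracking loop:

    def armijo_constrained(f, g, grad_f, x, d, eta1=0.1, nu=0.5):
        """Step lengths t = 1, nu, nu^2, ... until (6.1) and (6.2) hold."""
        t, slope = 1.0, eta1 * (grad_f(x) @ d)    # d is a descent direction
        while (g(x + t * d) >= 0).any() or f(x + t * d) > f(x) + t * slope:
            t *= nu                               # reject t: try the next one
        return t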
WOLFE'S CONSTRAINED LINE SEARCH CRITERION

Accept a step length $t$ if (6.1) and (6.2) are true and at least one of the following $m + 1$
conditions holds:
$$\nabla f^t(x + td)\, d \ge \eta_2 \nabla f^t(x)\, d \qquad (6.3)$$
or
$$g_i(x + td) \ge \gamma\, g_i(x), \quad i = 1, 2, \ldots, m, \qquad (6.4)$$
where now $\eta_1 \in (0, 1/2)$, $\eta_2 \in (\eta_1, 1)$ and $\gamma \in (0, 1)$.
Conditions (6.1) and (6.2) define upper bounds on the step length in both criteria and, in
Wolfe's criterion, a lower bound is given by one of the conditions (6.3) and (6.4).

A step length satisfying Wolfe's criterion for interior point algorithms can be obtained
iteratively in a similar way as in [16]. Given an initial $t$, if it is too short, extrapolations are
done until a good or a too long step is obtained. If a too long step was already obtained,
interpolations based on the longest short step and the shortest long step are done, until the
criterion is satisfied. Since the function and the directional derivative are evaluated for each
new $t$, cubic interpolations of $f$ can be done. As the criterion of acceptance is quite wide, the
process generally requires very few iterations.
A LINE SEARCH ALGORITHM WITH WOLFE'S CRITERION

Parameters. $\eta_1 \in (0, 0.5)$, $\eta_2 \in (\eta_1, 1)$ and $\gamma \in (0, 1)$.

Data. Define an initial estimate of the step length, $t > 0$. Set $t_R = t_L = 0$.

Step 1. Test for the upper bound on $t$.
If
$$f(x + td) \le f(x) + t \eta_1 \nabla f^t(x) d$$
and
$$g(x + td) < 0,$$
go to Step 2. Else, go to Step 4.

Step 2. Test for the lower bound on $t$.
If
$$\nabla f^t(x + td)\, d \ge \eta_2 \nabla f^t(x)\, d,$$
or any
$$g_i(x + td) \ge \gamma\, g_i(x) \quad \mbox{for } i = 1, 2, \ldots, m,$$
then $t$ verifies Wolfe's criterion; STOP.
Else, go to Step 3.

Step 3. Get a longer $t$.
Set $t_L = t$.
i) If $t_R = 0$, find a new $t$ by extrapolation based on $(0, t)$.
ii) If $t_R > 0$, find a new $t$ by interpolation in $(t, t_R)$.
Return to Step 1.

Step 4. Get a shorter $t$.
Set $t_R = t$.
Find a new $t$ by interpolation in $(t_L, t)$.
Return to Step 1.
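A sketch of this algorithm in Python, with plain doubling and bisection standing in for the extrapolations and cubic interpolations used in our implementation, and with default parameter values chosen here for illustration:

    import numpy as np

    def wolfe_constrained(f, g, grad_f, x, d, eta1=0.1, eta2=0.7, gamma=0.5, t=1.0):
        tL, tR = 0.0, 0.0
        f0, g0, slope = f(x), g(x), grad_f(x) @ d
        for _ in range(50):
            xt = x + t * d
            # Step 1: upper-bound tests (6.1) and (6.2).
            if f(xt) > f0 + t * eta1 * slope or (g(xt) >= 0).any():
                tR = t                            # Step 4: step too long
                t = 0.5 * (tL + t)                # interpolation in (tL, t)
                continue
            # Step 2: lower-bound tests (6.3) and (6.4).
            if grad_f(xt) @ d >= eta2 * slope or (g(xt) >= gamma * g0).any():
                return t                          # Wolfe's criterion holds
            tL = t                                # Step 3: step too short
            t = 2.0 * t if tR == 0.0 else 0.5 * (t + tR)
        return t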
7 Equality and Inequality Constrained Optimization
In what follows, we describe an approach to extend the present technique in order to solve
the equality and inequality constrained problem (1.1). The resulting algorithm requires an
initial point at the interior of the inequality constraints, but not necessarily verifying the equality
constraints. It generates a sequence $\{x^k\}$, also interior with respect to the inequalities, that converges
to a Karush-Kuhn-Tucker point. In general, the equalities are only verified at the limit.

Karush-Kuhn-Tucker first order optimality conditions of problem (1.1) can be expressed as
follows:
$$\nabla f(x) + \nabla g(x)\lambda + \nabla h(x)\mu = 0, \qquad (7.1)$$
$$G(x)\lambda = 0, \qquad (7.2)$$
$$h(x) = 0, \qquad (7.3)$$
$$g(x) \le 0 \qquad (7.4)$$
and
$$\lambda \ge 0, \qquad (7.5)$$

where $\mu \in R^p$ is the vector of dual variables corresponding to the equality constraints. Now, the
Lagrangian is $L(x, \lambda, \mu) = f(x) + \lambda^t g(x) + \mu^t h(x)$, and its second derivative becomes

$$H(x, \lambda, \mu) = \nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x) + \sum_{i=1}^p \mu_i \nabla^2 h_i(x).$$
A Newton iteration for the solution of (7.1) to (7.3) is defined by the following system:

$$\left[\begin{array}{ccc} B & \nabla g(x^k) & \nabla h(x^k) \\ \Lambda^k \nabla g^t(x^k) & G(x^k) & 0 \\ \nabla h^t(x^k) & 0 & 0 \end{array}\right] \left[\begin{array}{c} x_0^{k+1} - x^k \\ \lambda_0^{k+1} - \lambda^k \\ \mu_0^{k+1} - \mu^k \end{array}\right] = -\left[\begin{array}{c} \nabla f(x^k) + \nabla g(x^k)\lambda^k + \nabla h(x^k)\mu^k \\ G(x^k)\lambda^k \\ h(x^k) \end{array}\right] \qquad (7.6)$$

with $B = H(x^k, \lambda^k, \mu^k)$; $(x^k, \lambda^k, \mu^k)$ is the current point and $(x_0^{k+1}, \lambda_0^{k+1}, \mu_0^{k+1})$ a new estimate.
If we define $d_0^k = x_0^{k+1} - x^k$, then (7.6) becomes

$$B d_0^k + \nabla g(x^k)\lambda_0^{k+1} + \nabla h(x^k)\mu_0^{k+1} = -\nabla f(x^k), \qquad (7.7)$$
$$\Lambda^k \nabla g^t(x^k) d_0^k + G(x^k)\lambda_0^{k+1} = 0 \qquad (7.8)$$
and
$$\nabla h^t(x^k) d_0^k = -h(x^k), \qquad (7.9)$$

which is independent of the current value of $\mu$. As before, we can deduce that $d_0^k$ is not useful
as a search direction, since it is not always a feasible direction.
When only inequality constraints are considered, since all the iterates are feasible, we know
that a minimum is approached as the objective decreases. In the present problem, an increase of
the function can be necessary to satisfy the equality constraints. Then we need an appropriate
cost function for the line search. With this objective we consider the function

$$c(x) = f(x) + \sum_{i=1}^p c_i\, |h_i(x)|, \qquad (7.10)$$

where the $c_i$ are positive constants. It can be shown that, if the $c_i$ are large enough, then $c(x)$ is an
exact penalty function of the equality constraints [19]. In other words, there exists a finite $c$
such that the minimum of $c$, subject only to the inequality constraints, occurs at the solution of
problem (1.1). The use of $c$ as a penalty function is then numerically very advantageous,
since it does not require parameters going to infinity. On the other hand, $c$ has no derivatives
at points where there are active equality constraints.

In [14] it is shown that, when $x^k$ verifies the inequality constraints and $c$ is such that

$$sg[h_i(x^k)]\,(c_i + \mu_{0i}^k) < 0, \quad i = 1, 2, \ldots, p, \qquad (7.11)$$

where $sg(\cdot) = (\cdot)/|(\cdot)|$, then $d_0^k$ is a descent direction of $c(x)$.
In the following, we state an algorithm that uses $c$ in the line search, but avoids the points where
this function is nonsmooth. It generates a sequence with decreasing values of $c$, at the interior
of the inequality constraints and such that the equality constraints are negative. With this purpose, the
system (7.7)-(7.9) is modified in a way to obtain a feasible direction of the inequalities that
points to the negative side of the equalities and, at the same time, is a descent direction of $c$.
FEASIBLE DIRECTION INTERIOR POINT ALGORITHM
Equality and Inequality Constraints

Parameters. $\alpha \in (0, 1)$ and positive $\varphi$, $\varepsilon$, $\bar\varepsilon$ and $\lambda^I$.

Data. Initialize $x$ such that $g(x) < 0$ and $h(x) < 0$, $\lambda > 0$, $c \in R^p$ and $B \in R^{n \times n}$ symmetric and
positive definite.

Step 1. Computation of the search direction $d$.

i) Solve the linear system for $(d_0, \lambda_0, \mu_0)$:

$$\begin{array}{rcl} B d_0 + \nabla g(x)\lambda_0 + \nabla h(x)\mu_0 & = & -\nabla f(x), \\ \Lambda \nabla g^t(x) d_0 + G(x)\lambda_0 & = & 0, \\ \nabla h^t(x) d_0 & = & -h(x). \end{array} \qquad (7.12)$$

If $d_0 = 0$, stop.

ii) Compute $(d_1, \lambda_1, \mu_1)$ by solving the linear system

$$\begin{array}{rcl} B d_1 + \nabla g(x)\lambda_1 + \nabla h(x)\mu_1 & = & 0, \\ \Lambda \nabla g^t(x) d_1 + G(x)\lambda_1 & = & -\lambda, \end{array} \qquad (7.13)$$
$$\nabla h^t(x) d_1 = -e, \qquad (7.14)$$

where $e \equiv [1, 1, \ldots, 1]^t$.

iii) If $c_i < -1.2\, \mu_{0i}$, then set $c_i = -2\, \mu_{0i}$, $i = 1, \ldots, p$.

iv) If $d_1^t \nabla c(x) > 0$, set
$$\rho = \min\,[\varphi \| d_0 \|^2,\ (\alpha - 1)\, d_0^t \nabla c(x) / d_1^t \nabla c(x)].$$
Otherwise, set
$$\rho = \varphi \| d_0 \|^2.$$

v) Compute the search direction
$$d = d_0 + \rho d_1.$$

Step 2. Line search.

Find a step length $t$ satisfying a given constrained line search criterion on the auxiliary
function $c$ and such that $g(x + td) < 0$ and $h(x + td) < 0$.

Step 3. Updates.

i) Define a new $x$:
$$x = x + td.$$

ii) Define a new $\lambda$:
Set, for $i = 1, \ldots, m$,
$$\lambda_i := \sup\,[\lambda_{0i},\ \varepsilon \| d_0 \|^2]. \qquad (7.15)$$
If $g_i(x) \ge -\bar\varepsilon$ and $\lambda_i < \lambda^I$, set $\lambda_i = \lambda^I$.

iii) Update $B$ symmetric and positive definite.

iv) Go back to Step 1.
In the case when linear equality constraints are included and the initial point verifies them,
it follows from (7.12) that $d_0$ is a feasible direction for those constraints. Then, if $d_0$ is not deflected with
respect to the equality constraints, these remain always active.
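The sketch below assembles Step 1 of this algorithm in Python; the function and argument names (A for $\nabla g(x)$, Ah for $\nabla h(x)$, and so on) are our own, and the gradient of $c$ uses the fact that the iterates keep $h(x) < 0$, so that $|h_i| = -h_i$ there.

    import numpy as np

    def direction_eq_ineq(B, grad_f, A, gvals, lam, Ah, hvals, c, alpha=0.7, phi=1.0):
        """One Step 1 pass of the equality/inequality algorithm (a sketch)."""
        n, m, p = len(grad_f), len(lam), len(hvals)
        M = np.block([[B, A, Ah],
                      [np.diag(lam) @ A.T, np.diag(gvals), np.zeros((m, p))],
                      [Ah.T, np.zeros((p, m)), np.zeros((p, p))]])
        s0 = np.linalg.solve(M, np.r_[-grad_f, np.zeros(m), -hvals])   # (7.12)
        s1 = np.linalg.solve(M, np.r_[np.zeros(n + m), -np.ones(p)])   # (7.13)-(7.14)
        d0, lam0, mu0 = s0[:n], s0[n:n + m], s0[n + m:]
        d1 = s1[:n]
        c = np.where(c < -1.2 * mu0, -2.0 * mu0, c)      # step (iii)
        grad_c = grad_f - Ah @ c                         # gradient of c, since h(x) < 0
        rho = phi * (d0 @ d0)                            # step (iv)
        if d1 @ grad_c > 0:
            rho = min(rho, (alpha - 1) * (d0 @ grad_c) / (d1 @ grad_c))
        return d0 + rho * d1, lam0, mu0, c               # search direction d, step (v)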
8 Numerical Tests
We present in this section some numerical results obtained with a quasi-Newton algorithm of the family.
The line search employs Wolfe's criterion and makes cubic interpolations and extrapolations.
That means that the objective function, the constraints and their derivatives are computed for
each iteration in the line search.

We report here our experience with 115 examples, described in [17]. The results are summarized in Tables 1, 2 and 3, where

n = number of variables,
m = number of inequality constraints,
p = number of equality constraints,
iter = number of iterations,
evf = number of function and gradient evaluations,
cfv = computed objective function value,
ofv = optimum function value ([17]), and
sup |h_i| = maximum computed error of the equality constraints.
The inequality constraints are feasible at all computed points. All the problems were solved with the same set of
values for the involved parameters, taken as follows: $\alpha = 0.7$, $\eta_1 = 0.1$, $\eta_2 = 0.7$, $\gamma = 0.5$, $\varphi = 1$
and $c = 0$.
Ex.  n  m  p  iter  evf         cfv         ofv   sup|h_i|
  1  2  1  0    36   68      0.0000      0.0000
  2  2  1  0    29   47      0.0504      0.0504
  3  2  1  0    15   16      0.0000      0.0000
  4  2  2  0     4    5      2.6667      2.6667
  5  2  4  0     3    5     -1.9132     -1.9132
  6  2  0  1     6    8      0.0000      0.0000   0.33D-13
  7  2  0  1    10   11     -1.7321     -1.7321   0.33D-08
  8  2  0  2     6    9     -1.0000     -1.0000   0.97D-08
  9  2  0  1     5    6     -0.5000     -0.5000   0.17D-14
 10  2  1  0    10   18     -1.0000     -1.0000
 11  2  1  0     6   10     -8.4985     -8.4985
 12  2  1  0     5   10    -30.0000    -30.0000
 13  2  3  0    30   46      1.0000      1.0000
 14  2  1  1     4    8      1.3935      1.3935   0.92D-08
 15  2  3  0     5    7    306.5000    306.5000
 16  2  5  0    10   12      0.2500      0.2500
 17  2  5  0    23   31      1.0000      1.0000
 18  2  6  0    13   16      5.0000      5.0000
 19  2  6  0    19   39  -6961.8139  -6961.8139
 20  2  5  0    21   22     38.1987     38.1987
 21  2  5  0    15   17    -99.9600    -99.9600
 22  2  2  0     8   15      1.0000      1.0000
 23  2  9  0     7   14      2.0000      2.0000
 24  2  5  0    11   20     -1.0000     -1.0000
 25  3  6  0    92  104      0.0000      0.0000
 26  3  0  1    11   15      0.0000      0.0000   0.71D-04
 27  3  0  1    13   19      0.0401      0.0400   0.13D-05
 28  3  0  1     3    5      0.0000      0.0000   0.00D+00
 29  3  1  0     7   14    -22.6274    -22.6274
 30  3  7  0    11   12      1.0000      1.0000
 31  3  7  0     6    8      6.0000      6.0000
 32  3  4  1     4    7      1.0000      1.0000   0.22D-15
 33  3  6  0    12   20     -4.5858     -4.5858
 34  3  8  0    18   34     -0.8340     -0.8340
 35  3  4  0     5    7      0.1111      0.1111
 36  3  7  0    12   15  -3300.0000  -3300.0000
 37  3  8  0    10   14  -3456.0000  -3456.0000
 38  4  8  0    16   29      0.0000      0.0000
 39  4  0  2    12   13     -1.0000     -1.0000   0.16D-08
 40  4  0  3     5    6     -0.2500     -0.2500   0.21D-10

Table 1: Numerical Results
Ex.  n   m  p  iter  evf         cfv         ofv   sup|h_i|
 41  4   8  1    32   37      1.9259      1.9259   0.13D-16
 42  4   0  2     7   10     13.8579     13.8579   0.70D-10
 43  4   3  0     8   16    -44.0000    -44.0000
 44  4  10  0    21   31    -15.0000    -15.0000
 45  5  10  0    34   35      1.0000      1.0000
 46  5   0  2    22   46      0.0000      0.0000   0.61D-07
 47  5   0  3    14   25      0.0000      0.0000   0.17D-08
 48  5   0  2     3    7      0.0000      0.0000   0.44D-15
 49  5   0  2    10   17      0.0000      0.0000   0.00D+00
 50  5   0  3    10   18      0.0000      0.0000   0.00D+00
 51  5   0  3     2    5      0.0000      0.0000   0.00D+00
 52  5   0  3     5    8      5.3266      5.3266   0.34D-15
 53  5  10  3     8   11      4.0930      4.0930   0.33D-15
 54  6  12  1    56   64     -0.8674     -0.9081   0.00D+00
 55  6   8  5     6    9      6.3333      6.3333   0.36D-11
 56  7   0  4     8   13     -3.4560     -3.4560   0.97D-07
 57  2   3  0    31   36      0.0285      0.0285
 58  2   5  0    11   16    100.9901          —
 59  2   7  0    12   14     -7.8042     -7.8042
 60  3   6  1     5    8      0.0326      0.0326   0.33D-07
 61  3   0  2     8   11   -143.6461   -143.6461   0.29D-11
 62  3   6  1     5    9   -26272.51   -26272.51   0.11D-15
 63  3   3  2     4    7    961.7152    961.7152   0.39D-06
 64  3   4  0    31   43   6299.8424   6299.8424
 65  3   7  0     8   13      0.9535      0.9535
 66  3   8  0     5    9      0.5182      0.5182
 67  3  20  0   118  263    -1162.00  -1162.0365
 70  4   9  0    35   43      0.0075      0.0075
 71  4   9  1    11   22     17.0180     17.0140   0.12D-05
 72  4  10  0   144  203    727.6794    727.6794
 73  4   6  1    14   19     29.8944     29.8944   0.68D-14
 74  4  10  3    28   45   5126.4981   5126.4981   0.34D-04
 75  4  10  3    30   40   5174.4129   5174.4129   0.37D-08
 76  4   7  0     7    9     -4.6818     -4.6818
 77  5   0  2    13   21      0.2415      0.2415   0.15D-08
 78  5   0  3     9   12     -2.9197     -2.9197   0.10D-08
 79  5   0  3     8   10      0.0788      0.0788   0.38D-08
 80  5  10  3     9   14      0.0539      0.0539   0.21D-09

Table 2: Numerical Results
Ex.   n   m  p  iter  evf          cfv          ofv   sup|h_i|
 81   5  10  3    15   25       0.0539       0.0539   0.17D-13
 83   5  16  0    13   15    -30665.54    -30665.54
 84   5  16  0    32   36     -5280335     -5280335
 85   5  48  0   172  216      -1.9051      -1.9051
 86   5  15  0    13   20     -32.3487     -32.3487
 87   6  12  4    60   95    8927.6078    8927.5977   0.10D-05
 88   2   5  0     6   15       1.3627       1.3627
 89   3   7  0     7   27       1.3627       1.3627
 90   4   9  0    10   24       1.3627       1.3627
 91   5  11  0    12   39       1.3627       1.3627
 92   6  13  0    10   33       1.3627       1.3627
 93   6   8  0    13   24     135.0760     135.0760
 95   6  16  0     4    7       0.0002       0.0156
 96   6  16  0     5   11       0.0002       0.0156
 97   6  16  0     3    6       0.0010       3.1358
 98   6  16  0     3    6       0.0010       3.1358
 99   7  14  2    14   22   -831080000   -831079891   0.41D-05
100   7   4  0    13   25     680.6301     680.6301
101   7  20  0    39   93    1809.7689    1809.7648
102   7  20  0    39   83     911.8806     911.8806
103   7  20  0    37   80     543.6680     543.6680
104   8  22  0    23   61       3.9512       3.9512
105   8  17  0    57   66    1138.4126    1138.4162
106   8  22  0   119  181    7049.3309    7049.3309
107   9   8  6    12   14    5055.0118    5055.0118   0.12D-10
108   9  14  0    21   30      -0.6749      -0.8660
109   9  20  6    47   56    5362.0693    5362.0693   0.43D-01
110  10  20  0     5    6     -45.7785     -45.7785
111  10  20  3   107  370     -47.7605     -47.7611   0.18D-05
112  10  10  3    18   26     -47.7611     -47.7076   0.46D-14
113  10   8  0    17   32      24.3063      24.3062
114  10  28  3   118  251     -1743.71   -1768.8070   0.34D-03
116  13  15  0    35   76      98.2996      97.5884
117  15  20  0    31   33      32.3487      32.3487
118  15  59  0    45   51     664.8204     664.8204
119  16  32  8    54   66     244.8997     244.8997   0.15D-13

Table 3: Numerical Results
9 Conclusions
The present algorithm is simple to code since it does not require the solution of quadratic
programming subproblems, but merely of two linear systems with the same matrix. We can
take advantage of the structure of this matrix to solve very large problems efficiently. Depending
on the information available about the problem to be solved, a first order, a quasi-Newton or
a Newton version of the algorithm can be employed. The numerical examples were solved very
efficiently and, it is important to remark, all of them were solved without any change in
the code and with the same parameters of the algorithm.
References
[1] Auatt, S. S., Borges, L. A. and Herskovits, J., An Interior Point Optimization Algorithm for Contact Problems in Linear Elasticity, Numerical Methods in Engineering '96,
Edited by Desideri, J. A., Le Tallec, P., Oñate, E., Periaux, J. and Stein, E., John Wiley
& Sons, 1996.

[2] Baron, F. J. and Pironneau, O., Multidisciplinary Optimal Design of a Wing Profile, Proceedings of STRUCTURAL OPTIMIZATION 93, Rio de Janeiro, Edited by J.
Herskovits, August 1993.
[3] Baron, F. J., Constrained Shape Optimization of Coupled Problems with Electromagnetic
Waves and Fluid Mechanics, (in Spanish), University of Malaga, Spain, PhD Thesis, 1994.

[4] Baron, F. J., Duffa, G., Carrere, F. and Le Tallec, P., Optimisation de forme
en aérodynamique, (in French), CHOCS, Revue scientifique et technique de la Direction
des Applications Militaires du CEA, France, 1994.

[5] Bijan, M. and Pironneau, O., New Tools for Optimum Shape Design, CFD Review,
Special Issue, 1995.
[6] Dew, M. C., A First Order Feasible Directions Algorithm for Constrained Optimization,
Technical Report No. 153, Numerical Optimization Centre, The Hatfield Polytechnic, P.
O. Box 109, College Lane, Hatfield, Hertfordshire AL10 9AB, U. K., 1985.

[7] Dew, M. C., A Feasible Directions Method for Constrained Optimization Based on the
Variable Metric Principle, Technical Report No. 155, Numerical Optimization Centre, The
Hatfield Polytechnic, P. O. Box 109, College Lane, Hatfield, Hertfordshire AL10 9AB, U.
K., 1985.
[8] Dennis, J. E. and Schnabel, R., Numerical Methods for Unconstrained Optimization
and Nonlinear Equations, Prentice-Hall, 1983.

[9] Dikin, I. I., About the Convergence of an Iterative Procedure (in Russian), Soviet
Mathematics Doklady, Vol. 8, pp. 674-675, 1967.
[10] Herskovits, J., A Two-Stage Feasible Directions Algorithm for Non-Linear Constrained
Optimization, Research Report No. 103, INRIA, BP 105, 78153 Le Chesnay CEDEX,
France, 1982.
[11] Herskovits, J., A Two-Stage Feasible Directions Algorithm Including Variable Metric Techniques for Nonlinear Optimization, Research Report No. 118, INRIA, BP 105, 78153 Le
Chesnay CEDEX, France, 1982.

[12] Herskovits, J., A Two-Stage Feasible Directions Algorithm for Nonlinear Constrained
Optimization, Mathematical Programming, Vol. 36, pp. 19-38, 1986.
[13] Herskovits, J. and Coelho, C. A. B., An Interior Point Algorithm for Structural Optimization Problems, in Computer Aided Optimum Design of Structures: Recent Advances,
Edited by C. A. Brebbia and S. Hernandez, Computational Mechanics Publications, Springer-Verlag, June 1989.
[14] Herskovits, J., An Interior Point Technique for Nonlinear Optimization, Research Report
No. 1808, INRIA, BP 105, 78153 Le Chesnay CEDEX, France, 1992.
[15] Herskovits, J., A View on Nonlinear Optimization, Advances in Structural Optimization, Edited by J. Herskovits, Kluwer Academic Publishers, Dordrecht, pp. 71-117, 1995.

[16] Hiriart-Urruty, J. B. and Lemarechal, C., Convex Analysis and Minimization Algorithms, Springer-Verlag, Berlin, Heidelberg, 1993.

[17] Hock, W. and Schittkowski, K., Test Examples for Nonlinear Programming Codes,
Lecture Notes in Economics and Mathematical Systems 187, Springer-Verlag, Berlin,
1981.
[18] Kojima, M., Mizuno, S. and Yoshise, A., A Primal-Dual Interior Point Method for
Linear Programming, Progress in Mathematical Programming: Interior-Point and Related
Methods, Edited by N. Megiddo, Springer-Verlag, New York, pp. 29-47, 1989.
[19] Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Addison-Wesley,
1984.

[20] Maratos, N., Exact Penalty Function Algorithms for Finite Dimensional Optimization
Problems, Ph.D. Thesis, Imperial College of Science and Technology, London, 1978.

[21] Mayne, D. Q. and Polak, E., Feasible Directions Algorithms for Optimization Problems with Equality and Inequality Constraints, Mathematical Programming, Vol. 11, pp.
67-80, 1976.
[22] Megiddo, N., Pathways to the Optimal Set in Linear Programming, Progress in Mathematical Programming: Interior-Point and Related Methods, Edited by N. Megiddo,
Springer-Verlag, New York, pp. 131-158, 1989.
[23] Monteiro, R. D. C., Adler, I. and Resende, M. G. C., A Polynomial-Time Primal-Dual Affine Scaling Algorithm for Linear and Convex Quadratic Programming and its
Power Series Extension, Mathematics of Operations Research, Vol. 15, pp. 191-214, 1990.
[24] Panier, E. R. and Tits, A. L., A Superlinearly Convergent Algorithm for Constrained
Optimization Problems, SIAM J. Control Optim., Vol. 25, pp. 934-950, 1987.
[25] Panier, E. R., Tits, A. L. and Herskovits, J., A QP-Free, Globally Convergent, Locally Superlinearly Convergent Algorithm for Inequality Constrained Optimization, SIAM
Journal on Control and Optimization, Vol. 26, pp. 788-810, 1988.
[26] Powell, M. J. D., The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations, in Nonlinear Programming 3, Edited by O. L. Mangasarian,
R. R. Meyer and S. M. Robinson, Academic Press, London, 1978.
[27] Powell, M. J. D., Variable Metric Methods for Constrained Optimization, in Mathematical Programming - The State of the Art, Edited by A. Bachem, M. Grötschel and B.
Korte, Springer-Verlag, Berlin, 1983.
[28] Santos, G., Feasible Directions Interior Point Algorithms for Engineering Optimization,
(in Portuguese), D.Sc. Dissertation, COPPE - Federal University of Rio de Janeiro, Mechanical Engineering Program, Caixa Postal 68503, 21945-970, Rio de Janeiro, Brazil,
1996.
[29] Tits, A. L. and Zhou, J. L., A Simple, Quadratically Convergent Interior Point Algorithm for Linear Programming and Convex Quadratic Programming, Large Scale Optimization: State of the Art, Edited by W. W. Hager, D. W. Hearn and P. M. Pardalos, Kluwer
Academic Publishers B.V., 1993.
[30] Vanderbei, R. J. and Lagarias, J. C., I. I. Dikin's Convergence Result for the Affine
Scaling Algorithm, Technical Report, AT&T Bell Laboratories, Murray Hill, NJ, 1988.
[31] Vautier, I., Salaun, M. and Herskovits, J., Application of an Interior Point Algorithm to the Modeling of Unilateral Contact Between Spot-Welded Shells, Proceedings of
STRUCTURAL OPTIMIZATION 93, Rio de Janeiro, Edited by J. Herskovits, 8/1993.
[32] Zouain, N. A., Herskovits, J., Borges, L. A. and Feijoo, R., An Iterative Algorithm
for Limit Analysis with Nonlinear Yield Functions, International Journal of Solids and
Structures, Vol. 30, No. 10, pp. 1397-1417, 1993.