Evans, J.P. and Gould, F.J. (1970). "Stability and Exponential Penalty Function Techniques in Nonlinear Programming."

† This work was supported by the Office of Naval Research, Contract No. N00014-67-A-0321-0003 (NR047-096).

* Graduate School of Business Administration and Curriculum in Operations Research and Systems Analysis, University of North Carolina at Chapel Hill.

** Department of Statistics and Curriculum in Operations Research and Systems Analysis, University of North Carolina at Chapel Hill.
STABILITY AND EXPONENTIAL PENALTY FUNCTION TECHNIQUES IN NONLINEAR PROGRAMMING†

by

J. P. Evans* and F. J. Gould**

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 723

November, 1970
STABILITY AND EXPONENTIAL PENALTY FUNCTION TECHNIQUES IN NONLINEAR PROGRAMMING†

by

J. P. Evans* and F. J. Gould**
In this paper, we develop the concept of a general exponential penalty
function as a method of solving inequality constrained optimization problems.
Relationships to the concept of stability in nonlinear programming are presented
together with results on approximate solutions and bounds on the optimal value
of a program.
Conditions are stated which guarantee that sequences of penalty
function maximizers will yield optimal solutions to the original programming
problem.
I. INTRODUCTION
Consider the general inequality constrained nonlinear programming problem

(P_b)   max f(x), subject to g_j(x) ≤ b_j, j = 1,...,m,

where f, g_j, j = 1,...,m, are real-valued functions defined on R^n. Gould [4] explored the use of nonlinear multiplier functions in extended Lagrangians associated with problem (P_b).
Define

(1.1)   L(x,Λ) = f(x) - Σ_{j=1}^m Λ_j(g_j(x)),

where Λ_j, j = 1,...,m, is a monotonically nondecreasing function of a single variable with values in the extended reals such that Λ_j(b_j) < ∞, j = 1,...,m. For such multiplier functions, the following result is valid: suppose Λ is given and x* satisfies

(1.2)   i) x* maximizes L(x,Λ) over R^n;
        ii) g_j(x*) ≤ b_j, j = 1,...,m, i.e., x* is feasible in (P_b);
        iii) Λ_j(g_j(x*)) = Λ_j(b_j), j = 1,...,m;

then x* is an optimal solution to (P_b). The monotonicity property of the Λ_j's causes the second term in (1.1) to be larger for larger values of g_j(x). Roughly speaking, the extended Lagrangian L(x,Λ) is a general penalty function which penalizes those x's which are feasible less than those which are infeasible.
In [2] Fiacco and McCormick cite a number of references to the use of an exponential type of penalty function, e.g. where the Λ_j(·) in (1.1) has an exponential form such as Λ_j(y) = βe^{αy} with α > 0, β > 0. These references include Motzkin [5], Goldstein and Kripke [3], and Zangwill [6].
In this paper, we define a class of general exponential penalty functions and employ the properties of these functions in order to identify approximate solutions to mathematical programs and to obtain certain convergence results in the same context. Relationships to stability in nonlinear programming [1] and to the general exterior penalty function of Fiacco and McCormick [2] are explored.
In the remainder of the paper, we will employ the following definitions and notation:

a) S_b = {x ∈ R^n : g_j(x) ≤ b_j, j = 1,...,m};

b) B = {b ∈ R^m : S_b ≠ ∅};

c) fsup: B → R ∪ {+∞}, the perturbation function defined by fsup(b) = sup[f(x) : x ∈ S_b];

d) A vector x̄ ∈ R^n is an ε-solution to (P_b) if, given ε > 0, x̄ satisfies i) x̄ ∈ S_b, ii) f(x̄) ≥ fsup(b) - ε;

e) S: B → 2^{R^n}, a point-to-set correspondence defined by S(b) = S_b;

f) z: B → R, the support function defined by

(1.3)   z(b) = f(x̄) + Σ_{j=1}^m [Λ_j(b_j) - Λ_j(g_j(x̄))],

where x̄ maximizes L(x,Λ) over R^n;

g) An m-vector of 1's is represented by e.
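As a concrete sketch of this notation (our own toy instance, not from the paper): for the one-dimensional problem max f(x) = x subject to g(x) = x ≤ b, we have fsup(b) = b, and the definitions above can be checked by brute-force grid search.

```python
import numpy as np

# Illustrative sketch (not from the paper): the notation of Section I for a
# tiny one-dimensional problem, max f(x) = x subject to g(x) = x <= b,
# evaluated by brute-force search over a grid standing in for R^n.

def f(x):
    return x

def g(x):
    return x

X = np.linspace(-3.0, 3.0, 60001)

def S(b):
    """The feasible set S_b = {x : g(x) <= b}, restricted to the grid."""
    return X[g(X) <= b]

def fsup(b):
    """Perturbation function fsup(b) = sup{f(x) : x in S_b}."""
    Sb = S(b)
    return f(Sb).max() if Sb.size else -np.inf

def is_eps_solution(x_bar, b, eps):
    """x_bar is an eps-solution to (P_b): feasible and within eps of fsup(b)."""
    return g(x_bar) <= b and f(x_bar) >= fsup(b) - eps

print(fsup(0.0))                          # approximately 0
print(is_eps_solution(-0.05, 0.0, 0.1))   # True: feasible and within 0.1 of fsup(0)
```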
II. GENERAL EXPONENTIAL PENALTY FUNCTIONS

The relationship between penalty function techniques and the concept of stability will be developed in the following exposition. First we observe that, given Λ satisfying the monotonicity and finiteness conditions stated above, if some point x̄ maximizes L(x,Λ) over R^n, then a point of the function fsup has been obtained, namely fsup(g(x̄)). This follows by replacing b by g(x̄) in the optimality conditions (1.2) above. But then it can be shown [4] that

fsup(b) ≤ z(b),   ∀ b ∈ B.
Thus after each maximization of the penalty function, the support function yields an upper bound on the value of fsup at b̄, the right-hand side in the original problem. Hence it is intuitively reasonable to employ the information from a succession of penalty function maximizations to choose new multiplier functions, Λ_j, such that the next penalty function maximizer will in some sense be closer to an optimal solution for (P_b̄). However, it should be clear that the success of this strategy will be sensitive to the behavior of the function fsup at arguments near b̄. Consider the situations shown in Figures 1 and 2 below.
[Figure 1 and Figure 2: sketches of the perturbation function near b = 0; in Figure 1 it is lower semi-continuous at 0, in Figure 2 upper semi-continuous at 0.]

In Figure 1, the fsup function is lower semi-continuous at 0. Intuitively, we would like to "approach b = 0 from below", because values of b > 0 yield poor estimates of fsup(0). This suggests that we would like to choose multiplier functions which increase rapidly for values of b near zero. In contrast, Figure 2 displays an fsup function that is upper semi-continuous at b = 0. Consequently, we wish to approach b = 0 from above, since values of b < 0 will yield poor estimates of fsup(0).
The continuity properties of fsup are closely related to properties of the point-to-set correspondence S defined in the preceding section. These properties were explored in [1] and can be summarized for our purposes here as follows. Suppose S_b̄ is compact and nonempty and that each constraint function, g_j, is continuous on R^n.

a. The mapping S is upper semi-continuous at b̄ if and only if there is a b̂ > b̄ such that S_b̂ is compact.¹

b. If S̊_b̄ = {x : g(x) < b̄} ≠ ∅, then the mapping S is lower semi-continuous at b̄ if and only if cl(S̊_b̄) = S_b̄ (where cl denotes closure).

c. If the mapping S is upper semi-continuous at b̄ and f(·) is upper semi-continuous, then fsup is upper semi-continuous at b̄.

d. If f(·) is lower semi-continuous and S is lower semi-continuous at b̄, then fsup is lower semi-continuous at b̄.

From these statements, it is clear that if f(·) is continuous, we can characterize fsup at b̄ from properties of the constraint map S. With this background, we will now define a class of general exponential multiplier functions which are designed to permit sufficient flexibility to deal with the situations in Figures 1 and 2.
Definition: Λ: R → R is a general exponential multiplier function for (P_b) if, given parameters α ≥ 1, β > 0, and δ ≥ 0,

Λ(y; α, β, δ) = β[h(y + δ)]^α,

where h: R → R is a continuous, strictly increasing function satisfying

i) h(t) → ∞ as t → ∞;
ii) h(0) = 1;
iii) h(g_j(x)) > 0 ∀ x ∈ R^n, j = 1,...,m.

Example 1: h(t) = e^t.

¹ The precise definitions of upper and lower semi-continuity for the mapping S need not concern us here; however, these are stated explicitly in [1].
Example 2: Suppose k_j = inf_{x∈R^n} g_j(x) > -∞, j = 1,...,m, and K > |min_j k_j|. Then define h(t) = (K+t)/K, and in the specification of Λ further restrict α to the odd integers.
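As a concrete sketch (our own illustration, not code from the paper), the two example choices of h and the resulting multiplier function Λ(y; α, β, δ) = β[h(y+δ)]^α can be written as:

```python
import math

# Sketch of the general exponential multiplier function of Section II:
# Lambda(y; alpha, beta, delta) = beta * h(y + delta)**alpha.
# The function names and demo values are ours, not the paper's.

def h_exp(t):
    """Example 1: h(t) = e^t (continuous, strictly increasing, h(0) = 1)."""
    return math.exp(t)

def make_h_linear(K):
    """Example 2: h(t) = (K + t)/K, for K exceeding |min_j inf g_j|;
    alpha must then be restricted to the odd integers."""
    return lambda t: (K + t) / K

def Lam(y, alpha, beta, delta, h=h_exp):
    return beta * h(y + delta) ** alpha

# The penalty grows without bound in alpha at infeasible values (y > 0),
# and stays at most beta at values y <= -delta:
print(Lam(0.5, 10, 1.0, 0.0))   # large
print(Lam(-0.2, 10, 1.0, 0.1))  # at most 1.0
```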
Part of the motivation for general exponential penalty functions can be seen from the property that for an x such that g_j(x) > b_j, Λ(g_j(x)) → ∞ as α → ∞. Thus an arbitrarily large penalty can be imposed on infeasible points. Correspondingly, with reference to Figure 1, by increasing α the support function can be given a large slope in the vicinity of b̄.

III. FEASIBLE AND NEAR-FEASIBLE PENALTY FUNCTION MAXIMIZERS
In this section, we show that under rather general conditions, given a value for the parameter β, the parameters α and δ can be manipulated so as to control the location of the penalty function maximizers. This leads to theorems on ε-solutions and convergence to optimal solutions. First, we make the following assumptions for the remainder of the exposition.
A-1. The multiplier function Λ(·) has the general exponential form defined above.

A-2. In (P_b), b = 0; S_0 is nonempty and compact, and f, g_1,...,g_m are continuous on R^n.

A-3. There is a scalar α₁ such that for each j = 1,...,m, the ratio f(x)/[h(g_j(x))]^{α₁} is bounded from above on R^n.
The last assumption is employed to guarantee that for an appropriate choice of α the general exponential penalty function

(3.1)   P(x; α, β, δ) = f(x) - Σ_{j=1}^m Λ(g_j(x); α, β, δ)

has a finite supremum. In the absence of some such condition, it is easy to construct examples for which the penalty function is unbounded on R^n. Now we present a collection of results related to the case portrayed in Figure 1 in which fsup is, at worst, only lower semi-continuous.
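For instance (our own illustration, with h(t) = e^t as in Example 1): take f(x) = x² and the single constraint g(x) = x ≤ 0 on R. Then f(x)/[h(g(x))]^α = x²e^{-αx} is unbounded above as x → -∞ for every α, so A-3 fails, and P(x; α, β, 0) = x² - βe^{αx} → ∞ as x → -∞. A quick numerical check:

```python
import math

# Our illustration of why assumption A-3 is needed: with f(x) = x**2,
# g(x) = x <= 0 and h(t) = e**t, the ratio f/h(g)**alpha is unbounded
# above, and the penalty function (3.1) has no finite supremum on R.

def P(x, alpha=2.0, beta=1.0):
    return x**2 - beta * math.exp(alpha * x)

values = [P(-10.0), P(-100.0), P(-1000.0)]
print(values)   # strictly increasing: the penalty function is unbounded above
```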
LEMMA 3.1: Let β > 0 and δ > 0 be given and suppose S_{-δe} = {x : g_j(x) ≤ -δ, j = 1,...,m} is nonempty. Then there exists a scalar ᾱ (which depends on β and δ) such that α > ᾱ implies that P(x; α, β, δ) has a global maximum and each such maximizer belongs to S_0.²
PROOF: By assumption A-3, there is an M such that for some α₁ and for j = 1,...,m,

f(x)/[h(g_j(x))]^{α₁} ≤ M,   each x ∈ R^n.

Let x̂ ∈ S_{-δe} and let c = f(x̂). Suppose x ∈ R^n - S_0; then for some j, g_j(x) > 0, so that h(g_j(x)+δ) > h(δ) > 1. Since for α > α₁

f(x)/[h(g_j(x)+δ)]^α ≤ M/[h(g_j(x)+δ)]^{α-α₁} < M/[h(δ)]^{α-α₁},

we can choose α₂ such that x ∈ R^n - S_0 and α > α₂ imply

f(x)/[h(g_j(x)+δ)]^α < β/2,

and we can choose α₃ such that α > α₃ yields

|c - βm|/[h(g_j(x)+δ)]^α ≤ |c - βm|/[h(δ)]^α < β/2.

This implies there is an ᾱ such that α > ᾱ yields

f(x)/[h(g_j(x)+δ)]^α + (βm - c)/[h(g_j(x)+δ)]^α < β,

which can be rewritten as

f(x) < β([h(g_j(x)+δ)]^α - m) + c.

Since we can always choose ᾱ large enough to insure that the coefficient of β above is positive, and since [h(g_j(x)+δ)]^α > 0 for every j = 1,...,m by property iii) of h and its monotonicity, we have, for all x ∈ R^n - S_0 and α > ᾱ,

f(x) - c < β(Σ_{j=1}^m [h(g_j(x)+δ)]^α - m).

Thus we have shown that there is an ᾱ, which depends on β and δ, such that α > ᾱ implies

(3.2)   P(x; α, β, δ) = f(x) - Σ_{j=1}^m Λ(g_j(x); α, β, δ) < c - mβ,   each x ∈ R^n - S_0.

Now since x̂ ∈ S_{-δe} ⊆ S_0, we have g_j(x̂) + δ ≤ 0 and hence h(g_j(x̂)+δ) ≤ h(0) = 1, j = 1,...,m; therefore

(3.3)   Σ_{j=1}^m Λ(g_j(x̂); α, β, δ) = Σ_{j=1}^m β[h(g_j(x̂)+δ)]^α ≤ mβ,

and furthermore, since S_0 is compact and f, g_1,...,g_m and h are continuous, P(x; α, β, δ) has a maximum on S_0, so that

c - mβ ≤ f(x̂) - Σ_{j=1}^m Λ(g_j(x̂); α, β, δ) ≤ max_{x∈S_0} P(x; α, β, δ).

Combining (3.2) and (3.3) we conclude that α > ᾱ implies

max_{x∈R^n} P(x; α, β, δ) = max_{x∈S_0} P(x; α, β, δ),

and (for α > ᾱ) no maximizer of P(x; α, β, δ) lies outside S_0.   Q.E.D.

² Here, as in the entire development, α is always subject to any restrictions assumed in the definition of Λ, as, for instance, in Example 2 of Section II.
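The effect described in Lemma 3.1 can be seen numerically in a one-dimensional sketch (our own toy example, with h(t) = e^t): maximize f(x) = x subject to g(x) = x ≤ 0, with δ > 0 fixed, so that P(x; α, β, δ) = x - βe^{α(x+δ)}. For small α the maximizer can be infeasible; once α is large enough, every maximizer lies in S_0.

```python
import numpy as np

# Toy check of Lemma 3.1 (our example, not the paper's): f(x) = x,
# g(x) = x <= 0, h(t) = e**t, so P(x) = x - beta*exp(alpha*(x + delta)).
# For large alpha the global maximizer over the grid lies in S_0 = {x <= 0}.

beta, delta = 1.0, 0.1
X = np.linspace(-2.0, 2.0, 400001)

def maximizer(alpha):
    P = X - beta * np.exp(alpha * (X + delta))
    return X[np.argmax(P)]

for alpha in [0.2, 1.0, 50.0]:
    print(alpha, maximizer(alpha))
# small alpha: the grid maximizer is infeasible; large alpha: it lies in S_0.
```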
Now we apply Lemma 3.1 to obtain a theorem on ε-solutions to (P_0) for the case in which the mapping S defined in Section I is lower semi-continuous at 0.
THEOREM 3.2: Let ε > 0 be given and assume the mapping S is lower semi-continuous at 0. Let {β_n} be any sequence of positive numbers such that β_n → 0. Then there is a d > 0 such that 0 < δ < d implies S_{-δe} is nonempty (such a d will exist if S is lower semi-continuous). Let δ be a fixed positive scalar such that 0 < δ < d, let x* denote any solution to (P_0), and let x*_{-δ} denote any solution to (P_{-δe}). Let {α_n} be a sequence of scalars such that for each n, α_n > ᾱ_n, where ᾱ_n (which depends on β_n and δ) is such that the conclusion of Lemma 3.1 is valid; hence P(x; α_n, β_n, δ) will have a maximum at some point x*_n(δ) ∈ S_0. Then the sequence {x*_n(δ)} has a convergent subsequence, and if v_δ is a subsequential limit,

i) v_δ ∈ S_0;
ii) f(x*) - ε ≤ f(v_δ) ≤ f(x*).
PROOF: Since f is continuous and S is lower semi-continuous at 0, fsup is lower semi-continuous at 0. Thus given ε > 0, there is a scalar d > 0 such that 0 < δ < d implies

fsup(0) - ε ≤ fsup(-δe) ≤ fsup(0),

and thus f(x*) - ε ≤ f(x*_{-δ}) ≤ f(x*). Of course, as noted above, d can be chosen sufficiently small that S_{-δe} is nonempty. Thus for δ ∈ (0,d) we have g_j(x*_{-δ}) + δ ≤ 0, j = 1,...,m, which implies h(g_j(x*_{-δ})+δ) ≤ 1, which in turn implies

(3.4)   Σ_{j=1}^m Λ(g_j(x*_{-δ}); α_n, β_n, δ) = Σ_{j=1}^m β_n[h(g_j(x*_{-δ})+δ)]^{α_n} ≤ mβ_n.

Since x*_n(δ) maximizes P(x; α_n, β_n, δ), we have

(3.5)   f(x*_{-δ}) - Σ_{j=1}^m Λ(g_j(x*_{-δ}); α_n, β_n, δ) ≤ f(x*_n(δ)) - Σ_{j=1}^m Λ(g_j(x*_n(δ)); α_n, β_n, δ) ≤ f(x*_n(δ)) ≤ f(x*),

in which the last inequality follows from the fact that x*_n(δ) ∈ S_0. Now since S_0 is compact and {x*_n(δ)} ⊆ S_0, there is a convergent subsequence {x*_{n_k}(δ)} converging to v_δ ∈ S_0. From (3.4) and (3.5), it follows that for any such convergent subsequence

f(x*) - ε - mβ_{n_k} ≤ f(x*_{n_k}(δ)) ≤ f(x*),   each k.

Hence we conclude

f(x*) - ε ≤ f(v_δ) ≤ f(x*).   Q.E.D.
The thrust of the above theorem is that, holding the parameter δ > 0 constant, we can employ a sequence of exponents α_n so as to insure convergence of the sequence of penalty function maximizers to an ε-solution for (P_0). It is to be noted that with δ > 0 we cannot guarantee convergence to a true solution to (P_0), whereas unless δ > 0 we cannot insure feasibility of the penalty function maximizers. At the expense of a somewhat more complicated algorithm, we can obtain both of these properties; this is indicated in the following result.
THEOREM 3.3: Suppose the assumptions of Theorem 3.2 hold and let {δ_n} be a sequence of positive scalars such that δ_n < d, each n, and δ_n → 0. Then a sequence of exponents {α_n} can be chosen such that

i) P(x; α_n, β_n, δ_n) has a maximum at x*_n ∈ S_0;
ii) lim_{n→∞} f(x*_n) = f(x*);
iii) {x*_n} has a convergent subsequence and each subsequential limit is a solution to (P_0).
PROOF: The set S_{-δ_n e} is nonempty and compact by the assumptions of lower semi-continuity of S, continuity of g, and compactness of S_0. Let x̂_n be an optimal solution to (P_{-δ_n e}); x̂_n exists by the continuity of f. The assumptions also guarantee that Lemma 3.1 is valid, hence there is an α_n such that conclusion i) of the theorem holds. Then we have

(3.6)   f(x̂_n) - mβ_n ≤ f(x̂_n) - Σ_{j=1}^m Λ(g_j(x̂_n); α_n, β_n, δ_n) ≤ f(x*_n) - Σ_{j=1}^m Λ(g_j(x*_n); α_n, β_n, δ_n) ≤ f(x*_n) ≤ f(x*).

Since fsup is lower semi-continuous at 0, we have

lim_{n→∞} f(x̂_n) = fsup(0) = f(x*).

Combining this with (3.6) yields lim_{n→∞} f(x*_n) = f(x*), which proves ii). Since S_0 is compact and α_n was chosen to insure x*_n ∈ S_0, {x*_n} has a convergent subsequence, x*_{n_k} → x_0 ∈ S_0. The argument at the conclusion of Theorem 3.2 permits us to conclude that f(x_0) = fsup(0), which proves iii).   Q.E.D.
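The scheme of Theorem 3.3, shrinking β_n and δ_n while growing α_n fast enough that each maximizer stays feasible, can be sketched numerically (our own toy instance and parameter choices, with h(t) = e^t, f(x) = x, g(x) = x ≤ 0, so f(x*) = 0):

```python
import numpy as np

# Toy run of the Theorem 3.3 scheme (our parameter choices, not the paper's):
# beta_n -> 0, delta_n -> 0, alpha_n grows fast enough that the maximizer of
# P(x; alpha_n, beta_n, delta_n) = x - beta_n*exp(alpha_n*(x + delta_n))
# stays feasible (x <= 0) and its objective value converges to f(x*) = 0.

X = np.linspace(-2.0, 1.0, 300001)

def maximizer(alpha, beta, delta):
    P = X - beta * np.exp(alpha * (X + delta))
    return X[np.argmax(P)]

xs = []
for n in range(1, 8):
    beta_n, delta_n, alpha_n = 1.0 / n, 1.0 / n, float(n * n)
    xs.append(maximizer(alpha_n, beta_n, delta_n))

print(xs)   # feasible throughout, approaching the solution x* = 0
```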
Now we turn to a collection of results paralleling those above which permit us to deal with the case pictured in Figure 2 in which fsup is upper semi-continuous at 0. We continue to assume that conditions A-1, A-2, and A-3 hold.
LEMMA 3.4: Let β > 0 and δ > 0 be given, and suppose S_{δe} is compact. Then there exists a scalar ᾱ such that α > ᾱ implies that P(x; α, β, 0) has a global maximum and each such maximizer belongs to S_{δe}.

PROOF: Let x̄ ∈ S_0; then g_j(x̄) ≤ 0 and hence h(g_j(x̄)) ≤ 1 for each j = 1,...,m, so that

(3.7)   f(x̄) - Σ_{j=1}^m Λ(g_j(x̄); α, β, 0) ≥ f(x̄) - mβ.

Now suppose x ∈ R^n - S_{δe}; then for some j₀, 1 ≤ j₀ ≤ m, g_{j₀}(x) > δ, so that h(g_{j₀}(x)) > h(δ) > 1. By assumption A-3 there is an M such that f(x)/[h(g_{j₀}(x))]^{α₁} ≤ M, which implies that for α > α₁,

f(x)/[h(g_{j₀}(x))]^α ≤ M/[h(g_{j₀}(x))]^{α-α₁} < M/[h(δ)]^{α-α₁}.

By property iii) of the function h, h(g_j(x)) > 0 for each j = 1,...,m. With an argument similar to that employed in the proof of Lemma 3.1, we can conclude that there is an ᾱ such that α > ᾱ implies

(3.8)   (f(x) - f(x̄)) / (Σ_{j=1}^m [h(g_j(x))]^α - m) < β,

where the denominator on the left side of (3.8) can be assumed to be positive; hence

(3.9)   f(x) - Σ_{j=1}^m Λ(g_j(x); α, β, 0) = f(x) - β Σ_{j=1}^m [h(g_j(x))]^α < f(x̄) - mβ

for each x ∈ R^n - S_{δe}. Since S_{δe} is compact, P(x; α, β, 0) attains a maximum on S_{δe}; the conclusion of the lemma follows from (3.7) and (3.9).   Q.E.D.
In a proposition similar to Theorem 3.2, we can now state a result concerning what might be termed near-feasible, near-optimal solutions for (P_0).

THEOREM 3.5: Let ε > 0 be given, let {β_n} be as in Theorem 3.2, and assume the mapping S is upper semi-continuous at 0; from the discussion in Section I above there must be a δ' > 0 such that S_{δ'e} is compact. Let 0 < δ < δ' be given; then S_{δe} is compact, and for each n there is an ᾱ_n, which depends on β_n and δ, such that α_n > ᾱ_n implies the conclusion of Lemma 3.4 is valid; hence P(x; α_n, β_n, 0) will have a maximum at some point x*_n(δ) ∈ S_{δe}. Let x* be optimal in (P_0). Then there is a scalar d < δ' such that 0 < δ < d implies the sequence {x*_n(δ)} has a convergent subsequence, and if v_δ is a subsequential limit,

i) v_δ ∈ S_{δe};
ii) f(x*) ≤ f(v_δ) ≤ f(x*) + ε.
PROOF: Since f is continuous, S_{δ'e} is compact, and S is upper semi-continuous at 0, fsup is upper semi-continuous at 0. Thus, given ε > 0, there is a scalar d > 0, which can be assumed to be less than δ', such that 0 < δ < d implies

fsup(δe) ≤ fsup(0) + ε,

and hence

(3.10)   f(x*) ≤ fsup(δe) ≤ f(x*) + ε.

Now since S_{δe} is compact for δ < d, and since the conclusion of Lemma 3.4 is valid, we can choose α_n such that the maximizer of P(x; α_n, β_n, 0) belongs to S_{δe} for each n. The compactness of S_{δe} guarantees that there is a convergent subsequence x*_{n_k}(δ) → v_δ ∈ S_{δe}. Now x* ∈ S_0, hence g_j(x*) ≤ 0, j = 1,...,m. Thus for each k = 1,2,..., we have

(3.11)   f(x*) - β_{n_k} m ≤ f(x*) - Σ_{j=1}^m Λ(g_j(x*); α_{n_k}, β_{n_k}, 0),

and since x*_{n_k}(δ) is a penalty function maximizer,

(3.12)   f(x*) - Σ_{j=1}^m Λ(g_j(x*); α_{n_k}, β_{n_k}, 0) ≤ f(x*_{n_k}(δ)) - Σ_{j=1}^m Λ(g_j(x*_{n_k}(δ)); α_{n_k}, β_{n_k}, 0) ≤ f(x*_{n_k}(δ)).

Since v_δ ∈ S_{δe}, (3.10) implies f(v_δ) ≤ fsup(δe) ≤ f(x*) + ε; thus, combining the fact that x*_{n_k}(δ) → v_δ with (3.11) and (3.12), we conclude

f(x*) ≤ f(v_δ) ≤ f(x*) + ε.   Q.E.D.
Observe that in Lemma 3.4 and Theorem 3.5 we do not guarantee the feasibility of any penalty function maximizer. In addition, we cannot assert that the limit point v_δ is optimal or even feasible in (P_0).

COROLLARY 3.6: Assume the conditions of Theorem 3.5 hold. Then a sequence {α_n} can be chosen such that the sequence of penalty function maximizers has a subsequential limit v ∈ S_0 and v is optimal in (P_0).

PROOF: For each n we can choose α_n to insure that P(x; α_n, β_n, 0) has its maximum value at x*_n ∈ S_n, where S_n = {x : g_j(x) ≤ δ/n, j = 1,...,m}; the conclusion follows from Theorem 3.5.   Q.E.D.
IV. UPPER BOUNDS

In the discussion in Section II, it was observed that each maximization of a penalty function (Lagrangian) as in (1.1) yields a point of the function fsup and that via the support function (1.3), an upper bound can be computed for fsup(0). In this section, we present a result on these upper bounds for general exponential penalty functions.
LEMMA 4.1: Let α ≥ 1, β > 0, δ ≥ 0 be given and assume x̄ maximizes P(x; α, β, δ). Then

fsup(0) ≤ f(x̄) + β{m[h(δ)]^α - Σ_{j=1}^m [h(g_j(x̄)+δ)]^α}.

PROOF: Let x* be an optimal solution to (P_0). Then g_j(x*) ≤ 0, j = 1,...,m; hence

f(x*) - mβ[h(δ)]^α ≤ f(x*) - β Σ_{j=1}^m [h(g_j(x*)+δ)]^α ≤ f(x̄) - β Σ_{j=1}^m [h(g_j(x̄)+δ)]^α.

The first inequality above holds because of the monotonicity of h, and the second inequality holds because x̄ maximizes P(x; α, β, δ). The conclusion follows.   Q.E.D.
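In the toy instance used earlier (our example: f(x) = x, g(x) = x ≤ 0, h(t) = e^t, so fsup(0) = 0), the bound of Lemma 4.1 can be evaluated directly:

```python
import numpy as np

# Our numerical check of the Lemma 4.1 bound for f(x) = x, g(x) = x <= 0,
# h(t) = e**t, m = 1:
#   fsup(0) <= f(xbar) + beta*(h(delta)**alpha - h(g(xbar)+delta)**alpha).
alpha, beta, delta = 10.0, 1.0, 0.0
X = np.linspace(-2.0, 2.0, 400001)

P = X - beta * np.exp(alpha * (X + delta))
xbar = X[np.argmax(P)]          # grid maximizer of the penalty function

bound = xbar + beta * (np.exp(delta) ** alpha - np.exp(alpha * (xbar + delta)))
print(xbar, bound)              # bound is an upper estimate of fsup(0) = 0
```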
V. RELATIONSHIP TO OTHER PENALTY FUNCTION METHODS

As mentioned earlier, much attention has been devoted to penalty function methods in general, and several references exist to various kinds of exponential penalties. First we mention a relationship to what Fiacco-McCormick [2] called a general exterior penalty function. This is a class of one-parameter functions V(x;t) which satisfy several conditions, among them being the following, restated in the form for the maximization required in (P_0):

If x ∈ S_0,  lim_{t→∞} V(x;t) = f(x),  where V(x;t) = f(x) - U(x;t).

In this formulation, V is the penalty function and U corresponds to the sum of the multiplier functions. A relationship to the general exponential penalty function of equation (3.1) can now be stated. Consider the special case β = 1/α, δ = 0:

P(x; α, 1/α, 0) = f(x) - (1/α) Σ_{j=1}^m [h(g_j(x))]^α.

If x ∈ S_0, then g_j(x) ≤ 0, j = 1,...,m; thus by property ii) of the function h, h(g_j(x)) ≤ 1, j = 1,...,m. Hence for feasible points x,

lim_{α→∞} P(x; α, 1/α, 0) = f(x).

Thus the Fiacco-McCormick condition above is satisfied, and it can be shown that the other conditions for a general exterior penalty function are also satisfied by P(x; α, 1/α, 0). If we retain the condition β = 1/α but select δ > 0, then for any point x on the boundary of S_0,

lim_{α→∞} P(x; α, 1/α, δ) = -∞;

hence δ > 0 does not yield a general exterior penalty function.
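A quick numerical illustration of this contrast (our toy example again: one constraint g(x) = x ≤ 0 with h(t) = e^t):

```python
import math

# Our illustration: with beta = 1/alpha and delta = 0, the penalty term
# (1/alpha)*h(g(x))**alpha vanishes as alpha -> infinity at feasible points,
# so P -> f(x); with delta > 0 it blows up at boundary points of S_0.

def P(x, alpha, delta, f=lambda x: x, g=lambda x: x):
    return f(x) - (1.0 / alpha) * math.exp(alpha * (g(x) + delta))

x_feas, x_bdry = -0.5, 0.0
for alpha in [1.0, 10.0, 100.0]:
    print(alpha, P(x_feas, alpha, 0.0), P(x_bdry, alpha, 0.1))
# P(x_feas, alpha, 0) -> f(x_feas) = -0.5, while P(x_bdry, alpha, 0.1) -> -inf.
```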
The following analogues of Lemmas 3.1 and 3.4 can be stated; they are given here without proofs. Assume conditions A-1 and A-2 hold and that A-3 holds, with β = 1/α.

LEMMA 5.1: Suppose δ > 0 is such that S_{-δe} is nonempty. Then there exists a scalar ᾱ such that α > ᾱ implies P(x; α, 1/α, δ) has a global maximizer and each such maximizer belongs to S_0.

LEMMA 5.2: Suppose δ > 0 is such that S_{δe} is compact. Then there is a scalar ᾱ such that α > ᾱ implies that P(x; α, 1/α, 0) has a global maximizer and each such maximizer belongs to S_{δe}.

These results differ from Lemmas 3.1 and 3.4 in that here β is no longer fixed.
REFERENCES

[1] Evans, J.P. and F.J. Gould, "Stability in Nonlinear Programming," Operations Research, Vol. 18: 107-118 (1970).

[2] Fiacco, A.V. and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley and Sons, Inc. (1968).

[3] Goldstein, A.A. and B.R. Kripke, "Mathematical Programming by Minimizing Differentiable Functions," Numer. Math., Vol. 6: 47-48 (1964).

[4] Gould, F.J., "Extensions of Lagrange Multipliers in Nonlinear Programming," SIAM J. Appl. Math., Vol. 17: 1280-1297 (1969).

[5] Motzkin, T.S., "New Techniques for Linear Inequalities and Optimization," in Project SCOOP, Symposium on Linear Inequalities and Programming, Planning Research Division, Director of Management Analysis Service, U.S. Air Force, Washington, D.C., No. 10, 1952.

[6] Zangwill, W.I., "Nonlinear Programming via Penalty Functions," Mgmt. Sci., Vol. 13: 344-358 (1967).