A Necessary and Sufficient Qualification for Constrained Optimization*

F. J. Gould† and Jon W. Tolle††

Institute of Statistics Mimeo Series No. 662
University of North Carolina at Chapel Hill
January 1970

† Graduate School of Business and Department of Statistics, University of North Carolina at Chapel Hill.
†† Department of Mathematics, University of North Carolina at Chapel Hill.
* This research was partially supported by a grant from the Office of Naval Research, contract number N00014-67-A-0321-003 (NR 047-095), and by the U.S. Air Force Office of Scientific Research under contract AFOSR-68-1415.
Abstract. A weak qualification is given which insures that a broad class of constrained optimization problems satisfies the analogue of the Kuhn-Tucker conditions at optimality. The qualification is shown to be necessary and sufficient for these conditions to be valid for any objective function which is differentiable at the optimum.
1. Introduction.

Consider the general optimization problem

(1.1)    maximize f(x), subject to
         g_i(x) ≤ 0,  i ∈ I = {1,...,m},
         r_i(x) = 0,  i ∈ E = {1,...,k},
         x ∈ D ⊂ R^n.
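As an illustrative aside, the following minimal sketch encodes one concrete instance of (1.1) in Python; the particular functions f, g_i, r_i, the set D = R^2, and the point x_0 are choices made for this example and are not taken from the paper.

```python
import numpy as np

# One instance of (1.1) with n = 2, m = 2, k = 1:
#   maximize f(x) = x1 + x2
#   subject to g1(x) = x1^2 + x2^2 - 2 <= 0,   g2(x) = -x1 <= 0   (I = {1, 2})
#              r1(x) = x1 - x2 = 0                                 (E = {1})
#              x in D = R^2.
f = lambda x: x[0] + x[1]
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 2.0,
     lambda x: -x[0]]
r = [lambda x: x[0] - x[1]]

def in_S(x, tol=1e-9):
    """Membership in the constraint set S of problem (1.1)."""
    return all(gi(x) <= tol for gi in g) and all(abs(ri(x)) <= tol for ri in r)

x0 = np.array([1.0, 1.0])      # the constrained maximizer of this instance
print(in_S(x0), f(x0))         # True 2.0
```

This instance is reused in later sketches so that the gradients involved remain easy to verify by hand.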
Much research in mathematical programming has been devoted to determining necessary conditions for a given point to be a solution to this problem. These necessary conditions usually include special assumptions which qualify the problem under consideration. A familiar example is the classical Lagrange multiplier theorem for the equality-constrained problem (I empty, D = R^n). The usual assumption in this theorem is that the Jacobian matrix of the constraints r_i has full row rank at a local optimum. Such a condition is called a qualification, in this case a constraint qualification. Note that this condition is independent of the objective function f. Therefore, if the constraint qualification holds, it is implicit in the classical theorem that the Lagrange multiplier rules are valid for the same constraints and any other differentiable objective function. Numerous authors have obtained different qualifications for various special cases of (1.1) in order to insure that similar optimality criteria are valid. There has been interest both in determining the weakest such qualifications and in extending the results to increasingly general problems. This paper is concerned with both of these efforts.
Historically, the interest in optimality criteria for the inequality-constrained problem (E empty, D = R^n) has been comparatively recent. The first necessary conditions appear to have been presented in 1948 by Fritz John [10], with no qualification other than the assumption that all functions are continuously differentiable.

Fritz John's Necessary Optimality Conditions: If x_0 is a solution to (1.1), with E = ∅ and D = R^n, then there exists a u = (u_0,...,u_m) ∈ R^{m+1} such that

(1.2)    u_0 ∇f(x_0) - Σ_{i=1}^m u_i ∇g_i(x_0) = 0,
         u_i g_i(x_0) = 0,  i = 1,...,m,
         u ≠ 0,  u_i ≥ 0,  i = 0,...,m.
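For a concrete illustration of (1.2), the short sketch below checks the Fritz John conditions at the maximizer of a toy inequality-constrained problem; the problem data, the point x_0 = (1,1), and the trial multipliers u = (1, 1/2) are assumptions made for this example.

```python
import numpy as np

# Toy instance with E empty and D = R^2:
#   maximize f(x) = x1 + x2   subject to   g1(x) = x1^2 + x2^2 - 2 <= 0.
x0 = np.array([1.0, 1.0])              # the maximizer
grad_f = np.array([1.0, 1.0])          # gradient of f at x0
grad_g1 = np.array([2.0, 2.0])         # gradient of g1 at x0
g1_at_x0 = x0 @ x0 - 2.0               # = 0, so the constraint is active

u0, u1 = 1.0, 0.5                      # trial Fritz John multipliers

stationarity = u0 * grad_f - u1 * grad_g1      # first line of (1.2): should vanish
complementarity = u1 * g1_at_x0                # second line of (1.2): should be zero
sign_conditions = (u0 >= 0) and (u1 >= 0) and (u0, u1) != (0.0, 0.0)

print(np.allclose(stationarity, 0.0), np.isclose(complementarity, 0.0), sign_conditions)
# True True True; here u0 > 0, so the multipliers are already of Kuhn-Tucker type.
```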
Stronger necessary conditions for the same inequality-constrained problem were obtained in 1951 by Kuhn and Tucker [11]. They produced a qualification for the constraint functions which, when satisfied, implies that the u_0 in (1.2) must be positive. Thus the set of candidates for a local optimum is narrowed by the introduction of the constraint qualification. It is a simple exercise to find constraint qualifications which allow the strengthening of Fritz John's result to that of Kuhn and Tucker.1/
In 1961, Arrow, Hurwicz, and Uzawa [2], considering the inequality-constrained problem, gave a constraint qualification for the Kuhn-Tucker conditions which was weaker than that given by Kuhn and Tucker. In particular, their qualification was shown to be the weakest possible if the constraint set S^g = {x: g_i(x) ≤ 0, i ∈ I} is convex. In 1967, Abadie [1] introduced a new qualification which neither implies nor is implied by the qualification of Arrow, Hurwicz, and Uzawa. Recently, Evans [6] has exhibited a qualification which is weaker than any of those mentioned above.

In 1967, Mangasarian and Fromovitz [12] considered the problem (1.1) with D = R^n. They proved the analogue of the Fritz John result for this problem and then presented a constraint qualification which validated the extension of the Kuhn-Tucker conditions to the case of mixed constraints.

Other significant papers in this area include those of Canon, Cullum, and Polak [4], Cottle [5], and Varaiya [14].

In this paper a necessary and sufficient qualification is exhibited for the extension of the Kuhn-Tucker conditions to problem (1.1), and for its special forms, the problem with inequality constraints and the classical problem with equality constraints. It is important to note that the new qualification, in its general form, depends upon the set D as well as upon the constraint functions.
2. Notation and Terminology.

Let D be an arbitrary set in R^n and let x_0 ∈ D be a local solution to the optimization problem (1.1). The objective function f: R^n → R and the constraint functions g: R^n → R^m, r: R^n → R^k are assumed to be continuous on some open set containing D and differentiable at x_0. We shall assume that at least one of the sets I, E is nonempty.2/

Define the following sets in terms of the constraint functions g and r:
    S^g = {x ∈ D: g_i(x) ≤ 0, i ∈ I},
    S^r = {x ∈ D: r_i(x) = 0, i ∈ E},
    S   = S^g ∩ S^r,
    I_0 = {i ∈ I: g_i(x_0) = 0},
    C_0 = {x ∈ R^n: ⟨x, ∇g_i(x_0)⟩ ≤ 0, i ∈ I_0},
    L_0 = subspace of R^n spanned by ∇r_i(x_0), i ∈ E,
    L_0^⊥ = orthogonal complement of L_0.

If I = ∅, set S^g = R^n; if I_0 = ∅, set C_0 = R^n; and if E = ∅, set S^r = R^n and L_0^⊥ = R^n. The set S is called the constraint set, I_0 is the set of indices of the active inequality constraints, and C_0 is a nonempty closed convex polyhedral cone determined by the active inequality constraints.
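The sets just defined are straightforward to compute for a specific instance. The sketch below is an added illustration: the constraint functions, the point x_0, and the two test vectors are arbitrary choices, and scipy's null_space is used only as a convenient way to produce a basis of L_0^⊥.

```python
import numpy as np
from scipy.linalg import null_space

# Instance with n = 3:  g1 = x1 + x3 <= 0,  g2 = x2 - 1 <= 0,  r1 = x3 = 0,  x0 = origin.
x0 = np.zeros(3)
g_vals  = np.array([x0[0] + x0[2], x0[1] - 1.0])
g_grads = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
r_grads = np.array([[0.0, 0.0, 1.0]])

I0 = [i for i, gi in enumerate(g_vals) if np.isclose(gi, 0.0)]
print("I0 =", I0)                          # [0]: only the first inequality is active

C0_normals    = g_grads[I0]                # C0 = {x : <x, grad g_i(x0)> <= 0, i in I0}
L0_basis      = r_grads.T                  # columns span L0
L0_perp_basis = null_space(r_grads)        # columns span L0_perp

def in_C0_cap_L0perp(x, tol=1e-9):
    return np.all(C0_normals @ x <= tol) and np.allclose(r_grads @ x, 0.0, atol=tol)

print(in_C0_cap_L0perp(np.array([-1.0, 2.0, 0.0])))   # True
print(in_C0_cap_L0perp(np.array([ 1.0, 0.0, 0.0])))   # False
```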
Let A be an arbitrary set in R^n.

DEFINITION 1. A point x ∈ R^n is said to be in the polar cone of A, denoted A', if and only if ⟨x,y⟩ ≤ 0 for all y ∈ A.
Properties of the polar cone relevant to this work are:

    i)   A' is a closed convex cone;
    ii)  A_1 ⊆ A_2 implies A_2' ⊆ A_1';
    iii) A'' = A if A is a closed convex cone;
    iv)  A' = [closure of the convex hull of A]';
    v)   if A is a subspace, A^⊥ = A'.

For a discussion of polar cones and their properties, see [3], [8], and [13].
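A small numerical spot-check of Definition 1 may be helpful. The sketch below assumes the standard fact that the polar of the finitely generated cone K = {λ_1 a_1 + λ_2 a_2 : λ_i ≥ 0} is {x : ⟨x, a_i⟩ ≤ 0 for all i}; the generators a_i and the sampling scheme are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])      # rows a_1, a_2 generate the cone K = {A.T @ lam : lam >= 0}

def in_polar_of_K(x, tol=1e-12):
    # K' = {x : <x, a_i> <= 0 for all i}, the polar of a finitely generated cone.
    return np.all(A @ x <= tol)

for _ in range(1000):
    y = A.T @ rng.uniform(0.0, 1.0, size=2)      # an element of K
    x = rng.normal(size=2)
    if in_polar_of_K(x):
        assert x @ y <= 1e-9                     # Definition 1: <x, y> <= 0 for every y in K
print("every sampled pair satisfied <x, y> <= 0")
```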
DEFINITION 2. If A is nonempty, then the cone of tangents to A at x_0 ∈ A, denoted T(A,x_0), is the set of all x ∈ R^n such that there exist a sequence {x_n} ∈ A, converging to x_0, and a nonnegative sequence {λ_n} ∈ R, such that {λ_n(x_n - x_0)} converges to x.

The set T(A,x_0) is a nonempty closed cone determined by the geometric configuration of A. It need not be convex. The cone of tangents to certain constraint sets was apparently first employed in the mathematical programming context in concurrent publications of Abadie [1] and Varaiya [14].
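For a concrete picture of Definition 2, consider the set A = {(t, t^2): t ≥ 0} in R^2 with x_0 the origin; the sketch below (an added illustration with an arbitrarily chosen sequence) exhibits x_n ∈ A and λ_n ≥ 0 for which λ_n(x_n - x_0) approaches the tangent direction (1, 0).

```python
import numpy as np

x_0 = np.zeros(2)
for n in (10, 100, 1000, 10000):
    x_n = np.array([1.0 / n, 1.0 / n ** 2])   # a point of A = {(t, t^2): t >= 0} tending to x_0
    lam_n = float(n)                           # nonnegative scaling factors
    print(n, lam_n * (x_n - x_0))              # tends to (1, 0), so (1, 0) lies in T(A, x_0)
```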
If problem (1.1) has a local solution at x_0 for a given objective function f, then f is said to have a constrained local maximum at x_0. The following definition extends the notion of Lagrange regularity given in [2].

DEFINITION 3. The triple (g,r,D) of (1.1) is said to be Lagrange regular at x_0 if and only if for every objective function f, with a constrained local maximum at x_0, there exist vectors u ∈ R^m and ψ ∈ R^k such that

(2.1)    ∇f(x_0) = Σ_{i=1}^k ψ_i ∇r_i(x_0) + Σ_{i=1}^m u_i ∇g_i(x_0),

(2.2)    u_i g_i(x_0) = 0,  i = 1,...,m,

(2.3)    u_i ≥ 0,  i = 1,...,m.
The conditions (2.1), (2.2), (2.3) represent the analogue of the Kuhn-Tucker conditions for problem (1.1). Conditions (2.1) and (2.2) can be written in the more convenient form

(2.4)    ∇f(x_0) = Σ_{i=1}^k ψ_i ∇r_i(x_0) + Σ_{i∈I_0} u_i ∇g_i(x_0).
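Continuing the toy mixed-constraint instance introduced after (1.1) (an added illustration; the multipliers ψ = 0 and u = (1/2, 0) were computed by hand for that instance), the following sketch verifies conditions (2.1)-(2.3) at x_0 = (1, 1).

```python
import numpy as np

# maximize x1 + x2  s.t.  x1^2 + x2^2 - 2 <= 0,  -x1 <= 0,  x1 - x2 = 0,  at x0 = (1, 1).
grad_f  = np.array([1.0, 1.0])
grad_g  = np.array([[2.0, 2.0],      # grad g1(x0), active
                    [-1.0, 0.0]])    # grad g2(x0), inactive
g_at_x0 = np.array([0.0, -1.0])
grad_r  = np.array([[1.0, -1.0]])    # grad r1(x0)

psi = np.array([0.0])                # equality multipliers
u   = np.array([0.5, 0.0])           # inequality multipliers

cond_21 = np.allclose(grad_f, grad_r.T @ psi + grad_g.T @ u)
cond_22 = np.allclose(u * g_at_x0, 0.0)
cond_23 = bool(np.all(u >= 0.0))
print(cond_21, cond_22, cond_23)     # True True True
```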
3. Statement of the Theorem.

The major result of this paper is

THEOREM. The triple (g,r,D) is Lagrange regular at x_0 if and only if (C_0 ∩ L_0^⊥)' = T'(S,x_0).

The condition (C_0 ∩ L_0^⊥)' = T'(S,x_0) represents a qualification which enjoys the generality of problem (1.1). For the special inequality-constrained problem with D = R^n and E = ∅, the qualification reduces to C_0' = T'(S^g,x_0). The set T' appears to have been first introduced and discussed by Varaiya [14], but the weak qualification C_0' = T'(S^g,x_0) was explicitly discussed by Evans [6]. The above theorem shows that this latter qualification is in fact necessary, i.e., the weakest possible, for validating the Kuhn-Tucker conditions. For the equality-constrained problem with D = R^n and I = ∅, the new qualification reduces to L_0 = T'(S^r,x_0). This condition is therefore necessary and sufficient for the classical Lagrange multiplier rule to be valid.
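The content of the theorem is visible already in one dimension. In the standard cusp example, maximize f(x) = x subject to g(x) = x^3 ≤ 0 with D = R (an added illustration, not from the paper), one has S = (-∞, 0] and T'(S,0) = [0, ∞), while C_0 = R and C_0' = {0}; the qualification fails at the maximizer x_0 = 0, and accordingly no Kuhn-Tucker multiplier exists there, as the sketch below confirms numerically.

```python
import numpy as np
from scipy.optimize import nnls

# maximize f(x) = x   subject to   g(x) = x**3 <= 0   (D = R); the maximizer is x0 = 0.
grad_f = np.array([1.0])
G = np.array([[0.0]])          # column: grad g(x0) for the single active constraint

# Conditions (2.4), (2.3) ask for u >= 0 with grad_f = G u; test via nonnegative least squares.
u, residual = nnls(G, grad_f)
print(residual)                # 1.0: the system has no solution, so Lagrange regularity fails,
                               # in agreement with T'(S, 0) = [0, inf) differing from C_0' = {0}.
```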
4. Proof of the Theorem.

Several lemmas will be given which will be employed in the development of the proof.
LEMMA 4.1 (Farkas' Lemma). Of the two systems

    Ax = b,  x ≥ 0,

and

    yA ≥ 0,  ⟨y,b⟩ < 0,

one and only one has a solution, where the m × n matrix A and the element b in R^m are given.

A proof of this lemma can be found in [7], [9].
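A small computational illustration of Lemma 4.1 (added here; the matrix A, the vectors b, and the certificate y are test data chosen for the example, and the second system is taken in the form stated above): a zero-objective linear program decides whether Ax = b, x ≥ 0 is solvable, and for an unsolvable instance a solution y of the alternative system is exhibited.

```python
import numpy as np
from scipy.optimize import linprog

def system_one_solvable(A, b):
    """Feasibility of Ax = b, x >= 0, via a zero-objective linear program."""
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.status == 0

A = np.eye(2)
print(system_one_solvable(A, np.array([2.0, 3.0])))    # True: x = (2, 3)
print(system_one_solvable(A, np.array([-1.0, 0.0])))   # False: no nonnegative solution

# For b = (-1, 0) the alternative system is solved by y = (1, 0):
y, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
print(bool(np.all(y @ A >= 0)), bool(y @ b < 0))       # True True, as Lemma 4.1 requires
```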
LEMMA 4.2. The conditions (2.4), (2.3) hold if and only if ∇f(x_0) ∈ (C_0 ∩ L_0^⊥)'.

Proof. Let R be the n × k matrix whose columns are ∇r_i(x_0), i ∈ E, and let G be the matrix whose columns are ∇g_i(x_0), i ∈ I_0. Then (2.4) can be written in the form

(4.1)    ∇f(x_0) = Rψ + Gu

with u_i ≥ 0 for i ∈ I_0. Suppose (4.1) has a solution with u_i ≥ 0, i ∈ I_0. Define variables η_i, ν_i, i ∈ E, by η_i = ψ_i, ν_i = 0 if ψ_i ≥ 0, and η_i = 0, ν_i = -ψ_i if ψ_i < 0. Then (η,ν,u) is a nonnegative solution of the system

(4.2)    ∇f(x_0) = (R, -R, G)(η, ν, u).

Conversely, if (η,ν,u) is a nonnegative solution to (4.2), then (η - ν, u) will be a solution to (4.1) with u_i ≥ 0, i ∈ I_0. Now from Lemma 4.1 it follows that (4.2) has a nonnegative solution if and only if ⟨x, ∇f(x_0)⟩ ≤ 0 whenever ⟨x, ∇r_i(x_0)⟩ = 0 for i ∈ E and ⟨x, ∇g_i(x_0)⟩ ≤ 0 for i ∈ I_0. That is, (4.1) has a solution with u_i ≥ 0, i ∈ I_0, if and only if ⟨x, ∇f(x_0)⟩ ≤ 0 whenever x ∈ C_0 ∩ L_0^⊥. Therefore conditions (2.4), (2.3) are equivalent to ∇f(x_0) ∈ (C_0 ∩ L_0^⊥)', completing the proof.
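The device used in this proof is easy to carry out numerically. The sketch below (an added illustration, using the gradients of the earlier toy instance) forms the matrix (R, -R, G) of system (4.2) and applies a nonnegative least-squares solve; a zero residual means that (2.4) and (2.3) hold at x_0.

```python
import numpy as np
from scipy.optimize import nnls

# Gradients at x0 = (1, 1) for:  max x1 + x2  s.t.  x1^2 + x2^2 - 2 <= 0 (active),  x1 - x2 = 0.
grad_f = np.array([1.0, 1.0])
R = np.array([[1.0], [-1.0]])         # columns: grad r_i(x0), i in E
G = np.array([[2.0], [2.0]])          # columns: grad g_i(x0), i in I_0

M = np.hstack([R, -R, G])             # the matrix (R, -R, G) of system (4.2)
sol, residual = nnls(M, grad_f)       # nonnegative solution (eta, nu, u), if one exists

eta, nu, u = sol[:1], sol[1:2], sol[2:]
psi = eta - nu                        # recover the unrestricted equality multipliers
print(residual)                                        # ~0: system (4.2) is solvable
print(bool(np.allclose(R @ psi + G @ u, grad_f)))      # True: (4.1) holds with u >= 0
```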
The following lemma was proved by Varaiya [14]. A proof is included here for completeness.

LEMMA 4.3. If f has a constrained local maximum at x_0, then ∇f(x_0) ∈ T'(S,x_0).

Proof. Suppose x ∈ T(S,x_0). We show that ⟨x, ∇f(x_0)⟩ ≤ 0. There exist a sequence {x_n} ∈ S converging to x_0 and a nonnegative sequence {λ_n} ∈ R such that {λ_n(x_n - x_0)} converges to x as n → ∞. Since f is differentiable at x_0, we have for each n

    f(x_n) = f(x_0) + ⟨x_n - x_0, ∇f(x_0)⟩ + ε_n ‖x_n - x_0‖,

where ε_n → 0 as n → ∞. Hence

    λ_n(f(x_n) - f(x_0)) = ⟨λ_n(x_n - x_0), ∇f(x_0)⟩ + ε_n λ_n ‖x_n - x_0‖.

Since λ_n ‖x_n - x_0‖ → ‖x‖ and ε_n → 0, the last term converges to zero, and consequently λ_n(f(x_n) - f(x_0)) has a limit, which must be ⟨x, ∇f(x_0)⟩. This limit must be nonpositive: f has a constrained local maximum at x_0, so f(x_n) ≤ f(x_0), and therefore λ_n(f(x_n) - f(x_0)) ≤ 0, for all n sufficiently large. Thus ⟨x, ∇f(x_0)⟩ ≤ 0, which proves the lemma.

Lemma 4.3 furnishes a necessary condition for a differentiable objective function to attain a local maximum over any set A. In case x_0 is interior to A, T(A,x_0) = R^n and T'(A,x_0) = {0}, whence ∇f(x_0) = 0.
LEMMA 4.4. (L_0^⊥ ∩ C_0)' ⊆ T'(S,x_0).

Proof. It is sufficient to show that T(S,x_0) ⊆ L_0^⊥ ∩ C_0. Let x ∈ T(S,x_0). Then there exist a sequence {x_n} ∈ S converging to x_0 and a nonnegative sequence {λ_n} ∈ R such that {λ_n(x_n - x_0)} converges to x. Since each r_i is differentiable at x_0,

    r_i(x_n) = r_i(x_0) + ⟨∇r_i(x_0), x_n - x_0⟩ + ε_{i,n} ‖x_n - x_0‖,

where ε_{i,n} → 0 as n → ∞. Now x_n ∈ S, so r_i(x_n) = r_i(x_0) = 0, and we obtain

    ⟨∇r_i(x_0), λ_n(x_n - x_0)⟩ = -ε_{i,n} λ_n ‖x_n - x_0‖.

Letting n → ∞, ⟨∇r_i(x_0), x⟩ = 0 for each i ∈ E, i.e., x ∈ L_0^⊥. Abadie [1] has shown that T(S^g,x_0) ⊆ C_0. Since S ⊆ S^g, T(S,x_0) ⊆ T(S^g,x_0). Hence T(S,x_0) ⊆ C_0, which completes the proof of the lemma.
We now turn to a proof of the theorem. Let f be any objective function, differentiable at x_0, with a constrained local maximum at x_0. By Lemma 4.3, ∇f(x_0) ∈ T'(S,x_0). If T'(S,x_0) = (C_0 ∩ L_0^⊥)', then by Lemma 4.2 conditions (2.4), (2.3) hold, and hence (g,r,D) is Lagrange regular at x_0. It remains to show that if (g,r,D) is Lagrange regular at x_0, then T'(S,x_0) = (C_0 ∩ L_0^⊥)'.

Assume (g,r,D) is Lagrange regular at x_0. We shall show that for every y ∈ T'(S,x_0) there exists an objective function f which is differentiable at x_0, which has a constrained local maximum at x_0, and for which ∇f(x_0) = y. The Lagrange regularity hypothesis together with Lemma 4.2 then yields y ∈ (C_0 ∩ L_0^⊥)', and hence the result T'(S,x_0) ⊆ (C_0 ∩ L_0^⊥)'. That equality actually holds then follows from Lemma 4.4.
Let y ∈ T'(S,x_0). In order to make the notation less cumbersome, we will assume, without loss of generality, that x_0 is the origin and that y is the unit vector e_n, i.e., y_i = 0, i = 1,...,n-1, and y_n = 1. The symbol N_ε will denote an open ball of radius ε about the origin, and S_ε will represent the set S ∩ N_ε.

A sequence of sets {C_t}, t = 1, 2, ..., is defined as follows:

    C_t = {x ∈ R^n: x ≠ 0, ⟨y, x/‖x‖⟩ ≥ cos(π/2 - π/(t+2))}.

Note that C_t ∪ {0} is a closed circular convex cone with axis y and boundary rays forming an angle π/2 - π/(t+2) with y.
LEMMA 4.5. For every t there exists an ε(t) > 0 such that S_{ε(t)} ⊆ R^n - C_t.

Proof. Suppose otherwise. Then for some t there would exist a sequence {x_p} ∈ S ∩ C_t with ‖x_p‖ ≤ 1/p for every p. The sequence {x_p/‖x_p‖} has a convergent subsequence, say x_{p_j}/‖x_{p_j}‖ → x̄. Since x_{p_j} ∈ S, x_{p_j} → 0, and the nonnegative multiples (1/‖x_{p_j}‖)(x_{p_j} - 0) converge to x̄, we have x̄ ∈ T(S,0). But x_{p_j} ∈ C_t, so

    ⟨y, x̄⟩ = lim_{j→∞} ⟨y, x_{p_j}/‖x_{p_j}‖⟩ ≥ cos(π/2 - π/(t+2)) = γ > 0.

It follows that y ∉ T'(S,0), which is a contradiction, completing the proof.
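Membership in C_t is a one-line test, and Lemma 4.5 can be spot-checked numerically. In the sketch below (an added illustration based on the description of C_t given above), S is taken to be {(x_1, x_2): x_2 ≤ x_1^2}, for which y = e_2 does lie in T'(S, 0), and the sampling radius used for each t is an arbitrary choice small enough for this particular S.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([0.0, 1.0])                  # the distinguished unit vector e_n (here n = 2)

def in_C_t(x, t):
    """C_t = {x != 0 : <y, x/||x||> >= cos(pi/2 - pi/(t+2))}."""
    nx = np.linalg.norm(x)
    return nx > 0 and (y @ x) / nx >= np.cos(np.pi / 2 - np.pi / (t + 2))

# S = {(x1, x2): x2 <= x1**2}; its tangent cone at 0 is the half-plane {x2 <= 0}.
for t in (1, 3, 10):
    eps = 0.5 * np.sin(np.pi / (t + 2))   # small enough that S_eps avoids C_t for this S
    pts = rng.uniform(-eps, eps, size=(20000, 2))
    pts = pts[(np.linalg.norm(pts, axis=1) < eps) & (pts[:, 1] <= pts[:, 0] ** 2)]
    print(t, any(in_C_t(p, t) for p in pts))   # False for each t, as Lemma 4.5 asserts
```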
Define the nonincreasing sequence {ε_k} by

(4.3)    ε_k = sup{ε > 0: S_ε ⊆ R^n - C_k}.

By Lemma 4.5 each ε_k is positive; note that some ε_k may be infinite. Now define {λ_k} by λ_1 = min{1, ε_1/2} and λ_{k+1} = min{λ_k/2, ε_{k+1}/2}. It is clear from the definition that λ_{k+1} < λ_k ≤ ε_k/2 and that λ_k → 0.

Let z denote points in R^{n-1} and let p: R^{n-1} → R be defined as follows: p(0) = 0; p(z) = (tan π/3)‖z‖ for ‖z‖ ≥ λ_2; and, for each k ≥ 2, p is linear in ‖z‖ on the interval λ_{k+1} ≤ ‖z‖ ≤ λ_k, with

    p(z) = (tan π/(k+2)) λ_{k+1}  when ‖z‖ = λ_{k+1},    p(z) = (tan π/(k+1)) λ_k  when ‖z‖ = λ_k.

LEMMA 4.6. p is continuous on R^{n-1} and differentiable at z = 0 with ∇p(0) = 0. In addition, if x = (z,x_n) ∈ S_{λ_1} with x ≠ 0, then x_n < p(z).

Proof. Since p is linear in ‖z‖ except at the points ‖z‖ = λ_k, k ≥ 2, the reader can easily establish the continuity of p by verifying that lim_{‖z‖→λ_k} p(z) = p(λ_k). Now observe that for λ_{k+1} ≤ ‖z‖ ≤ λ_k, k ≥ 2, both p and the comparison functions (tan π/(k+2))‖z‖ and (tan π/(k+1))‖z‖ are linear in ‖z‖, and the inequalities

(4.4)    (tan π/(k+2)) ‖z‖ ≤ p(z) ≤ (tan π/(k+1)) ‖z‖

hold at the two endpoints ‖z‖ = λ_{k+1} and ‖z‖ = λ_k; hence (4.4) holds on the whole interval. To show that p is differentiable at the origin with ∇p(0) = 0, it suffices to show that p(z)/‖z‖ → 0 as ‖z‖ → 0. This follows immediately from the last half of (4.4), since tan π/(k+1) → 0 as k → ∞.

Now let x = (z,x_n) be any point in S_{λ_1} with x ≠ 0. If x_n ≤ 0, then x_n < p(z) immediately, since p(z) > 0 for z ≠ 0 and since z = 0 forces x_n < 0 = p(0). Suppose then that x_n > 0. Since ‖x‖ < λ_1 < ε_1, it follows from (4.3) that x ∉ C_1, so that x_n/‖x‖ < sin(π/3) and hence x_n < (tan π/3)‖z‖; in particular z ≠ 0, and ‖x‖^2 = ‖z‖^2 + x_n^2 < 4‖z‖^2, so that ‖x‖ < 2‖z‖. If λ_2 ≤ ‖z‖, then x_n < (tan π/3)‖z‖ = p(z). If λ_{k+1} ≤ ‖z‖ < λ_k for some k ≥ 2, then ‖x‖ < 2‖z‖ < 2λ_k ≤ ε_k, and it follows from (4.3) that x ∉ C_k; hence x_n/‖x‖ < cos(π/2 - π/(k+2)) = sin(π/(k+2)), or x_n < (tan π/(k+2))‖z‖. This inequality together with (4.4) yields x_n < p(z), which completes the proof.
We shall now define the objective function f: R^n → R which has the promised local maximum on S at 0, which is differentiable at 0, and for which ∇f(0) = y. Let

    f(x) = x_n - p(z),

where x = (z,x_n). Note that f is continuous since p is continuous. Similarly, f is differentiable at the origin since p is, and since ∇p(0) = 0 we have ∇f(0) = (0,...,0,1) = y. Let x = (z,x_n) ∈ S_{λ_1}, x ≠ 0. Then from Lemma 4.6, f(z,x_n) = x_n - p(z) < 0. Since f(0) = 0, f attains the promised local maximum on S at the origin. This completes the proof of the theorem.
5. Remarks.

1. The function p constructed in the proof of the theorem could be modified so as to be differentiable on R^{n-1}. Then f = x_n - p(z) would likewise be differentiable on R^n. Thus the theorem holds if we restrict the problem (1.1) to objective functions differentiable on R^n.

2. If D = R^n, it is known [4], [12] that either of the following qualifications guarantees Lagrange regularity at x_0.
    i)  The vectors ∇r_i(x_0), i ∈ E, are linearly independent and there is an h ∈ L_0^⊥ such that ⟨h, ∇g_i(x_0)⟩ < 0 for i ∈ I_0.

    ii) The vectors ∇r_i(x_0), i ∈ E, and ∇g_i(x_0), i ∈ I_0, are all linearly independent.

It follows from the theorem that both of these qualifications imply T'(S,x_0) = (C_0 ∩ L_0^⊥)'. An example which shows that the converse implication does not hold is provided by the constraints

    g(x,y,z) = z - x^2 - y^2,    r(x,y,z) = -z.

In this case neither i) nor ii) holds at the origin, but (L_0^⊥ ∩ C_0)' = T'(S,0).
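The failure of i) and ii) in this example can be checked mechanically. The sketch below (an added illustration) verifies that the two gradients at the origin are linearly dependent, and uses a small linear program to confirm that no h ∈ L_0^⊥ with ⟨h, ∇g(0)⟩ < 0 exists; the equality (L_0^⊥ ∩ C_0)' = T'(S,0) itself is the assertion of the remark.

```python
import numpy as np
from scipy.optimize import linprog

# g(x, y, z) = z - x^2 - y^2  and  r(x, y, z) = -z, with gradients taken at the origin.
grad_g = np.array([0.0, 0.0, 1.0])
grad_r = np.array([0.0, 0.0, -1.0])

# ii) fails: the two gradients span only a one-dimensional space.
print(np.linalg.matrix_rank(np.vstack([grad_r, grad_g])))      # 1

# i) fails: maximize s subject to <h, grad_r> = 0, <h, grad_g> + s <= 0, -1 <= h_j <= 1, s >= 0.
# A positive optimum would produce h in L_0_perp with <h, grad_g> < 0; here the optimum is s = 0.
c = np.array([0.0, 0.0, 0.0, -1.0])            # variables (h1, h2, h3, s); maximize s
res = linprog(c,
              A_ub=np.array([[*grad_g, 1.0]]), b_ub=np.array([0.0]),
              A_eq=np.array([[*grad_r, 0.0]]), b_eq=np.array([0.0]),
              bounds=[(-1, 1), (-1, 1), (-1, 1), (0, None)],
              method="highs")
print(res.x[-1])                               # 0.0: no such h exists
```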
3. The assumption that at least one of the index sets I and E is not empty was made only for convenience. If both sets are empty, the conditions (2.1), (2.2), and (2.3) should be replaced with the condition ∇f(x_0) = 0. In this case the theorem correctly states that D is Lagrange regular at x_0 if and only if T'(D,x_0) = (C_0 ∩ L_0^⊥)' = {0}. This condition can be satisfied without x_0 being an interior point of D.
4. The equality constraints r_i(x) = 0 can be replaced by equivalent pairs of inequality constraints r_i(x) ≤ 0, -r_i(x) ≤ 0, and so every problem with mixed constraints can be considered as an inequality-constrained problem. If the original mixed problem (1.1) is transformed to an inequality-constrained problem in this way, and C̄_0 denotes the cone determined by the new active inequality constraints, then it is easily verified that L_0^⊥ ∩ C_0 = C̄_0. Therefore it is possible to conclude from the theorem that the two problems are equivalent in the sense that the Lagrange regularity of one implies the Lagrange regularity of the other. This result is not apparent from some of the previously known constraint qualifications (cf. the qualifications in Remark 2 above).
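The identity L_0^⊥ ∩ C_0 = C̄_0 can also be checked numerically for any particular data. In the sketch below (an added illustration; the gradient matrices are random test data), membership in the two cones is compared on a large random sample.

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(2, 4))      # rows: grad g_i(x0), i in I_0
R = rng.normal(size=(3, 4))      # rows: grad r_i(x0), i in E

def in_L0perp_cap_C0(x, tol=1e-9):
    return np.all(G @ x <= tol) and np.allclose(R @ x, 0.0, atol=tol)

def in_C0_bar(x, tol=1e-9):
    # Cone of the transformed problem: each r_i = 0 becomes the pair r_i <= 0, -r_i <= 0,
    # and both new constraints are active at x0, so the normals +grad r_i and -grad r_i enter.
    return np.all(G @ x <= tol) and np.all(R @ x <= tol) and np.all(-R @ x <= tol)

samples = rng.normal(size=(100000, 4))
print(all(in_L0perp_cap_C0(x) == in_C0_bar(x) for x in samples))   # True
```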
REFERENCES

[1]  J. Abadie, On the Kuhn-Tucker theorem, Nonlinear Programming, J. Abadie, ed., John Wiley and Sons, New York, 1967, pp. 21-36.

[2]  K. Arrow, L. Hurwicz, H. Uzawa, Constraint qualifications in maximization problems, Naval Res. Logist. Quart., VIII (1961), pp. 175-191.

[3]  A. Ben-Israel, Linear equations and inequalities on finite dimensional, real or complex, vector spaces: a unified theory, J. Math. Anal. Appl., 27 (1969), pp. 367-389.

[4]  M. Canon, C. Cullum, E. Polak, Constrained minimization problems in finite-dimensional spaces, SIAM J. Control, 4 (1966), pp. 528-547.

[5]  R. Cottle, A theorem of Fritz John in mathematical programming, RAND Memorandum RM-3858-PR, October 1963.

[6]  J. Evans, A note on constraint qualifications, Center for Mathematical Studies in Business and Economics Report 6917, Graduate School of Business, University of Chicago, Chicago, Ill., June 1969.

[7]  J. Farkas, Über die Theorie der einfachen Ungleichungen, J. Reine Angew. Math., 124 (1901), pp. 1-27.

[8]  W. Fenchel, Convex Cones, Sets and Functions, Department of Mathematics, Princeton University, Princeton, N.J., 1953.

[9]  D. Gale, The Theory of Linear Economic Models, McGraw-Hill Book Company, Inc., New York, 1960, p. 44.

[10] F. John, Extremum problems with inequalities as subsidiary conditions, Studies and Essays, Courant Anniversary Volume, Interscience, New York, 1948.

[11] H. Kuhn, A. Tucker, Nonlinear programming, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, J. Neyman, ed., University of California Press, Berkeley, 1951, pp. 481-492.

[12] O. Mangasarian, S. Fromovitz, The Fritz John necessary optimality conditions in the presence of equality and inequality constraints, J. Math. Anal. Appl., 17 (1967), pp. 37-47.

[13] H. Uzawa, Chapter 2, Studies in Linear and Nonlinear Programming, K. Arrow, L. Hurwicz, H. Uzawa, eds., Stanford University Press, Stanford, California, 1958.

[14] P. Varaiya, Nonlinear programming in Banach space, SIAM J. Appl. Math., 15 (1967), pp. 284-293.
FOOTNOTES

1/ For example, suppose the vectors ∇g_1(x_0),...,∇g_m(x_0) are linearly independent. Then if u_0 = 0, it follows that u_i = 0, i = 1,...,m, which contradicts the fact that u ≠ 0.

2/ cf. Remark 3.