
Serdica Math. J. 34 (2008), 607–618
FIRST-ORDER CONDITIONS FOR OPTIMIZATION
PROBLEMS WITH QUASICONVEX INEQUALITY
CONSTRAINTS
Ivan Ginchev, Vsevolod I. Ivanov
Communicated by R. Lucchetti
Abstract. The constrained optimization problem $\min f(x)$, $g_j(x) \le 0$ $(j = 1, \dots, p)$ is considered, where $f : X \to \mathbb{R}$ and $g_j : X \to \mathbb{R}$ are nonsmooth functions with domain $X \subset \mathbb{R}^n$. First-order necessary and first-order sufficient optimality conditions are obtained when the $g_j$ are quasiconvex functions. The paper has two main features: to treat nonsmooth problems it makes use of the Dini derivative, and to obtain more sensitive conditions it admits directionally dependent multipliers. Two cases are considered, in which the Lagrange function satisfies a non-strict and a strict inequality, respectively. In the case of a non-strict inequality pseudoconvex functions are involved, and in their terms some properties of convex programming problems are generalized. The efficiency of the obtained conditions is illustrated by an example.
2000 Mathematics Subject Classification: 90C46, 90C26, 26B25, 49J52.
Key words: Nonsmooth optimization, Dini directional derivatives, quasiconvex functions, pseudoconvex functions, quasiconvex programming, Kuhn-Tucker conditions.
1. Introduction. The constrained optimization problem

(1)    $\min f(x)$,  $g_j(x) \le 0$  $(j = 1, \dots, p)$

is investigated, where $f : X \to \mathbb{R}$ and $g_j : X \to \mathbb{R}$ $(j = 1, \dots, p)$ are nonsmooth functions with domain $X \subset \mathbb{R}^n$. The scope of the paper is to obtain first-order necessary and sufficient optimality conditions of Kuhn-Tucker type for problems with nonsmooth quasiconvex constraints, and in particular for problems with quasiconvex objective functions. Quasiconvex (quasiconcave) programming originates in the well-known paper of Arrow and Enthoven [1] and has been studied thereafter by various authors, e.g. in [10], [2], [3], [4], [7], [8], [12]. The main features of the paper are the following: to treat nonsmooth problems it makes use of Dini directional derivatives; to obtain more sensitive conditions it admits directionally dependent multipliers. This approach has been used in [6] for problems with locally Lipschitz data, making use of the set-valued Dini derivative. Here we show that for problems with quasiconvex constraints we can use instead the single-valued Dini derivative.
2. Basic definitions. For a set $X \subset \mathbb{R}^n$ and $x \in X$ we denote by $X(x)$ the set of admissible directions, that is, the set of all $u \in \mathbb{R}^n$ for which $t = 0$ is an accumulation point of the set $\{ t \in \mathbb{R}_+ \mid x + tu \in X \}$. Consider the function $f : X \to \mathbb{R}$. The lower Dini derivative $f^{(1)}_-(x, u)$ of $f$ at $x \in \operatorname{dom} f$ in direction $u \in X(x)$ is defined as an element of $\overline{\mathbb{R}} := \mathbb{R} \cup \{-\infty\} \cup \{+\infty\}$ by

$$f^{(1)}_-(x, u) = \liminf_{t \to 0^+} \frac{1}{t}\,\bigl( f(x + tu) - f(x) \bigr).$$

The role of the Dini derivatives in quasiconvex programming is stressed in [4].
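As an illustration of this definition (ours, not part of the original paper), the lower Dini derivative can be estimated numerically by sampling the difference quotient over a shrinking grid of step sizes; all names and grid parameters below are arbitrary choices.

    import numpy as np

    def lower_dini(f, x, u, t_min=1e-8, t_max=1e-2, n=200):
        # Crude numerical proxy for the lower Dini derivative
        #   f_-^{(1)}(x, u) = liminf_{t->0+} (f(x + t*u) - f(x)) / t:
        # sample the difference quotient on a geometric grid of small
        # step sizes and take the smallest value observed.
        x, u = np.asarray(x, dtype=float), np.asarray(u, dtype=float)
        ts = np.geomspace(t_min, t_max, n)
        return min((f(x + t * u) - f(x)) / t for t in ts)

    # Example: f(x) = -|x| has lower Dini derivative -1 at x = 0
    # in both directions u = 1 and u = -1.
    f = lambda x: -abs(float(x))
    print(lower_dini(f, 0.0, 1.0), lower_dini(f, 0.0, -1.0))

Such a finite sample can of course only suggest, not certify, the value of the liminf; we use it further on merely to cross-check the closed-form derivatives of the examples.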
Recall that a function $f : X \to \mathbb{R}$, $X \subset \mathbb{R}^n$, is said to be quasiconvex (strictly quasiconvex) if $X$ is convex and for all $x^0, x^1 \in X$, $x^0 \ne x^1$, such that $f(x^0) \ge f(x^1)$, and all $t \in (0, 1)$, it holds that $f((1-t)x^0 + tx^1) \le f(x^0)$ (respectively $f((1-t)x^0 + tx^1) < f(x^0)$). If these properties hold for a fixed $x^0 \in X$, we say that $f$ is quasiconvex (strictly quasiconvex) at $x^0$. Moreover, in the latter definition we do not suppose that $X$ is convex; the above properties are assumed to hold only for those $t \in (0, 1)$ for which $(1-t)x^0 + tx^1 \in X$ (this relaxed definition allows, for instance, Theorem 1 to be stated further on without the hypothesis that $X$ is convex).
Following Diewert [5], we use the Dini derivative to introduce pseudoconvexity for nonsmooth functions. We call the set $X \subset \mathbb{R}^n$ convex-like at $x^0$ if for each $x^1 \in X$ it holds that $x^1 - x^0 \in X(x^0)$. We say that the set $X$ is convex-like if it is convex-like at each $x^0 \in X$ (note that convex sets are convex-like). We say that the function $f : X \to \mathbb{R}$, where $X$ is convex-like at $x^0$, is pseudoconvex (strictly pseudoconvex) at $x^0 \in X$ if $f(x^0) > f(x^1)$ (respectively $f(x^0) \ge f(x^1)$) implies $f^{(1)}_-(x^0, x^1 - x^0) < 0$. The function $f : X \to \mathbb{R}$, where $X$ is convex-like, is said to be pseudoconvex (strictly pseudoconvex) if it is pseudoconvex (strictly pseudoconvex) at each point $x \in X$ (the definition of Diewert requires that the domain $X$ be convex; here we relax this requirement to $X$ being convex-like).
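As a side remark (not from the paper), the pointwise definitions just given can be probed numerically: pseudoconvexity at $x^0$ requires that $f(x^0) > f(x^1)$ force a negative lower Dini derivative in the direction $x^1 - x^0$. A minimal sketch reusing the lower_dini helper above, with the finite sample of test points entirely our choice:

    def pseudoconvex_at(f, x0, xs, strict=False, dini=lower_dini):
        # Test the defining implication of (strict) pseudoconvexity at x0
        # on a finite sample xs: f(x0) > f(x1) (resp. f(x0) >= f(x1))
        # must imply f_-^{(1)}(x0, x1 - x0) < 0.  A necessary check only.
        for x1 in xs:
            premise = f(x0) >= f(x1) if strict else f(x0) > f(x1)
            if premise and dini(f, x0, x1 - x0) >= 0:
                return False
        return True

    # g(x) = 0 (as in Example 3 below) is pseudoconvex but not strictly
    # pseudoconvex at x0 = 0: the strict premise g(0) >= g(x1) holds,
    # yet the derivative is 0 in every direction.
    g = lambda x: 0.0
    xs = [-1.0, -0.5, 0.5, 1.0]
    print(pseudoconvex_at(g, 0.0, xs))               # True
    print(pseudoconvex_at(g, 0.0, xs, strict=True))  # False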
3. Conditions with non-strict inequalities. We can write problem (1) in the form

(2)    $\min f(x)$,  $g(x) \le 0$,

accepting that $g(x) = (g_1(x), \dots, g_p(x))$ (lower indices will be used for the coordinates of a vector) and that $g(x) \le 0$ means that the coordinates satisfy this inequality. We put $g^{(1)}_-(x, u) = (g^{(1)}_{1-}(x, u), \dots, g^{(1)}_{p-}(x, u))$. The scalar product in $\mathbb{R}^p$ is denoted by $\langle \cdot\,, \cdot \rangle$, that is, $\langle \eta, z \rangle = \sum_{j=1}^{p} \eta_j z_j$ for $\eta, z \in \mathbb{R}^p$. Besides the usual algebraic operations with infinities, we accept that $(\pm\infty) \cdot 0 = 0 \cdot (\pm\infty) = 0$.
We write as usual $\mathbb{R}_+ = [0, +\infty)$ and $\overline{\mathbb{R}}_+ = \mathbb{R}_+ \cup \{+\infty\}$. If $r \in \mathbb{R}_+$ we put

$$\mathbb{R}_+[r] = \begin{cases} \mathbb{R}, & r > 0, \\ \mathbb{R}_+, & r = 0, \end{cases} \qquad\qquad \overline{\mathbb{R}}_+[r] = \begin{cases} \overline{\mathbb{R}}, & r > 0, \\ \overline{\mathbb{R}}_+, & r = 0. \end{cases}$$

Given $z^0 \in \mathbb{R}^p_+$, we introduce the notations

$$\mathbb{R}^p_+[z^0] = \mathbb{R}_+[z^0_1] \times \dots \times \mathbb{R}_+[z^0_p], \qquad \overline{\mathbb{R}}^p_+[z^0] = \overline{\mathbb{R}}_+[z^0_1] \times \dots \times \overline{\mathbb{R}}_+[z^0_p].$$
Recall that there exists a standard topology on $\overline{\mathbb{R}}$, in which a neighbourhood of $+\infty$ (of $-\infty$) is any set $U \subset \overline{\mathbb{R}}$ containing an interval of the type $(a, +\infty]$ (respectively $[-\infty, a)$) for some $a \in \mathbb{R}$. When $A_i \subset \overline{\mathbb{R}}$ $(i = 1, \dots, k)$, then $\operatorname{int} \prod_{i=1}^{k} A_i = \prod_{i=1}^{k} \operatorname{int} A_i$ is the interior of $\prod_{i=1}^{k} A_i$ with respect to the product topology on $\overline{\mathbb{R}}^k$. With these agreements the following lemma holds.
Lemma 1. Let $z^0 \in \mathbb{R}^p_+$ and let $\bar y \in \overline{\mathbb{R}}$, $\bar z \in \overline{\mathbb{R}}^p$. Then the following two conditions are equivalent:

(3)    $(\bar y, \bar z) \notin -\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[z^0] \bigr)$

and

(4)    $\exists\, (\xi^0, \eta^0) \in \mathbb{R}_+ \times \mathbb{R}^p_+ :\ (\xi^0, \eta^0) \ne (0, 0),\ \xi^0 = 0 \text{ if } \bar y = -\infty,\ \eta^0_j = 0 \text{ if } \bar z_j = -\infty,\ \eta^0_j z^0_j = 0\ (j = 1, \dots, p), \text{ and } \xi^0 \bar y + \langle \eta^0, \bar z \rangle \ge 0.$
P r o o f. If $r \in \overline{\mathbb{R}}$ we put $A(r) = \overline{\mathbb{R}}_+$ when $r \ge 0$, and $A(r) = \overline{\mathbb{R}}$ when $r < 0$. Now it is clear that condition (3) is satisfied if and only if the set $A(\bar y) \times \prod_{j=1}^{p} A(\bar z_j)$ is separated from $-\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[z^0] \bigr)$, both sets lying in $\overline{\mathbb{R}}^{p+1}$. Applying the separation theorem to these two sets, we see that (3) implies (4). Conversely, when condition (4) is satisfied, then (3) follows immediately, since $\xi^0 y + \langle \eta^0, z \rangle < 0$ for $(y, z) \in -\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[z^0] \bigr)$. $\square$

Remark 1. In (4), due to $z^0 \in \mathbb{R}^p_+$ and $\eta^0 \in \mathbb{R}^p_+$, the slackness condition $\eta^0_j z^0_j = 0$ $(j = 1, \dots, p)$ can be represented in the equivalent form $\langle \eta^0, z^0 \rangle = 0$. The sum $\xi^0 \bar y + \langle \eta^0, \bar z \rangle = \xi^0 \bar y + \sum_{j=1}^{p} \eta^0_j \bar z_j$ always makes sense, since it has no addends equal to $-\infty$. The proof of Lemma 1 leads to a practical rule for choosing the multipliers $\xi^0$ and $\eta^0 = (\eta^0_1, \dots, \eta^0_p)$. Namely, we can put

$$\xi^0 = \begin{cases} 1, & \bar y \ge 0, \\ 0, & \bar y < 0, \end{cases} \qquad\text{and}\qquad \eta^0_j = \begin{cases} 1, & z^0_j = 0,\ \bar z_j \ge 0, \\ 0, & z^0_j = 0,\ \bar z_j < 0, \\ 0, & z^0_j > 0. \end{cases}$$
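For illustration (our sketch, not the authors' code), this practical rule translates directly into a routine which, given $\bar y$, $\bar z$ and $z^0$, returns candidate multipliers; when condition (3) holds they satisfy (4), and when it fails the rule may return the zero pair.

    import math

    def multipliers(y_bar, z_bar, z0):
        # Practical rule of Remark 1: xi0 = 1 iff y_bar >= 0;
        # eta0_j = 1 iff the j-th constraint is active (z0_j = 0)
        # and z_bar_j >= 0; all other multipliers are 0.
        xi0 = 1.0 if y_bar >= 0 else 0.0
        eta0 = [1.0 if (zj == 0 and zbj >= 0) else 0.0
                for zj, zbj in zip(z0, z_bar)]
        return xi0, eta0

    # Data of Example 1 below, direction u = 1: y_bar = -1,
    # z_bar = (+inf,), z0 = -g(x0) = (1,).  The constraint is inactive
    # and y_bar < 0, so the rule degenerates to the zero pair,
    # reflecting the failure of condition (5) there.
    print(multipliers(-1.0, [math.inf], [1.0]))  # (0.0, [0.0])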
The next theorem uses the following notion of a minimizer. We say that the feasible point $x^0$ is a radial minimizer (strict radial minimizer) of (1) if for every admissible direction $u \in X(x^0)$ there exists $\delta(u) > 0$ such that $f(x^0) \le f(x^0 + tu)$ (respectively $f(x^0) < f(x^0 + tu)$) whenever $0 < t < \delta(u)$ and the point $x^0 + tu$ is feasible. Obviously, each local (strict local) minimizer of (1) is a radial (strict radial) minimizer.

Recall that, given a feasible point $x^0 \in X$, the set of active indices for (1) at $x^0$ is defined by $J(x^0) = \{ j \mid g_j(x^0) = 0 \}$.
Theorem 1 (Necessary conditions, non-strict inequalities). Consider problem (1) and let $x^0$ be a radial minimizer. Let the functions $g_j$ $(j = 1, \dots, p)$ be continuous at $x^0$ when $j \notin J(x^0)$ and quasiconvex at $x^0$ when $j \in J(x^0)$. Then for each $u \in X(x^0)$ the following condition is satisfied:

(5)    $\bigl( f^{(1)}_-(x^0, u),\ g^{(1)}_-(x^0, u) \bigr) \notin -\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[-g(x^0)] \bigr).$
P r o o f. Suppose, on the contrary, that for some $u^0 \in X(x^0)$ we have $f^{(1)}_-(x^0, u^0) \in -\operatorname{int}\,\overline{\mathbb{R}}_+$ and $g^{(1)}_-(x^0, u^0) \in -\operatorname{int}\,\overline{\mathbb{R}}^p_+[-g(x^0)]$. Let $f^{(1)}_-(x^0, u^0) = \lim_k (1/t_k)(y^k - y^0)$ and $g^{(1)}_{j-}(x^0, u^0) = \lim_k (1/s_{jk})(z^{jk} - z^{j0})$ for some sequences $t_k \to 0^+$ and $s_{jk} \to 0^+$ $(j = 1, \dots, p)$, where $y^k = f(x^0 + t_k u^0)$, $y^0 = f(x^0)$, $z^{jk} = g_j(x^0 + s_{jk} u^0)$, $z^{j0} = g_j(x^0)$. Passing to a subsequence of $\{t_k\}$, we may assume that $t_k < \min(s_{1k}, \dots, s_{pk})$. Now we prove that the points $x^0 + t_k u^0$ are feasible for all sufficiently large $k$. The condition $x^0 + t_k u^0 \in X$ is imposed implicitly by taking the value $f(x^0 + t_k u^0)$. Since $f$ and $g_j$ $(j = 1, \dots, p)$ are supposed to have the same domain $X$, the values $g_j(x^0 + t_k u^0)$ are defined. It remains to prove that $g_j(x^0 + t_k u^0) \le 0$ $(j = 1, \dots, p)$ for all sufficiently large $k$. When $j \in J(x^0)$ we have $g_j(x^0 + s_{jk} u^0) \le 0 = g_j(x^0)$ for all sufficiently large $k$. Since $g_j$ is quasiconvex at $x^0$ and $t_k < s_{jk}$, this gives $g_j(x^0 + t_k u^0) \le 0$. When $j \notin J(x^0)$, we have $g_j(x^0) < 0$. Now the continuity of $g_j$ at $x^0$ implies $g_j(x^0 + t_k u^0) < 0$ for all sufficiently large $k$. Thus, for all sufficiently large $k$ the point $x^0 + t_k u^0$ is feasible, and at the same time $f(x^0 + t_k u^0) - f(x^0) = y^k - y^0 < 0$, which contradicts the hypothesis that $x^0$ is a radial minimizer. $\square$

Remark 2. Condition (5) will be referred to as the primal form condition. On the basis of Lemma 1 it is equivalent to the following dual form condition:

(6)    $\exists\, (\xi^0, \eta^0) \in \mathbb{R}_+ \times \mathbb{R}^p_+ :\ (\xi^0, \eta^0) \ne (0, 0),$
       $\xi^0 = 0$ if $f^{(1)}_-(x^0, u) = -\infty$,
       $\eta^0_j = 0$ if $g^{(1)}_{j-}(x^0, u) = -\infty$ $(j = 1, \dots, p)$,
       $\eta^0_j g_j(x^0) = 0$ $(j = 1, \dots, p)$,
       and $\xi^0 f^{(1)}_-(x^0, u) + \sum_{j=1}^{p} \eta^0_j g^{(1)}_{j-}(x^0, u) \ge 0$.

The multipliers $\xi^0$ and $\eta^0_j$ $(j = 1, \dots, p)$ can be chosen according to Remark 1.
Remark 3. The last row in (6) is a non-strict inequality. Since only such conditions are considered in this section, it is entitled “Conditions with non-strict inequalities”. In the next section we deal with similar conditions, but with strict inequalities.

The following example shows that, without the hypothesis that $g_j$ is continuous at $x^0$ when $j \notin J(x^0)$, Theorem 1 is not true.
Example 1. Consider problem (2) with $f, g : \mathbb{R} \to \mathbb{R}$ given by $f(x) = -x$ and

$$g(x) = \begin{cases} -1, & x \le 0, \\ 1, & x > 0. \end{cases}$$

The function $g$ is quasiconvex. It holds that $g(x^0) < 0$ and $g$ is not continuous at $x^0$. The point $x^0 = 0$ is a radial (and global) minimizer, but condition (5) is not satisfied. Indeed, for $u = 1$ it holds that

$$\bigl( f^{(1)}_-(x^0, u),\ g^{(1)}_-(x^0, u) \bigr) = (-1, +\infty) \in -\operatorname{int}\,(\overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}) = -\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}_+[-g(x^0)] \bigr).$$
If in Theorem 1 we replace the primal form condition (5) with the equivalent dual form condition (6), we observe that, in contrast to the classical theory,
the multipliers depend on the directions. The next example shows that, when
treating nonsmooth problems, the hypotheses of Theorem 1 do not imply condition (6) with directionally independent multipliers.
Example 2. Consider problem (2) with $f, g : \mathbb{R} \to \mathbb{R}$ given by

$$f(x) = \begin{cases} x, & x \ge 0, \\ 2x, & x < 0, \end{cases} \qquad\qquad g(x) = \begin{cases} -2x, & x \ge 0, \\ -x, & x < 0. \end{cases}$$

The functions $f$ and $g$ are continuous and strictly quasiconvex (also strictly pseudoconvex). The set of feasible points is $\mathbb{R}_+$. Put $x^0 = 0$. Obviously $x^0$ is a global minimizer. Then condition (6) is satisfied by virtue of Theorem 1, but it cannot be satisfied with directionally independent multipliers.

Indeed, assume to the contrary that condition (6) is satisfied with some directionally independent multipliers $(\xi^0, \eta^0)$. For $u \ge 0$ it holds that $f^{(1)}_-(x^0, u) = u$, $g^{(1)}_-(x^0, u) = -2u$, whence in particular we should have

$$\xi^0 f^{(1)}_-(x^0, 1) + \eta^0 g^{(1)}_-(x^0, 1) = \xi^0 - 2\eta^0 \ge 0.$$

Similarly, for $u \le 0$ it holds that $f^{(1)}_-(x^0, u) = 2u$, $g^{(1)}_-(x^0, u) = -u$, whence in particular we should have

$$\xi^0 f^{(1)}_-(x^0, -1) + \eta^0 g^{(1)}_-(x^0, -1) = -2\xi^0 + \eta^0 \ge 0.$$

Adding the two inequalities we obtain $-(\xi^0 + \eta^0) \ge 0$, which obviously contradicts $\xi^0 \ge 0$, $\eta^0 \ge 0$, $(\xi^0, \eta^0) \ne (0, 0)$.
Theorem 2 (Sufficient conditions, non-strict inequalities). Consider problem (1) with $X$ convex-like at $x^0$. Let the functions $g_j$, $j \in J(x^0)$, be strictly pseudoconvex at $x^0$, and let $f$ be pseudoconvex (strictly pseudoconvex) at $x^0$. Suppose that for each $u \in X(x^0)$ condition (5) is satisfied. Then $x^0$ is a global minimizer (strict global minimizer).

P r o o f. Assume, on the contrary, that $x^0$ is not a global (strict global) minimizer. Then there exists a feasible point $x^1 \ne x^0$ such that $f(x^1) - f(x^0) < 0$ (respectively $f(x^1) - f(x^0) \le 0$). Since $f$ is pseudoconvex (strictly pseudoconvex) at $x^0$, it holds that $f^{(1)}_-(x^0, u) < 0$ with $u = x^1 - x^0$. Therefore condition (5) gives that $g^{(1)}_-(x^0, u) \notin -\operatorname{int}\,\overline{\mathbb{R}}^p_+[-g(x^0)]$. On the other hand, for $j \in J(x^0)$ we have $g_j(x^1) \le 0 = g_j(x^0)$. Since $g_j$ is strictly pseudoconvex at $x^0$, we have $g^{(1)}_{j-}(x^0, u) < 0$, while for $j \notin J(x^0)$ we have $\overline{\mathbb{R}}_+[-g_j(x^0)] = \overline{\mathbb{R}}$, so the corresponding inclusion is automatic. This gives $g^{(1)}_-(x^0, u) \in -\operatorname{int}\,\overline{\mathbb{R}}^p_+[-g(x^0)]$, a contradiction. $\square$

The following example shows that in Theorem 2 the strict pseudoconvexity requirement for the constraint functions $g_j$ cannot be relaxed to mere pseudoconvexity.
Example 3. Consider problem (2) with $f, g : \mathbb{R} \to \mathbb{R}$ given by $f(x) = -x$ and $g(x) = 0$. Put $x^0 = 0$. The function $f$ is strictly pseudoconvex, and $g$ is pseudoconvex but not strictly pseudoconvex at $x^0$. The point $x^0$ is not a global minimizer, while condition (5) is satisfied, since

$$\bigl( f^{(1)}_-(x^0, u),\ g^{(1)}_-(x^0, u) \bigr) = (-u, 0) \notin -\operatorname{int}\,(\overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}_+) = -\operatorname{int}\,\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}_+[-g(x^0)] \bigr).$$
The following example shows that the pseudoconvexity requirement for the objective function is also essential for Theorem 2 and cannot be reduced to (strict) quasiconvexity.

Example 4. Consider problem (2) with $f, g : \mathbb{R} \to \mathbb{R}$ given by $f(x) = x^3$ and $g(x) = x$. Put $x^0 = 0$. The functions $f$ and $g$ are strictly quasiconvex, and $g$ is strictly pseudoconvex at $x^0$, but $f$ is not. Since $f^{(1)}_-(x^0, u) = 0$ and $g^{(1)}_-(x^0, u) = u$, condition (5) is satisfied (now $f^{(1)}_-(x^0, u) \notin -\operatorname{int}\,\overline{\mathbb{R}}_+$), but $x^0$ is not a global minimizer.
The following theorem is a consequence of Theorems 1 and 2. Strengthening in it the pseudoconvexity and strict pseudoconvexity requirements to convexity and strict convexity, respectively, we obtain a known classical result.

Theorem 3. Let in problem (1) the set $X$ be convex (or, more generally, convex-like), the function $f$ be pseudoconvex (strictly pseudoconvex), and the functions $g_j$ $(j = 1, \dots, p)$ be continuous and strictly pseudoconvex. Then a point $x^0 \in X$ is a global minimizer of problem (1) if and only if $x^0$ satisfies condition (5) for each $u \in X(x^0)$.
The examples given so far serve to clarify to what extent the hypotheses of the theorems are essential. Now we give an example to illustrate that the obtained results are effective in solving complex nonsmooth problems (the nonsmoothness here is due to the appearance of the min function).
Example 5. Solve problem (2) with $f : \mathbb{R}^2 \to \mathbb{R}$ and $g : \mathbb{R}^2 \to \mathbb{R}$ given by

$$f(x_1, x_2) = \min\bigl( x_1^2 + 8x_1x_2 + 16x_2^2 - 8x_1 - 32x_2 + 20,\ x_1 + 4x_2 \bigr),$$
$$g(x_1, x_2) = -x_1 - x_2 + \sqrt{(x_1 - x_2)^2 + 4}.$$

The function $f$ can be written in the form

$$f(x_1, x_2) = \begin{cases} x_1^2 + 8x_1x_2 + 16x_2^2 - 8x_1 - 32x_2 + 20, & 4 \le x_1 + 4x_2 \le 5, \\ x_1 + 4x_2, & \text{otherwise}. \end{cases}$$

There are no solutions among the points outside the lines $\ell_1 : x_1 + 4x_2 = 4$ and $\ell_2 : x_1 + 4x_2 = 5$. We leave this case aside, since it can be checked easily with the theory given here, but also with a classical approach (near such points both $f$ and $g$ are smooth).
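The rewriting is easy to see: with $s = x_1 + 4x_2$ the quadratic expression equals $s^2 - 8s + 20$, and $s^2 - 8s + 20 \le s$ exactly when $(s - 4)(s - 5) \le 0$. A quick randomized sanity check of this (ours, not part of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    for x1, x2 in rng.uniform(-10, 10, size=(1000, 2)):
        s = x1 + 4 * x2
        quad = x1**2 + 8*x1*x2 + 16*x2**2 - 8*x1 - 32*x2 + 20  # = s*s - 8*s + 20
        # The quadratic branch wins in the min exactly when 4 <= s <= 5.
        assert np.isclose(min(quad, s), quad if 4 <= s <= 5 else s)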
At the points $x \in \ell_1$ we have

$$f^{(1)}_-(x, u) = \begin{cases} 0, & u_1 + 4u_2 \ge 0, \\ u_1 + 4u_2, & u_1 + 4u_2 < 0, \end{cases} \qquad g^{(1)}_-(x, u) = -u_1 - u_2 + \frac{(x_1 - x_2)(u_1 - u_2)}{\sqrt{(x_1 - x_2)^2 + 4}}.$$

Now the sign of $f^{(1)}_-(x, u)$ is easily estimated, and from Remark 1 we can limit the choice of $\xi^0$ to:

$$\xi^0 = 1 \ \text{ for } \ u_1 + 4u_2 \ge 0 \quad (\text{then } f^{(1)}_-(x, u) \ge 0), \qquad \xi^0 = 0 \ \text{ for } \ u_1 + 4u_2 < 0 \quad (\text{then } f^{(1)}_-(x, u) < 0).$$

According to Remark 1 the choice of $\eta^0$ is conditioned by the sign of $g^{(1)}_-(x, u)$ and the solution of the system

$$-x_1 - x_2 + \sqrt{(x_1 - x_2)^2 + 4} = 0, \qquad x_1 + 4x_2 = 4$$

(that is, $g(x) = 0$, $x \in \ell_1$). The latter has the unique solution $x^0 = (2, 1/2)$. At $x^0$ we have $g^{(1)}_-(x^0, u) = -\dfrac{2}{5}\,u_1 - \dfrac{8}{5}\,u_2$, which gives $g^{(1)}_-(x^0, u) \ge 0$ for $u_1 + 4u_2 \le 0$.
Therefore the choice of $\eta^0$ can be restricted to:

$$\eta^0 = \begin{cases} 1, & x = x^0,\ u_1 + 4u_2 \le 0, \\ 0, & x = x^0,\ u_1 + 4u_2 > 0, \\ 0, & x \in \ell_1 \setminus \{x^0\}. \end{cases}$$

Now we see that at $x^0 = (2, 1/2)$, for all directions $u \in \mathbb{R}^2$, condition (6) can be satisfied (in which case we call the point $x^0$ stationary) with the choice:

$$(\xi^0, \eta^0) = \begin{cases} (1, 0), & u_1 + 4u_2 \ge 0, \\ (0, 1), & u_1 + 4u_2 < 0. \end{cases}$$

None of the remaining points $x \in \ell_1 \setminus \{x^0\}$ is stationary, since for any such point $x$ we can choose at least one direction $u$ for which the obtained $\xi^0$ and $\eta^0$ give the zero pair $(\xi^0, \eta^0) = (0, 0)$.

Similarly, in the case $x \in \ell_2$ we see that there are no stationary points. Thus $x^0 = (2, 1/2)$ is the only point which satisfies the necessary condition of Theorem 1. Since, as can be easily checked, the function $f$ is pseudoconvex at $x^0$ and $g$ is strictly pseudoconvex at $x^0$, we conclude that $x^0 = (2, 1/2)$ is a global minimizer of the considered problem, and its only radial minimizer.
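As a numerical cross-check (ours, not part of the paper), one can test the dual condition (6) at $x^0 = (2, 1/2)$ on randomly sampled directions, using the closed-form Dini derivatives computed above and the directional multiplier choice just described:

    import numpy as np

    def f1(u):  # f_-^{(1)}(x, u) at points of the line l1
        s = u[0] + 4 * u[1]
        return 0.0 if s >= 0 else s

    def g1(u):  # g_-^{(1)}(x0, u) at x0 = (2, 1/2)
        return -(2 / 5) * u[0] - (8 / 5) * u[1]

    rng = np.random.default_rng(1)
    for u in rng.normal(size=(1000, 2)):
        xi0, eta0 = (1.0, 0.0) if u[0] + 4 * u[1] >= 0 else (0.0, 1.0)
        # Dual condition (6): the Lagrange combination is nonnegative.
        assert xi0 * f1(u) + eta0 * g1(u) >= 0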
Let us note that $f$ is pseudoconvex and $g$ is strictly pseudoconvex (not only at $x^0$); therefore, looking for solutions of the considered problem, we can refer to Theorem 3.
4. Conditions with strict inequalities. In this section we show that the first-order conditions with strict inequalities are related to radial isolated minimizers.

We say that the feasible point $x^0 \in X$ is a radial isolated minimizer (of order 1) for problem (1) if for all $u \in X(x^0)$ there exist positive reals $\delta = \delta(u)$ and $A = A(u)$ such that the inequality

$$f(x^0 + tu) \ge f(x^0) + A\,t\,\|u\|$$

is satisfied for all feasible points $x^0 + tu$ with $0 \le t < \delta(u)$. If the reals $\delta$ and $A$ can be chosen independent of $u$, then $x^0$ is called an isolated minimizer for (1).
The following lemma is an analogue of Lemma 1 and is proved in a similar way.

Lemma 2. Let $z^0 \in \mathbb{R}^p_+$ and let $\bar y \in \overline{\mathbb{R}}$, $\bar z \in \overline{\mathbb{R}}^p$. Then the following two conditions are equivalent:

(7)    $(\bar y, \bar z) \notin -\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[z^0] \bigr)$

and

(8)    $\exists\, (\xi^0, \eta^0) \in \mathbb{R}_+ \times \mathbb{R}^p_+ :\ (\xi^0, \eta^0) \ne (0, 0),\ \xi^0 = 0 \text{ if } \bar y = -\infty,\ \eta^0_j = 0 \text{ if } \bar z_j = -\infty,\ \langle \eta^0, z^0 \rangle = 0, \text{ and } \xi^0 \bar y + \langle \eta^0, \bar z \rangle > 0.$
Theorem 4 (Sufficient conditions, strict inequalities). Let $x^0 \in X$ be a feasible point for problem (1). Suppose that for all $u \in X(x^0) \setminus \{0\}$ the following condition is satisfied:

(9)    $\bigl( f^{(1)}_-(x^0, u),\ g^{(1)}_-(x^0, u) \bigr) \notin -\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[-g(x^0)] \bigr).$

Then $x^0$ is a radial isolated minimizer of (1). If, in addition, $X$ is convex, $f$ is quasiconvex (strictly quasiconvex) and the $g_j$ $(j = 1, \dots, p)$ are quasiconvex, then $x^0$ is a global (strict global) minimizer of (1).
P r o o f. Assume, on the contrary, that $x^0$ is not a radial isolated minimizer of (1). Choose a sequence $\varepsilon_k \to 0^+$. From this assumption, there exist $u \in X(x^0) \setminus \{0\}$ and a sequence $t_k \to 0^+$ such that the points $x^0 + t_k u$ are feasible and $(1/t_k)\bigl( f(x^0 + t_k u) - f(x^0) \bigr) < \varepsilon_k \|u\|$. The latter gives $f^{(1)}_-(x^0, u) \le 0$, that is, $f^{(1)}_-(x^0, u) \in -\overline{\mathbb{R}}_+$. When $g_j(x^0) = 0$ we have similarly $(1/t_k)\bigl( g_j(x^0 + t_k u) - g_j(x^0) \bigr) \le 0$. Hence $g^{(1)}_{j-}(x^0, u) \le 0$, that is, $g^{(1)}_{j-}(x^0, u) \in -\overline{\mathbb{R}}_+ = -\overline{\mathbb{R}}_+[-g_j(x^0)]$. When $g_j(x^0) < 0$, then $\overline{\mathbb{R}}_+[-g_j(x^0)] = \overline{\mathbb{R}}$ and again $g^{(1)}_{j-}(x^0, u) \in -\overline{\mathbb{R}}_+[-g_j(x^0)]$. These reasonings show that $\bigl( f^{(1)}_-(x^0, u), g^{(1)}_-(x^0, u) \bigr) \in -\bigl( \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}^p_+[-g(x^0)] \bigr)$, which contradicts hypothesis (9).

Suppose now that the mentioned additional assumptions are fulfilled, and assume, on the contrary, that $x^0$ is not a global (strict global) minimizer. Then there exists a feasible point $x^1 \in X \setminus \{x^0\}$ such that $f(x^0) > f(x^1)$ (respectively $f(x^0) \ge f(x^1)$). Since the $g_j$ $(j = 1, \dots, p)$ are quasiconvex, the points $x^0 + tu$ with $u = x^1 - x^0$, $0 \le t \le 1$, are feasible. Since $x^0$, as proved above, is a radial minimizer of (1), the point $t^0 = 0$ is a local minimizer of the quasiconvex (strictly quasiconvex) function $\varphi(t) = f(x^0 + tu)$, $0 \le t \le 1$, and therefore its global (strict global) minimizer. In particular $f(x^0) = \varphi(0) \le \varphi(1) = f(x^1)$ (respectively $f(x^0) = \varphi(0) < \varphi(1) = f(x^1)$), a contradiction. $\square$

Remark 4. On the basis of Lemma 2, the primal form condition (9) is equivalent to the following dual form condition:

(10)    $\exists\, (\xi^0, \eta^0) \in \mathbb{R}_+ \times \mathbb{R}^p_+ :\ (\xi^0, \eta^0) \ne (0, 0),$
        $\xi^0 = 0$ if $f^{(1)}_-(x^0, u) = -\infty$,
        $\eta^0_j = 0$ if $g^{(1)}_{j-}(x^0, u) = -\infty$ $(j = 1, \dots, p)$,
        $\eta^0_j g_j(x^0) = 0$ $(j = 1, \dots, p)$,
        and $\xi^0 f^{(1)}_-(x^0, u) + \sum_{j=1}^{p} \eta^0_j g^{(1)}_{j-}(x^0, u) > 0$.
As an application, consider the problem in Example 2 with $x^0 = 0$, putting $\xi^0 = 3$, $\eta^0 = 1$ when $u > 0$, and $\xi^0 = 1$, $\eta^0 = 3$ when $u < 0$. Now it is easy to verify that

(11)    $\xi^0 f^{(1)}_-(x^0, u) + \eta^0 g^{(1)}_-(x^0, u) = |u| > 0$  for all  $u \in \mathbb{R} \setminus \{0\}$.

On the basis of Theorem 4 we conclude that $x^0$ is a strict global minimizer, hence the unique minimizer, of the considered problem.
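Identity (11) can also be confirmed numerically with the lower_dini helper from Section 2 (our sketch; the sampled directions are arbitrary):

    # f and g of Example 2.
    f = lambda x: float(x) if x >= 0 else 2.0 * float(x)
    g = lambda x: -2.0 * float(x) if x >= 0 else -float(x)

    for u in (0.5, 1.0, 2.0, -0.5, -1.0, -2.0):
        xi0, eta0 = (3.0, 1.0) if u > 0 else (1.0, 3.0)
        val = xi0 * lower_dini(f, 0.0, u) + eta0 * lower_dini(g, 0.0, u)
        assert abs(val - abs(u)) < 1e-9  # equals |u| > 0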
For the next theorem, which is a converse of Theorem 4, we need the following constraint qualification of Kuhn-Tucker type:

$Q^0_-(x^0)$: If $x^0$ is feasible and $g^{(1)}_{j-}(x^0, u) \in -\overline{\mathbb{R}}_+[-g_j(x^0)]$ for $j = 1, \dots, p$, then there exists $\bar t > 0$ such that $x^0 + \bar t u$ is a feasible point for (1).
Theorem 5 (Necessary conditions, strict inequalities). Let the set $X$ be convex, the functions $g_j$ $(j = 1, \dots, p)$ be quasiconvex, and the feasible point $x^0$ be a radial isolated minimizer of problem (1). Suppose that the constraint qualification $Q^0_-(x^0)$ holds. Then for all $u \in X(x^0) \setminus \{0\}$ condition (9) is satisfied.
P r o o f. Condition (9) is certainly true when $g^{(1)}_{j-}(x^0, u) \notin -\overline{\mathbb{R}}_+[-g_j(x^0)]$ for some $j$. The alternative is that $g^{(1)}_{j-}(x^0, u) \in -\overline{\mathbb{R}}_+[-g_j(x^0)]$ for $j = 1, \dots, p$. Then the assumed constraint qualification implies the existence of a positive real $\bar t$ such that the point $x^0 + \bar t u$ is feasible. From the quasiconvexity of the $g_j$ it follows that all points $x^0 + tu$, $0 \le t \le \bar t$, are feasible. Since $x^0$ is a radial isolated minimizer, there exists a real $A > 0$ such that $(1/t)\bigl( f(x^0 + tu) - f(x^0) \bigr) \ge A\,\|u\|$ is satisfied for all sufficiently small positive $t$. This gives $f^{(1)}_-(x^0, u) \ge A\,\|u\| > 0$. Hence $f^{(1)}_-(x^0, u) \notin -\overline{\mathbb{R}}_+$, which verifies (9) in this case. $\square$
As in the classical Kuhn-Tucker conditions [9] (compare also with Mangasarian [11]), the sense of the constraint qualification $Q^0_-(x^0)$ is, roughly speaking, that one cannot leave the set of feasible points at $x^0$ along tangent directions. The following example shows that $Q^0_-(x^0)$ is essential for Theorem 5.
Example 6. Consider problem (2) with $f, g : \mathbb{R} \to \mathbb{R}$ given by $f(x) = g(x) = x^2$, and let $x^0 = 0$. The function $g$ is quasiconvex. The point $x^0$, as the only feasible point, is a radial isolated minimizer. It holds that $f^{(1)}_-(x^0, u) = g^{(1)}_-(x^0, u) = 0$ for all $u \in \mathbb{R}$, whence obviously condition (9) is not satisfied.
REFERENCES
[1] K. J. Arrow, A. C. Enthoven. Quasi-concave programming. Econometrica 29 (1961), 779–800.
[2] J. Bair. Un problème de dualité en programmation quasi-concave. Optimization 19, 4 (1988), 461–466.
[3] C. R. Bector, S. Chandra, M. K. Bector. Sufficient optimality conditions and duality for a quasiconvex programming problem. J. Optim. Theory Appl. 59, 2 (1988), 209–221.
[4] J.-P. Crouzeix. Some properties of Dini-derivatives of quasiconvex functions. New Trends in Mathematical Programming, 41–57, Appl. Optim., vol.
13, Kluwer Acad. Publ., Boston MA, 1998.
[5] W. E. Diewert. Alternative characterizations of six kinds of quasiconvexity
in the nondifferentiable case with applications to nonsmooth programming.
In: Generalized Concavity in Optimization and Economics (Eds S. Schaible,
W. T. Ziemba) Academic Press, New York 1981, 51–95.
[6] I. Ginchev, A. Guerraggio, M. Rocca. First-order conditions for $C^{0,1}$ constrained vector optimization. In: Variational Analysis and Applications
(Eds F. Giannessi, A. Maugeri) Nonconvex Optim. Appl. vol. 79, Springer,
New York, 2005, 427–450.
[7] G. Giorgi. Quasiconvex programming revisited. Calcolo 21, 4 (1984), 307–
316.
[8] R. N. Kaul. Programming with semilocally quasiconvex functions.
Opsearch 23, 1 (1986), 7–14.
[9] H. W. Kuhn, A. W. Tucker. Nonlinear programming. In: Proc. 2nd Berkeley Symposium on Mathematical Statistics and Probability (Ed. J. Neyman), University of California Press, Berkeley CA, 1951, 481–492.
[10] D. Luenberger. Quasiconvex programming. SIAM J. Appl. Math. 16 (1968), 1090–1095.
[11] O. L. Mangasarian. Nonlinear programming. Repr. of the orig. 1969, Classics in Applied Mathematics vol. 10, SIAM, Philadelphia PA, 1994.
[12] J. P. Penot. Lagrangian approach to quasiconvex programming. J. Optim.
Theory Appl. 117, 3 (2003), 637–647.
Ivan Ginchev
University of Insubria
Department of Economics
Via Monte Generoso 71
21100 Varese, Italy
(on leave from Technical University of Varna, Bulgaria)
e-mail: [email protected]
Vsevolod I. Ivanov
Technical University of Varna
Department of Mathematics
Studentska Str. 1
9010 Varna, Bulgaria
e-mail: [email protected]
Received July 8, 2007