International Journal of Mathematics & Applications
Vol. 4, No. 1, (June 2011), pp. 13-20
GENERALIZED CONVEXITY AND INVEXITY IN
OPTIMIZATION THEORY
S. K. GHOSH, D. K. DALAI AND S. R. DAS
ABSTRACT: In this paper, we present a survey of recent studies concerning convexity and
invexity in optimization theory.
Keywords: Generalized convexity and invexity, optimization theory, Kuhn-Tucker conditions.
1. INTRODUCTION
Hanson [4], in a seminal paper, introduced the class of differentiable functions from Rn
into R for which there exists a vector function η (x, u) ∈ Rn such that
f (x, y) − f (u, v) ≥ η (x, y; u, v) ⋅ ∇f (u, v),     (1)
where ∇ denotes the gradient. The class of functions satisfying (1) was later called invex
by Craven [3], and it represents an important concept in optimization theory because it is
a very broad generalization of convexity. Craven and Glover [5] showed that the class of
invex functions is exactly the class of functions whose stationary points are global minima.
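Since every differentiable convex function satisfies (1) with the trivial kernel η (x, u) = x − u, the invexity inequality lends itself to a numerical spot check. The sketch below is only illustrative (the helper name and the sampling scheme are assumptions, and random sampling can refute but never prove invexity):

```python
import numpy as np

def invexity_spot_check(f, grad_f, eta, dim, trials=10_000, seed=0):
    """Test the invexity inequality (1): f(x) - f(u) >= eta(x, u) . grad_f(u)
    on random point pairs. False disproves invexity for this eta;
    True merely fails to refute it."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, u = rng.uniform(-5, 5, dim), rng.uniform(-5, 5, dim)
        if f(x) - f(u) < eta(x, u) @ grad_f(u) - 1e-9:
            return False
    return True

# Convex example: f(x) = |x|^2 is invex with eta(x, u) = x - u,
# since f(x) - f(u) - (x - u).grad_f(u) = |x - u|^2 >= 0.
f = lambda x: x @ x
grad_f = lambda u: 2 * u
eta = lambda x, u: x - u
print(invexity_spot_check(f, grad_f, eta, dim=2))   # True
```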
In a preceding paper, Caristi, Ferrara and Stefanescu [2] introduced some generalizations
of invex functions in a smooth setting. In fact, let X ⊆ Rn be a non-empty open set and let
φ : X × X → R be a differentiable function.
Definition 1: φ is η-invex at the point (x0, y0) ∈ X × X if there exists η : X × X → Rn such that
φ (x, y) − φ (x0, y0) ≥ η (x, y; x0, y0) ⋅ ∇φ (x0, y0).     (2)
φ is η-invex on X × X if there exists η : X × X → Rn such that, for all (x, y) ∈ X × X and (x0, y0) ∈ X × X,
φ (x, y) − φ (x0, y0) ≥ η (x, y; x0, y0) ⋅ ∇φ (x0, y0).     (3)
φ is η-pseudo-invex at the point (x0, y0) ∈ X × X if
η (x, y; x0, y0) ⋅ ∇φ (x0, y0) ≥ 0 for some (x, y) ∈ X × X ⇒ φ (x, y) − φ (x0, y0) ≥ 0.     (4)
φ is η-quasi-invex at the point (x0, y0) ∈ X × X if
φ (x, y) − φ (x0, y0) ≤ 0 for some (x, y) ∈ X × X ⇒ η (x, y; x0, y0) ⋅ ∇φ (x0, y0) ≤ 0.     (5)
Definition 2: φ is η-inf-invex at the point (x0, y0) ∈ X × X if
inf_{(x, y) ∈ X0 × X0} [φ (x, y) − φ (x0, y0)] ≥ inf_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇φ (x0, y0);
φ is η-sup-invex at the point (x0, y0) ∈ X × X if
sup_{(x, y) ∈ X0 × X0} [φ (x, y) − φ (x0, y0)] ≥ sup_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇φ (x0, y0);
φ is η-inf-pseudo-invex at the point (x0, y0) ∈ X × X if
inf_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇φ (x0, y0) ≥ 0 ⇒ inf_{(x, y) ∈ X0 × X0} [φ (x, y) − φ (x0, y0)] ≥ 0;
φ is η-sup-quasi-invex at the point (x0, y0) ∈ X × X if
sup_{(x, y) ∈ X0 × X0} [φ (x, y) − φ (x0, y0)] ≤ 0 ⇒ sup_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇φ (x0, y0) ≤ 0,     (6)
where X0 is a subset of X.
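Note that η-invexity on X × X in the sense of (3) implies both η-inf-invexity and η-sup-invexity at every point: if (3) holds pointwise, then taking the infimum (respectively the supremum) over (x, y) ∈ X0 × X0 on both sides preserves the inequality. Proposition 1 below shows that the converse implication fails.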
Proposition 1: η-inf-invexity and η-sup-invexity do not imply η-invexity, as can be
seen from the following example.
Example 1: The function θ : [−1, 1] × [−1, 1] → R defined by θ (x, y) = x³ + y³ is not invex,
since its stationary point at the origin is not a global minimum. However, if we consider the
point µ = 0, then the function θ is η-sup-invex, and if we assume µ = −1, then, since
inf_{(x, y)} (−µ³) ≥ inf_{(x, y)} (−3µ²),     (8)
the function considered is η-inf-invex.
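As a sanity check on this example, one can verify directly that the stationary point of θ at the origin is not a global minimum on [−1, 1] × [−1, 1]; by the characterization of Craven and Glover [5], this already rules out invexity. A minimal sketch (Python, illustrative only):

```python
import numpy as np

# theta(x, y) = x^3 + y^3 on [-1, 1] x [-1, 1].
theta = lambda x, y: x**3 + y**3
grad_theta = lambda x, y: np.array([3 * x**2, 3 * y**2])

# The origin is a stationary point ...
print(grad_theta(0.0, 0.0))                  # [0. 0.]

# ... but not a global minimum: theta(-1, -1) = -2 < theta(0, 0) = 0.
print(theta(-1.0, -1.0), theta(0.0, 0.0))    # -2.0 0.0

# Since invex functions have only global minima as stationary points [5],
# theta cannot be invex for any kernel eta.
```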
Remark 1: Obviously, η-inf-invexity implies η-inf-pseudo-invexity, and η-sup-invexity
implies η-sup-quasi-invexity.
The above properties will be used in the classical framework of the scalar optimization
problem:
(P)   inf_{(x, y) ∈ X0 × X0} f (x, y), where X0 = { x ∈ X : gj (x) ≤ 0, j = 1, 2, ..., m},     (9)
where f and gj are differentiable.
Definition 3: The problem (P) is η-inf-sup-invex at (x, y) ∈ (X, Y ) if f is η-inf-invex
with respect to (X0, Y0) and gj, j = 1, 2, ..., m, are η-sup-invex at y with respect to X0. (P) is
η-inf-sup-invex on X if it is η-inf-sup-invex at every point y ∈ X.
Let (D) be the Wolfe dual in (X, Y ) of (P):
(D)   sup_{(x, v) ∈ V} [ f (x, y) + ∑_{j=1}^{m} vj gj (x, y) ],     (10)
where
V = { (x, v) ∈ X × R+m : ∇f (x, y) + ∑_{j=1}^{m} vj ∇gj (x, y) = 0 }.     (11)
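For intuition, consider a minimal one-dimensional instance (not from the paper, purely illustrative): take f (x) = x² and the single constraint g1 (x) = 1 − x ≤ 0. Condition (11) forces 2x − v1 = 0, i.e. v1 = 2x, so the dual objective (10) becomes x² + 2x (1 − x) = 2x − x², which attains its supremum 1 at x = 1, v1 = 2; this coincides with the primal optimum f (1) = 1, as duality requires.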
In [1] the authors show that, if (P) is η-inf-sup-invex on (X, Y ) for some η, then any
Kuhn-Tucker point is an optimum solution of (P) and the weak duality property holds.
Assuming f and gj are differentiable on (X, Y ), a Kuhn-Tucker point is a pair (x0, v) ∈ X0 × R+m
satisfying the following two conditions:
∇f (x0, y0) + ∑_{j=1}^{m} vj ∇gj (x0, y0) = 0,     (12)
∑_{j=1}^{m} vj gj (x0, y0) = 0.     (13)
For (x, y) ∈ (X0, Y0), let us denote by J0 (x, y) the set of active constraints at (x, y):
J0 (x, y) = { j : gj (x, y) = 0}.
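Conditions (12)–(13) are easy to verify numerically at a candidate point. The sketch below is a hypothetical helper (the names and the toy instance are assumptions, not from the paper), checking stationarity, complementary slackness, feasibility, and the sign of the multipliers:

```python
import numpy as np

def is_kuhn_tucker_point(grad_f, grads_g, gs, x0, v, tol=1e-8):
    """Check the Kuhn-Tucker conditions (12)-(13) at x0 with multipliers v:
    stationarity, complementary slackness, feasibility, and v >= 0."""
    stationarity = grad_f(x0) + sum(vj * gj(x0) for vj, gj in zip(v, grads_g))
    comp_slack = sum(vj * g(x0) for vj, g in zip(v, gs))
    feasible = all(g(x0) <= tol for g in gs)
    return (np.linalg.norm(stationarity) <= tol
            and abs(comp_slack) <= tol
            and feasible
            and all(vj >= -tol for vj in v))

# Toy instance: minimize x1^2 + x2^2 subject to 1 - x1 <= 0.
grad_f = lambda x: 2 * x
gs = [lambda x: 1 - x[0]]
grads_g = [lambda x: np.array([-1.0, 0.0])]
print(is_kuhn_tucker_point(grad_f, grads_g, gs,
                           np.array([1.0, 0.0]), [2.0]))  # True
```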
Proof: Let (x0, v) be any Kuhn-Tucker point of the problem (P). Then
∇f (x0, y0) + ∑_{j=1}^{m} vj ∇gj (x0, y0) = 0
and
∑_{j=1}^{m} vj gj (x0, y0) = 0.
There exists some η such that f is η-inf-invex at (x, y) ∈ (X0, Y0), that is,
inf_{(x, y) ∈ X0 × X0} [ f (x, y) − f (x0, y0)] ≥ inf_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇f (x0, y0),
and gj is η-sup-invex at (x, y) ∈ (X, Y ).
Let (D) be the dual in X of (P):
(D)   sup_{(x, v) ∈ V} [ f (x, y) + ∑_{j=1}^{m} vj gj (x, y) ],
V = { (x, v) ∈ X × R+m : ∇f (x, y) + ∑_{j=1}^{m} vj ∇gj (x, y) = 0 };
then (x0, y0) is an optimum solution of (P).
Theorem 1: Let (x0, v) be any Kuhn-Tucker point of the problem (P). If there exists
some η such that f is η-inf-pseudo-invex at (x0, y0) with respect to (X0, Y0) and, for every
j ∈ J0 (x0, y0), gj is η-sup-quasi-invex at y ∈ X with respect to X0, then (x0, y0) is an
optimum solution of (P).
Proof: Let (P) be the primal given by
(P)   inf_{(x, y) ∈ X0 × X0} f (x, y),
where
X0 × X0 = { (x, y) ∈ X × X : gj (x, y) ≤ 0}
and f and gj are differentiable.
Let (D) be the dual of (P), given by
(D)   sup_{(x, v) ∈ V} [ f (x, y) + ∑_{j=1}^{m} vj gj (x, y) ],
where
V = { (x, v) ∈ X × R+m : ∇f (x, y) + ∑_{j=1}^{m} vj ∇gj (x, y) = 0 }.
Since f is η-inf-pseudo-invex at (x0, y0) with respect to (X0, Y0),
inf_{(x, y) ∈ X0 × X0} { f (x, y) − f (x0, y0)} ≥ 0,
∴ inf f (x, y) ≥ inf f (x0, y0).
Since gj is η-sup-invex at (x, y) ∈ X × X with respect to X0 × X0,
sup_{(x, y) ∈ X × X} ( gj (x, y) − gj (x0, y0)) ≤ 0
⇒ sup gj (x, y) ≤ inf gj (x0, y0).
Therefore
inf f (x0, y0) ≤ inf f (x, y) ≤ sup gj (x, y) ≤ sup gj (x0, y0)
⇒ inf f (x0, y0) ≤ sup gj (x0, y0)
∴ sup gj (x0, y0) − inf f (x0, y0) ≥ 0
⇒ sup gj (x0, y0) = inf f (x0, y0).
To show that (x0, y0) is an optimal solution of (P), let
J0 (x0, y0) = { j : gj (x, y) = 0} = { j : sup gj (x0, y0) = 0}.
Also,
inf f (x0, y0) = 0 ⇒ ∇f (x0, y0) = 0 and ∇gj (x0, y0) = 0.
Further,
⇒ ∑_{j=1}^{m} vj ∇gj (x0, y0) = 0,
∴ ∇f (x0, y0) + ∑_{j=1}^{m} vj ∇gj (x0, y0) = 0
and
∑_{j=1}^{m} vj gj (x0, y0) = 0.
Hence (x0, y0) is an optimal solution of (P).
Corollary 1: Assume that there exists some η such that (P) is η-inf-sup-invex on X.
Then, for every Kuhn-Tucker point (x0, v), x0 is an optimum solution.
Remark 2: An obvious relaxation of the above theorem can be obtained by weakening
the requirements on the constraint functions. In fact, it is sufficient to assume that
j ∈ J0 (x0, y0) ⇒ sup_{(x, y) ∈ X0 × X0} ( η (x, y; x0, y0) ⋅ ∇gj (x0, y0)) ≤ 0.     (14)
Consequently, one obtains:
Corollary 2: Let (x0, v) be any Kuhn-Tucker point of the problem (P). If there exists
some η such that the constraint functions satisfy (14) and the objective function satisfies
inf_{(x, y) ∈ X0 × X0} ( η (x, y; x0, y0) ⋅ ∇f (x0, y0)) ≥ 0 ⇒ inf_{(x, y) ∈ X0 × X0} [ f (x, y) − f (x0, y0)] ≥ 0,     (15)
then (x0, y0) is an optimum solution.
Theorem 2: Assume that there exists some η such that f is η-inf-pseudo-invex at
(x0, y0) on X × X and all gj are η-sup-invex on X. Then, for any feasible solution (x, y) of (P)
((x, y) ∈ X0 × X0) and for any feasible solution (y, v) of (D) ((y, v) ∈ V ), one has
f (x, y) ≥ f (x0, y0) + ∑_{j=1}^{m} vj gj (x0, y0).     (16)
Proof: Since f is η-inf-pseudo-invex at (x0, y0) on X × X,
inf ( f (x, y) − f (x0, y0)) ≥ 0, (x, y) ∈ X0 × X0
⇒ f (x, y) ≥ f (x0, y0).
Since gj is η-sup-invex on X × X,
sup_{(x, y) ∈ X0 × X0} [ η (x, y; x0, y0) ⋅ ∇gj (x0, y0)] ≤ 0
⇒ sup gj (x, y) ≤ sup gj (x0, y0).
Since f and gj are differentiable on X × X, a Kuhn-Tucker point is a pair (x0, v) ∈ X0 × R+m
satisfying
∇f (x0, y0) + ∑_{j=1}^{m} vj ∇gj (x0, y0) = 0
and
∑_{j=1}^{m} vj gj (x0, y0) = 0.
Then
inf f (x0, y0) ≤ sup gj (x0, y0) ≤ − inf gj (x0, y0)
∴ − gj (x0, y0) ≥ f (x0, y0)
⇒ gj (x0, y0) ≤ − f (x0, y0)
⇒ f (x, y) − f (x0, y0) ≥ gj (x0, y0)
⇒ f (x, y) − f (x0, y0) ≥ vj gj (x0, y0),
hence
f (x, y) ≥ f (x0, y0) + ∑_{j=1}^{m} vj gj (x0, y0).
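To see inequality (16) in action, one can evaluate both sides on the same toy instance used in the sketches above (again illustrative, not from the paper): any primal-feasible value dominates the Wolfe-dual objective at any dual-feasible pair.

```python
import numpy as np

# Toy instance: f(x) = x1^2 + x2^2, g1(x) = 1 - x1 <= 0.
f = lambda x: x @ x
g1 = lambda x: 1 - x[0]

# Primal-feasible point and a dual-feasible pair (x0, v):
# stationarity 2*x0 + v*(-1, 0) = 0 holds for x0 = (0.5, 0), v = 1.
x_feas = np.array([2.0, 0.0])          # g1(x_feas) = -1 <= 0
x0, v = np.array([0.5, 0.0]), 1.0      # 2*(0.5) - 1 = 0, v >= 0

lhs = f(x_feas)                        # 4.0
rhs = f(x0) + v * g1(x0)               # 0.25 + 1*0.5 = 0.75
print(lhs >= rhs)                      # True, as (16) predicts
```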
Corollary 3: If the problem (P) is η-inf-sup-invex on X × X for some η, then it has the
weak duality property.
In [1], Caristi, Ferrara and Stefanescu also obtain weaker invexity-type conditions
which are necessary and sufficient for the sufficiency of the Kuhn-Tucker conditions.
Martin (1985) showed that, if a mathematical programming problem with linear
constraints delimiting a bounded feasible set is invex, then the objective function must
be convex.
He called the problem (P) Kuhn-Tucker invex on X × X if there exists a vector function
η : X × X → Rn such that
(x, y; x0, y0) ∈ X0 × X0 ⇒ f (x, y) − f (x0, y0) ≥ η (x, y; x0, y0) ⋅ ∇f (x0, y0)
and η (x, y; x0, y0) ⋅ ∇gj (x0, y0) ≤ 0 whenever gj (x0, y0) = 0, for j = 1, 2, ..., m.     (17)
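For instance (a standard observation, not stated explicitly in the paper), every problem with differentiable convex f and gj is Kuhn-Tucker invex with the trivial kernel η (x, y; x0, y0) = (x, y) − (x0, y0): convexity gives f (x, y) − f (x0, y0) ≥ ((x, y) − (x0, y0)) ⋅ ∇f (x0, y0), and if gj (x0, y0) = 0 with (x, y) feasible, then ((x, y) − (x0, y0)) ⋅ ∇gj (x0, y0) ≤ gj (x, y) − gj (x0, y0) ≤ 0.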
Theorem 3 (Martin (1985), Theorem 2.1): Every Kuhn-Tucker point of problem (P)
is a global minimizer if and only if (P) is Kuhn-Tucker invex, where
(P)   inf_{(x, y) ∈ X0 × X0} f (x, y), X0 = { x ∈ X : gj (x) ≤ 0, j = 1, 2, ..., m},
and f and gj are differentiable. In this case,
f (x, y) − f (x0, y0) ≥ η (x, y; x0, y0) ⋅ ∇f (x0, y0) for (x, y) ∈ X0 × X0
and
η (x, y; x0, y0) ⋅ ∇gj (x0, y0) ≤ 0 for (x, y) ∈ X0 × X0,
so (P) is Kuhn-Tucker invex.
Theorem 4: Every Kuhn-Tucker point of problem (P) is a global minimizer if and only
if there exists a vector function η : X × X → Rn such that f is η-inf-pseudo-invex (η-inf-invex)
on X0 with respect to X and, for every (x0, y0) ∈ X0 × X0, gj is η-sup-quasi-invex
(η-sup-invex) at (x0, y0) whenever j ∈ J0 (x0, y0).
Proof: For j ∈ J0 (x0, y0),
sup_{(x, y) ∈ X0 × X0} { gj (x, y) − gj (x0, y0)} ≤ 0     (1)
and
inf_{(x, y) ∈ X0 × X0} { f (x, y) − f (x0, y0)} ≥ 0.     (2)
By (1),
sup_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇gj (x0, y0) ≤ 0,
so gj is η-sup-quasi-invex at (x0, y0).
By (2),
inf_{(x, y) ∈ X0 × X0} η (x, y; x0, y0) ⋅ ∇f (x0, y0) ≥ 0 ⇒ inf_{(x, y) ∈ X0 × X0} { f (x, y) − f (x0, y0)} ≥ 0,
so f is η-inf-pseudo-invex at (x0, y0).
2. CONCLUSION
Every Kuhn-Tucker point of problem (P) is a global minimizer if and only if (P) is
Kuhn-Tucker invex.
REFERENCES
[1] Caristi G., Ferrara M., and Stefanescu A., (2006), Mathematical Programming with (Φ, ρ)-Invexity,
In: Konnov I. V., Luc D. T., and Rubinov A. M., (Eds), Generalized Convexity and Related Topics,
Lecture Notes in Economics and Mathematical Systems, 583, Springer.
[2] Caristi G., and Ferrara M., (2000), Convexity without Vector Space Structure, Rendiconti del
Circolo Matematico di Palermo, Series II, Suppl. 65, 67-73.
[3] Craven B. D., (1981), Invex Functions and Constrained Local Minima, Bull. Austral. Math. Soc.,
24, 357-366.
[4] Hanson M. A., (1981), On Sufficiency of the Kuhn-Tucker Conditions, J. Math. Anal. Appl., 80,
545-550.
[5] Craven B. D., and Glover B. M., (1985), What is Invexity?
S. K. Ghosh
Department of Mathematics,
Ravenshaw University,
Cuttack-753004, India.
D. K. Dalai
Department of Mathematics,
S.B.Women’s College,
Cuttack-753001, India.
E-mail: [email protected]
S. R. Das
Principal, Nalanda Institute of Technology
Bhubaneswar, India.
E-mail: [email protected]