
J Glob Optim
DOI 10.1007/s10898-011-9694-1
Second order duality in minmax fractional programming
with generalized univexity
Sarita Sharma · T. R. Gulati
Received: 3 August 2010 / Accepted: 8 February 2011
© Springer Science+Business Media, LLC. 2011
Abstract In this paper, the concept of second order generalized α-type I univexity is introduced. Based on the new definitions, we derive weak, strong and strict converse duality
results for two second order duals of a minmax fractional programming problem.
Keywords Second order duality · Minmax programming · Fractional programming ·
Generalized α-type I univexity
1 Introduction
This article deals with the following minmax fractional programming problem:
\[
\text{(P)}\qquad \min_{x\in S}\ \sup_{y\in Y}\ \frac{f(x,y)}{h(x,y)},
\]
subject to $x \in S = \{x \in X : g(x) \le 0\}$, where $X$ is an open subset of $R^n$, $Y$ is a compact subset of $R^l$, and $f : X \times Y \to R$, $h : X \times Y \to R$, $g : X \to R^m$ are such that their first and second order partial derivatives with respect to the components of $x$ are continuous. Let $f(x, y) \ge 0$ and $h(x, y) > 0$ for all $(x, y) \in S \times Y$.
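To fix ideas, here is a small illustrative instance (our own choice for exposition, not taken from the paper or its references):
\[
X = R, \quad Y = [0, 1], \quad f(x, y) = x^2 + y, \quad h(x, y) = 1 + y, \quad g(x) = -x,
\]
so that $S = \{x \in R : x \ge 0\}$, $f \ge 0$ on $S \times Y$ and $h > 0$. Since $\partial_y\big[(x^2 + y)/(1 + y)\big] = (1 - x^2)/(1 + y)^2$, the inner supremum equals $(x^2 + 1)/2$ for $x \in [0, 1]$ and $x^2$ for $x > 1$; minimizing over $S$ gives the optimal value $\tfrac12$ at $x^* = 0$, where the inner supremum is attained only at $y = 1$. We return to this instance below.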
For a differentiable minmax fractional programming problem (P), Chandra and Kumar [4]
proved duality results for two modified dual problems considered in [19]. Liu and Wu [12,13]
established sufficient optimality conditions for (P) under generalized convexity assumptions.
Ahmad [1] and Yang and Hou [20] obtained optimality conditions and duality theorems for
(P) assuming the functions involved to be generalized convex. The reader is referred to [5]
for the theory, algorithms and applications of some minmax problems.
Based on Dinkelbach’s global optimization approach, Pardalos and Phillips [18] presented
a modified algorithm for finding the global maximum of the fractional programming problem. Liang et al. [9,10] studied optimality and duality for scalar and multiobjective fractional
programming problems.
A second order dual was first formulated by Mangasarian [14] for a nonlinear programming problem by introducing an additional vector $p \in R^n$. Mond [17] reproved the second order
duality results [14] under different and less complicated assumptions. Bector et al. [2] discussed second order duality results for minmax programming problems under generalized
invexity assumptions. Liu [11] extended these results by using second order generalized
B-invexity assumptions. Recently, Husain et al. [7] formulated two types of second order
duals for a minmax fractional programming problem and established usual duality theorems
under generalized η-bonvexity assumptions.
Bector et al. [3] introduced the concept of pre-univex functions, univex functions and
pseudo-univex functions as a generalization of invex functions [6] and showed by an example that univex functions are more general than invex functions. Mishra and Rautela [15]
obtained optimality conditions and duality theorems for a nondifferentiable minimax fractional
programming problem by introducing generalized α-type I invex functions. Recently,
Jayswal [8] introduced the concept of generalized α-type I univex functions. Several examples of generalized type I univex functions have appeared in [8,16]. In this paper, we introduce
the concept of second order generalized α-type I univex function. Weak, strong and strict
converse duality theorems are proved for two dual models of (P). The results obtained in this
paper generalize the results that have appeared in [1,2,4,7,11–13,19,20].
2 Notations and preliminaries
Let M = {1, 2, . . . , m} be the index set. For each x ∈ S, we define
\[
J(x) = \{ j \in M : g_j(x) = 0 \},
\]
\[
Y(x) = \Big\{\, y \in Y : \frac{f(x,y)}{h(x,y)} = \sup_{z\in Y} \frac{f(x,z)}{h(x,z)} \,\Big\}.
\]
Also, let
\[
K(x) = \Big\{ (s, t, u) \in \mathbb{N} \times R^{s}_{+} \times R^{ls} : 1 \le s \le n+1,\ t = (t_1, t_2, \ldots, t_s) \in R^{s}_{+},\ \sum_{i=1}^{s} t_i = 1,\ u = (y_1, y_2, \ldots, y_s),\ y_i \in Y(x),\ i = 1, 2, \ldots, s \Big\}.
\]
Now we introduce second order generalized $\alpha$-type I univex functions. Let $\psi : X \to R$ and $g_j : X \to R$, $j \in M$, be twice differentiable functions.
Definition 2.1 $(\psi, g)$ is said to be second order pseudoquasi $\alpha$-type I univex at $\bar{x} \in X$ if there exist mappings $\eta : X \times X \to R^n$; $b_0, b_1 : X \times X \to R_+$; $\phi_0, \phi_1 : R \to R$; $\alpha : X \times X \to R_+ \setminus \{0\}$ such that for all $x \in S$ and $p \in R^n$, we have
\[
\big\langle \alpha(x, \bar{x})\big(\nabla \psi(\bar{x}) + \nabla^2 \psi(\bar{x})\, p\big),\ \eta(x, \bar{x}) \big\rangle \ge 0
\ \Longrightarrow\ b_0(x, \bar{x})\,\phi_0\Big[\psi(x) - \psi(\bar{x}) + \tfrac{1}{2}\, p^T \nabla^2 \psi(\bar{x})\, p\Big] \ge 0,
\]
and
\[
-\,b_1(x, \bar{x})\,\phi_1\Big[g_j(\bar{x}) - \tfrac{1}{2}\, p^T \nabla^2 g_j(\bar{x})\, p\Big] \le 0
\ \Longrightarrow\ \big\langle \alpha(x, \bar{x})\big(\nabla g_j(\bar{x}) + \nabla^2 g_j(\bar{x})\, p\big),\ \eta(x, \bar{x}) \big\rangle \le 0,
\]
for each $j \in M$.
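For orientation (this reduction is our reading and is not stated in the paper): taking $p = 0$ and discarding the second order terms, Definition 2.1 collapses to a first order pseudoquasi $\alpha$-type I univexity condition in the spirit of [8],
\[
\big\langle \alpha(x, \bar{x})\,\nabla \psi(\bar{x}),\ \eta(x, \bar{x}) \big\rangle \ge 0 \ \Longrightarrow\ b_0(x, \bar{x})\,\phi_0\big[\psi(x) - \psi(\bar{x})\big] \ge 0,
\qquad
-\,b_1(x, \bar{x})\,\phi_1\big[g_j(\bar{x})\big] \le 0 \ \Longrightarrow\ \big\langle \alpha(x, \bar{x})\,\nabla g_j(\bar{x}),\ \eta(x, \bar{x}) \big\rangle \le 0,
\]
and choosing in addition $b_0 = b_1 \equiv 1$, $\phi_0 = \phi_1$ the identity and $\alpha \equiv 1$ recovers the usual pseudoquasi type I conditions built on invexity [6].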
Definition 2.2 $(\psi, g)$ is said to be second order strictly pseudoquasi $\alpha$-type I univex at $\bar{x} \in X$ if there exist mappings $\eta : X \times X \to R^n$; $b_0, b_1 : X \times X \to R_+$; $\phi_0, \phi_1 : R \to R$; $\alpha : X \times X \to R_+ \setminus \{0\}$ such that for all $x \in S$ $(x \ne \bar{x})$ and $p \in R^n$, we have
\[
\big\langle \alpha(x, \bar{x})\big(\nabla \psi(\bar{x}) + \nabla^2 \psi(\bar{x})\, p\big),\ \eta(x, \bar{x}) \big\rangle \ge 0
\ \Longrightarrow\ b_0(x, \bar{x})\,\phi_0\Big[\psi(x) - \psi(\bar{x}) + \tfrac{1}{2}\, p^T \nabla^2 \psi(\bar{x})\, p\Big] > 0,
\]
and
\[
-\,b_1(x, \bar{x})\,\phi_1\Big[g_j(\bar{x}) - \tfrac{1}{2}\, p^T \nabla^2 g_j(\bar{x})\, p\Big] \le 0
\ \Longrightarrow\ \big\langle \alpha(x, \bar{x})\big(\nabla g_j(\bar{x}) + \nabla^2 g_j(\bar{x})\, p\big),\ \eta(x, \bar{x}) \big\rangle \le 0,
\]
for each $j \in M$.
In what follows, the gradient and the Hessian matrix of the functions $f$, $g$ and $h$ are taken with respect to the variable vector $x$; for example, $\nabla f(x, y)$ means $\nabla_x f(x, y)$.
Theorems 2.1 and 3.1 in Chandra and Kumar [4] lead to the following necessary conditions, which will be needed in the proofs of the strong duality theorems.
Theorem 2.1 Let $x^*$ be an optimal solution of (P) and let $\nabla g_j(x^*)$, $j \in J(x^*)$, be linearly independent. Then there exist $(s^*, t^*, u^*) \in K(x^*)$, $\lambda^* \in R_+$, and $\mu^* \in R^m_+$ such that
\[
\nabla \sum_{i=1}^{s^*} t_i^* \big( f(x^*, y_i^*) - \lambda^* h(x^*, y_i^*) \big) + \nabla \sum_{j=1}^{m} \mu_j^* g_j(x^*) = 0,
\]
\[
f(x^*, y_i^*) - \lambda^* h(x^*, y_i^*) = 0, \quad i = 1, 2, \ldots, s^*,
\]
\[
\sum_{j=1}^{m} \mu_j^* g_j(x^*) = 0.
\]
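Continuing the illustrative instance from Section 1 (again our own worked check, not part of the source), the conditions of Theorem 2.1 hold at $x^* = 0$ with
\[
s^* = 1, \quad t_1^* = 1, \quad y_1^* = 1, \quad \lambda^* = \tfrac12, \quad \mu^* = 0:
\]
indeed $f(0, 1) - \tfrac12\, h(0, 1) = 1 - 1 = 0$, $\nabla_x\big[f(x, 1) - \tfrac12\, h(x, 1)\big]\big|_{x = 0} = \nabla_x x^2\big|_{x = 0} = 0$, $\mu^* g(x^*) = 0$, and the single gradient $\nabla g(x^*) = -1$ is trivially linearly independent.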
3 Duality
In this section, we prove duality results for two different second order duals of the fractional minmax program (P). The first dual is
\[
\text{(DI)}\qquad \max_{(s,t,u)\in K(z)}\ \ \sup_{(z,\mu,\lambda,p)\in H_1(s,t,u)} \lambda,
\]
where, for $(s, t, u) \in K(z)$, the set $H_1(s, t, u)$ consists of all $(z, \mu, \lambda, p) \in R^n \times R^m_+ \times R_+ \times R^n$ satisfying
\[
\nabla \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \nabla^2 \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big)\, p + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p = 0, \tag{3.1}
\]
\[
\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) - \frac{1}{2}\, p^T \nabla^2 \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big)\, p \ge 0, \tag{3.2}
\]
\[
\sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2}\, p^T \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p \ge 0. \tag{3.3}
\]
If, for a triplet $(s, t, u) \in K(z)$, the set $H_1(s, t, u) = \emptyset$, then we define $\sup \lambda$ over it to be $-\infty$.
Theorem 3.1 (Weak duality) Let x and (z, μ, λ, s, t, u, p) be feasible solutions of (P) and
(DI), respectively. Assume that
(i) $\Big(\sum_{i=1}^{s} t_i \big(f(\cdot, y_i) - \lambda h(\cdot, y_i)\big),\ \sum_{j=1}^{m} \mu_j g_j(\cdot)\Big)$ is second order pseudoquasi $\alpha$-type I univex at $z$; and
(ii) $\phi_0(a) \ge 0 \Rightarrow a \ge 0$; $a \ge 0 \Rightarrow \phi_1(a) \ge 0$; and $b_0(x, z) > 0$, $b_1(x, z) \ge 0$.

Then
\[
\sup_{y\in Y} \frac{f(x, y)}{h(x, y)} \ge \lambda.
\]
Proof Since $a \ge 0 \Rightarrow \phi_1(a) \ge 0$ and $b_1(x, z) \ge 0$, the dual constraint (3.3) yields
\[
-\,b_1(x, z)\,\phi_1\Big[\sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2}\, p^T \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p\Big] \le 0. \tag{3.4}
\]
On using the second part of assumption (i), inequality (3.4) gives
\[
\Big\langle \alpha(x, z)\Big(\nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p\Big),\ \eta(x, z) \Big\rangle \le 0. \tag{3.5}
\]
Inequality (3.5), in view of the dual constraint (3.1), implies
\[
\Big\langle \alpha(x, z)\Big(\nabla \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \nabla^2 \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big)\, p\Big),\ \eta(x, z) \Big\rangle \ge 0. \tag{3.6}
\]
Applying the first part of assumption (i) to (3.6), we obtain
\[
b_0(x, z)\,\phi_0\Big[\sum_{i=1}^{s} t_i \big(f(x, y_i) - \lambda h(x, y_i)\big) - \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \frac{1}{2}\, p^T \nabla^2 \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big)\, p\Big] \ge 0. \tag{3.7}
\]
It follows from $b_0(x, z) > 0$, $\phi_0(a) \ge 0 \Rightarrow a \ge 0$, and the dual constraint (3.2) that
\[
\sum_{i=1}^{s} t_i \big(f(x, y_i) - \lambda h(x, y_i)\big) \ge 0.
\]
Therefore, there exists a certain $i_0$ such that
\[
f(x, y_{i_0}) - \lambda h(x, y_{i_0}) \ge 0.
\]
Hence
\[
\sup_{y\in Y} \frac{f(x, y)}{h(x, y)} \ge \frac{f(x, y_{i_0})}{h(x, y_{i_0})} \ge \lambda.
\]
Theorem 3.2 (Strong duality) Assume that $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then there exist $(s^*, t^*, u^*) \in K(x^*)$ and $(x^*, \mu^*, \lambda^*, p^* = 0) \in H_1(s^*, t^*, u^*)$ such that $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ is a feasible solution of (DI) and the two objectives have the same value. If, in addition, the assumptions of weak duality (Theorem 3.1) hold for all feasible solutions of (P) and (DI), then $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ is an optimal solution of (DI).

Proof Since $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent, by Theorem 2.1 there exist $(s^*, t^*, u^*) \in K(x^*)$ and $(x^*, \mu^*, \lambda^*, p^* = 0) \in H_1(s^*, t^*, u^*)$ such that $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ is a feasible solution of (DI) and the two objectives have the same value. Optimality of $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ for (DI) follows from weak duality (Theorem 3.1).
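As a check of Theorem 3.2 on the toy instance of Section 1 (our own computation, stated only as an illustration), the point
\[
\big(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^*\big) = \big(0,\ 0,\ \tfrac12,\ 1,\ 1,\ 1,\ 0\big)
\]
is feasible for (DI): with $p^* = 0$ and $\mu^* = 0$, constraint (3.1) reduces to $\nabla_x\big[f(x, 1) - \tfrac12\, h(x, 1)\big]\big|_{x = 0} = 0$, while (3.2) and (3.3) read $f(0, 1) - \tfrac12\, h(0, 1) = 0 \ge 0$ and $0 \ge 0$. Its objective value $\lambda^* = \tfrac12$ equals the optimal value of (P), as Theorem 3.2 asserts.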
Theorem 3.3 (Strict converse duality) Let $x^*$ and $(z^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^*)$ be feasible solutions of (P) and (DI), respectively. Suppose that

(i) $\displaystyle\sup_{y^*\in Y} \frac{f(x^*, y^*)}{h(x^*, y^*)} = \lambda^*$;
(ii) $\Big(\sum_{i=1}^{s^*} t_i^* \big(f(\cdot, y_i^*) - \lambda^* h(\cdot, y_i^*)\big),\ \sum_{j=1}^{m} \mu_j^* g_j(\cdot)\Big)$ is second order strictly pseudoquasi $\alpha$-type I univex at $z^*$; and
(iii) $\phi_0(a) > 0 \Rightarrow a > 0$; $a \ge 0 \Rightarrow \phi_1(a) \ge 0$; and $b_0(x^*, z^*) > 0$, $b_1(x^*, z^*) \ge 0$.

Then $x^* = z^*$.
Proof We suppose, to the contrary, that $x^* \ne z^*$ and exhibit a contradiction. From (3.3), we have
\[
\sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2}\, p^{*T} \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^* \ge 0,
\]
which on using $a \ge 0 \Rightarrow \phi_1(a) \ge 0$ and $b_1(x^*, z^*) \ge 0$ yields
\[
-\,b_1(x^*, z^*)\,\phi_1\Big[\sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2}\, p^{*T} \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^*\Big] \le 0. \tag{3.8}
\]
By the second part of assumption (ii), inequality (3.8) gives
\[
\Big\langle \alpha(x^*, z^*)\Big(\nabla \sum_{j=1}^{m} \mu_j^* g_j(z^*) + \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^*\Big),\ \eta(x^*, z^*) \Big\rangle \le 0,
\]
which in view of the dual constraint (3.1) implies
\[
\Big\langle \alpha(x^*, z^*)\Big(\nabla \sum_{i=1}^{s^*} t_i^* \big(f(z^*, y_i^*) - \lambda^* h(z^*, y_i^*)\big) + \nabla^2 \sum_{i=1}^{s^*} t_i^* \big(f(z^*, y_i^*) - \lambda^* h(z^*, y_i^*)\big)\, p^*\Big),\ \eta(x^*, z^*) \Big\rangle \ge 0. \tag{3.9}
\]
Applying the first part of assumption (ii) to (3.9), we obtain
\[
b_0(x^*, z^*)\,\phi_0\Big[\sum_{i=1}^{s^*} t_i^* \big(f(x^*, y_i^*) - \lambda^* h(x^*, y_i^*)\big) - \sum_{i=1}^{s^*} t_i^* \big(f(z^*, y_i^*) - \lambda^* h(z^*, y_i^*)\big) + \frac{1}{2}\, p^{*T} \nabla^2 \sum_{i=1}^{s^*} t_i^* \big(f(z^*, y_i^*) - \lambda^* h(z^*, y_i^*)\big)\, p^*\Big] > 0. \tag{3.10}
\]
It follows from $b_0(x^*, z^*) > 0$, $\phi_0(a) > 0 \Rightarrow a > 0$, and the dual constraint (3.2) that
\[
\sum_{i=1}^{s^*} t_i^* \big(f(x^*, y_i^*) - \lambda^* h(x^*, y_i^*)\big) > 0.
\]
Therefore, there exists a certain $i_0$ such that
\[
f(x^*, y_{i_0}^*) - \lambda^* h(x^*, y_{i_0}^*) > 0.
\]
Hence
\[
\sup_{y^*\in Y} \frac{f(x^*, y^*)}{h(x^*, y^*)} \ge \frac{f(x^*, y_{i_0}^*)}{h(x^*, y_{i_0}^*)} > \lambda^*,
\]
which contradicts assumption (i). So $x^* = z^*$.
Now we discuss duality results for the second dual (DII) of (P).
\[
\text{(DII)}\qquad \max_{(s,t,u)\in K(z)}\ \ \sup_{(z,\mu,\lambda,p)\in H_2(s,t,u)} \lambda,
\]
where, for $(s, t, u) \in K(z)$, the set $H_2(s, t, u)$ consists of all $(z, \mu, \lambda, p) \in R^n \times R^m_+ \times R_+ \times R^n$ satisfying
\[
\nabla \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \nabla^2 \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big)\, p + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p = 0, \tag{3.11}
\]
\[
\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(z) - \frac{1}{2}\, p^T \nabla^2 \Big[\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(z)\Big] p \ge 0, \tag{3.12}
\]
\[
\sum_{j\in J_\beta} \mu_j g_j(z) - \frac{1}{2}\, p^T \nabla^2 \sum_{j\in J_\beta} \mu_j g_j(z)\, p \ge 0, \quad \beta = 1, 2, \ldots, r, \tag{3.13}
\]
where $J_\beta \subseteq M$, $\beta = 0, 1, 2, \ldots, r$, with $J_\beta \cap J_\gamma = \emptyset$ if $\beta \ne \gamma$ and $\bigcup_{\beta=0}^{r} J_\beta = M$.
If, for a triplet $(s, t, u) \in K(z)$, the set $H_2(s, t, u) = \emptyset$, then we define $\sup \lambda$ over it to be $-\infty$.
Theorem 3.4 (Weak duality) Let x and (z, μ, λ, s, t, u, p) be feasible solutions of (P) and
(DII), respectively. Assume that
(i) for each $\beta = 1, 2, \ldots, r$, $\Big(\sum_{i=1}^{s} t_i \big(f(\cdot, y_i) - \lambda h(\cdot, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(\cdot),\ \sum_{j\in J_\beta} \mu_j g_j(\cdot)\Big)$ is second order pseudoquasi $\alpha$-type I univex at $z$; and
(ii) $\phi_0(a) \ge 0 \Rightarrow a \ge 0$; $a \ge 0 \Rightarrow \phi_1(a) \ge 0$; and $b_0(x, z) > 0$, $b_1(x, z) \ge 0$.

Then
\[
\sup_{y\in Y} \frac{f(x, y)}{h(x, y)} \ge \lambda.
\]
Proof As $a \ge 0 \Rightarrow \phi_1(a) \ge 0$ and $b_1(x, z) \ge 0$, the dual constraint (3.13) yields
\[
-\,b_1(x, z)\,\phi_1\Big[\sum_{j\in J_\beta} \mu_j g_j(z) - \frac{1}{2}\, p^T \nabla^2 \sum_{j\in J_\beta} \mu_j g_j(z)\, p\Big] \le 0, \quad \beta = 1, 2, \ldots, r. \tag{3.14}
\]
By the second part of assumption (i), inequality (3.14) implies
\[
\Big\langle \alpha(x, z)\Big(\nabla \sum_{j\in J_\beta} \mu_j g_j(z) + \nabla^2 \sum_{j\in J_\beta} \mu_j g_j(z)\, p\Big),\ \eta(x, z) \Big\rangle \le 0, \quad \beta = 1, 2, \ldots, r.
\]
Thus
\[
\Big\langle \alpha(x, z)\Big(\nabla \sum_{j\in M\setminus J_0} \mu_j g_j(z) + \nabla^2 \sum_{j\in M\setminus J_0} \mu_j g_j(z)\, p\Big),\ \eta(x, z) \Big\rangle \le 0.
\]
The above inequality along with (3.11) gives
\[
\Big\langle \alpha(x, z)\Big(\nabla \Big\{\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(z)\Big\} + \nabla^2 \Big\{\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(z)\Big\}\, p\Big),\ \eta(x, z) \Big\rangle \ge 0. \tag{3.15}
\]
Applying the first part of assumption (i) to (3.15), we get
\[
b_0(x, z)\,\phi_0\Big[\sum_{i=1}^{s} t_i \big(f(x, y_i) - \lambda h(x, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(x) - \sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) - \sum_{j\in J_0} \mu_j g_j(z) + \frac{1}{2}\, p^T \nabla^2 \Big\{\sum_{i=1}^{s} t_i \big(f(z, y_i) - \lambda h(z, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(z)\Big\}\, p\Big] \ge 0,
\]
which in view of $b_0(x, z) > 0$, $\phi_0(a) \ge 0 \Rightarrow a \ge 0$, and the dual constraint (3.12) becomes
\[
\sum_{i=1}^{s} t_i \big(f(x, y_i) - \lambda h(x, y_i)\big) + \sum_{j\in J_0} \mu_j g_j(x) \ge 0.
\]
It follows from $x \in S$ and $\mu \ge 0$ that
\[
\sum_{i=1}^{s} t_i \big(f(x, y_i) - \lambda h(x, y_i)\big) \ge 0.
\]
Therefore, there exists a certain $i_0$ such that
\[
f(x, y_{i_0}) - \lambda h(x, y_{i_0}) \ge 0.
\]
Hence
\[
\sup_{y\in Y} \frac{f(x, y)}{h(x, y)} \ge \frac{f(x, y_{i_0})}{h(x, y_{i_0})} \ge \lambda.
\]
The proofs of the following theorems are similar to the corresponding results for the dual
(DI).
Theorem 3.5 (Strong duality) Assume that $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then there exist $(s^*, t^*, u^*) \in K(x^*)$ and $(x^*, \mu^*, \lambda^*, p^* = 0) \in H_2(s^*, t^*, u^*)$ such that $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ is a feasible solution of (DII) and the two objectives have the same value. If, in addition, the assumptions of weak duality (Theorem 3.4) hold for all feasible solutions of (P) and (DII), then $(x^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^* = 0)$ is an optimal solution of (DII).
Theorem 3.6 (Strict converse duality) Let $x^*$ and $(z^*, \mu^*, \lambda^*, s^*, t^*, u^*, p^*)$ be feasible solutions of (P) and (DII), respectively. Suppose that

(i) $\displaystyle\sup_{y^*\in Y} \frac{f(x^*, y^*)}{h(x^*, y^*)} = \lambda^*$;
(ii) for each $\beta = 1, 2, \ldots, r$, $\Big(\sum_{i=1}^{s^*} t_i^* \big(f(\cdot, y_i^*) - \lambda^* h(\cdot, y_i^*)\big) + \sum_{j\in J_0} \mu_j^* g_j(\cdot),\ \sum_{j\in J_\beta} \mu_j^* g_j(\cdot)\Big)$ is second order strictly pseudoquasi $\alpha$-type I univex at $z^*$; and
(iii) $\phi_0(a) > 0 \Rightarrow a > 0$; $a \ge 0 \Rightarrow \phi_1(a) \ge 0$; and $b_0(x^*, z^*) > 0$, $b_1(x^*, z^*) \ge 0$.

Then $x^* = z^*$.
Remark If $J_\beta = \emptyset$ for $\beta = 0, 1, 2, \ldots, r - 1$ and $J_r = M$, then constraints (3.12) and (3.13) collapse to (3.2) and (3.3), so Theorems 3.4–3.6 reduce to Theorems 3.1–3.3.
Acknowledgments
The authors are thankful to the reviewer for valuable suggestions.
References
1. Ahmad, I.: Optimality conditions and duality in fractional minimax programming involving generalized ρ-invexity. Int. J. Manag. Syst. 19, 165–180 (2003)
2. Bector, C.R., Chandra, S., Husain, I.: Second order duality for a minimax programming problem.
Opsearch 28, 249–263 (1991)
3. Bector, C.R., Suneja, S.K., Gupta, S.: Univex functions and univex nonlinear programming. Proceedings
of the Administrative Sciences Association of Canada 115–124 (1992)
4. Chandra, S., Kumar, V.: Duality in fractional minimax programming. J. Aust. Math. Soc. Ser. A 58,
376–386 (1995)
5. Du, D.-Z., Pardalos, P.M.: Minimax and applications. Kluwer Academic Publishers, Dordrecht (1995)
6. Hanson, M.A.: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)
7. Husain, Z., Ahmad, I., Sharma, S.: Second-order duality for minmax fractional programming. Optim.
Lett. 3, 277–286 (2009)
8. Jayswal, A.: On sufficiency and duality in multiobjective programming problem under generalized α-type
I univexity. J. Glob. Optim. 46, 207–216 (2010)
9. Liang, Z.A., Huang, H.X., Pardalos, P.M.: Optimality conditions and duality for a class of nonlinear
fractional programming problems. J. Optim. Theory Appl. 110, 611–619 (2001)
10. Liang, Z.A., Huang, H.X., Pardalos, P.M.: Efficiency conditions and duality for a class of multiobjective
fractional programming problems. J. Glob. Optim. 27, 447–471 (2003)
11. Liu, J.C.: Second order duality for minimax programming. Util. Math. 56, 53–63 (1999)
12. Liu, J.C., Wu, C.S.: On minimax fractional optimality conditions with invexity. J. Math. Anal. Appl.
219, 21–35 (1998)
13. Liu, J.C., Wu, C.S.: On minimax fractional optimality conditions with (F, ρ)-convexity. J. Math. Anal.
Appl. 219, 36–51 (1998)
14. Mangasarian, O.L.: Second and higher order duality in nonlinear programming. J. Math. Anal. Appl.
51, 607–620 (1975)
15. Mishra, S.K., Rautela, J.S.: On nondifferentiable minimax fractional programming under generalized
α-type I invexity. J. Appl. Math. Comput. 31, 317–334 (2008)
16. Mishra, S.K., Wang, S.Y., Lai, K.K.: Optimality and duality for multiple objective optimization under
generalized type I univexity. J. Math. Anal. Appl. 303, 315–326 (2005)
17. Mond, B.: Second order duality for nonlinear programs. Opsearch 11, 90–99 (1974)
18. Pardalos, P.M., Phillips, A.T.: Global optimization of fractional programming. J. Glob. Optim. 1,
173–182 (1991)
19. Yadav, S.R., Mukherjee, R.N.: Duality for fractional minimax programming problems. J. Aust. Math.
Soc. Ser. B 31, 484–492 (1990)
20. Yang, X.M., Hou, S.H.: On minimax fractional optimality and duality with generalized convexity. J. Glob.
Optim. 31, 235–252 (2005)