
Operations Research Transactions    Vol. 17, No. 2, June 2013
Smoothed square-root penalty function
for nonlinear constrained optimization∗
MENG Zhiqing1,†
GAO Song1
Abstract In this paper, we introduce a nonsmooth square-root penalty function for nonlinear constrained optimization. We propose a smoothing function for this nonsmooth penalty function, define the corresponding smoothed penalty problem, and obtain error estimates between the optimal objective values of the smoothed penalty problem and those of the original optimization problem. We develop an algorithm based on the smoothed penalty function and prove its convergence. Numerical examples show that the proposed algorithm is efficient for solving some nonlinear constrained optimization problems.
Keywords nonlinear constrained optimization, square-root penalty function, exactness, smooth
Chinese Library Classification O221.2
2010 Mathematics Subject Classification 90C30
0  Introduction
We consider the nonlinear constrained optimization problem
(P)    min  f(x)
       s.t.  gi(x) ≤ 0,  i = 1, 2, · · · , m,
ÂvFϵ2011c11 15F
* This work is supported by the National Natural Science Foundation of China (Nos. 10971193,
11271329).
1. College of Economics and Management, Zhejiang University of Technology, Hangzhou 310023, China¶
úôó’ŒÆ²n+nÆ §É² 310023
† ÏÕŠö Corresponding author, Email: [email protected]
where f, gi : Rn → R, i ∈ I = {1, 2, · · · , m}. Let
X0 = {x ∈ Rn | gi(x) ≤ 0, i = 1, 2, · · · , m}
denote the set of feasible solutions.
In 1967, Zangwill[1] presented a penalty function that is exact under certain assumptions. However, many researchers have observed that exact penalty functions are generally not smooth[2-8]. Hence, in order to apply efficient algorithms such as Newton-type methods, it is necessary and important to smooth the exact penalty function when solving constrained optimization problems. In fact, all penalty function algorithms with a constrained penalty parameter need to increase the penalty parameter gradually, and so do exact penalty function methods, because we do not know in advance how large the penalty parameter needs to be. For this reason, several smoothing functions have been studied in the literature[2-8]. On the other hand, a penalty function method with an objective penalty parameter has been discussed in [9], and a smoothing objective penalty function for inequality constrained optimization problems has been presented[11]. Furthermore, smoothing exact penalty function methods have been applied to complementarity problems, mathematical programs with equilibrium constraints, and so on[11-14]. Smoothing exact penalty function methods are therefore attractive because they typically require far fewer iterations.
In this paper, we study a new method for smoothing the exact penalty function of (P) given by
$$F(x,\rho)=(f(x)-M)^2+\rho\sum_{i=1}^{m}\sqrt{\max\{g_i(x),\,0\}},\qquad(0.1)$$
where M is a constant with M < 0. It is known that F(x, ρ) is exact under a calmness condition[16].
The corresponding unconstrained optimization problem to (0.1) is given by
$$(\mathrm{P}_\rho)\qquad \min_{x\in\mathbb{R}^n} F(x,\rho).$$
We know that the penalty function F(x, ρ) is not smooth. In this paper, we smooth the penalty function (0.1) so that problem (P) can be solved by Newton-type methods. The smoothed penalty function is second-order differentiable and differs from the smoothed penalty functions in [6, 10]. We prove some error estimates and develop an algorithm to compute an approximate solution to (P).
1  A second-order differentiable smoothing method
We define a smoothing function pε(t) by
$$p_\varepsilon(t)=\begin{cases}\dfrac{5}{4e}\,\varepsilon^{\frac12}e^{\frac{t}{\varepsilon}}, & \text{if } t\le 0;\\[2mm] -\dfrac14\,\varepsilon^{-\frac52}t^{3}+\dfrac{5}{4e}\,\varepsilon^{\frac12}e^{\frac{t}{\varepsilon}}, & \text{if } 0<t\le\varepsilon;\\[2mm] t^{\frac12}, & \text{if } \varepsilon<t.\end{cases}$$
For ε = 0.01 and ε = 1, pε(t) is illustrated in Figure 1. Let p(t) = √(max{t, 0}). It is easy to see that pε(t) is twice continuously differentiable and that lim_{ε→0} pε(t) = p(t).
Figure 1  pε(t) (when ε = 0.01 and ε = 1)
In fact, we have
$$p'_\varepsilon(t)=\begin{cases}\dfrac{5}{4e}\,\varepsilon^{-\frac12}e^{\frac{t}{\varepsilon}}, & \text{if } t<0;\\[2mm] -\dfrac34\,\varepsilon^{-\frac52}t^{2}+\dfrac{5}{4e}\,\varepsilon^{-\frac12}e^{\frac{t}{\varepsilon}}, & \text{if } 0<t<\varepsilon;\\[2mm] \dfrac12\,t^{-\frac12}, & \text{if } \varepsilon<t,\end{cases}$$
$$p''_\varepsilon(t)=\begin{cases}\dfrac{5}{4e}\,\varepsilon^{-\frac32}e^{\frac{t}{\varepsilon}}, & \text{if } t<0;\\[2mm] -\dfrac32\,\varepsilon^{-\frac52}t+\dfrac{5}{4e}\,\varepsilon^{-\frac32}e^{\frac{t}{\varepsilon}}, & \text{if } 0<t<\varepsilon;\\[2mm] -\dfrac14\,t^{-\frac32}, & \text{if } \varepsilon<t.\end{cases}$$
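For readers who want to experiment with the smoothing, the following is a minimal NumPy sketch of pε and its first two derivatives as defined above; the names p_eps, dp_eps and d2p_eps, and the finite-difference check, are our own illustration and not part of the paper.

```python
# A minimal NumPy sketch of p_eps(t), p'_eps(t) and p''_eps(t) as defined above.
import numpy as np

def p_eps(t, eps):
    """Smoothing of p(t) = sqrt(max{t, 0}); twice continuously differentiable."""
    t = np.asarray(t, dtype=float)
    # exponential part shared by the two leftmost branches (argument clipped to avoid overflow)
    e = 5.0 / (4.0 * np.e) * np.sqrt(eps) * np.exp(np.minimum(t, eps) / eps)
    return np.where(t <= 0.0, e,
           np.where(t <= eps, -0.25 * eps**-2.5 * t**3 + e,
                    np.sqrt(np.maximum(t, eps))))

def dp_eps(t, eps):
    """First derivative of p_eps."""
    t = np.asarray(t, dtype=float)
    e = 5.0 / (4.0 * np.e) * eps**-0.5 * np.exp(np.minimum(t, eps) / eps)
    return np.where(t <= 0.0, e,
           np.where(t <= eps, -0.75 * eps**-2.5 * t**2 + e,
                    0.5 * np.maximum(t, eps)**-0.5))

def d2p_eps(t, eps):
    """Second derivative of p_eps."""
    t = np.asarray(t, dtype=float)
    e = 5.0 / (4.0 * np.e) * eps**-1.5 * np.exp(np.minimum(t, eps) / eps)
    return np.where(t <= 0.0, e,
           np.where(t <= eps, -1.5 * eps**-2.5 * t + e,
                    -0.25 * np.maximum(t, eps)**-1.5))

if __name__ == "__main__":
    eps, h = 0.01, 1e-6
    t = np.linspace(-2.0, 6.0, 2001)
    # central finite differences should agree with the analytic derivative formulas above
    fd1 = (p_eps(t + h, eps) - p_eps(t - h, eps)) / (2 * h)
    print("max |dp_eps - finite diff| =", np.max(np.abs(dp_eps(t, eps) - fd1)))
```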
Assume that f and gi, i ∈ I, are twice continuously differentiable. Let gi+(x) = max{0, gi(x)} for any i ∈ I. A smoothed penalty function for (P) is given by
$$F(x,\rho,\varepsilon)=(f(x)-M)^2+\rho\sum_{i\in I}p_\varepsilon(g_i(x)),\qquad(1.1)$$
where ρ > 0 and |M| is large enough. F(x, ρ, ε) is twice continuously differentiable at any x ∈ Rn. We have the following smoothed penalty problem:
$$(\mathrm{PI}_\rho)\qquad \min_{x\in\mathbb{R}^n} F(x,\rho,\varepsilon).$$
As previously mentioned,
$$F(x,\rho)=(f(x)-M)^2+\rho\sum_{i=1}^{m}\sqrt{\max\{g_i(x),\,0\}},$$
and the corresponding unconstrained optimization problem to (0.1) is given by
$$(\mathrm{P}_\rho)\qquad \min_{x\in\mathbb{R}^n} F(x,\rho).$$
As lim_{ε→0} F(x, ρ, ε) = F(x, ρ) for any given ρ, we will study the relationship between (Pρ) and (PIρ).
Lemma 1.1  For any x ∈ Rn and ε > 0,
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(x,\rho)-F(x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12}.\qquad(1.2)$$
Proof  By the definition of pε(t), we know that
$$-\frac54\,\varepsilon^{\frac12}\le p(t)-p_\varepsilon(t)\le\frac54\,\varepsilon^{\frac12}.$$
Then, for any x ∈ Rn,
$$-\frac54\,\varepsilon^{\frac12}\le p(g_i(x))-p_\varepsilon(g_i(x))\le\frac54\,\varepsilon^{\frac12},\qquad\forall i\in I.$$
Summing over i ∈ I gives
$$-\frac54\,m\varepsilon^{\frac12}\le\sum_{i\in I}p(g_i(x))-\sum_{i\in I}p_\varepsilon(g_i(x))\le\frac54\,m\varepsilon^{\frac12}.$$
Therefore,
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(x,\rho)-F(x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12}$$
since ρ > 0.
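The elementwise bound −(5/4)ε^{1/2} ≤ p(t) − pε(t) ≤ (5/4)ε^{1/2} used above can also be checked numerically; the following is a short sketch assuming NumPy, with our own variable names, not part of the paper.

```python
# Numerical illustration of the bound |p(t) - p_eps(t)| <= (5/4) * sqrt(eps).
import numpy as np

def p_eps(t, eps):
    e = 5.0 / (4.0 * np.e) * np.sqrt(eps) * np.exp(np.minimum(t, eps) / eps)
    return np.where(t <= 0.0, e,
           np.where(t <= eps, -0.25 * eps**-2.5 * t**3 + e,
                    np.sqrt(np.maximum(t, eps))))

t = np.linspace(-5.0, 5.0, 200001)
p = np.sqrt(np.maximum(t, 0.0))
for eps in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(p - p_eps(t, eps)))
    print(f"eps = {eps}: max |p - p_eps| = {gap:.4f} <= {1.25 * np.sqrt(eps):.4f}")
```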
We get the following two theorems based on Lemma 1.1.
Theorem 1.1  Let {εj} → 0 with εj > 0 for all j, and assume that xj is a solution to min_{x∈Rn} F(x, ρ, εj) for some given ρ > 0. If x̄ is an accumulation point of the sequence {xj}, then x̄ is an optimal solution to min_{x∈Rn} F(x, ρ).
Theorem 1.2  Let x* be an optimal solution of (Pρ) and x̄ ∈ Rn an optimal solution of (PIρ). Then,
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(x^*,\rho)-F(\bar x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12}.\qquad(1.3)$$
Proof  By Lemma 1.1 and ρ > 0, we obtain
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(x^*,\rho)-F(x^*,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12},$$
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(\bar x,\rho)-F(\bar x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12}.$$
Because x* is an optimal solution of (Pρ) and x̄ is an optimal solution of (PIρ),
$$F(x^*,\rho)\le F(\bar x,\rho),\qquad F(\bar x,\rho,\varepsilon)\le F(x^*,\rho,\varepsilon),$$
and we obtain
$$-\frac54\,m\rho\varepsilon^{\frac12}\le F(x^*,\rho)-F(\bar x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12}.$$
Definition 1.1  A point x0 ∈ Rn is an ε-feasible solution (or an ε-solution) if gi(x0) ≤ ε, ∀i ∈ I.
Theorem 1.3  Let x* be an optimal solution of (Pρ) and xε ∈ Rn an optimal solution of (PIρ). Furthermore, let x* be feasible to (P) and xε be ε-feasible to (P). Then,
$$\lim_{\varepsilon\to0}\,(f(x^*)-f(x_\varepsilon))=0.\qquad(1.4)$$
Proof  Since xε is ε-feasible to (P),
$$0\le\sum_{i\in I}p_\varepsilon(g_i(x_\varepsilon))\le\frac54\,m\varepsilon^{\frac12}.$$
As x* is feasible to (P), we have
$$\sum_{i\in I}p(g_i(x^*))=0.$$
Then, by Theorem 1.2, we get
$$-\frac54\,m\rho\varepsilon^{\frac12}\le\Big((f(x^*)-M)^2+\rho\sum_{i\in I}p(g_i(x^*))\Big)-\Big((f(x_\varepsilon)-M)^2+\rho\sum_{i\in I}p_\varepsilon(g_i(x_\varepsilon))\Big)\le\frac54\,m\rho\varepsilon^{\frac12},$$
$$-\frac54\,m\rho\varepsilon^{\frac12}+\rho\sum_{i\in I}p_\varepsilon(g_i(x_\varepsilon))\le(f(x^*)-M)^2-(f(x_\varepsilon)-M)^2\le\frac54\,m\rho\varepsilon^{\frac12}+\rho\sum_{i\in I}p_\varepsilon(g_i(x_\varepsilon)),$$
and hence
$$-\frac54\,m\rho\varepsilon^{\frac12}\le(f(x^*)-f(x_\varepsilon))(f(x^*)+f(x_\varepsilon)-2M)\le\frac52\,m\rho\varepsilon^{\frac12}.$$
Because M < 0 and |M| is large enough, f(x*) + f(xε) − 2M > 0, and therefore
$$\lim_{\varepsilon\to0}\,(f(x^*)-f(x_\varepsilon))=0.$$
Definition 1.2  For x* ∈ Rn, we call y* ∈ Rm a Lagrange multiplier vector corresponding to x* if x* and y* satisfy
$$\nabla f(x^*)=-\sum_{i\in I}y_i^*\nabla g_i(x^*),\qquad(1.5)$$
$$y_i^*g_i(x^*)=0,\quad y_i^*\ge0,\quad g_i(x^*)\le0,\quad i=1,2,\cdots,m.\qquad(1.6)$$
Theorem 1.4  Let f and gi, i = 1, 2, · · · , m, be convex. Let x* be an optimal solution of (P) and y* ∈ Rm a Lagrange multiplier vector corresponding to x*. Then
$$F(x^*,\rho)-F(x,\rho,\varepsilon)\le\frac54\,m\rho\varepsilon^{\frac12},\qquad\forall x\in\mathbb{R}^n,\qquad(1.7)$$
provided that ρ ≥ 2mλ√(b(x)) (f(x*) − M), where λ = max{yi*, i = 1, 2, · · · , m} and b(x) = max{gi+(x), i = 1, 2, · · · , m}.
Proof  By the convexity of f and gi, i = 1, 2, · · · , m, we have
$$f(x)\ge f(x^*)+\nabla f(x^*)^{\mathrm T}(x-x^*),\qquad\forall x\in\mathbb{R}^n,\qquad(1.8)$$
$$g_i(x)\ge g_i(x^*)+\nabla g_i(x^*)^{\mathrm T}(x-x^*),\qquad\forall x\in\mathbb{R}^n.$$
Since x* is an optimal solution of (P) and y* a Lagrange multiplier vector corresponding to x*, (1.5) and (1.6) hold. Applying (1.5), (1.6) and (1.8), we obtain
$$\begin{aligned}f(x)&\ge f(x^*)+\nabla f(x^*)^{\mathrm T}(x-x^*)\\&=f(x^*)-\sum_{i\in I}y_i^*\nabla g_i(x^*)^{\mathrm T}(x-x^*)\\&\ge f(x^*)-\sum_{i\in I}y_i^*\big(g_i(x)-g_i(x^*)\big)\\&=f(x^*)-\sum_{i\in I}y_i^*g_i(x).\end{aligned}$$
Let I+(x) = {i ∈ I | gi(x) > 0}. Then,
$$f(x)\ge f(x^*)-\sum_{i\in I^+(x)}y_i^*g_i(x).\qquad(1.9)$$
Let
$$\lambda=\max\{y_i^*,\,i=1,2,\cdots,m\}\quad\text{and}\quad b(x)=\max\{g_i^+(x),\,i=1,2,\cdots,m\}.\qquad(1.10)$$
Then −yi*gi(x) ≥ −λb(x) for any i ∈ I+(x). Thus,
$$f(x)\ge f(x^*)-\sum_{i\in I^+(x)}y_i^*g_i(x)\ge f(x^*)-\lambda m\,b(x),\qquad(1.11)$$
$$f(x)-M\ge f(x^*)-M-\sum_{i\in I^+(x)}y_i^*g_i(x)\ge f(x^*)-M-\lambda m\,b(x).\qquad(1.12)$$
We know that M < 0 and |M| is large enough, so f(x*) − M − λmb(x) > 0. We have
$$\begin{aligned}F(x,\rho)&=(f(x)-M)^2+\rho\sum_{i\in I^+(x)}\sqrt{g_i^+(x)}\\&\ge(f(x^*)-M-\lambda m\,b(x))^2+\rho\sum_{i\in I^+(x)}\sqrt{g_i^+(x)}\\&\ge(f(x^*)-M)^2-2\lambda m\,b(x)(f(x^*)-M)+(\lambda m\,b(x))^2+\rho\sqrt{b(x)}\\&\ge F(x^*,\rho)-2\lambda m\,b(x)(f(x^*)-M)+\rho\sqrt{b(x)}.\end{aligned}$$
When ρ ≥ 2mλ√(b(x)) (f(x*) − M), we obtain F(x, ρ) ≥ F(x*, ρ), that is, F(x*, ρ) − F(x, ρ) ≤ 0. By Lemma 1.1, we then obtain (1.7).
As a corollary of Theorem 1.4, we have
Corollary 1.1  Let f and gi, i = 1, 2, · · · , m, be convex, let x* be an optimal solution of (P), and let y* ∈ Rm be a Lagrange multiplier vector corresponding to x*. If xρ* is an optimal solution of (Pρ), then F(x*, ρ) = F(xρ*, ρ) when ρ ≥ λm√(b*) (f(x*) − M), where λ = max{yi*, i = 1, · · · , m} and b* = max{gi+(xρ*), i = 1, 2, · · · , m}.
In summary, Theorem 1.1 and Theorem 1.2 show that an approximate solution to (PIρ) is also an approximate solution to (Pρ) when the error ε is small enough. Theorem 1.3 means that an approximately optimal solution to (PIρ) becomes an approximately optimal solution to (P) if the solution to (PIρ) is ε-feasible. By Theorem 1.4, an approximately optimal solution to (PIρ) is an approximately optimal solution to (P) when the parameter ρ is large enough. Hence, we may obtain an approximately optimal solution to (P) by computing an approximately optimal solution to (PIρ).
2  An algorithm

In this section, we give an algorithm based on the smoothed penalty function.
Algorithm 2.1
Step 1  Choose x0, ε > 0, ε0 > 0, ρ0 > 0, 0 < η < 1 and N > 1, and a constant M < 0 with |M| large enough. Let j = 0 and go to Step 2.
Step 2  Use xj as the starting point to solve min_{x∈Rn} F(x, ρj, εj). Let xj+1 be the optimal solution obtained.
Step 3  If xj+1 is ε-feasible to (P), then stop: the algorithm has generated an approximate solution xj+1 of (P). Otherwise, let ρj+1 = Nρj, εj+1 = ηεj, j = j + 1, and go to Step 2.
Remark It is easy to see that as j → +∞, the sequence {εj } is decreasing to 0 and
the sequence {ρj } is increasing to +∞.
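A minimal Python sketch of Algorithm 2.1 is given below, assuming NumPy/SciPy; the helper name smoothed_penalty_solve and the choice of scipy.optimize.minimize with BFGS for the unconstrained subproblem in Step 2 are our own, not prescribed by the paper.

```python
# Sketch of Algorithm 2.1; the Step 2 unconstrained subproblem is delegated to BFGS.
import numpy as np
from scipy.optimize import minimize

def p_eps(t, eps):
    # smoothing of sqrt(max{t, 0}) from Section 1
    e = 5.0 / (4.0 * np.e) * np.sqrt(eps) * np.exp(min(t, eps) / eps)
    if t <= 0.0:
        return e
    if t <= eps:
        return -0.25 * eps**-2.5 * t**3 + e
    return np.sqrt(t)

def smoothed_penalty_solve(f, gs, x0, M=-100.0, eps_feas=1e-6, eps0=1e-5,
                           rho0=100.0, eta=0.5, N=5.0, max_iter=50):
    """Algorithm 2.1: minimize F(x, rho_j, eps_j) repeatedly until eps-feasibility."""
    x, rho, eps = np.asarray(x0, dtype=float), rho0, eps0
    for _ in range(max_iter):
        # Step 2: unconstrained smoothed penalty subproblem, warm-started at x
        F = lambda z, r=rho, e=eps: (f(z) - M) ** 2 + r * sum(p_eps(g(z), e) for g in gs)
        x = minimize(F, x, method="BFGS").x
        # Step 3: stop once x is eps-feasible for (P); otherwise enlarge rho, shrink eps
        if max(g(x) for g in gs) <= eps_feas:
            return x
        rho, eps = N * rho, eta * eps
    return x
```

Here f and each gi are supplied as plain Python callables; the default parameter values mirror those used for Example 3.1 in Section 3.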
Next, we will prove the convergence of the algorithm.
For x ∈ Rn, we define
I0(x) = {i | gi(x) = 0, i ∈ I},   I−(x) = {i | gi(x) < 0, i ∈ I},   I+(x) = {i | gi(x) > 0, i ∈ I},
Iε−(x) = {i | gi(x) ≤ ε, i ∈ I},   Iε+(x) = {i | gi(x) > ε, i ∈ I}.
Theorem 2.1  Assume that lim_{‖x‖→∞} f(x) = +∞. Let {xj} be the sequence generated by Algorithm 2.1, and suppose that the sequence {F(xj, ρj, εj)} is bounded. Then {xj} is bounded
and any limit point x* of {xj} belongs to X0. Moreover, there exist λ ≥ 0 and µi ≥ 0, i = 1, 2, · · · , m, which satisfy
$$\lambda\nabla f(x^*)+\sum_{i\in I^0(x^*)}\mu_i\nabla g_i(x^*)=0.\qquad(2.1)$$
Proof  First, we show that {xj} is bounded. By the assumptions, there exists a number L such that
$$L\ge F(x^j,\rho_j,\varepsilon_j),\qquad j=0,1,2,\cdots.\qquad(2.2)$$
Since pεj(t) ≥ 0 for all t, we get
$$L\ge(f(x^j)-M)^2+\rho_j\sum_{i\in I}p_{\varepsilon_j}(g_i(x^j))\ge(f(x^j)-M)^2.$$
Because M < 0 and |M| is large enough, f(xj) − M > 0, so there exists a number L0 such that
$$L_0\ge f(x^j),\qquad j=0,1,2,\cdots.$$
Suppose to the contrary that {xj} is unbounded and assume, without loss of generality, that ‖xj‖ → ∞ as j → +∞. Then lim_{j→+∞} f(xj) = +∞, which is a contradiction.
Next, we show that any limit point of {xj} belongs to X0. Without loss of generality, we assume lim_{j→∞} xj = x*. Suppose to the contrary that x* ∉ X0. Then there exists some i such that p(gi(x*)) ≥ a > 0. As the gi (i ∈ I) are continuous, the functions F(xj, ρj, εj) are continuous (j = 1, 2, · · ·). Note that
$$F(x^j,\rho_j,\varepsilon_j)=(f(x^j)-M)^2+\rho_j\sum_{i\in I}p_{\varepsilon_j}(g_i(x^j)),$$
that is,
$$F(x^j,\rho_j,\varepsilon_j)=(f(x^j)-M)^2+\rho_j\sum_{i\in I^-(x^j)\cup I^0(x^j)}\frac{5}{4e}\varepsilon_j^{\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}+\rho_j\sum_{i\in I^+(x^j)\cap I_{\varepsilon_j}^-(x^j)}\Big(-\frac14\varepsilon_j^{-\frac52}g_i(x^j)^3+\frac{5}{4e}\varepsilon_j^{\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}\Big)+\rho_j\sum_{i\in I_{\varepsilon_j}^+(x^j)}g_i(x^j)^{\frac12}.$$
Since ρj → +∞ and pεj(gi(xj)) → p(gi(x*)) ≥ a > 0 for the above i, we have F(xj, ρj, εj) → ∞ as j → ∞, which contradicts the assumption that {F(xj, ρj, εj)} is bounded.
Finally, we show that (2.1) holds. By Step 2, ∇F(xj, ρj, εj) = 0, that is,
$$2(f(x^j)-M)\nabla f(x^j)+\rho_j\sum_{i\in I^-(x^j)\cup I^0(x^j)}\frac{5}{4e}\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}\nabla g_i(x^j)+\rho_j\sum_{i\in I^+(x^j)\cap I_{\varepsilon_j}^-(x^j)}\Big(-\frac34\varepsilon_j^{-\frac52}g_i(x^j)^2+\frac{5}{4e}\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}\Big)\nabla g_i(x^j)+\rho_j\sum_{i\in I_{\varepsilon_j}^+(x^j)}\frac12 g_i(x^j)^{-\frac12}\nabla g_i(x^j)=0.$$
For j = 1, 2, · · ·, let
$$\gamma_j=2(f(x^j)-M)+\sum_{i\in I^-(x^j)\cup I^0(x^j)}\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}+\sum_{i\in I^+(x^j)\cap I_{\varepsilon_j}^-(x^j)}\Big(-\frac34\rho_j\varepsilon_j^{-\frac52}g_i(x^j)^2+\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}\Big)+\sum_{i\in I_{\varepsilon_j}^+(x^j)}\frac12\rho_j g_i(x^j)^{-\frac12}.$$
Then γj > 0, and we have
$$\frac{2(f(x^j)-M)}{\gamma_j}\nabla f(x^j)+\sum_{i\in I^-(x^j)\cup I^0(x^j)}\frac{\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}}{\gamma_j}\nabla g_i(x^j)+\sum_{i\in I^+(x^j)\cap I_{\varepsilon_j}^-(x^j)}\frac{-\frac34\rho_j\varepsilon_j^{-\frac52}g_i(x^j)^2+\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}}{\gamma_j}\nabla g_i(x^j)+\sum_{i\in I_{\varepsilon_j}^+(x^j)}\frac{\frac12\rho_j g_i(x^j)^{-\frac12}}{\gamma_j}\nabla g_i(x^j)=0.$$
Let
$$\lambda_j=\frac{2(f(x^j)-M)}{\gamma_j},$$
$$\mu_i^j=\frac{\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}}{\gamma_j},\qquad i\in I^-(x^j)\cup I^0(x^j),$$
$$\mu_i^j=\frac{-\frac34\rho_j\varepsilon_j^{-\frac52}g_i(x^j)^2+\frac{5}{4e}\rho_j\varepsilon_j^{-\frac12}e^{\frac{g_i(x^j)}{\varepsilon_j}}}{\gamma_j},\qquad i\in I^+(x^j)\cap I_{\varepsilon_j}^-(x^j),$$
$$\mu_i^j=\frac{\frac12\rho_j g_i(x^j)^{-\frac12}}{\gamma_j},\qquad i\in I_{\varepsilon_j}^+(x^j).$$
Then,
$$\lambda_j+\sum_{i\in I}\mu_i^j=1,\qquad\forall j,\qquad(2.3)$$
and µji ≥ 0, i ∈ I, ∀j. Clearly, as j → ∞, λj → λ ≥ 0 and µji → µi ≥ 0, ∀i ∈ I, and we have
$$\lambda\nabla f(x^*)+\sum_{i\in I}\mu_i\nabla g_i(x^*)=0,\qquad \lambda+\sum_{i\in I}\mu_i=1.$$
We know that any limit point of {xj} belongs to X0, so x* ∈ X0. For i ∈ I−(x*), as j → +∞, we get µji → 0. Therefore, µi = 0 for all i ∈ I−(x*), and hence (2.1) holds.
Theorem 2.1 shows that the sequence {xj} generated by Algorithm 2.1 may converge to a K-T point of (P) under some conditions. The speed of convergence of Algorithm 2.1 depends on the speed of convergence of the algorithm employed in Step 2 to solve the unconstrained optimization problem min_{x∈Rn} F(x, ρj, εj).
3  Numerical experiments
In this section, we will give some numerical examples.
For the j-th iteration of the algorithm, we define the constraint error e(j) by
$$e(j)=\sum_{i=1}^{m}\max\{g_i(x^j),\,0\}.$$
Example 3.1  Consider
$$\begin{aligned}(\mathrm{P}3.1)\quad\min\;&f(x)=x_1^2+x_2^2+2x_3^2+x_4^2-5x_1-5x_2-21x_3+7x_4\\ \text{s.t.}\;&g_1(x)=2x_1^2+x_2^2+x_3^2+2x_1+x_2+x_4-5\le0,\\ &g_2(x)=x_1^2+x_2^2+x_3^2+x_4^2+x_1-x_2+x_3-x_4-8\le0,\\ &g_3(x)=x_1^2+2x_2^2+x_3^2+2x_4^2-x_1-x_4-10\le0.\end{aligned}$$
Let x0 = (0, 0, 0, 0), ε = 10−6, ε0 = 10−5, ρ0 = 100, η = 0.5 and N = 5, and choose M = −100. We use Algorithm 2.1 to solve the problem. Numerical results are given in Table 3.1.
Table 3.1  Results of Algorithm 2.1

j  ρj    e(j)       f(xj)       (x1, x2, x3, x4)                            g1(xj)      g2(xj)     g3(xj)
1  100   39.769327  -72.254054  (1.265148, 1.391187, 4.007639, -2.165947)   17.953306   0.000000   21.816021
2  500   0.000000   -44.219631  (0.198013, 0.792996, 2.013085, -0.953018)   -0.004226   0.000000   -2.079105
3  2500  0.000000   -44.226405  (0.192883, 0.801067, 2.012790, -0.954342)   -0.000070   0.000000   -2.045061
Therefore, we obtain an approximate solution x2 = (0.198013, 0.792996, 2.013085, −0.953018) at the second iteration, which is a feasible solution. The objective function value is f(x2) = −44.219631, which is comparable to the result obtained in [9].
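As an illustration, Example 3.1 could be fed to the smoothed_penalty_solve sketch from Section 2 roughly as follows (a hypothetical setup reusing that sketch, not the authors' original code):

```python
# Example 3.1 with the hypothetical smoothed_penalty_solve sketch from Section 2.
import numpy as np

f  = lambda x: x[0]**2 + x[1]**2 + 2*x[2]**2 + x[3]**2 - 5*x[0] - 5*x[1] - 21*x[2] + 7*x[3]
g1 = lambda x: 2*x[0]**2 + x[1]**2 + x[2]**2 + 2*x[0] + x[1] + x[3] - 5
g2 = lambda x: x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 + x[0] - x[1] + x[2] - x[3] - 8
g3 = lambda x: x[0]**2 + 2*x[1]**2 + x[2]**2 + 2*x[3]**2 - x[0] - x[3] - 10

x = smoothed_penalty_solve(f, [g1, g2, g3], x0=np.zeros(4),
                           M=-100.0, rho0=100.0, eta=0.5, N=5.0)
print("x =", x, " f(x) =", f(x), " max g =", max(g(x) for g in (g1, g2, g3)))
```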
Example 3.2  Consider
$$\begin{aligned}(\mathrm{P}3.2)\quad\min\;&x_1^2+x_2^2\\ \text{s.t.}\;&x_1^2-x_2\le0,\quad -x_1\le0.\end{aligned}$$
The optimal solution to (P3.2) is (x1, x2) = (0, 0). Let x0 = (1, 1), ε = 10−6, ε0 = 10−5, ρ0 = 2, N = 5, η = 0.001. We choose M = −100 for ε-feasibility. We use Algorithm 2.1 to solve the problem. Numerical results are given in Table 3.2.
Table 3.2  Results of Algorithm 2.1

j  ρj  εj        f(xj)     (x1, x2)
1  2   0.000010  0.001444  (0.026863, 0.026876)
2  10  0.000000  0.000000  (0.000000, 0.000000)
We get the optimal solution x2 = (0.000000, 0.000000) at the second iteration, where
f (x2 ) = 0.000000.
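Example 3.2 can be set up in the same hypothetical way with the parameter values stated above (x0 = (1, 1), ρ0 = 2, η = 0.001, M = −100); again this merely reuses the solver sketch from Section 2.

```python
# Example 3.2 with the same hypothetical solver sketch; the optimum is (0, 0).
import numpy as np

f  = lambda x: x[0]**2 + x[1]**2
gs = [lambda x: x[0]**2 - x[1], lambda x: -x[0]]

x = smoothed_penalty_solve(f, gs, x0=np.array([1.0, 1.0]),
                           M=-100.0, rho0=2.0, eta=0.001, N=5.0)
print("x =", x, " f(x) =", f(x))
```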
According to the numerical results given above, the algorithm obtains a satisfactory approximate solution at a satisfactory speed. The smoothed square-root penalty function is a new type of penalty function, and the algorithm appears to be efficient.
References
[1] Zangwill W I. Nonlinear programming via penalty function [J]. Management Science, 1967,
13: 334-358.
[2] Zenios S A, Pinar M C, Dembo R S. A smooth penalty function algorithm for network-structured problems [J]. European Journal of Operational Research, 1993, 64: 258-277.
[3] Pinar M C, Zenios S A. On smoothing exact penalty functions for convex constrained optimization [J]. SIAM Journal on Optimization, 1994, 4: 486-511.
[4] Yang X Q, Meng Z Q, Huang X X, et al. Smoothing nonlinear penalty functions for constrained
optimization problems [J]. Numerical Functional Analysis and Optimization, 2003, 24: 351-364.
[5] Wu Z Y, Bai F S, Yang X Q, et al. An exact lower order penalty function and its smoothing
in nonlinear programming [J]. Optimization, 2004, 53: 51-68.
[6] Meng Z Q, Dang C Y, Yang X Q. On the smoothing of the square-root exact penalty function
for inequality constrained optimization [J]. Computational Optimization and Applications,
2006, 35: 375-398.
[7] Herty M, Klar A, Singh A K, et al. Smoothed penalty algorithms for optimization of nonlinear
models [J]. Computational Optimization and Applications, 2007, 37: 157-176.
[8] Liu B Z. On smoothing exact penalty functions for nonlinear constrained optimization problems [J]. Journal of Applied Mathematics and Computing, 2009, 30: 259-270.
[9] Meng Z Q, Hu Q Y, Dang C Y. A penalty function algorithm with objective parameters
for nonlinear mathematical programming [J]. Journal of Industrial and Management Optimization, 2009, 5:
585-601.
[10] Meng Z Q, Dang C Y, Jiang M, et al. A smoothing objective penalty function algorithm for
inequality constrained optimization problems [J]. Numerical Functional Analysis and Optimization, 2011, 32: 806-820.
[11] Chen C H, Mangasarian O L. A class of smoothing functions for nonlinear and mixed complementarity problems [J]. Computational Optimization and Applications, 1996, 5: 97-138.
[12] Chen C H, Mangasarian O L. Smoothing methods for convex inequalities and linear complementarity problems [J]. Mathematical Programming, 1995, 71: 51-69.
[13] Wan Z, Wang Y J. Convergence of an inexact smoothing method for mathematical programs
with equilibrium constraints [J]. Numerical Functional Analysis and Optimization, 2006, 27:
485-495.
[14] Zhu Z B, Luo Z J, Zeng J W. A new smoothing technique for mathematical programs with
equilibrium constraints [J]. Applied Mathematics and Mechanics, 2007, 28: 1407-1414.
[15] Hintermuller M, Kopacka I. A smooth penalty approach and a nonlinear multigrid algorithm
for elliptic MPECs [J]. Computational Optimization and Applications, 2011, 50: 111-145.
[16] Rubinov A M, Huang X X, Yang X Q. The zero duality gap property and lower semicontinuity
of the perturbation function [J]. Mathematics of Operations Research, 2002, 27: 775-791.