Global Optimization for the Sum of Certain Nonlinear Functions

Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2014, Article ID 408918, 8 pages
http://dx.doi.org/10.1155/2014/408918

Research Article

Mio Horai,^1 Hideo Kobayashi,^1 and Takashi G. Nitta^2

^1 Faculty of Engineering, Graduate School of Engineering, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan
^2 Department of Education, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan
Correspondence should be addressed to Mio Horai; [email protected]
Received 13 April 2014; Revised 19 August 2014; Accepted 20 August 2014; Published 10 November 2014
Academic Editor: Julio D. Rossi
Copyright © 2014 Mio Horai et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We extend work by Pei-Ping and Gui-Xia, 2007, to a global optimization problem for more general functions. Pei-Ping and Gui-Xia treat the optimization problem for the linear sum of polynomial fractional functions, using a branch and bound approach. We prove that this extension makes it possible to solve, by a branch and bound algorithm, nonconvex optimization problems which the method of Pei-Ping and Gui-Xia, 2007, cannot solve: problems whose objective and constraints are sums of functions with positive (or negative) first and second derivatives whose variables are given by polynomial fractional functions.
1. Introduction
The optimization problem is widely used in the sciences, especially in engineering and economics [1–3]. In 2007, Pei-Ping and Gui-Xia considered the following global optimization problem in [4]:

(P)   min  Ο‰(x) = βˆ‘_{j=1}^{P} c_j (b_j(x)/a_j(x))
      s.t. g_k(x) ≀ 0  (k = 1, ..., M),
           x ∈ X.

Pei-Ping and Gui-Xia proposed a method to solve these problems globally by using a branch and bound algorithm in [4]. In the above problem, the objective function and the constraint functions are sums of generalized polynomial fractional functions. We extend these functions to more general functions, as below:

(P0)  min  Ο‰(x) = βˆ‘_{j=1}^{P} h_j(b_j(x)/a_j(x))
      s.t. g_k(x) = βˆ‘_{j'=1}^{P_k} h_{kj'}(d_{kj'}(x)/c_{kj'}(x)) ≀ 0  (j' = 1, ..., P_k, k = 1, ..., M),
           x ∈ X,

where X := {x ∈ R^N | 0 < \underline{x}_i ≀ x_i ≀ \overline{x}_i < ∞ (i = 1, 2, ..., N)} and a_j(x), b_j(x), g_k(x) are given generalized polynomials. One has

a_j(x) := βˆ‘_{t=1}^{T_j^a} Ξ²_{jt}^a ∏_{i=1}^{N} x_i^{Ξ³_{jti}^a},
b_j(x) := βˆ‘_{t=1}^{T_j^b} Ξ²_{jt}^b ∏_{i=1}^{N} x_i^{Ξ³_{jti}^b},
g_k(x) := βˆ‘_{t=1}^{T_k^g} Ξ²_{kt}^g ∏_{i=1}^{N} x_i^{Ξ³_{kti}^g}.   (1)

Sums of ratios problems like (P) attract a lot of attention, and the reason is that these problems are applied to various economic problems [4].

In (P0) we require a_j(x) > 0, b_j(x) > 0, c_{kj'}(x) > 0, d_{kj'}(x) > 0 for x ∈ X.
π‘Žπ‘— (π‘₯) :=
𝑁
π›Ύπ‘Ž
π‘Ž
βˆ‘π›½π‘—π‘‘
∏π‘₯𝑖 𝑗𝑑𝑖 ,
𝑑=1
𝑖=1
𝑇𝑗𝑏
𝑁
𝑑=1
𝑖=1
𝛾𝑏
𝑏
𝑏𝑗 (π‘₯) := βˆ‘π›½π‘—π‘‘
∏π‘₯𝑖 𝑗𝑑𝑖 ,
π‘‡π‘˜π‘π‘— ́
𝑁
𝑑=1
𝑖=1
𝛾𝑐
π‘π‘˜π‘— ́ (π‘₯) := βˆ‘π›½π‘˜π‘ 𝑗𝑑́ ∏π‘₯𝑖 π‘˜π‘—π‘‘ ,
́
2
Abstract and Applied Analysis
π‘‡π‘˜π‘‘π‘— ́
𝑁
𝑑=1
𝑖=1
2. Equivalence Transformation and
Linear Relaxation
𝛾𝑑 ́
π‘‘π‘˜π‘— ́ (π‘₯) := βˆ‘π›½π‘˜π‘‘π‘—π‘‘Μ ∏π‘₯𝑖 π‘˜π‘—π‘‘ ,
(𝑗 = 1, 2, . . . , 𝑃, 𝑗 ́ = 1, 2, . . . , π‘ƒπ‘˜ , π‘˜ = 1, 2, . . . , 𝑀) ,
(2)
π‘Ž
𝑏
where π‘‡π‘—π‘Ž , 𝑇𝑗𝑏 , π‘‡π‘˜π‘π‘— ,́ π‘‡π‘˜π‘‘π‘— ́ are natural numbers, and 𝛽𝑗𝑑
, 𝛽𝑗𝑑
, π›½π‘˜π‘ 𝑗𝑑́ ,
π‘Ž
𝑏
π›½π‘˜π‘‘π‘—π‘‘Μ are real constants not zero, and 𝛾𝑗𝑑𝑖
, 𝛾𝑗𝑑𝑖
, π›Ύπ‘˜π‘π‘—π‘‘Μ , π›Ύπ‘˜π‘‘π‘—π‘‘Μ are real
constants.
: R 󳨃→ R are secWe assume that β„Žπ‘— (𝑦𝑗 ), β„Žπ‘˜π‘— (𝑦
́ π‘˜π‘— )́
ondly differentiable functions and monotone increasing or
monotone decreasing functions. We divide these functions to
monotone increasing or monotone decreasing as follows:
β„Žπ‘—σΈ€ 
> 0 (𝑗 = 1, . . . , 𝐾) ,
β„Žπ‘—σΈ€ 
< 0 (𝑗 = 𝐾 + 1, . . . , 𝑃) ,
β„Žπ‘˜σΈ€  𝑗 ́ > 0 (𝑗 ́ = 1, . . . , πΎπ‘˜ ) ,
β„Žπ‘˜σΈ€  𝑗 ́
In this section we firstly transform the problem (𝑃0) to the
equivalent problems (𝑃1) and secondly transform (𝑃1) to
(𝑃2). Thirdly we linearize the problem (𝑃2) corresponding to
[4].
2.1. Translation of the Problem (𝑃0) into (𝑃1). For the problem (𝑃0), we put new variables π‘šπ‘— , 𝑙𝑗 , π‘‘π‘˜π‘— ́ and π‘ π‘˜π‘— ,́ and the function 𝜌(𝑙, π‘š) and πœ‰π‘˜ (𝑠, 𝑑) depending on β„Žπ‘— , β„Žπ‘˜π‘— ́ in the original
problem (𝑃0):
𝑃
𝜌 (𝑙, π‘š) := βˆ‘β„Žπ‘— (
𝑗=1
π‘ƒπ‘˜
πœ‰π‘˜ (𝑠, 𝑑) := βˆ‘β„Žπ‘˜π‘— ́ (
< 0 (𝑗 ́ = πΎπ‘˜ + 1, . . . , π‘ƒπ‘˜ ) .
́
𝑗=1
π‘‘π‘˜π‘— ́
π‘ π‘˜π‘— ́
)
π‘šπ‘—
𝑙𝑗
)
(𝑗 = 1, 2, . . . , 𝑃) ,
(𝑗 ́ = 1, . . . , π‘ƒπ‘˜ , π‘˜ = 1, . . . , 𝑀) .
(6)
(3)
Furthermore, we assume the following conditions for the
second derivatives:
σΈ€ σΈ€ 
{β„Žπ‘— ∘ exp (𝑧𝑗 )} > 0,
σΈ€ σΈ€ 
> 0,
{β„Žπ‘˜π‘— ́ ∘ exp (π‘§π‘˜π‘— )}
́
σΈ€ σΈ€ 
{β„Žπ‘— ∘ exp (𝑧𝑗 )} < 0,
σΈ€ σΈ€ 
{β„Žπ‘˜π‘— ́ ∘ exp (π‘§π‘˜π‘— )}
<0
́
(4)
(𝑗 = 1, . . . , 𝑃, 𝑗 ́ = 1, . . . , π‘ƒπ‘˜ , π‘˜ = 1, . . . , 𝑀) .
sin (
π‘₯12 + 3π‘₯2 βˆ’ 2π‘₯22 + 1
)
π‘₯12 + π‘₯2 + 2
+ cos (
s.t.
π‘₯12 βˆ’
βˆ’π‘₯22
π‘₯1 + 3
π‘Žπ‘— ≀ 𝑙𝑗 ≀ π‘Žπ‘— , 𝑏𝑗 ≀ π‘šπ‘— ≀ 𝑏𝑗 ,
(7)
π‘π‘˜π‘— ́ ≀ π‘ π‘˜π‘— ́ ≀ π‘π‘˜π‘— ,́ π‘‘π‘˜π‘— ́ ≀ π‘‘π‘˜π‘— ́ ≀ π‘‘π‘˜π‘— }́ ,
where 𝑃sum = 2(𝑃 + βˆ‘π‘€
π‘˜=1 π‘ƒπ‘˜ ).
Let 𝑍𝐻 be the following closed domain in 𝑋 × π»; that is,
𝑍𝐻 := { (π‘₯, 𝑙, π‘š, 𝑠, 𝑑) ∈ 𝑋 × π» |
πœ‰π‘˜ (𝑠, 𝑑) ≀ 0
+ 2π‘₯1 + 2π‘₯2
)
π‘₯1 + 2.5
π‘₯1
βˆ’1≀0
π‘₯2
π‘π‘˜π‘— ,́ π‘π‘˜π‘— ,́ π‘‘π‘˜π‘— ,́ π‘‘π‘˜π‘— .́
Let 𝐻 be the closed interval:
𝐻 := { (𝑙, π‘š, 𝑠, 𝑑) ∈ R𝑃sum |
To solve the above problem (𝑃0), we transform the problem
(𝑃0) to the equivalent problems (𝑃1), (𝑃2) and transform
(𝑃2) into the linear relaxation problem. We prove the equivalency of the problems under above assumption, and we calculate the equivalent problem using branch and bound algorithm corresponding to [4–6].
For example, according to this extension at approach, we
can calculate the following global optimization problem:
min
π‘‘π‘˜π‘— (π‘₯)
are polynomials on closed
Since π‘Žπ‘— (π‘₯), 𝑏𝑗 (π‘₯), π‘π‘˜π‘— (π‘₯),
́
́
interval 𝑋, it is easy to calculate the minimums and maximums of the functions on 𝑋; we denote them by π‘Žπ‘— , π‘Žπ‘— , 𝑏𝑗 , 𝑏𝑗 ,
(π‘˜ = 1, . . . , 𝑀) ,
𝑙𝑗 βˆ’ π‘Žπ‘— (π‘₯) ≀ 0,
𝑏𝑗 (π‘₯) βˆ’ π‘šπ‘— ≀ 0
(𝑗 = 1, . . . , 𝐾) ,
(5)
π‘₯2
βˆ’5≀0
π‘₯1
𝑋 = {π‘₯ : 1 ≀ π‘₯1 ≀ 3, 1 ≀ π‘₯2 ≀ 3} .
In this paper, we explain how to make equivalent relaxation linear problem from original problem in Section 2. In
Section 3, we present the branch and bound algorithm and
its convergence. In Section 4, we introduce numerical experiments result.
π‘Žπ‘— (π‘₯) βˆ’ 𝑙𝑗 ≀ 0,
π‘šπ‘— βˆ’ 𝑏𝑗 (π‘₯) ≀ 0
(𝑗 = 𝐾 + 1, . . . , 𝑃) ,
π‘ π‘˜π‘— ́ βˆ’ π‘π‘˜π‘— ́ (π‘₯) ≀ 0,
π‘‘π‘˜π‘— ́ (π‘₯) βˆ’ π‘‘π‘˜π‘— ́ ≀ 0
(𝑗 ́ = 1, . . . , πΎπ‘˜ , π‘˜ = 1, . . . , 𝑀) ,
π‘π‘˜π‘— ́ (π‘₯) βˆ’ π‘ π‘˜π‘— ́ ≀ 0,
π‘‘π‘˜π‘— ́ βˆ’ π‘‘π‘˜π‘— ́ (π‘₯) ≀ 0
(𝑗 ́ = πΎπ‘˜ + 1, . . . , π‘ƒπ‘˜ , π‘˜ = 1, . . . , 𝑀)} .
(8)
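As a concrete check, the worked example above can be written down directly. The following is a minimal Python sketch (not the paper's MATLAB code); the grouping of the extracted fragments into the two constraints follows the reconstruction above, so treat that grouping as an assumption:

```python
import math

# Objective of the sample problem: a sum of outer functions (sin, cos,
# monotone on the relevant range) of ratios of generalized polynomials.
def objective(x1, x2):
    r1 = (x1**2 + 3*x2 - 2*x2**2 + 1) / (x1**2 + x2 + 2)
    r2 = (-x2**2 + 2*x1 + 2*x2) / (x1 + 2.5)
    return math.sin(r1) + math.cos(r2)

# Constraints g(x) <= 0; this two-constraint grouping is an assumption
# of the reconstruction.
def constraints(x1, x2):
    g1 = x1 / x2 - 1
    g2 = x1**2 - x2**2 / (x1 + 3) + x2 / x1 - 5
    return g1, g2

x = (1.34977, 1.64232)   # epsilon-optimal point reported in Section 4
print(objective(*x))     # ~1.0748, the reported epsilon-optimal value
print(constraints(*x))   # both values are <= 0, so the point is feasible
```

Evaluating the objective at the solution reported in Section 4 reproduces the reported optimal value, which supports the reconstruction of the objective.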
We give the problem (P1) on Z_H. Consider

(P1)  min  ρ(l, m)
      s.t. ΞΎ_k(s, t) ≀ 0  (k = 1, ..., M),
           (x, l, m, s, t) ∈ Z_H.

Now we obtain Theorem 1, which proves the equivalence of (P0) and (P1).

Theorem 1. The problem (P0) on X is equivalent to the problem (P1) on Z_H.

Proof. Let x* be the optimal solution for (P0); we denote

l_j* := a_j(x*),  m_j* := b_j(x*),  s_{kj'}* := c_{kj'}(x*),  t_{kj'}* := d_{kj'}(x*).   (9)

Therefore we obtain

βˆ‘_{j=1}^{P} h_j(m_j*/l_j*) = βˆ‘_{j=1}^{P} h_j(b_j(x*)/a_j(x*)),
βˆ‘_{j'=1}^{P_k} h_{kj'}(t_{kj'}*/s_{kj'}*) = βˆ‘_{j'=1}^{P_k} h_{kj'}(d_{kj'}(x*)/c_{kj'}(x*)).   (10)

The element (x*, l*, m*, s*, t*) satisfies the conditions for Z_H.

Furthermore, let (x#, l#, m#, s#, t#) be the optimal solution for (P1). Then by the restricting conditions we have the following:

for j = 1, ..., K, 0 < l_j# ≀ a_j(x#) and 0 < b_j(x#) ≀ m_j#; that is, 0 < b_j(x#)/a_j(x#) ≀ m_j#/l_j#;
for j = K+1, ..., P, 0 < a_j(x#) ≀ l_j# and 0 < m_j# ≀ b_j(x#); that is, 0 < m_j#/l_j# ≀ b_j(x#)/a_j(x#);
for j' = 1, ..., K_k and k = 1, ..., M, 0 < s_{kj'}# ≀ c_{kj'}(x#) and 0 < d_{kj'}(x#) ≀ t_{kj'}#; that is, 0 < d_{kj'}(x#)/c_{kj'}(x#) ≀ t_{kj'}#/s_{kj'}#;
for j' = K_k+1, ..., P_k and k = 1, ..., M, 0 < c_{kj'}(x#) ≀ s_{kj'}# and 0 < t_{kj'}# ≀ d_{kj'}(x#); that is, 0 < t_{kj'}#/s_{kj'}# ≀ d_{kj'}(x#)/c_{kj'}(x#).   (11)

The conditions h_j' > 0 (j = 1, ..., K) or h_j' < 0 (j = K+1, ..., P), and h_{kj'}' > 0 (j' = 1, ..., K_k) or h_{kj'}' < 0 (j' = K_k+1, ..., P_k), lead to

h_j(b_j(x#)/a_j(x#)) ≀ h_j(m_j#/l_j#)   (j = 1, ..., P),
h_{kj'}(d_{kj'}(x#)/c_{kj'}(x#)) ≀ h_{kj'}(t_{kj'}#/s_{kj'}#)   (j' = 1, ..., P_k, k = 1, ..., M),   (12)

and then

βˆ‘_{j=1}^{P} h_j(b_j(x#)/a_j(x#)) ≀ βˆ‘_{j=1}^{P} h_j(m_j#/l_j#),
βˆ‘_{j'=1}^{P_k} h_{kj'}(d_{kj'}(x#)/c_{kj'}(x#)) ≀ βˆ‘_{j'=1}^{P_k} h_{kj'}(t_{kj'}#/s_{kj'}#) ≀ 0;   (13)

that is, x# satisfies the constraints of (P0).

Since the optimal solution for (P0) is x*, we obtain

βˆ‘_{j=1}^{P} h_j(b_j(x*)/a_j(x*)) ≀ βˆ‘_{j=1}^{P} h_j(b_j(x#)/a_j(x#)),   (14)

so

βˆ‘_{j=1}^{P} h_j(m_j*/l_j*) = βˆ‘_{j=1}^{P} h_j(b_j(x*)/a_j(x*)) ≀ βˆ‘_{j=1}^{P} h_j(m_j#/l_j#).   (15)

Now, for the optimal solution x* of (P0), we denote

l_j* := a_j(x*),  m_j* := b_j(x*),  s_{kj'}* := c_{kj'}(x*),  t_{kj'}* := d_{kj'}(x*);   (16)

then

ρ(l*, m*) = Ο‰(x*),  ΞΎ_k(s*, t*) = g_k(x*).   (17)

The element (x*, l*, m*, s*, t*) satisfies the conditions for Z_H. Since (x#, l#, m#, s#, t#) is the optimal solution for (P1), it satisfies βˆ‘_{j=1}^{P} h_j(m_j#/l_j#) ≀ βˆ‘_{j=1}^{P} h_j(m_j*/l_j*). Hence βˆ‘_{j=1}^{P} h_j(m_j#/l_j#) = βˆ‘_{j=1}^{P} h_j(m_j*/l_j*); that is, the two problems are equivalent.
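The lifting used in the proof of Theorem 1 is mechanical. A toy Python instance (our own choice of h_1, a_1, b_1, not from the paper) shows that the lifted objective agrees with the original one:

```python
import math

# Toy instance of (P0) -> (P1): P = 1, h_1 = exp (so h_1' > 0),
# a_1(x) = x1 + x2, b_1(x) = x1 * x2 (both generalized polynomials).
def a1(x): return x[0] + x[1]
def b1(x): return x[0] * x[1]
def omega(x): return math.exp(b1(x) / a1(x))

# Lifting from the proof of Theorem 1: l* := a_1(x*), m* := b_1(x*),
# and rho(l, m) := h_1(m / l).
def rho(l, m): return math.exp(m / l)

x_star = (2.0, 3.0)             # any feasible point plays the role of x*
l_star, m_star = a1(x_star), b1(x_star)
print(rho(l_star, m_star) == omega(x_star))   # the objectives agree
```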
2.2. Translation of the Problem (P1) into (P2). We change the variables by the logarithmic function. Since x_i, l_j, m_j, s_{kj'}, t_{kj'} are positive, we can write x_i, l_j, m_j, s_{kj'}, t_{kj'} as exp(y_n) using new variables y_n (n = 1, ..., N + P_sum); that is, y_i := ln x_i, y_{N+j} := ln l_j, y_{N+P+j} := ln m_j, y_{N+2P+(kβˆ’1)P_k+j'} := ln s_{kj'}, and y_{N+2P+(M+kβˆ’1)P_k+j'} := ln t_{kj'}.
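The index bookkeeping of this substitution is easy to get wrong, so here is a small Python sketch of the mapping; the helper names index_s/index_t are ours, and we assume all P_k are equal for brevity:

```python
# Index layout of the log variables y (Section 2.2 bookkeeping):
# y_i = ln x_i, y_{N+j} = ln l_j, y_{N+P+j} = ln m_j,
# y_{N+2P+(k-1)P_k+j'} = ln s_{kj'}, y_{N+2P+(M+k-1)P_k+j'} = ln t_{kj'}.
def index_s(N, P, Pk, k, jp):
    return N + 2 * P + (k - 1) * Pk + jp

def index_t(N, P, Pk, M, k, jp):
    return N + 2 * P + (M + k - 1) * Pk + jp

N, P, M, Pk = 2, 3, 2, 2
P_sum = 2 * (P + M * Pk)        # total number of lifted variables
# The s- and t-blocks tile the tail of 1..N+P_sum without overlapping:
s_idx = [index_s(N, P, Pk, k, j) for k in (1, 2) for j in (1, 2)]
t_idx = [index_t(N, P, Pk, M, k, j) for k in (1, 2) for j in (1, 2)]
print(s_idx, t_idx)             # consecutive, disjoint index blocks
```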
The closed domain Z_H corresponds to the following closed rectangle S^0, where

S^0 := {y ∈ R^{N+P_sum} |
  ln \underline{x}_i ≀ y_i ≀ ln \overline{x}_i,
  ln \underline{a}_j ≀ y_{N+j} ≀ ln \overline{a}_j,
  ln \underline{b}_j ≀ y_{N+P+j} ≀ ln \overline{b}_j,
  ln \underline{c}_{kj'} ≀ y_{N+2P+(kβˆ’1)P_k+j'} ≀ ln \overline{c}_{kj'},
  ln \underline{d}_{kj'} ≀ y_{N+2P+(M+kβˆ’1)P_k+j'} ≀ ln \overline{d}_{kj'}}.   (18)

Using this transformation of variables, the objective function and the restricted functions of (P1) are changed to the following:

βˆ‘_{j=1}^{P} h_j(m_j/l_j) = βˆ‘_{j=1}^{P} h_j ∘ exp(y_{N+P+j} βˆ’ y_{N+j}),
βˆ‘_{j'=1}^{P_k} h_{kj'}(t_{kj'}/s_{kj'}) = βˆ‘_{j'=1}^{P_k} h_{kj'} ∘ exp(y_{N+2P+(M+kβˆ’1)P_k+j'} βˆ’ y_{N+2P+(kβˆ’1)P_k+j'}),   (19)

l_j βˆ’ a_j(x) = exp(y_{N+j}) βˆ’ βˆ‘_{t=1}^{T_j^a} Ξ²_{jt}^a exp(βˆ‘_{i=1}^{N} Ξ³_{jti}^a y_i),
b_j(x) βˆ’ m_j = βˆ‘_{t=1}^{T_j^b} Ξ²_{jt}^b exp(βˆ‘_{i=1}^{N} Ξ³_{jti}^b y_i) βˆ’ exp(y_{N+P+j}),
s_{kj'} βˆ’ c_{kj'}(x) = exp(y_{N+2P+(kβˆ’1)P_k+j'}) βˆ’ βˆ‘_{t=1}^{T_{kj'}^c} Ξ²_{kj't}^c exp(βˆ‘_{i=1}^{N} Ξ³_{kj'ti}^c y_i),
d_{kj'}(x) βˆ’ t_{kj'} = βˆ‘_{t=1}^{T_{kj'}^d} Ξ²_{kj't}^d exp(βˆ‘_{i=1}^{N} Ξ³_{kj'ti}^d y_i) βˆ’ exp(y_{N+2P+(M+kβˆ’1)P_k+j'}).   (20)

Then the objective function ρ(l, m) and the restricted functions are changed to functions ΞΌ_0(y) and ΞΌ_m(y) (m = 1, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k)). Now we put

S_{ΞΌ_0} := {y ∈ S^0 | ΞΌ_m(y) ≀ 0 (m = 1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k))}.   (21)

Then the problem (P1) is transformed naturally into the following problem (P2):

(P2)  min  ΞΌ_0(y)
      s.t. y ∈ S_{ΞΌ_0}.

2.3. Linearization of the Problem (P2). The objective and restricted functions of (P2) are nonlinear. On S_{ΞΌ_0} we approximate the ΞΌ_m(y) by lower bounding linear functions, and we can transform (P2) into a linear optimization problem; its solution is a lower bound of the optimal value of (P2). We denote by \underline{y}_i, \overline{y}_i (i = 1, ..., N + P_sum) the minimums and maximums ln \underline{x}_i, ln \overline{x}_i, ln \underline{a}_j, ln \overline{a}_j, ln \underline{b}_j, ln \overline{b}_j, ln \underline{c}_{kj'}, ln \overline{c}_{kj'}, ln \underline{d}_{kj'}, ln \overline{d}_{kj'} (j = 1, ..., P, j' = 1, ..., P_k, k = 1, ..., M) of the variables y_i, y_{N+j}, y_{N+P+j}, y_{N+2P+(kβˆ’1)P_k+j'}, y_{N+2P+(M+kβˆ’1)P_k+j'}. And we denote a subrectangle S^q βŠ‚ S_{ΞΌ_0}; that is,

S^q := {y ∈ R^{N+P_sum} | \underline{y}_i^0 ≀ \underline{y}_i^q ≀ y_i ≀ \overline{y}_i^q ≀ \overline{y}_i^0 (i = 1, ..., N + P_sum)}.

Now each of ρ(l, m), ΞΎ_k(s, t), l_j βˆ’ a_j(x), b_j(x) βˆ’ m_j, s_{kj'} βˆ’ c_{kj'}(x), d_{kj'}(x) βˆ’ t_{kj'} is represented as

ΞΌ_m(y) = βˆ‘_{t=1}^{T_m} Ξ¨_{mt} ∘ exp(βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i),

where the Ξ»_{mti} are real numbers and Ξ¨_{mt} satisfies Ξ¨_{mt}'(x) > 0 or Ξ¨_{mt}'(x) < 0, and {Ξ¨_{mt} ∘ exp(y)}'' > 0 or {Ξ¨_{mt} ∘ exp(y)}'' < 0. Let f_{mt}(y) be Ξ¨_{mt} ∘ exp(y), so that ΞΌ_m(y) = βˆ‘_{t=1}^{T_m} f_{mt}(βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i). We define

\underline{Y}_{mt}^{S^q} := βˆ‘_{i=1}^{N+P_sum} min{Ξ»_{mti} \underline{y}_i^q, Ξ»_{mti} \overline{y}_i^q},
\overline{Y}_{mt}^{S^q} := βˆ‘_{i=1}^{N+P_sum} max{Ξ»_{mti} \underline{y}_i^q, Ξ»_{mti} \overline{y}_i^q}
(m = 0, 1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k), t = 1, ..., T_m).   (22)

Now f_{mt}'(y) > 0 or f_{mt}'(y) < 0, and f_{mt}''(y) > 0 or f_{mt}''(y) < 0, so f_{mt} is a monotonic convex (or concave) function on [\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}], and there exist upper and lower bounding linear functions F_{mt}^{S^q} and G_{mt}^{S^q} of f_{mt} on this interval.
We denote

F_{mt}^{S^q}(Y) := [(f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q})] (Y βˆ’ \underline{Y}_{mt}^{S^q}) + f_{mt}(\underline{Y}_{mt}^{S^q}).   (23)

As f_{mt} is continuous on [\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}] and differentiable on (\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}), by the mean value theorem there exists c_{mt}^{S^q} ∈ (\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}) such that

f_{mt}'(c_{mt}^{S^q}) = (f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q}).   (24)

Since f_{mt}'(y) > 0 or f_{mt}'(y) < 0 and f_{mt}''(y) > 0 or f_{mt}''(y) < 0, the derivative f_{mt}' is monotonic on [\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}] and has an inverse function (f_{mt}')^{βˆ’1}; hence c_{mt}^{S^q} is uniquely determined:

c_{mt}^{S^q} = (f_{mt}')^{βˆ’1}((f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q})),   (25)

and we define

G_{mt}^{S^q}(Y) := [(f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q})] (Y βˆ’ c_{mt}^{S^q}) + f_{mt}(c_{mt}^{S^q}),   (26)

L_{mt}^{S^q}(Y) := G_{mt}^{S^q}(Y) if f_{mt}''(y) > 0;  L_{mt}^{S^q}(Y) := F_{mt}^{S^q}(Y) if f_{mt}''(y) < 0.   (27)

By the definition, f_{mt}(Y) β‰₯ L_{mt}^{S^q}(Y) on [\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}]: for convex f_{mt} the tangent line G_{mt}^{S^q} lies below f_{mt}, and for concave f_{mt} the chord F_{mt}^{S^q} does. Hence, for all y ∈ S^q, ΞΌ_m(y) = βˆ‘_{t=1}^{T_m} f_{mt}(βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i) β‰₯ βˆ‘_{t=1}^{T_m} L_{mt}^{S^q}(βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i). Let L_m^{S^q}(y) := βˆ‘_{t=1}^{T_m} L_{mt}^{S^q}(βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i) (0 ≀ m ≀ M + 2(P + βˆ‘_{k=1}^{M} P_k)). Then L_0^{S^q}(y) is a linear function lying below the convex envelope of ΞΌ_0(y) on the rectangle. (LRP(S^q)) is the linear relaxation problem of (P2) given by these lower bounding functions of the ΞΌ_m(y):

(LRP(S^q))  min  L_0^{S^q}(y)
            s.t. L_m^{S^q}(y) ≀ 0  (m = 1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k)),
                 y ∈ S^q.

Lemma 2. The value of (LRP(S^q)) is less than or equal to the optimal value of the problem (P2) on S^q.

Proof. By the definition of (LRP(S^q)), any y ∈ S^q satisfying the restricted conditions of (P2) satisfies the restricted conditions of (LRP(S^q)), and L_0^{S^q}(y) ≀ ΞΌ_0(y); the definition of (LRP(S^q)) implies the statement naturally.

Lemma 3. Assume that S^{q+1} βŠ‚ S^q βŠ‚ Β·Β·Β· βŠ‚ S^0 βŠ‚ R^{N+P_sum} and β‹‚_{q=0}^{∞} S^q = {y*}. For each m = 0, 1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k) and t = 1, 2, ..., T_m,

lim_{qβ†’βˆž} max_{y∈S^q} |F_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = 0,  lim_{qβ†’βˆž} max_{y∈S^q} |G_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = 0,

where F_{mt}^{S^q} and G_{mt}^{S^q} are regarded as functions of y through Y = βˆ‘_{i=1}^{N+P_sum} Ξ»_{mti} y_i.

Proof. Let \underline{y}^q and \overline{y}^q be the corner points of S^q attaining min{Ξ»_{mti} \underline{y}_i^q, Ξ»_{mti} \overline{y}_i^q} and max{Ξ»_{mti} \underline{y}_i^q, Ξ»_{mti} \overline{y}_i^q} (i = 1, ..., N + P_sum), respectively. Since β‹‚_{q=0}^{∞} S^q = {y*}, the values \underline{y}^q and \overline{y}^q satisfy lim_{qβ†’βˆž} \underline{y}^q = lim_{qβ†’βˆž} \overline{y}^q = y*. Hence

\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q} = βˆ‘_{i=1}^{N+P_sum} |Ξ»_{mti}| (\overline{y}_i^q βˆ’ \underline{y}_i^q) β†’ 0  (q β†’ ∞).

The function |F_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| is concave on [\underline{Y}_{mt}^{S^q}, \overline{Y}_{mt}^{S^q}]; therefore c_{mt}^{S^q} attains the maximum value of |F_{mt}^{S^q}(y) βˆ’ f_{mt}(y)|:

max_{y∈S^q} |F_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = |F_{mt}^{S^q}(c_{mt}^{S^q}) βˆ’ f_{mt}(c_{mt}^{S^q})|
= |[(f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q})] (c_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q}) + f_{mt}(\underline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(c_{mt}^{S^q})|.   (28)

We denote

I_{mt}^{S^q} = \overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q},  c_{mt}^{S^q} = \underline{Y}_{mt}^{S^q} + ΞΈ_{mt}^{S^q} I_{mt}^{S^q}  (0 < ΞΈ_{mt}^{S^q} < 1).   (29)

Since I_{mt}^{S^q} β†’ 0 for q β†’ ∞, suppressing the indices m, t and writing \underline{Y} = \underline{Y}_{mt}^{S^q}, I = I_{mt}^{S^q}, ΞΈ = ΞΈ_{mt}^{S^q}, f = f_{mt},

|[(f(\underline{Y} + I) βˆ’ f(\underline{Y}))/I] (ΞΈI) + f(\underline{Y}) βˆ’ f(\underline{Y} + ΞΈI)|
= |f(\underline{Y} + I) ΞΈ βˆ’ f(\underline{Y}) ΞΈ + f(\underline{Y}) βˆ’ f(\underline{Y} + ΞΈI)|
β†’ |f(\underline{Y}) ΞΈ βˆ’ f(\underline{Y}) ΞΈ + f(\underline{Y}) βˆ’ f(\underline{Y})| = 0  (q β†’ ∞).   (30)

Thus lim_{qβ†’βˆž} max_{y∈S^q} |F_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = 0. On the other hand,

|G_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = |[(f_{mt}(\overline{Y}_{mt}^{S^q}) βˆ’ f_{mt}(\underline{Y}_{mt}^{S^q}))/(\overline{Y}_{mt}^{S^q} βˆ’ \underline{Y}_{mt}^{S^q})] (y βˆ’ c_{mt}^{S^q}) + f_{mt}(c_{mt}^{S^q}) βˆ’ f_{mt}(y)|   (31)

is a convex function by the same argument, so its maximum is attained at an endpoint of the interval, and the same computation gives lim_{qβ†’βˆž} max_{y∈S^q} |G_{mt}^{S^q}(y) βˆ’ f_{mt}(y)| = 0 for q β†’ ∞.

Lemma 4. Under the same assumption as Lemma 3, lim_{qβ†’βˆž} max_{y∈S^q} |L_m^{S^q}(y) βˆ’ ΞΌ_m(y)| = 0 for each m = 0, 1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k).

Proof. Lemma 3 and the definitions of L_m^{S^q}(y) and ΞΌ_m(y) imply Lemma 4 directly.
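The chord F and tangent G of (23)–(27) can be illustrated for f = exp, which is monotone and convex as required. The following Python sketch also checks the shrinking-gap behaviour of Lemma 3; the sampling-based max_gap is our simplification (the exact maximizer is the mean-value point):

```python
import math

# Chord F and tangent G for f = exp on [lo, hi]; for convex f the lower
# bound L of (27) is the tangent G at the mean-value point c where
# f'(c) equals the chord slope, i.e. c = ln(slope) for f = exp.
def chord_and_tangent(lo, hi):
    slope = (math.exp(hi) - math.exp(lo)) / (hi - lo)
    c = math.log(slope)                   # (f')^{-1}(slope) for f = exp
    F = lambda y: slope * (y - lo) + math.exp(lo)   # chord, above f
    G = lambda y: slope * (y - c) + math.exp(c)     # tangent, below f
    return F, G

def max_gap(lo, hi, n=1000):
    F, G = chord_and_tangent(lo, hi)
    ys = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(math.exp(y) - G(y) for y in ys)      # G underestimates f

# Lemma 3 in miniature: the gap vanishes as the interval shrinks.
gaps = [max_gap(0.0, 1.0 / 2**k) for k in range(4)]
print(gaps)   # nonnegative and strictly decreasing towards 0
```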
3. Branch and Bound Algorithm and Its Convergence

In Section 2, we transformed the initial problem (P0) into the equivalent problem (P2), and we constructed the linear relaxation problem (LRP) of (P2) to find an approximate value of (P2) easily. Now we compute it by using a branch and bound algorithm.

3.1. Branch and Bound Algorithm. We solve the linear relaxation problem on the initial domain S^0 to get the linear optimal value as a lower bound of (P2), together with an upper bound of (P2). To prepare for separating the active domains, let Q_q be the set of active domains, with active domains S^{q(k)} βŠ‚ S^0; q denotes the stage number (the number of times domains have been cut) and k denotes the index of an active domain on stage q. If S^{q(k)} is an active domain, we divide S^{q(k)} into two half domains S^{q(k)Β·1}, S^{q(k)Β·2}, linearize (P2) on each domain, and solve the linear problems. After these calculations, we get lower and upper bound values of (P2). Repeating the calculation, the sequences of lower and upper bound values converge, and we get the optimal value and an optimal solution.
3.1.1. Branching Rule. We denote S^{q(k)} = {y | \underline{y}_n^{q(k)} ≀ y_n ≀ \overline{y}_n^{q(k)}, n = 1, ..., N + P_sum} βŠ† S^0. We select the branching variable i such that \overline{y}_i^{q(k)} βˆ’ \underline{y}_i^{q(k)} = max{\overline{y}_n^{q(k)} βˆ’ \underline{y}_n^{q(k)}, n = 1, 2, ..., N + P_sum}, and we divide the interval [\underline{y}_i^{q(k)}, \overline{y}_i^{q(k)}] into the half intervals [\underline{y}_i^{q(k)}, (\underline{y}_i^{q(k)} + \overline{y}_i^{q(k)})/2] and [(\underline{y}_i^{q(k)} + \overline{y}_i^{q(k)})/2, \overline{y}_i^{q(k)}].
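In Python, the branching rule amounts to splitting the widest edge of the current box at its midpoint; a sketch (the box representation as a list of (lower, upper) pairs is our choice):

```python
# Branching rule of Section 3.1.1: split the box along its widest
# coordinate at the midpoint.
def branch(box):
    widths = [hi - lo for lo, hi in box]
    i = widths.index(max(widths))          # branching variable
    lo, hi = box[i]
    mid = (lo + hi) / 2.0
    left = box[:i] + [(lo, mid)] + box[i + 1:]
    right = box[:i] + [(mid, hi)] + box[i + 1:]
    return left, right

left, right = branch([(0.0, 1.0), (0.0, 4.0)])
print(left, right)   # the second coordinate (width 4) is split at 2.0
```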
3.1.2. Algorithm Statement

Step 0. Firstly, we let q be 0 and let k be 1. We set an appropriate Ο΅-value as a convergence tolerance, the initial upper bound V* = ∞, and Q_0 = {S^{0(1)}}. We solve LRP(S^{0(1)}), and we denote the linear optimal solution and optimal value by yΜ‚(S^{0(1)}) and LB_{0(1)}. If yΜ‚(S^{0(1)}) is feasible for (P2), then we update V* = ΞΌ_0(yΜ‚(S^{0(1)})), and we set the initial lower bound LB = LB_{0(1)}. If V* βˆ’ LB ≀ Ο΅, then we get the Ο΅-approximate optimal value ΞΌ_0(yΜ‚(S^{0(1)})) and optimal solution yΜ‚(S^{0(1)}) of (P2), so we stop this algorithm. Otherwise, we proceed to Step 1.
Step 1. For all π‘˜, we divide π‘†π‘ž(π‘˜) to get two half domains, π‘†π‘ž(π‘˜)β‹…1
and π‘†π‘ž(π‘˜)β‹…2 , according to the above branching rule.
Step 2. For all π‘˜ and each domain π‘†π‘ž(π‘˜)β‹…V (V = 1, 2), we
calculate
πœ‡ (V) =
π‘š
Ξ“π‘š
βˆ‘
𝑑=1,π‘π‘šπ‘‘ >0
π‘ž(π‘˜)β‹…V
𝑆
π‘π‘šπ‘‘ exp (π‘Œπ‘šπ‘‘
)+
Ξ“π‘š
βˆ‘
𝑑=1,π‘π‘šπ‘‘ <0
π‘†π‘ž(π‘˜)β‹…V
π‘π‘šπ‘‘ exp (π‘Œπ‘šπ‘‘ )
𝑀
(π‘š = 1, . . . , 𝑀 + 2 (𝑃 + βˆ‘ π‘ƒπ‘˜ )) ,
π‘˜=1
(32)
π‘ž(π‘˜)β‹…V
π‘ž(π‘˜)β‹…V
𝑆
𝑆
where π‘π‘šπ‘‘ , π‘Œπ‘šπ‘‘
, and π‘Œπ‘šπ‘‘
are defined in Section 2.3.
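The test (32) can be sketched in Python for a single restricted function; the coefficient list and the bound lists are hypothetical stand-ins for the b_mt and the interval endpoints of Section 2.3:

```python
import math

# Step 2 feasibility test, sketched for one restricted function
# mu(y) = sum_t b_t * exp(L_t(y)): a cheap lower bound over the current
# box uses exp(Ylo_t) for positive coefficients and exp(Yhi_t) for
# negative ones.
def mu_lower_bound(coeffs, ylo_bounds, yhi_bounds):
    lb = 0.0
    for b, lo, hi in zip(coeffs, ylo_bounds, yhi_bounds):
        lb += b * math.exp(lo) if b > 0 else b * math.exp(hi)
    return lb

# If the lower bound is already positive, the box cannot contain a point
# with mu(y) <= 0 and is pruned as infeasible.
coeffs = [1.0, -2.0]
print(mu_lower_bound(coeffs, [0.0, 0.0], [1.0, 1.0]) <= 0)   # box kept
print(mu_lower_bound(coeffs, [2.0, -3.0], [3.0, -2.0]) > 0)  # box pruned
```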
If \underline{ΞΌ}_m(v) > 0 for some m ∈ {1, 2, ..., M + 2(P + βˆ‘_{k=1}^{M} P_k)}, then S^{q(k)Β·v} is an infeasible domain for (P2), and we delete it from Q_q. If the domains S^{q(k)Β·v} (v = 1, 2) are all deleted for all k, then the problem has no feasible solutions.

Step 3. For the remaining domains, we compute A_{mt}^{S^{q(k)Β·v}}, B_{mt}^{S^{q(k)Β·v}}, \underline{Y}_{mt}^{S^{q(k)Β·v}}, and \overline{Y}_{mt}^{S^{q(k)Β·v}} as defined in Sections 2.2 and 2.3. We solve LRP(S^{q(k)Β·v}) by the simplex algorithm, and we denote the obtained linear optimal solutions and values by yΜ‚(S^{q(k)Β·v}) and LB_{q(k)Β·v}. If yΜ‚(S^{q(k)Β·v}) is feasible for (P2), we update V* = min{V*, ΞΌ_0(yΜ‚(S^{q(k)Β·v}))}. If LB_{q(k)Β·v} > V*, then we delete the corresponding domains from Q_q. If V* βˆ’ LB_{q(k)Β·v} ≀ Ο΅, then we get the Ο΅-approximate optimal value ΞΌ_0(yΜ‚(S^{q(k)Β·v})) and optimal solution yΜ‚(S^{q(k)Β·v}) of (P2), so we stop this algorithm. Otherwise, we proceed to Step 4.

Step 4. We update the index of the remaining domains S^{q(k)Β·v} to S^{q+1(k)}; then we reinitialize k. We settle that Q_{q+1} is the set of the S^{q+1(k)}, and go to Step 1.

3.2. The Convergence of the Algorithm. Corresponding to [4], we obtain the convergence of the algorithm (cf. [4]).

Theorem 5. Suppose that problem (P2) has a global optimal solution, and let ΞΌ_0* be the global optimal value of (P2). Then one has the following:

(i) for the case Ο΅ > 0, the algorithm always terminates after finitely many iterations, yielding a global Ο΅-optimal solution y* and a global Ο΅-optimal value V* for problem (P2) in the sense that

y* ∈ S,  V* βˆ’ Ο΅ ≀ ΞΌ_0*  with V* = ΞΌ_0(y*);   (33)

(ii) for the case Ο΅ β†’ 0, we assume that the sequence Ο΅_n of convergence tolerances satisfies Ο΅_1 > Ο΅_2 > Β·Β·Β· > Ο΅_n > Ο΅_{n+1} > Β·Β·Β· > 0, that is, lim_{nβ†’βˆž} Ο΅_n = 0, and we assume that y_n* is the optimal solution of (P2) corresponding to Ο΅_n. Then every accumulation point of the sequence y_n* is a global optimal solution of (P2).

Proof. (i) It is obvious by the algorithm statement.
(ii) We assume that the upper bound corresponding to Ο΅_n is V_n*:

ΞΌ_0(y_n*) ∈ [V_n* βˆ’ Ο΅_n, V_n*].   (34)

The y_n* form a point sequence on a bounded closed set, so y_n* has a convergent subsequence y_{n_i}*. We assume that lim_{iβ†’βˆž} y_{n_i}* = y*; then

i β†’ ∞ implies n_i β†’ ∞, so lim_{n_iβ†’βˆž} Ο΅_{n_i} = 0.   (35)

V_n* is a monotone decreasing sequence, so it converges. We assume that lim_{nβ†’βˆž} V_n* = ΞΌ_0*:

lim_{iβ†’βˆž} (V_{n_i}* βˆ’ Ο΅_{n_i}) ≀ lim_{iβ†’βˆž} ΞΌ_0(y_{n_i}*) ≀ lim_{iβ†’βˆž} V_{n_i}*.   (36)

ΞΌ_0(y) is a continuous function, so lim_{iβ†’βˆž} ΞΌ_0(y_{n_i}*) = ΞΌ_0(y*). And ΞΌ_0* ≀ ΞΌ_0(y*) ≀ ΞΌ_0*; that is, ΞΌ_0(y*) = ΞΌ_0*. For every m, ΞΌ_m(y_n*) ≀ 0; as ΞΌ_m is continuous, lim_{nβ†’βˆž} ΞΌ_m(y_n*) = ΞΌ_m(y*) ≀ 0, so y* is feasible.

4. Numerical Experiment

In this section, we show the results of numerical experiments for these optimization problems according to the former rules. We coded the algorithms in MATLAB. In these codes, we use MATLAB's built-in function "linprog" to solve the linear optimization problems.

Example 1. Consider

min  h(x) = sin((x_1^2 + 3x_2 βˆ’ 2x_2^2 + 1)/(x_1^2 + x_2 + 2)) + cos((βˆ’x_2^2 + 2x_1 + 2x_2)/(x_1 + 2.5))
s.t. x_1/x_2 βˆ’ 1 ≀ 0,
     x_1^2 βˆ’ x_2^2/(x_1 + 3) + x_2/x_1 βˆ’ 5 ≀ 0,
     X = {x : 1 ≀ x_1 ≀ 3, 1 ≀ x_2 ≀ 3}.   (37)

We set Ο΅ = 0.0001. After the algorithm, we found the global Ο΅-optimal value V* = 1.0748 at the global Ο΅-optimal solution (x_1, x_2)^T = (1.34977, 1.64232).

Example 2. Consider

min  exp((βˆ’x_1^2 + 3x_1 + 2x_2^2 + 3x_2 + 3.5)/(x_1 + 1)) βˆ’ exp((x_1 βˆ’ x_2/2)/(x_1^2 βˆ’ 2x_1 + x_2^2 βˆ’ 8x_2 + 20))
s.t. x_1/x_2 ≀ 1,
     x_1 + x_2 + x_1/x_2 ≀ 6,
     2x_1 + x_2 ≀ 8,
     X = {x : 1 ≀ x_1 ≀ 3, 1 ≀ x_2 ≀ 3}.   (38)

We set Ο΅ = 0.0001. After the algorithm, we found the global Ο΅-optimal value V* = 58.2723 at the global Ο΅-optimal solution (x_1, x_2)^T = (1, 1.6180).

Example 3. Consider

min  sin((x_1^2 + 2x_2 βˆ’ 2x_1 + x_2^2 + 1)/(x_1 + x_2^2 + 2)) + cos((3x_1^2 βˆ’ 3x_2 + 2x_1 + x_2^2 + 5)/(x_1^2 + 2x_2^2 + 10))
s.t. sin((x_1^2 + 3x_2 βˆ’ 2x_2^2 + 2)/(x_1^2 + x_2 + 2)) + cos((βˆ’x_2^2 + 2x_1 + 2x_2)/(x_1 + 2.5)) ≀ 2,
     X = {x : 1 ≀ x_1 ≀ 2, 1 ≀ x_2 ≀ 2}.   (39)

We set Ο΅ = 0.0001. After the algorithm, we found the global Ο΅-optimal value V* = 1.09133 at the global Ο΅-optimal solution (x_1, x_2)^T = (2, 1).

Example 4. Consider

min  exp((x_1^2 x_2 βˆ’ 2x_2^2 + 8)/(2x_1^2 + x_2 + 1)) + exp((3x_1 βˆ’ x_2^2 + 5)/(x_1^2 βˆ’ x_1 + x_2^2 βˆ’ 3x_2 + 10))
s.t. x_1^2 βˆ’ 2x_2 ≀ 1,
     x_1 βˆ’ x_2/x_1 ≀ 1,
     2x_1 + x_2^2 ≀ 6,
     X = {x : 1.5 ≀ x_1 ≀ 2, 1.5 ≀ x_2 ≀ 2}.   (40)

We set Ο΅ = 0.0001. After the algorithm, we found the global Ο΅-optimal value V* = 3.9378 at the global Ο΅-optimal solution (x_1, x_2)^T = (1.5, 1.7321).
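As a closing illustration, the bound–prune–branch loop of Section 3 can be condensed into a toy one-dimensional Python sketch. Here the LP relaxation is replaced by a cheap monotone interval bound (an assumption made to keep the sketch self-contained), but the bound, prune, midpoint split, and Ο΅-stop structure is the same:

```python
import math

# Toy branch and bound in the spirit of Section 3:
# minimize f(y) = exp(y) + exp(-2y) over [-1, 1].  On a box [lo, hi],
# exp(lo) + exp(-2*hi) <= f(y), which plays the role of the LRP bound.
def solve(eps=1e-6):
    f = lambda y: math.exp(y) + math.exp(-2 * y)
    boxes = [(-1.0, 1.0)]
    best_v, best_y = min((f(y), y) for y in (-1.0, 0.0, 1.0))
    while boxes:
        lo, hi = boxes.pop()
        lb = math.exp(lo) + math.exp(-2 * hi)   # lower bound on the box
        if lb > best_v - eps:                   # prune / eps-converged
            continue
        mid = (lo + hi) / 2.0
        if f(mid) < best_v:                     # update the upper bound
            best_v, best_y = f(mid), mid
        boxes += [(lo, mid), (mid, hi)]         # branch into halves
    return best_y, best_v

y, v = solve()
print(y, v)   # y near ln(2)/3, v near the true minimum value
```

The exact minimizer is y = ln(2)/3 with value 2^{1/3} + 2^{-2/3}, so the returned pair can be checked against it.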
5. Concluding Remarks

In this paper, we proved that a branch and bound algorithm can solve nonconvex optimization problems which [4] cannot solve: problems given by sums of functions with positive (or negative) first and second derivatives whose variables are defined by polynomial fractional functions.
Conflict of Interests
The authors declare that there is no conflict of interests
regarding the publication of this paper.
Acknowledgments
The authors would like to thank S. Pei-Ping, H. Yaguchi, and S. Tsuyumine for their helpful suggestions and encouragement.
References

[1] D. Cai and T. G. Nitta, "Limit of the solutions for the finite horizon problems as the optimal solution to the infinite horizon optimization problems," Journal of Difference Equations and Applications, vol. 17, no. 3, pp. 359–373, 2011.
[2] D. Cai and T. G. Nitta, "Optimal solutions to the infinite horizon problems: constructing the optimum as the limit of the solutions for the finite horizon problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e2103–e2108, 2009.
[3] R. Okumura, D. Cai, and T. G. Nitta, "Transversality conditions for infinite horizon optimality: higher order differential problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e1980–e1984, 2009.
[4] S. Pei-Ping and Y. Gui-Xia, "Global optimization for the sum of generalized polynomial fractional functions," Mathematical Methods of Operations Research, vol. 65, no. 3, pp. 445–459, 2007.
[5] H. Jiao, Z. Wang, and Y. Chen, "Global optimization algorithm for sum of generalized polynomial ratios problem," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 187–197, 2013.
[6] Y. J. Wang and K. C. Zhang, "Global optimization of nonlinear sum of ratios problem," Applied Mathematics and Computation, vol. 158, no. 2, pp. 319–330, 2004.