Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2014, Article ID 408918, 8 pages
http://dx.doi.org/10.1155/2014/408918

Research Article

Global Optimization for the Sum of Certain Nonlinear Functions

Mio Horai,1 Hideo Kobayashi,1 and Takashi G. Nitta2

1 Faculty of Engineering, Graduate School of Engineering, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan
2 Department of Education, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan

Correspondence should be addressed to Mio Horai; [email protected]

Received 13 April 2014; Revised 19 August 2014; Accepted 20 August 2014; Published 10 November 2014

Academic Editor: Julio D. Rossi

Copyright © 2014 Mio Horai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We extend the work of Pei-Ping and Gui-Xia (2007) to a global optimization problem for more general functions. Pei-Ping and Gui-Xia treat the optimization problem for the linear sum of polynomial fractional functions, using a branch and bound approach. We prove that this extension makes it possible to solve, by a branch and bound algorithm, nonconvex optimization problems that the method of Pei-Ping and Gui-Xia (2007) cannot solve: problems in which each polynomial fractional term is composed with a monotone function whose first derivative has a constant sign and whose composition with the exponential function has a positive (or negative) second derivative.

1. Introduction

Optimization problems are widely used in the sciences, especially in engineering and economics [1-3]. In 2007, Pei-Ping and Gui-Xia considered the following global optimization problem in [4]:

    (P)  min f(x) = \sum_{j=1}^{p} \frac{n_j(x)}{d_j(x)}
         s.t. g_m(x) \le 0  (m = 1, ..., M),  x \in X.

Pei-Ping and Gui-Xia proposed a method to solve these problems globally by a branch and bound algorithm in [4]. Sum-of-ratios problems like (P) attract a lot of attention because they arise in various economic applications [4]. In the above problem, the objective function and the constraint functions are sums of generalized polynomial fractional functions. We extend these functions to more general ones, as follows:

    (P0)  min w(x) = \sum_{j=1}^{p} h_j\left( \frac{n_j(x)}{d_j(x)} \right)
          s.t. g_m(x) = \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}\left( \frac{n_{m\hat{j}}(x)}{d_{m\hat{j}}(x)} \right) \le 0  (\hat{j} = 1, ..., q_m, m = 1, ..., M),  x \in X,

where X := \{ x \in R^N \mid 0 < \underline{x}_i \le x_i \le \overline{x}_i < \infty  (i = 1, 2, ..., N) \} and n_j(x), d_j(x), n_{m\hat{j}}(x), d_{m\hat{j}}(x) are given generalized polynomials with

    n_j(x) > 0,  d_j(x) > 0,  n_{m\hat{j}}(x) > 0,  d_{m\hat{j}}(x) > 0  (x \in X);    (1)

that is,

    n_j(x) := \sum_{t=1}^{T_j^n} \beta_{jt}^n \prod_{i=1}^{N} x_i^{\gamma_{jti}^n},
    d_j(x) := \sum_{t=1}^{T_j^d} \beta_{jt}^d \prod_{i=1}^{N} x_i^{\gamma_{jti}^d},
    n_{m\hat{j}}(x) := \sum_{t=1}^{T_{m\hat{j}}^n} \beta_{m\hat{j}t}^n \prod_{i=1}^{N} x_i^{\gamma_{m\hat{j}ti}^n},
    d_{m\hat{j}}(x) := \sum_{t=1}^{T_{m\hat{j}}^d} \beta_{m\hat{j}t}^d \prod_{i=1}^{N} x_i^{\gamma_{m\hat{j}ti}^d}
    (j = 1, 2, ..., p,  \hat{j} = 1, 2, ..., q_m,  m = 1, 2, ..., M),    (2)

where T_j^n, T_j^d, T_{m\hat{j}}^n, T_{m\hat{j}}^d are natural numbers, \beta_{jt}^n, \beta_{jt}^d, \beta_{m\hat{j}t}^n, \beta_{m\hat{j}t}^d are nonzero real constants, and \gamma_{jti}^n, \gamma_{jti}^d, \gamma_{m\hat{j}ti}^n, \gamma_{m\hat{j}ti}^d are real constants.

We assume that h_j, h_{m\hat{j}} : R -> R are twice differentiable functions that are either monotone increasing or monotone decreasing. We divide these functions into the monotone increasing and the monotone decreasing ones as follows:

    h_j' > 0  (j = 1, ..., K),    h_j' < 0  (j = K+1, ..., p),
    h_{m\hat{j}}' > 0  (\hat{j} = 1, ..., K_m),    h_{m\hat{j}}' < 0  (\hat{j} = K_m+1, ..., q_m).    (3)

Furthermore, we assume the following conditions on the second derivatives:

    \{h_j \circ \exp\}''(z_j) > 0  or  \{h_j \circ \exp\}''(z_j) < 0,
    \{h_{m\hat{j}} \circ \exp\}''(z_{m\hat{j}}) > 0  or  \{h_{m\hat{j}} \circ \exp\}''(z_{m\hat{j}}) < 0
    (j = 1, ..., p,  \hat{j} = 1, ..., q_m,  m = 1, ..., M);    (4)

that is, each composition with the exponential function is either strictly convex or strictly concave on the relevant range.
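Assumptions (3) and (4) restrict the admissible outer functions h on the range that their argument actually takes, and for a concrete h they can be spot-checked numerically. The following grid-based sketch (the helper check_assumptions is hypothetical, not from the paper) tests monotonicity of h and the convexity or concavity of h o exp on an interval [a, b] with a > 0; for instance, h = sin is increasing, and sin o exp is convex, for arguments in [0.1, 0.5].

```python
import numpy as np

def check_assumptions(h, a, b, n=2001):
    """Check on a grid that h is strictly monotone on [a, b] and that
    h(exp(z)) is strictly convex or concave for z in [log a, log b]
    (assumptions (3) and (4)); requires a, b > 0."""
    z = np.linspace(np.log(a), np.log(b), n)
    v = h(np.exp(z))                        # samples of (h o exp)(z)
    d1 = np.diff(h(np.linspace(a, b, n)))   # first differences of h itself
    d2 = np.diff(v, 2)                      # second differences of h o exp
    mono = "increasing" if (d1 > 0).all() else "decreasing" if (d1 < 0).all() else "neither"
    curv = "convex" if (d2 > 0).all() else "concave" if (d2 < 0).all() else "neither"
    return mono, curv

# sin is increasing on [0.1, 0.5] and sin(exp(z)) is convex there,
# so this prints ('increasing', 'convex'); on wider intervals the
# curvature of sin o exp changes sign and (4) fails.
print(check_assumptions(np.sin, 0.1, 0.5))
```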
To solve the above problem (P0), we transform (P0) into the equivalent problems (P1) and (P2), and we transform (P2) into a linear relaxation problem. We prove the equivalence of the problems under the above assumptions, and we solve the equivalent problem by a branch and bound algorithm corresponding to [4-6]. For example, with this extended approach we can solve the following global optimization problem:

    min sin((x_1^2 + 3x_2 - 2x_2^2 + 1)/(x_1^2 + x_2 + 2)) + cos((-x_2^2 + 2x_1 + 2x_2)/(x_1 + 2.5))
    s.t. x_1/x_2 - 1 <= 0,
         x_1^2 - x_2/x_1 - 5 <= 0,
         X = {x : 1 <= x_1 <= 3, 1 <= x_2 <= 3}.    (5)

In Section 2 we explain how to build the equivalent linear relaxation problem from the original problem. In Section 3 we present the branch and bound algorithm and its convergence. In Section 4 we report the results of numerical experiments.

2. Equivalence Transformation and Linear Relaxation

In this section we first transform problem (P0) into the equivalent problem (P1), secondly transform (P1) into (P2), and thirdly linearize problem (P2), corresponding to [4].

2.1. Translation of Problem (P0) into (P1). For problem (P0) we introduce new variables l_j, u_j, t_{m\hat{j}}, s_{m\hat{j}} and the functions \varphi(l, u) and g_m(s, t), depending on the h_j, h_{m\hat{j}} of the original problem (P0):

    \varphi(l, u) := \sum_{j=1}^{p} h_j(l_j / u_j)    (j = 1, 2, ..., p),
    g_m(s, t) := \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}(t_{m\hat{j}} / s_{m\hat{j}})    (\hat{j} = 1, ..., q_m,  m = 1, ..., M).    (6)

Since n_j(x), d_j(x), n_{m\hat{j}}(x), d_{m\hat{j}}(x) are polynomials on the closed box X, it is easy to calculate their minima and maxima on X; we denote the resulting bounds by \underline{l}_j, \overline{l}_j, \underline{u}_j, \overline{u}_j, \underline{s}_{m\hat{j}}, \overline{s}_{m\hat{j}}, \underline{t}_{m\hat{j}}, \overline{t}_{m\hat{j}}. Let H be the closed interval

    H := \{ (l, u, s, t) \in R^{N_{sum}} \mid \underline{l}_j \le l_j \le \overline{l}_j,  \underline{u}_j \le u_j \le \overline{u}_j,  \underline{s}_{m\hat{j}} \le s_{m\hat{j}} \le \overline{s}_{m\hat{j}},  \underline{t}_{m\hat{j}} \le t_{m\hat{j}} \le \overline{t}_{m\hat{j}} \},    (7)

where N_{sum} = 2(p + \sum_{m=1}^{M} q_m). Let XH be the following closed domain in X x H:

    XH := \{ (x, l, u, s, t) \in X \times H \mid g_m(s, t) \le 0  (m = 1, ..., M),
        n_j(x) - l_j \le 0,  u_j - d_j(x) \le 0  (j = 1, ..., K),
        l_j - n_j(x) \le 0,  d_j(x) - u_j \le 0  (j = K+1, ..., p),
        s_{m\hat{j}} - d_{m\hat{j}}(x) \le 0,  n_{m\hat{j}}(x) - t_{m\hat{j}} \le 0  (\hat{j} = 1, ..., K_m,  m = 1, ..., M),
        d_{m\hat{j}}(x) - s_{m\hat{j}} \le 0,  t_{m\hat{j}} - n_{m\hat{j}}(x) \le 0  (\hat{j} = K_m+1, ..., q_m,  m = 1, ..., M) \}.    (8)

We pose the following problem (P1) on XH:

    (P1)  min \varphi(l, u)
          s.t. g_m(s, t) \le 0  (m = 1, ..., M),  (x, l, u, s, t) \in XH.

Theorem 1 proves the equivalence of (P0) and (P1).

Theorem 1. The problem (P0) on X is equivalent to the problem (P1) on XH.

Proof. Let x* be the optimal solution of (P0); we put

    l_j^* := n_j(x*),  u_j^* := d_j(x*),  s_{m\hat{j}}^* := d_{m\hat{j}}(x*),  t_{m\hat{j}}^* := n_{m\hat{j}}(x*).    (9)

Then

    \varphi(l^*, u^*) = \sum_{j=1}^{p} h_j(l_j^*/u_j^*) = \sum_{j=1}^{p} h_j(n_j(x*)/d_j(x*)) = w(x*),
    g_m(s^*, t^*) = \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}(t_{m\hat{j}}^*/s_{m\hat{j}}^*) = \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}(n_{m\hat{j}}(x*)/d_{m\hat{j}}(x*)) = g_m(x*) \le 0,    (10)

so (x*, l^*, u^*, s^*, t^*) satisfies the conditions defining XH and is feasible for (P1).

Now let (x#, l#, u#, s#, t#) be the optimal solution of (P1). By the restriction conditions we have the following:

    for j = 1, ..., K:  0 < n_j(x#) \le l_j^# and 0 < u_j^# \le d_j(x#); that is, 0 < n_j(x#)/d_j(x#) \le l_j^#/u_j^#;
    for j = K+1, ..., p:  0 < l_j^# \le n_j(x#) and 0 < d_j(x#) \le u_j^#; that is, 0 < l_j^#/u_j^# \le n_j(x#)/d_j(x#);
    for \hat{j} = 1, ..., K_m and m = 1, ..., M:  0 < s_{m\hat{j}}^# \le d_{m\hat{j}}(x#) and 0 < n_{m\hat{j}}(x#) \le t_{m\hat{j}}^#; that is, 0 < n_{m\hat{j}}(x#)/d_{m\hat{j}}(x#) \le t_{m\hat{j}}^#/s_{m\hat{j}}^#;
    for \hat{j} = K_m+1, ..., q_m and m = 1, ..., M:  0 < d_{m\hat{j}}(x#) \le s_{m\hat{j}}^# and 0 < t_{m\hat{j}}^# \le n_{m\hat{j}}(x#); that is, 0 < t_{m\hat{j}}^#/s_{m\hat{j}}^# \le n_{m\hat{j}}(x#)/d_{m\hat{j}}(x#).    (11)

The conditions h_j' > 0 (j = 1, ..., K) or h_j' < 0 (j = K+1, ..., p), and h_{m\hat{j}}' > 0 (\hat{j} = 1, ..., K_m) or h_{m\hat{j}}' < 0 (\hat{j} = K_m+1, ..., q_m), lead to

    h_j(n_j(x#)/d_j(x#)) \le h_j(l_j^#/u_j^#)  (j = 1, ..., p),
    h_{m\hat{j}}(n_{m\hat{j}}(x#)/d_{m\hat{j}}(x#)) \le h_{m\hat{j}}(t_{m\hat{j}}^#/s_{m\hat{j}}^#)  (\hat{j} = 1, ..., q_m,  m = 1, ..., M).    (12)

Therefore we obtain

    g_m(x#) = \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}(n_{m\hat{j}}(x#)/d_{m\hat{j}}(x#)) \le \sum_{\hat{j}=1}^{q_m} h_{m\hat{j}}(t_{m\hat{j}}^#/s_{m\hat{j}}^#) = g_m(s#, t#) \le 0  (m = 1, ..., M);    (13)

that is, x# satisfies the constraints of (P0). Since x* is the optimal solution of (P0), we obtain

    w(x*) = \sum_{j=1}^{p} h_j(n_j(x*)/d_j(x*)) \le \sum_{j=1}^{p} h_j(n_j(x#)/d_j(x#)),    (14)

and, by (12),

    \sum_{j=1}^{p} h_j(n_j(x#)/d_j(x#)) \le \sum_{j=1}^{p} h_j(l_j^#/u_j^#) = \varphi(l#, u#).    (15)

On the other hand, for the optimal solution x* of (P0) we again take

    l_j^* := n_j(x*),  u_j^* := d_j(x*),  s_{m\hat{j}}^* := d_{m\hat{j}}(x*),  t_{m\hat{j}}^* := n_{m\hat{j}}(x*);    (16)

then

    \varphi(l^*, u^*) = w(x*),    g_m(s^*, t^*) = g_m(x*),    (17)

and the element (x*, l^*, u^*, s^*, t^*) satisfies the conditions for XH. Since (x#, l#, u#, s#, t#) is the optimal solution of (P1), it satisfies \varphi(l#, u#) \le \varphi(l^*, u^*) = w(x*). Combining this with (14) and (15) gives \varphi(l#, u#) = w(x*); that is, the two problems are equivalent.
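The bounds \underline{l}_j, \overline{l}_j, ... that define H in (7) need the range of each generalized polynomial over the box X. One simple way to obtain a valid (not necessarily tight) bracket exploits the fact that every monomial is monotone in each log-coordinate, the same coordinatewise min/max trick that reappears in Section 2.3. A sketch with hypothetical data, here the numerator of example (5):

```python
import numpy as np

# Hypothetical data for one generalized polynomial on the box [xl, xu]:
# n(x) = sum_t beta[t] * prod_i x_i**gamma[t, i],
# e.g. n(x) = x1^2 + 3*x2 - 2*x2^2 + 1.
beta  = np.array([1.0, 3.0, -2.0, 1.0])
gamma = np.array([[2.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])
xl = np.array([1.0, 1.0])
xu = np.array([3.0, 3.0])

def box_bounds(beta, gamma, xl, xu):
    """Interval bounds of sum_t beta_t * prod_i x_i^gamma_ti over [xl, xu].
    Each monomial equals exp(sum_i gamma_ti * y_i) with y_i = ln x_i and is
    monotone in every y_i, so its extremes over the box are attained
    coordinatewise; positive coefficients take the low value for the lower
    bound, negative coefficients the high value."""
    yl, yu = np.log(xl), np.log(xu)
    lo_exp = np.exp(np.minimum(gamma * yl, gamma * yu).sum(axis=1))
    hi_exp = np.exp(np.maximum(gamma * yl, gamma * yu).sum(axis=1))
    lower = np.where(beta > 0, beta * lo_exp, beta * hi_exp).sum()
    upper = np.where(beta > 0, beta * hi_exp, beta * lo_exp).sum()
    return lower, upper

print(box_bounds(beta, gamma, xl, xu))   # a valid bracket of n(x) on the box
```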
. . , π, 0 < πππ (π₯ )β€ Μ β― β― β― β― β― ); that is, 0 < π‘ππ /π π ππ Μ and 0 < π‘ππ Μ β€ πππ (π₯ Μ Μ ππ Μ β€ β― β― )/πππ (π₯ ). πππ (π₯ Μ Μ The conditions βπσΈ > 0 (π = 1, . . . , πΎ), or βπσΈ < 0 (π = πΎ + 1, . . . , π) and βσΈ Μ > 0 (π Μ = 1, . . . , πΎπ ), or βσΈ Μ < 0 (π Μ = πΎπ + ππ ππ 1, . . . , ππ ) lead to βπ ( ππ (π₯β― ) ππ (π₯β― ) βππ Μ ( πππ Μ (π₯β― ) πππ Μ (π₯β― ) ππ β― ππ ) )β€ (12) β― ππ π‘ππ Μ Μ π=1 π ππ Μ ) β€ β βππ Μ ( β― ). β― ππ π‘ππ Μ Μ π=1 π ππ Μ ) β€ ββππ Μ ( β― ) β€ 0, (13) ππ (π₯β ) ) β€ βπ ( ππ (π₯β― ) ππ (π₯β― ) ), (14) ππβ π ) = β βπ ( π=1 ππβ ππβ ππ (π₯β ) ππ (π₯β ) β― π ππ π=1 ππ ) β€ β βπ ( β― ), β― π ππ π=1 ππ ) β€ β βπ ( β― (15) ). ππβ := ππ (π₯β ) , ππβ := ππ (π₯β ) , π πβπ Μ := πππ Μ (π₯β ) , π‘πβπ Μ := πππ Μ (π₯β ) ; (16) then π (πβ , πβ ) = π€ (π₯β ) , ππ (π β , π‘β ) = π (π₯β ) . (17) The element (π₯β , πβ , πβ , π β , π‘β ) satisfies the conditions for ππ». Since (π₯β― , πβ― , πβ― , π β― , π‘β― ) is the optimal solution for (π1), it β― β― satisfies βππ=1 βπ (ππ /ππ ) β€ βππ=1 βπ (ππβ /ππβ ). β― β― Hence βππ=1 βπ (ππ /ππ ) = βππ=1 βπ (ππβ /ππβ ); that is, the two problems are equivalent. β― ) β€ βπ ( ), β― For the optimal solution π₯β of (π0), we denote β― for π Μ = 1, . . . , πΎπ and π = 1, . . . , π, 0 < π ππ Μ β€ πππ (π₯ ) Μ β― ππ (π₯β ) π=1 for π = πΎ + 1, . . . , π, 0 < ππ (π₯β― ) β€ ππ and 0 < ππ β€ β― πππ Μ (π₯β― ) ββπ ( β― β― ππ /ππ ; that is, 0 < ππ (π₯ )/ππ (π₯ ) β€ ππβ π for π = 1, . . . , πΎ, 0 < ππ β€ ππ (π₯β― ) and 0 < ππ (π₯β― ) β€ ππ ; β― ππ so Furthermore let (π₯β― , πβ― , πβ― , π β― , π‘β― ) be the optimal solution for (π1). Then by the restricted condition we have the following: β― π=1 that is, π₯β― satisfied constant for (π0). Since the optimal solution for (π0) is π₯β , we obtain (10) π‘πβπ Μ πππ Μ (π₯β― ) Μ π=1 ), ) = β βππ Μ ( πππ Μ (π₯β ) Μ π=1 ππβ πππ Μ (π₯β― ) ππ Now, ππ and then β βπ ( ππ (π₯β― ) β― π ) β€ ββπ ( πππ Μ (π₯β― ) Μ π=1 Proof. Let π₯β be the optimal solution for (π0); we denote ππβ := ππ (π₯β ) , ππ (π₯β― ) (π = 1, . . . , π) , β― π‘ππ Μ βππ Μ ( β― ) π ππ Μ (π Μ = 1, . . . , ππ , π = 1, . . . , π) . (11) 2.2. Translation of the Problem (π1) into (π2). We change the variables by the logarithmic function log. Since π₯π , ππ , ππ , π ππ ,Μ π‘ππ Μ are positive, we can write π₯π , ππ , ππ , π ππ ,Μ π‘ππ Μ as exp(π¦π ) are using new variables π¦π (π = 1, . . . , π + πsum ); that is, π¦π := ln π₯π , π¦π+π := ln ππ , π¦π+π+π := ln ππ , π¦π+2π+(πβ1)ππ +π Μ := ln π ππ ,Μ and π¦π+2π+(π+πβ1)ππ +π Μ := ln π‘ππ .Μ 4 Abstract and Applied Analysis The closed domain ππ» corresponds to the following π0 , where π0 := {π¦ β Rπ+πsum | ln π₯π β€ π¦π β€ ln π₯π , Then the objective function π(π, π) and the restricted functions are changed functions which are changed to π0 (π¦) and ππ (π¦) (π = 1, . . . , π + 2(π + βπ π=1 ππ )). Now we put ππ0 := {π¦ β π0 | ln ππ β€ π¦π+π β€ ln ππ , (18) ln ππ β€ π¦π+π+π β€ ln ππ , π ππ (π¦) β€ 0 (π = 1, 2, . . . , π + 2 (π + β ππ ))} . π=1 ln πππ Μ β€ π¦π+2π+(πβ1)ππ +π Μ β€ ln πππ ,Μ (21) ln πππ Μ β€ π¦π+2π+(π+πβ1)πππ Μ β€ ln πππ }Μ . 
2.3. Linearization of Problem (P2). The objective and restricted functions of (P2) are nonlinear. On a subrectangle Y^k we approximate each \Phi_r(y) from below by a linear function, which transforms (P2) into a linear optimization problem; the solution of the linear problem is a lower bound of the optimal value of (P2) on Y^k. We denote by \underline{y}_i, \overline{y}_i (i = 1, ..., N + N_{sum}) the minima and maxima ln \underline{x}_i, ln \overline{x}_i, ln \underline{l}_j, ln \overline{l}_j, ln \underline{u}_j, ln \overline{u}_j, ln \underline{s}_{m\hat{j}}, ln \overline{s}_{m\hat{j}}, ln \underline{t}_{m\hat{j}}, ln \overline{t}_{m\hat{j}} listed in (18), and we denote a subrectangle Y^k \subseteq Y^0; that is,

    Y^k := \{ y \in R^{N+N_{sum}} \mid \underline{y}_i \le \underline{y}_i^k \le y_i \le \overline{y}_i^k \le \overline{y}_i  (i = 1, ..., N + N_{sum}) \}.

By (19) and (20), each \Phi_r can be written in the form

    \Phi_r(y) = \sum_{t=1}^{T_r} \Psi_{rt} \circ \exp( \sum_{i=1}^{N+N_{sum}} \zeta_{rti} y_i )
    (r = 0, 1, 2, ..., M + 2(p + \sum_{m=1}^{M} q_m),  t = 1, ..., T_r),    (22)

where the \zeta_{rti} are real numbers and each \Psi_{rt} satisfies \Psi_{rt}' > 0 or \Psi_{rt}' < 0, and \{\Psi_{rt} \circ \exp\}'' > 0 or \{\Psi_{rt} \circ \exp\}'' < 0. On Y^k we put

    \underline{X}_{rt}^k := \sum_{i=1}^{N+N_{sum}} \min\{ \zeta_{rti} \underline{y}_i^k, \zeta_{rti} \overline{y}_i^k \},
    \overline{X}_{rt}^k := \sum_{i=1}^{N+N_{sum}} \max\{ \zeta_{rti} \underline{y}_i^k, \zeta_{rti} \overline{y}_i^k \},

so that X_{rt} := \sum_i \zeta_{rti} y_i ranges over [\underline{X}_{rt}^k, \overline{X}_{rt}^k] for y \in Y^k. Let \varphi_{rt}(y) be \Psi_{rt} \circ \exp(y) and let \Phi_r(y) be \sum_{t=1}^{T_r} \varphi_{rt}( \sum_i \zeta_{rti} y_i ). Now \varphi_{rt}' > 0 or \varphi_{rt}' < 0, and \varphi_{rt}'' > 0 or \varphi_{rt}'' < 0, so \varphi_{rt} is a monotonic convex or concave function on [\underline{X}_{rt}^k, \overline{X}_{rt}^k], and there exist upper and lower bounding linear functions F_{rt}^k and G_{rt}^k of \varphi_{rt} on this interval.

We denote the chord

    F_{rt}^k(X) := \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k} (X - \underline{X}_{rt}^k) + \varphi_{rt}(\underline{X}_{rt}^k).    (23)

As \varphi_{rt} is continuous on [\underline{X}_{rt}^k, \overline{X}_{rt}^k] and differentiable on (\underline{X}_{rt}^k, \overline{X}_{rt}^k), by the mean value theorem there exists \xi_{rt}^k \in (\underline{X}_{rt}^k, \overline{X}_{rt}^k) such that

    \varphi_{rt}'(\xi_{rt}^k) = \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k}.    (24)

Since \varphi_{rt}''(y) > 0 or \varphi_{rt}''(y) < 0, the derivative \varphi_{rt}' is monotonic on [\underline{X}_{rt}^k, \overline{X}_{rt}^k], so it has an inverse function (\varphi_{rt}')^{-1}. Hence \xi_{rt}^k is uniquely determined:

    \xi_{rt}^k = (\varphi_{rt}')^{-1}\left( \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k} \right),    (25)

and we define the parallel tangent

    G_{rt}^k(X) := \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k} (X - \xi_{rt}^k) + \varphi_{rt}(\xi_{rt}^k)    (26)

and the lower bounding function

    L_{rt}^k(X) := G_{rt}^k(X)  if \varphi_{rt}''(y) > 0;    L_{rt}^k(X) := F_{rt}^k(X)  if \varphi_{rt}''(y) < 0.    (27)

By the definition, \varphi_{rt}(X) \ge L_{rt}^k(X). For all y \in Y^k, \Phi_r(y) := \sum_{t=1}^{T_r} \varphi_{rt}(X_{rt}) \ge \sum_{t=1}^{T_r} L_{rt}^k(X_{rt}). Let L_r^k(y) := \sum_{t=1}^{T_r} L_{rt}^k( \sum_i \zeta_{rti} y_i ) (0 \le r \le M + 2(p + \sum_{m=1}^{M} q_m)). Then L_0^k(y) is a linear function lying below \Phi_0(y), and hence below its convex envelope, on the rectangle. (LRP(Y^k)) is the linear relaxation problem of (P2) obtained from the lower bounding functions of the \Phi_r(y):

    (LRP(Y^k))  min L_0^k(y)
                s.t. L_r^k(y) \le 0  (r = 1, 2, ..., M + 2(p + \sum_{m=1}^{M} q_m)),  y \in Y^k.

Lemma 2. The optimal value of (LRP(Y^k)) is less than or equal to the optimal value of the problem (P2) on Y^k.

Proof. The definition of (LRP(Y^k)) implies the statement directly: since L_r^k(y) \le \Phi_r(y), any y in Y^k satisfying the restricted conditions of (P2) satisfies the restricted conditions of (LRP(Y^k)), and L_0^k(y) \le \Phi_0(y).
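The construction (23)-(27) is easiest to see in the simplest concrete case \varphi_{rt} = exp (that is, \Psi_{rt} the identity), where the mean value point (25) is available in closed form as (\varphi')^{-1}(slope) = ln(slope). A minimal sketch:

```python
import numpy as np

def secant_and_tangent(a, b):
    """For the convex function phi(X) = exp(X) on [a, b], return the chord
    F of (23) and the parallel tangent G of (26); xi is the unique point of
    (24)-(25), here log(slope) because phi' = exp."""
    slope = (np.exp(b) - np.exp(a)) / (b - a)
    xi = np.log(slope)                            # mean value point, a < xi < b
    F = lambda X: slope * (X - a) + np.exp(a)     # upper bound (chord)
    G = lambda X: slope * (X - xi) + np.exp(xi)   # lower bound (tangent)
    return F, G, xi

F, G, xi = secant_and_tangent(0.0, 1.0)
for x in np.linspace(0.0, 1.0, 5):
    # G(X) <= exp(X) <= F(X) holds on the whole interval
    print(f"{x:.2f}  G={G(x):.4f}  exp={np.exp(x):.4f}  F={F(x):.4f}")
```

For a convex \varphi_{rt} the tangent G is the lower bound used in (27); for a concave one the roles of chord and tangent are exchanged.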
Lemma 3. Assume that Y^{k+1} \subseteq Y^k \subseteq ... \subseteq Y^0 \subseteq R^{N+N_{sum}} and \bigcap_{k=0}^{\infty} Y^k = \{y^*\}. For each r = 0, 1, 2, ..., M + 2(p + \sum_{m=1}^{M} q_m) and t = 1, 2, ..., T_r,

    \lim_{k \to \infty} \max_{y \in Y^k} |F_{rt}^k(y) - \varphi_{rt}(y)| = 0    and    \lim_{k \to \infty} \max_{y \in Y^k} |G_{rt}^k(y) - \varphi_{rt}(y)| = 0.

Proof. Since \bigcap_{k=0}^{\infty} Y^k = \{y^*\}, the corner values satisfy \lim_{k \to \infty} \underline{y}_i^k = \lim_{k \to \infty} \overline{y}_i^k = y_i^* for every i. Hence

    \overline{X}_{rt}^k - \underline{X}_{rt}^k = \sum_{i=1}^{N+N_{sum}} |\zeta_{rti}| (\overline{y}_i^k - \underline{y}_i^k) \to 0    (k \to \infty).

The function |F_{rt}^k(X) - \varphi_{rt}(X)| is concave on [\underline{X}_{rt}^k, \overline{X}_{rt}^k]; therefore it attains its maximum at some \eta_{rt}^k \in [\underline{X}_{rt}^k, \overline{X}_{rt}^k]:

    \max_{y \in Y^k} |F_{rt}^k(y) - \varphi_{rt}(y)| = |F_{rt}^k(\eta_{rt}^k) - \varphi_{rt}(\eta_{rt}^k)|
    = \left| \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k} (\eta_{rt}^k - \underline{X}_{rt}^k) + \varphi_{rt}(\underline{X}_{rt}^k) - \varphi_{rt}(\eta_{rt}^k) \right|.    (28)

We denote

    \alpha_{rt}^k := \overline{X}_{rt}^k - \underline{X}_{rt}^k,    \eta_{rt}^k = \underline{X}_{rt}^k + \theta_{rt}^k \alpha_{rt}^k    (0 < \theta_{rt}^k < 1).    (29)

Substituting (29) into (28) gives

    \max_{y \in Y^k} |F_{rt}^k(y) - \varphi_{rt}(y)| = \left| ( \varphi_{rt}(\underline{X}_{rt}^k + \alpha_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k) ) \theta_{rt}^k + \varphi_{rt}(\underline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k + \theta_{rt}^k \alpha_{rt}^k) \right|.    (30)

Since \alpha_{rt}^k \to 0 for k \to \infty, 0 < \theta_{rt}^k < 1, and \varphi_{rt} is continuous, both differences on the right-hand side tend to 0; hence the maximum tends to 0.

Similarly, |G_{rt}^k(X) - \varphi_{rt}(X)| is a convex function on [\underline{X}_{rt}^k, \overline{X}_{rt}^k] by the same argument, so it attains its maximum at an endpoint, for instance

    |G_{rt}^k(\underline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)| = \left| \frac{\varphi_{rt}(\overline{X}_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k)}{\overline{X}_{rt}^k - \underline{X}_{rt}^k} (\underline{X}_{rt}^k - \xi_{rt}^k) + \varphi_{rt}(\xi_{rt}^k) - \varphi_{rt}(\underline{X}_{rt}^k) \right|;    (31)

since \xi_{rt}^k \in (\underline{X}_{rt}^k, \overline{X}_{rt}^k) and \alpha_{rt}^k \to 0, the continuity of \varphi_{rt} again gives \max_{y \in Y^k} |G_{rt}^k(y) - \varphi_{rt}(y)| \to 0 for k \to \infty.

Lemma 4. Under the same assumption as Lemma 3, \max_{y \in Y^k} |L_r^k(y) - \Phi_r(y)| \to 0 for k \to \infty and each r = 0, 1, 2, ..., M + 2(p + \sum_{m=1}^{M} q_m).

Proof. Lemma 3 and the definitions of L_r^k(y) and \Phi_r(y) imply Lemma 4 in the standard way, by summing the termwise bounds over t = 1, ..., T_r.
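Lemma 3 can also be observed numerically: as the interval shrinks to a point, the maximal gaps between \varphi = exp and its chord F and parallel tangent G both tend to 0. A small demonstration, halving an interval around a fixed point:

```python
import numpy as np

def max_gaps(a, b, n=1001):
    """Maximum gaps between exp and its chord F / parallel tangent G on [a, b]."""
    slope = (np.exp(b) - np.exp(a)) / (b - a)
    xi = np.log(slope)
    X = np.linspace(a, b, n)
    F = slope * (X - a) + np.exp(a)
    G = slope * (X - xi) + np.exp(xi)
    return (F - np.exp(X)).max(), (np.exp(X) - G).max()

# Halving the interval around y* = 0.5 drives both gaps to 0 (Lemma 3).
a, b = 0.0, 1.0
for k in range(6):
    print(k, max_gaps(a, b))
    a, b = 0.5 - (0.5 - a) / 2, 0.5 + (b - 0.5) / 2
```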
3. Branch and Bound Algorithm and Its Convergence

In Section 2 we transformed the initial problem (P0) into the equivalent problem (P2), and we built the linear relaxation problem (LRP) of (P2) so that an approximate value of (P2) can be found easily. Now we obtain it by a branch and bound algorithm.

3.1. Branch and Bound Algorithm. We solve the linear relaxation problem on the initial domain Y^0 to get the linear optimal value as a lower bound of (P2), together with an upper bound of (P2). To prepare for splitting the active domains, let Q_k be the set of active domains and Y^{k(w)} \subseteq Y^0 an active domain; k denotes the number of domain cuts (the stage number) and w indexes the active domains at stage k. If Y^{k(w)} is an active domain, we divide Y^{k(w)} into the half domains Y^{k(w),1}, Y^{k(w),2}, linearize (P2) on each domain, and solve the linear problems. After these calculations we get lower and upper bound values of (P2). Repeating the calculations, the sequences of lower and upper bound values converge, and we get the optimal value and solution.

3.1.1. Branching Rule. We write Y^{k(w)} = \{ y \mid \underline{y}_i^{k(w)} \le y_i \le \overline{y}_i^{k(w)},  i = 1, ..., N + N_{sum} \} \subseteq Y^0. We select the branching variable \rho such that

    \overline{y}_\rho^{k(w)} - \underline{y}_\rho^{k(w)} = \max\{ \overline{y}_i^{k(w)} - \underline{y}_i^{k(w)},  i = 1, 2, ..., N + N_{sum} \},

and we divide the interval [\underline{y}_\rho^{k(w)}, \overline{y}_\rho^{k(w)}] into the half intervals [\underline{y}_\rho^{k(w)}, (\underline{y}_\rho^{k(w)} + \overline{y}_\rho^{k(w)})/2] and [(\underline{y}_\rho^{k(w)} + \overline{y}_\rho^{k(w)})/2, \overline{y}_\rho^{k(w)}].
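The branching rule in code: bisect the current box along a coordinate of maximal width. A minimal sketch, with NumPy arrays lower, upper holding \underline{y}^{k(w)}, \overline{y}^{k(w)}:

```python
import numpy as np

def branch(lower, upper):
    """Branching rule of Section 3.1.1: bisect the box along the
    coordinate rho with the largest width upper[rho] - lower[rho]."""
    rho = int(np.argmax(upper - lower))
    mid = 0.5 * (lower[rho] + upper[rho])
    lo1, up1 = lower.copy(), upper.copy()
    lo2, up2 = lower.copy(), upper.copy()
    up1[rho] = mid          # first half domain  Y^{k(w),1}
    lo2[rho] = mid          # second half domain Y^{k(w),2}
    return (lo1, up1), (lo2, up2)

box1, box2 = branch(np.array([0.0, 0.0, -1.0]), np.array([1.0, 2.0, 1.0]))
print(box1, box2)   # coordinates 2 and 3 both have width 2; argmax picks the first
```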
3.1.2. Algorithm Statement

Step 0. Set k = 0 and w = 1. Choose an appropriate \epsilon-value as a convergence tolerance, set the initial upper bound f^* = \infty, and set Q_0 = \{Y^{0(1)}\}. We solve LRP(Y^{0(1)}), and we denote the linear optimal solution and optimal value by \hat{y}(Y^{0(1)}) and LB_{0(1)}. If \hat{y}(Y^{0(1)}) is feasible for (P2), then update f^* = \Phi_0(\hat{y}(Y^{0(1)})); we set the initial lower bound LB = LB_{0(1)}. If f^* - LB \le \epsilon, then we get the \epsilon-approximate optimal value \Phi_0(\hat{y}(Y^{0(1)})) and optimal solution \hat{y}(Y^{0(1)}) of (P2), and we stop the algorithm. Otherwise, we proceed to Step 1.

Step 1. For each w, we divide Y^{k(w)} into the two half domains Y^{k(w),1} and Y^{k(w),2} according to the above branching rule.

Step 2. For each w and each domain Y^{k(w),v} (v = 1, 2), we calculate

    RL_r(v) = \sum_{t=1, \beta_{rt}>0}^{T_r} \beta_{rt} \exp(\underline{X}_{rt}^{k(w),v}) + \sum_{t=1, \beta_{rt}<0}^{T_r} \beta_{rt} \exp(\overline{X}_{rt}^{k(w),v})
    (r = 1, ..., M + 2(p + \sum_{m=1}^{M} q_m)),    (32)

where \underline{X}_{rt}^{k(w),v} and \overline{X}_{rt}^{k(w),v} are defined in Section 2.3 and the \beta_{rt} are the coefficients of the exponential representation of \Phi_r; RL_r(v) is a lower bound of \Phi_r on Y^{k(w),v}. If RL_r(v) > 0 for some r \in \{1, 2, ..., M + 2(p + \sum_{m=1}^{M} q_m)\}, then Y^{k(w),v} is an infeasible domain for (P2), and we delete the domain from Q_k. If all domains Y^{k(w),v} (v = 1, 2) are deleted for all w, then the problem has no feasible solutions.

Step 3. For the remaining domains, we compute the quantities \underline{X}_{rt}^{k(w),v}, \overline{X}_{rt}^{k(w),v}, \xi_{rt}^{k(w),v} and the lower bounding linear functions L_r^{k(w),v} as defined in Sections 2.2 and 2.3. We solve LRP(Y^{k(w),v}) by the simplex algorithm, and we denote the obtained linear optimal solutions and values by (\hat{y}(Y^{k(w),v}), LB_{k(w),v}). If \hat{y}(Y^{k(w),v}) is feasible for (P2), we update f^* = \min\{f^*, \Phi_0(\hat{y}(Y^{k(w),v}))\}. If LB_{k(w),v} > f^*, then we delete the corresponding domains from Q_k. If f^* - LB_{k(w),v} \le \epsilon, then we get the \epsilon-approximate optimal value \Phi_0(\hat{y}(Y^{k(w),v})) and optimal solution \hat{y}(Y^{k(w),v}) of (P2), and we stop the algorithm. Otherwise, we proceed to Step 4.

Step 4. We update the indices of the remaining domains Y^{k(w),v} to Y^{k+1(w)} and reinitialize w; we let Q_{k+1} be the set of the domains Y^{k+1(w)}, and we go to Step 1.

3.2. The Convergence of the Algorithm. Corresponding to [4], we obtain the convergence of the algorithm (cf. [4]).

Theorem 5. Suppose that problem (P2) has a global optimal solution, and let \Phi_0^* be the global optimal value of (P2). Then one has the following:

(i) For the case \epsilon > 0, the algorithm always terminates after finitely many iterations, yielding a global \epsilon-optimal solution y^* and a global \epsilon-optimal value f^* for problem (P2) in the sense that

    y^* \in Y_F^0,    f^* - \epsilon \le \Phi_0^* \le f^*,    with f^* = \Phi_0(y^*).    (33)

(ii) For the case \epsilon \to 0, we take a sequence of convergence tolerances \epsilon_n such that \epsilon_1 > \epsilon_2 > ... > \epsilon_n > \epsilon_{n+1} > ... > 0, that is, \lim_{n \to \infty} \epsilon_n = 0, and we let y_n^* be the \epsilon_n-optimal solution of (P2) corresponding to \epsilon_n. Then every accumulation point of \{y_n^*\} is a global optimal solution of (P2).

Proof. (i) It is obvious from the algorithm statement.

(ii) Let f_n^* be the upper bound corresponding to \epsilon_n; by (33),

    f_n^* - \epsilon_n \le \Phi_0(y_n^*) \le f_n^*,    (34)

and as n \to \infty we have \epsilon_n \to 0.    (35)

The sequence f_n^* is monotone decreasing, so it converges; we put \lim_{n \to \infty} f_n^* = f_0^*. The sequence y_n^* lies in a bounded closed set, so it has a convergent subsequence y_{n_i}^*; we assume \lim_{i \to \infty} y_{n_i}^* = \bar{y}. Then

    \lim_{i \to \infty} (f_{n_i}^* - \epsilon_{n_i}) \le \lim_{i \to \infty} \Phi_0(y_{n_i}^*) \le \lim_{i \to \infty} f_{n_i}^*.    (36)

\Phi_0(y) is a continuous function, so \lim_{i \to \infty} \Phi_0(y_{n_i}^*) = \Phi_0(\bar{y}) = f_0^*. Moreover f_n^* - \epsilon_n \le \Phi_0^* \le f_n^* for every n; that is, \Phi_0(\bar{y}) = \Phi_0^*. For every r, \Phi_r(y_n^*) \le 0; as \Phi_r is continuous, \lim_{i \to \infty} \Phi_r(y_{n_i}^*) = \Phi_r(\bar{y}) \le 0. Hence \bar{y} is feasible for (P2) and attains the global optimal value; that is, every accumulation point of \{y_n^*\} is a global optimal solution of (P2).
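Steps 0-4 follow the generic branch and bound pattern: bound, test feasibility, prune, branch. The following schematic Python skeleton illustrates that pattern only; lower_bound, evaluate, and feasible are hypothetical user-supplied oracles, and the relaxation here is not the paper's LRP, which requires the linear program of Section 2.3:

```python
import numpy as np

def branch_and_bound(lower_bound, evaluate, feasible, lo, up, eps=1e-4, max_iter=10000):
    """Schematic version of Steps 0-4.  lower_bound(lo, up) -> (LB, y_hat)
    must return a valid lower bound of the objective on the box together
    with the relaxation's minimizer; evaluate(y) is the true objective and
    feasible(y) the constraint check."""
    f_star, y_star = np.inf, None
    active = [(lo, up)]                              # Step 0: Q_0
    for _ in range(max_iter):
        if not active:
            break
        bounds = [lower_bound(l, u) for (l, u) in active]
        for (LB, y_hat) in bounds:                   # Step 3: update upper bound
            if feasible(y_hat) and evaluate(y_hat) < f_star:
                f_star, y_star = evaluate(y_hat), y_hat
        LB_min = min(LB for (LB, _) in bounds)
        if f_star - LB_min <= eps:                   # epsilon-termination test
            break
        new_active = []
        for (l, u), (LB, _) in zip(active, bounds):
            if LB > f_star:                          # Step 3: prune by bound
                continue
            rho = int(np.argmax(u - l))              # Step 1: branching rule
            mid = 0.5 * (l[rho] + u[rho])
            u1, l2 = u.copy(), l.copy()
            u1[rho], l2[rho] = mid, mid
            new_active += [(l, u1), (l2, u)]         # Step 2/4: new active set
        active = new_active                          # Q_{k+1}
    return f_star, y_star

# Toy usage: minimize (y1-0.3)^2 + (y2-0.7)^2 on [0,1]^2 with an exact box bound.
obj = lambda y: float(((y - np.array([0.3, 0.7]))**2).sum())
def lb(l, u):
    y = np.clip(np.array([0.3, 0.7]), l, u)         # box minimizer of this model
    return obj(y), y

print(branch_and_bound(lb, obj, lambda y: True, np.zeros(2), np.ones(2)))
```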
4. Numerical Experiment

In this section we show numerical experiments on optimization problems of the above class, following the preceding rules. The algorithms are coded in MATLAB; the codes use MATLAB's built-in function "linprog" to solve the linear optimization subproblems.

Example 1. Consider

    min h(x) = sin((x_1^2 + 3x_2 - 2x_2^2 + 1)/(x_1^2 + x_2 + 2)) + cos((-x_2^2 + 2x_1 + 2x_2)/(x_1 + 2.5))
    s.t. x_1/x_2 - 1 <= 0,
         x_1^2 - x_2/x_1 - 5 <= 0,
         X = {x : 1 <= x_1 <= 3, 1 <= x_2 <= 3}.    (37)

We set \epsilon = 0.0001. The algorithm found the global \epsilon-optimal value f^* = 1.0748 at the global \epsilon-optimal solution (x_1, x_2)^T = (1.34977, 1.64232).

Example 2. Consider

    min exp((-x_1^2 + 3x_1 + 2x_2^2 + 3x_2 + 3.5)/(x_1 + 1)) - exp((x_1 - 2x_2)/(x_1^2 - 2x_1 + x_2^2 - 8x_2 + 20))
    s.t. x_1/x_2 <= 1,
         x_1 + x_2 + x_1/x_2 <= 6,
         2x_1 + x_2 <= 8,
         X = {x : 1 <= x_1 <= 3, 1 <= x_2 <= 3}.    (38)

We set \epsilon = 0.0001. The algorithm found the global \epsilon-optimal value f^* = 58.2723 at the global \epsilon-optimal solution (x_1, x_2)^T = (1, 1.6180).

Example 3. Consider

    min sin((x_1^2 + 2x_2 - 2x_1 + x_2^2 + 1)/(x_1 + x_2^2 + 2)) + cos((3x_1^2 - 3x_2 + 2x_1 + x_2^2 + 5)/(x_1^2 + 2x_2^2 + 10))
    s.t. sin((x_1^2 + 3x_2 - 2x_2^2 + 2)/(x_1^2 + x_2 + 2)) + cos((-x_2^2 + 2x_1 + 2x_2)/(x_1 + 2.5)) <= 2,
         X = {x : 1 <= x_1 <= 2, 1 <= x_2 <= 2}.    (39)

We set \epsilon = 0.0001. The algorithm found the global \epsilon-optimal value f^* = 1.09133 at the global \epsilon-optimal solution (x_1, x_2)^T = (2, 1).

Example 4. Consider

    min exp((x_1^2 - 2x_2^2 + 8)/(2x_1^2 + x_2 + 1)) + exp((3x_1 - x_2^2 + 5)/(x_1^2 - x_1 + x_2^2 - 3x_2 + 10))
    s.t. x_1^2 - 2x_2 <= 1,
         x_1/x_2 <= 1,
         2x_1 + x_2^2 <= 6,
         X = {x : 1.5 <= x_1 <= 2, 1.5 <= x_2 <= 2}.    (40)

We set \epsilon = 0.0001. The algorithm found the global \epsilon-optimal value f^* = 3.9378 at the global \epsilon-optimal solution (x_1, x_2)^T = (1.5, 1.7321).

5. Concluding Remarks

In this paper, we proved that a branch and bound algorithm can solve nonconvex optimization problems that the method of [4] cannot solve: problems in which each polynomial fractional term is composed with a monotone function whose first derivative has a constant sign and whose composition with the exponential function has a positive (or negative) second derivative.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank S. Pei-Ping, H. Yaguchi, and S. Tsuyumine for their good suggestions and encouragement.

References

[1] D. Cai and T. G. Nitta, "Limit of the solutions for the finite horizon problems as the optimal solution to the infinite horizon optimization problems," Journal of Difference Equations and Applications, vol. 17, no. 3, pp. 359-373, 2011.
[2] D. Cai and T. G. Nitta, "Optimal solutions to the infinite horizon problems: constructing the optimum as the limit of the solutions for the finite horizon problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e2103-e2108, 2009.
[3] R. Okumura, D. Cai, and T. G. Nitta, "Transversality conditions for infinite horizon optimality: higher order differential problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e1980-e1984, 2009.
[4] S. Pei-Ping and Y. Gui-Xia, "Global optimization for the sum of generalized polynomial fractional functions," Mathematical Methods of Operations Research, vol. 65, no. 3, pp. 445-459, 2007.
[5] H. Jiao, Z. Wang, and Y. Chen, "Global optimization algorithm for sum of generalized polynomial ratios problem," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 187-197, 2013.
[6] Y. J. Wang and K. C. Zhang, "Global optimization of nonlinear sum of ratios problem," Applied Mathematics and Computation, vol. 158, no. 2, pp. 319-330, 2004.