Computers & Operations Research 29 (2002) 1537-1558

Hierarchical generation of Pareto optimal solutions in large-scale multiobjective systems

R. Caballero*, T. Gómez, M. Luque, F. Miguel, F. Ruiz

Department of Applied Economics (Mathematics), University of Málaga, Campus El Ejido s/n, 29071 Málaga, Spain

Received 1 March 2000; received in revised form 1 December 2000

Abstract

In this paper, the problem of determining Pareto optimal solutions for certain large-scale systems with multiple conflicting objectives is considered. A two-level hierarchical method is proposed, where the global problem is decomposed into smaller multiobjective problems (lower level), which are coordinated by an upper level that takes into account the relative importance assigned to each subsystem. The scheme developed is iterative, so that a continuous information exchange is carried out between both levels in order to obtain efficient solutions for the initial global problem. The practical implementation of the scheme demonstrates its efficiency in terms of processing time.

Scope and Purpose

Many problems can arise when attempting to model and solve real problems using mathematical techniques. Among them, two issues must be pointed out. First, decisions are usually taken according to several mutually conflicting criteria, rather than as the result of the optimization of a single objective. This fact has been addressed by Multiple Criteria Decision Analysis in its many aspects (see, for example, Ignizio, Goal Programming and Extensions, Lexington Books, Massachusetts, 1976, or Steuer, Multiple Criteria Optimization: Theory, Computation and Application, Wiley, New York, 1986, for an overview of the problems and techniques). Second, real problems are usually very large and complex, in the sense that many variables and constraints are involved, and complex relations hold among them.
In particular, many companies have a hierarchical structure with different decision levels. Such models have been studied in the literature (see Singh and Titli, Systems: Decomposition, Optimization and Control, Pergamon Press, New York, 1978). This paper follows the line of others, like Haimes et al. (Hierarchical Multiobjective Analysis of Large-Scale Systems, Hemisphere, New York, 1990), where both aspects are combined. Namely, an algorithm is designed to generate non-dominated solutions for a hierarchical multiple objective model. © 2002 Elsevier Science Ltd. All rights reserved.

* Corresponding author. Tel.: +34-5-2131168; fax: +34-5-2132061. E-mail address: r_[email protected] (R. Caballero).

Keywords: Multiobjective programming; Non-dominated solutions; Hierarchical generating methods; Feasible decomposition

1. Introduction

Many real problems are characterized by a great number of interdependent components that have to share scarce resources, and whose behavior is determined by a set of objectives which are usually in conflict with one another. Consequently, such complex models must be treated using suitable procedures, in order to reasonably reduce the computational effort they involve. One of the methodologies that has been developed to deal with such complex models is the hierarchical scheme, motivated by the hierarchical internal structure of many real organizations, which consists in decomposing the original problem into interdependent subproblems through the introduction of auxiliary parameters known as coordination variables. The smaller subproblems obtained are solved separately and, through an operative procedure, all these solutions are coordinated in such a way that an optimal solution for the global problem is achieved.
For this reason, these methods are usually known as decomposition-coordination schemes, and they involve a conceptual representation of a complex system consisting, on the one hand, of independent subunits with their own objectives and, on the other hand, of a superior unit which harmonizes the behavior of the subordinate subsystems. Although there are many ways to transform a constrained optimization problem into a multilevel problem, most of them are, in practice, combinations of two different schemes called model and goal coordination, or the feasible and nonfeasible models. In the first case, the modifications carried out in order to decompose the problem only affect the constraints of the model, while in the second case such modifications are made to the objective functions of the subsystems. The terms feasible and nonfeasible are due to the fact that, in the first scheme, all the intermediate values of the endogenous variables are feasible, while in the second scheme only the values reached at the end of the procedure are feasible. It is also important to point out another research line in multilevel programming, applied to systems where an upper level implicitly determines the feasible region of the subordinate systems (see Lai [1] and Shih et al. [2]). This scheme, which is different from the one followed in this work, is a multilevel extension of Stackelberg games. Initially, our scheme was developed in a single objective environment, but this assumption is not very realistic, as the most typical situation is the existence of multiple conflicting objectives. The first studies that integrated hierarchical optimization and multiple objective programming appeared in the 1970s, although the greatest development took place in the 1980s.
Thus, it is important to draw attention to a systematic review of the theoretical foundations and methodological schemes for the analysis of hierarchical multiple objective systems, due to Haimes and Li [3], which highlighted the importance of this approach for complex systems. Later on, in 1992, Lieberman [4] presented a detailed perspective of hierarchical multiobjective analysis, and mentioned the wide variety of fields where such schemes have been applied: resource assignment, regional planning, environmental problems, etc. Nevertheless, there are just a few papers where generating multiobjective algorithms for hierarchical systems are developed. Such schemes are particularly useful when the decision makers have a limited view of the reality modeled, and they constitute the main aim of this paper. In this line, Tarvainen [5] developed a noniterative method for systems with few interconnection variables. With this method, for a given set of values of the interconnection variables, some efficient solutions for each subsystem are obtained, among which the global solutions are chosen as the ones for which a set of strictly positive global weights can be found through a linear system which represents the coordination condition. Later on, Li and Haimes [6] proposed a feasible method where the coordination is carried out via a complex envelopment approach. Nevertheless, we think that the two previously mentioned approaches may not be useful in many practical cases. In the first one, a set of values for the interconnection variables must be fixed beforehand, and efficient solutions are searched for among these values where, in fact, they may not exist. The second scheme is difficult to implement from a computational perspective, because an explicit parametric expression for the set of efficient solutions of the subsystems is required.
This fact brings up the need to develop algorithms which allow the generation of efficient solutions in an effective way, with a suitable computational implementation for application to real problems. Thus, in the present paper, a generating algorithm for hierarchical multiobjective systems is developed, which allows us to determine an approximation of the set of efficient solutions of the global problem. This is done via an iterative procedure, where a continuous information exchange takes place between the upper unit (coordinator) and the subsystems, or lower units, until a representative set of efficient solutions for a given family of weights is obtained. Besides this, two different sets of weights are considered: one for the upper unit, relative to each subsystem, and the other for the subsystems, relative to their objectives. Finally, the implementation of this scheme shows its effectiveness in terms of computing time.

The structure of the paper is as follows. In Section 2, the mathematical formulation of the multiobjective problem corresponding to a system formed by N interconnected subsystems is described. In Section 3, some theoretical foundations are developed in order to carry out a decomposition of the global problem, based on its physical structure, into N subproblems of smaller dimensions, and a hierarchical generating method is developed in order to determine, under convexity conditions and if a constraint qualification holds, the set of properly efficient solutions of the problem under study. In Section 4, the proposed algorithm is presented, a numerical example is worked out, and some computational results obtained with its implementation are discussed. Finally, some conclusions can be found in Section 5.

2. Mathematical statement of the problem

Let us consider a system formed by N (N ≥ 2) interconnected subsystems with multiple objectives in each of them. The general structure of such complex systems is described in Fig. 1.
Let us use the following notation for subsystem i = 1, 2, …, N: y_i is its output vector, with dimension n_{y_i}; x_i is its input vector, which comes from other subsystems, with dimension n_{x_i}; and m_i is its decision vector, with dimension n_{m_i}. In Fig. 1, it can be seen that the vectors x_i and y_i generate the interconnections among the subsystems. Let f^i = (f^i_1, f^i_2, …, f^i_{n_i}) be the objective vector of subsystem i. It will be assumed that each objective f^i_j is a function of the variables corresponding to its subsystem, which will be denoted by f^i_j(x_i, m_i, y_i), j = 1, 2, …, n_i. Let f = (f^1, f^2, …, f^N) be the joint objective vector of all the subsystems.

Fig. 1. General structure of the system.

In what follows, it will be implicitly assumed that n_i ≥ 2. The case with a single objective in some subsystem can be treated by obvious modifications of the results obtained. The multiple objective programming problem corresponding to the whole system can be stated as follows:

  Minimize (over x_1, m_1, y_1, …, x_N, m_N, y_N)
    [f^1(x_1, m_1, y_1), f^2(x_2, m_2, y_2), …, f^N(x_N, m_N, y_N)]   (1a)
  subject to
    y_i = H_i(x_i, m_i),  i = 1, 2, …, N,   (1b)
    g_i(x_i, m_i, y_i) ≤ 0,  i = 1, 2, …, N,   (1c)
    x_i = Σ_{j=1}^N C_ij y_j,  i = 1, 2, …, N.   (1d)

Eq. (1d) represents the couplings or connections among the subsystems, and indicates that the input vector of each subsystem is a linear combination of the outputs of all N subsystems. Usually, the matrix C_ij, which is called the connection matrix, is formed by zero-one elements, where a one indicates a connection. In any case, in what follows the matrices C_ij can be arbitrary constant matrices. Expressions (1b) and (1c) represent the systems of equations and inequalities of each subsystem. This scheme is particularly useful in complex organizations formed by relatively autonomous units, among which there exists a certain degree of conflict, due to the necessity of sharing scarce resources.
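The coupling equation (1d) is purely mechanical, and can be sketched in code. The following fragment is our illustration, not code from the paper; the function name `inputs_from_outputs` and the list-of-lists representation of the connection matrices are assumptions. As data, it uses the couplings of the example solved in Section 4 (x_1 = -y_21 and x_2 = y_11 + y_12) together with the final output values reported there.

```python
# Illustrative sketch of the coupling equation (1d); not code from the paper.
# C[i][j] is the connection matrix from the outputs of subsystem j to the
# inputs of subsystem i; y[j] is the output vector of subsystem j.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * vk for a, vk in zip(row, v)) for row in A]

def inputs_from_outputs(C, y):
    """Compute x_i = sum_j C_ij y_j for every subsystem i (Eq. (1d))."""
    x = []
    for Ci in C:
        xi = matvec(Ci[0], y[0])
        for j in range(1, len(y)):
            xi = [a + b for a, b in zip(xi, matvec(Ci[j], y[j]))]
        x.append(xi)
    return x

# Couplings of the example in Section 4: x_1 = -y_21, x_2 = y_11 + y_12.
C = [
    [[[0.0, 0.0]], [[-1.0, 0.0]]],   # C_11, C_12 (x_1 is one-dimensional)
    [[[1.0, 1.0]], [[0.0, 0.0]]],    # C_21, C_22 (x_2 is one-dimensional)
]
y = [[0.38153, 1.86228], [3.36624, 1.0]]   # final outputs reported in Section 4
x = inputs_from_outputs(C, y)
print(x)   # x_1 = -y_21 = -3.36624,  x_2 = y_11 + y_12 = 2.24381
```

Any coupling pattern fits this representation; a zero matrix C_ij simply means that subsystem i receives nothing from subsystem j.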
Each unit is aware of its own technical limitations, and is responsible for the achievement of its objectives, but it does not need detailed information about the rest of the units. Therefore, the overall objective of the organization, considered as a whole, is directly formed by the objectives of each of the units. On the other hand, given that an optimal performance of each separate unit does not necessarily produce the optimal policy for the whole organization, it is necessary to coordinate the decisions taken by the different units. As an example, Abad [7] models a company whose strategic decisions are taken in a coordinated manner, taking into account the decentralized performance of the production, marketing and finance subsystems. The cash inflow rate, which is an output of the marketing department, and the costs associated with the production department, are both inputs for the finance department. On the other hand, the sales rate, which is an input of the production department, is also an output of the marketing department. Haimes et al. [8] consider a multiproduct firm which has two manufacturing plants, and the global problem is decomposed under different points of view: by products or by plants. This scheme can also be applied in macroeconomic contexts, where the decisional units can be different regions of a country that have to share scarce resources, or different sectors of a national economy, etc. (see, for example, Nijkamp and Rietveld [9]). The resolution via hierarchical techniques of a global problem which is formed by N decisional units has two consequences. On the one hand, it allows a reduction of the complexity of the problem by solving several smaller subproblems, which in turn yields a reduction of the computational effort that becomes greater as the complexity of the initial problem increases (see Haimes et al. [8]).
On the other hand, it makes it easier to identify problematic areas with a weak performance, on which some action should be taken.

We are interested in the Pareto efficiency of the objective vector formed by the objective vectors of the subsystems. That is, we want to determine the Pareto optimal vectors relative to f = (f^1, f^2, …, f^N), which will be denoted by E[f(X); ℝ_+^{n_1+n_2+⋯+n_N}], where

  X = { (x, m, y) : y_i = H_i(x_i, m_i), x_i = Σ_{j=1}^N C_ij y_j, g_i(x_i, m_i, y_i) ≤ 0, i = 1, 2, …, N }

is the decision space, and f(X) = { f(x, m, y) : (x, m, y) ∈ X } is the objective or criterion space. By definition, a point (x, m, y) ∈ X is said to be a Pareto optimal or efficient point for problem (1), or equivalently, f(x, m, y) ∈ E[f(X); ℝ_+^{n_1+⋯+n_N}], if there does not exist any other point (x', m', y') ∈ X such that

  f(x, m, y) − f(x', m', y') ∈ ℝ_+^{n_1+⋯+n_N} \ {0}.

The generation of efficient solutions for f is basic for many applications; the study of other cases, where a global objective vector F(f^1, f^2, …, f^N) depending on the objectives of the subsystems is considered, can be reduced to the Pareto optimality problem of the vector formed by the vectors of the subsystems (see Haimes et al. [8]).

3. Feasible decomposition-coordination procedure

Let us suppose that the coupling variables among the subsystems are fixed by assigning given values ȳ_i to the outputs of the subsystems: y_i = ȳ_i (i = 1, 2, …, N). As a consequence, the input vectors will also be fixed, x̄_i = Σ_{j=1}^N C_ij ȳ_j, and thus the global optimization problem (1) can be decomposed into the N following subproblems, one for each lower unit:

  Minimize (over x_i, m_i, y_i)  (f^i_1, f^i_2, …, f^i_{n_i})   (2a)
  subject to
    y_i = H_i(x_i, m_i),   (2b)
    g_i(x_i, m_i, y_i) ≤ 0,   (2c)
    −y_i + ȳ_i = 0,   (2d)
    −x_i + x̄_i = 0,   (2e)
  i = 1, 2, …, N.

Note that the output vectors y_i are fixed, as well as the inputs x_i, through (2e), which, substituted in (2b), yields a system of n_{y_i} equations with n_{m_i} unknowns (m_i), i = 1, 2, …, N.
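The dominance relation used in the efficiency definition above translates directly into code. The following sketch is ours, not the paper's (all names are illustrative): for a finite set of objective vectors, the efficient set E[·; ℝ^n_+] is obtained by keeping the points that no other point dominates.

```python
# Our sketch of Pareto dominance and the efficient set E[.] for a finite
# set of objective vectors (minimization); not code from the paper.

def dominates(fa, fb):
    """fa dominates fb iff fb - fa lies in R^n_+ \\ {0}."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def efficient_set(points):
    """Keep the points that are not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = efficient_set([(1, 2), (2, 1), (2, 2), (3, 3)])
print(front)   # [(1, 2), (2, 1)]
```

Note that a point never dominates itself (the strict-inequality test fails), so the filter needs no special case for p itself.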
Thus, a necessary condition for the applicability of this decomposition is that the number of decision variables must be at least as large as the number of interconnection variables, that is, n_{m_i} ≥ n_{y_i}, i = 1, 2, …, N.

On the other hand, it is necessary to establish an appropriate coordination process according to the decomposition that has been carried out, so that the overall solution of the original problem can be obtained from the solutions of the subsystems. In this line, Li and Haimes [6] develop an enveloping coordination process, via a feasible decomposition theorem, for problems where the overall objective function is, by components, the sum of the objectives of the subsystems. Nevertheless, this approach can only be used in practice when an analytic expression for the solutions of the subsystems, as functions of the coordination variables, can be obtained. This is not frequent in practical cases. So, the previously mentioned result will be extended to the general case, where the global objective is not necessarily additive. Theorem 1 provides the theoretical framework for the proposed coordination strategy. Such a strategy does not need a parametric expression of the optimal solutions of the subsystems, and consequently it has clear computational advantages with respect to the previously mentioned approach when it comes to solving complex real problems.

Theorem 1.

  E[f(X); ℝ_+^{n_1+⋯+n_N}] = E[ ∪_{ȳ∈Y} E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}] ; ℝ_+^{n_1+⋯+n_N} ],   (3)

where

  Y = { ȳ : there exists m such that (x, m, ȳ) ∈ X },
  X(ȳ) = { (x, m, ȳ) : (x, m, ȳ) ∈ X },
  f(X(ȳ)) = { f(x, m, ȳ) : (x, m, ȳ) ∈ X(ȳ) }.

Proof. In this proof, the following notations will be used:

  M = E[f(X); ℝ_+^{n_1+⋯+n_N}]  and  L = E[ ∪_{ȳ∈Y} E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}] ; ℝ_+^{n_1+⋯+n_N} ].

First, let us prove that M ⊆ L. Let us assume that f(x*, m*, y*) ∈ M. Then it can be affirmed that y* ∈ Y and f(x*, m*, y*) ∈ E[f(X(y*)); ℝ_+^{n_1+⋯+n_N}].
Thus, f(x*, m*, y*) must belong to L for, otherwise, there would exist a solution ȳ ∈ Y with

  f(x̄, m̄, ȳ) ∈ E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}]

which dominates it, that is,

  f(x*, m*, y*) − f(x̄, m̄, ȳ) ∈ ℝ_+^{n_1+⋯+n_N} \ {0},   (4)

and this contradicts the fact that f(x*, m*, y*) ∈ M.

Let us now prove the converse inclusion, that is, L ⊆ M. Let us assume that f(x*, m*, y*) ∈ L and that it does not belong to M. Then there exists (x̄, m̄, ȳ) ∈ X which dominates it, that is, relation (4) holds. If f(x̄, m̄, ȳ) ∈ E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}], then this contradicts the fact that f(x*, m*, y*) ∈ L. If, on the other hand, f(x̄, m̄, ȳ) ∉ E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}], then there exists (x̂, m̂, ȳ) ∈ X(ȳ) such that

  f(x̄, m̄, ȳ) − f(x̂, m̂, ȳ) ∈ ℝ_+^{n_1+⋯+n_N} \ {0}.

Therefore, it follows from (4) that f(x*, m*, y*) is dominated by f(x̂, m̂, ȳ) ∈ E[f(X(ȳ)); ℝ_+^{n_1+⋯+n_N}], which is a contradiction. Thus f(x*, m*, y*) ∈ M, so L ⊆ M, and the proof is complete. □

From this theorem it follows that the efficient set of the overall problem can be determined through a multilevel procedure. At the lower level, problem (2) is solved in each subsystem, for a fixed value ȳ of the variable y, so as to obtain the set E[f(X(ȳ))]. At the upper level, making use of the efficient sets for each parameter ȳ, the set of efficient solutions of the overall problem (1) is determined. Based on this theorem, a hierarchical algorithm will be developed, where an operative characterization of the coordination strategy is given which makes the numerical resolution of practical problems possible. To this end, it will be assumed that the functions f^i_j and g_i are convex and continuously differentiable, and that the functions H_i are linear. Consequently, making use of the weighting method, the following N subproblems will be considered:

Subsystem i (1 ≤ i ≤ N):

  Minimize (over x_i, m_i, y_i)  (λ^i)^T f^i = Σ_{j=1}^{n_i} λ^i_j f^i_j(x_i, m_i, y_i)   (5a)
  subject to (2b)-(2e).   (5b)
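Theorem 1 can be illustrated numerically. The sketch below is ours, not the paper's: it builds a tiny finite analogue of the problem, with the coupling variable y and the decision variable m restricted to small grids and a hypothetical bi-objective function f, and checks that the efficient set of all feasible points (the left-hand side of (3)) coincides with the efficient set of the union of the per-ȳ efficient sets (the right-hand side).

```python
# Numerical illustration of Theorem 1 on a tiny finite instance (ours,
# not the paper's): the efficient set of all feasible points equals the
# efficient set of the union of the per-y efficient sets, as in (3).

def dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def efficient_set(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def f(y, m):
    # Hypothetical bi-objective function of a coupling variable y and a
    # decision variable m, both restricted to small grids.
    return ((y - 1) ** 2 + m, (m - 2) ** 2 + y)

Y = [0, 1, 2]          # grid of coupling values (the set Y of the theorem)
M = [0, 1, 2]          # grid of decisions (X(y) for each fixed y)

# Left-hand side of (3): efficient set over all feasible points.
lhs = set(efficient_set([f(y, m) for y in Y for m in M]))
# Right-hand side of (3): efficient set of the union of the sets E[f(X(y))].
rhs = set(efficient_set([p for y in Y
                         for p in efficient_set([f(y, m) for m in M])]))
print(lhs == rhs)      # True
```

This finite check mirrors the two inclusions of the proof: every globally efficient point is efficient within its own slice X(ȳ), and filtering the union of the slice-efficient sets removes nothing that is globally efficient.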
The weights are denoted by λ^i_j, and they are assumed to be strictly positive, λ^i_j > 0, since we wish to obtain properly efficient solutions. The value of the objective function (λ^i)^T f^i obviously depends on the fixed value ȳ of y. The function defined by

  V_i(ȳ) = min_{x_i, m_i, y_i} { (λ^i)^T f^i(x_i, m_i, y_i) : (2b)-(2e) }

assigns to each vector ȳ the optimal value V_i(ȳ) of (λ^i)^T f^i. This function is known as the value function of problem (5). The Lagrangian function of this problem is given by

  L_i(x_i, m_i, y_i) = (λ^i)^T f^i(x_i, m_i, y_i) + μ_i^T [−y_i + H_i(x_i, m_i)] + β_i^T g_i(x_i, m_i, y_i)
                      + (γ^y_i)^T [−y_i + ȳ_i] + (γ^x_i)^T [−x_i + Σ_{j=1}^N C_ij ȳ_j],   (6)

where μ_i, β_i, γ^y_i and γ^x_i are Kuhn-Tucker multiplier vectors.

In the following lemma, some properties of the value function V_i(ȳ) are stated, so as to make use of them later on.

Lemma. If the functions f^i_j and g_i are convex and continuously differentiable, and the function H_i is linear, then:

(a) The function V_i(ȳ) is convex.
(b) If all the properly efficient solutions of (2), (x*_i(ȳ), m*_i(ȳ), ȳ_i), are regular, that is, if they satisfy the linear independence constraint qualification, then V_i(ȳ) is differentiable at ȳ, and

  ∇V_i(ȳ) = ∇_ȳ L_i(x*_i(ȳ), m*_i(ȳ), ȳ)
           = ( (γ^x_i)^T C_i1, (γ^x_i)^T C_i2, …, (γ^y_i)^T + (γ^x_i)^T C_ii, …, (γ^x_i)^T C_iN )^T.   (7)

The proofs of these properties can be found in Tanino and Ogawa [10]. Therefore, the gradient of the value function can be obtained through the resolution of subproblems (5), making use of the corresponding Kuhn-Tucker multipliers γ^y_i and γ^x_i. The next theorem provides a useful characterization in order to solve the global problem (1).

Theorem 2. Let us consider the global problem (1), and let us assume that the functions f^i_j and g_i are convex and continuously differentiable, that the functions H_i are linear, and that all the properly efficient solutions of the global problem and of the subproblems (2) are regular.
Then x* = (x*_1, m*_1, y*_1, …, x*_N, m*_N, y*_N)^T is a properly efficient solution of the global problem if and only if the two following conditions hold:

(a) (x*_i, m*_i, y*_i) is a properly efficient solution of problem (2) with ȳ_i = y*_i.
(b) y* is a properly efficient solution of the following problem (known as the coordinator or coordination problem):

  Minimize (over ȳ_1, …, ȳ_N)  [V_1(ȳ), V_2(ȳ), …, V_N(ȳ)].   (8)

Proof. Let us prove the first implication. Let x* = (x*_1, m*_1, y*_1, …, x*_N, m*_N, y*_N)^T be a properly efficient solution of the global problem (1). Then, there exists a vector of strictly positive weights, λ̃ = (λ̃^1_1, …, λ̃^1_{n_1}, …, λ̃^N_1, …, λ̃^N_{n_N})^T, such that x* is an optimal solution of the corresponding weighting problem, whose Lagrangian function is the following:

  L(x) = Σ_{i=1}^N (λ̃^i)^T f^i(x_i, m_i, y_i) + Σ_{i=1}^N μ̃_i^T [−y_i + H_i(x_i, m_i)]
        + Σ_{i=1}^N β̃_i^T g_i(x_i, m_i, y_i) + Σ_{i=1}^N δ̃_i^T [−x_i + Σ_{j=1}^N C_ij y_j],

where μ̃_i, β̃_i and δ̃_i are multiplier vectors, and the components of β̃_i are nonnegative. In addition, such a solution satisfies the Kuhn-Tucker conditions. In particular, the stationarity conditions, expressed componentwise, are the following:

  (λ̃^i)^T ∂f^i/∂x_i + μ̃_i^T ∂H_i/∂x_i + β̃_i^T ∂g_i/∂x_i − δ̃_i^T = 0,
  (λ̃^i)^T ∂f^i/∂m_i + μ̃_i^T ∂H_i/∂m_i + β̃_i^T ∂g_i/∂m_i = 0,
  (λ̃^i)^T ∂f^i/∂y_i − μ̃_i^T + β̃_i^T ∂g_i/∂y_i + Σ_{k=1}^N δ̃_k^T C_ki = 0,
  i = 1, 2, …, N.

Let us carry out the following normalization, for i = 1, 2, …, N:

  λ^i = λ̃^i / ρ_i,  μ_i = μ̃_i / ρ_i,  β_i = β̃_i / ρ_i,  δ_i = δ̃_i / ρ_i,  with  ρ_i = Σ_{j=1}^{n_i} λ̃^i_j.

Taking this notation into account, the previous equations, together with the rest of the Kuhn-Tucker conditions of the global problem (1) at the optimal solution x* = (x*_1, m*_1, y*_1, …, x*_N, m*_N, y*_N)^T, can be written in the following form:

  (λ^i)^T ∂f^i/∂x_i + μ_i^T ∂H_i/∂x_i + β_i^T ∂g_i/∂x_i − δ_i^T = 0,   (9a)
  (λ^i)^T ∂f^i/∂m_i + μ_i^T ∂H_i/∂m_i + β_i^T ∂g_i/∂m_i = 0,   (9b)
  (λ^i)^T ∂f^i/∂y_i − μ_i^T + β_i^T ∂g_i/∂y_i + (1/ρ_i) Σ_{k=1}^N ρ_k δ_k^T C_ki = 0,   (9c)
  λ^i > 0,   (9d)
  β_i ≥ 0,  β_i^T g_i(x*_i, m*_i, y*_i) = 0,  g_i(x*_i, m*_i, y*_i) ≤ 0,   (9e)
  y*_i = H_i(x*_i, m*_i),   (9f)
  x*_i = Σ_{j=1}^N C_ij y*_j,   (9g)
  i = 1, 2, …, N.

In particular, let us consider the ith subsystem, the solution (x*_i, m*_i, y*_i) and the vector of weights λ^i = (λ^i_1, …, λ^i_{n_i})^T, and let us denote

  γ^x_i = δ_i,   (10)
  (γ^y_i)^T = −(1/ρ_i) Σ_{k=1}^N ρ_k δ_k^T C_ki.   (11)

Then, Eqs. (9) for subsystem i take the form

  (λ^i)^T ∂f^i/∂x_i + μ_i^T ∂H_i/∂x_i + β_i^T ∂g_i/∂x_i − (γ^x_i)^T = 0,   (12a)
  (λ^i)^T ∂f^i/∂m_i + μ_i^T ∂H_i/∂m_i + β_i^T ∂g_i/∂m_i = 0,   (12b)
  (λ^i)^T ∂f^i/∂y_i − μ_i^T + β_i^T ∂g_i/∂y_i − (γ^y_i)^T = 0,   (12c)
  λ^i > 0,   (12d)
  β_i ≥ 0,  β_i^T g_i(x*_i, m*_i, y*_i) = 0,  g_i(x*_i, m*_i, y*_i) ≤ 0,   (12e)
  y*_i = H_i(x*_i, m*_i),   (12f)
  x*_i = Σ_{j=1}^N C_ij y*_j.   (12g)

But these are the Kuhn-Tucker conditions of the ith subproblem (2) corresponding to the point (x*_i, m*_i, y*_i), the vector of weights λ^i = (λ^i_1, …, λ^i_{n_i})^T, and the fixed value ȳ_i = y*_i of the output variable y_i. Thus, due to the convexity hypotheses of subproblem i, it can be concluded that this solution is properly efficient for the subproblem, for each i = 1, 2, …, N.

Moreover, from (10) and (11) it can be deduced that, given ȳ_i = y*_i for i = 1, 2, …, N, the following equations hold:

  ρ_i (γ^y_i)^T + Σ_{k=1}^N ρ_k (γ^x_k)^T C_ki = 0,  i = 1, 2, …, N.   (13)

Given (7), these equations, together with the conditions ρ_i > 0 (i = 1, 2, …, N), constitute the Kuhn-Tucker conditions for proper efficiency of the coordinator weighting problem with weights (ρ_1, ρ_2, …, ρ_N). Therefore, taking into account the previous lemma, it can be affirmed that ȳ_i = y*_i (i = 1, 2, …, N) is a properly efficient solution of the coordinator problem. This proves the first implication.

Let us now prove the converse one. Let (x*_i(ȳ), m*_i(ȳ), ȳ_i) be a properly efficient solution of the ith subproblem (i = 1, 2, …, N) given by (2).
Due to the regularity and differentiability assumptions, there exist λ^i > 0 and multipliers μ_i, β_i, γ^y_i, γ^x_i such that the corresponding Kuhn-Tucker conditions of each subsystem are satisfied. That is, (12) holds, with x_i = x*_i and m_i = m*_i. Besides this, y* = ȳ (i = 1, 2, …, N) is, by hypothesis, a properly efficient solution of the coordination problem. So, there also exists a vector of strictly positive weights, ρ = (ρ_1, ρ_2, …, ρ_N)^T, such that y* = ȳ is an optimal solution of the corresponding weighting problem. Therefore, the first order conditions (13) also hold. Taking (12) and (13) into account, and defining δ_i according to (10), the solution considered satisfies the Kuhn-Tucker conditions (9) of the global problem. Due to the convexity and differentiability assumptions, it can be deduced that x* is a properly efficient solution of the global problem, and this completes the proof of the theorem. □

From the above proof, it can be deduced that the following relationship exists between the weights of the global problem and the weights of the subsystems and the coordination problem:

  λ̃^i = ρ_i λ^i,  i = 1, 2, …, N,

where ρ_i is the weight assigned to the ith subsystem.

Note that, due to the way the subsystems have been formulated (see expression (2)), the predictions ȳ are only directly considered in the interconnections, while they appear indirectly in the rest of the problem through constraint (2d). Obviously, the optimal solution would have been the same if such predictions had been directly substituted into all the constraints. But, when obtaining the optimal solutions of formulation (2), the optimal values of the multipliers γ^y_i and γ^x_i are also obtained, and these in turn let us obtain the gradient of the weighted coordination function.
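Once a subproblem (5) has been solved, assembling the gradient (7) from these multipliers is immediate. The following sketch is ours, not the paper's code; the function names and the one-dimensional data are hypothetical. It simply stacks the blocks (γ^x_i)^T C_ij, adding (γ^y_i)^T in the ith block.

```python
# Assembling the value-function gradient (7) from the multipliers of
# subproblem i (our sketch; names and data are hypothetical).

def vecmat(v, A):
    """Row vector times matrix: (v^T A)_j = sum_k v_k A_kj."""
    return [sum(vk * row[j] for vk, row in zip(v, A)) for j in range(len(A[0]))]

def value_function_gradient(i, gamma_y, gamma_x, C):
    """Blocks of grad V_i: (gamma_x_i)^T C_ij for j != i,
    and (gamma_y_i)^T + (gamma_x_i)^T C_ii for j = i."""
    blocks = []
    for j in range(len(C[i])):
        block = vecmat(gamma_x, C[i][j])
        if j == i:
            block = [gy + b for gy, b in zip(gamma_y, block)]
        blocks.append(block)
    return blocks

# Hypothetical data: N = 2 subsystems, one-dimensional couplings.
C = [[[[4.0]], [[3.0]]],     # C_11, C_12
     [[[0.0]], [[0.0]]]]     # C_21, C_22
grad = value_function_gradient(0, gamma_y=[2.0], gamma_x=[1.0], C=C)
print(grad)   # [[6.0], [3.0]]: first block is gamma_y + gamma_x * C_11
```

No function evaluations are needed here: the coordinator obtains the gradient purely from the multiplier information sent up by the subsystems.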
Consequently, although the coordinator generally does not know an analytic expression of his/her objectives (8), he/she can use this information in order to build a tangential approximation of the weighting function. This way, new values of ȳ can be obtained by minimizing the approximation function. This minimization can be carried out via a gradient-type algorithm. The recursive equation of such a minimization can be written as follows:

  (ȳ_i^{k+1})^T = (ȳ_i^k)^T − s^k [ ρ_i (γ^{y,k}_i)^T + Σ_{j=1}^N ρ_j (γ^{x,k}_j)^T C_ji ],  i = 1, 2, …, N,   (14)

where s^k > 0 is the step length and the superindex k indicates the current iteration number.

4. Algorithm and numerical example

In this section, the two-level hierarchical algorithm which has been developed on the basis of the above considerations will be described. First, some eventualities which can appear in real problems will be analyzed, as well as some observations which must be taken into account in the implementation. As has been mentioned, the decomposition is carried out by fixing the output vectors of the subsystems, ȳ, so that N multiobjective subproblems appear, which will be solved at the lower level. But the initial prediction of the outputs is generally done in a rather arbitrary way, unless the decision maker has previous knowledge of the behavior of the global system (which is difficult in large-scale problems). For this reason, there is a risk of choosing an infeasible vector ȳ. In order to handle this eventuality, the algorithm incorporates a feasibility phase, once the value ȳ has been fixed and before the resolution of each subsystem, which is carried out using the feasibility phase of any optimizing subroutine (in our case, a subroutine library has been used, as will be seen afterwards). As a result, in case the initial estimate ȳ is infeasible, the algorithm detects this fact and provides a feasible one.
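The recursion (14) itself can be sketched as follows (our illustration; the helper names and the one-dimensional data are hypothetical). Each direction d_i^k combines subsystem i's own multiplier γ^y_i with the multipliers γ^x_j of every subsystem that receives part of y_i as input.

```python
# Sketch of the coordinator recursion (14); our code, not the paper's.
# d_i^k = rho_i * gamma_y_i^k + sum_j rho_j * (gamma_x_j^k)^T C_ji,
# y_i^{k+1} = y_i^k - s^k * d_i^k.

def vecmat(v, A):
    """Row vector times matrix."""
    return [sum(vk * row[j] for vk, row in zip(v, A)) for j in range(len(A[0]))]

def coordination_direction(i, rho, gamma_y, gamma_x, C):
    d = [rho[i] * g for g in gamma_y[i]]
    for j in range(len(rho)):
        contrib = vecmat(gamma_x[j], C[j][i])      # (gamma_x_j)^T C_ji
        d = [a + rho[j] * b for a, b in zip(d, contrib)]
    return d

def update_outputs(y_bar, s, rho, gamma_y, gamma_x, C):
    """One step of (14) for every subsystem."""
    return [[yk - s * dk
             for yk, dk in zip(y_bar[i],
                               coordination_direction(i, rho, gamma_y, gamma_x, C))]
            for i in range(len(y_bar))]

# Hypothetical one-dimensional data for N = 2 subsystems:
rho = [1.0, 1.0]
gamma_y = [[2.0], [0.0]]
gamma_x = [[1.0], [3.0]]
C = [[[[0.0]], [[-1.0]]],
     [[[1.0]], [[0.0]]]]
y_new = update_outputs([[1.0], [1.0]], 0.1, rho, gamma_y, gamma_x, C)
print(y_new)   # [[0.5], [1.1]]
```

The same expression ρ_i (γ^y_i)^T + Σ_j ρ_j (γ^x_j)^T C_ji also serves as the coupling error tested at the upper level, so a single routine can provide both the stopping test and the search direction.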
On the other hand, in practical cases it can also happen that the solutions of the subsystems do not satisfy the regularity conditions. For this reason, before updating the coordination variable, an intermediate optimization level has been introduced, in order to achieve a higher efficiency and quicker convergence of the method. At this level, among all the possible multipliers associated with the optimal solution of (5), the algorithm chooses the values that best fit the coordination equations. To this end, the following intermediate problem is solved:

  Minimize (over μ_i, β_i, γ^y_i, γ^x_i)  Σ_{i=1}^N ‖ ρ_i (γ^y_i)^T + Σ_{j=1}^N ρ_j (γ^x_j)^T C_ji ‖
  subject to
    (λ^i)^T ∂f^i/∂x_i + μ_i^T ∂H_i/∂x_i + β_i^T ∂g_i/∂x_i − (γ^x_i)^T = 0,
    (λ^i)^T ∂f^i/∂m_i + μ_i^T ∂H_i/∂m_i + β_i^T ∂g_i/∂m_i = 0,
    (λ^i)^T ∂f^i/∂y_i − μ_i^T + β_i^T ∂g_i/∂y_i − (γ^y_i)^T = 0,
    β_i ≥ 0,  i = 1, 2, …, N,   (15)

where the equations are evaluated at the current iterate. With the corresponding optimal solution, the search direction vectors are calculated:

  (d_i^k)^T = ρ_i (γ^{y,k}_i)^T + Σ_{j=1}^N ρ_j (γ^{x,k}_j)^T C_ji,  i = 1, 2, …, N,

and the algorithm proceeds to the coordination phase. In this phase, in order to update the estimate ȳ^k using the recursive Eq. (14), once the search direction is determined, it is necessary to compute the corresponding step length s^k. To do so, the algorithm proposed by Fletcher [11] for a nonderivative line search procedure has been used, given the available information about the coordinator's function. This algorithm is fully described in Appendix A. Given these considerations, the algorithm, which is illustrated in Fig. 2, takes the following steps:

Step 1: At the lower level, for each subsystem, give a vector of weights (λ^i)^T = (λ^i_1, λ^i_2, …, λ^i_{n_i}), λ^i_j > 0, j = 1, 2, …, n_i, i = 1, 2, …, N.

Step 2: At the upper level, determine an initial estimate of the coordination variables ȳ_i, i = 1, 2, …, N. Assign the coordination weight ρ_i to each subsystem, i = 1, 2, …, N.

Step 3: Test the feasibility of ȳ_i, i = 1, 2, …, N. If it is not feasible, use the feasibility phase to obtain a feasible estimate ȳ*_i, i = 1, 2, …, N.
Set k = 0 and ȳ_i^0 = ȳ*_i, i = 1, 2, …, N. If it is feasible, set k = 0 and ȳ_i^0 = ȳ_i, i = 1, 2, …, N.

Fig. 2. Flowchart for the algorithm.

Step 4: At the subsystem level, solve each weighting problem (5) and the intermediate problem (15), and obtain x_i^k(λ^i, ȳ^k), m_i^k(λ^i, ȳ^k), μ_i^k(λ^i, ȳ^k), β_i^k(λ^i, ȳ^k), γ^{x,k}_i(λ^i, ȳ^k) and γ^{y,k}_i(λ^i, ȳ^k), i = 1, 2, …, N.

Step 5: At the upper level, given the N optimal solutions of the subproblems, test whether the errors in the coupling equations are lower than certain tolerances ε_i > 0, that is,

  e_i^k = ‖ ρ_i (γ^{y,k}_i)^T + Σ_{j=1}^N ρ_j (γ^{x,k}_j)^T C_ji ‖ < ε_i,  i = 1, 2, …, N.

If so, this N-set of optimal solutions of the subsystems, corresponding to ȳ^k = (ȳ_1^k, ȳ_2^k, …, ȳ_N^k), constitutes a properly efficient solution of the overall problem. Go to step 7.

Step 6: Update the step length s^k using the sectioning algorithm described in Appendix A. Using the recursive Eq. (14), update the prediction ȳ^k, set k = k + 1 and send this update to the lower level, that is, go to step 4.

Step 7: If the user wishes to generate another properly efficient solution, go to step 1. If not, end of process.

The convergence of this algorithm is guaranteed by Brosilow et al. [12]. That is, if the global weighting problem has an optimal solution, then it is approximated by the described decomposition-coordination scheme. This solution is in turn a properly efficient solution of the overall problem. These algorithms have been implemented on a VAX 8530 computer, in the FORTRAN 77 language, with the aid of the NAG subroutine library [13].

In order to illustrate the behavior of the algorithm, let us solve the following simple example. Let us suppose that two affiliated companies produce two types of goods each. Each company has a decision center, which has decided to modify, for the next year, the investment policy it is presently carrying out.
The production costs of each good of each company are denoted by y_{ij} (i = 1, 2 for the company, j = 1, 2 for the good), and the variations of the investment costs with respect to their current values (equipment corresponding to each good) are denoted by m_{ij}. The aim of the decision centers is to determine their production costs and the variations of their investment costs, taking into account a series of technical constraints. First, there exist technical constraints which relate the production costs of both affiliated companies with the new investment policy:

y_{21} = y_{12} + m_{11} + m_{12} - 2,

m_{22} = (1/2)(y_{11} + y_{12} + y_{21} + m_{21}).

Second, in company 1, the decision center wishes to favor the investment over the production costs. Third, in company 2, the decision center wishes to bound the variations of the investment costs, while giving a minimum value for m_{22}. Consequently, the following technical constraints appear:

Company 1: y_{11} + y_{12} \leq 5; m_{11} \geq 1; y_{11} \leq m_{11}; y_{12} \leq m_{12}; y_{11}, y_{12} \geq 0.

Company 2: m_{21} + m_{22} \leq 8; m_{22} \geq 0.5; y_{21}, y_{22} \geq 0.

Finally, the objectives of each company are to minimize the deviations of the production costs from certain reference values of the sector, and the variations of the investment costs, penalizing via weights the least desired deviations:

Company 1: (y_{11} - 2)^2 + 2(y_{12} - 3)^2 and m_{11}^2 + 2m_{12}^2.

Company 2: 2(y_{21} - 5)^2 + (y_{22} - 1)^2 and 3m_{21}^2 + m_{22}^2.

In order to decompose the global problem into two subproblems, one for each company, the interconnection variables x_1 and x_2 are considered, and the following constraints are added: x_1 = -y_{21} for the first company, and x_2 = y_{11} + y_{12} for the second company. Thus, the system (depicted in Fig. 3) has two subsystems connected by the two interconnection variables. Therefore, the global problem takes the following form:

Min f_1(x_1, m_{11}, m_{12}, y_{11}, y_{12}) = ((y_{11} - 2)^2 + 2(y_{12} - 3)^2, m_{11}^2 + 2m_{12}^2),
Min f_2(x_2, m_{21}, m_{22}, y_{21}, y_{22}) = (2(y_{21} - 5)^2 + (y_{22} - 1)^2, 3m_{21}^2 + m_{22}^2),

subject to

y_{12} + x_1 + m_{11} + m_{12} = 2,      (16a)
y_{11} + y_{12} \leq 5,                  (16b)
m_{11} \geq 1,                           (16c)
y_{11} - m_{11} \leq 0,                  (16d)
y_{12} - m_{12} \leq 0,                  (16e)
y_{11}, y_{12} \geq 0,                   (16f)
y_{21} + x_2 + m_{21} - 2m_{22} = 0,     (16g)
m_{21} + m_{22} \leq 8,                  (16h)
y_{21}, y_{22} \geq 0,                   (16i)
m_{22} \geq 0.5,                         (16j)
x_1 = -y_{21},                           (16k)
x_2 = y_{11} + y_{12}.                   (16l)
Fig. 3. Structure of the example.

Once the values y^T = (y_{11}, y_{12}, y_{21}, y_{22}) are fixed at the second level, problem (16) can be decomposed into two subproblems. The subsystems at the first level take the following form:

Subsystem 1:
Min ((y_{11} - 2)^2 + 2(y_{12} - 3)^2, m_{11}^2 + 2m_{12}^2)
subject to (16a)-(16f), x_1 = -\hat{y}_{21}, -y_{11} + \hat{y}_{11} = 0, -y_{12} + \hat{y}_{12} = 0.

Subsystem 2:
Min (2(y_{21} - 5)^2 + (y_{22} - 1)^2, 3m_{21}^2 + m_{22}^2)
subject to (16g)-(16j), x_2 = \hat{y}_{11} + \hat{y}_{12}, -y_{21} + \hat{y}_{21} = 0, -y_{22} + \hat{y}_{22} = 0,

where \hat{y} denotes the current estimate of the coordination variable. Let us assume that the vectors of weights of subsystems 1 and 2 are \lambda_1 = (0.4, 0.6) and \lambda_2 = (0.5, 0.5), respectively, and that the same importance is assigned to each subsystem, that is, \alpha_1 = \alpha_2 = 1. Finally, let us assume that the initial values chosen at the second level are y^0 = (0, 0, 0, 0)^T, and the admissible value of the coupling errors is set to \varepsilon = 0.001. The results of the iterative process are given in Table 1.

Therefore, for the given weights (it is important to point out that this is a single efficient solution for a given vector of weights), the variations of the investment costs in company 1 are (m_{11}, m_{12}) = (1.64168, 1.86228), which means an increase over their current values, and the production costs are (y_{11}, y_{12}) = (0.38153, 1.86228). On the other hand, the results for company 2 are more heterogeneous. The investment cost should be reduced for product 1, given that m_{21} = -0.43154, and increased for product 2, given that m_{22} = 2.58925. Besides, the production cost of product 1, y_{21} = 3.36624, is much higher than that of product 2, y_{22} = 1.00031. Of course, the results for other vectors of weights may differ significantly from the present one.
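This final solution can be checked independently against the first-order (KKT) conditions of the global weighting problem with weights \lambda_1 = (0.4, 0.6), \lambda_2 = (0.5, 0.5) and \alpha_1 = \alpha_2 = 1. The sketch below is our own stdlib-only verification, not the paper's code: the multiplier names nu, mu and eta are ours, attached to the equality constraints (16g) and (16a) and to the active inequality (16e), with x_1 and x_2 eliminated via (16k) and (16l); the remaining inequalities are inactive at the reported point.

```python
# Reported solution (last column of Table 1)
y11, y12, m11, m12 = 0.38153, 1.86228, 1.64168, 1.86228
y21, y22, m21, m22 = 3.36624, 1.00031, -0.43154, 2.58925

# Multipliers recovered from three stationarity equations (our notation):
nu = -3.0 * m21                        # d/dm21: 3*m21 + nu = 0
mu = 2.0 * (y21 - 5.0) + nu            # d/dy21: 2*(y21-5) - mu + nu = 0
eta = -(1.6 * (y12 - 3.0) + mu + nu)   # d/dy12: 1.6*(y12-3) + mu + nu + eta = 0

# The remaining stationarity and feasibility conditions should vanish:
residuals = [
    m22 - 2.0 * nu,                     # d/dm22: m22 - 2*nu = 0
    0.8 * (y11 - 2.0) + nu,             # d/dy11
    1.2 * m11 + mu,                     # d/dm11
    2.4 * m12 + mu - eta,               # d/dm12 (eta from active y12 <= m12)
    y22 - 1.0,                          # d/dy22
    y12 - y21 + m11 + m12 - 2.0,        # (16a) with x1 = -y21
    y21 + y11 + y12 + m21 - 2.0 * m22,  # (16g) with x2 = y11 + y12
]
assert eta >= 0.0                       # multiplier of the active inequality
assert all(abs(r) < 5e-3 for r in residuals)
```

All residuals are below 5e-3, i.e. the tabulated point satisfies the optimality conditions of the global weighting problem to the printing precision of Table 1.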
Table 1
Solutions in the iteration process

Variable   Iter. 1    Iter. 2    Iter. 3    Iter. 4    Iter. 5    Iter. 6    Iter. 7
x_1       -1.41359   -3.29844   -3.39212   -3.33333   -3.39648   -3.36503   -3.36624
m_{11}     1.08240    1.49620    2.04124    1.61341    1.85949    1.64285    1.64168
m_{12}     1.16559    1.90112    1.67544    1.85996    1.76849    1.86108    1.86228
y_{11}     0.57039    0.72304    0.58881    0.56957    0.39556    0.39560    0.38153
y_{12}     1.16559    1.90112    1.67544    1.85996    1.76849    1.86108    1.86228
x_2        1.73599    2.62417    2.26425    2.42953    2.16406    2.25669    2.24381
m_{21}    -0.24227   -0.45558   -0.43510   -0.44329   -0.42773   -0.43244   -0.43154
m_{22}     1.45365    2.73351    2.61063    2.65978    2.56640    2.59464    2.58925
y_{21}     1.41359    3.29844    3.39212    3.33333    3.39648    3.36503    3.36624
y_{22}     0.12399    0.44478    0.66069    0.69769    0.98118    0.98310    1.00031

Let us center our attention on the step from iteration 1 to iteration 2. Once the values y^1 = (y_{11}, y_{12}, y_{21}, y_{22}) = (0.57039, 1.16559, 1.41359, 0.12399) are fixed, the rest of the variables are calculated by solving the problems previously specified for each subsystem. From these data, the results shown in the first column of the table are obtained:

Subsystem 1: x_1 = -1.41359, m_{11} = 1.08240, m_{12} = 1.16559.
Subsystem 2: x_2 = 1.73599, m_{21} = -0.24227, m_{22} = 1.45365.

Besides, the gradient of the value function of each subsystem is also available from the resolution of the subsystems:

\nabla_y v_1 = (W_{11}, W_{12}, -V_1, 0) = (-1.14368, -2.73537, 1.29888, 0),
\nabla_y v_2 = (V_{21}, V_{22}, W_{21}, W_{22}) = (0.72682, 0.72682, -6.44597, -0.87600).

In order to test whether the solution obtained from the subsystems is also an efficient solution for the global problem, the errors in the coupling equations are calculated:

e^1 = \| (W_{11} + V_{21}, W_{12} + V_{22}) \| = 2.00855,
e^2 = \| (W_{21} - V_1, W_{22}) \| = 5.14709.

As these errors are not under the given tolerance, the current iterate is not yet a solution of the global problem, and thus the process must go on. The first step is now to determine the step length using the sectioning process described in Appendix A. The result for this iteration is s^1 = 0.36619.
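This first coordination update can be reproduced numerically from the data above. The check below is our own stdlib-only sketch; since \alpha_1 = \alpha_2 = 1, the search direction is simply the sum of the two value-function gradients.

```python
s = 0.36619                                       # step length for iteration 1
y1 = [0.57039, 1.16559, 1.41359, 0.12399]         # (y11, y12, y21, y22), iteration 1
grad_v1 = [-1.14368, -2.73537, 1.29888, 0.0]      # (W11, W12, -V1, 0)
grad_v2 = [0.72682, 0.72682, -6.44597, -0.87600]  # (V21, V22, W21, W22)

# Search direction and coordination update y^2 = y^1 - s * d
d = [g1 + g2 for g1, g2 in zip(grad_v1, grad_v2)]
y2 = [y - s * dj for y, dj in zip(y1, d)]
# y2 agrees with the iteration-2 column of Table 1,
# (0.72304, 1.90112, 3.29844, 0.44478), to within rounding.
```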
Therefore,

y_{11}^2 = y_{11}^1 - 0.36619[W_{11} + V_{21}] = 0.72304,    y_{21}^2 = y_{21}^1 - 0.36619[W_{21} - V_1] = 3.29844,
y_{12}^2 = y_{12}^1 - 0.36619[W_{12} + V_{22}] = 1.90112,    y_{22}^2 = y_{22}^1 - 0.36619[W_{22}] = 0.44478.

From this point, the process starts again.

The aim of this example is just to show the way the implemented algorithm works. Of course, the example is not designed to show all the advantages of the decomposition-coordination schemes in hierarchical systems. Such schemes are specially suitable for much more complex problems, with a higher dimensionality, which would be too long to be studied as a simple example.

With the aim of studying the performance of the implementation in terms of processing time, it has been run on a series of test problems. Namely, the mean computing time of a great number of linear-quadratic problems has been considered. Each of them has four subsystems and 100 constraints; the number of variables ranges between 70 and 320, and the number of objectives between 8 and 20. The problems have been randomly generated according to the scheme given in Rey [14], and their efficient frontier is approximated using the described algorithm for different vectors of weights. The results are displayed in Table 2. It must be taken into account that, for each problem, 1024 weighting problems are solved, given the number of objectives and the partition of the weight space which is carried out.

Table 2
CPU times for several hierarchical multiobjective problems

Variables    CPU (s)
70              470
85              853
100            1386
120            2567
150            6834
190           11954
220           13254
320           15842

Let us observe that, for small-size problems (like the one treated in the example), nonhierarchical procedures show a better computational behavior than hierarchical techniques. But for larger-scale problems, the computing times produced by the hierarchical methodology are significantly lower as the number of objectives grows (10, 15, 20, ...) and so does the number of variables (100, 300, ...).
Nevertheless, the variable that most critically influences the computing time is the number of interconnection variables. This is an expected result, given that a greater number of coupling equations yields a higher difficulty in the subsystem coordination process. On the other hand, it must be pointed out that the empirical results show a good behavior of the algorithm, given the relatively low processing times and taking into account the size of the problems. If these times are compared with computational implementations of nonhierarchical resolution methods (as in Caballero et al. [15]), a decrease of approximately 10% in the global processing time can be observed (furthermore, for problems with, for example, 100 variables, 150 constraints and 10 objectives, computing times can be reduced by up to 35%). It is important to insist that these computational advantages grow as the complexity of the problem increases, in terms of number of objectives, variables, etc. Of course, in a small problem like the one considered in the example, a nonhierarchical traditional scheme is more computationally efficient.

Besides, the potential advantages of each scheme (hierarchical or not) are not reduced to the computational aspects alone, but also depend on the decisional context of the problem. In an organizational environment, with several decentralized functional units which are somehow interconnected, the hierarchical scheme may help to solve the problem of each unit, to detect areas with a bad performance, etc., without requiring a detailed knowledge of the global organization.

5. Conclusions

The main difference between the already existing schemes and the one described in this paper lies in the coordination process.
Namely, in the previous works, once the value of the coordination variables is fixed, the overall multiobjective problem is decomposed into N parameterized multiobjective subproblems. Then, the properly efficient solutions of the subproblems are calculated, and the properly efficient set of the overall problem is approximated using those subsystem properly efficient solutions that satisfy a system of equations, which normally has more equations than unknowns. Nevertheless, this scheme is insufficient for many practical applications, for it is highly improbable that, given an initial prediction of the coordination variable, the properly efficient solutions of the subsystems satisfy the system of equations. Only in problems where analytic solutions of the subsystems, as functions of the coordination variables, can be obtained and handled by the coordination equations will these schemes be useful. But this situation is not likely to appear in real practical cases, where the subsystems must be solved numerically.

For this reason, it can be affirmed that the scheme developed in this paper is appropriate for practical cases. In it, the subsystems are numerically solved for some representative values of the coordination variables, and these values are updated via an iterative procedure until the coordination equations are satisfied. On the other hand, from the theoretical point of view, it can be stated that, under convexity conditions, the proposed algorithm can be used to obtain the properly efficient solutions of the global problem.

Acknowledgements

The authors would like to express their gratitude to the anonymous referees for their helpful comments.

Appendix A

In this appendix, the method proposed in Fletcher [11] to calculate the step length within a line search procedure is described. Some small modifications have been introduced in order to adapt the algorithm to our particular problem.
In theory, a value for s must be determined such that it solves

Minimize_s  W(s) = \alpha_1 v_1(y^k - s d^k) + \alpha_2 v_2(y^k - s d^k) + ... + \alpha_N v_N(y^k - s d^k).

In order to approximate this optimal value, an iterative procedure is used. First, the variation range [0, s^*] for s is determined, so that no prediction of y^{k+1} obtained with s \in [0, s^*] yields an empty feasible set for the subsystems. In order to find this value s^*, an incremental method is used, assuring the feasibility of all the iterations of the procedure. Once this interval is calculated, it is contracted by sectioning. The method used is the so-called golden section search, which is described as follows:

Step 1: Give the allowed tolerance \varepsilon.
Step 2: Make h = 0, s_1^0 = 0, s_2^0 = s^* and \tau = (\sqrt{5} - 1)/2 (golden number).
Step 3: Calculate \delta = s_2^h - s_1^h, s_3^h = s_1^h + (1 - \tau)\delta, s_4^h = s_1^h + \tau\delta.
Step 4: Evaluate W at the two new points. If W(s_3^h) < W(s_4^h), then make s_1^{h+1} = s_1^h, s_2^{h+1} = s_4^h. Go to step 6. Else, go to step 5.
Step 5: Make s_1^{h+1} = s_3^h, s_2^{h+1} = s_2^h.
Step 6: If h + 1 < (\log \varepsilon - \log s^*)/\log \tau, make h = h + 1 and go to step 3. If not, end of process.

Therefore, the central point with the smallest value of the function W(s) is taken as the approximation of the optimal solution of the problem Minimize_{s \in [0, s^*]} W(s). This solution is used as the step length in the main algorithm. It must be pointed out that, in step 3, one of the intermediate points of each iteration remains as an interior point in the following one. In order to achieve a quicker convergence of the algorithm, specially at the first stage, when the current iterate is still far from the optimal solution, the stopping criterion can be stated in terms of a maximum number of sectioning iterations, instead of the allowed tolerance.
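Steps 1-6 above translate almost directly into code. The following sketch is a minimal golden-section search of our own (stdlib-only); unlike the classical implementation, and unlike step 3 as optimized in the text, it re-evaluates both interior points at every iteration, trading one extra function evaluation per step for simplicity.

```python
import math

def golden_section(W, s_star, eps=1e-6):
    # Minimize a unimodal scalar function W over [0, s_star] by sectioning.
    tau = (math.sqrt(5.0) - 1.0) / 2.0   # golden number, ~0.618
    s1, s2 = 0.0, s_star
    # Number of iterations so that the interval shrinks below eps
    # (the stopping rule of step 6): s_star * tau**n <= eps.
    n_iter = math.ceil((math.log(eps) - math.log(s_star)) / math.log(tau))
    for _ in range(n_iter):
        delta = s2 - s1
        s3 = s1 + (1.0 - tau) * delta    # left interior point
        s4 = s1 + tau * delta            # right interior point
        if W(s3) < W(s4):
            s2 = s4                      # keep [s1, s4]; s3 stays interior
        else:
            s1 = s3                      # keep [s3, s2]; s4 stays interior
    return 0.5 * (s1 + s2)

# Example: the minimizer of (s - 2)^2 on [0, 5] is approximated closely.
s_min = golden_section(lambda s: (s - 2.0) ** 2, 5.0)
```

In the main algorithm, W would be the coordinator's function evaluated by re-solving the subsystems at y^k - s d^k, which is why a nonderivative search of this kind is appropriate.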
Finally, let us observe that, as suggested by Fletcher [11], in order to achieve a better performance in the step-length determination, once an interval (s_1, s_4) or (s_3, s_2) is chosen and an interior point is selected, a quadratic interpolation can be carried out with the three points instead of step 3. The corresponding minimum is the fourth point of the partition, and the procedure can proceed to step 4. Nevertheless, in this case some exceptions must be taken into account, as commented in Fletcher [11]. Namely, when the minimum of the quadratic function lies outside the current interval, or too close to one of the interpolation points (less than 5% of the interval length), it is suggested to go back to the initial sectioning procedure for two iterations, and then continue with the quadratic interpolation method.

References

[1] Lai YJ. Hierarchical optimization: a satisfactory solution. Fuzzy Sets and Systems 1996;77:321-35.
[2] Shih HH, Lai YJ, Lee ES. Fuzzy approach for multi-level programming problems. Computers and Operations Research 1996;23(1):73-91.
[3] Haimes YY, Li D. Hierarchical multiobjective analysis for large-scale systems: review and current status. Automatica 1988;24(1):53-69.
[4] Lieberman ER. Hierarchical multiobjective programming: an overview. In: Goicoechea A, Duckstein L, Zionts S, editors. Multiple criteria decision making. Theory and applications in business, industry and government. Berlin: Springer, 1992. p. 211-25.
[5] Tarvainen K. On the generating of Pareto optimal alternatives in large scale systems. In: Proceedings of the Fourth IFAC Symposium on Large Scale Systems, Zurich, Switzerland, 1986. p. 461-6.
[6] Li D, Haimes YY. A hierarchical generating method with feasible decomposition. In: Proceedings of the 10th IFAC Triennial World Congress. Large Scale Systems: Multilevel Control 1987;12(1-4):73-8.
[7] Abad PL.
A hierarchical optimal control model for coordination of functional decisions in a firm. European Journal of Operational Research 1987;32(1):62-75.
[8] Haimes YY, Tarvainen K, Shima T, Thadanthil J. Hierarchical multiobjective analysis of large-scale systems. New York: Hemisphere, 1990.
[9] Nijkamp P, Rietveld P. Multi-objective multi-level policy models: an application to regional and environmental planning. European Economic Review 1981;15:63-89.
[10] Tanino T, Ogawa T. An algorithm for solving two-level convex optimization problems. International Journal of Systems Science 1984;15(2):163-74.
[11] Fletcher R. Practical methods of optimization, 2nd ed. New York: Wiley, 1987.
[12] Brosilow C, Lasdon LS, Pearson JD. Feasible optimization methods for interconnected systems. In: Proceedings of the Joint Automatic Control Conference, Troy, New York, 1965. p. 79-84.
[13] NAG (Numerical Algorithms Group Limited). The NAG FORTRAN Library introductory guide, Mark 17, 1995.
[14] Rey L. A study of the different schemes to obtain solutions for convex quadratic multiple objective problems. PhD dissertation, University of Málaga, Spain, 1994.
[15] Caballero R, Rey L, Ruiz F, González M. An algorithmic package for the resolution and analysis of convex multiple objective problems. In: Fandel G, Gal T, editors. Multiple criteria decision making. Lecture notes in economics and mathematical systems, vol. 448. Heidelberg: Springer, 1997. p. 275-84.

Further reading

Ignizio JP. Goal programming and extensions. Massachusetts: Lexington Books, 1976.
Steuer RE. Multiple criteria optimization: theory, computation and application. New York: Wiley, 1986.
Singh MG, Titli A. Systems: decomposition, optimization and control. New York: Pergamon Press, 1978.

Rafael Caballero is a professor in the Department of Applied Economics, University of Málaga, Spain. He holds a Ph.D. in Mathematics from the University of Málaga.
He is interested in the field of Multiple Objective Programming (quadratic, convex, dynamic, hierarchical, etc.). Presently, his research is in interactive methods and applications to problems in the Public Sector.

Trinidad Gómez Núñez is an assistant professor in the Department of Applied Economics, University of Málaga, Spain. She holds a Ph.D. in Economic Science from the University of Málaga. Her current research is in the field of multicriteria decision making, with a special interest in hierarchical models and in applications to problems in Health Economics and Education Economics.

Mariano Luque holds a Ph.D. in Economic Science from the University of Málaga. Nearly all his research has been carried out in the field of Hierarchical Multiple Objective Programming. Presently, his research is based on interactive methods.

Francisca Miguel García is an Assistant Professor of Applied Economics at the University of Málaga, Spain. She holds a Ph.D. in Economic Science from the University of Málaga. Nearly all her research has been carried out in the field of Hierarchical Multiobjective Analysis of Large-Scale Systems. Presently, her research is based on the application of Multiple Objective Programming techniques.

Francisco Ruiz is an Assistant Professor in the Department of Applied Economics at the University of Málaga. He holds a Ph.D. in Economic Science from the University of Málaga. His research has been carried out in the field of quadratic, convex, dynamic and hierarchical multiple objective programming, including interactive methods.