Optimality Conditions for Pareto-Optimal Solutions
Vladimir Gorbunov (1, a), Elena Sinyukova (2, b)
(1, 2) Tomsk Polytechnic University, Lenin Avenue 30, Tomsk, Russia, 634050
(a) [email protected], (b) [email protected]
Keywords: multicriteria optimization, Pareto optimality, efficient solutions, necessary and sufficient conditions for local optimization.
Abstract. In this paper the authors derive simple necessary optimality conditions for continuous multicriteria optimization problems. It is proved that the existence of efficient solutions requires the gradients of the individual criteria to be linearly dependent. The set of solutions is given by a system of equations. It is shown that, to find necessary and sufficient conditions for multicriteria optimization problems, one must pass to a single-criterion optimization problem whose objective function is a convolution of the individual criteria. These results are consistent with those for nonlinear optimization problems with equality constraints. As an example, optimal solutions obtained by the method of the main criterion are examined for Pareto optimality.
In the study and optimization of real processes we rarely deal with single-parameter optimization. Typically, a process is described by several output functions (quality indicators, or criteria), each of which reflects an important property of the object and should be taken into account when determining the optimal values. The cornerstone of the multicriteria optimization problem is finding the Pareto-optimal (efficient) solutions, since «...the version of the design that will go into serial production must be Pareto-optimal» [1]. This is precisely why methods for selecting the Pareto-optimal solutions from the set of feasible solutions are so relevant. It is believed that once this set has been constructed, the multicriteria optimization problem is, in a certain sense, solved.
It should be noted that in some sources the set of non-dominated solutions is called the Edgeworth-Pareto set. This is due to the fact that Edgeworth introduced this concept for two criteria in 1881, and Pareto generalized it to the general case in 1906. At present this set is more often called the Pareto front.
Problem Statement. Optimization problems in which there is not one but several objective functions (partial criteria) are called multicriteria optimization problems.
The criteria Fi(X), i = 1, 2, …, m, form a vector criterion F(X) = (F1, F2, …, Fm), which is assumed to be repeatedly differentiable. The literature also uses the terms vector optimization and multi-purpose optimization. The partial criteria define an evaluation mapping F: D → R^m, where D is the initial set of options in R^n. The coordinates of points in the evaluation space consist of the values of the partial criteria computed at the points of D.
We formulate the multicriteria optimization problem. It has the form:

max F(X), X ∈ D.

The symbol max F(X) is understood as the set of symbols max Fi(X), i = 1, 2, …, m. We assume that all criteria are to be maximized, since one can always pass from min Fi(X) to max[–Fi(X)], i = 1, 2, …, m, i.e. change the sign of the partial criterion.
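This sign change can be illustrated with a short sketch (ours, not from the paper; the criterion F below is an arbitrary example):

```python
# Minimizing a criterion F is equivalent to maximizing -F, so every
# partial criterion can be written as a maximization.
def F(x):
    return (x - 3.0) ** 2  # illustrative criterion to be minimized

grid = [i * 0.01 for i in range(-1000, 1001)]  # search grid on [-10, 10]

x_min = min(grid, key=F)                # argmin of F
x_max = max(grid, key=lambda x: -F(x))  # argmax of -F: the same point
print(x_min, x_max)
```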
Necessary optimality conditions. In nonlinear programming much attention is paid to determining necessary and sufficient conditions for a solution vector X to be a local extremum. Note that optimality criteria are needed not only to recognize solutions but also to find them, since they form the basis of most solution methods. Similar issues arise in multicriteria optimization.
We define the concept of Pareto-optimal solutions. If a solution X1 ∈ D is not dominated by any other feasible solution X ∈ D, it is called non-dominated (efficient), or optimal in the sense of Pareto.
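This definition can be stated directly as code; the following sketch (function names are ours) filters the non-dominated evaluations from a finite set, assuming all criteria are maximized:

```python
# X1 is dominated if some feasible X is at least as good on every criterion
# and strictly better on at least one (maximization convention).
def dominates(a, b):
    """True if evaluation a dominates evaluation b (maximization)."""
    return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Return the non-dominated (efficient) evaluations."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

evals = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (0, 0)]
print(pareto_front(evals))  # (2, 2) and (0, 0) are removed as dominated
```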
I. Sobol and R. Statnikov [2] note that for two criteria F1(x1, x2) and F2(x1, x2), D. Bartel and R. Marks [3] obtained expression (1) for computing the stationary points of the compromise curve:

∂F1(X)/∂x1 + λ·∂F2(X)/∂x1 = 0,
∂F1(X)/∂x2 + λ·∂F2(X)/∂x2 = 0.      (1)
From the analysis of this homogeneous system of equations, the authors of [3] concluded that the gradients of the local criteria must have opposite signs, and for the slope of the compromise curve the following expression is obtained:

dF1/dF2 = –1/λ, where 0 < λ < ∞.      (2)
Note that these results were obtained experimentally, i.e. from a study of the level lines of the local criteria.
It follows from expressions (1) and (2) that the two gradients (vectors) must be linearly dependent, i.e. λ1·∇F1(X) + λ2·∇F2(X) = 0, where λ1 + λ2 = 1 and λ = λ2/λ1.
In [2], expressions (1) and (2) are generalized and used to construct the compromise curve and the Pareto set in the form of parametric equations in the parameter λ. Note that at first multicriteria optimization was not done by mathematicians. With the arrival of mathematicians in this area, a more complex mathematical apparatus came into use, which is not always clear to engineers. This is evident, for example, in the results obtained in [4, 5].
Thus, to find the Pareto solutions one needs to solve the system of equations (1). In some cases system (1) is transformed into a system of linear algebraic equations whose solution consists of parametric equations in the parameter λ.
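For a concrete illustration (our own, using two quadratic criteria that anticipate the worked example later in the paper), the stationary system λ1·∇F1 + λ2·∇F2 = 0 with λ1 + λ2 = 1 can be solved in closed form for each λ2, tracing the compromise curve parametrically:

```python
# Criteria (assumed for illustration): F1 = x1^2 + 4*x2^2,
# F2 = (x1 + 1)^2 + (x2 - 1)^2.
def stationary_point(lam2):
    """Stationary point of λ1·F1 + λ2·F2 for weights (1 - lam2, lam2)."""
    lam1 = 1.0 - lam2
    x1 = -lam2                       # from λ1·2x1 + λ2·2(x1 + 1) = 0
    x2 = lam2 / (4.0 * lam1 + lam2)  # from λ1·8x2 + λ2·2(x2 - 1) = 0
    return x1, x2

# sweeping λ2 over (0, 1) traces the compromise curve point by point
curve = [stationary_point(l / 10.0) for l in range(1, 10)]
print(curve[0], curve[-1])
```

Each returned point satisfies the stationarity system up to round-off, which is easy to verify by substituting it back.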
In this paper the authors give a simple proof of the necessary and sufficient optimality conditions, the understanding of which requires no special mathematical training. As stated above, it must be shown that at efficient points the gradients of the partial criteria are linearly dependent. Recall that a system of vectors a1, a2, …, am is said to be linearly dependent if there exist numbers λ1, λ2, …, λm, at least one of which is different from zero, such that

λ1·a1 + λ2·a2 + … + λm·am = 0.
The linear approximation of the partial criteria is:

F1(X + ΔX) = F1(X) + ∇^T F1(X)·ΔX;
F2(X + ΔX) = F2(X) + ∇^T F2(X)·ΔX;
…
Fm(X + ΔX) = Fm(X) + ∇^T Fm(X)·ΔX.      (3)
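The expansion (3) can be checked numerically; in this sketch (our own) F is a single quadratic criterion and ΔX a small step:

```python
# For small ΔX, F(X + ΔX) ≈ F(X) + ∇F(X)·ΔX; the gap is O(|ΔX|^2).
def F(x1, x2):
    return x1**2 + 4.0 * x2**2  # illustrative criterion

def grad_F(x1, x2):
    return (2.0 * x1, 8.0 * x2)  # its gradient

x = (0.5, -0.25)
dx = (1e-4, 2e-4)  # small step ΔX

exact = F(x[0] + dx[0], x[1] + dx[1])
g = grad_F(*x)
linear = F(*x) + g[0] * dx[0] + g[1] * dx[1]
print(abs(exact - linear))  # second-order small
```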
Fi ( X ) Fi ( X )
F ( X )
,
,..., i
) – gradient criterion Fi, i=1,2, …, m. We reduce
x1
x2
xn
gradients particular criteria in a matrix А:
where the Fi ( X )  (
 F1 ( X )

 x1
A=  F1 ( X )
 x
2

 
 F1 ( X )
 x
n

F2 ( X )
Fm ( X ) 
…

x1
x1 
F2 ( X )
Fm ( X ) 

x2
x2 




F2 ( X )
Fm ( X ) 

xn
xn 
(4)
It is clear that the dimension of the evaluation space is determined by the number of partial criteria, m ≥ 2. Depending on the combination of values of n and m, the set of efficient evaluations is a spatial curve or a surface, which we call the «Pareto front». For example, for n ≥ 2 and m = 2 we obtain a curve in the plane (the boundary of a region); for n ≥ 2 and m = 3, a surface (the boundary surface). Since the partial criteria are differentiable, we can construct a hyperplane tangent to the Pareto front. We note immediately that the dimension of the hyperplane is always less than the dimension of the evaluation space. It is known that «...in a finite vector space generated by n basis vectors, each set of m > n vectors must be linearly dependent» [6]. The number of vectors (gradients) is m, and the maximum dimension of the tangent hyperplane is m–1. Consequently, on the set of efficient evaluations the gradients are always linearly dependent.
Thus, a necessary condition for the existence of efficient solutions is determined by expression (5):

Σ(k=1..m) λk·∇Fk(X) = 0,      (5)
which corresponds to the system of homogeneous equations:

λ1·∂F1(X)/∂x1 + λ2·∂F2(X)/∂x1 + … + λm·∂Fm(X)/∂x1 = Σ(k=1..m) λk·∂Fk(X)/∂x1 = 0,
λ1·∂F1(X)/∂x2 + λ2·∂F2(X)/∂x2 + … + λm·∂Fm(X)/∂x2 = Σ(k=1..m) λk·∂Fk(X)/∂x2 = 0,
…
λ1·∂F1(X)/∂xn + λ2·∂F2(X)/∂xn + … + λm·∂Fm(X)/∂xn = Σ(k=1..m) λk·∂Fk(X)/∂xn = 0.      (6)
For a nontrivial solution of the homogeneous system (6) to exist, it is necessary and sufficient that the rank of A be less than m for n ≥ m, and not more than n for n < m (for m = n this condition means that det A = 0).
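For n = m = 2 this rank condition reduces to det A = 0, which is easy to check numerically. The sketch below (ours) evaluates det A at an efficient point of the example considered later in the paper (criteria F1 = x1² + 4x2², F2 = (x1+1)² + (x2–1)²):

```python
# Efficient point in closed form for weights λ1 = 0.554, λ2 = 0.446,
# chosen so that λ1·∇F1 + λ2·∇F2 = 0 by construction.
lam1, lam2 = 0.554, 0.446
x1 = -lam2
x2 = lam2 / (4.0 * lam1 + lam2)

grad_F1 = (2.0 * x1, 8.0 * x2)                   # ∇F1 for F1 = x1^2 + 4x2^2
grad_F2 = (2.0 * (x1 + 1.0), 2.0 * (x2 - 1.0))   # ∇F2 for F2 = (x1+1)^2 + (x2-1)^2

# determinant of A = [∇F1 | ∇F2]; zero (up to round-off) means the
# gradient columns are linearly dependent, so a nontrivial (λ1, λ2) exists
det_A = grad_F1[0] * grad_F2[1] - grad_F1[1] * grad_F2[0]
print(det_A)  # ~0
```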
We transform expression (6). Since the operations of summation and differentiation can be interchanged, we get:

Σ(k=1..m) λk·∂Fk(X)/∂xi = ∂/∂xi [Σ(k=1..m) λk·Fk(X)] = ∂f(X)/∂xi = 0,  i = 1, 2, …, n,

where f(X) = Σ(k=1..m) λk·Fk(X) is the linear convolution of the individual criteria. Thus, the problem of finding the necessary conditions for the multicriteria problem reduces to finding the necessary conditions for a single-criterion problem with the objective function f(X) = Σ(k=1..m) λk·Fk(X).
The demonstration of this approach is shown in Figs. 1 and 2. Fig. 1 shows two criteria, min F1(X) and min F2(X), X ∈ D, defined on the interval [1; 4]. These criteria are contradictory on x ∈ [1; 2]: one criterion decreases while the other increases (the gradients point in opposite directions). Consequently, the set of efficient solutions is P = [1; 2]. Fig. 2 shows the evaluation space, the criteria gradients, and the set of efficient evaluations, located between the two asterisks.

Fig. 1. The two-criteria optimization problem. Fig. 2. Criterion space and compromise curve
Note that the Pareto-optimal evaluations always lie on the boundary of the evaluation space: on the south-western boundary when minimizing the partial criteria, or on the north-eastern boundary when maximizing (shown in Fig. 2). Therefore the set of efficient evaluations, as noted above, is also called the Pareto front.
Sufficient conditions for optimization. A sufficient condition for a local maximum is negative definiteness of the Hessian Hf at the stationary point. The elements of the matrix Hf are calculated by the formula

hij = ∂²f(X)/∂xi∂xj = Σ(k=1..m) λk·∂²Fk(X)/∂xi∂xj,  i, j = 1, 2, …, n.
For n = m = 2 the Hessian of f(X) has the following form:

       | λ1·∂²F1/∂x1² + λ2·∂²F2/∂x1²         λ1·∂²F1/∂x1∂x2 + λ2·∂²F2/∂x1∂x2 |
Hf =   |                                                                       |
       | λ1·∂²F1/∂x2∂x1 + λ2·∂²F2/∂x2∂x1     λ1·∂²F1/∂x2² + λ2·∂²F2/∂x2²     |
Similar results are shown, for example, in [4, 5].
Multicriteria optimization problems are solved using generalized criteria (for example, an additive criterion) or sequential optimization techniques (for example, the method of the main criterion). We use the results above to study the optimal point obtained by the method of the main criterion.
Consider an example. In the square D = {–1 ≤ x1 ≤ 1, –1 ≤ x2 ≤ 1} two criteria are given:

F1(x1, x2) = x1² + 4x2²,  F2(x1, x2) = (x1 + 1)² + (x2 – 1)²,

which it is desirable to minimize. To solve this problem we use the method of the main criterion. Suppose that criterion F1 is more important than criterion F2, and the value of F2 should be no more than 1. We write the optimization problem:

min F1(x1, x2)

under the constraints:

–1 ≤ x1 ≤ 1,  –1 ≤ x2 ≤ 1,
(x1 + 1)² + (x2 – 1)² ≤ 1.

The solution has the form Xopt = (–0.446; 0.168).
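This value can be cross-checked numerically. The sketch below (ours, not the authors' computation) assumes the nonlinear constraint is active at the optimum, parametrizes that circle by an angle, and scans for the minimum of F1:

```python
import math

# Main criterion F1 = x1^2 + 4*x2^2 restricted to the active constraint
# circle (x1+1)^2 + (x2-1)^2 = 1, parametrized by the angle t.
def F1_on_circle(t):
    x1 = -1.0 + math.cos(t)
    x2 = 1.0 + math.sin(t)
    return x1**2 + 4.0 * x2**2

# fine scan of t over [0, 2π)
ts = [2.0 * math.pi * k / 100000 for k in range(100000)]
t_best = min(ts, key=F1_on_circle)
x_opt = (-1.0 + math.cos(t_best), 1.0 + math.sin(t_best))
print(x_opt)  # close to (-0.446, 0.168)
```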
We show analytically that this solution is efficient. For this we must show that:
1. At the point Xopt the gradients point in opposite directions.
2. The solution satisfies the necessary conditions (we need to find λ1 and λ2; see, for example, rule 5.2 in [7]).
3. The Hessian is positive definite.
We prove this:
1. Calculate the gradients of the partial criteria at the point Xopt and verify that the gradients are parallel and point in opposite directions:

∇F1 = (2x1opt; 8x2opt) = (–0.892; 1.344);
∇F2 = (2(x1opt + 1); 2(x2opt – 1)) = (1.108; –1.664).

The gradients have opposite signs, so the cosine of the angle between them is equal to –1.
2. Substitute Xopt into (5) for n = m = 2. We obtain two equations:

λ1·∂F1(X)/∂x1 + λ2·∂F2(X)/∂x1 = 0 → λ1·x1opt + λ2·(x1opt + 1) = 0,
λ1·∂F1(X)/∂x2 + λ2·∂F2(X)/∂x2 = 0 → 4λ1·x2opt + λ2·(x2opt – 1) = 0.

Since λ1 + λ2 = 1, from the first equation we get λ2 = –x1opt = 0.446 and, consequently, λ1 = 1 – λ2 = 0.554. Thus, for the given optimal value a pair of weights was found for which the necessary condition of optimality holds.
3. We find the Hessian:

H =   | 2(λ1 + λ2)      0            |
      | 0               2(4λ1 + λ2) |

Since the weights are non-negative, the matrix H is positive definite. Consequently, Xopt = (–0.446; 0.168) is a minimum point.
Thus, the optimal point obtained by the method of the main criterion is efficient, at least at the point reached.
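The three checks can also be run numerically; this sketch (ours) uses the rounded Xopt, so the residual of the second stationarity equation is only approximately zero:

```python
x1, x2 = -0.446, 0.168
g1 = (2.0 * x1, 8.0 * x2)                   # ∇F1 at Xopt
g2 = (2.0 * (x1 + 1.0), 2.0 * (x2 - 1.0))   # ∇F2 at Xopt

# 1. gradients anti-parallel: cosine of the angle close to -1
dot = g1[0] * g2[0] + g1[1] * g2[1]
norm = (g1[0]**2 + g1[1]**2) ** 0.5 * (g2[0]**2 + g2[1]**2) ** 0.5
cos_angle = dot / norm
print(cos_angle)  # close to -1

# 2. weights from the necessary condition: λ2 = -x1opt, λ1 = 1 - λ2
lam2 = -x1
lam1 = 1.0 - lam2
residual = (lam1 * g1[0] + lam2 * g2[0],
            lam1 * g1[1] + lam2 * g2[1])  # small, not exactly 0 (rounding)

# 3. Hessian of f = λ1·F1 + λ2·F2 is diagonal with positive entries
H = (2.0 * (lam1 + lam2), 2.0 * (4.0 * lam1 + lam2))
print(H)  # both entries positive -> minimum
```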
Consider the graphical interpretation of this problem. We construct the feasible domain (see Fig. 3). The feasible region D1 is highlighted in yellow. The set of Pareto-optimal solutions P is shown in the figure by red dots. The optimal solution Xopt lies at the intersection of the boundary of D1 and the curve P, i.e. it is Pareto-optimal (see Fig. 3). The figure was produced using MathCad.

Fig. 3. Graphic illustration of the problem
Drawing an analogy with single-criterion optimization problems (the Kuhn-Tucker conditions for a nonlinear programming problem with equality constraints coincide with the first-order optimality conditions of the Lagrange problem [see, e.g., 8, Vol. 1]), we see that the necessary optimality conditions for multicriteria optimization problems coincide with the first-order optimality conditions for the additive optimality criterion f(X) = Σ(i=1..m) λi·Fi(X). If we take the partial derivatives of the function f(X) and equate them to zero, we obtain expression (6). Note that the method of Lagrange multipliers likewise establishes necessary conditions for identifying the optimum point in optimization problems with equality constraints.
Conclusion
We have obtained necessary optimality conditions for multicriteria optimization problems. The set of solutions is given by the system of equations (6). To find necessary and sufficient conditions for multicriteria optimization problems, one must pass to the single-criterion optimization problem with the objective function f(X) = Σ(k=1..m) λk·Fk(X). These results are consistent with the nonlinear optimization problem with equality constraints. This is natural, since one of the ways of solving multicriteria optimization problems is to replace the vector criterion with a scalar criterion, i.e. to pass to a single-criterion optimization problem.
The authors thank Hitesh Nalamwar for his assistance in preparing this article.
References
[1] R.B. Statnikov, I.B. Matusov. Multicriteria Design of Machines. Moscow: Knowledge, 1989. 48 p. (New Life, Science, Technology. Ser. «Mathematics, Cybernetics»; No. 5, p. 3).
[2] I.M. Sobol, R.B. Statnikov. The Choice of Optimal Parameters in Problems with Many Criteria: Textbook for High Schools. Moscow: Drofa, 2006.
[3] D.L. Bartel, R.W. Marks. The Optimum Design of Mechanical Systems With Competing Design Objectives. Journal of Engineering for Industry. Transactions of the ASME. (1974) 171-178.
[4] V.V. Podinovskij, V.D. Nogin. Pareto-Optimal Solutions of Multicriteria Problems. Moscow: Science, 1982.
[5] B.A. Berezovskij, Ju.M. Baryshnikov, V.M. Borzenko, L.M. Kempner. Multi-Criteria Optimization: Mathematical Aspects. Moscow: Science, 1989.
[6] G. Korn, T. Korn. Mathematical Handbook for Scientists and Engineers: Definitions, Theorems and Formulas for Reference and Review. Moscow: Science, 1984, p. 416.
[7] V.V. Rozen. Mathematical Models of Decision-Making in the Economy. Textbook. Moscow: Bookshop «The University», Graduate School, 2002, p. 66.
[8] G. Reklaitis, A. Ravindran, K. Ragsdell. Engineering Optimization: In 2 Books. Book 1. Transl. from English. Moscow: The World, 1986, p. 204.