
A Note on Robustness of the Min-Max Solution to
Multiobjective Linear Programs
Erin K. Doolittle, Karyn Muir, and Margaret M. Wiecek
Department of Mathematical Sciences
Clemson University
Clemson, SC
January 15, 2016
Abstract: The challenge in the use of scalarizing methods in multiobjective optimization results from the choice of the method, which may not be apparent, and, once a method has been selected, from the choice of the values of the scalarizing parameters. In general these values may be unknown, and the decision maker faces the difficult situation of making a choice possibly under a great deal of uncertainty. Due to its effectiveness, the robust optimization approach of Ben-Tal and Nemirovski is applied to resolve the uncertainty carried in
scalarized multiobjective linear programs (MOLPs). A robust counterpart is examined for
six different scalarizations of the MOLP yielding a robust (weakly) efficient solution to the
original MOLP. The study reveals that the min-max optimal solution emerges as a robust
(weakly) efficient solution for five out of the six formulations.
Keywords: robust optimization, robust counterpart, scalarization, achievement function,
conic scalarization, weighted norm, weighted sum, weighted constraint, epsilon constraint,
efficient solutions
1 Introduction
Multiobjective optimization problems (MOPs) model decision making situations in which
several objective functions are optimized under conflict because a feasible solution that is
optimal for all functions simultaneously does not exist. Solving MOPs is understood as
finding their Pareto-optimal solution set or its representation, or a single Pareto point that
is preferred by the decision maker (DM). Solution methods for MOPs can be classified into
scalarizing approaches that reformulate the original MOP into a single-objective optimization problem (SOP) by means of scalarizing parameters, or nonscalarizing approaches that
maintain the vector-valued objective function and use other optimality concepts. Refer to
(Ehrgott and Wiecek, 2005) for an extensive survey on this subject. While the interest in
the development of nonscalarizing methods has been growing and new approaches are being proposed, scalarizing methods have remained the most popular among DMs working in multicriteria settings. The challenge in the use of these methods results from the choice of the method, which may not be apparent, and, once a method has been selected, from the choice of the scalarizing parameters' values. In general these values may be unknown, and the
DM faces a difficult situation of making a choice possibly under a great deal of uncertainty.
For example, choosing weights as scalarizing parameters has been discussed extensively from
a psychological perspective (Eckenrode, 1965) and from an engineering point of view (Marler and Arora, 2010). In any case, a scalarized MOP becomes an uncertain SOP that could
benefit from the independently growing field of optimization under uncertainty.
Decision making under uncertainty has received much attention because of the inaccurate or incomplete data that often appear in real-world applications. To address this challenging issue, several types of methodologies such as stochastic optimization (Schneider and
Kirkpatrick, 2006), reliability-based optimization (Kuo and Zhu, 2012), fuzzy optimization
(Lodwick and Kacprzyk, 2010), uncertain optimization (Liu, 2002), or robust optimization
(Ben-Tal and Nemirovski, 1999, 2000; Ben-Tal et al., 2009) have been developed. The first four types, which employ probability and possibility theories, require knowledge of probability distributions or fuzzy membership functions to model and resolve uncertainty. The latter seems to be less demanding since it makes use of deterministic models and methods, and of simple definitions of finite or infinite uncertainty sets that are assumed to be known.
Due to its effectiveness, robust optimization is applied in this paper to resolve the uncertainty carried in scalarized MOPs. This research direction has already been undertaken for
the weighted-sum method in a setting of forest management (Palma and Nelson, 2010) and in
a broader theoretical context (Hu and Mehrotra, 2012). The uncertain weights are restricted
to a given interval but are allowed to vary throughout the optimization process. We also
recognize that robust optimization has been related to multiobjective optimization in other
ways than what we present in this paper. Multiobjective optimization has first been used as
a tool to solve optimization problems under uncertainty (Kouvelis and Yu, 1997; Köbis and
Tammer, 2012; Klamroth et al., 2013; Iancu and Trichakis, 2014) where every realization
of uncertainty leads to another objective function. More recent studies address MOPs with
uncertain parameters in the objective and/or constraint functions (Kuroiwa and Lee, 2012;
Raith and Kuhn, 2013; Ehrgott et al., 2014; Goberna et al., 2014). Some of these works
adapt the scheme of (Ben-Tal et al., 2009) in multiobjective settings while others go beyond.
Refer to (Goberna et al., 2014) for a recent review on robust multiobjective optimization.
The goal of this paper is to review a collection of those SOPs associated with the multiobjective linear program (MOLP) that are well-established in the literature, and solve them
within the framework of robust optimization. Because an optimal solution to the SOP is typically a (weakly) efficient solution to the MOLP, this approach reveals scalarization-related
robust efficient solutions to the MOLP, which provide information of high interest to DMs.
The paper is organized as follows. In the next section two types of uncertain SOPs are
formulated and preliminary definitions and propositions are given. Section 3, as the main
part of the paper, contains results on robust counterparts to six specific SOPs. The results
are summarized and discussed in Section 4, and the paper is concluded in Section 5.
2 Preliminaries
Consider the following MOLP with linear objective functions,

    min_{x ∈ X} Cx        (1)

where C is a p × n matrix and X ⊂ Rn is a nonempty feasible polyhedral set.
Definition 1. For y^1, y^2 ∈ Rp: (1) y^1 ≤ y^2 if and only if y^1_i ≤ y^2_i, i = 1, · · · , p, with y^1 ≠ y^2; (2) y^1 ≦ y^2 if and only if y^1_i ≤ y^2_i, i = 1, · · · , p.
Suppose, without loss of generality, that Cx ≧ 0 for all x ∈ X. Let c^i x ∈ R denote the i-th component of Cx ∈ Rp, where c^i is the i-th row of the matrix C, i = 1, · · · , p.
Definition 2.
(i) A solution x ∈ X is said to be efficient to MOLP (1) if there does not exist x̄ ∈ X
such that C x̄ ≤ Cx.
(ii) A solution x ∈ X is said to be weakly efficient to MOLP (1) if there does not exist
x̄ ∈ X such that C x̄ < Cx.
Given MOLP (1), we formulate two types of related SOPs. Let s1 and s2 be two scalarizing functions defined as s1(Cx, u) : Rp × Rp → R and s2(Cx) : Rp → R, u ∈ U, with U being a set of scalarizing parameters. Let S_u ⊆ X be a subset of the feasible region of (1).
Then the scalarized optimization problem (SOP) associated with MOLP (1) can be written
in two forms:
    Scalarizing Model 1, SOP1(u):   min_x s1(Cx, u)   s.t.  x ∈ X

    Scalarizing Model 2, SOP2(u):   min_x s2(Cx)   s.t.  x ∈ S_u ⊆ X        (2)
One can immediately observe that a feasible solution to SOP (2) is feasible to MOLP (1); however, the converse may not hold. We therefore make the following assumption.
Assumption 1. If x is a feasible solution to MOLP (1), then there exists a scalarizing
parameter u ∈ U such that x is a feasible solution to SOP2(u).
Based on (Ehrgott and Wiecek, 2005, and references therein), we state the following
general result that applies to a variety of scalarizing functions.
Theorem 1. If x is an optimal solution to SOP(u) (2) for some u ∈ U , then x is a weakly
efficient solution to MOLP (1).
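For a concrete illustration of Theorem 1 (an added sketch, not part of the original development), consider a toy two-objective MOLP scalarized by a weighted sum, a Model 1 scalarizing function treated formally in Subsection 3.1.4. The feasible set, matrix C, and weight vector below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MOLP (assumed data): min (x1, x2) over X = {x in R^2 : x1 + x2 >= 1, 0 <= x <= 1}.
C = np.eye(2)                    # rows are the objective vectors c^1, c^2
A_ub = np.array([[-1.0, -1.0]])  # x1 + x2 >= 1 rewritten as -x1 - x2 <= -1
b_ub = np.array([-1.0])
bounds = [(0, 1), (0, 1)]

# Scalarize with a fixed parameter u = w: SOP(u) with s1(Cx, w) = w^T Cx.
w = np.array([0.3, 0.7])
res = linprog(c=w @ C, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_hat = res.x  # optimal solution of the scalarized problem

# Sanity check of Theorem 1 on a grid: no feasible point strictly
# improves both objectives at once, so x_hat is weakly efficient.
grid = [(a, b) for a in np.linspace(0, 1, 21)
        for b in np.linspace(0, 1, 21) if a + b >= 1]
assert not any(a < x_hat[0] and b < x_hat[1] for a, b in grid)
print(x_hat)  # the vertex putting all slack on the cheaper objective: [1., 0.]
```

Different choices of w trace out different (weakly) efficient vertices, which is precisely the parameter-choice problem addressed in this paper.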
Instead of following the traditional approach of selecting specific values for the parameter u, we assume that the values are uncertain and contained in an uncertainty set U . This
yields an uncertain SOP (USOP) defined as a collection of SOPs, {SOP (u)}u∈U , one for
each realization of the uncertainty parameter u:
    USOP1:  { min_x s1(Cx, u)   s.t.  x ∈ X }_{u ∈ U}

    USOP2:  { min_x s2(Cx)   s.t.  x ∈ S_u }_{u ∈ U}        (3)
A member of this collection for a fixed u ∈ U is referred to as an instance. Given USOP
(3), if we wish to apply the robust optimization methodology of (Ben-Tal and Nemirovski,
1999), we reformulate the USOP into a robust counterpart and refer to it as the robust SOP
(RSOP):
    RSOP1:  min_{x,γ} γ   s.t.  s1(Cx, u) ≤ γ, u ∈ U;  x ∈ X

    RSOP2:  min_x s2(Cx)   s.t.  x ∈ S_u, u ∈ U        (4)
where γ ∈ R is an auxiliary variable. Neither robust counterpart is computationally tractable since each has infinitely many constraints parametrized by u ∈ U . It is then of interest to reformulate the counterparts into deterministic problems that are computationally tractable. Before we accomplish this, following (Ben-Tal and Nemirovski, 1999) we adopt the following concepts.
Definition 3. (i) A feasible solution to RSOP (4) is said to be a robust feasible solution
to USOP (3).
(ii) A feasible and optimal solution to RSOP (4) is said to be a robust optimal solution to
USOP (3).
Note that an optimal solution to RSOP (4) is the feasible solution to this problem that
produces the smallest objective value for all realizations u ∈ U . This smallest objective
value of RSOP (4) is called the robust optimal value of USOP (3).
In RSOP1, the scalarizing function in the inequality constraint gives rise to p inequality constraints of the form s̄_i(c^i x, u_i) ≤ γ, u_i ∈ U_i, i = 1, · · · , p, where U = U_1 × · · · × U_p. In RSOP2, the constraint is replaced with p − 1 constraints of the form x ∈ S_{u_i}, u_i ∈ U_i, i = 1, · · · , p − 1, where U = U_1 × · · · × U_{p−1}. This structure, which will become apparent in the next section of the paper, is needed to maintain the assumption of constraint-wise uncertainty that must be satisfied when the robust optimization approach is applied. As discussed in (Ben-Tal and Nemirovski, 1999), the relationship between USOP (3) and RSOP (4) relies on certain properties of these problems and the uncertainty set. The uncertainty set must be constraint-wise and the boundedness assumption must hold.
Definition 4. (i) The uncertainty is said to be constraint-wise if the uncertainty set U is
the direct product of the “partial” uncertainty sets Ui , U = U1 × U2 × · · · × Uk , where
k = p or p − 1.
(ii) The boundedness condition holds if there exists a convex compact set in Rn which contains the feasible sets of all instances of USOP (3).
These definitions lead to the following results.
Theorem 2 ((Ben-Tal and Nemirovski, 1999), Proposition 2.1). Let the uncertainty be
constraint-wise and the boundedness assumption hold.
(i) RSOP (4) is infeasible if and only if there exists an infeasible instance of USOP (3).
(ii) If RSOP (4) is feasible and x is an optimal solution to RSOP (4), then x is an optimal
solution to at least one instance of USOP (3).
In view of problems (3) and (4) and the connections among them, we relate the feasible
and optimal solutions to RSOP (4) to the feasible and efficient solutions to MOLP (1).
Proposition 1. Let Assumption 1 hold, let the uncertainty be constraint-wise, and let the boundedness assumption hold.
(i) A solution x ∈ X is feasible to RSOP (4) if and only if it is feasible to MOLP (1).
(ii) If x ∈ X is a feasible and optimal solution to RSOP (4) then it is a weakly efficient
solution to MOLP (1).
Proof. (i) A solution x ∈ X is feasible to RSOP (4) if and only if, by Theorem 2(i), x is
feasible to USOP (3) for all u ∈ U . This is equivalent to x being feasible to SOP(u)
(2) for all u ∈ U and, because of Assumption 1, feasible to MOLP (1).
(ii) Let x ∈ X be feasible and optimal to RSOP (4). Then by Theorem 2(ii), x is optimal to USOP (3) for a particular realization ū ∈ U . Thus, x is optimal to SOP(ū) (2) and, by Theorem 1, weakly efficient to MOLP (1).
In the next section we present four scalarization methods using Model 1 and two scalarization methods using Model 2. In all cases we assume a scalarizing parameter is uncertain
and follow the scheme presented above, that is, we formulate an uncertain problem and
derive a deterministic and computationally tractable robust counterpart.
3 Robust counterparts to scalarized MOLPs
In two subsections we study two types of scalarization models, treating them as optimization problems under uncertainty. In the first model the uncertain parameter is in the
objective function, while in the second model it is confined to the constraints. For each
scalarization we first review the relationship between the optimal solution of the scalarized
problem and the (weakly) efficient solutions to MOLP (1). For each case, we then derive
a computationally tractable, deterministic robust counterpart which reveals a relationship
between the optimal solutions to the uncertain problem and the optimal solutions to the
counterpart.
3.1 Model 1
In all of the following SOPs, the uncertain scalarization parameter is the vector of weights, w ∈ Rp. Since weights are typically nonnegative and normalized to sum to 1, we may wish to use the uncertainty set {w ∈ Rp : e^T w = 1, w ≧ 0}, where e ∈ Rp is a vector of ones. However, the normalization violates the constraint-wise uncertainty required by Theorem 2, and we instead let

    U = {w ∈ Rp : 0 ≦ w ≦ 1},        (5)

where U = U_1 × · · · × U_p and U_i = {w_i ∈ R : 0 ≤ w_i ≤ 1} for i = 1, · · · , p.
In some of the presented SOP formulations the weights are required to be nonnegative
while in the others they must be strictly positive in order to guarantee that optimal solutions
to the SOPs are (weakly) efficient to MOLP (1). On the other hand, in all formulations of
the USOPs it is assumed that the uncertain weights are elements of the set U given in (5)
and so may have some components equal to zero. We discuss this situation later in this
subsection. The case when w = 0 is considered trivial because it eliminates the objective
function of the SOP.
3.1.1 Methods using achievement functions
We first consider achievement functions to scalarize MOLP (1). Given a real-valued achievement function s^R : Rp → R, the scalarized problem is given by

    min_{x ∈ X} s^R(Cx).        (6)
The achievement functions must be increasing to lead to efficient solutions.
Definition 5. An achievement function s^R : Rp → R is said to be (1) strictly increasing if for y^1, y^2 ∈ Rp, y^1 < y^2 implies s^R(y^1) < s^R(y^2); (2) strongly increasing if for y^1, y^2 ∈ Rp, y^1 ≤ y^2 implies s^R(y^1) < s^R(y^2).
We consider the strictly increasing function

    s^R(y) = max_{k=1,··· ,p} w_k(y_k − y_k^R)

and the strongly increasing function

    s^R(y) = max_{k=1,··· ,p} w_k(y_k − y_k^R) + ρ Σ_{k=1}^p w_k(y_k − y_k^R),

where w ∈ Rp> is a vector of positive weights, y^R ∈ Rp is a reference point, and ρ > 0 is sufficiently small. In the notation of (2) we have S_u = X and u = w.
Theorem 3. (Wierzbicki, 1986a,b)
(i) Let an achievement function sR be strictly increasing. If x̂ ∈ X is an optimal solution
to problem (6), then x̂ is weakly efficient to MOLP (1).
(ii) Let an achievement function sR be strongly increasing. If x̂ ∈ X is an optimal solution
to problem (6), then x̂ is efficient to MOLP (1).
Note the difference in the specification of the weights, which has been mentioned earlier. In Theorem 3, w ∈ Rp>, while in (5) and Proposition 2, w ∈ Rp≧. If w ∈ Rp≧ is used
to construct the achievement functions, then the objective functions corresponding to the
zero components of the weight vector are eliminated and Theorem 3 is applied to a reduced
MOLP. In this case, an optimal solution to problem (6) is efficient to the reduced MOLP.
However, we can now utilize another result stating that a weakly efficient solution to an
MOP with, say, m < p objective functions is weakly efficient to an MOP with p objective
functions, to which p − m objective functions have been added (Fliege, 2007; Engau and
Wiecek, 2008). We can therefore conclude that using a nonnegative weight in Theorem 3
yields a weakly efficient solution to MOLP (1).
We now formulate the uncertain achievement function problem (7) with the strongly increasing function and derive its computationally tractable, deterministic robust counterpart (8).
Proposition 2. A solution x̄ ∈ X is robust optimal to the uncertain achievement function problem

    { min_{x ∈ X} ( max_{j=1,··· ,p} w_j(c^j x − y_j^R) + ρ Σ_{i=1}^p w_i(c^i x − y_i^R) ) }_{w ∈ U}        (7)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X} max_{j=1,··· ,p} {0, c^j x − y_j^R}.
Proof. Using an auxiliary variable, problem (7) can be rewritten equivalently as

    { min_{x,γ} γ
      s.t.  max_{j=1,··· ,p} w_j(c^j x − y_j^R) + ρ Σ_{i=1}^p w_i(c^i x − y_i^R) ≤ γ,
            x ∈ X }_{w ∈ U}.        (8)
Since the inequality constraint must hold for the maximum value, it must hold for all j = 1, · · · , p, so we write the robust counterpart as

    min_{x,γ} γ
    s.t.  w_j(c^j x − y_j^R) + ρ Σ_{i=1}^p w_i(c^i x − y_i^R) ≤ γ,   w_j ∈ U_j, w_i ∈ U_i, j = 1, · · · , p, i = 1, · · · , p
          x ∈ X.
In the worst-case scenario, this problem becomes

    min_{x,γ} γ
    s.t.  [ max_w  w_j(c^j x − y_j^R) + ρ Σ_{i=1}^p w_i(c^i x − y_i^R)
            s.t.  w_i ≤ 1,  i = 1, · · · , p
                  w_i ≥ 0,  i = 1, · · · , p ]  ≤ γ,   j = 1, · · · , p
          x ∈ X.
Now, making use of linear programming duality theory, we take the dual of the inner maximization problem with dual variables v_{ij} and obtain

    min_{x,γ} γ
    s.t.  [ min_{v_{ij}}  Σ_{i=1}^p v_{ij}
            s.t.  v_{ij} ≥ ρ(c^i x − y_i^R) + 1_{(j=i)}(c^j x − y_j^R),  i = 1, · · · , p
                  v_{ij} ≥ 0,  i = 1, · · · , p ]  ≤ γ,   j = 1, · · · , p
          x ∈ X,
where 1 is the indicator function, meaning

    1_{(j=i)}(c^j x − y_j^R) = c^j x − y_j^R  if j = i,  and 0 otherwise.
Since the optimal value of each of the p subproblems must be less than or equal to γ, we
minimize the maximum of their objective function values
    min_{x ∈ X} max_{j=1,··· ,p} min_{v_{ij}}  Σ_{i=1}^p v_{ij}
    s.t.  v_{ij} ≥ ρ(c^i x − y_i^R) + 1_{(j=i)}(c^j x − y_j^R),  i = 1, · · · , p, j = 1, · · · , p
          v_{ij} ≥ 0,  i = 1, · · · , p, j = 1, · · · , p
and combine the constraints to obtain

    min_{x ∈ X} max_{j=1,··· ,p} min_{v_{ij}}  Σ_{i=1}^p v_{ij}
    s.t.  v_{ij} ≥ max{0, ρ(c^i x − y_i^R) + 1_{(j=i)}(c^j x − y_j^R)},  i = 1, · · · , p, j = 1, · · · , p.
We can now eliminate the v_{ij} variables by moving the p² inequality constraints to the objective function. Thus, the above problem reduces to

    min_{x ∈ X} max_{j=1,··· ,p} { Σ_{i=1}^p max{0, ρ(c^i x − y_i^R) + 1_{(j=i)}(c^j x − y_j^R)} }.
As we maximize over j = 1, · · · , p, the sums differ only in the term c^j x − y_j^R contributed by the indicator function. Therefore, this problem is equivalent to

    min_{x ∈ X} max_{j=1,··· ,p} {0, c^j x − y_j^R}
as desired.
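The min-max counterpart derived above is itself a linear program once the maximum is lifted into an epigraph variable γ: minimize γ subject to γ ≥ 0 and γ ≥ c^j x − y_j^R for all j. A minimal sketch, assuming a toy feasible set X = {x ∈ R² : x₁ + x₂ ≥ 1, 0 ≤ x ≤ 1}, objectives c^j x = x_j, and reference point y^R = (0.2, 0.2), all of which are illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

yR = np.array([0.2, 0.2])  # assumed reference point

# Epigraph LP in (x1, x2, gamma): min gamma
# s.t. x in X, gamma >= x_j - yR_j for j = 1, 2, and gamma >= 0.
c = np.array([0.0, 0.0, 1.0])
A_ub = np.array([
    [-1.0, -1.0,  0.0],   # -x1 - x2 <= -1   (x1 + x2 >= 1)
    [ 1.0,  0.0, -1.0],   #  x1 - gamma <= yR_1
    [ 0.0,  1.0, -1.0],   #  x2 - gamma <= yR_2
])
b_ub = np.array([-1.0, yR[0], yR[1]])
bounds = [(0, 1), (0, 1), (0, None)]  # gamma >= 0 captures the max with 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_opt, gamma = res.x[:2], res.x[2]
print(x_opt, gamma)  # the balanced solution [0.5, 0.5] with value 0.3
```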
Since the strictly increasing function is a special case of the strongly increasing function,
we immediately obtain the following analogous result.
Proposition 3. A solution x̄ ∈ X is robust optimal to the uncertain achievement function problem

    { min_{x ∈ X} max_{i=1,··· ,p} w_i(c^i x − y_i^R) }_{w ∈ U}        (9)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X} max_{i=1,··· ,p} {0, c^i x − y_i^R}.        (10)
We proceed in a similar fashion with other scalarization methods.
3.1.2 Conic method
We examine the conic scalarization problem (Kasimbeyli, 2013). Given a weight parameter w ∈ Rp> and α such that 0 ≤ α ≤ min{w_i, i = 1, · · · , p}, the problem is given by

    min_{x ∈ X} w^T(Cx − y^R) + α‖Cx − y^R‖_1        (11)

where ‖ · ‖_1 denotes the ℓ_1 norm and y^R ∈ Rp is a reference point.
Theorem 4. (Kasimbeyli, 2013) If x̂ ∈ X is an optimal solution to problem (11) then x̂ is
efficient to MOLP (1).
In the notation of (2) we have Su = X and u = w.
Similar to the achievement function method, the conic scalarization requires the weights to be strictly positive. If the weight vector is nonnegative with some zero components, then problem (11) is modified by dropping the corresponding components in both terms of the objective function and, by the same arguments as those given for the scalarization with achievement functions, an optimal solution to the reduced SOP (11) is weakly efficient to MOLP (1).
Choosing α to be uncertain will always result in α = min{wi , i = 1, · · · , p} since that is
the largest and therefore the most conservative value in its range, so we omit that case.
Proposition 4. A solution x̄ ∈ X is robust optimal to the uncertain conic problem

    { min_{x ∈ X} w^T(Cx − y^R) + α Σ_{i=1}^p (c^i x − y_i^R) }_{w ∈ U}        (12)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X} ( p · max_{i=1,··· ,p} {c^i x − y_i^R} + α Σ_{i=1}^p (c^i x − y_i^R) ).        (13)
Proof. We can write (12) equivalently as

    { min_{x ∈ X} Σ_{i=1}^p ( w_i c^i x + α c^i x − 2 y_i^R ) }_{w ∈ U}        (14)
since c^i x > 0. The robust counterpart of (14) is

    min_{x,γ} γ
    s.t.  Σ_{i=1}^p ( w_i c^i x + α c^i x − 2 y_i^R ) ≤ γ,   w_i ∈ U_i, i = 1, · · · , p        (15)
          x ∈ X.
In the worst case this reduces to

    min_{x,γ} γ
    s.t.  [ max_w  Σ_{i=1}^p ( w_i c^i x + α c^i x − 2 y_i^R )
            s.t.  w_i ≤ 1,  i = 1, · · · , p
                  w_i ≥ 0,  i = 1, · · · , p ]  ≤ γ        (16)
          x ∈ X.

This can be written equivalently as
    min_{x,γ} γ
    s.t.  [ Σ_{i=1}^p ( α c^i x − 2 y_i^R ) + max_w  Σ_{i=1}^p w_i c^i x
            s.t.  w_i ≤ 1,  i = 1, · · · , p
                  w_i ≥ 0,  i = 1, · · · , p ]  ≤ γ        (17)
          x ∈ X.
Taking the dual of the above problem gives

    min_{x,γ} γ
    s.t.  [ Σ_{i=1}^p ( α c^i x − 2 y_i^R ) + min_v  Σ_{i=1}^p v_i
            s.t.  v_i ≥ c^i x,  i = 1, · · · , p
                  v_i ≥ 0,  i = 1, · · · , p ]  ≤ γ        (18)
          x ∈ X.
Now we can remove the inner minimization to obtain

    min_{x,v}  Σ_{i=1}^p ( v_i + α c^i x − 2 y_i^R )
    s.t.  v_i ≥ c^i x,  i = 1, · · · , p        (19)
          x ∈ X
which simplifies to

    min_{x ∈ X}  Σ_{i=1}^p ( max_{k=1,··· ,p} {c^k x} + α c^i x − 2 y_i^R )        (20)
or

    min_{x ∈ X}  ( p · max_{i=1,··· ,p} {c^i x − y_i^R} + α Σ_{i=1}^p (c^i x − y_i^R) ).        (21)

3.1.3 Weighted-norm method
We now investigate the traditional weighted-norm formulation. Given a parameter w ∈ Rp≥, the problem is

    min_{x ∈ X}  ‖w ∘ (Cx − r)‖_p        (22)

where ∘ denotes the componentwise product, ‖ · ‖_p indicates the ℓ_p norm with p = 1 or ∞, and r ∈ Rp is a utopia point defined as

    r_i = min_{x ∈ X} c^i x − ε_i,  i = 1, · · · , p

for some ε_i ≥ 0, i = 1, · · · , p. In the notation of (2), S_u = X and u = w.
Theorem 5. (i) (Ehrgott, 2005) Let w ∈ Rp≥. If x̂ ∈ X is a unique optimal solution to the ℓ_1-norm problem (22) then x̂ is efficient to MOLP (1).

(ii) (Choo and Atkins, 1983) Let w ∈ Rp>. If x̂ ∈ X is an optimal solution to the ℓ_∞-norm problem (22) then x̂ is weakly efficient to MOLP (1).
Proposition 5. A solution x̄ ∈ X is robust optimal to the uncertain weighted ℓ_1-norm problem

    { min_{x ∈ X}  Σ_{i=1}^p w_i(c^i x − r_i) }_{w ∈ U}        (23)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X}  p · max_{i=1,··· ,p} {c^i x − r_i}.        (24)
Proof. The robust counterpart of problem (23) is

    min_{x,γ} γ
    s.t.  Σ_{i=1}^p w_i(c^i x − r_i) ≤ γ,   w_i ∈ U_i, i = 1, · · · , p
          x ∈ X.
This, under the worst-case scenario, reduces to

    min_{x,γ} γ
    s.t.  [ max_w  Σ_{i=1}^p w_i(c^i x − r_i)
            s.t.  w_i ≤ 1,  i = 1, · · · , p
                  w_i ≥ 0,  i = 1, · · · , p ]  ≤ γ
          x ∈ X.
Applying duality results for SOPs on the inner maximization yields

    min_{x,γ} γ
    s.t.  [ min_v  Σ_{i=1}^p v_i
            s.t.  v_i ≥ c^i x − r_i,  i = 1, · · · , p
                  v_i ≥ 0,  i = 1, · · · , p ]  ≤ γ        (25)
          x ∈ X
where v ∈ Rp is the dual variable. Problem (25) is equivalent to the following:

    min_{x,γ,v} γ
    s.t.  Σ_{i=1}^p v_i ≤ γ
          v_i ≥ c^i x − r_i,  i = 1, · · · , p
          v_i ≥ 0,  i = 1, · · · , p
          x ∈ X.
Removing the unnecessary variable γ and recognizing that c^i x − r_i ≥ 0 yields

    min_{x,v}  Σ_{i=1}^p v_i
    s.t.  v_i ≥ c^i x − r_i,  i = 1, · · · , p
          x ∈ X

or equivalently problem (24).
Proposition 6. A solution x̄ ∈ X is robust optimal to the uncertain weighted ℓ_∞-norm problem

    { min_{x ∈ X}  max_{i=1,··· ,p} w_i(c^i x − r_i) }_{w ∈ U}        (26)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X}  max_{i=1,··· ,p} {c^i x − r_i}.        (27)
Proof. Using an auxiliary variable, problem (26) can assume the following equivalent form:

    { min_{x,γ} γ
      s.t.  max_{i=1,··· ,p} w_i(c^i x − r_i) ≤ γ,
            x ∈ X }_{w ∈ U}.

This family of problems has the robust counterpart of the form

    min_{x,γ} γ
    s.t.  w_i(c^i x − r_i) ≤ γ,   w_i ∈ U_i, i = 1, · · · , p
          x ∈ X.
This, under the worst-case scenario, reduces to

    min_{x,γ} γ
    s.t.  [ max_{w_i}  w_i(c^i x − r_i)
            s.t.  w_i ≤ 1
                  w_i ≥ 0 ]  ≤ γ,   i = 1, · · · , p
          x ∈ X.
Applying duality results to the inner maximization yields

    min_{x,γ} γ
    s.t.  [ min_{v_i}  v_i
            s.t.  v_i ≥ c^i x − r_i
                  v_i ≥ 0 ]  ≤ γ,   i = 1, · · · , p        (28)
          x ∈ X
where v ∈ Rp is the dual variable. Problem (28) is equivalent to the following:

    min_{x,γ,v} γ
    s.t.  v_i ≤ γ,  i = 1, · · · , p
          v_i ≥ c^i x − r_i,  i = 1, · · · , p
          v_i ≥ 0,  i = 1, · · · , p
          x ∈ X.
Removing the unnecessary variable γ and recognizing that c^i x − r_i ≥ 0 yields

    min_{x,v}  max_{i=1,··· ,p} {v_i}
    s.t.  v_i ≥ c^i x − r_i,  i = 1, · · · , p
          x ∈ X,

or equivalently problem (27).
3.1.4 Weighted-sum method

Given a parameter w ∈ Rp≥, the weighted-sum problem is

    min_{x ∈ X}  w^T Cx.        (29)

Again, in the notation of (2), S_u = X and u = w.
Theorem 6. (Geoffrion, 1968) If x̂ ∈ X is a unique optimal solution to the weighted-sum
problem (29) then x̂ is efficient to MOLP (1).
Proposition 7. A solution x̄ ∈ X is robust optimal to the uncertain weighted-sum problem

    { min_{x ∈ X}  w^T Cx }_{w ∈ U}        (30)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X}  p · max_{i=1,··· ,p} {c^i x}.        (31)

Proof. The proof is omitted because the weighted-sum method is a special case of the weighted ℓ_1-norm method with r = 0.
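Like the other counterparts in this subsection, problem (31) becomes an ordinary LP after the inner maximum is lifted into an auxiliary variable γ with constraints c^i x ≤ γ. A sketch on assumed toy data (two objectives c^i x = x_i over X = {x : x₁ + x₂ ≥ 1, 0 ≤ x ≤ 1}):

```python
import numpy as np
from scipy.optimize import linprog

p = 2
C = np.eye(p)  # assumed toy objectives: c^i x = x_i

# Variables (x1, x2, gamma): min p * gamma  s.t.  c^i x - gamma <= 0, x in X.
c = np.array([0.0, 0.0, float(p)])
A_ub = np.vstack([
    np.array([[-1.0, -1.0, 0.0]]),     # x1 + x2 >= 1
    np.hstack([C, -np.ones((p, 1))]),  # c^i x <= gamma, i = 1, ..., p
])
b_ub = np.zeros(p + 1)
b_ub[0] = -1.0
bounds = [(0, 1), (0, 1), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[:p], p * res.x[2])  # the balanced vertex [0.5, 0.5], value 1.0
```

Note how the weights have disappeared: the worst-case reasoning replaces the uncertain weighted sum by a single deterministic min-max LP.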
3.2 Model 2
We now present the scalarizations that use parameters u ∈ Rp−1 and have additional constraints originating from p − 1 objective functions. The first one makes use of weights while
the other employs right-hand side coefficients.
3.2.1 Weighted-constraint method
The weighted-constraint scalarization problem is given by

    min_{x ∈ X}  w_j c^j x
    s.t.  w_i c^i x ≤ w_j c^j x,  i = 1, · · · , p, i ≠ j.        (32)
In this approach, weights w ∈ Rp> are assigned to each of the p objective functions, and one
weighted objective function is selected to be minimized. Then p − 1 constraints are added
to the problem requiring the remaining weighted objective functions to be less than or equal
to the weighted objective function being minimized.
Theorem 7. (Burachik et al., 2014) A feasible solution x̂ ∈ X is a weakly efficient solution
to MOLP (1) if and only if there exists w ∈ Rp> such that x̂ is an optimal solution to problem
(32) for all j = 1, · · · , p.
Here, the weights w_i, i = 1, · · · , p, i ≠ j, are uncertain, where w_j is the weight associated with the objective function being minimized. We do not allow w_j to be uncertain because the constraint-wise uncertainty would be violated. We therefore define

    U = {w_{−j} ∈ Rp−1_> : w_{−j} = (w_1, · · · , w_{j−1}, w_{j+1}, · · · , w_p)},        (33)

where U = U_1 × · · · × U_{j−1} × U_{j+1} × · · · × U_p and U_i = R_> for i = 1, · · · , p, i ≠ j. In the notation of (2) we have S_u = {x ∈ X : w_i c^i x ≤ w_j c^j x for i = 1, · · · , p, i ≠ j}.
Proposition 8. A solution x̄ ∈ X is robust optimal to the uncertain weighted-constraint problem

    { min_{x ∈ X}  w_j c^j x
      s.t.  w_i c^i x ≤ w_j c^j x,  i = 1, · · · , p, i ≠ j }_{w_i ∈ U_i, i ≠ j, i = 1,··· ,p}        (34)

if and only if it is an optimal solution to the min-max problem

    min_{x ∈ X}  max_{i=1,··· ,p, i ≠ j}  {c^i x}.        (35)
Proof. By the construction of the scalarization, the uncertainty is already restricted to the constraints, so no reorganization of the problem is needed. We therefore begin with

    { min_{x ∈ X}  w_j c^j x
      s.t.  w_i c^i x ≤ w_j c^j x,  i = 1, · · · , p, i ≠ j }_{w_i ∈ U_i, i ≠ j}        (36)

and note that in the worst-case scenario this can be written as

    min_{x ∈ X}  w_j c^j x
    s.t.  [ max_{w_i}  w_i c^i x
            s.t.  w_i ≤ 1
                  w_i ≥ 0 ]  ≤ w_j c^j x,   i = 1, · · · , p, i ≠ j.        (37)
Applying duality results to the inner maximization problem yields

    min_{x ∈ X}  w_j c^j x
    s.t.  [ min_{v_i}  v_i
            s.t.  v_i ≥ c^i x
                  v_i ≥ 0 ]  ≤ w_j c^j x,   i = 1, · · · , p, i ≠ j        (38)

where v_i, i = 1, · · · , p, i ≠ j, is the dual variable. Now we remove the inner minimization to obtain
    min_{x ∈ X, v}  w_j c^j x
    s.t.  v_i ≤ w_j c^j x,  i = 1, · · · , p, i ≠ j
          v_i ≥ c^i x,  i = 1, · · · , p, i ≠ j
          v_i ≥ 0,  i = 1, · · · , p, i ≠ j.        (39)
The inequalities v_i ≤ w_j c^j x and v_i ≥ c^i x imply that c^i x ≤ w_j c^j x, so the dual variables can be removed and the problem can be written equivalently as

    min_{x ∈ X}  w_j c^j x
    s.t.  c^i x ≤ w_j c^j x,  i = 1, · · · , p, i ≠ j        (40)
which can be rewritten as the desired min-max problem

    min_{x ∈ X}  max_{i=1,··· ,p, i ≠ j}  {c^i x}.        (41)

3.2.2 ε-constraint method
Consider the ε-constraint method in which only one objective is minimized, while the others are converted into new constraints. For each j = 1, · · · , p, the ε-constraint method generates an SOP as follows:

    min_{x ∈ X}  c^j x
    s.t.  c^i x ≤ ε_i,  i = 1, · · · , p, i ≠ j,        (42)

where the parameter ε ∈ Rp−1 is chosen so that problem (42) is feasible.
Theorem 8. (Chankong and Haimes, 1983) If x̂ is a unique optimal solution to (42) for
some j ∈ {1, . . . , p} and some ε such that (42) is feasible, then x̂ is efficient to MOLP (1).
We assume that the epsilon is uncertain, that is, we define

    U = {ε ∈ Rp−1 : ε^L ≦ ε ≦ ε^U},        (43)

where U = U_1 × · · · × U_{j−1} × U_{j+1} × · · · × U_p and U_i = {ε_i ∈ R : ε^L_i ≤ ε_i ≤ ε^U_i} for i = 1, · · · , p, i ≠ j. In the notation of (2), S_u = {x ∈ X : c^i x ≤ ε_i, i = 1, · · · , p, i ≠ j}.
Proposition 9. A solution x̄ ∈ X is robust optimal to the uncertain ε-constraint problem

    { min_{x ∈ X}  c^j x
      s.t.  c^i x ≤ ε_i,  i = 1, · · · , p, i ≠ j }_{ε_i ∈ U_i, i ≠ j, i = 1,··· ,p}        (44)

if and only if it is an optimal solution to a related ε-constraint problem of the form

    min_{x ∈ X}  c^j x
    s.t.  c^i x ≤ ε^L_i,  i = 1, · · · , p, i ≠ j.
Proof. The family of problems (44) has the robust counterpart of the form

    min_{x ∈ X}  c^j x
    s.t.  −ε_i ≤ −c^i x,   ε_i ∈ U_i, i = 1, · · · , p, i ≠ j,

which under the worst-case scenario reduces to

    min_{x ∈ X}  c^j x
    s.t.  [ max_{ε_i}  −ε_i
            s.t.  ε_i ≤ ε^U_i
                  ε_i ≥ ε^L_i
                  ε_i ≥ 0 ]  ≤ −c^i x,   i = 1, · · · , p, i ≠ j.        (45)
We observe that the solution to the inner maximization problem is ε_i = ε^L_i. Thus, this reduces to

    min_{x ∈ X}  c^j x
    s.t.  −ε^L_i ≤ −c^i x,  i = 1, · · · , p, i ≠ j,

which is equivalent to

    min_{x ∈ X}  c^j x
    s.t.  c^i x ≤ ε^L_i,  i = 1, · · · , p, i ≠ j        (46)

as desired.
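The counterpart (46) is simply an ε-constraint problem with every uncertain bound fixed at the lower end of its interval, so it can be solved directly as an LP. A sketch on assumed toy data (j = 1, two objectives c^i x = x_i over X = {x : x₁ + x₂ ≥ 1, 0 ≤ x ≤ 1}, and an assumed lower bound ε^L₂ = 0.6):

```python
import numpy as np
from scipy.optimize import linprog

eps_L = 0.6  # assumed lower bound of the uncertainty interval for eps_2

# min c^1 x = x1  s.t.  x in X and c^2 x = x2 <= eps^L_2.
A_ub = np.array([
    [-1.0, -1.0],  # x1 + x2 >= 1
    [ 0.0,  1.0],  # x2 <= eps^L_2
])
b_ub = np.array([-1.0, eps_L])
res = linprog(c=[1.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print(res.x)  # the epsilon constraint is active: [0.4, 0.6]
```

Tightening the ε-bounds to their lower ends is the most conservative choice, which mirrors the worst-case reduction in the proof.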
In the next section we summarize the obtained results.
4 Discussion
Based on Propositions 2 - 9 given in Section 3, we make two general observations that apply
to all presented scalarizations of MOLP (1). First, a solution x̄ ∈ X is robust optimal
to the uncertain scalarized MOLPs (of the form USOP (3)) if and only if it is optimal to
the corresponding deterministic robust counterparts that are derived in these propositions.
Second, all these robust counterpart problems assume a certain form of the scalarized problem
SOP(ū) where ū is a particular realization of the uncertain parameter u. We therefore
recognize this realization in the following definition.
Definition 6. The uncertainty realization ū of the uncertainty parameter u that yields the
robust optimal solution to USOP (3) is called the robust realization.
For the four scalarization methods in Subsection 3.1, u ∈ Rp and all deterministic robust counterparts assume a similar final form. In particular, for y^R = r = 0 and α = 0, all counterparts reduce to

    min_{x ∈ X}  max_{i=1,··· ,p}  c^i x,        (47)
which is the well-known min-max problem. Recall that an optimal solution to the min-max
problem associated with MOLP (1) is weakly efficient to the MOLP (Ehrgott, 2005). The corresponding common robust realization of the uncertain parameter, ū = [0, · · · , 0, 1, 0, · · · , 0]^T, has all components equal to 0, with a 1 in the component corresponding to the largest objective value c^i x among all objectives i = 1, · · · , p, for all feasible solutions of MOLP (1).
For the two scalarization methods in Subsection 3.2, u ∈ Rp−1 . The deterministic robust
counterpart for the uncertain weighted-constraint problem is a min-max problem on p − 1
objective functions, and, based on the discussion in Subsection 3.1.1, its optimal solution
is weakly efficient to MOLP (1). The accompanying robust realization of the uncertain
parameter is analogous to that provided by the first four methods. The deterministic robust
counterpart for the uncertain ε-constraint problem assumes the form of the ε-constraint
problem with the robust realization of the uncertain parameter equal to the lower bound on
the uncertainty set, ū = εL .
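As a concrete sketch of this last counterpart, assume a hypothetical two-objective instance (the data are ours, not from the paper) with interval uncertainty ε ∈ [ε_L, ε_U] = [2.5, 3.0]. The robust counterpart is then simply the ε-constraint LP solved at the lower bound ū = ε_L:

```python
# Robust counterpart of the epsilon-constraint scalarization: keep objective 1,
# bound objective 2 by the LOWER endpoint eps_L of a hypothetical
# uncertainty interval [eps_L, eps_U] = [2.5, 3.0].
import numpy as np
from scipy.optimize import linprog

c1 = np.array([1.0, 3.0])   # objective kept in the LP objective
c2 = np.array([3.0, 1.0])   # objective moved into a constraint
eps_L = 2.5                 # robust realization u_bar = eps_L

# min c1 x  s.t.  c2 x <= eps_L,  x_1 + x_2 = 1,  x >= 0
res = linprog(c1,
              A_ub=c2.reshape(1, -1), b_ub=[eps_L],
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0, None)] * 2)
print(res.x, res.fun)   # x = [0.75, 0.25], objective value 1.5
```

Unlike the other five scalarizations, this counterpart retains the ε-constraint form rather than reducing to the min-max problem (47), and its optimal solution depends on the bound ε_L.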
As a result, all robust counterparts we have presented assume a scalarized form of MOLP
(1), SOP(ū), that is associated with ū, the robust realization of the uncertain parameter.
They all act in accordance with Proposition 1; that is, their optimal solutions are weakly
efficient to MOLP (1). While this proposition was already established in Section 2, before the
scalarizations were studied in detail, our work in Section 3 revealed the computationally
tractable robust counterparts and the actual values of the robust realizations ū.
An optimal solution to SOP(ū) obtained with the robust realization of uncertainty warrants
a notion of its own.
Definition 7. An optimal solution to SOP(ū) obtained for the robust realization of uncertainty is called a robust weakly efficient solution to MOLP (1).
We close this discussion with the following corollary.
Corollary 1. An optimal solution to the min-max problem (47) associated with MOLP (1)
is a robust weakly efficient solution to MOLP (1).
5 Conclusion
Using the framework of single-objective robust optimization of (Ben-Tal et al., 2009), we
have studied six scalarizations of the MOLP making the scalarizing parameters uncertain.
For the first five scalarizations, an optimal solution to the min-max problem associated with
the MOLP emerges as a robust weakly efficient solution to the MOLP. This leads to two
conclusions. First, from the robustness point of view, these scalarizations are equivalent to
each other, and their choice should be guided by considerations such as numerical convenience
rather than by decision-making implications. Second, the min-max optimal solution to the
MOLP exhibits unusual strength and significance for decision making in the presence of
multiple criteria. Recall that the min-max optimal solution is also an equitable solution to
the MOLP (Singh, 2007), which makes this solution even more special.
There are clear directions for continuing the work presented in this paper. Uncertain
scalarized MOLPs could be examined in the context of other notions of robustness. They
could also be studied more generally, making use of other approaches to uncertainty. All
such studies should provide further insight into decision making with multiple criteria.
References
A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Operations
Research Letters, 25(1):1–13, 1999.
A. Ben-Tal and A. Nemirovski. Robust solutions of linear programming problems contaminated with uncertain data. Mathematical Programming, 88(3):411–424, 2000.
A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University
Press, Princeton, 2009.
R.S. Burachik, C.Y. Kaya, and M.M. Rizvi. A new scalarization technique to approximate
Pareto fronts of problems with disconnected feasible sets. Journal of Optimization Theory
and Applications, 162(1):428–446, 2014.
V. Chankong and Y.Y. Haimes. Multiobjective Decision Making: Theory and Methodology.
Elsevier Science Publishing Company, New York, 1983.
E. Choo and D. Atkins. Proper efficiency in nonconvex multicriteria programming. Mathematics of Operations Research, 8(3):467–470, 1983.
R.T. Eckenrode. Weighting multiple criteria. Management Science, 12(3):180–192, 1965.
M. Ehrgott. Multicriteria Optimization. Springer, New York, 2005.
M. Ehrgott and M.M. Wiecek. Multiobjective Programming. In J. Figueira, S. Greco,
and M. Ehrgott, editors, Multiple Criteria Decision Analysis: State of the Art Surveys,
volume 78, pages 667–722. Springer Science + Business Media, New York, 2005.
M. Ehrgott, J. Ide, and A. Schöbel. Minmax robustness for multi-objective optimization
problems. European Journal of Operational Research, 239:17–31, 2014.
A. Engau and M.M. Wiecek. Interactive coordination of objective decompositions in multiobjective programming. Management Science, 54(7):1350–1363, 2008.
J. Fliege. The effects of adding objectives to an optimisation problem on the solution set.
Operations Research Letters, 35(6):782–790, 2007.
A. Geoffrion. Proper efficiency and the theory of vector maximization. Journal of Mathematical Analysis and Applications, 22(3):618–630, 1968.
M.A. Goberna, V. Jeyakumar, G. Li, and J. Vicente-Pérez. Robust solutions to multiobjective linear programs with uncertain data. European Journal of Operational Research,
242:730–743, 2014.
J. Hu and S. Mehrotra. Robust and stochastically weighted multiobjective optimization
models and reformulations. Operations Research, 60(4):936–953, 2012.
D.A. Iancu and N. Trichakis. Pareto efficiency in robust optimization. Management Science,
60(4):130–147, 2014.
R. Kasimbeyli. A conic scalarization method in multi-objective optimization. Journal of
Global Optimization, 56:279–297, 2013.
K. Klamroth, E. Köbis, A. Schöbel, and C. Tammer. A unified approach for different
concepts of robustness and stochastic programming via non-linear scalarizing functionals.
Optimization, 62(5):649–671, 2013.
E. Köbis and C. Tammer. Relations between strictly robust optimization problems and a
nonlinear scalarization method. Report 01, Martin-Luther-Universität Halle-Wittenberg,
Germany, 2012. http://www2.mathematik.uni-halle.de/institut/reports/ (Accessed May
10, 2015).
P. Kouvelis and G. Yu. Robust Discrete Optimization and Its Applications. Springer, 1997.
W. Kuo and X. Zhu. Importance Measures in Reliability, Risk, and Optimization: Principles
and Applications. John Wiley & Sons, 2012.
D. Kuroiwa and G.M. Lee. On robust multiobjective optimization. Vietnam Journal of
Mathematics, 40(2–3):305–317, 2012.
B. Liu. Theory and Practice of Uncertain Programming. Springer, Berlin, 2002.
W.A. Lodwick and J. Kacprzyk, editors. Fuzzy Optimization: Recent Advances and Applications. Studies in Fuzziness and Soft Computing. Springer, 2010.
R.T. Marler and J.S. Arora. The weighted sum method for multi-objective optimization:
new insights. Structural and Multidisciplinary Optimization, 41(6):853–862, 2010.
C.D. Palma and J.D. Nelson. Bi-objective multi-period planning with uncertain weights: a
robust optimization approach. European Journal of Forest Research, 129(6):1081–1091,
2010.
A. Raith and K. Kuhn. Solving robust bicriteria shortest path problems. 2013. Presented
at 22nd International Conference on Multiple Criteria Decision Making; Málaga, Spain.
J. Schneider and S. Kirkpatrick. Stochastic Optimization. Springer, 2006.
V.K. Singh. Equitable Efficiency in Multicriteria Optimization. PhD thesis, Clemson University, Clemson, SC, 2007.
A.P. Wierzbicki. A methodological approach to comparing parametric characterizations of
efficient solutions. In G. Fandel, M. Grauer, A. Kurzhanski, and A.P. Wierzbicki, editors,
Lecture Notes in Economics and Mathematical Systems, volume 273, pages 27–45. Springer-Verlag,
1986a.
A.P. Wierzbicki. On the completeness and constructiveness of parametric characterizations
to vector optimization problems. OR Spectrum, 8(2):73–87, 1986b.