
Approximating the Nondominated Set of an MOLP
by Approximately Solving its Dual Problem
Lizhen Shao
Department of Engineering Science
The University of Auckland, New Zealand
email: [email protected]
Matthias Ehrgott
Department of Engineering Science
The University of Auckland, New Zealand
email: m.ehrgott
and
Laboratoire d’Informatique de Nantes Atlantique
Université de Nantes, France
email: [email protected]
August 10, 2007
Abstract
The geometric duality theory of Heyde and Löhne (2006) defines a dual to a
multiple objective linear programme (MOLP). In objective space, the primal
problem can be solved by Benson’s outer approximation method (Benson,
1998a,b) while the dual problem can be solved by a dual variant of Benson’s
algorithm (Ehrgott et al., 2007). Duality theory then assures that it is possible
to find the nondominated set of the primal MOLP by solving its dual.
In this paper, we propose an algorithm to solve the dual MOLP approximately, but within a specified tolerance. This approximate solution set can be
used to calculate an approximation of the nondominated set of the primal.
We show that this set is an ε-nondominated set of the original primal MOLP
and provide numerical evidence that this approach can be faster than solving
the primal MOLP approximately.
1 Introduction
The goal of multiple objective optimization is to simultaneously minimize p ≥ 2 noncomparable objectives. Because the objectives are usually conflicting in nature, a feasible solution optimizing all the objectives simultaneously does not exist. Therefore,
the goal of multiobjective optimization is to obtain the nondominated set. For a
multiple objective programme (MOP), a nondominated point in objective space corresponds to an efficient solution in decision space and an efficient solution is defined
as a solution for which an improvement in one objective will always lead to a deterioration in at least one of the other objectives. The set of all nondominated points
forms the nondominated set in objective space. It conveys the trade-off information
to a decision maker (DM) we assume to prefer less to more in each objective.
In this paper, we are concerned with multiobjective linear programming problems. Researchers have developed a variety of methods for generating the efficient
set or the nondominated set such as multiobjective simplex methods, interior point
methods, and objective space methods. Multiobjective simplex methods and interior point methods work in variable space to find the efficient set, see the references
in Ehrgott and Wiecek (2005). Since the number of objectives of an MOLP is often
much smaller than the number of variables and typically many efficient solutions
in decision space are mapped to a single nondominated point in objective space,
Benson (1998a) argues that generating the nondominated set should require less
computation than generating the efficient set. Moreover, it is reasonable to assume
that a decision maker will choose a solution based on the objective values rather
than variable values. Therefore, finding the nondominated set in objective space
instead of the efficient set in decision space is more important for the DM. Benson
has proposed an outer approximation method (Benson, 1998a) to find the extended
feasible set in objective space of an MOLP. Furthermore, for an MOLP, geometric
duality theory (Heyde and Löhne, 2006) specifies a dual MOLP and establishes that
the extended primal and dual objective set are related to each other. Thus if the
extended objective set of the dual MOLP is known, then the extended objective
set of the primal MOLP can be obtained, and vice versa. Based on the geometric
duality theory, Ehrgott et al. (2007) develop a dual variant of Benson’s algorithm
to obtain the feasible set in objective space of the dual MOLP.
Although it is theoretically possible to identify the complete nondominated set
with the methods mentioned above, finding an exact description of this set often
turns out to be practically impossible or at least computationally too expensive (see
the examples in Shao and Ehrgott (2006)). Therefore, many researchers focus on
approximating the nondominated set, see Ruzika and Wiecek (2005) for a survey.
In the literature, the concept of ε-nondominated points has been suggested as a
means to account for modeling limitations or computational inaccuracies.
In Shao and Ehrgott (2006), we have proposed an approximation version of
Benson’s algorithm to sandwich the extended feasible set of an MOLP with an outer
approximation and an inner approximation. The nondominated set of the inner
approximation is proved to be a set of ε-nondominated points. In this paper, we
propose to solve the dual MOLP approximately, then to calculate a corresponding
polyhedral set in objective space of the primal MOLP using a coupling function.
The nondominated subset of this polyhedral set can be proved to be a set of ε-nondominated points of the original primal MOLP.
Finally, we apply this approximate method to solve the beam intensity optimization problem of radiotherapy treatment planning. Three clinical cases were used
and the results are compared with those obtained by the approximation version of
Benson’s algorithm which directly approximates the truncated extended feasible set
of the primal MOLP, an algorithm developed in Shao and Ehrgott (2006).
2 Preliminaries
In this paper we use the notation y^1 ≤ y^2 to indicate y^1_k ≤ y^2_k for all k = 1, . . . , p but y^1 ≠ y^2, for y^1, y^2 ∈ R^p, whereas y^1 < y^2 means y^1_k < y^2_k for all k = 1, . . . , p. The k-th unit
vector in Rp is denoted by ek and a vector of all ones is denoted by e. Given a
mapping f : Rn → Rp and a subset X ⊆ Rn we write f (X ) := {f (x) : x ∈ X }.
Let A ⊆ R^p. We denote the interior and relative interior of A by int A and ri A, respectively.
Let C ⊆ R^p be a closed convex cone. An element y ∈ A is called C-minimal if ({y} − C \ {0}) ∩ A = ∅ and C-maximal if ({y} + C \ {0}) ∩ A = ∅. A point y ∈ A is called weakly C-minimal (weakly C-maximal) if ({y} − ri C) ∩ A = ∅ (({y} + ri C) ∩ A = ∅). We set

wmin_C A := {y ∈ A : ({y} − ri C) ∩ A = ∅}  and  wmax_C A := wmin_(−C) A.

In this paper we consider two special ordering cones, namely C = R^p_≧ = {x ∈ R^p : x_k ≥ 0, k = 1, . . . , p} and

C = K := R_≧ e^p = {y ∈ R^p : y_1 = · · · = y_{p−1} = 0, y_p ≥ 0}.
For the choice C = R^p_≧, the set of weakly R^p_≧-minimal elements of A (also called the set of weakly nondominated points of A) is given by

wmin_{R^p_≧} A := {y ∈ A : ({y} − int R^p_≧) ∩ A = ∅}.
In case of C = K the set of K-maximal elements of A is given by
maxK A := {y ∈ A : ({y} + K \ {0}) ∩ A = ∅} .
Note that ri K = K \ {0} so that weakly K-maximal and K-maximal elements of A
coincide.
Let us recall some facts concerning the facial structure of polyhedral sets (Webster, 1994). Let A ⊆ Rp be a convex set. A convex subset F ⊆ A is called a face of
A if for all y 1 , y 2 ∈ A and α ∈ (0, 1) such that αy 1 + (1 − α)y 2 ∈ F it holds that
y^1, y^2 ∈ F. A face F of A is called proper if ∅ ≠ F ≠ A. A point y ∈ A is called an
extreme point of A if {y} is a face of A. The extreme points are also called vertices.
The set of all vertices of a polyhedron A is denoted by vert A.
A polyhedral convex set A is defined by A = {y ∈ R^p : By ≥ β}, where B ∈ R^{m×p} and β ∈ R^m. A polyhedral set A has a finite number of faces. A subset F of A is a face if and only if there are λ ∈ R^p and γ ∈ R such that A ⊆ {y ∈ R^p : λ^T y ≥ γ} and F = {y ∈ R^p : λ^T y = γ} ∩ A. Moreover, F is a proper face if and only if H := {y ∈ R^p : λ^T y = γ} is a supporting hyperplane to A with F = A ∩ H. We call a hyperplane H = {y ∈ R^p : λ^T y = γ} supporting if λ^T y ≥ γ for all y ∈ A and there is some y^0 ∈ A such that λ^T y^0 = γ. The proper (r − 1)-dimensional faces of an r-dimensional polyhedral set A are called facets of A.
A recession direction of A is a vector d ∈ R^p such that y + αd ∈ A for some y ∈ A and all α ≥ 0. The recession cone (or asymptotic cone) A∞ of A is the set of all recession directions,

A∞ := {d ∈ R^p : y + αd ∈ A for some y ∈ A and all α ≥ 0}.

A recession direction d ≠ 0 is called extreme if there are no recession directions d^1, d^2 ≠ 0 with d^1 ≠ αd^2 for all α > 0 such that d = (1/2)(d^1 + d^2).
A polyhedral convex set A can be represented by both a finite set of inequalities
and the set of all extreme points and extreme directions of A (Rockafellar, 1972,
Theorem 18.5). Let E = {y 1 , . . . , y r , d1 , . . . , dt } be the set of all extreme points and
extreme directions of A then
A = {y ∈ R^p : y = ∑_{i=1}^{r} α_i y^i + ∑_{j=1}^{t} ν_j d^j with α_i ≥ 0, ν_j ≥ 0, and ∑_{i=1}^{r} α_i = 1}.

3 MOLP Duality Theory
In this paper we consider multiple objective linear programming problems of the
form
min{P x : x ∈ X }.
(1)
We assume that X in (1) is a nonempty feasible set in decision space R^n defined by X = {x ∈ R^n : Ax ≥ b}. We have A ∈ R^{m×n}, b ∈ R^m and P ∈ R^{p×n}. The
feasible set Y in objective space Rp is defined by
Y = {P x : x ∈ X }.
(2)
It is well known that the image Y of a nonempty polyhedron X under a linear map P is also a nonempty polyhedron of dimension dim Y ≤ p.
Definition 3.1 A feasible solution x̂ ∈ X is an efficient solution of problem (1)
if there exists no x ∈ X such that P x ≤ P x̂. The set of all efficient solutions of
problem (1) will be denoted by XE and called the efficient set in decision space.
Correspondingly, ŷ = P x̂ is called a nondominated point and YN = {P x : x ∈ XE }
is the nondominated set in objective space of problem (1).
Definition 3.2 A feasible solution x̂ ∈ X is called weakly efficient if there is no
x ∈ X such that P x < P x̂. The set of all weakly efficient solutions of problem
(1) will be denoted by XW E and called the weakly efficient set in decision space.
Correspondingly, the point ŷ = P x̂ is called weakly nondominated point and YW N =
{P x : x ∈ XW E } is the weakly nondominated set in objective space of problem (1).
The primal and dual pair of this MOLP formulated in Heyde and Löhne (2006) are

(P)  wmin_{R^p_≧} P(X),  X := {x ∈ R^n : Ax ≥ b},

(D)  max_K D(U),  U := {(u, λ) ∈ R^m × R^p : (u, λ) ≥ 0, A^T u = P^T λ, e^T λ = 1},

where K := {y ∈ R^p : y_1 = y_2 = · · · = y_{p−1} = 0, y_p ≥ 0} as defined before and D : R^{m+p} → R^p is given by

D(u, λ) := (λ_1, . . . , λ_{p−1}, b^T u)^T =
[ 0    I_{p−1}  0 ] [ u ]
[ b^T  0        0 ] [ λ ].
The primal problem (P) consists in finding the weakly nondominated points
of P (X ), the dual problem consists in finding the K-maximal elements of D(U).
We use the extended polyhedral image sets P := P (X ) + Rp of problem (P) and
D := D(U) − K of problem (D). It is known that the Rp -minimal (nondominated)
points of P and P (X ) as well as the K-maximal elements of D and D(U) coincide.
Geometric duality theory (Heyde and Löhne, 2006) states that there is an inclusion reversing one-to-one map Ψ between the set of all proper K-maximal faces
of D and the set of all proper Rp -minimal faces of P. The map Ψ is based on the
two set-valued maps

H : R^p ⇒ R^p,   H(v) := {y ∈ R^p : ϕ(y, v) = 0},
H* : R^p ⇒ R^p,  H*(y) := {v ∈ R^p : ϕ(y, v) = 0},

where ϕ(y, v) := ∑_{i=1}^{p−1} y_i v_i + y_p (1 − ∑_{i=1}^{p−1} v_i) − v_p is the coupling function. Let

λ(v) := (v_1, . . . , v_{p−1}, 1 − ∑_{i=1}^{p−1} v_i)^T  and  λ*(y) := (y_1 − y_p, . . . , y_{p−1} − y_p, −1)^T,

then

H(v) = {y ∈ R^p : λ(v)^T y = v_p}  and  H*(y) = {v ∈ R^p : λ*(y)^T v = −y_p}.
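The two linear representations of the coupling function can be checked mechanically. The following sketch (Python with exact rational arithmetic; the point y and dual point v are arbitrary illustrative values, not taken from the paper) verifies that ϕ(y, v) equals both λ(v)^T y − v_p and λ*(y)^T v + y_p, which is exactly why H(v) and H*(y) are hyperplanes:

```python
from fractions import Fraction as F

p = 3  # dimension of the objective space in this illustration

def phi(y, v):
    # coupling function phi(y, v)
    return sum(y[i] * v[i] for i in range(p - 1)) \
        + y[p - 1] * (1 - sum(v[:p - 1])) - v[p - 1]

def lam(v):        # lambda(v)
    return list(v[:p - 1]) + [1 - sum(v[:p - 1])]

def lam_star(y):   # lambda*(y)
    return [y[i] - y[p - 1] for i in range(p - 1)] + [-1]

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

y = (F(3), F(1), F(2))          # illustrative point in objective space
v = (F(1, 4), F(1, 2), F(1))    # illustrative dual point

# phi(y, .) = 0 is linear in v for fixed y, and phi(., v) is linear in y:
assert phi(y, v) == dot(lam(v), y) - v[p - 1]
assert phi(y, v) == dot(lam_star(y), v) + y[p - 1]
```

The same identities hold for any y, v since both sides are the same affine expression expanded in two different ways.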
Theorem 3.3 (Heyde and Löhne (2006)) Let F be a proper weakly nondominated face of P and F* be a proper K-maximal face of D, and let

Ψ(F*) := ⋂_{v ∈ F*} H(v) ∩ P.

Ψ is an inclusion reversing one-to-one map between the set of all proper K-maximal faces of D and the set of all proper weakly nondominated faces of P, and the inverse map is given by

Ψ^{−1}(F) = ⋂_{y ∈ F} H*(y) ∩ D.

Moreover, for every proper K-maximal face F* of D it holds that dim F* + dim Ψ(F*) = p − 1.
Geometric duality theory gives us the following results.
Corollary 3.4 (Heyde and Löhne (2006))

1. If v is a K-maximal vertex of D, then H(v) ∩ P is a weakly nondominated (p − 1)-dimensional facet of P, and vice versa.
2. If F is a weakly nondominated (p − 1)-dimensional facet of P, there is some
uniquely defined point v ∈ Rp such that F = H(v) ∩ P.
3. If y is a weakly nondominated vertex of P, then H ∗ (y) ∩ D is a K-maximal
(p − 1)-dimensional facet of D, and vice versa.
4. If F ∗ is a K-maximal (p − 1)-dimensional facet of D, there is some uniquely
defined point y ∈ Rp such that F ∗ = H ∗ (y) ∩ D.
The following pair of dual linear programming problems are useful for the proof
of the geometric duality theory and they also play a key role in the algorithm of
solving the dual problem to be introduced in the next section.
(P1(v))  min_{x ∈ X} λ(v)^T P x,  X := {x ∈ R^n : Ax ≥ b}

(D1(v))  max_{u ∈ T(v)} b^T u,  T(v) := {u ∈ R^m : u ≥ 0, A^T u = P^T λ(v)}

4 Solving the Primal and Dual MOLP
Assuming X is bounded, Benson proposes an outer approximation algorithm to
solve MOLP (1) in objective space (Benson, 1998a,b). Benson defines
Ȳ = {y ∈ R^p : P x ≤ y ≤ ŷ for some x ∈ X },   (3)

where ŷ ∈ R^p is chosen to satisfy ŷ > y^AI. The vector y^AI ∈ R^p is called the anti-ideal point for problem (1) and is defined as

y_k^AI = max{y_k : y ∈ Y}, k = 1, . . . , p.   (4)
According to the following theorem, instead of directly finding Y, Benson’s outer approximation algorithm is dedicated to finding Ȳ.

Theorem 4.1 (Benson (1998a,b)) The set Ȳ ⊆ R^p is a nonempty, bounded polyhedron of dimension p, and Y_N = Ȳ_N.
Benson’s “outer approximation” algorithm can be illustrated as follows. First, a cover that contains Ȳ is constructed and an interior point of Ȳ is found. Then, for each vertex of the cover, the algorithm checks whether it is in Ȳ or not. If not, the vertex and the interior point are connected to form a line segment and the unique boundary point of Ȳ on the line segment is found. A cut (supporting hyperplane) of Ȳ containing the boundary point is constructed and the cover is updated. The procedure repeats until all the vertices of the cover are in Ȳ. As a result, when the algorithm terminates, the cover is actually equal to Ȳ. In addition, to improve computation time, some improvements have been proposed in Shao and Ehrgott (2006).
In Ehrgott et al. (2007), we have extended Benson’s algorithm to MOLPs which are R^p_≧-bounded. The algorithm finds P instead of Ȳ. Notice that P is an unbounded extension of Ȳ, as P = Y + R^p_≧ and Ȳ = (ŷ − R^p_≧) ∩ P. P, Ȳ and Y have the same nondominated set. In the same paper we also propose a dual variant of Benson’s algorithm to solve the dual MOLP.
We show the dual variant of Benson’s algorithm and how to obtain the nondominated facets of P from D. The algorithm is very similar to Benson’s algorithm.
Instead of Y , it finds D. In the course of the algorithm, supporting hyperplanes
of D are constructed. The following proposition is the basis for finding supporting
hyperplanes.
Proposition 4.2 (Ehrgott et al. (2007)) Let v̄ ∈ maxK D, then for every solution x̄ of (P1 (v̄)), H ∗ (P x̄) is a supporting hyperplane of D with v̄ ∈ H ∗ (P x̄).
The algorithm is shown in Algorithm 1.
Algorithm 1 (Dual variant of Benson’s algorithm)

Initialization: Choose some d̂ ∈ int D. Compute an optimal solution x^0 of (P1(d̂)). Set S^0 = {v ∈ R^p : λ(v) ≥ 0, ϕ(P x^0, v) ≥ 0} and k = 1.

Iteration k:
Step k1. If vert S^{k−1} ⊆ D, stop; otherwise choose a vertex s^k of S^{k−1} such that s^k ∉ D.
Step k2. Compute α^k ∈ (0, 1) such that v^k := α^k s^k + (1 − α^k) d̂ ∈ max_K D.
Step k3. Compute an optimal solution x^k of (P1(v^k)).
Step k4. Set S^k := S^{k−1} ∩ {v ∈ R^p : ϕ(P x^k, v) ≥ 0}.
Step k5. Set k := k + 1 and go to Step k1.
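To make the flow of Algorithm 1 concrete, the following sketch runs its outer-approximation loop on the data of Example 4.10 below (p = 2). Two simplifications are made, so this is an illustration and not the algorithm itself: the LPs (P1(v)) are replaced by a hypothetical oracle solve_P1 that minimizes λ(v)^T y over the known vertices of P (the real algorithm solves the LP over X directly), and the boundary point of Step k2 is found by vertical projection instead of along the segment to d̂. For p = 2 the vertices of S^k are simply the breakpoints of the lower envelope of the cut lines:

```python
from fractions import Fraction as F

# Known vertices of P for Example 4.10; the oracle below stands in for
# the LP (P1(v)), which the real algorithm would solve over X directly.
P_VERTS = [(F(0), F(4)), (F(1), F(2)), (F(2), F(1)), (F(4), F(0))]

def solve_P1(v1):
    # oracle: argmin of lambda(v)^T y = v1*y1 + (1 - v1)*y2 over Y
    return min(P_VERTS, key=lambda y: v1 * y[0] + (1 - v1) * y[1])

def envelope_vertices(lines):
    # For p = 2, S = {0 <= v1 <= 1, v2 <= a*v1 + b for every cut (a, b)};
    # its upper-boundary vertices are the endpoints v1 = 0, 1 plus the
    # breakpoints of the lower envelope of the cut lines.
    def env(x):
        return min(a * x + b for a, b in lines)
    xs = {F(0), F(1)}
    for i, (a1, b1) in enumerate(lines):
        for a2, b2 in lines[i + 1:]:
            if a1 != a2:
                x = (b2 - b1) / (a1 - a2)
                if 0 <= x <= 1 and a1 * x + b1 == env(x):
                    xs.add(x)
    return sorted((x, env(x)) for x in xs)

def dual_benson():
    y = solve_P1(F(1, 2))             # initialization (d-hat taken at v1 = 1/2)
    lines = [(y[0] - y[1], y[1])]     # cut phi(Px,v) >= 0: v2 <= (y1 - y2)v1 + y2
    while True:
        verts = envelope_vertices(lines)
        def f(v1):                    # upper boundary of D at v1 (LP duality)
            return min(v1 * q[0] + (1 - v1) * q[1] for q in P_VERTS)
        outside = [v for v in verts if v[1] > f(v[0])]   # Step k1
        if not outside:
            return verts              # = vert D (Proposition 4.3 (1))
        y = solve_P1(outside[0][0])   # Steps k2/k3, vertical projection
        lines.append((y[0] - y[1], y[1]))
```

On this data, dual_benson() returns the five vertices (0, 0), (1/3, 4/3), (1/2, 3/2), (2/3, 4/3), (1, 0) of D, matching Example 4.10.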
In Step k4, we use the on-line vertex enumeration algorithm of Chen and Hansen (1991) to calculate the vertices of S^k = S^{k−1} ∩ {v ∈ R^p : ϕ(P x^k, v) ≥ 0}. This algorithm needs to remember the adjacency list of each vertex of S^k. The adjacency list of a vertex v includes its adjacent vertices and its adjacent cutting planes. We say that a supporting hyperplane H*(P x^k) = {v ∈ R^p : ϕ(P x^k, v) = 0} is adjacent to a vertex s^k if the vertex is on the hyperplane, i.e., ϕ(P x^k, s^k) = 0. We say a cut is degenerate if the supporting hyperplane does not support D in a facet. The principle of the on-line vertex enumeration algorithm is to find the vertex sets of S^k on both sides of the supporting hyperplane H*(P x^k) = {v ∈ R^p : ϕ(P x^k, v) = 0} and then to use the adjacency lists of vertices to identify all edges of S^k intersecting H*(P x^k). The corresponding intersection points are computed and the adjacency lists updated.
When the algorithm terminates, we have the following Proposition (see Ehrgott
et al. (2007) for a proof).
Proposition 4.3 (1) The set of K-maximal vertices of D is vert S^{k−1}.
(2) The set {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S^{k−1}} is a nondegenerate inequality representation of P.
(3) All R^p_≧-minimal (nondominated) vertices of P are contained in the set W := {P x^0, P x^1, . . . , P x^{k−1}}.
(4) The set {v ∈ R^p : λ(v) ≥ 0, ϕ(y, v) ≥ 0 for all y ∈ W} is a (possibly degenerate) inequality representation of D. “Possibly degenerate” means that this inequality representation may include redundant inequalities (a redundant inequality is produced during the iteration if a supporting hyperplane supports a face but not a facet of D).
(1) and (4) in Proposition 4.3 give the vertex set and the inequality representation of D while (2) gives the inequality representation of P and (3) gives a set
of nondominated points of P which includes all the vertices of P. Proposition 4.3
suggests that we can get both P and D when the algorithm terminates.
At each iteration k, the hyperplane H*(P x^k) = {v ∈ R^p : ϕ(P x^k, v) = 0} is constructed so that it cuts off a portion of S^{k−1} containing s^k; thus S^0 ⊇ S^1 ⊇ S^2 ⊇ . . . ⊇ S^{k−2} ⊇ S^{k−1} = D.
Geometric duality theory establishes a relationship between P and D. P has the property that P = P + R^p_≧, while D has the property that D = D − K and the projection of D to its first p − 1 components is the polytope {t ∈ R^{p−1} : t ≥ 0, ∑_{i=1}^{p−1} t_i ≤ 1}. Now we are going to extend the geometric duality theorem to two special polyhedral convex sets which have the same properties as P and D, respectively.

Property 4.4 In this paper, we consider special convex polyhedral sets S ⊆ R^p with the property that S = S − K and the projection of S to its first p − 1 components is the polytope {t ∈ R^{p−1} : t ≥ 0, ∑_{i=1}^{p−1} t_i ≤ 1}.
Lemma 4.5 For S with Property 4.4, S∞ = −K.
Proof. The proof is part of the proof of Proposition 5.3 in Ehrgott et al. (2007).
Definition 4.6 For a polyhedral convex set S ⊆ R^p with Property 4.4, we define D(S) = {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S}, where ϕ(y, v) = ∑_{i=1}^{p−1} y_i v_i + y_p (1 − ∑_{i=1}^{p−1} v_i) − v_p.
Proposition 4.7 Let S ⊆ R^p with Property 4.4. Then D(S) = D(S) + R^p_≧.

Proof. It is obvious that D(S) ⊆ D(S) + R^p_≧, therefore we only need to show D(S) ⊇ D(S) + R^p_≧. Let w + d ∈ D(S) + R^p_≧ with w ∈ D(S) and d ∈ R^p_≧. Since w ∈ D(S), we have ϕ(w, v) ≥ 0 for all v ∈ vert S. Moreover, ϕ(w + d, v) = ϕ(w, v) + ⟨d, λ(v)⟩, where λ(v) = (v_1, . . . , v_{p−1}, 1 − ∑_{i=1}^{p−1} v_i)^T. Since ϕ(w, v) ≥ 0 and ⟨d, λ(v)⟩ ≥ 0 because both d and λ(v) are in R^p_≧, we have ϕ(w + d, v) ≥ 0 for all v ∈ vert S and hence w + d ∈ D(S).
Corollary 4.8 For S ⊆ Rp with Property 4.4, Theorem 3.3 holds for D = S and
P = D(S).
Proposition 4.9 Let S 1 and S 0 be polyhedral convex sets with Property 4.4 and
S 1 ⊆ S 0 , then D(S 1 ) ⊇ D(S 0 ).
Proof. By Lemma 4.5 we have S^1_∞ = S^0_∞ = −K. This means that S^0 and S^1 have only one extreme direction d = −e^p = (0, . . . , 0, −1)^T.

Suppose S^0 has r vertices v^1, . . . , v^r. We need to show that for y ∈ D(S^0), i.e., ϕ(y, v^i) ≥ 0 for i = 1, . . . , r, it holds that y ∈ D(S^1).

Let v* be a vertex of S^1; then v* ∈ S^0 as S^1 ⊆ S^0. Therefore, v* can be expressed as v* = ∑_{i=1}^{r} α_i v^i + ν d with α_i ≥ 0, ν ≥ 0, ∑_{i=1}^{r} α_i = 1, and d = −e^p. We calculate ϕ(y, v*) as follows:

ϕ(y, v*) = ∑_{k=1}^{p−1} y_k v*_k + y_p (1 − ∑_{k=1}^{p−1} v*_k) − v*_p
         = ∑_{k=1}^{p−1} y_k (∑_{i=1}^{r} α_i v^i_k + ν d_k) + y_p (1 − ∑_{k=1}^{p−1} (∑_{i=1}^{r} α_i v^i_k + ν d_k)) − ∑_{i=1}^{r} α_i v^i_p − ν d_p
         = ∑_{i=1}^{r} α_i ϕ(y, v^i) + ∑_{k=1}^{p−1} y_k ν d_k − y_p ∑_{k=1}^{p−1} ν d_k − ν d_p
         = ∑_{i=1}^{r} α_i ϕ(y, v^i) + ν,

where the last equality uses d_k = 0 for k = 1, . . . , p − 1 and d_p = −1. Since ϕ(y, v^i) ≥ 0, α_i ≥ 0 and ν ≥ 0, we have ϕ(y, v*) ≥ 0. This means that any y ∈ D(S^0) is also contained in D(S^1), which proves D(S^0) ⊆ D(S^1).
For the dual variant of Benson’s algorithm, Proposition 4.9 indicates that D(S^k) enlarges with each iteration, and when the algorithm terminates at iteration k, D(S^{k−1}) = D(D) = P. We give an example to illustrate the dual variant of Benson’s algorithm and show how S^k and D(S^k) change with iteration k.
Example 4.10 Consider the MOLP min{P x : Ax ≥ b}, where

P = [ 1 0 ; 0 1 ],  A = [ 2 1 ; 1 1 ; 1 2 ; 1 0 ; 0 1 ],  b = (4, 3, 4, 0, 0)^T.

P and D are shown in Fig. 1. The five vertices of D are (0, 0), (1/3, 4/3), (1/2, 3/2), (2/3, 4/3) and (1, 0). Their corresponding facets (supporting hyperplanes) of P are y_2 = 0, y_1 + 2y_2 = 4, y_1 + y_2 = 3, 2y_1 + y_2 = 4 and y_1 = 0. The four vertices of P, (0, 4), (1, 2), (2, 1) and (4, 0), correspond to the facets (supporting hyperplanes) of D given by 4v_1 + v_2 = 4, v_1 + v_2 = 2, −v_1 + v_2 = 1 and −4v_1 + v_2 = 0, respectively.
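The vertex–facet correspondence just listed can be verified directly from the coupling function: ϕ(y, v) ≥ 0 for every vertex pair, with equality exactly when y lies on the facet of P associated with the vertex v of D. A small sketch in exact arithmetic, using the vertex lists stated above:

```python
from fractions import Fraction as F

D_VERTS = [(F(0), F(0)), (F(1, 3), F(4, 3)), (F(1, 2), F(3, 2)),
           (F(2, 3), F(4, 3)), (F(1), F(0))]
P_VERTS = [(F(0), F(4)), (F(1), F(2)), (F(2), F(1)), (F(4), F(0))]

def phi(y, v):
    # coupling function for p = 2
    return y[0] * v[0] + y[1] * (1 - v[0]) - v[1]

# P = {y : phi(y, v) >= 0 for all v in vert D}, so every pair is >= 0 ...
assert all(phi(y, v) >= 0 for y in P_VERTS for v in D_VERTS)

# ... with equality exactly on the facet geometric duality associates
# with the vertex v of D
incidences = {v: [y for y in P_VERTS if phi(y, v) == 0] for v in D_VERTS}
assert incidences[(F(1, 2), F(3, 2))] == [(1, 2), (2, 1)]  # facet y1 + y2 = 3
assert incidences[(F(0), F(0))] == [(4, 0)]                # facet y2 = 0
```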
Figure 1: P and D for Example 4.10.
Fig. 2 shows the change of S^k with each iteration k. As can be seen, S^k becomes smaller and smaller until at termination it is the same as D. The vertices of the initial cover S^0 are (0, 3/2) and (1, 3/2). The first hyperplane cuts off vertex (0, 3/2); the vertices of S^1 are (1, 3/2), (0, 0), and (3/8, 3/2). The second hyperplane cuts off vertex (1, 3/2), thus the vertices of S^2 are (0, 0), (3/8, 3/2), (5/8, 3/2) and (1, 0). The third hyperplane cuts off vertex (3/8, 3/2), thus the vertices of S^3 are (0, 0), (1/3, 4/3), (1/2, 3/2), (5/8, 3/2) and (1, 0). The fourth hyperplane cuts off vertex (5/8, 3/2) and the vertices of S^4 are (0, 0), (1/3, 4/3), (1/2, 3/2), (2/3, 4/3) and (1, 0). After the fourth cut, we have S^4 = D. Therefore, the vertices of S^4 are also the vertices of D.
Figure 2: The reduction of S^k with iteration k.
The change of D(S^k) after each iteration k can be seen in Fig. 3. The calculation of D(S^k) follows the definition D(S^k) = {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S^k}. For example, D(S^0) = {y ∈ R^2 : ϕ(y, v) ≥ 0 for v = (0, 3/2), (1, 3/2)}, i.e., D(S^0) = {y : y_1 ≥ 3/2} ∩ {y : y_2 ≥ 3/2}. In contrast to the reduction of S^k, D(S^k) enlarges with iteration k. When the dual variant of Benson’s algorithm terminates, S^4 = D and D(S^4) = D(D) = P.
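The membership test behind D(S^k) is just finitely many evaluations of ϕ. A sketch of the computation for S^0 and S^4 from this example (exact arithmetic), illustrating that D(S^0) ⊆ D(S^4):

```python
from fractions import Fraction as F

def phi(y, v):
    # coupling function for p = 2
    return y[0] * v[0] + y[1] * (1 - v[0]) - v[1]

def in_DS(y, verts):
    # membership test for D(S), S given by its vertex list
    return all(phi(y, v) >= 0 for v in verts)

S0 = [(F(0), F(3, 2)), (F(1), F(3, 2))]
S4 = [(F(0), F(0)), (F(1, 3), F(4, 3)), (F(1, 2), F(3, 2)),
      (F(2, 3), F(4, 3)), (F(1), F(0))]        # = vert D

assert in_DS((F(3, 2), F(3, 2)), S0)   # in D(S^0) = {y1 >= 3/2, y2 >= 3/2}
assert not in_DS((1, 2), S0)           # excluded by vertex (1, 3/2)
assert in_DS((1, 2), S4)               # but contained in D(S^4) = P
```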
With Proposition 4.9 the process of “outer approximation” of D can be interpreted as a process of “inner approximation” of P.
4.1 Obtaining the Nondominated Facets of P from D
As seen in Proposition 4.3 (2), a vertex v = (v_1, v_2, . . . , v_p) of D corresponds to a supporting hyperplane of P which supports P in a weakly nondominated facet. The hyperplane is λ(v)^T y = v_p, where λ(v) = (v_1, . . . , v_{p−1}, 1 − ∑_{i=1}^{p−1} v_i)^T and λ(v) ≥ 0.

Figure 3: The enlarging of D(S^k) with iteration k.
If λ(v) > 0, then the supporting hyperplane supports P in a nondominated facet
instead of a weakly nondominated facet. We call v an inner vertex of D if λ(v) > 0,
otherwise, we call it a boundary vertex of D. To calculate the nondominated facets
of P, we only need to consider inner vertices of D.
Now we are going to show how to calculate the nondominated facet F which
corresponds to an inner vertex v of D.
When the algorithm terminates, Proposition 4.3 (1) gives us the vertex set of D and (4) gives us the inequality representation of D. The inequality representation of D is D = {v ∈ R^p : λ(v) ≥ 0, ϕ(y, v) ≥ 0 for all y ∈ W}, where W = {P x^0, P x^1, . . . , P x^{k−1}}.
As we mentioned before, at Step k4, we use the on-line vertex enumeration
algorithm of Chen and Hansen (1991). Therefore, we have the adjacency list for
each vertex of D. The adjacency list of a vertex includes the adjacent vertices
and the adjacent supporting hyperplanes of D. The set of adjacent supporting hyperplanes of an inner vertex is a subset of the collection of hyperplanes {v ∈ R^p : ϕ(y, v) = 0}, y ∈ W.
For an inner vertex of D, if all its adjacent supporting hyperplanes (cuts) are nondegenerate, i.e., each cut supports D in a facet, then we can find all the points in P which correspond to the cuts of D by geometric duality theory. These points are the vertices of F. If not all of its adjacent cuts support D in facets, i.e., some cuts are degenerate, then we need to use the following result.
Proposition 4.11 Let inner vertex v of D correspond to nondominated facet F of P. Suppose v has a degenerate adjacent cut (supporting hyperplane); then the cut corresponds to a nondominated point p ∈ F, but p is not a vertex of F.

Proof. Suppose vertex v has k nondegenerate adjacent cuts (supporting hyperplanes). The k nondegenerate cuts correspond to the k vertices of F. The degenerate cut can be expressed as a linear combination of the k nondegenerate cuts. Correspondingly, the point that the degenerate cut corresponds to can be expressed by the same linear combination of the k vertices. Therefore, we have p ∈ F.
10
v2
y2
2
5
4
3
P
1
2
D
1
0
0
0
1
0
v1
1
2
3
4
5
y1
Figure 4: Vertex ( 12 , 32 ) and its degener- Figure 5: P and the point corresponding
to a degenerate cut
ate cut.
Proposition 4.11 shows that the points of P which correspond to the adjacent cuts of a vertex of D are on the same facet of P. Moreover, all nondegenerate cuts of D correspond to the vertices of the facet of P, and all degenerate cuts correspond to points which are not vertices of the facet. Therefore, we can use the convex hull of the points to find the facet.
We show an example with a degenerate cut.
Example 4.12 For Example 4.10, vertex (1/2, 3/2) of D is adjacent to three cuts: v_1 + v_2 = 2, −v_1 + v_2 = 1 and v_2 = 3/2. Among them, v_2 = 3/2 is a degenerate cut because it only supports D in the vertex (1/2, 3/2) instead of in a facet. This can be seen in Fig. 4.

Vertex (1/2, 3/2) of D corresponds to the line segment between (1, 2) and (2, 1), a facet F of P, as shown in Fig. 5. Cut v_1 + v_2 = 2 corresponds to (1, 2) and cut −v_1 + v_2 = 1 corresponds to (2, 1), while v_2 = 3/2 corresponds to (3/2, 3/2), a nondominated point of P. The two points (1, 2) and (2, 1) are the vertices of F, while the point (3/2, 3/2) is in F but is not a vertex of F.

The degenerate cut v_2 = 3/2 can be obtained by simply adding the two adjacent nondegenerate cuts together with equal weights, i.e., 1/2 (v_1 + v_2) + 1/2 (−v_1 + v_2) = 1/2 × 2 + 1/2 × 1, or v_2 = 3/2. On the other hand, (3/2, 3/2) can also be obtained by simply adding the two vertices (1, 2) and (2, 1) together with the same equal weights, i.e., 1/2 × (1, 2) + 1/2 × (2, 1) = (3/2, 3/2).
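The equal-weight combination in Example 4.12 is plain arithmetic and can be checked mechanically (cuts written as illustrative coefficient triples (a, b, c) for a v_1 + b v_2 = c):

```python
from fractions import Fraction as F

# the two nondegenerate cuts adjacent to vertex (1/2, 3/2) of D
cut1 = (1, 1, 2)     # v1 + v2 = 2
cut2 = (-1, 1, 1)    # -v1 + v2 = 1
mix = tuple(F(s + t, 2) for s, t in zip(cut1, cut2))
assert mix == (0, 1, F(3, 2))            # the degenerate cut v2 = 3/2

# the same equal weights applied to the facet's vertices give the point
point = tuple(F(s + t, 2) for s, t in zip((1, 2), (2, 1)))
assert point == (F(3, 2), F(3, 2))       # nondominated, but not a vertex of F
```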
5 Solving the Dual MOLP Approximately
If the nondominated set of an MOLP is “curved”, which means that P has very many facets and very many vertices, then D also has very many vertices and facets according to Corollary 3.4. Therefore, whether we solve the primal problem with Benson’s algorithm or the dual problem with the dual variant of Benson’s algorithm, computation time may be a problem.

To improve the computation time, an approximation version of Benson’s algorithm has been proposed in Shao and Ehrgott (2006). For a vertex of S^k not in Ȳ, but with Euclidean distance less than a tolerance ε (ε ∈ R, ε > 0) to the boundary point of Ȳ on the line segment between the vertex and the interior point, we omit constructing the cut and remember the vertex as a point of the outer approximation and the boundary point as a point of the inner approximation. Finally, we have an outer approximation Ȳ^o of Ȳ and an inner approximation Ȳ^i of Ȳ to sandwich Ȳ, i.e., Ȳ^o ⊇ Ȳ ⊇ Ȳ^i. Moreover, each vertex of Ȳ^o has a corresponding point of Ȳ^i, and the Euclidean distance between the two points is less than the tolerance ε. As a result of the approximation, all the points of Ȳ^i calculated are on the boundary of Ȳ, and the weakly nondominated set of Ȳ^i has been proved to be a set of weakly εe-nondominated points of Ȳ.
In this section, we propose approximately solving the dual MOLP while controlling the approximation error, to obtain an approximate extended feasible set D^o in objective space, and then finding the polyhedral set D(D^o). D(D^o) is an inner approximation of P, the original extended primal feasible set. Finally, we will show that the nondominated set of D(D^o) is actually a set of εe-nondominated points of the original MOLP.
Our approximate dual variant of Benson’s algorithm is identical to Algorithm 1 except for Step k1. Let ε ∈ R, ε > 0 be a tolerance; then the changes are as follows.

Step k1. If v ∈ D + εe^p is satisfied for each v ∈ vert S^{k−1}, then the outer approximation of D, denoted by D^o, is equal to S^{k−1}; stop. Otherwise, choose any v^k ∈ vert S^{k−1} such that v^k ∉ D + εe^p and continue.

To check v ∈ D + εe^p, we solve (D1(v)) and obtain its optimal objective value f. If v_p − f ≤ ε, then v ∈ D + εe^p.
Since D^o ⊇ D, Proposition 4.9 implies D(D^o) ⊆ D(D) = P. This means that D(D^o) is an inner approximation of P. We use P^i to denote D(D^o).
We illustrate the algorithm by using the same example as Example 4.10.
Example 5.1 In Example 4.10, let us set ε = 3/20. After two cuts there are two vertices of S^2 (v^1 = (3/8, 3/2) and v^2 = (5/8, 3/2)) outside D. The boundary points of D which have the same v_1-value as v^1 and v^2 are bd^1 = (3/8, 11/8) and bd^2 = (5/8, 11/8), respectively. Both the Euclidean distance between v^1 and bd^1 and the Euclidean distance between v^2 and bd^2 are equal to 1/8. We accept these two infeasible points v^1, v^2 for the outer approximation of D because the distances to their corresponding boundary points are less than ε, i.e., v^1, v^2 ∈ D + εe^p. When the algorithm terminates, the total number of iterations k is equal to 3 and S^2 = D^o.
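The acceptance test of Example 5.1 can be reproduced with the modified Step k1, assuming (as an illustration device) that the optimal value f of (D1(v)) is obtained by LP duality as the minimum of λ(v)^T y over the vertices of P rather than by solving the LP itself:

```python
from fractions import Fraction as F

# Vertices of P from Example 4.10.
P_VERTS = [(F(0), F(4)), (F(1), F(2)), (F(2), F(1)), (F(4), F(0))]

def dual_bound(v1):
    # optimal value f of (D1(v)); by LP duality it equals the optimal
    # value of (P1(v)), evaluated here over vert P for illustration
    return min(v1 * y[0] + (1 - v1) * y[1] for y in P_VERTS)

EPS = F(3, 20)
# the two vertices of S^2 lying outside D
OUTSIDE = [(F(3, 8), F(3, 2)), (F(5, 8), F(3, 2))]

for v in OUTSIDE:
    f = dual_bound(v[0])
    assert v[1] - f == F(1, 8)   # distance to the boundary point bd
    assert v[1] - f <= EPS       # accepted: v lies in D + eps * e^p
```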
Figs. 6 and 7 show Do and its corresponding polyhedral set P i = D(Do ).
Figure 6: D^o.

Figure 7: P^i = D(D^o).
We are going to evaluate the approximation quality of P^i as an inner approximation of P. Let us first recall the definition of ε-efficient solutions.
Definition 5.2 (Loridan (1984)) Consider the MOLP (1) and let ε ∈ Rp .
1. A feasible solution x̂ ∈ X is called an ε-efficient solution of (1) if there does
not exist x ∈ X such that P x ≤ P x̂ − ε. Correspondingly, ŷ = P x̂ is called
an ε-nondominated point in objective space;
2. A feasible solution x̂ ∈ X is called a weakly ε-efficient solution if there does
not exist x ∈ X such that P x < P x̂ − ε. Correspondingly, ŷ = P x̂ is called a
weakly ε-nondominated point in objective space.
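Definition 5.2 translates into a simple componentwise test. The following sketch checks ε-dominance against a finite representative set of points (here the vertices of P from Example 4.10 and a vertex of P^u from Example 5.4; eps_dominated is a hypothetical helper, not from the paper):

```python
from fractions import Fraction as F

def eps_dominated(yhat, Y, eps):
    # True if some y in the finite set Y satisfies y <= yhat - eps*e
    # componentwise with y != yhat - eps*e (Definition 5.2, part 1)
    target = tuple(c - eps for c in yhat)
    return any(all(yi <= ti for yi, ti in zip(y, target))
               and tuple(y) != target for y in Y)

# vertices of P from Example 4.10
P_VERTS = [(F(0), F(4)), (F(1), F(2)), (F(2), F(1)), (F(4), F(0))]
eps = F(3, 20)

# a vertex of P^u from Example 5.4 is not eps-dominated by any vertex of P
assert not eps_dominated((F(3, 20), 4 + F(3, 20)), P_VERTS, eps)
# a point well above the nondominated set is eps-dominated
assert eps_dominated((3, 3), P_VERTS, eps)
```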
Now we proceed to show that the weakly nondominated set of P^i is actually a set of weakly εe-nondominated points of P.

First, let us move D along e^p by ε; then we get D^u = D + εe^p. When the approximate algorithm terminates, the set of K-maximal points of D^o lies between the set of K-maximal points of D and the set of K-maximal points of D^u, i.e., D ⊆ D^o ⊆ D^u.

We can calculate P = D(D) and P^u = D(D^u). According to Proposition 4.9, we have P ⊇ P^i ⊇ P^u. Therefore, the set of weakly nondominated points (R^p_≧-minimal points) of P^i = D(D^o) lies between the set of weakly nondominated points (R^p_≧-minimal points) of P and the set of weakly nondominated points of P^u.
Theorem 5.3 Suppose the dual approximation error is . Let ε = e, then the
nondominated set of P i is a set of ε-nondominated points of P.
Proof. First we show that the nondominated set of P^u is a set of εe-nondominated points of P.
According to Corollary 4.8, Theorem 3.3 applies with D^u in place of D and P^u in place of P; thus
we can use duality theory to find P^u.
Let v = (v_1, v_2, . . . , v_p) be a vertex of D. Then there is a corresponding vertex v^u
of D^u with v^u = v + εe^p. Suppose vertex v of D corresponds to the facet λ(v)^T y = v_p of P,
where λ(v) = (v_1, . . . , v_{p−1}, 1 − Σ_{i=1}^{p−1} v_i)^T, and suppose vertex v^u = (v_1^u, v_2^u, . . . , v_p^u)
of D^u corresponds to the facet λ(v^u)^T y = v_p^u of P^u, where λ(v^u) = (v_1^u, . . . , v_{p−1}^u, 1 − Σ_{i=1}^{p−1} v_i^u)^T.
As v^u = v + εe^p, we have λ(v^u) = λ(v). Therefore, the facet λ(v)^T y = v_p
of P is parallel to the facet λ(v^u)^T y = v_p^u of P^u.
Let F be a facet of D. Then there is a corresponding facet F^u of D^u, and F^u is
parallel to F. Suppose F corresponds to vertex y = (y_1, y_2, . . . , y_p) of P; then the
equation of the hyperplane spanned by F is (y_p − y_1)v_1 + (y_p − y_2)v_2 + . . . + (y_p −
y_{p−1})v_{p−1} + v_p = y_p. Suppose F^u corresponds to vertex y^u = (y_1^u, y_2^u, . . . , y_p^u) of P^u;
then the equation of the hyperplane spanned by F^u is (y_p^u − y_1^u)v_1 + (y_p^u − y_2^u)v_2 +
. . . + (y_p^u − y_{p−1}^u)v_{p−1} + v_p = y_p^u. F^u and F are parallel, and F^u is obtained by
moving F along e^p by ε; therefore y_p^u = y_p + ε and y_p^u − y_i^u = y_p − y_i for
i = 1, . . . , p − 1, i.e., y^u = y + εe.
In summary, P^u can be obtained by moving every point y of P to y + εe. Therefore,
the nondominated set of P^u is a set of εe-nondominated points of P.
Moreover, as P^u ⊆ P^i ⊆ P, the nondominated set of P^i is also a set of
εe-nondominated points of P. □
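For p = 2 the vertex-facet correspondence used in the proof can be made concrete: λ(v) = (v_1, 1 − v_1), so the facet of P spanned by two adjacent nondominated vertices determines the dual vertex v of D. The sketch below is our own illustration (the vertex data are those of Example 5.4; all function names are ours) and recovers vertices of D from consecutive vertices of P with exact rational arithmetic.

```python
from fractions import Fraction

def dual_vertex(ya, yb):
    """For p = 2, find the vertex v = (v1, v2) of D whose facet
    v1*y1 + (1 - v1)*y2 = v2 passes through both vertices ya and yb of P."""
    ya = tuple(map(Fraction, ya))
    yb = tuple(map(Fraction, yb))
    d1, d2 = ya[0] - yb[0], ya[1] - yb[1]
    v1 = d2 / (d2 - d1)                  # solve v1*d1 + (1 - v1)*d2 = 0
    v2 = v1 * ya[0] + (1 - v1) * ya[1]   # right-hand side of the facet equation
    return v1, v2

# Consecutive nondominated vertices of P from Example 5.4
P_vertices = [(0, 4), (1, 2), (2, 1), (4, 0)]
D_vertices = [dual_vertex(a, b) for a, b in zip(P_vertices, P_vertices[1:])]
# D_vertices == [(2/3, 4/3), (1/2, 3/2), (1/3, 4/3)] as exact Fractions
```

Each recovered v lies in the (v_1, v_2) box shown in the figures for D, and its facet equation reproduces the corresponding edge of P.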
Example 5.4 As in Example 4.10, ε = 3/20. Fig. 8 and Fig. 9 show D, D^u, P and
P^u. The vertices of P are (0, 4), (1, 2), (2, 1) and (4, 0). The corresponding vertices
of P^u are (3/20, 4 + 3/20), (1 + 3/20, 2 + 3/20), (2 + 3/20, 1 + 3/20) and
(4 + 3/20, 3/20). The nondominated set of P^u is a set of εe-nondominated points
of P with ε = 3/20.
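The translation argument of Theorem 5.3 can be verified numerically on Example 5.4: shifting each vertex of P by εe yields points that are εe-nondominated with respect to the vertex set of P. This small check is our own code, not part of the paper, and uses exact rational arithmetic.

```python
from fractions import Fraction

eps = Fraction(3, 20)
P_vertices = [(Fraction(0), Fraction(4)), (Fraction(1), Fraction(2)),
              (Fraction(2), Fraction(1)), (Fraction(4), Fraction(0))]

def eps_dominated(y_hat, points, eps):
    """True if some point of `points` is <= y_hat - eps*e componentwise,
    with at least one strict inequality."""
    target = tuple(c - eps for c in y_hat)
    return any(all(a <= b for a, b in zip(y, target))
               and any(a < b for a, b in zip(y, target))
               for y in points)

# Shift every vertex of P by eps*e to obtain the vertices of P^u ...
Pu_vertices = [tuple(c + eps for c in y) for y in P_vertices]
# ... none of which is eps*e-dominated by a vertex of P:
assert not any(eps_dominated(yu, P_vertices, eps) for yu in Pu_vertices)
```

Checking against the vertices suffices here: each shifted vertex minus εe lands on a nondominated point of P, which by definition no point of P dominates.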
6
Application: Radiotherapy Treatment Planning
The aim of radiotherapy is to kill tumor cells while at the same time protecting the
surrounding tissue and organs from the damaging effects of radiation. To achieve
these goals, computerized inverse planning systems are used. Given the number
of beams and the beam directions, beam intensity profiles are calculated that yield
the best dose distribution under consideration of clinical and physical constraints.
This is called the beam intensity optimization problem.
Figure 8: D and D^u
Figure 9: P and P^u
We have formulated the beam intensity optimization problem as an MOLP in
Shao and Ehrgott (2006). The objectives of the MOLP are to minimize the maximum deviation y1 , y2 , y3 of delivered dose from tumor lower bounds, from critical
organ upper bounds and from the normal tissue upper bounds, respectively. We
used both Benson’s algorithm and an approximation version of Benson’s algorithm
to solve the MOLP of an acoustic neuroma (AC), a prostate (PR) and a pancreatic
lesion (PL) case, respectively.
In this paper, we solve the dual problems of the same three clinical cases as
above, i.e., the acoustic neuroma (AC), the prostate (PR) and the pancreatic lesion
(PL) both by the dual variant of Benson’s algorithm and the approximate dual
variant of Benson’s algorithm.
Both the acoustic neuroma case and the prostate case can be solved exactly
with the dual variant of Benson's algorithm. We show the results of solving the
dual problem exactly with the dual variant of Benson's algorithm and approximately with the approximate dual variant of Benson's algorithm. The results for
the acoustic neuroma case are shown in Fig. 10. The set max_K D and the union of
the nondominated facets of P obtained by the dual variant of Benson's algorithm
are on the top left and right, respectively. The other pictures show the results
of solving the dual problem approximately with the approximate dual variant of
Benson's algorithm. Pictures on the left are max_K D^o while pictures on the right
are the union of the nondominated facets of P^i. From top to bottom, they are for
approximation errors ε = 0.01 and ε = 0.1, respectively.
The results for the prostate case are in Fig. 11. The pictures on the top left
and right are max_K D and the union of the nondominated facets of P; they were
obtained by the dual variant of Benson's algorithm. The rest of the pictures show
the results of solving the dual problem approximately with the approximate dual
variant of Benson's algorithm. Pictures on the left show max_K D^o while pictures on
the right show the union of the nondominated facets of P^i. From top to bottom,
they are for approximation errors ε = 0.001, ε = 0.01, and ε = 0.1, respectively.
Figure 10: The results for the AC case.
Figure 11: The results for the PR case.
The dual variant of Benson's algorithm cannot solve the pancreatic lesion
case within 10 hours of computation. Therefore, we show the results obtained
by the approximate dual variant of Benson's algorithm in Fig. 12. Pictures on the
left are max_K D^o and pictures on the right are the union of the nondominated facets
of P^i. Pictures on the top are for ε = 0.01 while pictures on the bottom are for
ε = 0.1.
Figure 12: The results for the PL case.
Table 1 summarizes the number of vertices and the number of cuts of the dual
variant of Benson's algorithm and of the approximate dual variant of Benson's
algorithm for various values of ε. The dual variant of Benson's algorithm can
solve the dual problem of the first two cases exactly in one hour, but not the
pancreatic lesion case. The approximate dual variant of Benson's algorithm can
solve all three problems within 15 minutes with approximation error ε = 0.005.
Table 1 and Figs. 10, 11 and 12 clearly show the effect of the choice of ε: the
smaller the approximation error, the more vertices and cuts are generated
and the longer the computation time.
To compare with the results obtained by Benson's algorithm and the
approximation version of Benson's algorithm, we show Y and Y^o with different
values of the approximation error for the above three cases. Benson's algorithm can
solve the acoustic neuroma case and the prostate case exactly. Fig. 13 shows Y for the
acoustic neuroma case and Fig. 15 shows Y for the prostate case. Y^o with ε = 0.1 for
the acoustic neuroma case obtained by the approximation version of Benson's algorithm
is shown in Fig. 14, while Y^o with ε = 0.1 for the prostate case obtained by
the approximation version of Benson's algorithm is shown in Fig. 16. Benson's
algorithm cannot solve the pancreatic lesion case within 10 hours of computation;
we therefore show Y^o with approximation error ε = 0.005 in Fig. 17 and ε = 0.1 in Fig. 18.
Figure 13: AC: Y solved by Benson's algorithm.
Figure 14: AC: Y^o solved by the approximation version of Benson's algorithm with ε = 0.1.
Figure 15: PR: Y solved by Benson's algorithm.
Figure 16: PR: Y^o with ε = 0.1.
Figure 17: PL: Y^o with ε = 0.005.
Figure 18: PL: Y^o with ε = 0.1.
The number of vertices and the number of cuts of Y^o for various values of ε are
also given in Table 1. Comparing the computation time of exactly solving the
primal and exactly solving the dual problem for both the acoustic neuroma case
and the prostate case, solving the primal problem exactly using Benson's algorithm
needs more computation time than solving the dual problem exactly using the dual
variant of Benson's algorithm.
If the approximation error ε in the approximation version of Benson's algorithm
and in the approximate dual variant of Benson's algorithm is the same, then both
algorithms guarantee finding an εe-nondominated set. Thus we can compare the
computation time for computing Y^o and P^i with the same approximation error ε.
In Table 1, we observe that for all three clinical cases solving the dual
approximately is always faster than solving the primal approximately with the
same approximation error ε.
Table 1: Running time, number of vertices and number of cutting planes for solving
the dual problem and the primal problem for the three cases with different
approximation errors ε (ε = 0 means the problem is solved exactly).

                      Solving the dual                       Solving the primal
Case   ε        Time (s)   Vertices   Cuts        Time (s)   Nondominated      Cuts
                           of D^o     of D^o                 vertices of Y^o   of Y^o
AC     0.1        1.484        17        8          5.938         27             21
AC     0.01       3.078        33       18          8.703         47             44
AC     0          8.864        85       55         13.984         55             85
PR     0.1        4.422        39       19         14.781         56             42
PR     0.01      18.454       157       78         64.954        296            184
PR     0.001     86.733       729      366        257.328       1157            692
PR     0        792.390      3280     3165        995.050       3165           3280
PL     0.3       29.110        40       21         70.796         57             37
PL     0.1       58.263        85       44        164.360        152             90
PL     0.05     102.761       151       78        303.630        278            159
PL     0.01     401.934       582      298       1184.950       1097            586
PL     0.005    734.784      1058      539       2147.530       1989           1041
Solving the primal MOLP gives us Y. If we only need the nondominated facets,
we have to remove the weakly nondominated facets. Solving the dual problem,
on the other hand, allows us to calculate the nondominated facets directly. This
is an advantage of solving the dual problem.
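The removal step can be sketched as a filter on facet normals. For a minimization MOLP, a facet supported by a weight vector λ ≥ 0 with all components strictly positive consists of nondominated points, while a zero component only guarantees weak nondominance. The code below is a hypothetical post-processing sketch (the facet representation is our own assumption, not the paper's data structure).

```python
# Each facet of the outcome polyhedron is represented here by its supporting
# normal λ (λ >= 0, components summing to 1, as in the geometric dual).
facets = [
    {"normal": (0.5, 0.5), "vertices": [(1, 2), (2, 1)]},  # strictly positive normal
    {"normal": (1.0, 0.0), "vertices": [(4, 0), (4, 5)]},  # zero component: only weakly nondominated
]

def nondominated_facets(facets, tol=1e-12):
    """Keep only facets whose supporting normal is strictly positive;
    such facets consist of nondominated points of the outcome set."""
    return [f for f in facets if all(c > tol for c in f["normal"])]

print(len(nondominated_facets(facets)))  # 1
```

Solving the dual avoids this filter altogether, since every vertex of the dual polyhedron already corresponds to a nondominated facet.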
7
Conclusion
In this paper, we have developed an approximate dual variant of Benson's algorithm
to solve MOLPs in objective space. We have shown that the algorithm is guaranteed
to find εe-nondominated points for a specified accuracy ε.
The algorithm was applied to the beam intensity optimization problem of radiation
therapy treatment planning. Three clinical cases were used and the results were
compared with those obtained by approximately solving the primal with the
approximation version of Benson's algorithm. When both algorithms use the same
approximation error ε, both guarantee an εe-nondominated set. We found that
approximately solving the dual with the approximate dual variant of Benson's
algorithm is faster than approximately solving the primal with the approximation
version of Benson's algorithm for all three clinical cases.
References

Benson, H. P. (1998a). Hybrid approach for solving multiple-objective linear programs in outcome space. Journal of Optimization Theory and Applications, 98, 17–35.

Benson, H. P. (1998b). An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. Journal of Global Optimization, 13, 1–24.

Chen, P. C. and Hansen, P. (1991). On-line and off-line vertex enumeration by adjacency lists. Operations Research Letters, 10, 403–409.

Ehrgott, M. and Wiecek, M. (2005). Multiobjective programming. In J. Figueira, S. Greco, and M. Ehrgott, editors, Multicriteria Decision Analysis: State of the Art Surveys, pages 667–722. Springer Science + Business Media, New York.

Ehrgott, M., Löhne, A., and Shao, L. (2007). A dual variant of Benson's outer approximation algorithm. Report 654, Department of Engineering Science, The University of Auckland.

Heyde, F. and Löhne, A. (2006). Geometric duality in multi-objective linear programming. Reports on Optimization and Stochastics 06-15, Department of Mathematics and Computer Science, Martin-Luther-University Halle-Wittenberg. Submitted to SIAM Journal on Optimization.

Loridan, P. (1984). ε-solutions in vector minimization problems. Journal of Optimization Theory and Applications, 43(2), 265–276.

Rockafellar, R. (1972). Convex Analysis. Princeton University Press, Princeton.

Ruzika, S. and Wiecek, M. M. (2005). Approximation methods in multiobjective programming. Journal of Optimization Theory and Applications, 126(3), 473–501.

Shao, L. and Ehrgott, M. (2006). Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Report 646, Department of Engineering Science, The University of Auckland. Available online at http://www.esc.auckland.ac.nz/research/tech/esc-tr-646.pdf. Accepted for publication in Mathematical Methods of Operations Research.

Webster, R. (1994). Convexity. Oxford University Press, Oxford.