Convex Maximization on a Convex Set with Fuzzy Constraints

MS-01-02

Jianming Shi and Hiroshi Inoue

Science University of Tokyo, 500 Shimokuoku, Kuki, Saitama 346-8512, Japan. {j.shi, inoue}@ms.kuki.sut.ac.jp

This research was partially supported by Grant-in-Aid for Scientific Research (No. 10640137) of the Ministry of Education, Science, Sports and Culture of Japan.

December 2000; revised June 2001
Abstract

Much effort has been devoted to solving convex minimization problems with various constraints, including fuzzy conditions. In this paper, we present algorithms for solving convex maximization problems with fuzzy constraints. The objective function of the problem under consideration is convex, and its feasible region involves a reverse convex constraint. Even without the fuzzy nature, the problem remains in the category of NP-hardness and belongs to the field of global optimization. We transform the reverse convex feasible region into a d.c. (difference of convex) set, and then solve the problem by combining a d.c. method with an on-line vertex enumeration method.
Keywords
fuzzy constraints, cutting plane, global optimization.
I. Introduction

In this paper, we consider the following maximization problem:

(P)   max{f(x) | h(g(x)) ≥ ρ, x ∈ S}   (1)

where S is a compact, convex and nonempty subset of Rn, f(·) : Rn → R is a convex function on S, h(·) : Rm → [0, 1) is a convex function that is nondecreasing with respect to yi for i = 1, ..., m, g(·) : Rn → Rm is a d.c. function on S, that is, there exist two convex functions g1(·), g2(·) such that g(x) ≡ g1(x) − g2(x) on S, and ρ ≥ 0 is a given parameter. We call the constraint h(g(x)) ≥ ρ a fuzzy constraint. Problem (P) without the fuzzy constraint is a typical global optimization problem which has been well studied over the last 30 years (see, e.g., [7]). Even without the fuzzy constraint, the problem is still in the category of NP-hardness. If g2 ≡ 0, then h(g(x)) becomes convex and the feasible region is convex as well. Many approaches can be exploited to solve these two simplified cases. In this paper, we assume that g2 ≢ 0.
The fuzzy constraint arises in many applications. For instance, suppose that the quantity nq of resources is centrally allocated among n independent manufacturing processes, each producing an output that is assessed on the basis of its quantity. Each process, say k, is given a grade pk. In other words, if the portion x of nq is available to process k, then it receives the grade pk. The grade may be called a membership function, taking values between 0 and 1. Suppose now that n is large, and that the membership functions are pk ≡ pk(x, ω), where ω is an element of a probability space (Ω, A, P) of a given distribution. Formally, {pk} is a sequence of fuzzy mappings (fuzzy random sets), pk(x, ω) : Rn × Ω → [0, 1] ⊂ R+ := {t ∈ R | t ≥ 0}. In this setting, suppose each process is assessed not only by pk(x) but also by another function h(·), which may be determined on the basis of some characteristics; we thus define a continuous linear functional which maps to R+. For the random set, h(pk, ρ) := {(x, ω) | h(pk(x, ω)) ≥ ρ} for each h ∈ L*, where L* is the set of all continuous linear functionals. In this circumstance, the optimization problem is considered as follows. For each ω ∈ Ω, look at
      max   Σ_{k=1}^{n} h(pk(xk, ω))
      s.t.  Σ_{k=1}^{n} xk ≤ nq,

where xk ∈ Rn. Obviously, the above program is equivalent to

      max   y
      s.t.  Σ_{k=1}^{n} h(pk(xk, ω)) − y ≥ 0,
            Σ_{k=1}^{n} xk ≤ nq.
Although there are not many results on convex maximization with fuzzy constraints, a few papers are involved with fuzzy constraints, directly or indirectly. In his paper [10], Tuy treated a related case arising in a location problem. We extend his results in this paper.

The main idea proposed for solving the program (P) is to cast the fuzzy constraint into an explicit d.c. form and to solve the resulting problem as a reverse convex global optimization problem.

In Section II, we develop a d.c. representation for the problem; Section II-A is devoted to a solution method based on outer approximation, and in Section II-B we propose an algorithm and prove its validity.
II. Explicit D.c. Representation
A set A is called a d.c. set if there exist two convex sets A1 and A2 such that A = A1 \ A2. Obviously, the indicator function δ_{Ai}(x) = ∞ if x ∉ Ai and δ_{Ai}(x) = 0 if x ∈ Ai (i = 1, 2) is convex for each i, and A = {x | δ_{A1}(x) − δ_{A2}(x) ≤ −1} in the sense that ∞ − ∞ = ∞. Generally, a d.c. set A is of the form {x | f1(x) − f2(x) ≤ 0} for some convex functions f1(·) and f2(·). Any closed set is a d.c. set (see, e.g., [9]), so a wide class of sets is in this category. Even so, an explicit d.c. representation is very important in designing algorithms.
There exist many methods to represent a d.c. function in explicit form. For instance, a C²-smooth function π(x) on any compact convex set Λ can be represented as (π(x) + M‖x‖²) − M‖x‖², where M is a real number. If the number M is taken sufficiently large, then the first term (π(x) + M‖x‖²) becomes convex on Λ. This yields an explicit d.c. representation of π(x); however, estimating such a number M on Λ is still very difficult from the viewpoint of numerical implementation. Moreover, the functions considered in this paper are not smooth, but convex.
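To make this construction concrete, here is a minimal numerical sketch of the decomposition for a one-dimensional C² function; the example function, the set Λ and the choice of M (from a bound on the second derivative) are our assumptions, not part of the paper:

```python
import numpy as np

# Sketch: decompose the smooth nonconvex pi(x) = sin(x) on Lambda = [-2, 2]
# as a difference of convex functions, pi(x) = (pi(x) + M*x**2) - M*x**2.
# Since pi''(x) = -sin(x) >= -1, any M with 2*M >= 1 makes the first term
# convex on Lambda; M = 0.5 suffices (an assumed example).
pi_f = np.sin
M = 0.5

def p1(x):                 # convex part pi(x) + M*||x||^2
    return pi_f(x) + M * x**2

def p2(x):                 # convex part M*||x||^2
    return M * x**2

xs = np.linspace(-2.0, 2.0, 401)
assert np.all(np.diff(p1(xs), 2) >= -1e-12)    # second differences: p1 convex
assert np.allclose(p1(xs) - p2(xs), pi_f(xs))  # the d.c. identity holds
```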
Some important properties of d.c. functions can be found in, e.g., [1], [4], [8]. Now we consider how to represent the fuzzy constraint in (1).

We write g(x) := (g1(x), g2(x), ..., gm(x)), and write g(x) ≤ g(x′) if gi(x) ≤ gi(x′) for i = 1, ..., m.
Lemma 1: If gi is convex for i = 1, 2, ..., m and h(y1, y2, ..., ym) : Rm → R is convex and nondecreasing in each yi, then h(g(x)) : Rn → R is convex with respect to x.

Proof: For β ∈ [0, 1] we see that g(βx + (1 − β)x′) ≤ βg(x) + (1 − β)g(x′). Then

h(g(βx + (1 − β)x′)) ≤ h(βg(x) + (1 − β)g(x′)) ≤ βh(g(x)) + (1 − β)h(g(x′)).
In the sequel, ∂(f(x)) stands for the set of subgradients of the function f at the point x. For a set S, ∂(S) means the boundary of S. The symbol ⟨·, ·⟩ is the inner product.

Lemma 2: If y ≥ y′ with yk > y′k and yi = y′i for all i ≠ k, and h is convex, then ξk(y) ≥ ξk(y′) for ξ(y) ∈ ∂(h(y)) and ξ(y′) ∈ ∂(h(y′)).
Proof: From the convexity of h we see that

h(y) − h(a) ≥ ⟨ξ(a), y − a⟩   (2)

for any a ∈ Rm and ξ(a) ∈ ∂h(a). Suppose that y ≥ y′ with yk > y′k and yi = y′i for all i ≠ k, and let ȳ := (1/2)(y + y′). Then h(ȳ) ≤ (1/2)h(y) + (1/2)h(y′), and it follows from (2) that

⟨ξ(y′), ȳ − y′⟩ ≤ h(ȳ) − h(y′) ≤ (1/2)h(y) − (1/2)h(y′),
⟨ξ(y), ȳ − y⟩ ≤ h(ȳ) − h(y) ≤ (1/2)h(y′) − (1/2)h(y).

Therefore

⟨ξ(y′), ȳ − y′⟩ + ⟨ξ(y), ȳ − y⟩ ≤ 0.

Note that ȳi − y′i = yi − ȳi = 0 for i ≠ k and that ȳk − y′k = −(ȳk − yk) > 0. We see that

⟨ξ(y′), ȳ − y′⟩ + ⟨ξ(y), ȳ − y⟩ = (ξk(y′) − ξk(y))(ȳk − y′k) ≤ 0.

This implies that ξk(y) ≥ ξk(y′).
In the sequel, we assume that int S ≠ ∅. Denote by Simplex a simplex containing S and suppose that the n + 1 vertices of Simplex are in hand. Then, from the convexity of h, we can calculate the maximum of h over Simplex by checking the values of h at the vertices. Let M̄ := max{h(x) | x ∈ Simplex} and M̲ := min{h(x) | x ∈ Simplex}. The Simplex can be taken large enough that for any y ∈ Simplex there exists a y′ ∈ Simplex such that ‖y − y′‖ ≥ δ. If Simplex is not large enough, it can be replaced by Simplex + δB, where B is the unit ball of Rn. On the other hand, for any y1, y2 ∈ Simplex one has

M̄ − M̲ ≥ h(y1) − h(y2) ≥ ⟨ξ(y2), y1 − y2⟩

for any ξ(y2) ∈ ∂h(y2). If ‖y1 − y2‖ ≥ δ, then

⟨ξ(y2), (y1 − y2)/‖y1 − y2‖⟩ ≤ (M̄ − M̲)/δ.   (3)
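As an illustration of how M̄, M̲ and hence α may be obtained numerically: the maximum of the convex h over Simplex is attained at a vertex, while the minimum generally is not, so the sketch below (whose h, simplex and δ are our assumed examples) takes M̄ exactly from the vertices and merely approximates M̲ by sampling:

```python
import numpy as np

# Assumed example: h(y) = log(1 + exp(y1 + y2)) / 4 is convex and
# nondecreasing in each coordinate (scaled down for illustration).
def h(y):
    return np.logaddexp(0.0, y[0] + y[1]) / 4.0

vertices = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # a simplex in R^2
M_bar = max(h(v) for v in vertices)       # exact: convex max is at a vertex

# the minimum of a convex function need not be at a vertex; approximate it
# by sampling random convex combinations of the vertices (illustration only)
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(3), size=20000)
M_under = min(h(p) for p in weights @ vertices)

delta = 1.0                               # the separation parameter delta
alpha = np.full(2, (M_bar - M_under) / delta)  # alpha_i = (M_bar - M_under)/delta
print(M_bar, M_under, alpha)
```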
Lemma 3: If gi is convex on S for i = 1, 2, ..., m and h is convex with respect to y, then h(g(x)) has an explicit d.c. representation:

h(g(x)) = p(x) − ⟨α, g(x)⟩ ∈ [0, 1),

where p(x) is a convex function and α is the vector whose ith component is αi = (M̄ − M̲)/δ.
Proof: Let p̄(y) := h(y) + ⟨α, y⟩; then p̄(y) is convex with respect to y. Consider two vectors y, y′ of S and suppose that y ≥ y′ with yk > y′k and yi = y′i for all i ≠ k. Hence by (2) we see that

p̄(y) − p̄(y′) = h(y) − h(y′) + ⟨α, y − y′⟩ ≥ ⟨ξ(y′) + α, y − y′⟩ = ⟨ξ(y′), y − y′⟩ + ⟨α, y − y′⟩.

If ‖y − y′‖ ≥ δ, we see that

⟨ξ(y′), y − y′⟩ + ⟨α, y − y′⟩ ≥ −(M̄ − M̲) + δ · (M̄ − M̲)/δ = 0.

If ‖y − y′‖ < δ, then there exists y0 such that ‖y − y0‖ ≥ δ and y − y′ = β(y − y0) for some β ∈ (0, 1). Therefore,

⟨ξ(y′), y − y′⟩ + ⟨α, y − y′⟩ ≥ β(⟨ξ(y′), y − y0⟩ + ⟨α, y − y0⟩) ≥ β(−(M̄ − M̲) + δ · (M̄ − M̲)/δ) = 0.

This says that p̄(y) is nondecreasing in each yi as well. From Lemma 1 we see that p(x) := p̄(g(x)) is a convex function with respect to x. Therefore,

h(g(x)) = p(x) − ⟨α, g(x)⟩

is an explicit representation of h(g(x)).
Before the next theorem, we introduce a notation. For two vectors x = (x1, · · · , xn) and y = (y1, · · · , yn), we denote x^y := (sign(y1)x1, · · · , sign(yi)xi, · · · , sign(yn)xn).

Theorem 1: If gi := g′i − g″i for i = 1, 2, ..., m, where g′i, g″i are convex functions on S, and h and α are the same as in Lemma 3, then h(g(x)) has an explicit d.c. representation:

h(g(x)) = q(x) − ⟨α, g′(x) + g″(x)⟩ ∈ [0, 1),

where q(x) is a convex function.
Proof: Similar to the proof of Lemma 3, we have

⟨ξ(γ), y − γ⟩ ≥ −⟨α^(y−γ), y − γ⟩,

where ξ(γ) ∈ ∂(h(γ)) for any γ ∈ S. From the convexity of h and the above inequality we see that

h(y) ≥ h(γ) + ⟨ξ(γ), y − γ⟩ ≥ h(γ) − ⟨α^(y−γ), y − γ⟩.

Then

h(y) ≥ max{h(γ) − ⟨α^(y−γ), y − γ⟩ | γ ∈ S}.

It follows from y ∈ S that

h(y) = max{h(γ) − ⟨α^(y−γ), y − γ⟩ | γ ∈ S}.

Therefore

h(g(x)) = h(g′(x) − g″(x))
= max_{γ∈S} { h(γ) − ⟨α^(g′(x)−g″(x)−γ), g′(x) − g″(x) − γ⟩ }
= max_{γ∈S} { h(γ) + ⟨α − α^(g′(x)−g″(x)−γ), g′(x)⟩ + ⟨α + α^(g′(x)−g″(x)−γ), g″(x)⟩ + ⟨α^(g′(x)−g″(x)−γ), γ⟩ } − ⟨α, g′(x) + g″(x)⟩.

Denote

r(x, γ) := h(γ) + ⟨α − α^(g′(x)−g″(x)−γ), g′(x)⟩ + ⟨α + α^(g′(x)−g″(x)−γ), g″(x)⟩ + ⟨α^(g′(x)−g″(x)−γ), γ⟩.

By the definition of α and α^y, we see that α ± α^y ≥ 0 for any vector y ∈ S. Then, for any fixed γ, r(x, γ) is a convex function with respect to x. Let q(x) := max{r(x, γ) | γ ∈ S}; then q(x) is convex as well. Therefore

h(g(x)) = q(x) − ⟨α, g′(x) + g″(x)⟩.

The assertion holds.
From the above lemmas, we rewrite problem (1) as follows:

(P)   max  f(x)
      s.t. q(x) − ⟨α, g′(x) + g″(x)⟩ ≥ ρ,   (4)
           x ∈ S.

For a given x, the value of q(x) can be calculated as h(g′(x) − g″(x)) + ⟨α, g′(x) + g″(x)⟩.
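This evaluation of q is straightforward to implement; a hedged sketch, in which the concrete h, g′, g″ and the value of α are our assumed examples rather than anything prescribed by the paper:

```python
import numpy as np

def h(y):                                   # convex, nondecreasing (assumed)
    return np.logaddexp(0.0, y.sum()) / 10.0

def g_p(x):                                 # g'(x): componentwise convex (assumed)
    return np.array([x @ x, (x - 1.0) @ (x - 1.0)])

def g_pp(x):                                # g''(x): componentwise convex (assumed)
    return np.array([np.abs(x).sum(), (x @ x) / 2.0])

alpha = np.array([2.0, 2.0])                # stands in for (M_bar - M_under)/delta

def q(x):
    # q(x) = h(g'(x) - g''(x)) + <alpha, g'(x) + g''(x)>
    return h(g_p(x) - g_pp(x)) + alpha @ (g_p(x) + g_pp(x))

x = np.array([0.5, -0.25])
lhs = q(x) - alpha @ (g_p(x) + g_pp(x))     # the d.c. form of the fuzzy constraint
assert np.isclose(lhs, h(g_p(x) - g_pp(x))) # recovers h(g(x)), as in Theorem 1
```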
A. Solution Method
Based on the previous discussion, we focus on a solution method for problem (4) in this section.
Problem (4) is a special form of the following problem:

      max  F(x)
      s.t. G′(x) − G″(x) ≥ 0,   (5)
           x ∈ S,

where F, G′, G″ are convex functions and S is the same as before. Many researchers have contributed to problem (5) in theoretical and algorithmic respects. Most existing algorithms are based on the following observation: in (5), the d.c. constraint G′(x) − G″(x) ≥ 0 holds if and only if G′(x) ≥ t ≥ G″(x) for some t ∈ R, so that (5) is equivalent to

      max  F(x)
      s.t. G′(x) − t ≥ 0,
           G″(x) − t ≤ 0,   (6)
           x ∈ S, t ∈ R.
The first constraint, G′(x) − t ≥ 0, is called a reverse convex constraint. Problem (5) can be solved by outer approximation or by a conical branch-and-bound method emanating from an interior point of the feasible region as in, for instance, [5]. The main idea of such methods is to estimate lower and upper bounds of the values of the functions in (6). With the bounds in hand, each part of the subdivided feasible region is either retained for further division or removed. The dividing process is known to be very time-consuming, especially when the dimension of the operational space of the process is high.
Now we go back to (P) of (4). To exploit a stream of outer approximation, we add an extra variable t and rewrite (4) as follows:

(P)   max  f(x)
      s.t. q(x) − t ≥ 0,
           ⟨α, g′(x) + g″(x)⟩ + ρ − t ≤ 0,   (7)
           x ∈ S, t ∈ [t_*, t^*],

where t_* and t^* stand for a lower bound of ⟨α, g′(x) + g″(x)⟩ + ρ and an upper bound of q(x) on S, respectively. In the sequel we assume that there is a convex function d(·) such that

Ω := {(x, t) ∈ Rn+1 | d(x, t) ≤ 0}
   = {(x, t) ∈ Rn+1 | x ∈ S, t ∈ [t_*, t^*], ⟨α, g′(x) + g″(x)⟩ + ρ − t ≤ 0}.
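One concrete way to obtain such a convex d is to take the pointwise maximum of convex functions expressing the individual constraints, since a pointwise maximum of convex functions is convex. A minimal sketch, where S, t_*, t^*, ρ, α, g′ and g″ are all our assumed examples:

```python
import numpy as np

t_lo, t_hi, rho = 0.0, 10.0, 0.5             # assumed bounds t_*, t^* and rho
alpha = np.array([2.0, 2.0])

def g_p(x):                                  # assumed convex components of g'
    return np.array([x @ x, (x - 1.0) @ (x - 1.0)])

def g_pp(x):                                 # assumed convex components of g''
    return np.array([np.abs(x).sum(), (x @ x) / 2.0])

def d_S(x):                                  # convex description of S: unit ball
    return np.linalg.norm(x) - 1.0

def d(x, t):
    # max of convex pieces is convex; d(x, t) <= 0 iff (x, t) satisfies
    # every constraint defining Omega in (7)
    return max(d_S(x),
               alpha @ (g_p(x) + g_pp(x)) + rho - t,
               t_lo - t,
               t - t_hi)
```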
Denote Q := {(x, t) ∈ Rn+1 | q(x) − t < 0}. Then the feasible region of (P) is equal to Ω \ Q, which is a d.c. set. Denote by int(S) the set of interior points of S and by co(S) the convex hull of S. Throughout this paper, we make the following assumptions:

Assumption A: the feasible region Ω \ Q is not empty;
Assumption B: a point (x0, t0) ∈ int(Ω) is available.

Assumption A guarantees the existence of an optimal solution of (P), and Assumption B is needed for the outer approximation process.

Lemma 4: Under Assumption A, there exists an optimal solution (x*, t*) of (P) such that (x*, t*) ∈ ∂(Ω \ Q).
Proof: Note that f(·) is continuous and S is compact, so the existence of an optimal solution of (P) is trivial. Suppose that (x̄, t̄) is an optimal solution and that (x̄, t̄) ∉ ∂(Ω \ Q). Then there is a line segment [(x′, t′), (x″, t″)] through (x̄, t̄) such that (x′, t′) ∈ ∂(Ω) and (x″, t″) ∈ ∂(Ω), and such that (x′, t′) ∉ Q and (x″, t″) ∉ Q. From the convexity of f(·) we see that max{f(x′), f(x″)} ≥ f(x̄). Therefore (x*, t*) := arg max{f(x) | (x, t) ∈ {(x′, t′), (x″, t″)}} is an optimal solution and is on ∂(Ω \ Q).
For a polytope T we denote by V the vertex set of T, and define V_+ := {(v, t) ∈ V | (v, t) ∉ Q}, V_− := {(v, t) ∈ V | (v, t) ∈ Q}, and, for (v, t) ∈ V, adj(v, t) := {(u, s) ∈ V | (u, s) is adjacent to (v, t)}. Similarly, we use V^k, V_+^k, V_−^k and adj^k(v, t) for T_k, respectively.

Lemma 5: If (Ω \ Q) ⊆ T, then V_+ ≠ ∅.

Proof: Suppose that V_+ = ∅. This implies that T ⊆ Q, and therefore (Ω \ Q) ⊆ Q, which contradicts Assumption A.
As explained before, we design an algorithm based on outer approximation. In the algorithm, a strictly nested sequence of polytopes T_k ⊇ (Ω \ Q) is created. To ease the subsequent discussion, we assume the following.

Assumption C: all vertices of T_k are nondegenerate for all k.
Suppose that T_k is in hand at step k. If V_−^k ≠ ∅, we denote E_+^k := {(u, t) ∈ V_+^k | adj^k(u, t) ∩ V_−^k ≠ ∅} and E_−^k := {(v, t) ∈ V_−^k | adj^k(v, t) ∩ V_+^k ≠ ∅}. For (u, t) ∈ E_+^k and (v, t) ∈ E_−^k, if (u, t) ∈ adj^k(v, t) or (v, t) ∈ adj^k(u, t), we denote (p_uv, t) := ∂(Q) ∩ [(u, t), (v, t)], where [(u, t), (v, t)] stands for the line segment from (u, t) to (v, t). Clearly, (p_uv, t) = (p_vu, t), and it is a single point of Rn+1. Moreover, we denote E^k := {(p_uv, t) | (u, t) ∈ adj^k(v, t) ∩ E_+^k, (v, t) ∈ adj^k(u, t) ∩ E_−^k}.
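Since q is convex and q(u) − t_u ≥ 0 at the V_+^k endpoint while q(v) − t_v < 0 at the V_−^k endpoint, the function s ↦ q(x(s)) − t(s) crosses zero exactly once along the edge, so (p_uv, t) can be located by bisection. A minimal sketch; the helper name and tolerance are our assumptions:

```python
import numpy as np

def boundary_point_Q(q, u, t_u, v, t_v, tol=1e-10):
    # locate boundary(Q) on the segment from (u, t_u) (not in Q) to
    # (v, t_v) (in Q), where Q = {(x, t) | q(x) - t < 0}
    def phi(s):
        x = (1.0 - s) * u + s * v
        t = (1.0 - s) * t_u + s * t_v
        return q(x) - t
    lo, hi = 0.0, 1.0                        # phi(lo) >= 0 > phi(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) >= 0.0:
            lo = mid                         # mid still outside Q
        else:
            hi = mid                         # mid inside Q
    s = 0.5 * (lo + hi)
    return (1.0 - s) * u + s * v, (1.0 - s) * t_u + s * t_v

# example use (assumed data): q(x) = ||x||^2, u outside Q, v inside Q
p_uv, t_uv = boundary_point_Q(lambda x: x @ x, np.array([1.0, 0.0]), 0.5,
                              np.array([0.0, 0.0]), 0.5)
```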
Lemma 6: If (Ω \ Q) ⊆ T_k, then (Ω \ Q) ⊆ co(V_+^k ∪ E^k).

Proof: Suppose that (Ω \ Q) ⊆ T_k. If V_−^k = ∅, then E^k = ∅ and V^k = V_+^k, so that co(V_+^k ∪ E^k) = co(V^k) = T_k. Now suppose that V_−^k ≠ ∅. From the convexity of Q we see that co(V_−^k ∪ E^k) ⊆ cl(Q), the closure of the set Q. On the other hand, T_k = co(V_+^k ∪ V_−^k). Therefore we see that

T_k ⊆ co(V_+^k ∪ E^k) ∪ co(V_−^k ∪ E^k),

which implies the assertion.
From Lemma 6 above, we see that the following μ_k can serve as an upper bound of f(·) over T_k:

μ_k := max{f(u) | (u, t) ∈ (V_+^k ∪ E^k)}.   (8)

Choose a point (x_k^*, t_k^*) from arg max{f(u) | (u, t) ∈ (V_+^k ∪ E^k)}. When (x_k^*, t_k^*) ∉ Ω, we can calculate the intersection of ∂(Ω) with the line segment [(x_k^*, t_k^*), (x0, t0)]. It is easy to see from the convexity of Ω that (x_k^e, t_k^e) := ∂(Ω) ∩ [(x_k^*, t_k^*), (x0, t0)] is a single point. Let

ℓ_k(x, t) := d(x_k^e, t_k^e) + ⟨∂(d(x_k^e, t_k^e)), (x, t) − (x_k^e, t_k^e)⟩.   (9)

From the convexity of Ω it follows that (Ω \ Q) ⊆ T_k ∩ {(x, t) | ℓ_k(x, t) ≤ 0} and (x_k^*, t_k^*) ∉ T_k ∩ {(x, t) | ℓ_k(x, t) ≤ 0}. Therefore T_k ∩ {(x, t) | ℓ_k(x, t) ≤ 0} ⊂ T_k. This property plays a crucial role in generating a nested sequence converging to optimal solutions. Once V^k is in hand, V^{k+1} can be calculated by a so-called adjacency-lists method, which is known to be fast enough (cf. [2]).
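The cut (9) only needs d(x_k^e, t_k^e) and one subgradient there. A hedged sketch of building ℓ_k as a callable; the finite-difference subgradient is our assumption (valid where d is smooth) and not the paper's prescription:

```python
import numpy as np

def subgradient(d, z, eps=1e-7):
    # forward-difference estimate of a (sub)gradient of d at z; adequate
    # where d is differentiable (an assumption of this sketch)
    base, g = d(z), np.zeros_like(z)
    for i in range(z.size):
        zp = z.copy()
        zp[i] += eps
        g[i] = (d(zp) - base) / eps
    return g

def make_cut(d, z_e):
    # returns l_k(z) = d(z_e) + <s, z - z_e>; the halfspace l_k(z) <= 0
    # contains Omega and cuts off the current infeasible point (x_k^*, t_k^*)
    s, c0 = subgradient(d, z_e), d(z_e)
    return lambda z: c0 + s @ (z - z_e)

# example use (assumed d): Omega a disc of radius 2 in (x, t)-space
d_ex = lambda z: np.linalg.norm(z) - 2.0
l_k = make_cut(d_ex, np.array([2.0, 0.0]))   # boundary point of Omega
print(l_k(np.array([0.0, 0.0])), l_k(np.array([3.0, 0.0])))  # < 0, > 0
```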
B. Algorithm and Validity
Now we design an algorithm for solving the program (P) as follows.

Algorithm IS
Step 0: Create a polytope T_0 such that Ω ⊆ T_0; find (x0, t0) under Assumption B; calculate α as in Lemma 3; set μ := ∞, ι := −∞, k := 0.
Step 1: Calculate V_+^k and V_−^k; if V_−^k ≠ ∅, then calculate E_+^k, E_−^k and E^k. Find (x_k^*, t_k^*). If (x_k^*, t_k^*) ∈ Ω, then stop; else calculate μ_k by (8), and if μ_k < μ then set μ := μ_k.
Step 2: Calculate (x_k^e, t_k^e); if (x_k^e, t_k^e) ∉ Q and f(x_k^e) > ι, then set ι := f(x_k^e). Calculate ∂(d(x_k^e, t_k^e)) and ℓ_k(x, t), and set T_{k+1} := T_k ∩ {(x, t) | ℓ_k(x, t) ≤ 0} by (9).
Step 3: Set k := k + 1 and go to Step 1.
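To indicate how the steps fit together, here is a much-simplified, hedged sketch of the IS loop in R² (z = (x, t) with x scalar). It re-enumerates the vertices of T_k with SciPy's HalfspaceIntersection instead of the on-line adjacency-lists method of [2], omits the edge points E^k for brevity, and uses toy data throughout, so it illustrates the flow of Steps 0-3 rather than being a faithful implementation:

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

f = lambda z: z[0]                         # maximize f(x) = x (assumed example)
q = lambda z: z[0] ** 2 - z[1]             # Q = {(x, t) | q(z) < 0}
d = lambda z: max(abs(z[0]) - 1.0,         # Omega: |x| <= 1, 0.5 <= t <= 2
                  0.5 - z[1], z[1] - 2.0)

def grad_d(z, eps=1e-6):                   # numerical subgradient (assumption)
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        g[i] = (d(z + e) - d(z - e)) / (2 * eps)
    return g

z0 = np.array([0.0, 1.0])                  # interior point of Omega (Assumption B)
# T_0 ⊇ Omega as halfspaces a·z + b <= 0, rows [a1, a2, b]
halfspaces = [np.array([1.0, 0.0, -2.0]), np.array([-1.0, 0.0, -2.0]),
              np.array([0.0, 1.0, -3.0]), np.array([0.0, -1.0, 0.0])]

for k in range(60):                        # Steps 1-3
    V = HalfspaceIntersection(np.array(halfspaces), z0).intersections
    V_plus = [v for v in V if q(v) >= 0.0]             # vertices not in Q
    z_star = max(V_plus, key=f)                        # best vertex, cf. (8)
    if d(z_star) <= 1e-6:                              # z_star in Omega: stop
        break
    lo, hi = 0.0, 1.0                      # bisect [z0, z_star] for bd(Omega)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if d(z0 + mid * (z_star - z0)) <= 0.0 else (lo, mid)
    z_e = z0 + lo * (z_star - z0)
    g = grad_d(z_e)                        # cut (9): T_{k+1} = T_k ∩ {l_k <= 0}
    halfspaces.append(np.array([g[0], g[1], d(z_e) - g @ z_e]))

print("approximate solution (x, t):", z_star)
```

For this toy instance the true optimum of max{x | (x, t) ∈ Ω \ Q} is x = 1; since the sketch omits E^k, its stopping point should be read as illustrative rather than as the certified optimum of (P).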
The algorithm is based solely on outer approximation, which makes it different from methods based on a combination of outer approximation and conical branch-and-bound (BB). From the viewpoint of implementation, a method of the latter type computes a lower bound (ι in IS) and an upper bound (μ in IS) twice, once in its outer approximation part and once in its conical BB part. Moreover, conical BB, or BB in general, is quite time-consuming and memory-consuming, which is a big barrier to speed-up. Therefore the latter might be inefficient.
Lemma 7: If IS terminates within finitely many iterations, then (x_k^*, t_k^*) is an optimal solution of (P).

Proof: Algorithm IS terminates only at Step 1, under the condition that (x_k^*, t_k^*) ∈ Ω. By the choice of (x_k^*, t_k^*), we have (x_k^*, t_k^*) ∉ Q. This implies that (x_k^*, t_k^*) is a feasible point of (P), i.e., (x_k^*, t_k^*) ∈ Ω \ Q. From Lemma 6 and the definition of (x_k^*, t_k^*), we see that (x_k^*, t_k^*) is a point at which an upper bound of (P) is attained. Hence it is an optimal solution.
Theorem 2: If IS does not terminate within finitely many iterations, then any accumulation point of {(x_k^*, t_k^*)} is an optimal solution of (P).

Proof: Denote by S* the set of accumulation points of {(x_k^*, t_k^*)}. From the proof of Lemma 7 we see that (x′, t′) ∉ Q and f(x′) ≥ max(P) for all (x′, t′) ∈ S*. Now we prove that (x′, t′) ∈ Ω. Suppose that there exists a subsequence {(x_{k_q}^*, t_{k_q}^*)}_{q=1,2,...} of {(x_k^*, t_k^*)}_{k=1,2,...} such that (x_{k_q}^*, t_{k_q}^*) → (x′, t′). By (9) and the convexity of Ω, we see that

ℓ_{k_q}(x_{k_q}^*, t_{k_q}^*) > 0 and ℓ_{k_q}(x_{k_{q+1}}^*, t_{k_{q+1}}^*) ≤ 0.   (10)

From [3] we know that {ℓ_{k_q}} is uniformly equicontinuous and that there exists a continuous function ℓ satisfying

lim_{q→∞} ℓ_{k_q}(x_{k_q}^*, t_{k_q}^*) = ℓ(x′, t′).

From (10) we have ℓ(x′, t′) = 0. By Theorem II.1 of [7] we see that (x′, t′) ∈ Ω. Hence (x′, t′) is in the feasible region Ω \ Q.
III. Concluding Remarks

We designed an algorithm for a global optimization problem with a fuzzy constraint. The algorithm works under three mild assumptions, and its validity was proved theoretically. Also, the algorithm differs from the well-published BB-based methods. As future work, it is desirable to carry out numerical experiments to compare the efficiency of the proposed algorithm with approaches that combine outer approximation and conical BB methods.
References

[1] A.D. Alexandroff, Surface represented by the difference of convex functions, Doklady Akademii Nauk SSSR 72 (1950) 613-616.
[2] P.C. Chen, P. Hansen and B. Jaumard, On-line and off-line vertex enumeration by adjacency lists, Operations Research Letters 10 (1991) 403-409.
[3] Y. Dai, J. Shi and Y. Yamamoto, Global optimization problem with several reverse convex constraints and its application to out-of-roundness problem, Journal of the Operations Research Society of Japan 39 (1996) 356-371.
[4] P. Hartman, On functions representable as a difference of convex functions, Pacific Journal of Mathematics 9 (1959) 707-713.
[5] R. Horst, T.Q. Phong and N.V. Thoai, On solving general reverse convex programming problems by a sequence of linear programs and line searches, Annals of Operations Research 25 (1990) 1-18.
[6] R. Horst and N.V. Thoai, Constraint decomposition algorithms in global optimization, Journal of Global Optimization 5 (1994) 333-348.
[7] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer-Verlag, third edition, 1996.
[8] E.M. Landis, On functions representable as the difference of two convex functions, Doklady Akademii Nauk SSSR 80 (1951) 9-11.
[9] P.T. Thach, D.c. sets, d.c. functions and nonlinear equations, Mathematical Programming 58 (1993) 415-428.
[10] H. Tuy, A general d.c. approach to location problems, in State of the Art in Global Optimization: Computational Methods and Applications, 413-432, Kluwer Academic Publishers, 1996.