
SIAM J. CONTROL AND OPTIMIZATION
Vol. 25, No. 1, January 1987
(C) 1987 Society for Industrial and Applied Mathematics
MINIMIZING OR MAXIMIZING THE EXPECTED TIME
TO REACH ZERO*
D. HEATH†, S. OREY‡, V. PESTIEN§ AND W. SUDDERTH¶
Abstract. We treat the following control problems: the process $X_1(t)$ with values in the interval $(-\infty, 0]$ (or $[0, \infty)$) is given by the stochastic differential equation
$$dX_1(t) = \mu(t)\,dt + \sigma(t)\,dW_t, \qquad X_1(0) = x_1,$$
where the nonanticipative controls $\mu$ and $\sigma$ are to be chosen so that $(\mu(t), \sigma(t))$ remains in a given set $\mathcal{S}$, and the object is to minimize (or maximize) the expected time to reach the origin. The minimization problem had been discussed earlier by Heath, Pestien and Sudderth under various restrictions on the set $\mathcal{S}$. Here an improved verification lemma is established which is used to solve the minimization and maximization problems for any $\mathcal{S}$. An application to a portfolio problem is discussed.
Key words. stochastic control, portfolio selection, gambling theory
AMS(MOS) subject classifications. 60G40, 60J60, 93E20
1. Introduction. Consider a real-valued process $\{X_1(t)\}$ given by a stochastic differential equation
$$dX_1(t) = \mu(t)\,dt + \sigma(t)\,dW_t, \qquad X_1(0) = x_1,$$
where $\{W_t\}$ is standard Brownian motion and $\mu(t)$ and $\sigma(t)$ are nonanticipative controls to be chosen so that $(\mu(t), \sigma(t))$ remains in a specified set $\mathcal{S}$. The problems of minimizing or maximizing the expected time to reach the origin are treated in §3. The minimization problem has been studied in [9] and [3], though with an exponential change of variables putting the problem on $(0, 1]$. For a more detailed discussion, see Remark 2 in §3.
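Although no computation appears in the original, the object being controlled is easy to simulate. The sketch below (an illustration only; the step size, horizon, and parameter values are arbitrary choices) uses an Euler–Maruyama walk with constant controls $\mu_0 = 1$, $\sigma_0 = 0.5$ starting from $x_1 = -1$; the mean hitting time of the origin should then be $|x_1|/\mu_0 = 1$.

```python
import random

def hit_time(x1, mu, sigma, dt=1e-3, t_max=50.0, rng=random):
    """Euler-Maruyama walk of dX = mu dt + sigma dW until X reaches 0."""
    t = 0.0
    while x1 < 0.0 and t < t_max:
        x1 += mu * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return t

random.seed(0)
n = 1000
est = sum(hit_time(-1.0, 1.0, 0.5) for _ in range(n)) / n
print(abs(est - 1.0) < 0.1)  # mean hitting time is |x1|/mu_0 = 1
```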
The solution of these control problems uses a new refinement of the verification lemma of [9], which is proved in §2. This result should be of independent interest.
Section 4 deals with a portfolio planning problem which turns out to be a special case of the minimization problem. This portfolio problem was originally solved in [3].
2. Continuous-time stochastic control. The formulation of stochastic control problems given here is adapted from Pestien and Sudderth [9]. Our notation and terminology are the same as theirs, but we consider a more general class of processes and establish a verification lemma more suited to the present applications.
A continuous-time gambling problem is a triple $(\Gamma, \Sigma, u)$ where
(2.1) the state space $\Gamma$ is Polish (we shall use a Borel subset of ordinary Euclidean space),
(2.2) the gambling house $\Sigma$ is a mapping which assigns to each $x \in \Gamma$ a nonempty collection $\Sigma(x)$ of processes $X = \{X_t, t \ge 0\}$ with state space $\Gamma$ such that $X_0 = x$ and $X$ has right-continuous paths with left limits,
(2.3) the utility function $u$ is a Borel function from $\Gamma$ to the real line.
* Received by the editors March 18, 1985, and in revised form October 15, 1985.
† School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853.
‡ School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455. The research of this author was supported by National Science Foundation grant MCS 83-01080.
§ Department of Mathematics and Computer Science, University of Miami, Coral Gables, Florida 33124.
¶ School of Statistics, University of Minnesota, Minneapolis, Minnesota 55455. The research of this author was supported by National Science Foundation grant DMS-8421208.
A process $X \in \Sigma(x)$ is said to be available at $x$. Each available $X$ is defined on some probability space $(\Omega, \mathcal{F}, P)$ and is adapted to an increasing filtration $\{\mathcal{F}_t, t \ge 0\}$ of complete sub-sigma-fields of $\mathcal{F}$. The probability space and filtration may depend on $X$.
A player, starting at position $x \in \Gamma$, selects a process $X \in \Sigma(x)$ and receives payoff $u(X)$ defined by
(2.4) $u(X) = E[\limsup_{t \to \infty} u(X_t)]$.
The expectation occurring on the right is assumed to be well-defined for every available process $X$.
The value function $V$ is defined by
$$V(x) = \sup\{u(X) \colon X \in \Sigma(x)\}$$
for every $x \in \Gamma$. A process $X \in \Sigma(x)$ is optimal at $x$ if $u(X) = V(x)$.
From now on we shall require that $\Gamma$ be a Borel subset of the Euclidean space $R^d$ having nonempty interior, and each process $X = \{X_t\}$ under consideration will be an Ito process of the form
(2.5) $X_t = x + \int_0^t \alpha(s)\,ds + \int_0^t \beta(s)\,dW_s$,
where $W = \{W_t\}$ is a standard $m$-dimensional Brownian motion process on $(\Omega, \mathcal{F}, P)$ adapted to increasing, right-continuous $\sigma$-fields $\{\mathcal{F}_t\}$, and $\mathcal{F}_t$ is independent of $\{W_{t+s} - W_t, s \ge 0\}$. The function $\alpha = \alpha(t, \omega)$ is to be $R^d$-valued, progressively measurable, adapted to $\{\mathcal{F}_t\}$, and such that
(2.6) $\int_0^t |\alpha(s)|\,ds < \infty$ a.s. for all $t$.
The function $\beta = \beta(t, \omega)$ has as values real $d \times m$ matrices, is progressively measurable, adapted to $\{\mathcal{F}_t\}$, and satisfies
(2.7) $\int_0^t |\beta(s)|^2\,ds < \infty$ a.s. for all $t$.
For each pair $(a, b)$, where $a \in R^d$ is a $d \times 1$ vector and $b$ is a $d \times m$ real-valued matrix, define the differential operator $D(a, b)$ for sufficiently smooth functions $Q \colon R^d \to R$ by
$$D(a, b)Q(y) = Q_x(y) \cdot a + \tfrac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} Q_{x_i x_j}(y)\,(bb')_{ij},$$
where $Q_x(y)$ is the gradient with components $Q_{x_i} = \partial Q / \partial x_i$, $Q_{x_i x_j} = \partial^2 Q / \partial x_i \partial x_j$, and $b'$ is the transpose of $b$.
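For concreteness, the operator $D(a, b)$ can be evaluated directly for small $d$. The following sketch (not from the paper; $Q$, $a$ and $b$ are arbitrary choices with $d = 2$, $m = 1$, and the derivatives of $Q$ are entered by hand) simply implements the displayed formula.

```python
# Illustration of D(a,b)Q for d = 2, m = 1, with Q(y) = y1^2 + y1*y2.
a = (0.3, 1.0)          # drift vector, d x 1
b = (0.5, 0.0)          # diffusion matrix, d x m with m = 1

def Q(y):    return y[0] ** 2 + y[0] * y[1]
def Q_x(y):  return (2 * y[0] + y[1], y[0])      # gradient, by hand
Q_xx = ((2.0, 1.0), (1.0, 0.0))                  # Hessian (constant here)

def D(a, b, y):
    bbT = [[b[i] * b[j] for j in range(2)] for i in range(2)]  # b b'
    first = sum(Q_x(y)[i] * a[i] for i in range(2))
    second = 0.5 * sum(Q_xx[i][j] * bbT[i][j]
                       for i in range(2) for j in range(2))
    return first + second

y = (1.0, -2.0)
print(D(a, b, y))   # (0*0.3 + 1*1.0) + 0.5*(2*0.25) = 1.25
```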
We now specify $\Sigma(x)$ by specifying the possible values of $\alpha$ and $\beta$. To this end, for each $x \in \Gamma$ let $C(x)$ be a nonempty set of pairs $(a, b)$, where $a \in R^d$ and $b$ is a real $d \times m$ matrix. (The idea is that $C(x)$ is the set from which a player at state $x$ may choose the value of $(\alpha, \beta)$.) Assume also that every available process $X$ is absorbed at the time $T_X$ of its first exit from $\Gamma^\circ$, the interior of $\Gamma$. These conditions define a gambling house $\Sigma_C$ on $\Gamma$ where $\Sigma_C(x)$ is the collection of all processes $X$ having paths in $\Gamma$ and satisfying (2.5), (2.6) and (2.7) together with
(2.8) $(\alpha(t, \omega), \beta(t, \omega)) \in C(X_t(\omega))$ for all $(t, \omega)$,
(2.9) $(\alpha(t, \omega), \beta(t, \omega)) = (0, 0)$ for $t > T_X(\omega)$,
(2.10) $C(x) \supseteq \{(0, 0)\}$ for $x \in \Gamma - \Gamma^\circ$.
Let $\Sigma$ be a gambling house such that $\Sigma(x) \subseteq \Sigma_C(x)$ for every $x \in \Gamma$.
The following proposition, which is related to Lemmas 2 and 3 of [9], will be applied in the next two sections.
PROPOSITION. Let $G$ be an open subset of $R^d$ which contains $\Gamma$. Suppose $Q \colon G \to R$ and $Q_n \colon G \to R$ for $n = 1, 2, \dots$. Suppose also that each $Q_n$ has continuous second-order derivatives on $G$ and that
(i) $\lim_{n \to \infty} Q_n(x) = Q(x)$ for every $x \in \Gamma$.
Assume the following conditions for every $x \in \Gamma$ and every $X \in \Sigma(x)$:
(ii) $Q(X) \ge u(X)$, where $Q(X) = E[\limsup_{t \to \infty} Q(X_t)]$ is assumed to be well-defined.
(iii) There exists a sequence $\{k_n\}$ of nonnegative constants such that $\lim_{n \to \infty} k_n = 0$ and, with probability one, for all $n$ and all $t \ge 0$,
$$D(\alpha(t), \beta(t))Q_n(X_t) \le k_n.$$
(Here $\alpha$ and $\beta$ are related to $X$ by (2.5).)
(iv) There exist integrable random variables $Z, Y_1, Y_2, \dots$ such that, for all $n$ and all $t \ge 0$,
$$Z \le Q_n(X_t) \le Y_n.$$
Then $Q \ge V$.
The following lemma is the chief tool for the proof of the proposition.
LEMMA. Suppose $Q \colon G \to R$ has continuous second-order derivatives, $x_0 \in \Gamma$, $X \in \Sigma(x_0)$, and $\tau$ is an almost surely finite $\{\mathcal{F}_t\}$-stopping time. Also assume
(i) there is a nonnegative constant $k$ such that, with probability one, for all $s \ge 0$,
$$D(\alpha(s), \beta(s))Q(X_s) \le k;$$
(ii) there exist integrable random variables $Y$ and $Z$ such that, for all $t \ge 0$,
$$Z \le Q(X_t) \le Y.$$
Then
$$EQ(X_\tau) \le Q(x_0) + kE\tau.$$
Proof. Apply Ito's lemma to write
(2.11) $Q(X_t) = Q(x_0) - A_t + M_t = Q(x_0) - (kt + A_t) + kt + M_t$,
where
$$A_t = -\int_0^t D(\alpha(s), \beta(s))Q(X_s)\,ds, \qquad M_t = \int_0^t Q_x(X_s)\beta(s)\,dW_s.$$
(Here $\alpha$ and $\beta$ are related to $X$ by (2.5) and satisfy (2.6) and (2.7).)
Assume without loss of generality that $E\tau < \infty$. Hence,
(2.12) $EQ(X_\tau) = Q(x_0) + kE\tau + E[M_\tau - (k\tau + A_\tau)]$.
It suffices to show that the final expectation in (2.12) is less than or equal to zero. By condition (i), $-(k\tau + A_\tau) \le 0$. We will show that $EM_\tau \le 0$. (Notice that $EM_\tau$ is well-defined by the first equality in (2.11) and condition (ii).)
Let $\{T_j\}$ be a sequence of stopping times such that $\{M_{T_j \wedge t}, \mathcal{F}_t\}$ is a uniformly integrable martingale for every $j$ and $T_j \uparrow \infty$ a.s. Let $B_j = [\tau > T_j]$. Then $M_\tau = M_{\tau \wedge T_j}$ on $B_j^c$, and, by the first equality in (2.11) together with conditions (i) and (ii),
$$-M_{T_j} \le -Z + k\tau + Q(x_0) \quad \text{on } B_j.$$
That is,
$$\int_{B_j^c} M_\tau = -\int_{B_j} M_{T_j} \le \int_{B_j} [-Z + k\tau + Q(x_0)].$$
Let $j \to \infty$ and conclude $EM_\tau \le 0$. $\square$
Proof of the proposition. Let $x_0 \in \Gamma$ and $X \in \Sigma(x_0)$. By condition (ii) and [9, Lemma 1], it suffices to show
(2.13) $EQ(X_\tau) \le Q(x_0)$
for every almost surely finite stopping time $\tau$.
Assume first that $\tau$ is bounded. Then, by the lemma and Fatou's inequality,
$$EQ(X_\tau) \le \liminf_n EQ_n(X_\tau) \le \lim_n Q_n(x_0) + \left(\lim_n k_n\right)E\tau = Q(x_0).$$
If $\tau$ is unbounded, use Fatou's inequality again:
$$EQ(X_\tau) \le \liminf_n EQ(X_{\tau \wedge n}) \le Q(x_0). \qquad \square$$
3. Minimizing or maximizing the expected time to reach zero. The problems described in the Introduction will now be formulated as continuous-time gambling problems in $R^2$. Consider first the problem of minimizing expected time. The first coordinate, $x_1$, of the state vector $x$ will correspond to the player's position on $(-\infty, 0]$, while the second coordinate, $x_2$, will represent time.
It is convenient to allow negative as well as positive times and define
$$\Gamma = \{x \in R^2 \colon -\infty < x_1 \le 0\}.$$
Because the object is to minimize expected time, let
$$u(x) = -x_2.$$
Recall the notation from §2. The interior of $\Gamma$ is $\Gamma^\circ = (-\infty, 0) \times (-\infty, \infty)$, and by our conventions each available process $X$ will be absorbed at time
$$T = T_X = \inf\{t \colon X_1(t) = 0\}.$$
In the present example the set $C(x)$ will not depend on $x$ for $x \in \Gamma^\circ$. Let $\mathcal{S} \subseteq R \times [0, \infty)$,
and let
(3.0) $C_0 = \{((\mu, 1)', (\sigma, 0)') \colon (\mu, \sigma) \in \mathcal{S}\}$,
with $C(x) = C_0$ for $x \in \Gamma^\circ$. Every $X \in \Sigma_C(x)$ can be specified by stochastic differential equations
(3.1) $dX_1(t) = \mu(t)\,dt + \sigma(t)\,dW_t$, $\quad dX_2(t) = dt$, $\quad X_1(0) = x_1$, $X_2(0) = x_2$,
where $\mu$ and $\sigma$ are progressively measurable and $(\mu(t), \sigma(t)) \in \mathcal{S}$ for $t < T$, and $X_t = X_T$ for $t \ge T$. Note that for every $X \in \Sigma_C(x)$ the second coordinate process $\{X_2(t)\}$ increases deterministically at rate 1 up to time $T$, and by (2.4) and the definition of $u$,
(3.2) $u(X) = -x_2 - ET$.
Now let
(3.3) $\Sigma(x) = \{X \in \Sigma_C(x) \colon ET < \infty\}$.
From (3.2) and (3.3) one sees that
(3.4) $V(x_1, x_2) = V(x_1) - x_2$,
where $V(x_1) = V(x_1, 0)$. Furthermore, for $x_1 < y < 0$, a strategy starting at $x_1$ and minimizing the time to 0 must first minimize the time to $y$ and, having gotten there, minimize the time to 0. This argument leads to $V(x_1) = V(x_1 - y) + V(y)$. Since $V$ is also continuous and vanishes at the origin, one may conclude
$$V(x_1) = \lambda x_1,$$
where $\lambda \ge 0$ depends on $\mathcal{S}$. (We omit a formal proof because we will not rely on this formula below.)
If in (3.1) $\mu(t) = \mu(X_1(t))$ and $\sigma(t) = \sigma(X_1(t))$, where $\mu$ and $\sigma$ are measurable real-valued functions on $(-\infty, 0)$, we say that $X$ is given by a stationary Markovian strategy. For given functions $\mu$ and $\sigma$, the process $X$ as defined by (3.1) then depends only on the initial conditions, so we may write $u(X) = v(x_1, x_2)$ and, from (3.2),
(3.5) $v(x_1, x_2) = v(x_1) - x_2$,
where $v(x_1) = v(x_1, 0)$. Now $v(x_1)$ can be obtained explicitly. Assume for simplicity that $\mu$ and $\sigma$ are piecewise continuous functions and that $\sigma(x_1) \ge \sigma_0 > 0$ for all $x_1$. Note that if $\sigma(x_1) \ge \sigma_0 > 0$ and $\sigma$ and $\mu$ are measurable and bounded on compact sets, then (3.1) will have a unique weak solution: for $\mu$ vanishing this is shown by a time-change argument ([4, Chap. IV, Ex. 4.2]), and for $\mu$ not vanishing one uses transformation of drift ([4, Chap. IV, Thm. 4.2]).
By definition $v(x_1)$ is simply the negative of the expected time it takes the diffusion to reach the origin if it is started at $x_1$. If $X \in \Sigma(x)$, then $T$ is finite with probability one and $v(x_1)$ is the limit as $M \to \infty$ of $-v_M(x_1)$, where $v_M(x_1)$ is the expected time to exit the interval $[-M, 0]$. Let us set
$$a(x_1) = \sigma^2(x_1).$$
Then $v_M$ is determined by
$$\mu v_M' + \tfrac{1}{2} a v_M'' + 1 = 0, \qquad v_M(0) = v_M(-M) = 0.$$
Solving for $v_M$ and letting $M \to \infty$ gives
(3.6) $v'(x_1) = e^{-B(x_1)} \int_{-\infty}^{x_1} e^{B(z)} \dfrac{2}{a(z)}\,dz$,
where
(3.7) $B(x_1) = \int_r^{x_1} \dfrac{2\mu(y)}{a(y)}\,dy$,
and $r$ is an arbitrary point in $(-\infty, 0]$. Of course,
(3.8) $v(x_1) = \int_0^{x_1} v'(y)\,dy$.
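As a numerical sanity check on (3.6)–(3.8) (an illustration, not part of the paper), one can evaluate the integral formula for constant coefficients $\mu_0, \sigma_0$ by quadrature and recover $v'(x_1) = 1/\mu_0$, which is the content of (3.10) below. The parameter values and the truncation of the lower limit are arbitrary choices.

```python
import math

mu0, sig0 = 0.8, 1.3
a0 = sig0 ** 2

def B(x):                        # B(x) = int_r^x 2*mu/a dy with r = 0
    return 2.0 * mu0 / a0 * x

def v_prime(x1, lower=-40.0, n=200000):
    # trapezoidal approximation of exp(-B(x1)) * int_{lower}^{x1} exp(B(z)) 2/a dz
    h = (x1 - lower) / n
    s = 0.5 * (math.exp(B(lower)) + math.exp(B(x1))) * 2.0 / a0
    for i in range(1, n):
        s += math.exp(B(lower + i * h)) * 2.0 / a0
    return math.exp(-B(x1)) * s * h

print(abs(v_prime(-1.0) - 1.0 / mu0) < 1e-3)  # v' == 1/mu_0, so v(x1) = x1/mu_0
```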
Recall (see, for example, [5]) that the diffusion determined by $\mu$ and $a$ has a scale function and speed measure determined respectively by
(3.9) $dp(x_1) = e^{-B(x_1)}\,dx_1, \qquad dm(x_1) = \dfrac{2}{a(x_1)}\, e^{B(x_1)}\,dx_1$.
The formulas (3.9) are correct under the same conditions mentioned above for the existence of a unique weak solution. Note first that if $p$ is as in (3.9), then $p(X_1(t))$ is a martingale, as can be seen by applying the Ito formula as extended by Krylov [7]. To check the speed measure, first consider the case where $\mu$ vanishes. Then, as remarked above, the diffusion can be obtained by a time change of a Brownian motion $\hat{W}$, the time change being the inverse of $\phi_t = \int_0^t \sigma^{-2}(\hat{W}(s))\,ds$. Comparing this with the Ito–McKean construction of diffusions we find that the speed measure satisfies $m(dy) = 2\sigma^{-2}(y)\,dy$; see [2, Thm. 2.123]. When there is drift present, consider the drift-free diffusion $Y_t = p(X_1(t))$. From the Ito formula we can read off the diffusion coefficient of $Y$, hence also the speed measure $\hat{m}$ for $Y$, and since the speed measure for $X_1$ is given by $m(dx) = \hat{m}(p(dx))$, it is now determined and indeed given as in (3.9).
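The scale density in (3.9) satisfies $\mu p' + \frac{1}{2} a p'' = 0$, the differential identity behind the martingale property of $p(X_1(t))$. This can be spot-checked numerically; the coefficients $\mu$ and $a$ below are arbitrary smooth sample choices, and the sketch is an illustration only.

```python
import math

def mu(x):  return 0.4 + 0.1 * math.sin(x)   # sample smooth drift
def a(x):   return 1.0 + 0.2 * math.cos(x)   # a = sigma^2, bounded below

def B(x, r=0.0, n=4000):                     # B(x) = int_r^x 2 mu/a dy (trapezoid)
    h = (x - r) / n
    s = 0.5 * (2 * mu(r) / a(r) + 2 * mu(x) / a(x))
    for i in range(1, n):
        y = r + i * h
        s += 2 * mu(y) / a(y)
    return s * h

def p_prime(x):  return math.exp(-B(x))      # scale density from (3.9)

x, h = -1.5, 1e-4
p2 = (p_prime(x + h) - p_prime(x - h)) / (2 * h)   # numerical p''
gen = mu(x) * p_prime(x) + 0.5 * a(x) * p2         # generator applied to p
print(abs(gen) < 1e-4)   # p is harmonic for the diffusion: mu p' + (a/2) p'' = 0
```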
Consider now $\mu(t) \equiv \mu_0$, $\sigma(t) \equiv \sigma_0$, where $\mu_0$ and $\sigma_0$ are constants. This will determine a diffusion with $ET < \infty$ if and only if $\mu_0 > 0$, and then
(3.10) $v(x_1) = \dfrac{x_1}{\mu_0}$,
which is a special case of (3.6) if $\sigma_0 > 0$ and obvious if $\sigma_0 = 0$.
It is natural, especially in the light of (3.10), to conjecture that an optimal strategy is to choose the drift $\mu$ to achieve the supremum
$$M = \sup\{\mu \colon (\mu, \sigma) \in \mathcal{S} \text{ for some } \sigma\}.$$
As is explained in Remark 2 below, a similar strategy was proposed by Kelly [6] for certain discrete-time problems. However, these "Kelly strategies" need not be optimal if the set of possible $\sigma$'s is unbounded. The exact criterion for our continuous-time problem involves another quantity
$$I = \inf_{\varepsilon > 0} \sup\{\mu + \varepsilon\sigma^2 \colon (\mu, \sigma) \in \mathcal{S}\}.$$
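For a finite control set both quantities are elementary to compute; the set $S$ below is a made-up example. Note that $\sup\{\mu + \delta\sigma^2\}$ shrinks to $M$ as $\delta \downarrow 0$ whenever it is finite for some $\delta$.

```python
# Hypothetical finite control set S of pairs (mu, sigma).
S = [(0.5, 1.0), (1.0, 2.0), (-0.5, 4.0)]

M = max(mu for mu, sigma in S)

def I(delta):
    return max(mu + delta * sigma ** 2 for mu, sigma in S)

print(M == 1.0)                       # M = sup mu
print(abs(I(0.1) - 1.4) < 1e-12)      # sup of {0.6, 1.4, 1.1}
print(abs(min(I(10.0 ** -k) for k in range(1, 8)) - M) < 1e-5)  # I(delta) -> M
```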
THEOREM 1. Let $x \in \Gamma^\circ$.
(a) If $0 < M < \infty$ and $I < \infty$, then $V(x) = x_1/M - x_2$. If in addition $(M, \sigma_0) \in \mathcal{S}$, then the process $X \in \Sigma(x)$ with $\mu(t) \equiv M$ and $\sigma(t) \equiv \sigma_0$ is optimal.
(b) If $M \le 0$ and $I < \infty$, then $V(x) = -\infty$.
(c) If $M = \infty$ or $I = \infty$, then $V(x) = -x_2$ (i.e., the origin can be reached in an arbitrarily small expected time).
Proof. (a) Let $Q(x) = x_1/M - x_2$. It is clear from (3.5) and (3.10) that $Q \le V$. It remains to verify that $Q \ge V$. (Once this is done, the final assertion of (a) will follow from (3.5) and (3.10).) This inequality will be proved by applying the proposition of §2.
For $\delta > 0$, let
$$I(\delta) = \sup\{\mu + \delta\sigma^2 \colon (\mu, \sigma) \in \mathcal{S}\},$$
and notice that by condition (a), $I(\delta) < \infty$ for $\delta$ sufficiently small. For such $\delta$, let
$$Q_\delta(x) = \frac{e^{2\delta x_1} - 1}{2\delta I(\delta)} - x_2.$$
Now verify the conditions of the proposition, with $Q = Q_\delta$ and $Q_n = Q_\delta$ for each $n$. Condition (i) is automatic and (ii) follows easily. With $k_n = 0$ for each $n$, it is routine to check that (iii) holds. As to condition (iv), observe that in the formula for $Q_\delta(x)$ the first term on the right is bounded uniformly in $x_1$ for each fixed $\delta$. So $Q_\delta(X_t)$ is bounded above and below by a constant minus $X_2(t)$, and since $x_2 \le X_2(t) \le x_2 + T$ and $X \in \Sigma(x)$ implies that $T$ is integrable, the proposition gives $Q_\delta \ge V$. Finally, because $I(\delta) \to M$ and $Q_\delta \to Q$ as $\delta \to 0$, it follows that $Q \ge V$.
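The heart of part (a) is the pointwise bound $D(a, b)Q_\delta(x) \le 0$ on $\Gamma$. Taking $Q_\delta(x) = (e^{2\delta x_1} - 1)/(2\delta I(\delta)) - x_2$ as in the proof, this can be spot-checked numerically; the finite set $S$, the value of $\delta$, and the grid of test points below are all illustrative assumptions, and the derivatives of $Q_\delta$ are computed by hand in the comment.

```python
import math

S = [(0.5, 1.0), (1.0, 2.0), (-0.5, 4.0)]   # made-up finite control set
delta = 0.05
I_delta = max(mu + delta * s ** 2 for mu, s in S)

def DQ(mu, sigma, x1):
    # By hand: dQ/dx1 = e^{2 delta x1}/I(delta),
    #          d2Q/dx1^2 = 2 delta e^{2 delta x1}/I(delta), dQ/dx2 = -1,
    # so D(a,b)Q_delta = e^{2 delta x1} (mu + delta sigma^2)/I(delta) - 1.
    return math.exp(2 * delta * x1) * (mu + delta * sigma ** 2) / I_delta - 1.0

worst = max(DQ(mu, s, x1) for mu, s in S for x1 in (-10.0, -1.0, -0.1, 0.0))
print(worst <= 0.0)  # condition (iii) holds with k_n = 0
```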
(b) We reduce the result to (a). Let $\varepsilon > 0$ and consider a new problem based on the set
$$\mathcal{S}_\varepsilon = \mathcal{S} \cup \{(\varepsilon, 0)\}.$$
The quantity corresponding to $M$ for the new problem is $M_\varepsilon = \varepsilon$. Thus part (a) can be applied to obtain the value function
$$V_\varepsilon(x) = \frac{x_1}{\varepsilon} - x_2.$$
Clearly $V(x) \le V_\varepsilon(x) \to -\infty$ as $\varepsilon \to 0$.
(c) If $M = \infty$, the desired conclusion $V(x) = -x_2$ follows easily from (3.10). So assume now that $M < \infty$ and $I = \infty$. Then there exist $(\mu_i, \sigma_i) \in \mathcal{S}$, $\sigma_i > 0$, with
(3.11) $\mu_i \ge -h(a_i)a_i$, $\quad a_i = \sigma_i^2$, $\quad i = 1, 2, \dots$,
where $h(s)$ is a nonnegative function on $[0, \infty)$ which decreases to zero as $s \to \infty$. Let
$$\mu(x_1) = \mu_{i(x_1)}, \qquad \sigma(x_1) = \sigma_{i(x_1)},$$
where $i$ is a function from $(-\infty, 0)$ to the positive integers with $i(x_1)$ increasing rapidly to $\infty$ as $x_1$ decreases to $-\infty$. Use (3.6), (3.7) and (3.11) to obtain
$$v'(x_1) \le \int_{-\infty}^{x_1} \frac{2}{a(z)} \left[\exp\left(2\int_z^{x_1} h(a(y))\,dy\right)\right] dz.$$
For any $\varepsilon > 0$ we can choose $i$ so that $a(y) = a_{i(y)}$ increases sufficiently fast that both integrals occurring in the final expression are arbitrarily small. It follows that $i$ can be chosen to make $v$ as small as desired, and then (3.8) gives the desired conclusion. (Notice $T < \infty$ with probability one because $p(-\infty) = -\infty$ by (3.9) and $a$ is bounded below by $a_1$.)
For the maximization problem it seems natural to work on $[0, \infty)$ rather than $(-\infty, 0]$ and to think of maximizing the expected time until bankruptcy occurs. Here is the formal definition of the gambling problem:
$$\Gamma = \{x \in R^2 \colon 0 \le x_1 < \infty\}, \qquad u(x) = x_2, \qquad C(x) = C_0 \text{ for } x \in \Gamma^\circ,$$
where $C_0$ is given by (3.0), and $\Sigma(x) = \Sigma_C(x)$. Then, for $x \in \Gamma^\circ$ and $X \in \Sigma(x)$,
(3.12) $u(X) = x_2 + ET$,
where $T = \inf\{t \colon X_1(t) = 0\}$. As before,
$$V(x_1, x_2) = V(x_1) + x_2,$$
where $V(x_1) = V(x_1, 0)$.
It is natural, as it was for the minimization problem, to conjecture that an optimal strategy will choose $\mu$ to achieve
$$M = \sup\{\mu \colon (\mu, \sigma) \in \mathcal{S} \text{ for some } \sigma\}.$$
This time the conjecture is essentially correct.
THEOREM 2. Let $x \in \Gamma^\circ$.
(a) If $M < 0$, then $V(x) = -x_1/M + x_2$. If in addition $(M, \sigma_0) \in \mathcal{S}$, then the process $X \in \Sigma(x)$ with $\mu(t) \equiv M$ and $\sigma(t) \equiv \sigma_0$ is optimal.
(b) If $M \ge 0$, then $V(x) = \infty$.
Proof. Suppose $X$ is given by a stationary Markov strategy $\mu(t) \equiv \mu_0$, $\sigma(t) \equiv \sigma_0$, where $\mu_0$ and $\sigma_0$ are constants. Because we have changed from $(-\infty, 0]$ to $[0, \infty)$, formulas (3.5), (3.10) and (3.12) now imply
(3.13) $u(X) = -\dfrac{x_1}{\mu_0} + x_2$ if $\mu_0 < 0$, and $u(X) = \infty$ if $\mu_0 \ge 0$.
Part (b) of the theorem is immediate. For (a), let $Q(x) = -x_1/M + x_2$. By (3.13), $Q \le V$. The reverse inequality will be proved by another application of the proposition of §2.
Let $\{\beta_n\}$ be a sequence of numbers in the interval $(0, 1)$ which increase up to 1. Define
$$Q_n(x) = \frac{e^{\lambda(\beta_n)x_1} - 1}{\log \beta_n} + x_2\beta_n^{x_2},$$
where
$$\lambda(\beta) = \frac{-M - \sqrt{M^2 - 2\sigma_0^2 \log \beta}}{\sigma_0^2}$$
and $\sigma_0 > 0$. (The first term on the right-hand side in the definition of $Q_n(x)$ is equal to the expectation of $\int_0^T (\beta_n)^s\,ds$ for a process with $\mu(t) \equiv M$, $\sigma(t) \equiv \sigma_0$, and thus corresponds to a discounted payoff.)
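Two properties behind this choice of $Q_n$ can be checked numerically (the values of $M$, $\sigma_0$, $x_1$ and $\beta$ below are arbitrary test values): $\lambda(\beta)$ is a root of $\frac{1}{2}\sigma_0^2\lambda^2 + M\lambda + \log\beta = 0$, the equation underlying the discounted-payoff interpretation, and the first term of $Q_n$ tends to $-x_1/M$ as $\beta \uparrow 1$, as condition (i) of the proposition requires.

```python
import math

M, sig0 = -0.7, 1.1          # sample values with M < 0, sigma_0 > 0

def lam(beta):
    return (-M - math.sqrt(M * M - 2.0 * sig0 ** 2 * math.log(beta))) / sig0 ** 2

beta = 0.9
resid = 0.5 * sig0 ** 2 * lam(beta) ** 2 + M * lam(beta) + math.log(beta)
print(abs(resid) < 1e-12)    # lambda(beta) is a root of the quadratic

x1 = 2.0
first_term = (math.exp(lam(0.9999) * x1) - 1.0) / math.log(0.9999)
print(abs(first_term - (-x1 / M)) < 1e-3)   # tends to -x1/M as beta -> 1
```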
Condition (i) of the proposition is easily verified, and (ii) is obvious because $Q \ge u$. For (iii), let $(a, b) \in C(x)$, where $a = (\mu, 1)'$ and $b = (\sigma, 0)'$, and calculate (writing $\beta$ for $\beta_n$), with
$$A(\beta) = \frac{e^{\lambda(\beta)x_1}}{\log \beta},$$
$$D(a, b)Q_n(x) = A(\beta)\lambda(\beta)\left(\mu + \frac{\lambda(\beta)\sigma^2}{2}\right) + \beta^{x_2}(1 + x_2\log\beta) \le A(\beta)\lambda(\beta)M + \beta^{x_2}(1 + x_2\log\beta).$$
The inequality holds because $\mu \le M$, $\lambda(\beta) < 0$, and $A(\beta) < 0$. Now recall that $X_2(s)$ is an increasing process, and (iii) will follow after some calculus. Condition (iv) is an easy consequence of the definition of $Q_n$ together with the facts that $X_1(s) \ge 0$ and $X_2(s) \ge X_2(0)$.
Remark 1. Since the set $\mathcal{S}$ is not assumed to be bounded, and the $\sigma$ with $(\mu, \sigma) \in \mathcal{S}$ are not bounded away from zero, the usual approach via Bellman's equation for the value function $V$ could not be used above. For a continuous-time gambling problem as defined in §2 the Bellman equation can be written in the form
(3.14) $\sup D(a, b)V(x) = 0$,
where the supremum is taken over all $(a, b) \in C(x)$. For the minimization problem of this section, (3.4) applies and (3.14) becomes
(3.15) $\sup_{(\mu, \sigma) \in \mathcal{S}} \left[\mu V'(x_1) + \tfrac{1}{2}\sigma^2 V''(x_1) - 1\right] = 0$.
Under condition (c) of Theorem 1 the value function $V(x_1) \equiv 0$ does not satisfy (3.15). Furthermore, if $I = \infty$ and $M < \infty$ with $(M, \sigma_0) \in \mathcal{S}$, the function $x_1/M$ does solve (3.15) but does not represent the value function. Under condition (a) of the theorem the value function $V(x_1) = x_1/M$ is a solution of (3.15), but this fact does not follow from standard theorems.
Remark 2. Consider the problem of a process on the interval $0 < \hat{x}_1 \le 1$ determined by the equation
(3.16) $d\hat{X}_1(t) = \hat{X}_1(t)[\hat{\mu}(t)\,dt + \hat{\sigma}(t)\,dW_t]$, $\quad \hat{X}_1(0) = \hat{x}_1$,
where $\hat{\mu}(t)$, $\hat{\sigma}(t)$ are nonanticipating controls required to satisfy $(\hat{\mu}(t), \hat{\sigma}(t)) \in \hat{\mathcal{S}}$, and the object is to minimize the expectation of $\hat{T} = \inf\{t \colon \hat{X}_1(t) = 1\}$. This problem reduces to that of Theorem 1 by the change of variables $X_1(t) = \log \hat{X}_1(t)$. This follows from Ito's formula, and one finds $\mu(t) = \hat{\mu}(t) - \hat{\sigma}^2(t)/2$, $\sigma(t) = \hat{\sigma}(t)$. So one can formulate the theorem to apply to the $\hat{X}_1$ process. Note that the role of $M$ is assumed by
$$\hat{M} = \sup\{\hat{\mu} - \hat{\sigma}^2/2 \colon (\hat{\mu}, \hat{\sigma}) \in \hat{\mathcal{S}}\}$$
and the role of $I$ is taken by
$$\hat{I} = \inf_{\varepsilon > 0} \sup\{\hat{\mu} - (\tfrac{1}{2} - \varepsilon)\hat{\sigma}^2 \colon (\hat{\mu}, \hat{\sigma}) \in \hat{\mathcal{S}}\}.$$
The problem for the $\hat{X}_1$ process was considered in [9] and [3] and solved under some restrictions on $\hat{\mathcal{S}}$. In [9] it was assumed that $\lambda\hat{\mathcal{S}} \subseteq \hat{\mathcal{S}}$ for all $\lambda \ge 0$, while in [3] this assumption was needed only for $0 \le \lambda \le 1$.
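The change of variables can be checked pathwise in a small simulation (an illustrative sketch with arbitrary constant controls, ignoring the absorbing barrier at 1, which does not affect the comparison): driving (3.16) and the transformed equation with the same Brownian increments keeps $\log \hat{X}_1(t)$ and $X_1(t)$ together up to discretization error.

```python
import math, random

muh, sigh = 0.12, 0.3          # sample constant controls (hat-mu, hat-sigma)
random.seed(1)
dt, n = 1e-4, 20000
Xh, X = 0.5, math.log(0.5)     # hat-X_1, and X_1 = log hat-X_1
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    Xh += Xh * (muh * dt + sigh * dW)              # equation (3.16)
    X += (muh - sigh ** 2 / 2.0) * dt + sigh * dW  # transformed drift
print(abs(math.log(Xh) - X) < 0.01)  # same path up to discretization error
```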
As discussed in [9] and [3], various models lead to the problem on $[0, 1]$. One of these, the "portfolio problem," will be explained in §4. In [6] Kelly introduced a plan in discrete time based on the criterion of maximizing, at each stage, the expected value of the logarithm. This "Kelly criterion" was further studied by Breiman [1], who established certain asymptotic optimality properties. Theorem 1 may be interpreted to imply that a continuous-time Kelly criterion is in fact optimal under the hypotheses of (a), but not under those of (c).
4. A portfolio problem. Consider the problem of managing a portfolio of stocks, bonds and cash so as to minimize the expected time to reach a given total worth. For a simple model suppose that there is one bond whose price $B_t$ at time $t$ satisfies
$$dB_t = r_B B_t\,dt,$$
and one stock whose price $S_t$ at time $t$ satisfies
$$dS_t = r_S S_t\,dt + \sigma_S S_t\,dW_t,$$
where $r_B$, $r_S$ and $\sigma_S$ are positive constants and $\{W_t\}$ is a standard Brownian motion. A recent paper by Malliaris [8] explains the use of stochastic differential models in finance and has numerous references to the financial literature. Let $\hat{X}_1(t)$ be the total fortune of an investor at time $t$, let $f_S(t)$ be the fraction of that fortune invested in the stock, and let $f_B(t)$ be the fraction invested in the bond. Then $\hat{X}_1$ satisfies
(4.1) $d\hat{X}_1(t) = \hat{X}_1(t)\{[r_S f_S(t) + r_B f_B(t)]\,dt + \sigma_S f_S(t)\,dW_t\}$.
Let
$$\hat{\mathcal{S}} = \{(\hat{\mu}, \hat{\sigma}) \colon \hat{\mu} = r_S f_S + r_B f_B,\ \hat{\sigma} = \sigma_S f_S,\ f_B \ge 0,\ f_S \ge 0,\ f_B + f_S \le 1\}.$$
Then (4.1) and $\hat{X}_1(0) = \hat{x}_1$ are equivalent to (3.16) and $(\hat{\mu}(t), \hat{\sigma}(t)) \in \hat{\mathcal{S}}$. We are in the situation of Remark 2 of §3. Theorem 1 applies, and one is in case (a). If $r_B > r_S$ one should obviously take $f_B(t) \equiv 1$. If $r_B \le r_S$ one finds
$$\hat{M} = \begin{cases} r_B + \dfrac{(r_S - r_B)^2}{2\sigma_S^2} & \text{if } r_S \le r_B + \sigma_S^2, \\[1ex] r_S - \dfrac{\sigma_S^2}{2} & \text{otherwise.} \end{cases}$$
The corresponding optimal policies are given by $f_B = 1 - f_S$ and
$$f_S = \begin{cases} \dfrac{r_S - r_B}{\sigma_S^2} & \text{if this is less than 1}, \\[1ex] 1 & \text{otherwise.} \end{cases}$$
In particular, the Kelly strategy is optimal.
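These formulas agree with a brute-force maximization of $\hat{\mu} - \hat{\sigma}^2/2 = r_S f_S + r_B(1 - f_S) - \sigma_S^2 f_S^2/2$ over $f_S \in [0, 1]$, taking $f_B = 1 - f_S$ (cash is dominated by the bond when $r_B > 0$). The market parameters below are arbitrary sample values.

```python
r_B, r_S, sig_S = 0.04, 0.10, 0.25   # sample market parameters, r_B <= r_S

def kelly_objective(f_S):            # hat-mu - hat-sigma^2/2 with f_B = 1 - f_S
    return r_S * f_S + r_B * (1.0 - f_S) - (sig_S * f_S) ** 2 / 2.0

best_f = max((i / 10000.0 for i in range(10001)), key=kelly_objective)
f_star = min((r_S - r_B) / sig_S ** 2, 1.0)
M_hat = (r_B + (r_S - r_B) ** 2 / (2.0 * sig_S ** 2)
         if r_S <= r_B + sig_S ** 2 else r_S - sig_S ** 2 / 2.0)
print(abs(best_f - f_star) < 1e-3)              # grid optimum matches closed form
print(abs(kelly_objective(best_f) - M_hat) < 1e-6)
```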
REFERENCES
[1] LEO BREIMAN, Optimal gambling systems for favorable games, Proc. Fourth Berkeley Symposium on Mathematical Statistics and Probability, Univ. of California Press, Berkeley, CA, 1961, pp. 65-78.
[2] DAVID FREEDMAN, Brownian Motion and Diffusion, Holden-Day, San Francisco, CA, 1971.
[3] D. HEATH AND W. SUDDERTH, Continuous time portfolio management: minimizing the expected time to reach a goal, Preprint, Institute for Mathematics and its Applications, Univ. of Minnesota, Minneapolis, MN, 1984.
[4] NOBUYUKI IKEDA AND SHINZO WATANABE, Stochastic Differential Equations and Diffusion Processes, Kodansha, Tokyo, 1981.
[5] K. ITO AND H. P. MCKEAN, JR., Diffusion Processes and their Sample Paths, Academic Press, New York, 1965.
[6] J. L. KELLY, JR., A new interpretation of information rate, Bell System Tech. J., 35 (1956), pp. 917-926.
[7] N. V. KRYLOV, Controlled Diffusion Processes, Springer-Verlag, New York, 1980.
[8] A. G. MALLIARIS, Ito's calculus in financial decision making, SIAM Rev., 25 (1983), pp. 481-496.
[9] VICTOR C. PESTIEN AND WILLIAM D. SUDDERTH, Continuous-time red and black: how to control a diffusion to a goal, Math. Oper. Res., 10 (1985), pp. 599-611.