An Optimal On-Line Algorithm for Metrical Task Systems

ALLAN BORODIN
University of Toronto, Toronto, Ontario, Canada

NATHAN LINIAL
Hebrew University, Jerusalem, Israel, and IBM Almaden Research Center, San Jose, California

AND

MICHAEL E. SAKS
University of California at San Diego, San Diego, California
Abstract. In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms.

Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, ..., Tk of tasks is a sequence s1, s2, ..., sk of states where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred.

An on-line scheduling algorithm is one that chooses si only knowing T1 T2 ... Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum w for which there is a w-competitive on-line scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| − 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized on-line scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i ≠ j), the expected competitive ratio is O(log |S|).
A. Borodin was supported in part while he was a Lady Davis visiting professor at the Hebrew University in Jerusalem. M. E. Saks was supported in part by National Science Foundation grants DMS 87-03541 and CCR 89-11388 and by Air Force Office of Scientific Research grant AFOSR-0271.
Authors’ addresses: A. Borodin, Department of Computer Science, University of Toronto, Toronto, Ontario, Canada M5S 1A4; N. Linial, Institute of Computer Science and Mathematics, Hebrew University, Jerusalem, Israel; M. E. Saks, Department of Computer Science, University of California at San Diego, San Diego, CA.
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1992 ACM 0004-5411/92/1000-0745 $01.50
Journal of the Association for Computing Machinery, Vol. 39, No. 4, October 1992, pp. 745–763.
A. BORODIN ET AL.
746
Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems—sequencing and scheduling; G.3 [Probability and Statistics]: probabilistic algorithms (including Monte Carlo)

General Terms: Algorithms, Performance, Theory

Additional Key Words and Phrases: Competitive analysis, on-line algorithms
1. Introduction
Computers are typically faced with streams of tasks that they have to perform. It is usually the case that a task can be performed in more than one way. For example, the system may have several possible paging schemes available, or the same piece of data may reside in several files, each of which can be accessed. The decision as to how to perform a particular task affects the efficiency of the system in two ways: (1) the cost of the task depends upon the present state of the system, and (2) the state of the system upon completion of the task may affect the cost of subsequent tasks.

There are many other situations in which present actions influence our well being in the future. In economics, one often encounters situations in which the choice between present alternatives influences the future costs/benefits. This is, for example, the case in selecting among investment plans with different interest rates and time spans. The difficulty is, of course, that we have to make our decision based only on the past and the current task we have to perform; it is not even clear how to measure the quality of a proposed decision strategy. The approach usually taken by mathematical economists is to devise some probabilistic model of the future and act on this basis; this is the starting point of the theory of Markov decision processes [13]. The approach we take here, which was first explicitly studied in the seminal paper of Sleator and Tarjan [21], is to compare the performance of a strategy that operates with no knowledge of the future with the performance of the optimal "clairvoyant" strategy, which has complete knowledge of the future. This is therefore a "worst-case" measure of quality. A number of recent papers that preceded this paper analyze various computational problems from this point of view [2, 7, 15, 19, 22], and many more articles have appeared subsequently.
More precisely, consider a system for processing tasks that can be configured into any of a possible set S of states (we take S to be the set {1, 2, ..., n}). For any two states i ≠ j, there is a positive transition cost d(i, j) associated with changing from i to j. We always assume that the distance matrix d satisfies the triangle inequality, d(i, j) + d(j, k) ≥ d(i, k), and that the diagonal entries are 0. The pair (S, d) is called a task system. For any task T, the cost of processing T is a function of the state of the system, so we can view T as a vector T = (T(1), T(2), ..., T(n)) where T(j) is the (possibly infinite) cost of processing the task while in state j. An instance T of the scheduling problem consists of a pair {T^1 T^2 ... T^m, s0} where T^1 T^2 ... T^m is a sequence of tasks and s0 is a specified initial state. T is called a task sequence, and usually we write simply T = T^1 T^2 ... T^m, where it is understood that there is a specified initial state s0. (For most of what we do, the particular value of s0 is unimportant.) A schedule for processing a sequence T = T^1 T^2 ... T^m of tasks is a function σ: {0, ..., m} → S where σ(i) is the state in which T^i is processed (and σ(0) = s0).
The cost of schedule σ on T is the sum of all state transition costs and the task processing costs:

c(T; σ) = Σ_{i=1}^m d(σ(i − 1), σ(i)) + Σ_{i=1}^m T^i(σ(i)).

It will be convenient to view the ith task as arriving at time i and being processed during the time interval [i, i + 1). (Thus, the first task arrives at time 1.) A schedule is also specified by the sequences of state transitions and transition times. Specifically, t1 < t2 < ... < tk is the sequence of times at which a state change occurs, and s = s1, s2, ..., sk is the order in which states are visited (the states are not necessarily distinct, but s_{j−1} ≠ s_j for each j). We also define t0 = 0.
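The schedule cost just defined is easy to compute directly. Here is a minimal Python sketch (ours, not from the paper; the example matrix and tasks are hypothetical):

```python
def schedule_cost(d, tasks, sigma):
    """c(T; sigma): state transition costs plus task processing costs.

    d[i][j]  -- transition cost from state i to state j
    tasks[i] -- the vector T^(i+1), indexed by state
    sigma    -- the schedule as a list [sigma(0), sigma(1), ..., sigma(m)]
    """
    assert len(sigma) == len(tasks) + 1
    transitions = sum(d[sigma[i - 1]][sigma[i]] for i in range(1, len(sigma)))
    processing = sum(task[sigma[i + 1]] for i, task in enumerate(tasks))
    return transitions + processing

# Two states at unit distance; each task is free in one state, costly in the other.
d = [[0, 1], [1, 0]]
tasks = [[5, 0], [0, 5]]
print(schedule_cost(d, tasks, [0, 1, 0]))  # moves both times: cost 2
print(schedule_cost(d, tasks, [0, 0, 0]))  # never moves: cost 5
```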
A scheduling algorithm for (S, d) is a map A that associates to each task sequence T a schedule σ = A(T). The cost of algorithm A on sequence T, denoted c_A(T), is defined to be c(T; A(T)).
It is easy to construct a dynamic programming algorithm that gives an optimal (minimum cost) schedule for any task sequence; we denote this optimal off-line cost by c_O(T). This computation is inherently off-line, since the choice of each state may depend on future tasks. In an on-line scheduling algorithm, the ith state σ(i) of the resulting schedule σ is a function only of the first i tasks T^1 T^2 ... T^i and s0; that is, the tasks are received one at a time in order and the algorithm must output σ(i) upon receiving T^i. To measure the efficiency of an on-line algorithm A as compared to the optimal (off-line) algorithm, we say that A is w-competitive (after [15]) for a positive real number w if there is a constant K_w such that c_A(T) − w·c_O(T) ≤ K_w for any finite task sequence T. Let W_A = {w: A is w-competitive} and define the competitive ratio W(A) of A to be the infimum of W_A. The competitive ratio of the task system, W(S, d), is defined to be the infimum of W(A) over all on-line algorithms A.¹
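The dynamic program mentioned above is straightforward; a minimal sketch (ours; function names are not the paper's):

```python
import math

def optimal_offline_cost(d, tasks, s0):
    """c_O(T): minimum schedule cost over all schedules with sigma(0) = s0.

    cost[s] holds the cheapest cost of processing the tasks seen so far while
    ending in state s; each new task relaxes it over all predecessor states.
    """
    n = len(d)
    cost = [0 if s == s0 else math.inf for s in range(n)]
    for task in tasks:
        cost = [min(cost[p] + d[p][s] for p in range(n)) + task[s]
                for s in range(n)]
    return min(cost)

d = [[0, 1], [1, 0]]
tasks = [[5, 0], [0, 5]]
print(optimal_offline_cost(d, tasks, 0))  # best schedule moves twice: cost 2
```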
If, in addition to satisfying the triangle inequality, the transition cost matrix d is symmetric, that is, d(i, j) = d(j, i), we say that the task system (S, d) is metrical. Our main result is that the competitive ratio of every metrical task system on n states is independent of d:

THEOREM 1.1. For any metrical task system (S, d) with n states, w(S, d) = 2n − 1.
In many situations, the matrix d is not symmetric, since the cost of switching between two states may be more expensive one way than the other. In fact, the algorithm we give can be used to give an upper bound in the general case. For a matrix d that is not necessarily symmetric, define the cycle offset ratio φ(d) to be the maximum over all sequences s0, s1, s2, ..., sk with s0 = sk of the ratio (Σ_{i=1}^k d(s_{i−1}, s_i))/(Σ_{i=1}^k d(s_i, s_{i−1})). (If d is symmetric, then φ(d) = 1.) We show:

THEOREM 1.2. For any task system (S, d) with n states, there is an on-line algorithm A with competitive ratio at most (2n − 1)φ(d).

Since d satisfies the triangle inequality, it is easy to show that φ(d) ≤ n − 1, and so w(S, d) ≤ (n − 1)(2n − 1) for any task system. We do not believe

¹ In our original paper, we used the term waste factor, rather than the now accepted terminology of competitive ratio.
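For small systems, φ(d) can be computed by brute force. The sketch below (ours, exponential in n) restricts attention to simple cycles; this suffices because any closed sequence decomposes into simple cycles, and a ratio of sums never exceeds the largest of the individual ratios:

```python
from itertools import permutations

def cycle_offset_ratio(d):
    """phi(d): maximum over simple cycles of forward cost / reverse cost."""
    n = len(d)
    best = 1.0
    for k in range(2, n + 1):
        for cyc in permutations(range(n), k):
            seq = list(cyc) + [cyc[0]]
            forward = sum(d[seq[i]][seq[i + 1]] for i in range(k))
            reverse = sum(d[seq[i + 1]][seq[i]] for i in range(k))
            best = max(best, forward / reverse)
    return best

# A symmetric d always gives 1; a triangle that is cheap one way around gives 2.
print(cycle_offset_ratio([[0, 1], [1, 0]]))                   # 1.0
print(cycle_offset_ratio([[0, 1, 2], [2, 0, 1], [1, 2, 0]]))  # 2.0
```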
that this is the best possible; our guess is that an O(n) competitive ratio is always achievable.

The work in this paper was motivated by the goal of developing a general theory of on-line algorithms for sequential decision problems. Let us explain how it relates to some of the work done in particular cases, for instance, sequential search of a list. In this case, on-line algorithms have been developed that give a competitive ratio of at most 2 [21]. How does this square with our result, that there is an Ω(|S|) lower bound on the competitiveness of every metrical task system? The difference is that in our model we allow an arbitrary set of tasks, while in the sequential search problem the possible tasks are restricted to a very particular set. The next step in this research is to determine how such restrictions on the set of tasks can affect the competitive ratio. In recent work, Manasse et al. [16] studied such restrictions by extending the model so that a task system consists of (S, d) together with a set T of possible tasks. In particular, they obtain partial results on a very interesting problem called the k-server problem. In our conclusion, we elaborate further on the relation between task systems and the k-server problem.
In Section 2, the lower bound of 2n − 1 for metrical task systems is proved. First, the situation is formulated as a game between a taskmaster and an on-line scheduler, and then a particular taskmaster strategy is described and shown to force a competitive ratio of 2n − 1. The bounds are most easily proved by viewing algorithms as operating in continuous time (allowing nonintegral transition times). In Section 3, such algorithms are defined and it is shown that they can easily be converted to algorithms operating in discrete time. In Section 4, a simple class of on-line algorithms called nearly oblivious algorithms is introduced, whose behavior depends only very weakly on the input task sequence. In Section 5, a simple nearly oblivious algorithm that achieves a competitive ratio of at most 8(n − 1) when d is symmetric is presented. In Section 6, the on-line algorithms promised by Theorems 1.1 and 1.2 are given. In Section 7, randomized on-line algorithms are introduced and a special case is analyzed. Section 8 contains some concluding remarks and comparisons to other work.

2. Cruel Taskmasters and a Lower Bound
We can view the operation of an on-line algorithm as a game between the taskmaster and the scheduler; in the ith round, the taskmaster provides the task T^i and then the scheduler chooses σ(i). To prove a lower bound, we will define a simple set of strategies for the taskmaster and show that they can be used to force a competitive ratio arbitrarily close to 2n − 1.
For a real number ε > 0, we define the cruel taskmaster strategy M(ε) as follows: At step i, define T^i so that

T^i(σ(i − 1)) = ε,    T^i(s) = 0 if s ≠ σ(i − 1).

Thus, the cost of the ith task is nonzero if and only if the scheduler remains in state σ(i − 1). Such tasks are called ε-elementary.
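In code, the cruel taskmaster is a one-liner; a sketch (ours):

```python
def cruel_task(n, prev_state, eps):
    """The eps-elementary task T^i: charges eps in sigma(i-1), zero elsewhere."""
    return [eps if s == prev_state else 0.0 for s in range(n)]

print(cruel_task(3, 1, 0.5))  # [0.0, 0.5, 0.0]
```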
To prove a lower bound of 2n − 1 on the competitive ratio of an arbitrary algorithm A, it will be convenient to define the competitive ratio of A with respect to an infinite task sequence T as follows:

w_T(A) = limsup_{m→∞} c_A(T^1 T^2 ... T^m) / c_O(T^1 T^2 ... T^m).

From the definition of w(A), it is easy to prove:

LEMMA 2.1. If T is an infinite task sequence such that c_O(T^1 T^2 ... T^m) tends to infinity with m, then w(A) ≥ w_T(A).

PROOF. Let

r_m = c_A(T^1 T^2 ... T^m)   and   g_m = c_O(T^1 T^2 ... T^m).

It is enough to show that w ≥ w_T(A) for all w ∈ W_A. So suppose w ∈ W_A; then there is a constant K_w such that r_m − w·g_m ≤ K_w for all m, which implies:

r_m / g_m ≤ w + K_w / g_m.

Taking the lim sup of both sides yields w_T(A) ≤ w, since g_m tends to infinity by hypothesis. ∎
The lower bound on w(S, d) now follows from:

THEOREM 2.2. Let A be any on-line scheduling algorithm for the metrical task system (S, d). Let T(ε) be the infinite task sequence produced by M(ε) in response to A. Then

w_{T(ε)}(A) ≥ (2n − 1) / (1 + ε / min_{i≠j} d(i, j)).

Taking ε to be arbitrarily small, we obtain, by Lemma 2.1, that w(A) ≥ 2n − 1 for any on-line algorithm A, so w(S, d) ≥ 2n − 1, as desired.
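The punishing effect of the cruel taskmaster can be observed numerically. The sketch below is ours: the on-line player is the two-state "safe" strategy described in Section 4, all quantities are integers (d = 10, ε = 1), and the exact off-line optimum is computed by dynamic programming. The measured ratio comfortably exceeds 2n − 1 = 3 for n = 2:

```python
import math

def optimal_offline_cost(d, tasks, s0):
    """Exact c_O(T) by dynamic programming over (task, state)."""
    n = len(d)
    cost = [0 if s == s0 else math.inf for s in range(n)]
    for task in tasks:
        cost = [min(cost[p] + d[p][s] for p in range(n)) + task[s]
                for s in range(n)]
    return min(cost)

def duel(steps, dist=10, eps=1):
    """Cruel taskmaster M(eps) versus a 'safe' two-state on-line player that
    moves once its accumulated processing cost would reach dist."""
    d = [[0, dist], [dist, 0]]
    state, acc, online_cost, tasks = 0, 0, 0, []
    for _ in range(steps):
        tasks.append([eps if s == state else 0 for s in range(2)])
        if acc + eps >= dist:       # budget exhausted: move, process task at 0 cost
            online_cost += dist
            state, acc = 1 - state, 0
        else:                       # stay and pay eps
            acc += eps
            online_cost += eps
    return online_cost, optimal_offline_cost(d, tasks, 0)

on, off = duel(200)
print(on, off, on / off)  # the ratio is well above 2n - 1 = 3
```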
Remark. Intuitively, the reason that the lower bound on the competitive ratio improves as ε decreases is that smaller tasks reduce the amount of information available to the on-line algorithm at the times it must make decisions. Compare a sequence of k repeats of the task T with the single task kT (i.e., the task whose costs are obtained from those of T on multiplying by k). The cost for the off-line algorithm is easily seen to be the same in both cases. The on-line algorithm fares better with kT: its situation is the same as with T except that it is "given a guarantee" that the following k tasks are all equal to T.
PROOF OF THEOREM 2.2. Let T^1 T^2 ... be the (infinite) sequence of tasks produced by M(ε), and let σ be the schedule produced by A. Each ε-elementary task T^i has nonzero cost only in state σ(i − 1). Let s1, s2, s3, ... and t1 < t2 < t3 < ... be the state transitions and transition times prescribed by algorithm A, and let t0 = 0. Recall from the introduction that we assumed that state σ(0) is occupied during the interval [0, 1), and the first task arrives at time t′_1 = 1. All the times t_i and t′_i are positive integers. The times t′_1 < t′_2 < ... with t′_k = t_k + 1 define the times at which the taskmaster changes tasks.
To evaluate the competitive ratio, we analyze the quantities:

a_k = cost incurred by A during [1, t′_k),
b_k = cost incurred by the optimal off-line schedule during [1, t′_k].

By these definitions, we do not include a transition at time t′_k in a_k, but we do include it in b_k. This discrepancy makes the analysis more convenient and, since we are ultimately interested in the ratio a_k/b_k as k tends to infinity, it does not affect the result.

We obtain first that a_k = D_k + P_k, where D_k is the cost of state transitions for A and P_k is the task processing cost:

P_k = Σ_{i=1}^{t_k} T^i(σ(i)) = ε(t_k − k),

since A pays ε for every task except the k tasks during which it makes a transition.

We obtain an upper bound for b_k as follows: Let b_k(s) denote the cost of an off-line schedule that is optimal during the interval [1, t′_k] subject to the condition that it is in state s at time t_k. Then b_k ≤ b_k(s) for every s, so we can upper bound b_k by any convex combination of the b_k(s). Thus, our goal is to find some combination B_k of the b_k(s) for which we can write a bound in terms of D_k and P_k. To this end, we note the following facts:

b_k(s) = b_{k−1}(s)   if s ≠ s_{k−1},
b_k(s_{k−1}) ≤ min{b_{k−1}(s_{k−1}) + ε(t_k − t_{k−1}), b_{k−1}(s_k) + d(s_k, s_{k−1})}.

The first claim is obvious, since staying in a state s ≠ s_{k−1} adds no cost between times t′_{k−1} and t′_k. The second follows from the fact that two possible options of the off-line algorithm are: (i) stay in state s_{k−1} during the interval [t′_{k−1}, t′_k), and (ii) stay in state s_k during the interval [t′_{k−1}, t′_k) and move to s_{k−1} at time t′_k.

Simple manipulation now shows that if we define

B_k = b_k(s_k) + 2 Σ_{s≠s_k} b_k(s),

then adding the two bounds on b_k(s_{k−1}) above gives B_k − B_{k−1} ≤ ε(t_k − t_{k−1}) + d(s_k, s_{k−1}), and thus, by telescoping,

B_k ≤ B_1 + D_k + ε t_k = B_1 + a_k + εk.

Now, since B_k ≥ (2n − 1) b_k and a_k ≥ D_k ≥ k · min_{i≠j} d(i, j), the previous inequality yields

b_k (2n − 1) ≤ B_1 + a_k (1 + ε / min_{i≠j} d(i, j)),

which implies

a_k / b_k ≥ (2n − 1) / (1 + ε / min_{i≠j} d(i, j)) − C / b_k

for some constant C. Taking the lim sup as k tends to infinity implies:

w_{T(ε)}(A) ≥ (2n − 1) / (1 + ε / min_{i≠j} d(i, j)). ∎

3. Continuous Time Schedules
Because of the discrete nature of our problem, all state transitions occur at integer times. It will be useful to consider schedules in which transition times are allowed to be arbitrary nonnegative real numbers. Such continuous-time schedules are not really possible in our model, but we see in Lemma 3.1 that any such schedule can be converted easily to a discrete time schedule that is at least as good. The advantage of introducing continuous time is that the algorithms we define in the next sections are easier to describe.
Precisely, a continuous-time schedule for tasks T^1 T^2 ... T^m is a function σ: [1, m + 1) → S such that, for each state s ∈ S, σ^{−1}(s) is a finite union of half open intervals [a, b). The endpoints of these disjoint intervals represent the times for transitions between states. As before, σ can be described by the sequences s1, s2, ..., sk and t1 < t2 < ... < tk, where state s_i is entered at time t_i; the only difference is that the t_i need not be integers. The cost of processing task T^i under this schedule is given by

∫_i^{i+1} T^i(σ(t)) dt,

that is, the cost is the convex combination Σ_s λ_s T^i(s), where λ_s is the proportion of the time interval [i, i + 1) spent in state s. The total cost of the schedule is this processing cost summed over all tasks, plus the state transition costs.

An on-line continuous-time scheduling algorithm (CTSA) is one that schedules the time interval [i, i + 1) upon receipt of task T^i. The algorithms we describe in the next three sections are CTSAs. We now show that any such algorithm can be converted into an on-line discrete-time scheduling algorithm (DTSA) that is at least as good on any task sequence.
LEMMA 3.1. For any on-line CTSA A′, there is an on-line DTSA A that performs at least as well as A′ on any task sequence T.

PROOF. We define A as follows: On task sequence T, the schedule σ produced by A is given by

σ(j) = s such that T^j(s) is minimum among s ∈ S_j,

where S_j is the set of states visited by algorithm A′ during the time interval [j, j + 1). Since A′ is on-line, σ(j) is computable from s0 and T^1 T^2 ... T^j. Furthermore, we claim that the cost incurred by algorithm A is at most the cost incurred by A′. This follows from:

(1) By the definition of σ(i), the sequence of states visited by A up to time j is a subsequence of the states visited by A′. Hence, by the triangle inequality, the total transition cost for A is at most that for A′.
(2) The cost of processing task j in A′ is a convex combination of T^j(s) for s ∈ S_j and hence is at least T^j(σ(j)). ∎
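The conversion rule of Lemma 3.1 is easy to state in code; a sketch (ours), where `visited[j]` plays the role of S_j, the set of states the CTSA visits during [j, j + 1):

```python
def discretize(visited, tasks):
    """Discrete schedule: sigma(j) is a state of S_j minimizing T^j."""
    return [min(S_j, key=lambda s: tasks[j][s])
            for j, S_j in enumerate(visited)]

# One task; the continuous schedule wanders through states 0 and 2 during [1, 2).
print(discretize([[0, 2]], [[3, 0, 5]]))  # [0], since T^1(0) = 3 < T^1(2) = 5
```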
4. Nearly Oblivious Algorithms and Traversal Algorithms

For motivation, consider a very simple metrical task system consisting of two states s0 and s1. A "safe" on-line continuous algorithm A is as follows: A remains in starting state s0 until the task processing cost incurred by A equals d(s0, s1). A then moves to state s1, where it stays until the task processing cost incurred by A in s1 equals d(s1, s0) = d(s0, s1). At this time, say time c, A returns to state s0. This strategy of cycling between s0 and s1 now repeats.
so and SI now repeats. We
note
during
the
time
interval
[0, c],
that
A
incurs
state
transition
costs
of
2 d(sO, s] ) and task-processing
costs of 2 d(sO, S1); that is, a total
cost of
4d(s~,, S1). Consider
the cost of any off-line
algorithm
B during the same time
interval
[0, c] (and against the same sequence
of tasks). If B makes a state
transition,
then
B incurs
one of the states
includes
the time
a cost of at least
cost of at least d( SO,S1). It follows
is at most 4 times that of B.
This
simple
on-line
algorithm
stracted
as follows:
An
because
its behavior
depends
specified
coclc~c~
d(sO, S1). Otherwise,
the entire
time
SZ(i = O or i = 1) during
interval
that A was in state s, incurring
then
that whether
has certain
on-line
CTSA
only
nice
is called
weakly
or not
B moves,
properties
nearly
on the input
B remains
that
A‘s cost
can be ab-
obliuious
task
(so
called
sequence)
if it is
by two infinite
sequences
SOS1S2 “” o of states (SO is given),
“.. of positive reals, and produces
a schedule as follows:
(1) The states are visited
input task sequence.
(2) The transition
from
the total processing
is defined
so that
in
the
sequence
S1, s?, s,, “””
in
interval,
and this
a task-processing
independent
of
and
the
state s, to s]+ ~ occurs at the instant in time t~+ ~ when
cost incurred
since entering
s] reaches c,. Hence, t,+,
c,=
j“’’~f’(s,)d.
t,
k
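A discrete-step approximation of a nearly oblivious algorithm is easy to prototype. The sketch below is ours; it moves at the first whole task at which the accumulated processing cost would reach the budget, so it only approximates the continuous-time definition:

```python
def nearly_oblivious(states_seq, budgets, tasks):
    """Follow the fixed sequence of states, moving on once the processing cost
    accumulated in the current state would reach that state's budget c_j."""
    j, acc, schedule = 0, 0, []
    for task in tasks:
        if acc + task[states_seq[j]] >= budgets[j] and j + 1 < len(states_seq):
            j, acc = j + 1, 0      # transition; process this task in the new state
        else:
            acc += task[states_seq[j]]
        schedule.append(states_seq[j])
    return schedule

# Budget 2 in state 0: after the second unit charge the algorithm hops to state 1.
print(nearly_oblivious([0, 1], [2, 100], [[1, 0]] * 3))  # [0, 1, 1]
```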
Such an algorithm clearly operates on-line. We say it is periodic with period k if, for j ≥ k, s_j = s_{j−k} and c_j = c_{j−k}. A periodic algorithm is a traversal algorithm if, for each j, c_j = d(s_j, s_{j+1}); that is, the processing cost in state s_j is equal to the cost incurred in the transition to state s_{j+1}. A traversal algorithm (with period k) is completely specified by the sequence τ = s0 s1 s2 ... sk (= s0), which is called a traversal of S. The traversal algorithm specified by τ is denoted A(τ). A cycle of A(τ) on a given task sequence is a time interval during which one traversal is completed. Clearly:

PROPOSITION 4.1. If τ = s0 s1 s2 ... sk, then A(τ) incurs a cost of exactly 2 Σ_{i=1}^k d(s_{i−1}, s_i) during each cycle.

5. A Good Traversal Algorithm

We now generalize the two-state algorithm of the previous section so as to achieve an 8(n − 1) competitive ratio for any n state metrical task system. Intuitively, we partition the states into two components, and we remain in a component (using the strategy discussed in the last section recursively) until the task-processing cost incurred in that component equals the cost to make the transition to the other component. Formally, we proceed as follows: For d symmetric, view the transition costs d(i, j) as edge weights on the (undirected) complete graph on n vertices. Let MST = (S, E) be a minimum weight spanning tree for this graph. For (u, v) ∈ E, we define the modified weight d′(u, v) to be 2^p, where 2^{p−1} < d(u, v) ≤ 2^p. We now inductively define a traversal τ* of S:
(a) If |S| = 1 and S = {u}, we have an empty traversal.
(b) For |S| ≥ 2, let (u, v) be an edge with maximum modified weight d′(u, v) = 2^M. The removal of (u, v) partitions the MST into two smaller trees MST1 = (S1, E1) and MST2 = (S2, E2), with u ∈ S1 and v ∈ S2. For i = 1, 2, let τ_i be the inductively constructed traversal of S_i, and let 2^{M_i} be the maximum modified edge weight in E_i. Starting at u, τ* consists of 2^{M−M_1} cycles of τ_1, followed by the edge (u, v), followed by 2^{M−M_2} cycles of τ_2, followed by the edge (v, u). (Here we cyclically permute τ_1 (resp., τ_2) if necessary so that it begins and ends at u (resp., v).)
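The recursive construction can be prototyped directly. In the sketch below (ours), the spanning tree is supplied as input (in the full algorithm it would be a minimum spanning tree of d), edges carry their modified weights, and the returned closed walk is τ*:

```python
import math

def modified_weight(w):
    """d'(u, v) = 2^p where 2^(p-1) < d(u, v) <= 2^p."""
    return 1 << max(0, math.ceil(math.log2(w)))

def _component(adj, root, banned):
    """Vertices reachable from root without crossing the banned edge."""
    seen, stack = {root}, [root]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if frozenset((x, y)) != banned and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def _repeat(walk, k):
    """Concatenate k copies of a closed walk."""
    return walk[:-1] * k + walk[-1:] if len(walk) > 1 else walk

def _rotate(walk, s):
    """Cyclically permute a closed walk so it begins and ends at s."""
    i = walk.index(s)
    return walk[i:-1] + walk[:i] + [s] if len(walk) > 1 else walk

def traversal(vertices, edges, start):
    """tau*: edges maps frozenset({u, v}) -> its modified weight 2^p."""
    if len(vertices) == 1:
        return [start]
    e_max = max(edges, key=edges.get)
    big = edges[e_max]
    u, v = tuple(e_max)
    adj = {x: [] for x in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    side_u = _component(adj, u, e_max)
    parts = []
    for root, side in ((u, side_u), (v, vertices - side_u)):
        sub = {e: w for e, w in edges.items() if e <= side}
        if not sub:
            parts.append([root])             # single-vertex side: empty traversal
        else:
            reps = big // max(sub.values())  # 2^(M - M_i) cycles of tau_i
            parts.append(_repeat(_rotate(traversal(side, sub, root), root), reps))
    walk = parts[0] + parts[1] + [u]         # cross (u, v), tour S2, cross back
    return _rotate(walk, start)

# Path 0 -1- 1 -2- 2 with modified weights 1 and 2: the light edge is traversed
# twice in each direction and the heavy edge once, as in Lemma 5.1 below.
tau = traversal({0, 1, 2}, {frozenset({0, 1}): 1, frozenset({1, 2}): 2}, 0)
print(tau)  # [0, 1, 0, 1, 2, 1, 0]
```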
A simple induction and Proposition 4.1 yield:

LEMMA 5.1. Let 2^M be the largest modified edge weight in MST. Then, in τ*, an edge of modified weight 2^m is traversed 2^{M−m} times in each direction.

COROLLARY 5.2. The total cost of one cycle of A(τ*) is at most 4(n − 1)2^M.

The desired 8(n − 1) ratio now follows from:

LEMMA 5.3. During the time that A(τ*) completes a cycle, any off-line algorithm B incurs a cost of at least 2^{M−1}.

PROOF. We show that B's transition costs, plus the task costs incurred while B is in the same state as A(τ*), total at least 2^{M−1}. We proceed by induction on |E|, where E defines the MST. For the basis, |E| = 1, say τ* is u → v → u. Thus, d(u, v) > 2^{M−1}. If algorithm B makes an edge traversal, then we are done. Otherwise, B stays at some state (say u). Since A(τ*) incurs a task cost of d(u, v) while in state u, so does B.
For |E| > 1, let (u, v) be the edge of maximal modified weight used in the construction of τ*, and let MST1 and MST2 be as in that construction.

Case 1. During a cycle of A(τ*), B moves from some state in S1 to some state in S2 (or vice versa). Then, since MST is a minimum spanning tree, this transition incurs a cost of at least d(u, v) > 2^{M−1}.

Case 2. B stays in some component S_i (say S1) during the entire cycle.

(i) |S1| = 1, so S1 = {u}. Then, B remains at u throughout the cycle. By the definition of τ*, a task cost of at least d(u, v) > 2^{M−1} is incurred by B during the time that A(τ*) is in u.

(ii) |S1| > 1. Let 2^{M_1} be the maximum modified weight in MST1. Assume first that τ* begins at u or at a vertex in S2. Then the portion of the cycle spent in S1 consists of 2^{M−M_1} cycles of τ_1. By the induction hypothesis, B incurs a cost of at least 2^{M_1−1} in each such cycle, for a total cost of at least 2^{M−1}. If instead τ* begins at a vertex s ≠ u in S1, then the argument is almost the same except that one traversal of τ_1 by A(τ*) is interrupted when A(τ*) crosses (u, v), completes a traversal of τ_2, crosses (v, u), and then resumes the interrupted traversal of τ_1. Since B stays in S1, the moves of B while A(τ*) is in S2 can be disregarded: by the triangle inequality, the cost to B is at least as much as if B waits in its current state until A(τ*) returns to S1 and then moves. Thus, we can treat this case exactly as before. ∎

6. An Algorithm with Competitive Ratio (2n − 1)φ(d)

In Section 2, we saw that the cruel taskmaster strategy forces a 2n − 1 ratio in any metrical task system. In this section, we present an on-line algorithm that anticipates the cruel taskmaster strategy and achieves a matching 2n − 1 upper bound against that adversary. We then show that the same on-line algorithm guarantees a (2n − 1)φ(d) competitive ratio for any input sequence of tasks; in the case of asymmetric task systems, a factor of φ(d) creeps into the analysis.
To motivate the on-line algorithm, assume for now that the adversary follows the cruel taskmaster strategy (for some arbitrarily small ε). In this case, each successive task has cost ε in the state currently occupied by the on-line algorithm and cost 0 in every other state. Thus, the entire history of the system (including the tasks) depends only on the sequence s0, s1, s2, ... of states occupied by the on-line algorithm and the transition times t1, t2, ..., where t_i is the time s_i is entered. Instead of using the t_i's, it is more convenient to parametrize the system history by c0, c1, c2, ..., where c_k is the cost incurred by the on-line algorithm while in state s_k; here c_k = (t_{k+1} − t_k)ε. The choice of {s_k} and {c_k} constitutes a nearly oblivious on-line algorithm.

How do we choose {s_k} and {c_k}? The key is to understand how the off-line cost changes during some time interval (t̂, t̂ + Δt).
For a given task sequence T and an arbitrary time t, let Φ_t: S → R be the off-line cost function; that is, Φ_t(s) is the minimum cost of processing T up to time t, given that the system is in state s at time t.

In the present case, where the adversary follows the cruel taskmaster strategy, consider a time t̂ at which the on-line algorithm is in state ŝ. Then, for sufficiently small Δt, the off-line algorithm can either occupy state ŝ during the interval (t̂, t̂ + Δt), incurring an additional cost of εΔt, or occupy some other state s during (t̂, t̂ + Δt). For s ≠ ŝ, note that the off-line algorithm can stay in state s during (t̂, t̂ + Δt) and move to ŝ at time t̂ + Δt. Thus,

Φ_{t̂+Δt}(ŝ) = min{Φ_{t̂}(ŝ) + εΔt, min_{s≠ŝ}(Φ_{t̂}(s) + d(s, ŝ))}.

This expression indicates that Φ_t(ŝ) increases with t up until the time t̄ such that there is some state s for which Φ_{t̄}(s) ≤ Φ_{t̄}(ŝ) − d(s, ŝ). From that time on, the off-line cost remains constant until the next transition by the on-line algorithm. This suggests the following strategy for the on-line algorithm: stay in ŝ until the time t̄ at which there is some state s with Φ_{t̄}(s) ≤ Φ_{t̄}(ŝ) − d(s, ŝ), and then move to state s. As noted, against a cruel taskmaster this strategy is a nearly oblivious one. We now describe this on-line algorithm explicitly, generalizing it so that it applies to any task sequence.

We define the sequences {s_k} and {c_k} inductively. At the same time, we also define a sequence {f_k} of functions from S to the reals. For intuition, the reader may find it useful to note that, in the case that the tasks are given by the cruel taskmaster, f_k(s) is the optimal off-line cost up to the time the on-line algorithm moves into state s_k, subject to the off-line algorithm being in state s at that time. We have:

f_0(s) = 0 for all s.

For k ≥ 1, having defined f_{k−1} and s_{k−1}:

s_k = the state s ≠ s_{k−1} minimizing f_{k−1}(s) + d(s, s_{k−1});

f_k(s) = f_{k−1}(s_k) + d(s_k, s_{k−1})   if s = s_{k−1},
f_k(s) = f_{k−1}(s)   if s ≠ s_{k−1}.

Having defined {f_k} and {s_k}, we define, for k ≥ 1,

c_{k−1} = f_k(s_{k−1}) − f_{k−1}(s_{k−1}).

The nearly oblivious algorithm defined by {s_k} and {c_k} above is denoted A*.
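The inductive definition translates directly into code. A sketch (ours), returning the first few states and budgets of the algorithm:

```python
def nearly_oblivious_params(d, s0, steps):
    """Compute s_1, s_2, ... and c_0, c_1, ... from the recursion for f_k.

    Returns (seq, budgets) where seq[k-1] = s_k and budgets[k-1] = c_{k-1},
    the processing budget spent in s_{k-1} before the move to s_k.
    """
    n = len(d)
    f = [0] * n                      # f_0 is identically zero
    s_prev, seq, budgets = s0, [], []
    for _ in range(steps):
        s_k = min((s for s in range(n) if s != s_prev),
                  key=lambda s: f[s] + d[s][s_prev])
        f_new = f[s_k] + d[s_k][s_prev]    # f_k(s_{k-1})
        budgets.append(f_new - f[s_prev])  # c_{k-1}
        f[s_prev] = f_new                  # f_k agrees with f_{k-1} elsewhere
        seq.append(s_k)
        s_prev = s_k
    return seq, budgets

# Two states at distance 1: after a start-up budget of 1, the algorithm settles
# into spending 2 in each state before moving, consistent with the 2n - 1 = 3
# bound for n = 2.
print(nearly_oblivious_params([[0, 1], [1, 0]], 0, 4))
# ([1, 0, 1, 0], [1, 2, 2, 2])
```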
We prove:

THEOREM 6.1. A* has competitive ratio at most (2n − 1)φ(d).

PROOF. Let T = T^1 T^2 ... T^m be any task sequence. Let t1 < t2 < ... be the (continuous) times at which A* prescribes state transitions; that is, the on-line algorithm enters s_k at time t_k. Let h_k(s) be the cost incurred by an optimal off-line algorithm up to time t_k, subject to its being in state s at time t_k. (In terms of the function Φ defined before, h_k(s) = Φ_{t_k}(s).) Let h_k = min_{s∈S} h_k(s) be the optimal off-line cost up to time t_k. We want to compare h_k to a_k, the cost of the on-line algorithm up to time t_k. Specifically, to prove Theorem 6.1, it suffices to prove that, for some constant L independent of k,

h_k ≥ a_k / ((2n − 1) φ(d)) − L.    (6.1)
Note first that a_k = D_k + C_{k−1}, where

D_k = Σ_{i=1}^k d(s_{i−1}, s_i)   and   C_k = Σ_{i=0}^k c_i.

We need three lemmas:

LEMMA 6.2. h_k(s) ≥ f_k(s) for all s ∈ S.

LEMMA 6.3. f_k(s) − f_k(s′) ≤ d(s′, s) for all s, s′ ∈ S.

LEMMA 6.4. Let F_k = 2 Σ_{s≠s_k} f_k(s) + f_k(s_k). Then F_k ≥ C_{k−1} + Σ_{i=1}^k d(s_i, s_{i−1}).

To deduce (6.1) from the lemmas, we have, from Lemmas 6.2 and 6.3,

h_k ≥ min_{s∈S} f_k(s) ≥ (1 / (2n − 1)) [F_k − 2n · max d(i, j)] ≥ (F_k − B) / (2n − 1),

where B is a constant independent of k. Using Lemma 6.4 to substitute for F_k gives

h_k ≥ (C_{k−1} + Σ_{i=1}^k d(s_i, s_{i−1}) − B) / (2n − 1).    (6.2)

Now the distance around the cycle s_k, s_{k−1}, ..., s_0, s_k is Σ_{i=1}^k d(s_i, s_{i−1}) + d(s_0, s_k), and, by the definition of φ(d), this is at least

(Σ_{i=1}^k d(s_{i−1}, s_i) + d(s_k, s_0)) / φ(d) ≥ D_k / φ(d).

Using this in (6.2), we get

h_k ≥ (C_{k−1} + D_k / φ(d) − B′) / (2n − 1)

for a constant B′ independent of k. Since φ(d) ≥ 1, we have C_{k−1} + D_k / φ(d) ≥ a_k / φ(d). Thus,

h_k ≥ a_k / ((2n − 1) φ(d)) − L,

where L is independent of k, as required to complete the proof of Theorem 6.1.
It remains to prove the lemmas.

PROOF OF LEMMA 6.2. By induction on k; for k = 0 the result is trivial. For k ≥ 1, we have h_k(s) ≥ h_{k−1}(s) if s ≠ s_{k−1}. If s = s_{k−1}, we have

h_k(s_{k−1}) ≥ min{ h_{k−1}(s_{k−1}) + c_{k−1},  min_{s≠s_{k−1}} (h_{k−1}(s) + d(s, s_{k−1})) },

since either the off-line algorithm is in state s_{k−1} throughout the time interval [t_{k−1}, t_k) — in which case it incurs the same processing cost c_{k−1} there as the on-line algorithm — or it makes a transition from some state s to s_{k−1} during that interval, in which case the cost incurred prior to that transition is at least h_{k−1}(s).

For s ≠ s_{k−1}, h_k(s) ≥ h_{k−1}(s) ≥ f_{k−1}(s) = f_k(s) by the induction hypothesis. For s = s_{k−1},

h_{k−1}(s_{k−1}) + c_{k−1} ≥ f_{k−1}(s_{k−1}) + c_{k−1} = f_{k−1}(s_k) + d(s_k, s_{k−1}) = f_k(s_{k−1})

by the induction hypothesis and the definition of c_{k−1}, while

h_{k−1}(s) + d(s, s_{k−1}) ≥ f_{k−1}(s) + d(s, s_{k−1}) ≥ f_{k−1}(s_k) + d(s_k, s_{k−1}) = f_k(s_{k−1})

by the induction hypothesis and the definition of s_k. ∎
PROOF OF LEMMA
6.3.
and the definition
By induction
s’)
<fk_l(s’)
= d(s’,
and for s’ = s&~,
‘fk(sk-l)
❑
is obvious
for
k = O.
= f~_ ~(s) – f~_ ~(s’) if neither
-fk(s’)
‘f&,(
f’(s)
and Sk.
on k; the result
– f’(s’)
Suppose it is true for k – 1.Then, f’(s)
of S,s’ equals s’_~. For s = s’_l and s’ + s’–l,
fk(sk-1)
of f’
+ d(sk> s’-,)
‘f’(s’)
+ d(s’,
‘f’(s’)
s~-1)
s~-l)
(by definition
of s’)
(by definition
Of f’(s’)),
s # s~_l,
‘f’-l(s)
<
d(Sk,
< d(sk-l,
‘f’-l(s’)
S)
–
s).
~(s’,
– d(s’>s~-1)
s’-l)
(by the induction
hypothesis)
❑
PROOF OF LEMMA 6.4. By induction on k. For k = 1, both sides equal 2d(s_1, s_0). For k > 1, it is enough to show that

    F_k − F_{k−1} ≥ c_{k−1} + d(s_k, s_{k−1}).

Now, f_k(s) = f_{k−1}(s) if s ≠ s_{k−1}, while f_k(s_{k−1}) = f_{k−1}(s_{k−1}) + c_{k−1} = f_{k−1}(s_k) + d(s_k, s_{k−1}), so the left-hand side is equal to

    2(f_k(s_{k−1}) − f_{k−1}(s_{k−1})) − f_{k−1}(s_k) + f_{k−1}(s_{k−1})
        = 2c_{k−1} − (f_{k−1}(s_k) − f_{k−1}(s_{k−1}))
        = c_{k−1} + d(s_k, s_{k−1}),

by definition. □
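For intuition, the transition rule underlying this proof can be sketched in discrete time. The following Python fragment is only an illustration (the algorithm analyzed above is continuous-time, and the variable names are ours): the algorithm waits in its current state, accumulating processing cost, until staying becomes as expensive as the cheapest alternative f(s) + d(s, cur), and then moves to a minimizing state.

```python
def run(d, tasks, start=0):
    """Discrete-time sketch of the potential-based rule: stay in the
    current state, accumulating task cost `acc`, until
        f[cur] + acc >= min over s != cur of (f[s] + d[s][cur]),
    then set f[cur] to that minimum and move to a minimizing state.
    `d` is the transition-cost matrix, `tasks` a list of cost vectors."""
    n = len(d)
    f = [0.0] * n            # potential values, one per state
    cur, acc = start, 0.0    # current state, accumulated processing cost
    visited = [start]
    for task in tasks:
        acc += task[cur]
        target, nxt = min((f[s] + d[s][cur], s) for s in range(n) if s != cur)
        if f[cur] + acc >= target:
            f[cur] = target          # freeze the potential of the state we leave
            cur, acc = nxt, 0.0
            visited.append(cur)
    return visited
```

On the uniform two-state metric, for example, a task charging one unit in the current state forces a move, while a task charging only the other state does not.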
7. Probabilistic Algorithms

The result of Section 2 shows that, in a general task system, we cannot hope for a competitive ratio better than 2n − 1. The ability of the adversary to force this ratio is highly dependent on its knowledge of the current state of the algorithm. This suggests that the performance can be improved by randomizing some of the decisions made by the on-line algorithm. A randomized on-line scheduling algorithm can be defined simply as a probability distribution on the set of all deterministic algorithms. The adversary, knowing this distribution, selects the sequence of tasks to be processed.² (Note that if the algorithm is deterministic, then the adversary can predict with certainty the response to each successive task.) The resulting schedule σ is a random variable. We are interested in the relationship between the expected cost of A on input T, which (using the notation of Section 1) is

    c̄_A(T) = Σ_σ c(T; σ) · pr(σ | T),

where pr(σ | T) denotes the probability that A follows schedule σ on input T, and the off-line cost c_O(T). The expected competitive ratio w̄(A) of A is the infimum over all w such that c̄_A(T) − w·c_O(T) is bounded above for any finite task sequence, and the randomized competitive ratio, w̄(S, d), of the task system (S, d) is the infimum of w̄(A) over all randomized on-line algorithms A.

The above definition can be described in terms of the scheduler-taskmaster game described in Section 2. The ith state σ(i) is a random variable: it is a function of the first i tasks T^1, ..., T^i, the previous states σ(0), σ(1), ..., σ(i − 1), and some random bits. For a given sequence of i tasks, the schedule followed up to that time is a random variable; the adversary selects the (i + 1)st task knowing the distribution of this random variable, but not the actual schedule that was followed. It is this uncertainty that gives randomized algorithms a potential advantage over purely deterministic ones.
Thus far, we have been unable to analyze the expected competitive ratio for arbitrary task systems. However, for the uniform task system, the system where all state transitions have identical (say unit) cost, we have been able to determine the expected competitive ratio to within a factor of 2. In this case, randomization provides a substantial reduction in the competitive ratio. (Recall that the competitive ratio for a deterministic algorithm is at least 2n − 1.) Let

    H(n) = 1 + 1/2 + 1/3 + ··· + 1/n.

It is well known that H(n) is between ln n and 1 + ln n.

² Such an adversary is now called an "oblivious adversary". Adaptive adversaries were introduced in Raghavan and Snir [18] and further studied in Ben-David et al. [1].
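The harmonic bounds ln n < H(n) < 1 + ln n mentioned above are easy to check numerically; a small Python sketch:

```python
import math

def H(n):
    """The nth harmonic number H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

# H(n) lies strictly between ln n and 1 + ln n for n >= 2.
for n in (2, 10, 100, 1000):
    assert math.log(n) < H(n) < 1 + math.log(n)
```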
THEOREM 7.1. If (S, d) is the uniform task system on n states, then

    H(n) ≤ w̄(S, d) ≤ 2H(n).

PROOF OF THE UPPER BOUND. Note first that Lemma 3.1 extends to a continuous-time setting, and thus it suffices to describe a continuous-time randomized on-line algorithm that achieves the desired expected competitive ratio. The algorithm proceeds in a sequence of phases. The time at which the ith phase begins is denoted t_{i−1}. For a state s, let A_s be the algorithm that stays in state s for the entire algorithm. A state s is said to be saturated for phase i at time t > t_{i−1} if the cost incurred by A_s during the time interval [t_{i−1}, t] is at least 1. The ith phase ends at the instant t when all states are saturated for that phase. During a phase, the on-line algorithm proceeds as follows: remain in the same state until that state becomes saturated, and then jump randomly to an unsaturated state. Note that at the end of a phase, the on-line algorithm moves with equal probability to some state, with all states again becoming unsaturated.

To analyze the expected competitive ratio of this algorithm, note that during a single phase, any off-line algorithm incurs a cost of at least 1, either because it makes a state transition or because it remains in some state during the entire phase and that state becomes saturated. Hence, it suffices to show that the expected cost of the on-line algorithm during a phase is at most 2H(n). Let f(k) be the expected number of state transitions until the end of the phase given that there are k unsaturated states remaining. Note f(1) = 1. For k > 1, since the algorithm is in each unsaturated state with equal probability, the probability that the next state to become saturated results in a transition is 1/k, so that f(k) = f(k − 1) + 1/k. Thus f(k) = H(k), and the expected cost of the phase (task processing costs + transition costs) is at most 2H(n). □

PROOF OF THE LOWER BOUND. Yao [23] has noted that, using the von Neumann minimax theorem for two-person games, a lower bound on the expected performance of randomized algorithms in these (and most other) models can be obtained by choosing any probability distribution on the input (in this case, the task sequence) and proving a lower bound on the expected cost of any deterministic algorithm on this input. We use this principle to derive a lower bound on the uniform task system. If T is an infinite task sequence, let T^j denote the first j tasks, and let E denote expectation with respect to a probability measure D on the set of infinite task sequences.

LEMMA 7.2. Let (S, d) be a task system and D be a probability distribution on the set of infinite task sequences, and suppose that E(c_O(T^j)) tends to infinity with j. Let m_j denote the minimum over all deterministic on-line algorithms A of E(c_A(T^j)). Then

    w̄(S, d) ≥ lim sup_{j→∞} m_j / E(c_O(T^j)).

PROOF. By the definition of w̄(S, d), there exists a randomized algorithm R and a constant c such that for all infinite task sequences T and integers j, c̄_R(T^j) − w̄(S, d)·c_O(T^j) ≤ c. Averaging over the distribution D, we obtain that for each j, E(c̄_R(T^j)) − w̄(S, d)·E(c_O(T^j)) ≤ c, or w̄(S, d) ≥ E(c̄_R(T^j))/E(c_O(T^j)) − c/E(c_O(T^j)). Since E(c̄_R(T^j)) ≥ m_j and E(c_O(T^j)) tends to infinity, we obtain lim sup_{j→∞} m_j/E(c_O(T^j)) ≤ w̄(S, d). □
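The recurrence f(k) = f(k − 1) + 1/k from the upper-bound proof can be checked by simulation. The sketch below is a simplified illustration (names and setup are ours): it fixes an arbitrary order in which states saturate, which is all the 1/k argument needs since the adversary is oblivious to the algorithm's coins, and counts the transitions made in one phase.

```python
import random

def transitions_in_phase(n, rng):
    """Simulate one phase for the uniform task system on n states: states
    saturate one at a time in a fixed order; whenever the algorithm's
    current state saturates, it jumps to a uniformly random unsaturated
    state (the jump ending the phase is also counted)."""
    unsaturated = list(range(n))
    cur = rng.choice(unsaturated)
    moves = 0
    while unsaturated:
        s = unsaturated.pop(0)   # next state to become saturated
        if s == cur:
            moves += 1
            if unsaturated:
                cur = rng.choice(unsaturated)
    return moves

rng = random.Random(1)
estimate = sum(transitions_in_phase(4, rng) for _ in range(20000)) / 20000.0
# The analysis predicts f(4) = H(4) = 1 + 1/2 + 1/3 + 1/4 expected transitions.
```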
Now for s ∈ S, define the unit elementary task U_s to be the task with a processing cost of 1 in state s and 0 in any other state. Let D be the distribution on task sequences obtained by generating each successive task independently and uniformly from the unit elementary tasks. We show that for each j, m_j ≥ j/n and E(c_O(T^j)) ≤ j/(nH(n)) + O(1). Applying Lemma 7.2, we obtain

    w̄(S, d) ≥ lim sup_{j→∞} (j/n) / (j/(nH(n)) + O(1)) = H(n),

as required.

Identify a sequence of unit elementary tasks with the sequence of states s_1, s_2, ... that define each task. Note that if σ is the schedule produced by a deterministic on-line algorithm A, then, for each time step j, A incurs a cost of 1 at time step j if s_j = σ(j − 1). This happens with probability 1/n, and hence the expected cost of processing each task is at least 1/n and m_j ≥ j/n.

Now for each time i, let a(i) be the least index greater than i such that the sequence s_{i+1}, s_{i+2}, ..., s_{a(i)} contains all states of S. Consider the following off-line algorithm: letting σ(i) denote the ith state, σ(i) = σ(i − 1) unless σ(i − 1) = s_i, in which case let σ(i) = s_{a(i)}. (This off-line strategy is in fact the optimal off-line strategy for each such task sequence, although we do not need that fact.) We claim that the expected cost (with respect to the distribution D) of this strategy after the first j tasks, as a function of j, is j/(nH(n)) + O(1). The algorithm incurs a cost of one exactly at those times i at which σ(i − 1) = s_i, and incurs zero cost at other times. Let X_k be the time at which the total cost incurred by the algorithm reaches k, and let c_j be the cost incurred up to time j. Then c_j = max{k : X_k ≤ j}, and the quantities Y_k = X_k − X_{k−1} are independent identically distributed random variables, which means that the sequence {c_j : j ≥ 1} is a renewal process. By the elementary renewal theorem (see [20, Theorem 3.8]), the limit of Exp(c_j)/j exists and is 1/Exp(Y_k). To compute Exp(Y_k), note that Y_k is the time until a sequence of randomly generated states contains all of the states. When i of the states have appeared in the sequence, the probability that the next state is a new state is (n − i)/n, and hence the expected number of states between the time the ith state has appeared and the time the (i + 1)st state has appeared is n/(n − i). Summing on i yields Exp(Y_k) = nH(n). □
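The computation Exp(Y_k) = nH(n) is the familiar coupon-collector expectation, and it is easy to confirm by simulation (an illustration; the names are ours):

```python
import random

def time_to_cover(n, rng):
    """Draw uniform states until all n have appeared; return the number of
    draws (the inter-renewal time Y_k in the argument above)."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(2)
mean = sum(time_to_cover(5, rng) for _ in range(20000)) / 20000.0
# The expectation is n * H(n) = 5 * (1 + 1/2 + 1/3 + 1/4 + 1/5).
```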
Upper and lower bounds on the expected competitive ratio for other specific task systems would be of interest. In particular, we would like to know what it is for the system in which d is the graph distance on a cycle; that is, d(s_i, s_j) = min(|j − i|, n − |j − i|). We suspect that there is an O(log n) upper bound for any task system on n states, but the analysis of w̄(S, d) for arbitrary task systems seems to be a challenging problem.
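In code, this cycle metric is simply:

```python
def cycle_distance(i, j, n):
    """Graph distance between states i and j on an n-cycle:
    d(s_i, s_j) = min(|j - i|, n - |j - i|)."""
    return min(abs(j - i), n - abs(j - i))
```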
8. Conclusion

We have introduced an abstract framework for studying the efficiency of on-line algorithms relative to optimal off-line algorithms. With regard to deterministic algorithms, we have established an optimal 2n − 1 bound on the competitive ratio for any n-state metrical task system. For randomized algorithms, we have only succeeded in analyzing the expected competitive ratio for the uniform task system.
One must ask about the "utility" of such an abstract model. (For the utility of an even more abstract model, we refer the reader to Ben-David et al. [1].) As stated in the introduction, our model is unduly pessimistic in that it allows arbitrary tasks. We believe that the utility of a general model should be measured not only in terms of the direct application of its techniques and results, but also by the extent to which those results carry over to more concrete settings. One such very appealing setting is the k-server model introduced by Manasse et al. [16]. Here, k mobile servers satisfy requests that come in the form of a command to place a server on one of n nodes in a graph G whose edges have lengths that satisfy the triangle inequality. (In their definition, the edge lengths need not be symmetric, but since all the results in [16] assume symmetry, we make this assumption to simplify the discussion.) In fact, the model permits n to be infinite as, for example, when one takes G to be all of R^d. A variant, called the k-server with excursions problem, is also introduced in [16]: servers can either move to service a request or else make an excursion to service it, where the costs of such excursions are given separately.

As observed in [16], there is a strong relation between k-server problems and task systems. To be precise, define a task to be ∞-elementary if all of its components are zero except for one, which is infinite. The n = k + 1 node k-server problem (with or without excursions) is equivalent to a task system on n states (with the cost matrix defined by the given k-server problem) in which the tasks are restricted to ∞-elementary tasks. Our 2n − 1 upper bound then immediately applies in the n = k + 1, k-server case, but the lower bound only applies directly when arbitrary tasks are allowed. Indeed, Manasse et al. [16] show that n − 1 is the optimal competitive ratio for the n = k + 1 node, k = n − 1 server problem. Their lower bound is derived by using the same adversary (i.e., the cruel taskmaster) as in our Theorem 2.2, together with a nice averaging argument. (Alternatively, the proof of Theorem 2.2 could be modified so as to yield the n − 1 lower bound when the tasks are restricted to ∞-elementary tasks.) The n − 1 upper bound requires a new and intuitively appealing algorithm.
By a quite different algorithm, Manasse et al. [16] also prove that for all k = 2 server problems, the competitive ratio is 2, and they make the following intriguing conjecture: the competitive ratio for every k-server problem (without excursions) is k. A lower bound of k is immediate from the previous discussion, since such a bound can be derived when one restricts the adversary to n = k + 1 nodes. Our task system framework provides a little positive reinforcement for the conjecture (at least) in that our deterministic upper bound can be used to show that the competitive ratio for any n-node, k-server problem is at most

    2 (n choose k) − 1,

which is independent of the edge lengths. Here each of the (n choose k) possible positions of the servers becomes a state of the task system, and the induced state transition costs inherit the triangle inequality and symmetry properties. A node request becomes a task whose components are either zero (if the state contains the requested node) or infinite (if the state does not contain the requested node). Although a state transition may represent moving more than one server, as noted in [16], we need only move the server that covers the request and treat the state as representing the virtual position of the servers while remembering the physical position of all servers. By the same simulation, our Theorem 6.1 also provides some information for nonsymmetric k-server problems.
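The reduction just sketched can be made concrete. The following Python fragment is our illustration (it computes transition costs by brute-force matching, so it is only suitable for tiny n and k): it builds the induced task system from a symmetric matrix of node distances.

```python
from itertools import combinations, permutations

def induced_task_system(length, k):
    """Task system induced by a k-server problem on nodes 0..n-1 with
    symmetric distance matrix `length`: the states are the C(n, k)
    possible server positions, and the cost of a transition between two
    positions is a cheapest matching of departing servers to their new
    nodes (brute force over permutations; illustration only)."""
    n = len(length)
    states = list(combinations(range(n), k))

    def move_cost(a, b):
        src = [x for x in a if x not in b]
        dst = [y for y in b if y not in a]
        if not src:
            return 0
        return min(sum(length[s][t] for s, t in zip(src, p))
                   for p in permutations(dst))

    d = [[move_cost(a, b) for b in states] for a in states]
    return states, d

def request_task(states, node):
    """A request at `node` becomes the task whose cost is 0 in every state
    containing the node and infinite in every other state."""
    return [0 if node in s else float("inf") for s in states]
```

On a 3-node path with unit edges and k = 2 servers, for example, the three states are the 2-subsets of the nodes, and a request at node 0 is free exactly in the two states covering node 0.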
We note that there is now an impressive literature concerning the k-server conjecture, including optimal results on specific server systems (e.g., [5], [6]), as well as results yielding upper bounds (exponential in k) for arbitrary k-server systems (see [10] and [12]).

We note, however, that the k-server with excursions problem cannot, in general, be formulated as a task system problem. For if a state represents the position of the servers, the cost of a task is not just a function of the state but also of how we reached this state. Indeed, for n ≥ k + 2, Shai Ben-David (personal communication) has shown that the competitive ratio can be made to grow with the edge lengths. (However, we should note that Ben-David's example uses excursion costs that do not satisfy the triangle inequality.) One suggestion is to redefine the problem so that servers can "move and serve" rather than "move or serve". This point of view is investigated in the case of one server by Chung et al. [7].
Our results on the expected competitive ratio of randomized algorithms for the unit-cost task system have a very nice analogue in the k-server problem that is better known as the paging or caching problem. For the paging problem, Fiat et al. [9] have established a 2H(k) upper bound on the expected competitive ratio using a very appealing (and quite possibly practical) randomized algorithm. It is interesting to note that in the case when n = k + 1, their algorithm becomes the algorithm in our Theorem 7.1. A direct modification of our (admittedly nonpractical) lower bound in Theorem 7.1 can be used to establish an H(k) lower bound for the expected competitive ratio of randomized on-line paging algorithms. Subsequently, McGeoch and Sleator [17] have constructed a randomized algorithm with a matching upper bound of H(k). A number of other important results on paging problems, on the trade-off between randomization and memory, and on a more general study of randomized algorithms in the context of competitive ratios for on-line algorithms (against various types of adversaries) can be found in Raghavan and Snir [18], Coppersmith et al. [8], and Ben-David et al. [1].
The papers of Manasse et al. [16], Fiat et al. [9], and Raghavan and Snir [18] and the original papers by Sleator and Tarjan [21] and Karlin et al. [15] argue for the development of a theory (or rather theories) of competitive analysis. Such a study seems fundamental to many other disciplines (e.g., economics) in addition to computer science and, as such, we expect to see a number of alternative models and performance measures.
ACKNOWLEDGMENTS. The idea of developing a general model for on-line task systems was introduced to us in various conversations with Fan Chung, Larry Rudolph, Danny Sleator, and Marc Snir. Don Coppersmith suggested that randomization may be useful in this context. We gratefully acknowledge their contributions to this paper. Finally, we thank the referees and Gordon Turpin for their careful reading and suggestions concerning the submitted manuscript.
REFERENCES

1. BEN-DAVID, S., BORODIN, A., KARP, R., TARDOS, G., AND WIGDERSON, A. On the power of randomization in online algorithms. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing (Baltimore, Md., May 12-14). ACM, New York, 1990, pp. 379-386.
2. BENTLEY, J. L., AND MCGEOCH, C. C. Amortized analyses of self-organizing sequential search heuristics. Commun. ACM 28, 4 (Apr. 1985), 404-411.
3. BITNER, J. R. Heuristics that dynamically alter data structures to reduce their access time. Ph.D. dissertation, Univ. Illinois, 1976.
4. BITNER, J. R. Heuristics that dynamically organize data structures. SIAM J. Comput. 8 (1979), 82-110.
5. CHROBAK, M., KARLOFF, H. J., PAYNE, T., AND VISHWANATHAN, S. New results on server problems. SIAM J. Disc. Math. 4, 2 (1991), 172-181.
6. CHROBAK, M., AND LARMORE, L. An optimal on-line algorithm for the server problem on trees. SIAM J. Comput. 20 (1991), 144-148.
7. CHUNG, F. R. K., GRAHAM, R. L., AND SAKS, M. A dynamic location problem for graphs. Combinatorica 9 (1989), 111-131.
8. COPPERSMITH, D., DOYLE, P., RAGHAVAN, P., AND SNIR, M. Random walks on weighted graphs and applications to on-line algorithms. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing (Baltimore, Md., May 12-14). ACM, New York, 1990, pp. 369-378.
9. FIAT, A., KARP, R., LUBY, M., MCGEOCH, L., SLEATOR, D., AND YOUNG, N. Competitive paging algorithms. J. Algorithms 12 (1991), 685-699.
10. FIAT, A., RABANI, Y., AND RAVID, Y. Competitive k-server algorithms. In Proceedings of the 31st Annual Symposium on Foundations of Computer Science. IEEE, New York, 1990, pp. 454-463.
11. GONNET, G., MUNRO, J. I., AND SUWANDA, H. Exegesis of self-organizing linear search. SIAM J. Comput. 10, 3 (1981), 613-637.
12. GROVE, E. The harmonic online k-server algorithm is competitive. In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing (New Orleans, La., May 6-8). ACM, New York, 1991, pp. 260-266.
13. HEYMAN, D., AND SOBEL, M. Stochastic Models in Operations Research, vol. II. McGraw-Hill, New York, 1984.
14. KAN, Y. C., AND ROSS, S. M. Optimal list order under partial memory constraints. J. Appl. Prob. 17 (1980), 1004-1015.
15. KARLIN, A. R., MANASSE, M. S., RUDOLPH, L., AND SLEATOR, D. D. Competitive snoopy caching. Algorithmica 3, 1 (1988), 79-119.
16. MANASSE, M., MCGEOCH, L., AND SLEATOR, D. Competitive algorithms for server problems. J. Algorithms 11 (1990), 208-230.
17. MCGEOCH, L. A., AND SLEATOR, D. D. A strongly competitive randomized paging algorithm. Algorithmica, to appear.
18. RAGHAVAN, P., AND SNIR, M. Memory versus randomization in on-line algorithms. In Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 372. Springer-Verlag, New York, 1989, pp. 687-703.
19. RIVEST, R. On self-organizing sequential search heuristics. Commun. ACM 19, 2 (Feb. 1976), 63-67.
20. ROSS, S. Applied Probability Models with Optimization Applications. Holden-Day, San Francisco, Calif., 1970.
21. SLEATOR, D. D., AND TARJAN, R. E. Amortized efficiency of list update and paging rules. Commun. ACM 28 (1985), 202-208.
22. TARJAN, R. E. Amortized computational complexity. SIAM J. Alg. Disc. Math. 6 (1985), 306-318.
23. YAO, A. Probabilistic computations: Towards a unified measure of complexity. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science. IEEE, New York, 1977, pp. 222-227.
RECEIVED JULY 1988; REVISED APRIL 1991; ACCEPTED MAY 1991
Journal of the Association for Computing Machinery, Vol. 39, No. 4, October 1992