
Randomized k-Server Algorithms for Growth-Rate Bounded Graphs
Yair Bartal∗        Manor Mendel∗†

∗ Computer Science Department, The Hebrew University, Jerusalem. Emails: [email protected], [email protected]. Supported in part by a grant from the Israeli Science Foundation (195/02).
† Supported in part by the Landau Center.
Abstract
The k-server problem is a fundamental online problem where
k mobile servers should be scheduled to answer a sequence
of requests for points in a metric space so as to minimize the
total movement cost. While the deterministic competitive
ratio is at least k, randomized k-server algorithms have the
potential of reaching o(k) competitive ratios. This goal may
be approached by using probabilistic metric approximation
techniques. This paper gives the first results in this direction
obtaining o(k) competitive ratio for a natural class of metric
spaces, including d-dimensional grids, and a wide range of k.
Prior to this work no result of this type was known beyond
results for specific metric spaces.
1 Introduction
The k-server problem, defined by Manasse, McGeoch,
and Sleator [18], consists of an n-point metric space,
and k mobile servers (k < n) residing in points of this
metric space. A sequence of requests is presented; each
request is associated with a point in the metric space
and must be served by moving one of the k servers to
the request point. The cost of an algorithm for serving
a sequence of requests is the total distance traveled by
the servers.
The problem is formulated as an online problem, in which the server algorithm must make its movement decisions without knowledge of future requests. Denote by costA(σ) the cost of an algorithm A for serving the request sequence σ (in case A is a randomized algorithm, this is a random variable), and by costopt(σ) the minimal cost for serving σ. As customary for online algorithms, we measure their performance using the competitive ratio. A randomized online algorithm A is called r-competitive if there exists some C such that for any request sequence σ, E[costA(σ)] ≤ r costopt(σ) + C. The randomized (resp. deterministic) competitive ratio is the infimum over r for which there exists a randomized (resp. deterministic) r-competitive algorithm.
The k-server problem has attracted much research, mainly toward resolving the k-server conjecture due to [18], which states that the deterministic competitive ratio of the k-server problem is k in any metric space. A lower bound of k is established in [18], and the best upper bound known is 2k − 1, given by Koutsoupias and
Papadimitriou [16].
The randomized k-server conjecture analogously
states that the randomized competitive ratio of the k-server problem is Θ(log k), in any metric space. In
the randomized case very little is known. The best
lower bound known is Ω(log k/ log log k) due to [2, 3],
whereas the best upper bound in terms of k is still the
deterministic upper bound 2k − 1.
Essentially, the only case where the randomized
k-server conjecture is known to be true is for the
uniform metric space (where all pairwise distances are
equal), which is equivalent to the paging problem.
The competitive ratio for the uniform space is Hk = Σ_{i=1}^k 1/i ≈ ln k [12, 19, 1]. Thus, a challenging problem
is to design randomized k-server algorithms that achieve
competitive ratio strictly less than k for arbitrary metric
spaces.
Much work has been devoted to achieving this goal, with very little success thus far. Some partial results have been achieved for specific metric spaces. Irani [14] studies the problem of two-weight caching. Bartal, Chrobak and Larmore give a (2 − ε)-competitive algorithm for 2 servers on the line [8]. Recently, Csaba and Lodha [10] gave an O(n^{2/3}) competitive randomized algorithm for n equally-spaced points on the line. Some more results are known when k is very close to n [6, 13], which we discuss in more detail in the sequel.
Let us explain our general methodology. A general
tool in approaching the randomized k-server problem
is that of metric approximation. Let the aspect ratio
∆ of a metric space be the ratio of the largest to
smallest pairwise distances. This paper deals with finite
metric spaces, for which it is convenient to normalize
distances so that the smallest distance is one and ∆
is the diameter of the metric space. Also, in the
context of this paper, we think of ∆ = O(n) (as for
the shortest distance metric in an unweighted graph).
Obviously, randomized algorithms for metric spaces of small aspect ratio can be obtained by first embedding the space into a uniform space with distortion at most ∆ and then using the randomized algorithm for the uniform space. This trivial approach results in a competitive ratio of O(∆ log k), which may be useful if ∆ is small relative to k.
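For example (a simple illustration, not stated in the paper): for n equally spaced points on the line we have ∆ = n − 1, so the trivial reduction gives an O(n log k) competitive ratio, which improves on the deterministic bound of 2k − 1 only when ∆ log k is much smaller than k.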
The main result of this paper is improving on the
above trivial bound to get ∆^{1−ε} polylog(n) competitive
algorithms for a large class of metric spaces.
Probabilistic metric approximation [4] provides a
useful tool in approaching the randomized k-server
problem by reducing the problem to the case where the
metric space is an HST. A µ-HST is a metric space
that can be recursively decomposed into subspaces of
diameter smaller by a factor of µ than that of the whole
space. In [4, 5, 11] it is shown that the competitive ratio for the k-server problem is within an O(µ log_µ n) factor of the competitive ratio that can be achieved for HST spaces. This general approach has been the basis for the development of randomized algorithms for the related metrical task systems problem, which also imply that for k = n − c servers there is an O((c log k)^2 log log k) competitive randomized k-server algorithm [6, 13].
The goal of designing randomized server algorithms on HSTs has been previously approached by Seiden [20]. He showed that for a µ-HST with µ ≈ k, low height, and small degree, there is a polylog(k) competitive algorithm. Unfortunately, this result is not applicable to the general problem because of the condition on µ.
In this paper we make a step in applying the general
framework described above. Our main result provides
randomized k-server algorithms for a natural class of
metric spaces defined on unweighted graphs having
bounded growth-rate, including d-dimensional grids.
The growth-rate (see [17]) of an unweighted graph M
is ρ = max{log |B(x, r)|/ log r; x ∈ M, r ≥ 2}. Such
metric spaces can be probabilistically approximated by
HSTs with some special properties that enable us to
get better algorithms. The competitive ratio we obtain for the class of growth-rate ρ is ∆^{1 − 1/(ρ+2)} polylog(n). For d-dimensional grids we obtain a competitive ratio of n^{(d+1)/(d(d+2))} d^3 log n.
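For a concrete reading of these bounds (an illustration, not from the paper): for d = 1 the grid bound is roughly n^{2/3} log n, and for d = 2 it is roughly n^{3/8} log n (the d^3 factor is a constant for fixed d); in both cases the competitive ratio is o(k) whenever k is polynomially larger than the stated bound.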
Our result is motivated by the recent work of Csaba
and Lodha [10] for the special case where the metric
space is composed of equally spaced points on the line
(i.e., d = 1). Our result extends theirs, and also simplifies their proof for this special case.
2 Algorithms for Growth-Rate Bounded Graphs

The algorithm presented in this paper can be naturally presented as having two parts: a basic algorithm that works on a family of parameterized "decomposable" spaces, and generalizations via (probabilistic) approximation of larger families of metric spaces by the decomposable metric spaces.
Definition 2.1. A (θ, b, t)-decomposable space is a
metric space M = (V, d) that can be partitioned into t
blocks B1 , . . . , Bt with the following properties: (i) the
distance between any two points in the same block is 1.
(ii) Each block contains at most b points. (iii) Denote by
δ the minimum distance between two points in different
blocks, and by ∆ the maximum distance between two
points in different blocks. Then δ ≥ 1, and θ = ∆/δ.
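As a concrete reading of Definition 2.1, the following small sketch (ours, not part of the paper; the function name and input conventions are illustrative) checks whether a given block partition of a finite metric, presented as a distance matrix, witnesses (θ, b, t)-decomposability and returns the parameters.

    def decomposition_params(dist, blocks):
        # dist: symmetric matrix of pairwise distances, dist[x][y], for points 0..n-1
        # blocks: a partition of the points into lists of indices (the blocks B1,...,Bt)
        # Returns (theta, b, t) if the partition satisfies Definition 2.1, else None.
        t = len(blocks)
        b = max(len(B) for B in blocks)
        # (i) every distance within a block must equal 1
        for B in blocks:
            for x in B:
                for y in B:
                    if x != y and dist[x][y] != 1:
                        return None
        # (iii) delta / Delta: min / max distance between points in different blocks
        inter = [dist[x][y]
                 for i in range(t) for j in range(t) if i < j
                 for x in blocks[i] for y in blocks[j]]
        if not inter:
            return None  # a single block is just a uniform space
        delta, Delta = min(inter), max(inter)
        if delta < 1:
            return None
        return Delta / delta, b, t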
We prove the following lemma in the next section.
Lemma 2.1. There exists a k-server randomized algorithm DM which is O(max{b, θ} log t) competitive on
any (θ, b, t)-decomposable metric space.
In this section we obtain algorithms for more natural and general families of metric spaces using DM, and
probabilistic approximation of metric spaces [4]. We
need the following definition taken (with adaptations)
from [4].
Definition 2.2. For µ ≥ 1, a µ-hierarchically well-separated tree (µ-HST) is a metric space defined on the leaves of a rooted tree T. To each vertex u ∈ T we associate a label Λ(u) ≥ 0 such that Λ(u) = 0 if and only if u is a leaf of T. The labels are such that if a vertex u is an offspring of a vertex v then Λ(u) ≤ Λ(v)/µ. The distance between two leaves x, y ∈ T is defined as Λ(lca(x, y)), where lca(x, y) is the least common ancestor of x and y in T. Clearly, this function is a metric on the set of leaves. The tree T is called the tree defining the HST.

An exact µ-HST is a µ-HST which also satisfies, for every internal child u of a vertex v, the stronger condition Λ(u) = Λ(v)/µ.
Definition 2.3. ([4]) An embedding f : M → N between metric spaces is called dominating if ∀u, v ∈ M, dM(u, v) ≤ dN(f(u), f(v)). A metric space M is α-probabilistically approximated by a set of metric spaces S if there exists a distribution D over S and, for every N ∈ S, a dominating embedding fN : M → N, such that for all u, v ∈ M, E_{N∈S}[dN(fN(u), fN(v))] ≤ α dM(u, v).
In [4] the following is observed.

Proposition 2.1. If M is α-probabilistically approximated by S and for each N ∈ S there is an r-competitive randomized k-server algorithm, then M has an αr-competitive randomized algorithm for the k-server problem.
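The argument behind Proposition 2.1 is the standard one (a sketch, paraphrased here for completeness): sample N according to the distribution D, run the r-competitive algorithm for N on fN(σ), and mimic its moves in M. Since fN is dominating, the cost incurred in M is at most the cost incurred in N, so E[costM(σ)] ≤ E_N[r costopt_N(fN(σ)) + C]; and since the optimal M-schedule mapped by fN is feasible in N, E_N[costopt_N(fN(σ))] ≤ α costopt(σ), giving an αr-competitive algorithm.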
The following theorem is a result due to [11] (improving previous constructions [4, 5]) and an observation made in [7].

Theorem 2.1. For any µ ≥ 1, any n-point metric space is O(µ log_µ n) probabilistically approximated by a set of exact µ-HSTs.
Definition 2.4. We define ∆(M) = diam(M)/ min_{x≠y} dM(x, y), the distortion of M from the uniform space. By scaling we may assume without loss of generality that min_{x≠y} dM(x, y) = 1, and therefore throughout the rest of the paper ∆(M) = diam(M).
Definition 2.5. We define bM(r) = max{|S| : S ⊂ M, diam(S)/ min_{x≠y} dM(x, y) ≤ r}.
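As a simple illustration (ours): for the path on n equally spaced points, bM(r) ≤ r + 1, since an m-point subset has diameter at least (m − 1) times its minimum pairwise distance; together with ∆(M) = n − 1, these are the quantities that enter Lemma 2.2 and Corollary 2.1 for d = 1.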
Combining Lemma 2.1 with Theorem 2.1 we obtain

Lemma 2.2. Let M be an n-point metric space. Let ∆ = ∆(M). Fix a natural number h > 1. Then there exists an r-competitive randomized algorithm for the k-server problem on M with

r = O( max{∆^{1/h} bM(∆^{1/h}), ∆^{1−1/h}} (h log^2 n)/(log ∆) ).

Proof. Let µ = ∆^{1/h}. By Theorem 2.1, M is O(µ log_µ n) probabilistically approximated by a set of exact µ-HSTs. We claim that each such HST is a (∆/µ^2, bM(µ), t)-decomposable space for some t ≤ n. To see this, decompose such an HST into blocks which are subtrees of height 1. The diameter of each such block is at most µ (actually, either 0 or µ). Note that since the distances in the HST dominate the distances in M, each block contains at most bM(µ) points. The distances between blocks are at most ∆, and at least µ^2. Thus these trees are indeed (∆/µ^2, bM(µ), t)-decomposable, for t ≤ n. By Lemma 2.1 there are O(max{∆/µ^2, bM(µ)} log n) competitive algorithms for these trees, and applying Proposition 2.1 we conclude the claim.

Theorem 2.2. Let M be an n-point metric space. Let ∆ = ∆(M). If there exists h > 2 such that bM(∆^{1/h}) ≤ ∆^{1−2/h}, then for any k there exists a ∆^{1−1/h} (h log^2 n)/(log ∆) competitive algorithm for M.

To reach a natural family of metric spaces for which Theorem 2.2 implies non-trivial bounds, we use the notion of growth-rate from [17].

Definition 2.6. Let a = min_{x≠y} dM(x, y). The growth rate of M is defined as

ρ(M) = max_{x∈M, r≥2} log |{y : dM(x, y) ≤ ar}| / log r.

We thus conclude the following.

Corollary 2.1. Any n-point metric space M with growth rate at most d ∈ N has a ∆^{(d+1)/(d+2)} (d log^2 n)/(log ∆) competitive randomized algorithm for the k-server problem, where ∆ = ∆(M).

Proof. Let h > 2 be a natural number to be defined shortly. Then bM(∆^{1/h}) ≤ (∆^{1/h})^d. Thus, in order to use Theorem 2.2, it is sufficient to show that (∆^{1/h})^d ≤ ∆^{1−2/h}, i.e., h ≥ d + 2.

Corollary 2.2. The d-dimensional grid on n points has an O(n^{(d+1)/(d(d+2))} d^3 log n) competitive randomized k-server algorithm.

Proof. The result follows from the fact that the growth rate of the d-dimensional grid is d and that its diameter is ∆ = dn^{1/d}.

Note that for d = 1 we obtain the same competitive ratio achieved in [10] for this metric space.
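For concreteness, the calculation behind Corollary 2.2 is as follows (a sketch, with constants ignored): plugging ∆ = dn^{1/d} into Corollary 2.1 gives ∆^{(d+1)/(d+2)} ≤ d n^{(d+1)/(d(d+2))}, while log ∆ ≥ (log n)/d implies (d log^2 n)/(log ∆) ≤ d^2 log n; multiplying the two bounds yields n^{(d+1)/(d(d+2))} d^3 log n.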
3 An Algorithm for Decomposable Spaces

We recall some concepts from the k-server problem on uniform spaces (also known as the paging problem). The following concepts are commonly used in that setting, and were first introduced in [21, 15, 12].

Definition 3.1. (Phase partition) The request sequence is partitioned into disjoint contiguous subsequences, called phases, as follows. The first phase begins at the beginning of the sequence. The ith phase begins immediately after the (i − 1)th phase ends. It ends either at the end of the sequence, or just before the request for the (k + 1)th distinct point during the ith phase (whichever comes first). Note that phase partitioning can be done in an online fashion.

We call a phase with respect to k servers a k-phase.

Definition 3.2. (Marking algorithms) A point that has already been requested during the current phase is called marked. Marks are erased at the end of the phase. An online algorithm is said to have the marking property if it never leaves a marked point without a server.

Denote by MARKk(S) a marking algorithm with k servers applied to the space S.
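For intuition, here is a minimal sketch (ours; names are illustrative) of the online phase partition of Definition 3.1; the set of distinct points seen in the current phase is exactly the set of marked points of Definition 3.2.

    def phases(requests, k):
        # Splits `requests` into k-phases online (Definition 3.1): a phase ends just
        # before the (k+1)-th distinct point requested in it.  The set `marked` holds
        # exactly the marked points of the current phase (Definition 3.2).
        out, current, marked = [], [], set()
        for r in requests:
            if r not in marked and len(marked) == k:
                out.append(current)           # the phase ends just before this request
                current, marked = [], set()
            current.append(r)
            marked.add(r)
        if current:
            out.append(current)
        return out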
We now present the algorithm for (θ, b, t)-decomposable spaces.

Algorithm Demand-Marking (DM). Assume the (θ, b, t)-decomposable space M is composed of the blocks B1, . . . , Bt.

The algorithm is executed in periods. Periods are contiguous subsequences that partition the request sequence σ = σ1 σ2 · · · , where σp is the subsequence of period p. The end of a period is determined by DM in an online fashion.

We further decompose the subsequence σp into non-contiguous subsequences σp^i, i ∈ {1, . . . , t}, where σp^i contains the requests in σp to block Bi. Each of the algorithms MARKj(Bi), 1 ≤ j ≤ k, receives the requests in σp^i as input.
For every block Bi we define the demand of the block, demi, at a given time as the maximum j such that MARKj(Bi)
has already finished ⌈δ⌉ j-phases since the beginning of
the period. Note that the demand of a block does not
decrease during the period. At the beginning of the period
we assume demi = 0.
DM maintains the invariant that if a block Bi has ki
servers, then they occupy the same points occupied by the
servers of MARKki (Bi ).
A block Bi for which ki > demi (i.e., it has more
servers than its demand) is called available. DM maintains
the invariant that after serving a request, in every block
Bi , ki ≥ demi . In case demi increases above ki , then DM
adds demi −ki servers to Bi (see below how) to satisfy
demi = ki . Note that in this way, once a block becomes
unavailable, it remains unavailable until the end of the
period.
In order to bring a server to block Bi , DM chooses
uniformly at random a block Bj among the available blocks
and moves one of the servers in Bj to Bi .
A period ends when there is a server to move, but no
available block. In this case the demands in all the blocks
are reinitialized to 0, the phase in each block is stopped
(i.e., all MARKj (Bi ) are reinitialized), and a new period
begins.
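The following is a minimal sketch (ours, with illustrative names) of DM's bookkeeping: demands, available blocks, uniformly random server transfers, and period boundaries. The per-block simulations MARKj(Bi) are abstracted into a callback finished_phases(i, j) that is assumed to report how many j-phases MARKj(Bi) has completed in the current period.

    import math, random

    class DemandMarking:
        # Sketch of DM's bookkeeping on a (theta, b, t)-decomposable space with blocks
        # B_0,...,B_{t-1}.  `finished_phases(i, j)` is an assumed callback reporting how
        # many j-phases MARK_j(B_i) has completed since the current period began.
        def __init__(self, t, k, delta, initial_servers, finished_phases):
            self.t, self.k = t, k
            self.ceil_delta = math.ceil(delta)
            self.servers = list(initial_servers)   # servers[i] = DM servers in block i
            self.demand = [0] * t
            self.finished_phases = finished_phases

        def available(self):
            # blocks holding more servers than their current demand
            return [i for i in range(self.t) if self.servers[i] > self.demand[i]]

        def update_demand(self, i):
            # demand of block i = max j such that MARK_j(B_i) finished ceil(delta) j-phases
            self.demand[i] = max([j for j in range(1, self.k + 1)
                                  if self.finished_phases(i, j) >= self.ceil_delta],
                                 default=0)
            while self.servers[i] < self.demand[i]:
                donors = self.available()
                if not donors:                      # no available block: the period ends
                    self.end_period()
                    return
                j = random.choice(donors)           # uniform among the available blocks
                self.servers[j] -= 1
                self.servers[i] += 1

        def end_period(self):
            # demands are reset and all MARK_j(B_i) phase partitions restart
            self.demand = [0] * self.t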
3.1 Analysis. We will need the following lemma, which appeared (implicitly) in [21].

Lemma 3.1. Consider a partition into k-phases of the request sequence in a uniform space S with b points.
1. Any marking algorithm MARKk(S) has a cost of at most k in a k-phase.
2. Any (offline) strategy with h servers, h ≤ k, has a cost of at least k − h + 1 in each k-phase.

The definition of periods and phases of DM is a deterministic function of the request sequence, independent of the random bits of DM.

Denote by costA(p) the cost of strategy A during period p. We fix an oblivious adversary strategy adv, and we prove that Σp E[costDM(p)] ≤ r Σp costadv(p) + C, where r = O(max{b, θ} log t) and C is independent of the request sequence.

Denote by Dp(i) the number of servers DM has at the end of period p in block Bi. Similarly, denote by Cp(i) the number of servers adv has at the end of period p in block Bi. Let mp = Σi max{0, Dp−1(i) − Dp(i)} (where the sum is over the blocks) be the number of servers that changed blocks during the period.

Lemma 3.2. The expected number of server movements between blocks during period p made by DM is at most mp(1 + ln t).
Proof. In order to prove this lemma it is easier to think
of the k-server problem as an (n − k)-evader problem
[9]:
There are n − k evaders in the n-point metric
space, and no two evaders can occupy the
same point. A sequence of point requests is
presented. Upon requesting a point x, the
evader in x (if there is one) must be moved
to an unoccupied point. The cost is the total
movement of the evaders.
Obviously, this problem is equivalent to the k-server
problem, and in particular, the algorithm DM can be
cast in terms of evaders. In order to prove the lemma
we need to observe that when DM ejects an evader from
block Bi , it does so by choosing uniformly at random
an available block and moving the evader to that block.
It is helpful to give each evader an identity that is kept when the evader moves from one block to another. We also assume that when there is a need to eject an evader from block Bi, DM first chooses an evader among those that were injected into Bi during the period, if such an evader exists in Bi. These assumptions have
no real influence on the behavior of the algorithm. They may influence the cost of reconfiguration of the evaders within a block; however, we bound this cost by the maximum cost possible.

With the above assumptions, there are exactly mp unique evaders that are ejected at least once during period p. We next show that for each such evader, the expected number of ejections is at most 1 + ln t. Fix an ejected evader e and denote by Y the number of ejections of evader e. Note that between ejections of e at least one block must have changed status from available to unavailable. This is so since when e is ejected, it is ejected to an available block Bi, and it will not be ejected from Bi before Bi becomes unavailable.

From the discussion above, we can bound Y as follows. Order the blocks Bi1, Bi2, . . . , Bit according to the order in which they become unavailable during the period. Note that availability is a deterministic function of the request sequence, independent of the random bits of DM. Denote by Ih the indicator variable of the event that the evader e reached Bih. With these notations, Y ≤ Σ_{h=1}^t Ih. Denote by h0 the index of the block Bih0 in which e began the period. Thus for h < h0, Pr[Ih = 1] = 0. We next prove that for h ≥ h0, Pr[Ih = 1] ≤ 1/(t − h + 1). To see this, denote by Jh′ the event "h′ is the largest number such that h′ < h and Ih′ = 1". Note that Σ_{h′ < h} Pr[Jh′] = 1. From the behavior of DM,

Pr[Ih = 1] = Σ_{h′ < h} Pr[Jh′] · 1/(t − h′) ≤ Σ_{h′ < h} Pr[Jh′] · 1/(t − h + 1) = 1/(t − h + 1).

Thus,

E[Y] ≤ Σ_{h=1}^t E[Ih] ≤ Σ_{h=1}^t 1/(t − h + 1) ≤ 1 + ln t.
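As a sanity check of the harmonic bound (ours, not part of the proof), one can simulate a worst case in which a single evader is ejected every time its block becomes unavailable and always lands in a uniformly random still-available block; the empirical mean number of ejections stays below 1 + ln t.

    import math, random

    def ejections(t, h0, trials=100000):
        # The evader starts in the h0-th block to become unavailable (1-indexed); each
        # time its block becomes unavailable it jumps to a uniformly random block among
        # those that are still available.  Returns the average number of jumps.
        total = 0
        for _ in range(trials):
            pos, jumps = h0, 0
            while pos < t:
                pos = random.randint(pos + 1, t)    # uniform over blocks not yet unavailable
                jumps += 1
            total += jumps
        return total / trials

    # ejections(100, 1) is about 5.2, below 1 + math.log(100), which is about 5.6.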
We now bound the cost of DM in period p using the
number of servers’ movements between blocks in the
period.
Lemma 3.3. E[costDM(p)] = O((∆ + bδ)(t + mp log t)).

Proof. For each server movement from Bi2 to Bi1, i1 ≠ i2, we associate the following costs of DM: (i) the server movement itself, which is at most ∆; (ii) the reconfiguration cost in both Bi1 and Bi2, which is at most 2b; (iii) the total cost for servicing the requests in Bi1 since the last time a server was brought into Bi1 (or since the beginning of the period, in case this is the first time) until that point.

We next show that the last term is at most b⌈δ⌉. Indeed, just before the server was brought into Bi1, block Bi1 had demand demi1, and thus there were at most ⌈δ⌉ demi1-phases in Bi1 since the beginning of period p, and in particular since the last time a server was brought into Bi1. During that time DM had ki1 ≥ demi1 servers in Bi1, and therefore its cost on each such demi1-phase was at most demi1 ≤ b. The upper bound of b⌈δ⌉ follows.

By Lemma 3.2, there are at most O(mp log t) expected server movements between blocks, and so the total cost associated with server movements between blocks is O((∆ + bδ) mp log t). We are left to consider the costs on blocks into which no server is brought during period p. There are at most t such blocks, and for each such block we have the same bound as the one associated with a server movement between blocks. Thus the claim follows.

Next, we bound from below the cost of the adversary.

Lemma 3.4. For a period p,

(3.1)   costadv(p) ≥ δ Σ_{i=1}^t max{0, Dp(i) − Cp−1(i) + 1}.

Proof. Fix a block Bi in which Cp−1(i) ≤ Dp(i). We will associate uniquely with this block a cost of δ(Dp(i) − Cp−1(i) + 1) for the adversary.

Assume that during the period the adversary brought s servers into Bi. The cost of doing so is at least sδ. If s ≥ Dp(i) − Cp−1(i) + 1, we are done.

Otherwise, Cp−1(i) + s ≤ Dp(i). Let a′ be the maximum number of servers the adversary had in Bi during period p. Clearly, a′ ≤ Cp−1(i) + s ≤ Dp(i). During period p the requests in Bi formed at least ⌈δ⌉ Dp(i)-phases. From Lemma 3.1, the cost of serving each Dp(i)-phase using a′ servers is at least Dp(i) − a′ + 1. Adding this cost to the cost of bringing in the s servers, we conclude

costadv(p) ≥ (Dp(i) − a′ + 1)⌈δ⌉ + sδ ≥ (Dp(i) − Cp−1(i) + 1)δ.

Lemma 3.5. The adversary's cost is at least cδ Σp (t + mp) + C, where the sum is taken over the periods, c > 0 is a universal constant, and C is a constant independent of the request sequence.
Proof. Note that

(3.2)   Σi Cp(i) = Σi Cp−1(i) = Σi Dp−1(i) = Σi Dp(i) = the total number of servers.

From (3.1) and (3.2),

(3.3)   δ Σi |Cp−1(i) − Dp(i)| = 2δ Σi max{0, Dp(i) − Cp−1(i)} ≤ 2 costadv(p).

Also, counting the number of ejections the adversary had in period p,

(3.4)   δ Σi |Cp−1(i) − Cp(i)| ≤ 2 costadv(p).

Summing (3.3) and (3.4), we obtain

(3.5)   4 costadv(p) ≥ δ Σi ( |Cp−1(i) − Dp(i)| + |Cp−1(i) − Cp(i)| ) ≥ δ Σi |Cp(i) − Dp(i)|.
Adding the pth period of (3.3) and the (p − 1)th
period of (3.5), we have
(3.6)   2 costadv(p) + 4 costadv(p − 1) ≥ δ Σi ( |Cp−1(i) − Dp(i)| + |Cp−1(i) − Dp−1(i)| ) ≥ δ Σi |Dp−1(i) − Dp(i)| ≥ mp δ.

Next we shall see the lower bound in terms of t. From (3.1) it follows that

costadv(p) ≥ δ ( Σ_{i=1}^t max{0, Dp(i) − Cp−1(i)} + |{i : Dp(i) = Cp−1(i)}| ).

Since Σi Dp(i) = Σi Cp−1(i), we conclude that

(3.7)   costadv(p) ≥ δ ( (1/2) Σ_{i=1}^t |Dp(i) − Cp−1(i)| + |{i : Dp(i) = Cp−1(i)}| ) ≥ δt/2.

Summing (3.6) and (3.7) over all periods, we obtain

8 Σp costadv(p) ≥ δ Σp (mp + t) − ctb^2 δ.

Proof of Lemma 2.1. Follows directly from Lemma 3.3 and Lemma 3.5.
References

[1] D. Achlioptas, M. Chrobak, and J. Noga. Competitive analysis of randomized paging algorithms. Theoretical Computer Science, 234:203–218, 2000.
[2] Y. Bartal, B. Bollobás, and Manor Mendel. A Ramsey-type theorem for metric spaces and its application for metrical task systems and related problems. In Proceedings of the 42nd Annual Symposium on Foundations of Computer Science, pages 396–405, 2001.
[3] Y. Bartal, N. Linial, M. Mendel, and A. Naor. On metric Ramsey-type phenomena. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 463–472, 2003.
[4] Yair Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science, pages 183–193, 1996.
[5] Yair Bartal. On approximating arbitrary metrics by tree metrics. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pages 183–193, 1998.
[6] Yair Bartal, Avrim Blum, Carl Burch, and Andrew Tomkins. A polylog(n)-competitive algorithm for metrical task systems. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 711–719, 1997.
[7] Yair Bartal, Moses Charikar, and Danny Raz. Approximating min-sum k-clustering in metric spaces. In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, 2001.
[8] Yair Bartal, Marek Chrobak, and Lawrence L. Larmore. A randomized algorithm for two servers on the line. Information and Computation, 158(1):53–69, 2000.
[9] A. Blum, H. Karloff, Y. Rabani, and M. Saks. A decomposition theorem for task systems and bounds for randomized server problems. SIAM J. Comput., 30(5):1624–1661, 2000.
[10] Béla Csaba and Sachin Lodha. A randomized on-line algorithm for the k-server problem on a line. Technical Report 2001-34, DIMACS, October 2001.
[11] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 448–455, 2003.
[12] Amos Fiat, Richard Karp, M. Luby, L. A. McGeoch, Daniel D. Sleator, and N. E. Young. Competitive paging algorithms. Journal of Algorithms, 12:685–699, 1991.
[13] Amos Fiat and Manor Mendel. Better algorithms for unfair metrical task systems and applications. SIAM J. Comput., 32(6):1403–1422, 2003.
[14] Sandy Irani. Randomized weighted caching with two page weights. Algorithmica, 32(4):624–640, 2002.
[15] Anna Karlin, M. Manasse, L. Rudolph, and Daniel D. Sleator. Competitive snoopy caching. Algorithmica, 3(1):79–119, 1988.
[16] E. Koutsoupias and C. Papadimitriou. On the k-server conjecture. J. Assoc. Comput. Mach., 42(5):971–983, 1995.
[17] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215–245, 1995.
[18] M. Manasse, L. A. McGeoch, and D. Sleator. Competitive algorithms for server problems. Journal of Algorithms, 11(2):208–230, 1990.
[19] L. McGeoch and D. Sleator. A strongly competitive randomized paging algorithm. Algorithmica, 6(6):816–825, 1991.
[20] Steve Seiden. A general decomposition theorem for the k-server problem. Information and Computation, 174(2):193–202, 2002.
[21] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28:202–208, 1985.