TECHNION - Israel Institute of Technology
Computer Science Department
A NEW MEASURE FOR THE STUDY OF
ONLINE ALGORITHMS

by
S. Ben David and A. Borodin
Technical Report #607
February 1990
A New Measure for the Study
of Online Algorithms
S. Ben David
Department of Computer Science
Technion, Haifa, Israel
A. Borodin
Department of Computer Science
University of Toronto, Toronto, Canada
(Extended Abstract)
Abstract
An accepted measure for the performance of an online algorithm
is the "competitive ratio" introduced by Sleator and Tarjan. This
measure is well motivated and has led to the development of a significant mathematical theory for online algorithms. We point out some
counter-intuitive features of this measure: Its dependence upon memory and its inability to reflect the benefits of lookahead. We offer
an alternative measure that exhibits a more intuitive behaviour. In
particular, we demonstrate the use of our new measure by analysing
the tradeoff between the amortized cost of an online algorithm and
the amount of lookahead. We also derive online algorithms for the
K-server problem on any graph, which, relative to the new measure,
are optimal among all online algorithms (up to a factor of 2) and are
within a factor of 2K from the optimal offline performance.
I   Introduction
We consider the problem of carrying out a sequence of tasks subject to
some cost function where at any given moment one has to process a current task without knowledge of future requests. In a seminal paper, Sleator
and Tarjan [ ] introduce a measure for analyzing the performance of online
algorithms. Rather than study the performance of such online algorithms
relative to probabilistic assumptions about the input distribution, they relate the online cost to that of an optimal offline algorithm. The resulting
measure, called the competitive ratio in [ ], has now been investigated in a number of specific and abstract settings. Although the competitive ratio is in general a pessimistic bound, there are often algorithms yielding surprisingly good bounds on this ratio.
On the other hand, since online vs offline considerations are so pervasive,
one should not expect that consideration of the competitive ratio will always
lead to efficient (or even reasonable) algorithms. In this paper, we reflect on
some aspects of the competitive ratio and also introduce another measure,
which we shall call the max/max ratio. We will see that the max/max ratio
suggests different types of algorithms which in some circumstances may be
more appropriate. A possibly disappointing aspect of the competitive ratio
measure is that it does not reflect improvement in an online agent's behaviour
due to any finite amount of lookahead. This is a rather simple consequence
of the definition. With respect to the max/max ratio, we will see that the
potential benefit of finite lookahead becomes a central issue. We analyze the
tradeoff between the amortized cost of an online algorithm and the amount of lookahead available to it. In particular, we show that in the paging problem for n pages, a lookahead of $l = n - 1$ suffices for an online player to perform optimally; that is, the max/max ratio equals one!
One of the more enticing aspects of the perspective provided by the competitive ratio is the role of memory. In fact, at first it appears to be a
paradox that the known competitive algorithms make essential use of memory even though the past is uncorrelated with the future. We will show that
this memory dependence is inherent in the competitive ratio by giving some
lower bounds on the amount of memory needed for running competitive online algorithms. The paradox is explained by observing that the goal of a
competitive algorithm is not to be optimal on any sequence but rather to
make sure that the optimal (offline) algorithms can't be doing too much better. That is, it is sometimes beneficial to increase one's own cost, if in doing
so it can be guaranteed that the optimal offline cost grows accordingly. This
paradoxical behaviour disappears when considering the max/max measure.
Memory plays absolutely no role in the max/max efficiency of an online
algorithm.
For definiteness we shall formulate our considerations in terms of the K-server model of Manasse, McGeoch and Sleator (MMS [ ]) although the issues
transcend this particular abstract model. In the next section we introduce
the required definitions and notation. In the following two sections, we study
some basic properties of the two measures. In Section V we show that for
every bounded K server system there is a natural and simple online algorithm
achieving a max/max ratio of 2K. This should be contrasted with the still
open and intriguing K server conjecture of MMS [ ], of whether or not for
every K server system the competitive ratio is equal to K (or indeed if
the competitive ratio can always be bounded by any function of K). An
additional benefit of our max/max measure is that online algorithms can be
directly compared. Our natural online algorithm is shown to be within a
factor of 2 of any online algorithm. In Section VI we begin the study of
lookahead, starting with a complete analysis of "paging". We then consider
arbitrary K server problems and conclude with our own version of a K server conjecture.
II   Definitions and Notation
A K server system consists of a metric space (V; d) where V is a set of nodes (possibly infinite) and d is a metric defined on the nodes; that is, d is symmetric, d(u, u) = 0, and d satisfies the triangle inequality.¹ At any time, K servers are located on the nodes, where K < #V. A request for a node v (not presently occupied by a server) must be satisfied by moving a server from its present location u to v at a cost of d(u, v).

¹In the MMS [ ] formulation, d needn't be symmetric, but since all the known results concern symmetric systems we shall always make this assumption.
Given a request sequence $\sigma = v_1, v_2, \ldots, v_t$, an algorithm A responds with an appropriate move (= edge) sequence.

Given a K server system and an initial configuration $S_0$ of servers, an algorithm A, when applied to a request sequence $\sigma = v_1, \ldots, v_t$, produces a sequence of configurations $S_1, S_2, \ldots, S_t$ such that $v_i \in S_i \subseteq V$.² The cost
of A on $\sigma$, denoted $C_A^{(V;d)}(\sigma)$, is the sum of move costs, $\sum_{i=0}^{t-1} d(S_i, S_{i+1})$, where $d(S_i, S_{i+1})$ is the minimal cost induced by the metric d to change from configuration $S_i$ to configuration $S_{i+1}$. Define the competitive ratio of an algorithm A as
$$c_A^{(V;d)} = \limsup_{t \to \infty} \max_{|\sigma| = t} \frac{C_A(\sigma)}{C_{OPT}(\sigma)},$$
where $C_{OPT}(\sigma)$ is the optimal cost to satisfy the request sequence $\sigma$.³ That is, we compare the online
and offline performance on the same request sequence and choose, for every
length, the worst sequence in terms of this ratio. An algorithm A is called
competitive if its competitive ratio is bounded by some constant. One notes
that the optimal cost is well defined since $\sigma$ is a finite sequence. Moreover, dynamic programming affords a reasonably efficient algorithm (see [ ]) to realize the optimal bound. However, dynamic programming is an "offline" algorithm in that the entire sequence must be seen in order to produce the configuration (or server move) sequence. An algorithm A is called online if for all i, $S_{i+1}$ is a function of $v_1, \ldots, v_{i+1}$. We can also define algorithms with limited lookahead and we do so by saying that A has lookahead $l$ if for all i, $S_{i+1}$ is a function of $v_1, \ldots, v_{i+l}$. In particular then, online algorithms are
algorithms with lookahead l = 1.
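
To make the cost model concrete, the following Python sketch (ours, purely illustrative and not part of the original report; all names are hypothetical) computes the cost of moving between configurations and the optimal offline cost $C_{OPT}(\sigma)$ by dynamic programming over K-subsets of a small finite metric space.

    from itertools import combinations, permutations

    def config_distance(d, S, T):
        # minimal cost, under metric d, to move the servers from configuration S
        # to configuration T (brute-force minimum-cost matching; fine for small K)
        S, T = list(S), list(T)
        return min(sum(d[s][t] for s, t in zip(S, p)) for p in permutations(T))

    def cost_of_run(d, configs):
        # cost of a configuration sequence S_0, S_1, ..., S_t
        return sum(config_distance(d, configs[i], configs[i + 1])
                   for i in range(len(configs) - 1))

    def offline_opt(d, nodes, S0, requests):
        # optimal offline cost C_OPT(sigma): dynamic programming whose states
        # are the K-subsets of nodes containing the current request
        K = len(S0)
        best = {frozenset(S0): 0.0}
        for v in requests:
            best = {C: min(c + config_distance(d, P, C) for P, c in best.items())
                    for C in map(frozenset, combinations(nodes, K)) if v in C}
        return min(best.values())

    # example: the uniform 3-node metric with two servers initially on nodes 0, 1
    nodes = [0, 1, 2]
    d = {u: {v: 0 if u == v else 1 for v in nodes} for u in nodes}
    print(offline_opt(d, nodes, [0, 1], [2, 0, 1, 2]))
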
The concept of memory in server systems is introduced in Raghavan and Snir [ ]. They define a K server online algorithm with Q memory states as a function $u: Q \times V^K \times V \to Q \times V^K$ satisfying the server property that if
$$u(s, \langle x_1, \ldots, x_K \rangle, z) = (s', \langle y_1, \ldots, y_K \rangle)$$
then $z \in \{y_1, \ldots, y_K\}$.

²The original definition of the K server problem also implies that $|S_i \cap S_{i+1}| \geq K - 1$ (i.e. at most one server moves). As observed in MMS [ ], because of the triangle inequality the competitive ratio does not change when we allow more than one server to move. However, when one also considers memory requirements, this greater generality is useful and we allow this generality in the definition.

³We will drop the superscript (V; d) whenever it is clear from the context. We should note that an equivalent definition of the competitive ratio, as defined in BLS [ ] and MMS [ ], is given as $\inf\{\rho \mid$ there exists a constant $\beta$ such that for all finite request sequences $\sigma$, $C_A(\sigma) - \rho \cdot C_{OPT}(\sigma) \leq \beta\}$.
We now introduce a new ratio where memory is not at all needed and which, in some sense, seems closer to the spirit of worst case complexity analysis. We define the max/max ratio of an algorithm A as
$$w_M(A) = \limsup_{t \to \infty} \frac{\max_{|\sigma|=t} C_A(\sigma)}{\max_{|\sigma|=t} C_{OPT}(\sigma)}.$$
That is, we compare the worst case behaviour of A and OPT rather than their performance on the same sequence. Since the amortized (per request) complexity of an algorithm A is $\limsup_{t\to\infty} \max_{|\sigma|=t} C_A(\sigma)/t$, our max/max ratio is essentially a normalized amortized complexity, where here we normalize by the best that can be done using the optimal offline algorithm. In order to make this concept well defined, we shall hereafter assume that our server systems are bounded in the sense that $d(u, v) \leq B < \infty$ for all nodes u, v.
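
For intuition, and only on very small instances, the ratio at a fixed horizon t can be computed by brute force: take the worst request sequence for A and, separately, the worst one for OPT. This sketch is ours; online_cost and offline_cost stand for whatever cost routines one has at hand (for instance the offline_opt sketch above).

    from itertools import product

    def maxmax_ratio_at_t(nodes, t, online_cost, offline_cost):
        # compare worst cases over all request sequences of length t,
        # not costs on the same sequence (this is the point of the max/max ratio)
        worst_A = max(online_cost(sigma) for sigma in product(nodes, repeat=t))
        worst_OPT = max(offline_cost(sigma) for sigma in product(nodes, repeat=t))
        return worst_A / worst_OPT
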
III   The role of memory and lookahead in the competitive ratio
In this section we show that K-server online algorithms have to use memory growing (unboundedly) with the distances of the graphs they are designed for.

We shall demonstrate this phenomenon by proving two lower-bound results for the 2-server problem. Similar considerations can be found in Chrobak et al. [ ].
Definition: Given a graph G and a node v in G, let $m(v) = \max\{d(v, v') : v' \in G\} / \min\{d(v, v') : v' \in G,\ v' \neq v\}$. Let $m(G) = \max\{m(v) : v \in G\}$.
Note that when G has only three nodes, m(G) is the ratio between the maximal and the minimal edges of G.
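
For a finite graph given as a symmetric distance table d[u][v], the quantities m(v) and m(G) of this definition can be computed directly; the short Python sketch below (ours) is only an illustration of the definition.

    def m_of_node(d, v):
        others = [u for u in d if u != v]
        return max(d[v][u] for u in others) / min(d[v][u] for u in others)

    def m_of_graph(d):
        return max(m_of_node(d, v) for v in d)
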
Recall that for any two server problem, there exists a 2-competitive online algorithm (MMS [88]).
Theorem 1: For every graph G, any online algorithm that has memory less than $\log(m(G) - 1)$ has a competitive ratio $> 2$.
Proof: Let $v_1, v_2, v_3$ be nodes of G such that $m(G) = d(v_1, v_3)/d(v_1, v_2)$. Without loss of generality we assume that in the initial configuration the servers occupy $v_1$ and $v_3$ (we can always force a competitive algorithm to place its servers in a given configuration by a long enough sequence of requests to these nodes only).

Let us examine the response of the algorithm to a long sequence of requests alternating between $v_1$ and $v_2$. If the algorithm uses only one server for more than $m(G) - 1$ of these requests then, due to its memory bound, the algorithm will use only this server on any sequence alternating between $v_1$ and $v_2$, allowing the offline player to fix its servers, pay nothing, and defeat any competitive ratio.
On the other hand, if after some $l < m(G) - 1$ such requests the online algorithm responds with the server at node $v_3$, then the 'taskmaster' requests $v_3$ and goes back to alternations of $v_1 v_2$. As the offline algorithm can always keep a server on $v_3$, its cost is bounded by $d(v_1, v_2)$ per request. The ratio on such sequences is therefore $\geq \frac{l + 2m(G)}{l + 2}$, which is $> 2$ (for $l < m(G) - 1$). □
In fact, we can bound from below the competitive ratio of any memoryless
online algorithm by a function that can grow unboundedly with the graph
parameters.
Definition: For every triple of points $v_1, v_2, v_3$ in G let $t(v_1, v_2, v_3)$ be the sum of the three edges between them divided by two times the smallest edge among the three. Let $t(G)$ be the max of t over all triples in G.
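
As with m(G), the quantity t(G) can be computed by scanning all triples of nodes; again, the sketch below is ours and serves only to illustrate the definition.

    from itertools import combinations

    def t_of_triple(d, v1, v2, v3):
        edges = [d[v1][v2], d[v1][v3], d[v2][v3]]
        return sum(edges) / (2 * min(edges))

    def t_of_graph(d):
        return max(t_of_triple(d, *triple) for triple in combinations(list(d), 3))
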
Theorem 2: A memoryless online algorithm for two servers on G has a competitive ratio of at least t(G).
If we consider a setting where only one server, the one serving the requested node, is allowed to move, it is interesting to note that the lower bound on memory
provided by Theorem 1 (i.e. $\log(m(G) - 1)$) matches the upper bound for K-servers on the line that follows from the Chrobak et al. [ ] algorithm.
In fact, the "memoryless" aspect of the Chrobak et al. [ ] algorithm is
achieved not only by allowing more than one server to move but also by
having the ability to move servers to arbitrary points on the line. That is,
if we considered a discrete set of points on the line to be the space of the
server system, then even when we can move all servers the competitive ratio
for memoryless algorithms can still be made to grow arbitrarily.
Uselessness of lookahead for improving the competitive ratio
The (only) advantage an offline algorithm has over an online one is its
ability to see the future. One might expect that allowing an online agent
access to some finite lookahead would result in improving its performance
relative to that of the offline algorithm.
Contrary to this expectation, as far as the competitive ratio is concerned,
no finite lookahead is sufficient for any improvement in the performance of an
online algorithm. As a measure of efficiency of an algorithm, the competitive
ratio fails to reflect an important aspect of one's everyday experience of the
benefits of (finite) lookahead for decision making tasks.
Theorem 3: If the competitive ratio of a server system is some $\alpha$ (with lookahead $l = 1$), then for any fixed $l$, the ratio obtained by algorithms with lookahead $l$ for this system is also $\alpha$.
Proof: For any $\epsilon > 0$ consider a sufficiently long sequence of requests $z_{i_1} z_{i_2} \cdots z_{i_t}$ which results in a ratio of $\alpha - \epsilon$. (Recall that the competitive ratio is defined in terms of a limsup, so that there may not be a finite sequence which forces the ratio $\alpha$.) We then consider the request sequence
$$\underbrace{z_{i_1} \cdots z_{i_1}}_{l}\ \underbrace{z_{i_2} \cdots z_{i_2}}_{l}\ \cdots\ \underbrace{z_{i_t} \cdots z_{i_t}}_{l}$$
in which each request is repeated $l$ times. On this sequence an online algorithm with lookahead $l$ cannot perform any better than a fully online algorithm did on the original request sequence. That is, the ratio can be forced to be arbitrarily close to $\alpha$ and hence the competitive ratio is again $\alpha$.
IV   Some basic properties of the max/max measure
Let us define the amortized cost of an algorithm A (for a server system (V; d)) as
$$M(A) = \limsup_{t\to\infty} \max_{|\sigma|=t} C_A(\sigma)/t.$$
This is a natural adaptation of the notion of amortized complexity [ ] to our context.
We would like to show that the max/max ratio of an algorithm A measures its amortized cost. We shall see that $w_M(A)$ is just $M(A)$ normalized by the amortized cost of the optimal offline algorithm. Towards this end we need the following lemma.

Lemma 1: For every K-server system and for every K-server algorithm A for the system (not necessarily an online algorithm), the sequence $\left(\max_{|\sigma|=t} C_A(\sigma)/t\right)_{t \geq 1}$ converges to a (finite) limit as t grows to infinity.
Fact 1: For every K-server system on a finite graph and for every algorithm A,
$$w_M(A) = \frac{M(A)}{M(OPT)}.$$
The above fact implies that our max/max measure reflects the 'traditional' worst case behaviour of an algorithm. As we shall soon see, the entity
abstracted by the competitive ratio is quite different; in many cases minimizing the amortized cost is in sharp conflict with minimizing the competitive
ratio.
Before we pursue this line any further we would like to point out a few
more properties of the max/max measure.
Corollary 1: Let A, B be algorithms for servicing the same finite K-server system. Then
$$\frac{w_M(A)}{w_M(B)} = \frac{M(A)}{M(B)}.$$
Corollary 1 enables a complete analysis of the relative (max/max) efficiency of two online algorithms without referring to an optimal offline algorithm (and consequently without having to analyze it). We shall make use
of this convenience in section V where we present an online algorithm which
is optimal up to a factor of 2 amongst all online algorithms for any bounded
K-server system.
One more possible use of this comparability property is that it gives an
exact measure for the improvement in performance gained by increasing the
number of servers; it is not at all clear how to address this issue relative to
the competitive ratio.
The max/max ratio is graph-dependent
The competitive ratio for a system may depend on K only. This impression is supported by the [MMS] lower bound of K for every K server
problem and their K server conjecture. The situation is quite different when
we consider the max/max ratio. In section VI we shall see that for uniform
K + 1 node graphs the max/max ratio for K servers is exactly K. We now
present a class of graphs for which this ratio is 1.
Definition: A graph G is a K-cluster if it can be partitioned into K subgraphs $G_1, \ldots, G_K$ so that: (i) for all i there are at least two nodes in $G_i$; (ii) for all $i \neq j$, if $x \in G_i$, $y \in G_i$, $z \in G_j$ then $\frac{d(x,z)}{d(x,y)} > K$; (iii) for all i, j, diameter($G_i$) = diameter($G_j$).
Fact 2: If G is a K-cluster then there exists an online K-server algorithm
that achieves max/max ratio of 1 on G.
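
One way to read Fact 2 is through the obvious "one server per cluster" strategy; the sketch below is our illustration of that reading (the cluster partition is assumed to be given) and is not a verbatim algorithm from the report.

    def cluster_serve(d, clusters, requests):
        # dedicate server i to cluster clusters[i] and never let it leave;
        # each request is served by the server of its own cluster
        pos = [next(iter(C)) for C in clusters]   # server i starts somewhere in cluster i
        total = 0.0
        for v in requests:
            i = next(i for i, C in enumerate(clusters) if v in C)
            total += d[pos[i]][v]
            pos[i] = v
        return total
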
Relating the measures
Theorem 4: Given any task system and any algorithm A for servicing it,
$$w_M(A) \leq c_A.$$
We would now like to point out some of the differences between the max/max view of online algorithms and that of the competitive ratio. The two questions that we consider most significant are the memory dependence and the benefits of lookahead. Here we remark on the way these efficiency
measures suggest different online algorithms.
Consider the following example:
Assume you run two restaurants, one in Boston and the other in Berkeley.
Assume you have a total of 20 waiters, there are 20 tables in each restaurant,
and you try to minimize the time a customer waits by his table before a
waiter approaches him. Any strategy for distributing the waiters that does not (infinitely often) send all of them to one city, thus making a customer at the other location wait until a waiter flies back to serve him, won't be competitive. This can be seen by considering the case that from a certain day onward all your customers visit only one of the restaurants.
It is not hard to see that having 10 waiters in each location (and never
asking a customer in Boston to wait until his waiter flies back from the west
coast) is less costly in all cases except very extreme ones. From the point of
view of the max/max measure such a strategy is much better as it bounds
the worst cost per task of servicing.
We point out two sources for such differences between the measures:
i) Concentration on the worst case: the competitive ratio 'prefers' protecting against a big loss on (even) one request sequence over saving on most request sequences.

ii) Singularity at 0-cost: any algorithm that (for infinitely many n's) pays something on a request sequence (of length n) that an optimal algorithm can serve for free will be considered non-competitive by the competitive ratio. This happens regardless of how small this payment may be, since when comparing the two costs we divide by 0. The max/max measure does not face such a singularity, as there are no server systems for which all sequences cost 0 for an optimal player. On the
other hand, if there are big differences in the cost of optimal (offline)
service to sequences of the same length, then the max/max measure
becomes very permissive as it allows the maximal cost on every single
sequence.
V   A good algorithm
Definition: Let $X \subseteq V$. Then for $u \in V$, $d(u, X) = \inf_{v \in X} d(u, v)$.
Definition: Given a bounded metric space $G = (V; d)$ and $K \in \mathbb{N}$, the K-covering radius of G is defined as $R(K, G) = \inf\{r \mid$ there exists a "set of centres" $X = \{v_1, \ldots, v_K\} \subseteq V$ such that $d(u, X) \leq r$ for all $u \in V\}$.
Lemma 2: Let $R(K, G) = r$. Then for every $\epsilon > 0$ there exist $u_1, \ldots, u_{K+1}$ in V such that $d(u_i, u_j) > r - \epsilon$ for all $i \neq j$.
Consider the following algorithm for any K server problem on a bounded metric space (V; d). Let the covering radius be R and let $X = \{v_1, \ldots, v_K\}$ be a set of centres such that all nodes in V are within R of X. Place the $i$th server on $v_i$. To serve a request, choose any server subject to the constraint that the $i$th server remains within distance R of $v_i$.
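
A sketch of this serving rule (ours, with the centres and the radius R taken as given):

    def covering_radius_serve(d, centres, R, requests):
        pos = list(centres)                  # server i starts on its centre
        total = 0.0
        for u in requests:
            if u in pos:                     # node already covered, no move needed
                continue
            # any server whose home centre is within R of u may serve it; by the
            # covering property such a server exists, and since the ith server
            # never strays more than R from centres[i], each move costs at most 2R
            i = next(i for i, c in enumerate(centres) if d[c][u] <= R)
            total += d[pos[i]][u]
            pos[i] = u
        return total
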
Theorem 5: The above "covering radius" algorithm achieves a max/max ratio $\leq 2K$ for the K server problem on any bounded metric space.
Proof: Clearly the algorithm never pays more than 2R to service a request. For every $\epsilon > 0$ let $u_1, \ldots, u_{K+1}$ be a set of pairwise $(R - \epsilon)$-remote nodes. Consider any (offline) algorithm on the repeating request sequence $(u_1, \ldots, u_{K+1})^*$. Clearly for every $\epsilon > 0$ even the optimal algorithm must pay $R - \epsilon$ every K requests (see the discussion concerning the uniform system in the next section), so that for all $\epsilon > 0$, $w_M(A) \leq \frac{2R}{(R - \epsilon)/K}$, and thus $w_M(A) \leq 2K$. □
Corollary 2: The covering radius algorithm is within a factor of 2 of any
online algorithm.
Proof: Clearly every online algorithm can be forced to pay at least R on every request. □
Hochbaum and Shmoys [ ] show that the problem of computing an optimal K-set of centres for a finite metric space is NP-hard. An efficient approximation algorithm does exist for obtaining a K-set with radius $r' \leq 2r$, where r is the optimal radius, but obtaining an approximation within a factor of $(2 - \epsilon)$ remains NP-hard.
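
For concreteness, here is a standard farthest-point selection heuristic (in the style of Gonzalez) which also yields a radius within a factor of 2 of the optimum; it is not the Hochbaum-Shmoys procedure itself and is included only as our illustration of how a usable set of centres might be obtained.

    def greedy_centres(d, nodes, K):
        centres = [nodes[0]]                 # arbitrary first centre
        while len(centres) < K:
            # add the node currently farthest from the chosen centres
            far = max(nodes, key=lambda u: min(d[u][c] for c in centres))
            centres.append(far)
        radius = max(min(d[u][c] for c in centres) for u in nodes)
        return centres, radius
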
Finally, one might ask about the performance of perhaps the "most basic" algorithm, namely serving with the closest server. It is easy to see that if the K servers are configured so as to make the covering radius large (e.g. by having one server for two remote nodes) then "serving with the closest server" will not correct itself. Even if the servers are initially placed on an optimal set of centres, it is still possible to drag servers away from their territory, so that in some finite metric space the "serve with closest" online algorithm can eventually be forced to pay 2Kr per request, resulting in a max/max ratio of $2K^2$.
VI   The influence of lookahead
Unlike the situation for the competitive ratio, we now show that lookahead can help in server problems when considering the max/max ratio. In
particular, we can completely analyze the power of lookahead with respect
to any uniform server system (i.e. paging).
Theorem 6: Let $U_n$ denote the uniform n-node graph (i.e. $d(i,j) = 1$ for all $i \neq j$). Then for all $K \leq n - 1$,
$$M^{U_n}(OPT) = (n - K)/(n - 1).$$
Proof: If we consider the uniform server system $U_n$, then the offline player can play a simple greedy strategy, namely to postpone paging as long as possible. (This strategy was proposed by Belady [ ] and proven optimal by Mattson et al. [ ]; an elegant proof can be found in McGeoch and Sleator.) We assume the initial configuration has servers on nodes $z_{n-K+1}, \ldots, z_n$ and consider the request sequence $z_1, \ldots, z_n, z_1, \ldots, z_n, \ldots$. In trying to postpone moving as much as possible, at time t the offline player will serve an uncovered node with that server whose present location will be the last to be requested (for the first time since time t). In doing so, it is easy to see that on the given input sequence the offline player then moves $n - K$ times for every $n - 1$ requests.
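
In paging terms this is the farthest-in-the-future (Belady) rule; the sketch below is ours and simply computes its cost on a uniform system, charging one unit per move.

    def belady_cost(initial, requests):
        covered = set(initial)
        cost = 0
        for t, v in enumerate(requests):
            if v in covered:
                continue
            # on a fault, vacate the covered node whose next request is farthest away
            def next_use(u):
                for i in range(t + 1, len(requests)):
                    if requests[i] == u:
                        return i
                return float('inf')
            covered.remove(max(covered, key=next_use))
            covered.add(v)
            cost += 1
        return cost
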
Theorem 7: Consider the K server problem on $U_n$, with lookahead $l$, $1 < l \leq n - 1$.

1) If $l \leq n - K$, then the max/max ratio $= \frac{n-1}{n-K}$. (That is, $l \leq n - K$ yields the same max/max ratio as for $l = 1$, which is the case of an online algorithm.)

2) If $n - K < l \leq n - 1$, then the max/max ratio is $\frac{n-1}{l}$. (Thus, in particular, the max/max ratio $= 1$ for $l = n - 1$.)
Proof: We first consider the case $l \leq n - K$, so without loss of generality let $l = n - K$. At every point in time, there are $n - K$ uncovered nodes. The adversary constructs the request sequence by maintaining the property that the next $n - K$ requests are distinct and different from the K covered nodes. To do so, the adversary makes the $(n - K)$th next request a node that is neither covered now nor requested in the next $l$ requests foreseen by the online algorithm. Then the online player moves on every request.
Now we consider the case $n - K < l \leq n - 1$. The online ($l$-Greedy) algorithm is as follows: look ahead the next $l$ requests and if there are any presently occupied nodes which are not requested in the next $l$ requests, then use any server sitting on such a node. Otherwise use that server whose location is the last to be requested for the first time amongst the next $l$ requests.
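
A sketch of the $l$-Greedy rule on the uniform system (ours; the window is taken to be the current request together with the next $l - 1$, which is one reasonable reading of "lookahead $l$"):

    def l_greedy_cost(initial, requests, l):
        covered = set(initial)
        cost = 0
        for t, v in enumerate(requests):
            if v in covered:
                continue
            window = requests[t:t + l]       # the l requests visible at this step
            def first_use(u):
                return window.index(u) if u in window else float('inf')
            # vacate a node unused in the window if one exists, otherwise the one
            # whose first request in the window comes latest
            covered.remove(max(covered, key=first_use))
            covered.add(v)
            cost += 1
        return cost
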
Let us prove that in any sequence of $l$ requests, the online algorithm pays at most $n - K$.
Let $\sigma_1\sigma_2\cdots\sigma_m$ be a sequence of requests and let C be a configuration of servers. We say $\sigma_i$ is a repeat point in $\sigma_1\sigma_2\cdots\sigma_m$ relative to C if either $\sigma_i$ is a location occupied in C or there exists $j < i$ such that $\sigma_j = \sigma_i$.

Claim 1: If $t < t'$, and $C_t$ and $C_{t'}$ represent the configurations at times t and t' respectively, then any request at time $t'' > t'$ to a node v in $C_{t'}$ is a repeat point with respect to $C_t$.

Claim 2: For every C and every $\sigma_1\sigma_2\cdots\sigma_t$, there exist at least $t - (n - K)$ repeat points.

Claim 3: Suppose the $l$-Greedy algorithm vacates a node v in configuration $C_t$. Then either v is not requested in the next $l$ steps or before the first request to v there are at least $K - 1$ requests to members of $C_t$.

Claim 4: Relative to any $C_1$, the first $K - 1$ repeat points (or as many as exist) in $\sigma_1\sigma_2\cdots\sigma_t$ do not cost the $l$-Greedy algorithm anything.
Proof: Let $\sigma_s$ be the $i$th repeat point for any $1 \leq i \leq K - 1$; i.e. $\sigma_s$ occurs with the algorithm in configuration $C_s$.

If $\sigma_s$ is in $C_1$ and is not vacated in the processing of $\sigma_1, \ldots, \sigma_{s-1}$ then clearly $\sigma_s$ does not cost the $l$-Greedy algorithm. If $\sigma_s$ is not in $C_1$ then it must have been requested at some time $t'$: $1 \leq t' < s$. Let $t'$ be the last such request. Clearly $\sigma_s \in C_{t'+1}$ and if it is not vacated in the processing of $\sigma_{t'+1}, \ldots, \sigma_s$ then again $\sigma_s$ does not cost the algorithm.

So now suppose that $C_j$ is the last configuration from which $\sigma_s$ was vacated. Then by Claim 3 there must be at least $K - 1$ requests in $\sigma_j, \ldots, \sigma_{s-1}$ to members of $C_j$ and by Claim 1, each of these requests is a repeat point with respect to $C_1$. Hence there are $K - 1$ repeat points relative to $C_1$ in $\sigma_1\sigma_2\cdots\sigma_{s-1}$, contradicting the assumption that $\sigma_s$ is the $i$th ($i \leq K - 1$) repeat point relative to $C_1$.
The online player then pays at most $n - K$ in any $l$ steps since the first $l - (n - K)$ repeat points are free. Thus the max/max ratio is $\leq \frac{(n-K)/l}{(n-K)/(n-1)} = \frac{n-1}{l}$.

Finally we note that any online player with lookahead $l$ can be forced to pay $n - K$ for every $l$ steps since in any sequence of $l$ requests there are at least $n - K$ uncovered nodes. □
The previous theorem raises an important question. Namely, is it the case that for every bounded K server problem, indeed for every bounded task system problem, there exists an $l$ such that there is an $l$-lookahead algorithm achieving a max/max ratio equal to 1? That is, can amortized optimality always be achieved with finite lookahead? As observed before, the usual optimal (offline) algorithm is dynamic programming, which needs to see the entire request sequence. The obvious approach then would be to use an $l$-lookahead approximation to dynamic programming, which indeed as $l$ grows gives a better approximation to the optimal dynamic programming algorithm. We have the following:
Theorem 8: For every bounded K server system, $\forall \epsilon > 0$, $\exists l$ and an $l$-lookahead algorithm DP($l$) such that the max/max ratio of DP($l$) $\leq (1 + \epsilon)$.
Proof: Let $R = R(K, G)$ be the K-covering radius of G, so that by the argument of Theorem 5 the offline player pays at least $R/K$ per request on a worst case sequence. The online algorithm DP($l$) simply looks ahead $l$ requests and performs as does the dynamic programming solution OPT for the sequence of $l$ requests $\sigma_1\sigma_2\cdots\sigma_l$, say ending in configuration C. Then DP($l$) looks ahead at the next $l$ requests $\sigma_{l+1}\sigma_{l+2}\cdots\sigma_{2l}$ and now simulates the behaviour of OPT on the entire sequence of $2l$ requests $\sigma_1\sigma_2\cdots\sigma_{2l}$. If OPT would be in configuration $C'$ after $\sigma_1\sigma_2\cdots\sigma_l$, then in processing $\sigma_{l+1}\cdots\sigma_{2l}$, the cost to DP($l$) will be at most $d(C, C')$ plus OPT's cost on $\sigma_{l+1}\cdots\sigma_{2l}$. Continuing in this manner, it follows that if $l = m \cdot \max_{(C,C')} d(C, C') / (R(K, G)/K)$, then
$$\mathrm{COST}_{DP(l)}(\sigma) \leq \left(1 + \tfrac{1}{m}\right)\mathrm{COST}_{OPT}(\sigma).$$
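
A block-by-block sketch of DP($l$) as described in the proof (ours): every $l$ requests it re-runs the offline dynamic program on the entire prefix seen so far, pays once to synchronize with OPT's configuration at the block boundary, and then copies OPT's moves. Here opt_configs is an assumed helper returning OPT's configuration sequence S_0, ..., S_m on a prefix of length m, and config_distance is a matching cost as in the earlier cost-model sketch; a ragged final block is ignored.

    def dp_l_cost(d, S0, requests, l, opt_configs, config_distance):
        cost, current = 0.0, S0
        for end in range(l, len(requests) + 1, l):
            run = opt_configs(requests[:end])      # OPT on the whole prefix seen so far
            cost += config_distance(d, current, run[end - l])   # synchronize with OPT
            for i in range(end - l, end):          # then copy OPT's moves in this block
                cost += config_distance(d, run[i], run[i + 1])
            current = run[end]
        return cost
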
The argument above produces an $l$ that depends upon both K and the graph. We have already seen that in the paging scenario the amount of lookahead needed for approximating the offline performance grows unboundedly with n (and therefore with K). We now show a similar behaviour with respect to $m(G) = \frac{\max_{(C,C')} d(C, C')}{R(K, G)}$.
Claim: For a two server problem on a triangle G, any lookahead less than
m( G) cannot improve the performance of the online algorithm.
Conjecture
We conclude with our own version of a K Server Conjecture:
1) for every (bounded) K server system, the max/max ratio $\leq K$;
2) for every (bounded) K server system, there is an $l$ and an $l$-lookahead algorithm with max/max ratio $= 1$.