Throughput-Competitive On-Line Routing
Baruch Awerbuch*
Lab. for Computer Science
MIT

Yossi Azar†
Tel-Aviv University
and DEC SRC

Serge Plotkin‡
Dept. of Computer Science
Stanford University

* Lab. for Computer Science, MIT. Supported by Air Force Contract TNDGAFOSR-86-0078, ARO contract DAAL03-86-K-0171, NSF contract 9114440-CCR, DARPA contract N00014-J-92-1799, and a special grant from IBM.
† DEC System Research Center, 130 Lytton, Palo Alto, CA 94301. Department of Computer Science, Tel-Aviv University, Israel.
‡ Department of Computer Science, Stanford University. Research supported by U.S. Army Research Office Grant DAAL-03-91-G-0102, and by a grant from Mitsubishi Electric Laboratories.
Abstract
We develop a framework that allows us to address the issues of admission control and routing in high-speed networks under the restriction that once a call is admitted and routed, it has to proceed to completion and no reroutings are allowed. The "no rerouting" restriction appears in all the proposals for future high-speed networks and stems from current hardware limitations, in particular the fact that the bandwidth-delay product of the newly developed optical communication links far exceeds the buffer capacity of the network.

In case the goal is to maximize the throughput, our framework yields an on-line O(log nT)-competitive strategy, where n is the number of nodes in the network and T is the maximum call duration. In other words, our strategy results in throughput that is within an O(log nT) factor of the highest possible throughput achievable by an omniscient algorithm that knows all of the requests in advance. Moreover, we show that no on-line strategy can achieve a better competitive ratio.

Our framework leads to competitive strategies applicable in several more general settings. Extensions include assigning each connection an associated "profit" that represents the importance of this connection, and addressing the issue of call-establishment costs.
1 Introduction

Motivation. High-speed integrated communication networks are going to be the most important communication platform of the future. The technological advances in this area are quickly offset by the increase in consumption due to a wide spectrum of new applications (teleconferencing, cable-TV, tele-shopping, etc.). It is thus important to address the fundamental problem of efficient allocation of network resources.
One of the main network resources is the available bandwidth of the communication channels.
In order to use the network (say, transmit video
signal from one point to another) the user requests a (virtual) connection to be established
between these points. Although the rate of information flowing through such a connection might
vary in time, the network has to guarantee that
the connection will support at least the bit rate
that was agreed upon during the connection establishment negotiations. This guarantee is imperative for correct operation of many of the services, including constant bit-rate video and voice
transmission. In other words, establishing a connection corresponds to reserving the requested
bandwidth along some path connecting the end
points specified by the user.
In the context of low-speed networks (e.g. the Internet), where buffer requirements are less of an issue, the difficulties associated with on-line circuit-switching may be alleviated by delaying the transmission, slowing down its rate, or by rerouting the connection after it has been established. However, these approaches are usually inappropriate in the context of Gigabit-rate networks (e.g. ATM [2], PARIS/PLANET [4, 8]). This is mostly due to the fact that the product of transmission rates and network latency well exceeds the available nodal buffer space; in other words, because of the high bandwidth-delay product.
In this paper we describe an on-line framework
that allows us to address both the admission control (i.e. which requests to satisfy and which ones
to reject), and bandwidth reservation issues. Our
techniques are applicable in the context of real
time high-speed networking environments with
strict performance guarantees: transmission must
start at a specific time, end at a specific time, use a specific amount of bandwidth, and is guaranteed to be successfully accomplished once admitted into the network. We assume that requests for establishment of connections arrive on-line; each request specifies the source and destination nodes, the requested bandwidth, and the duration. The algorithm either rejects the request, or establishes a connection by allocating the required bandwidth along some path between the source and the destination nodes for the specified
duration.
A natural performance measure is the amortized throughput, which is defined as the average over time of the number of bits transmitted by the accepted connections.¹ In fact, our framework can be described in terms of a generalization of the throughput. We assume that each request for connection has an associated profit, which is received only if the request is satisfied. The goal is to maximize the profit. In the simplest case, where the profit is proportional to the bandwidth-duration product, maximizing the total profit corresponds to maximizing the throughput. Roughly speaking, the profit abstraction is useful if the connections differ in importance; in this case one can assign a higher profit per bit to the more important connections.

¹ To simplify the definitions, we assume that the customer will use as much bandwidth as he has requested.
Since the admission control and routing algorithm has to make decisions without knowledge of the future requests, it is natural to evaluate its performance in terms of the competitive ratio [15], which in this case is the supremum, over all possible input sequences, of the ratio of the profit achieved by the optimal off-line algorithm to the profit achieved by the on-line algorithm. Note that several natural approaches do not lead to algorithms with a good competitive ratio. On one hand, a greedy admission strategy that always accepts a connection as long as there exists a path from source to sink with sufficient residual bandwidth may, in some cases, work very poorly, since certain admitted connections may end up "blocking" many future, perhaps more profitable, connections. On the other hand, a conservative policy of waiting for the most "profitable" connections may clearly lead to very poor performance in cases where only low-profit connections show up.
Our results versus existing work. In this paper we describe an admission and routing strategy that achieves an O(log nT) competitive ratio if the profit of a call is proportional to the bandwidth-duration product, where T is the maximum duration of a call. We also prove that the above bound is tight. We prove similar results for several generalizations of this problem; discussion of these generalizations is deferred to the end of this section.
The techniques used in our algorithm resemble the ones previously used in the context of multicommodity network flow [14, 12], approximate fractional packing [13], and on-line load balancing [1, 6]. In particular, we use the idea of assigning each edge a cost function that is exponential in its current load, and the idea of concurrently working with multiple copies of the graph, one copy per time instance.

The first competitive solutions for on-line throughput maximization were pioneered by Garay and Gopal for the case of a single link [10], and by Garay, Gopal, Kutten, Mansour and Yung [9] for a line network; the latter work achieved a logarithmic competitive ratio. The problem we are concerned with, namely competitively maximizing network throughput, has been open for general network topologies. Other previous work on throughput concentrated on probabilistic models, and was based on various assumptions on the distributions of call arrival times and source-destination pairs (see e.g. [11]).
Instead of throughput, one can measure the "relative load", which is defined as the maximum (over all links and over all moments in time) of the link capacity utilization by the currently routed circuits.² Roughly speaking, when we say that the competitive ratio of an on-line algorithm with respect to load is some factor, we mean that, given any sequence of requests that can be satisfied by the off-line algorithm without overflowing the capacities, the on-line algorithm can satisfy all of these requests if we reduce the rate of each request by that factor. Alternatively, the on-line algorithm can satisfy all of these requests if we increase the capacity of each edge by that factor.

² Note that using relative load as a performance measure makes sense only if we implicitly assume that the off-line algorithm does not need to reject any requests, and disallow rejections by the on-line algorithm as well.
The problem of minimizing the relative load in the context of machine scheduling was considered in [7, 5]. On-line algorithms that are O(log n)-competitive with respect to load for the case of permanent virtual circuit routing were presented in [1]. An extension of these techniques to the case of virtual circuits with known duration appeared in [6]. Recent results in [3] address the case where the duration of each virtual circuit is a priori unknown. They show that in this case, in order to be competitive, one has to allow rerouting of circuits, and present an algorithm that reroutes each circuit at most O(log n) times while maintaining a competitive ratio of O(log n) with respect to load.
Generalizations and extensions. The framework presented in this paper leads to on-line algorithms with polylogarithmic competitive ratio in several more general settings. In particular, we show that our algorithm can easily handle the situation where the required bandwidth does not remain constant for the duration of the call, as long as the required bandwidth vs. time function is known during the call negotiation phase. Similarly, the presented algorithm can handle the case where the ratio of profit to the bandwidth-duration product varies from call to call.
An interesting generalization of the model is to incorporate the notion of call-establishment costs. Here, the profit accrued by the system is computed as the difference between the profit associated with the request and a cost which might depend both on the request and on the path used to route the connection. A straightforward modification of the presented algorithm leads to a logarithmic competitive ratio in this model. It is also possible to adapt the algorithm to allow negotiation between the network and the customers on the amount of the requested bandwidth. In that case the profit depends, of course, on the amount of bandwidth that the algorithm agrees to reserve for the customer.
An important issue that arises in the context of this work is the question of how to define competitiveness in a very long (or even non-terminating) execution. The disadvantage of the competitive factor as described above is that it provides an amortized measure of performance, where the amortization is over the entire time interval in which the system was active. Roughly speaking, by measuring the total profit accrued since time zero, we allow the on-line algorithm to grossly misbehave locally in certain epochs of the execution's history. Intuitively, this is unsatisfactory in the considered context, since the fact that the routing algorithm accrued a lot of profit "last year" should not allow it to reject all the connections during the "next year", if the duration of the requests is measured in days. A natural approach is to compare the performance of the on-line vs. off-line algorithm on any sufficiently long (with respect to the maximum call duration) interval of time, not necessarily starting at time zero.
As it turns out, our algorithm can be proved competitive with respect to this modified measure as well. For example, one of the properties of the presented algorithm is that the profit accrued by the off-line algorithm during any given interval $[\tau_1, \tau_2]$ is within a logarithmic factor of the profit accrued by the on-line algorithm in a slightly larger interval $[\tau_1 - T, \tau_2 + T]$, where T is the maximum duration of a connection. A natural question is why we are not comparing on-line vs. off-line on the same given interval. To address this question, in the full paper we show that for any on-line algorithm and for any time interval, one can always find a sequence of requests where the off-line to on-line profit ratio is unbounded.
2 Preliminaries and Definitions

The network is represented by a capacitated (directed or undirected) graph $G(V, E, u)$. The capacity $u(e)$ assigned to each edge $e \in E$ represents the bandwidth available on this edge. The input sequence consists of a collection of connection requests $\beta_1, \beta_2, \ldots, \beta_k$, where the $i$th request is represented by the tuple:
$$\beta_i = \bigl(s_i,\, t_i,\, r_i(\cdot),\, T^s(i),\, T^f(i),\, \rho(i)\bigr).$$
Node $s_i$ is the origin of the connection $i$, node $t_i$ is its destination, $r_i(\tau)$ is the function that defines the traffic rate at time $\tau$ required by the connection, and $\rho(i)$ is the "revenue" that the algorithm receives if it commits to routing this connection. $T^s(i)$ and $T^f(i)$ are the starting time and completion time, respectively, of the connection. For simplicity, we assume that these times are integer. Upon receiving a connection request $\beta_i$, the algorithm either routes it by assigning it a path $P_i$ from $s_i$ to $t_i$, or rejects it. In the latter case, we set $P_i = \emptyset$. To simplify notation, we assume that $r_i(\tau)$ is defined for any $\tau$, but that $r_i(\tau) = 0$ for $\tau \notin [T^s(i), T^f(i)]$. The relative load on edge $e$ just before considering the $k$th request is defined by:
$$\lambda_e(\tau, k) = \sum_{i < k:\ e \in P_i} \frac{r_i(\tau)}{u(e)}.$$
We require that the capacity constraints be enforced, i.e. $\forall \tau,\ e \in E,\ k:\ \lambda_e(\tau, k) \le 1$. The goal of the algorithm is to route the maximum number of connections weighted by their profits, i.e. to maximize $\sum_{i:\, P_i \neq \emptyset} \rho(i)$.

Another constraint on the algorithm is that it must be on-line, in the sense that the decision about routing or rejecting a connection $\beta_i$ is made at its start point $T^s(i)$, without any knowledge about future connections. Once a connection is made, it cannot be interrupted nor can it be rerouted.
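To make the bookkeeping concrete, here is a short Python sketch (our own illustration, not from the paper; the class and function names are assumptions of this sketch) of a request tuple and of the relative load $\lambda_e(\tau, k)$ computed from the routing decisions made so far.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Request:
    """A connection request beta_i = (s, t, r(.), T_start, T_finish, profit)."""
    s: str                          # origin node s_i
    t: str                          # destination node t_i
    rate: Callable[[int], float]    # r_i(tau); assumed 0 outside [T_start, T_finish]
    T_start: int                    # T^s(i)
    T_finish: int                   # T^f(i)
    profit: float                   # rho(i)

Edge = Tuple[str, str]

def relative_load(edge: Edge, tau: int,
                  routed: List[Tuple[Request, List[Edge]]],
                  u: Dict[Edge, float]) -> float:
    """lambda_e(tau, k): fraction of u(e) used at time tau by requests routed so far."""
    return sum(req.rate(tau) / u[edge] for req, path in routed if edge in path)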
Let $T(j) = T^f(j) - T^s(j)$ be the duration of connection $j$, and let $T = \max_j\{T(j)\}$ be the maximum duration of a connection. As mentioned in the introduction, we consider the case where the profit of each request is proportional to the rate and to the duration of this request. In fact, we allow some variation in the profit for a unit of rate per unit of time, as long as this variation is not very large. More precisely, we normalize the profit such that for any connection $j$ and any $\tau$ with $r_j(\tau) \neq 0$ we have:
$$1 \;\le\; \frac{1}{n} \cdot \frac{\rho(j)}{r_j(\tau)\, T(j)} \;\le\; F. \qquad (1)$$
Note that the $1/n$ factor in the above inequalities is used only for convenience of normalization. One interesting special case is when the requested rate is constant per connection, and when the profit is exactly proportional to the rate-duration product, i.e. to the number of bits that can be sent using this connection. For this case, we have $F = 1$.
Denote $\mu = 2(nTF + 1)$. We assume that for any $j$ and $\tau$,
$$r_j(\tau) \;\le\; \frac{\min_e\{u(e)\}}{\log \mu}. \qquad (2)$$
Informally, this means that the requested rates are significantly smaller than the minimum available capacity in the network. Although, at first glance, this restriction seems somewhat artificial, in Section 4 we show that without this restriction it is impossible to design on-line algorithms with polylogarithmic competitive ratio.
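As a rough numeric illustration (our own example; the exact constant in $\mu$ does not matter here): for $n = 1024$, $T = 100$, and $F = 1$ we get $\mu \approx 2 \cdot 10^5$, so $\log \mu \approx 17.6$, and Assumption (2) then allows each request to use at most about $6\%$ of the smallest edge capacity.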
3 The Admission Control and Routing Algorithm

The admission control and routing algorithm Route or Block is shown in Figure 1. Consider $T^s(j)$, the start time of request $j$. With each edge $e$ and time instance $\tau$, we associate a "cost" of this edge, defined by $c_e(\tau, j) = u(e)\bigl(\mu^{\lambda_e(\tau, j)} - 1\bigr)$. The algorithm routes $j$ on a path along which a weighted sum of these costs is small. More precisely, if $e \in P_j$, then $e$'s contribution to the cost of the path is computed as $\sum_\tau \frac{r_j(\tau)}{u(e)}\, c_e(\tau, j)$. If there exists a path whose cost is bounded by the profit $\rho(j)$, then this path is used to route the connection $j$. Otherwise, the connection is rejected.

New Connection$(s, t, T^s, T^f, r(\cdot), \rho)$:
    $\forall \tau,\ e \in E:\ c_e(\tau, j) \leftarrow u(e)\bigl(\mu^{\lambda_e(\tau, j)} - 1\bigr)$;
    if there exists a path $P$ in $G(V, E)$ from $s$ to $t$ such that
        $\sum_{T^s \le \tau \le T^f} \sum_{e \in P} \frac{r(\tau)}{u(e)}\, c_e(\tau, j) \;\le\; \rho$
    then route the connection on $P$, and set:
        $\forall e \in P,\ T^s \le \tau \le T^f:\ \lambda_e(\tau, j+1) \leftarrow \lambda_e(\tau, j) + \frac{r(\tau)}{u(e)}$
    else block the connection.

Figure 1: The Route or Block Algorithm.
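A minimal Python sketch of this admission rule is given below; it is our own illustrative code, not part of the paper, and the function and parameter names (route_or_block, graph, load, etc.) are assumptions of this sketch. The weight of an edge $e$ for the current request is $\sum_\tau \frac{r(\tau)}{u(e)}\, c_e(\tau, j)$ summed over the active time steps, and the request is admitted if and only if the cheapest s-t path under these weights costs at most $\rho$; a standard shortest-path computation suffices because the weights are nonnegative.

import heapq
from typing import Callable, Dict, List, Optional, Tuple

Edge = Tuple[str, str]

def route_or_block(graph: Dict[str, List[Edge]],        # node -> outgoing edges (v, w)
                   u: Dict[Edge, float],                # capacities u(e)
                   load: Dict[Tuple[Edge, int], float], # lambda_e(tau, j); missing keys mean 0
                   mu: float, s: str, t: str,
                   T_start: int, T_finish: int,
                   rate: Callable[[int], float], profit: float) -> Optional[List[Edge]]:
    """Admit the request iff some s-t path has weighted cost at most `profit`.

    The weight of edge e is sum_tau (rate(tau)/u(e)) * c_e(tau, j), with
    c_e(tau, j) = u(e) * (mu ** lambda_e(tau, j) - 1).  Capacities are never
    checked explicitly: as in Lemma 3.1, a nearly saturated edge is too
    expensive to appear on an affordable path.
    """
    def edge_weight(e: Edge) -> float:
        return sum(rate(tau) / u[e] * (u[e] * (mu ** load.get((e, tau), 0.0) - 1.0))
                   for tau in range(T_start, T_finish + 1))

    # Dijkstra over the nonnegative edge weights.
    dist: Dict[str, float] = {s: 0.0}
    prev: Dict[str, Edge] = {}
    heap: List[Tuple[float, str]] = [(0.0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for e in graph.get(v, []):
            nd = d + edge_weight(e)
            if nd < dist.get(e[1], float("inf")):
                dist[e[1]], prev[e[1]] = nd, e
                heapq.heappush(heap, (nd, e[1]))

    if dist.get(t, float("inf")) > profit:
        return None                                     # block the connection
    path, v = [], t                                     # recover the chosen path ...
    while v != s:
        path.append(prev[v])
        v = prev[v][0]
    path.reverse()
    for e in path:                                      # ... and reserve bandwidth on it
        for tau in range(T_start, T_finish + 1):
            load[(e, tau)] = load.get((e, tau), 0.0) + rate(tau) / u[e]
    return path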
The analysis of the algorithm is divided into
two parts. First, we prove that the algorithm does
not violate the capacity constraints, and then we
show that the profit accrued by the algorithm is within a logarithmic factor of the profit accrued by the optimal off-line algorithm.
Informally, the reason that the capacity constraints are always satisfied is that when an edge is close to being saturated, its cost is high enough that it will never be used for routing. Let $A$ denote the set of indices of requests that were satisfied by Route or Block, i.e. $A = \{i : P_i \neq \emptyset\}$.
Lemma 3.1 For all edges $e \in E$ and all times $\tau$, $\sum_{i \in A:\ e \in P_i} r_i(\tau) \le u(e)$.

Proof: Let $j$ be the first connection that was assigned to an edge $e$ and that caused the relative load to exceed 1. In other words, $e$ has available capacity less than $r_j(\tau)$ at some instance $\tau$, where $T^s(j) \le \tau \le T^f(j)$. By the definition of relative load, we have $\lambda_e(\tau, j) > 1 - \frac{r_j(\tau)}{u(e)}$. Using the assumption that $r_j(\tau) \le \frac{u(e)}{\log \mu}$, we get:
$$\frac{c_e(\tau, j)}{u(e)} = \mu^{\lambda_e(\tau, j)} - 1 \;\ge\; \mu^{1 - 1/\log\mu} - 1 \;=\; \frac{\mu}{2} - 1 \;=\; TFn.$$
Therefore, using Assumption (1):
$$\frac{r_j(\tau)}{u(e)}\, c_e(\tau, j) \;\ge\; TFn \cdot r_j(\tau) \;\ge\; \rho(j).$$
Hence, connection $j$ could not have used link $e$.
The next lemma shows that we can use the sum of the link costs to lower-bound the total profit accrued by our algorithm.

Lemma 3.2 Let $A$ be the set of indices of the connections routed by the Route or Block algorithm, and let $k$ be the index of the last connection. Then
$$2\log\mu \sum_{j \in A} \rho(j) \;\ge\; \sum_e \sum_\tau c_e(\tau, k+1).$$

Proof: By induction on $k$. For $k = 0$ the inequality is trivially true, since both sides are 0. Connections that were refused do not change either side of the inequality. Thus, it is enough to show that, for any $j$, if we admit connection $j$, we get:
$$\sum_e \sum_\tau \bigl[c_e(\tau, j+1) - c_e(\tau, j)\bigr] \;\le\; 2\rho(j)\log\mu.$$
Consider a link $e \in P_j$. Using the definition of the link cost, we get:
$$c_e(\tau, j+1) - c_e(\tau, j) = u(e)\Bigl(\mu^{\lambda_e(\tau,j) + \frac{r_j(\tau)}{u(e)}} - 1\Bigr) - u(e)\bigl(\mu^{\lambda_e(\tau,j)} - 1\bigr) = u(e)\,\mu^{\lambda_e(\tau,j)}\Bigl(\mu^{\frac{r_j(\tau)}{u(e)}} - 1\Bigr) = u(e)\,\mu^{\lambda_e(\tau,j)}\Bigl(2^{\log\mu \cdot \frac{r_j(\tau)}{u(e)}} - 1\Bigr).$$
By Assumption (2), we have $r_j(\tau) \le \frac{u(e)}{\log\mu}$. Since $2^x - 1 \le x$ for $0 \le x \le 1$, we conclude
$$c_e(\tau, j+1) - c_e(\tau, j) \;\le\; u(e)\,\mu^{\lambda_e(\tau,j)}\,\frac{r_j(\tau)}{u(e)}\log\mu \;=\; \Bigl(\frac{r_j(\tau)}{u(e)}\,c_e(\tau, j) + r_j(\tau)\Bigr)\log\mu.$$
The above upper bound on the change in costs, the fact that the connection $j$ was admitted, and Assumption (1) imply:
$$\sum_e \sum_\tau \bigl[c_e(\tau, j+1) - c_e(\tau, j)\bigr] \;\le\; \log\mu \Bigl(\sum_\tau \sum_{e \in P_j} \frac{r_j(\tau)}{u(e)}\, c_e(\tau, j) + \sum_\tau |P_j|\, r_j(\tau)\Bigr) \;\le\; \log\mu\,\bigl(\rho(j) + \rho(j)\bigr) \;=\; 2\rho(j)\log\mu,$$
where the last step uses the admission condition for the first term, and $|P_j| \le n$ together with Assumption (1) (which gives $\sum_\tau r_j(\tau) \le \rho(j)/n$) for the second.

Next we show that the sum of the link costs is an upper bound on the maximum profit that can be obtained by the optimal off-line algorithm.

Lemma 3.3 Let $Q$ be the set of indices of the connections that were admitted by the off-line algorithm but not by the on-line algorithm, and denote $\ell = \max\{Q\}$. Then $\sum_{j \in Q} \rho(j) \le \sum_e \sum_\tau c_e(\tau, \ell)$.

Proof: Let $P'_j$ be the path used by the off-line algorithm to route $j$, for $j \in Q$. The fact that $j$ was not admitted and the monotonicity in $j$ of the costs $c_e(\tau, j)$ imply
$$\rho(j) \;<\; \sum_\tau \sum_{e \in P'_j} \frac{r_j(\tau)}{u(e)}\, c_e(\tau, j) \;\le\; \sum_\tau \sum_{e \in P'_j} \frac{r_j(\tau)}{u(e)}\, c_e(\tau, \ell).$$
Summing over all $j \in Q$, we get:
$$\sum_{j \in Q} \rho(j) \;\le\; \sum_{j \in Q} \sum_\tau \sum_{e \in P'_j} \frac{r_j(\tau)}{u(e)}\, c_e(\tau, \ell) \;=\; \sum_e \sum_\tau c_e(\tau, \ell) \sum_{j \in Q:\, e \in P'_j} \frac{r_j(\tau)}{u(e)} \;\le\; \sum_e \sum_\tau c_e(\tau, \ell).$$
The last inequality follows from the fact that the off-line algorithm cannot exceed unit relative load at any instance in time.
Theorem 3.4 The Route or Block algorithm, shown in Figure 1, never violates the capacity constraints and accrues at least a $\frac{1}{2\log(2\mu)}$-fraction of the profit accrued by the optimal off-line algorithm.
Proof: The profit accrued by the off-line algorithm can be bounded from above by:
$$\sum_{i \in Q} \rho(i) + \sum_{i \in A} \rho(i).$$
Using Lemma 3.3, this profit is upper-bounded by:
$$\sum_e \sum_\tau c_e(\tau, \ell) + \sum_{i \in A} \rho(i).$$
Since $c_e(\tau, k+1)$ is the final cost of the edge $e$, we know that $\forall e \in E:\ c_e(\tau, k+1) \ge c_e(\tau, \ell)$. Together with Lemma 3.2, this implies that the profit of the off-line algorithm is bounded by
$$2\log\mu \sum_{i \in A} \rho(i) + \sum_{i \in A} \rho(i) \;=\; (2\log\mu + 1)\sum_{i \in A} \rho(i) \;\le\; 2\log(2\mu)\sum_{i \in A} \rho(i).$$
The above bound on the accrued profit, together with Lemma 3.1, completes the proof of the claim.
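For reference, this guarantee matches the bound stated in the abstract (a routine calculation we spell out here): since $\mu = O(nTF)$,
$$2\log(2\mu) = O(\log\mu) = O\bigl(\log(nTF)\bigr),$$
which becomes the $O(\log nT)$ competitive ratio claimed in the abstract in the throughput case $F = 1$.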
Remark: As we have mentioned in the Introduction, it is interesting to compare the off-line and on-line profits over an arbitrary interval in time that does not start at time zero. More precisely, let $[\tau_1, \tau_2]$ be some interval in time, consider the profit $\rho_{\mathrm{off}}[\tau_1, \tau_2]$ obtained by the off-line algorithm due to requests $\{j : T^s(j) \in [\tau_1, \tau_2]\}$, and let $\rho_{\mathrm{on}}[\tau_1, \tau_2]$ be the corresponding profit of the on-line algorithm. In the full paper we show that $\rho_{\mathrm{on}}[\tau_1 - T, \tau_2 + T]$ is within a logarithmic factor of $\rho_{\mathrm{off}}[\tau_1, \tau_2]$. Roughly speaking, this implies that the off-line profit on a given interval is not much higher than the corresponding on-line profit on a slightly larger interval. We also show that for any on-line algorithm and for any time interval, one can always find a sequence of requests for which the ratio $\rho_{\mathrm{off}}[\tau_1, \tau_2]/\rho_{\mathrm{on}}[\tau_1, \tau_2]$ is unbounded.
4 The Lower Bounds

In this section we show that our algorithm is optimal with respect to the achieved competitive ratio. We also justify our assumption of bounding the rates of the requests. First we show that even if all the requested rates are very small, the profit accrued by the off-line algorithm can exceed the best possible on-line profit by at least an $\Omega(\log\mu)$ factor. In all of the subsequent proofs we assume that all the capacities are 1, and that all requests appear at the beginning. Moreover, we assume that all the requests have some fixed rate $\epsilon$. At the end of this section, we show much stronger bounds for the case where $\epsilon$ is allowed to be large relative to the capacities in the network.
Let $G(n)$ be a graph which is a line of $n$ edges ($n + 1$ vertices). Denote the vertices by $v_0, \ldots, v_n$, and let $n$ be a power of 2.
Lemma 4.1 Any on-line algorithm for $G(n)$ has a competitive ratio of $\Omega(\log n)$.
Proof: Let all requests have unit duration. Consider a sequence of requests that consists of $\log n + 1$ phases. Each phase $i$, for $0 \le i \le \log n$, consists of $2^i$ groups of requests, indexed $0 \le j \le 2^i - 1$. A request in phase $i$, group $j$ has $v_{jn/2^i}$ as its starting node and $v_{(j+1)n/2^i}$ as its destination. For each $i, j$ there are $1/\epsilon$ identical requests, each requesting capacity $\epsilon$ and providing the same profit, say $\epsilon$.

Let $x_i$ be the amount of profit that the on-line algorithm accrues due to the requests in phase $i$. A unit of profit due to requests in phase $i$ can be achieved only by using up $n/2^i$ units of capacity. Since there are only $n$ units of capacity overall, we get:
$$\sum_{i=0}^{\log n} 2^{-i}\, n\, x_i \;\le\; n.$$
Define $S_j = 2^{-j} \sum_{i=0}^{j} x_i$. Then,
$$\sum_{j=0}^{\log n} S_j \;=\; \sum_{0 \le i \le j \le \log n} 2^{-j} x_i \;\le\; \sum_{i=0}^{\log n} 2 \cdot 2^{-i} x_i \;\le\; 2.$$
Hence, there exists $k$ such that $S_k \le 2/\log n$. Now consider a prefix consisting of phases $0$ through $k$ of the request sequence. The benefit of the on-line algorithm in this case is $\sum_{i=0}^{k} x_i = 2^k S_k \le 2^k (2/\log n)$. The off-line algorithm can reject all the requests except the ones in phase $k$, accruing a benefit of $2^k$.
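For concreteness, the following sketch (our own illustrative code, not from the paper; eps and the dictionary fields are assumptions of this sketch) enumerates the requests of this lower-bound construction on the line $G(n)$.

from typing import Dict, List

def lower_bound_requests(n: int, eps: float) -> List[List[Dict]]:
    """Request sequence of Lemma 4.1 on the line v_0, ..., v_n (n a power of 2).

    Phase i (0 <= i <= log n) has 2**i groups; group j consists of 1/eps
    identical requests from v_{j*n/2**i} to v_{(j+1)*n/2**i}, each with rate
    eps, unit duration, and profit eps.
    """
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    copies = int(round(1 / eps))
    phases, span = [], n
    while span >= 1:                      # span = n / 2**i shrinks by half each phase
        phase = [{"s": j * span, "t": (j + 1) * span,
                  "rate": eps, "duration": 1, "profit": eps}
                 for j in range(n // span) for _ in range(copies)]
        phases.append(phase)
        span //= 2
    return phases

The adversary presents the phases in order and, as in the proof, stops after the phase $k$ for which the on-line profit $2^k S_k$ is small; the off-line algorithm then serves only phase $k$, whose $1/\epsilon$ requests of rate $\epsilon$ per group exactly fill the capacity of the edges they use.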
Consider the case where the graph consists of a single link. By constructing a sequence of requests that have exponentially growing duration from phase to phase, and another sequence of requests where the profit per transmitted bit increases exponentially with the phase number, it is relatively easy to show the following claim:

Lemma 4.2 Any on-line algorithm has a competitive ratio of $\Omega(\log(TF))$.

By combining Lemmas 4.2 and 4.1, we get the following theorem.

Theorem 4.3 Any on-line algorithm has a throughput competitive ratio of $\Omega(\log nFT)$.
Next we show that if some connections request rates in excess of a $1/k$-fraction of the capacity, then the competitive ratio of any on-line algorithm is $\Omega(T^{1/k} + F^{1/k} + n^{1/k})$. In other words, in order to achieve a polylogarithmic competitive ratio, we need to bound the maximal rate of a connection to be below a $\log\log(TFn)/\log(TFn)$ fraction of the minimum capacity. This bound is close to the $1/\log(TFn)$-fraction bound that, as shown in Section 3, implies a logarithmic competitive ratio for the Route or Block algorithm.
Lemma 4.4 If we allow requests of rate as large as a $1/k$-fraction of the minimum capacity, then the competitive ratio is $\Omega(T^{1/k} + F^{1/k})$ for any algorithm.

Proof: Omitted.

For the next lower bound we use $G(n)$, which is a line of $n$ unit-capacity edges, where $n$ is a power of 2. All requests have unit duration.

Lemma 4.5 If we allow requests of rate $1/k$, then the competitive ratio is $\Omega(n^{1/k})$ for any algorithm for $G(n)$.
Proof: The sequence of requests consists of $k + 1$ phases. In phase 0 there is a request from $v_0$ to $v_n$. The first request must be accepted, since it might be the only request. Thus, before phase 1 the utilization of each edge is $1/k$ and the benefit of the on-line algorithm is $1/k$. For phases $1 \le i \le k$ we maintain the following invariant: either the lower bound was already proved, or before phase $i$ the utilization of each edge in the range $l_i$ to $l_i + n^{1-(i-1)/k}$ is $i/k$ and the total benefit of the on-line algorithm for all previous requests is only $i/k$.

Let $l_1 = 0$. The invariant clearly holds for $i = 1$. We assume by induction that the invariant is true for $i$ and some $l_i$, and we prove it for $i + 1$ and some $l_{i+1}$. We make $k$ requests between $v_{l_i + jn^{1-i/k}}$ and $v_{l_i + (j+1)n^{1-i/k}}$ for each $0 \le j < n^{1/k}$. If the on-line algorithm rejects all of these requests, we are done, since the off-line algorithm can accept all the requests in the phase and get benefit $n^{1/k}$. The benefit of the on-line algorithm in this case is bounded by $i/k \le 1$, which implies the lower bound.

On the other hand, if some request was accepted, then we stop phase $i$ after the first such request. That request is between $v_{l_i + jn^{1-i/k}}$ and $v_{l_i + (j+1)n^{1-i/k}}$ for some $j$. Define $l_{i+1} = l_i + jn^{1-i/k}$. We claim that the invariant holds for $i + 1$. The total benefit of the on-line algorithm before phase $i + 1$ is $i/k + 1/k = (i+1)/k$; the utilization of the edges from $l_{i+1}$ to $l_{i+1} + n^{1-i/k}$ is $i/k + 1/k = (i+1)/k$. This completes the proof of the invariant.

The invariant implies that at the start of phase $k$, all the edges from $l_k$ to $l_k + n^{1/k}$ are fully utilized. Thus, the on-line algorithm has to reject all the requests from $v_{l_k+j}$ to $v_{l_k+j+1}$, $0 \le j < n^{1/k}$. In contrast, the off-line algorithm accepts all of these requests and gets benefit $n^{1/k}$. The claim follows since the on-line benefit is bounded by 1.
By combining Lemmas 4.4 and 4.5, we get the following theorem:

Theorem 4.6 If the rate of requests can be as large as $1/k$ of the capacities, then the throughput competitiveness of any on-line algorithm is $\Omega(T^{1/k} + F^{1/k} + n^{1/k})$.
Acknowledgements
We would like to thank Orli Waarts for many
helpful discussions.
References
[1] J. Aspnes, Y. Azar, A. Fiat, S. Plotkin, and O. Waarts. On-line machine scheduling with applications to load balancing and virtual circuit routing. In Proc. 25th Annual ACM Symposium on Theory of Computing, pages 623-631, May 1993.

[2] Special issue on Asynchronous Transfer Mode. Int. Journal of Digital and Analog Cabled Systems, 1(4), 1988.

[3] B. Awerbuch, Y. Azar, S. Plotkin, and O. Waarts. Competitive routing of virtual circuits with unknown duration. Unpublished manuscript, July 1993.

[4] B. Awerbuch, I. Cidon, I. Gopal, M. Kaplan, and S. Kutten. Distributed control for PARIS. In Proc. 9th Annual ACM Symposium on Principles of Distributed Computing, pages 145-160, 1990.

[5] Y. Azar, A. Broder, and A. Karlin. On-line load balancing. In Proc. 33rd IEEE Annual Symposium on Foundations of Computer Science, pages 218-225, 1992.

[6] Y. Azar, B. Kalyanasundaram, S. Plotkin, K. Pruhs, and O. Waarts. On-line load balancing of temporary tasks. In Proc. Workshop on Algorithms and Data Structures, August 1993.

[7] Y. Azar, J. Naor, and R. Rom. The competitiveness of on-line assignment. In Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms, pages 203-210, 1992.

[8] I. Cidon and I. S. Gopal. PARIS: An approach to integrated high-speed private networks. International Journal of Digital & Analog Cabled Systems, 1(2):77-86, April-June 1988.

[9] J. Garay, I. Gopal, S. Kutten, Y. Mansour, and M. Yung. Efficient on-line call control algorithms. In Proc. 2nd Annual Israel Conference on Theory of Computing and Systems, 1993.

[10] J. A. Garay and I. S. Gopal. Call preemption in communication networks. In Proc. INFOCOM '92, volume 44, pages 1043-1050, Florence, Italy, 1992.

[11] F. P. Kelly. Blocking probabilities in large circuit-switched networks. Advances in Appl. Prob., 18:473-505, 1986.

[12] T. Leighton, F. Makedon, S. Plotkin, C. Stein, E. Tardos, and S. Tragoudas. Fast approximation algorithms for multicommodity flow problems. In Proc. 23rd ACM Symposium on the Theory of Computing, pages 101-111, May 1991.

[13] S. Plotkin, D. Shmoys, and E. Tardos. Fast approximation algorithms for fractional packing and covering problems. In Proc. 32nd IEEE Annual Symposium on Foundations of Computer Science, pages 495-504, October 1991.

[14] F. Shahrokhi and D. Matula. The maximum concurrent flow problem. J. Assoc. Comput. Mach., 37:318-334, 1990.

[15] D. D. Sleator and R. E. Tarjan. Amortized efficiency of list update and paging rules. Comm. ACM, 28(2):202-208, 1985.