The material in this paper has appeared in “Limit theorems for workload input models.”
Stochastic Networks: Theory and Applications. F.P. Kelly, S. Zachary, I. Ziedins, eds.
Royal Statistical Society Lecture Note Series 4. Oxford Science Publications, Oxford. 1996,
119-140.
Limit theorems for workload input models
Thomas G. Kurtz
Departments of Mathematics and Statistics
University of Wisconsin - Madison
Madison, WI 53706-1388
USA
February 6, 1996
Abstract
General models for workload input into service systems are considered. Scaling limit
theorems appropriate for the formulation of fluid and heavy traffic approximations
for systems driven by these inputs are given. Under appropriate assumptions, it is
shown that fractional Brownian motion can be obtained as the limiting workload input
process. Motivation for these results comes from data on communication network traffic
exhibiting scaling properties similar to those for fractional Brownian motion.
1 Discrete source models.
We consider models for the input of work into a system from a large number of sources.
Each source “turns on” at a random time and inputs work into the system for some period
of time. Associated with each active period of a source is a cumulative input process, that
is, a nondecreasing stochastic process X such that X(t) is the cumulative work input into
the system during the first t units of time during the active period. One representation of
the process is as follows. Let N (t) denote the number of source activations up to time t, and
for the ith activation, let Xi (s) denote the cumulative work input into the system during
the first s units of time that the source is on. We will refer to N as the source activation
process. The total work input into the system up to time t is then given by
    U(t) = \int_0^t X_{N(s)}(t - s)\, dN(s).    (1.1)
In some settings, it is useful to model the length of time τi that a source remains active
separately from Xi . The total work input up to time t then becomes
    U(t) = \int_0^t X_{N(s)}(\tau_{N(s)} \wedge (t - s))\, dN(s).    (1.2)
Of course these two approaches are equivalent since for the second model, we can always
define X̃i (s) = Xi (τi ∧ s) and obtain a model of the first form with the same total work
input. Note that many sources may be active at the same time, and we let L(t) denote the
number of sources active at time t, that is,
    L(t) = \int_0^t I_{[0,\tau_{N(s)})}(t - s)\, dN(s).
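For intuition, the representation (1.1)-(1.2) can be simulated directly. The sketch below (Python; all parameter choices are illustrative assumptions, not from the paper) uses Poisson activations and constant-rate cumulative inputs X_i(s) = c_i s over exponential active periods; the model itself allows any nondecreasing X_i.

```python
import random

def simulate_sources(lam, T, rate_mean, dur_mean, seed=0):
    """Simulate activation times S_i of a rate-lam Poisson process on (0, T]
    together with i.i.d. pairs (X_i, tau_i), where X_i(s) = c_i * s is a
    constant-rate cumulative input (an illustrative choice) and tau_i is
    the active-period length.  Returns U(t) and L(t) as in (1.2)."""
    rng = random.Random(seed)
    S, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > T:
            break
        S.append(t)
    c = [rng.expovariate(1.0 / rate_mean) for _ in S]    # input rates c_i
    tau = [rng.expovariate(1.0 / dur_mean) for _ in S]   # active periods tau_i

    def U(t):
        # total work input: sum of X_i(tau_i ^ (t - S_i)) over S_i <= t
        return sum(ci * min(ti, t - si)
                   for si, ci, ti in zip(S, c, tau) if si <= t)

    def L(t):
        # number of sources active at time t: S_i <= t < S_i + tau_i
        return sum(1 for si, ti in zip(S, tau) if si <= t < si + ti)

    return U, L, S
```

The resulting path of U is nondecreasing, and L never exceeds the total number of activations, as the representation requires.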
We assume that the active periods are identical in the sense that the (Xi , τi ) are i.i.d.
with distribution ν on DR [0, ∞) × [0, ∞), although the methods we use can be extended to
multiple source classes. Let λ : D_{R^3}[0, ∞) → D_{[0,∞)}[0, ∞) be nonanticipating in the sense
that λ(v, t) = λ(v^t, t) for all t, where v^t(s) = v(t ∧ s), v ∈ D_{R^3}[0, ∞). Assume that N is a
counting process with intensity λ(N, U, L, ·), that is,
    N(t) - \int_0^t λ(N, U, L, s)\, ds
is a martingale with respect to the filtration given by Ft = σ(N (s), U (s), L(s) : s ≤ t). In
particular, λ(N, U, L, t) depends on values of N , U , and L only at times s ≤ t.
We can represent the process (N, U, L) as the solution of a stochastic equation involving
a Poisson random measure. Recall that a Poisson random measure ξ with mean measure
η on a measurable space (E, B) is a random measure on B such that for each Γ ∈ B, ξ(Γ)
has a Poisson distribution with mean η(Γ) and for disjoint Γ1 , Γ2 , . . ., ξ(Γ1 ), ξ(Γ2 ), . . . are
independent.
Let ξ be a Poisson random measure on [0, ∞) × DR [0, ∞) × [0, ∞) with mean measure
η = m × ν (m being Lebesgue measure). ξ can be represented as
    ξ = \sum_{i=1}^∞ δ_{(S_i, X_i, τ_i)}
where 0 < S1 < S2 < · · · are the jump times of a unit Poisson process and the (Xi , τi ) are
as above. Let N , U , and L satisfy the system of equations
    N(t) = ξ(B(t))
    U(t) = \int_{B(t)} u(r ∧ γ_t(s))\, ξ(ds × du × dr)    (1.3)
    L(t) = N(t) − ξ(A(t))

where

    B(t) = {(s, u, r) : s ≤ \int_0^t λ(N, U, L, z)\, dz}
    A(t) = {(s, u, r) : s ≤ \int_0^{t−r} λ(N, U, L, z)\, dz}
    \int_0^{β(t)} λ(N, U, L, s)\, ds = t

and

    γ_t(s) = t − β(s).
Note that β(Si ) is the ith activation time, that is, the ith jump time of N (t); for Si ≤ t,
γt (Si ) is the elapsed time since the ith activation; Ti = β(Si ) + τi is the end of the ith active
period and satisfies τi = γTi (Si ). With these interpretations, Xi (τi ∧ γt (Si )) is the work input
by the ith activated source up to time t (giving the equation for U in (1.3)), and ξ(A(t)) is
the number of active periods that have completed at or before time t (giving the identity
for L in (1.3)). It is easy to see that the solution of the above system exists and is unique
up to σ∞ = inf{t : N (t) = ∞}. This representation is essentially that used in Kurtz (1983)
and Solomon (1987) for population models and by Foley (1982) in the particular case of the
M/G/∞ queue.
Note that η(B(t)) = Λ(t) ≡ \int_0^t λ(N, U, L, s)\, ds, and the fact that

    N(t) − \int_0^t λ(N, U, L, s)\, ds = ξ(B(t)) − η(B(t))

is a martingale follows from the fact that for each t ≥ 0, Λ(t) is a stopping time with respect
to the filtration {H_u} given by

    H_u = σ(ξ(Γ) : Γ ⊂ Γ_u ≡ [0, u] × D_R[0, ∞) × [0, ∞))

and that ξ(Γ_u) − u is an {H_u}-martingale.
Example 1.1 Queue-dependent arrivals. Consider the simple model for workload processing
in which work is processed at rate 1 as long as there is work in the system. The queued work
is given by
Q(t) = Q(0) + U (t) − t + Λ(t)
where Λ denotes the idle time of the processor. Note that Q is uniquely determined by U,
so an arrival intensity λ(Q(t)) is of the form considered above.
Example 1.2 On/off sources. Let L0 be a positive integer and set
λ(N, U, L, t) = λ0 (L0 − L(t)).
This model would correspond to L0 fixed sources, each of which alternates between active and
inactive periods. The active periods have lengths τ as above, and the inactive periods are
exponentially distributed with parameter λ0 .
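The on/off dynamics of Example 1.2 are easy to simulate event by event, since the exponential off periods are memoryless and the total activation intensity is λ_0(L_0 − L(t)). A sketch (Python; exponential active periods are an illustrative choice, the model allows a general distribution):

```python
import heapq
import random

def simulate_onoff(L0, lam0, dur_mean, T, seed=0):
    """Event-driven simulation of L(t) for L0 sources alternating between
    Exp(lam0) inactive periods and active periods of mean dur_mean.
    Activations occur with total intensity lam0 * (L0 - L(t)); the
    memoryless off periods let us redraw the activation clock after
    each event.  Returns the list of (event time, L just after)."""
    rng = random.Random(seed)
    t, ends, events = 0.0, [], []      # ends: heap of active-period end times
    while t < T:
        idle = L0 - len(ends)
        t_act = t + rng.expovariate(lam0 * idle) if idle > 0 else float("inf")
        t_dep = ends[0] if ends else float("inf")
        t = min(t_act, t_dep)
        if t >= T:
            break
        if t_dep <= t_act:
            heapq.heappop(ends)                      # a source turns off
        else:
            heapq.heappush(ends, t + rng.expovariate(1.0 / dur_mean))
        events.append((t, len(ends)))
    return events
```

By construction L(t) stays between 0 and L_0, matching the fixed-source interpretation of the example.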
Section 2 considers scaling limits of the above models that give deterministic “fluid approximations” and corresponding central limit theorems. The proofs of these results extend
ideas developed in Kurtz (1983), Solomon (1987), and Garcia (1995). Section 3 considers
models in which the activation times occur at a constant rate (that is, form a Poisson process). Extending the definition of the process to the time interval (−∞, ∞), the input process
has stationary increments. Under “heavy traffic” scaling, we obtain Gaussian limit processes
having stationary increments. In Section 4, we show that fractional Brownian motion can
be obtained as a limit under the heavy traffic scaling. Section 5 suggests two variations
of the model indicating how a type of control can be introduced and how models with a
fixed number of regenerative sources can be developed. Finally, Section 6 is an appendix
containing several technical lemmas.
Iglehart (1973) considers functional limit theorems for models of the form (1.1) in which
N is a renewal process. His results have substantial overlap with the results of Section 3.
In particular, his proof of relative compactness is similar to that used here. Models of this
form have also been considered by Klüppelberg and Mikosch (1995a,b) in the context of
risk analysis, where X is interpreted as the cumulative payout of an insurance claim. They
assume that N is a Poisson process. Their results also have substantial overlap with the
results of Section 3, although the approach is quite different.
Our motivation comes from the analysis that has been carried out on network traffic data
(see Willinger, Taqqu, Leland, and Wilson (1995)) which indicates that the traffic exhibits
“long-term dependence” that is not consistent with simpler compound-Poisson models of
workload input. In this context, Willinger, Taqqu, Sherman and Wilson (1995) obtain a
limit theorem for an on/off source model similar to the results of Section 3. Their model is
not covered by the class of models considered in detail here (except in the case of exponential
off periods as in Example 1.2). Their model is included in a class of regenerative source
models that can be studied using techniques similar to those developed here. These models
are described briefly in Section 5, but the details of the convergence results will be given
elsewhere.
2 Law of large numbers and central limit theorem.
A variety of limit theorems can be proved for solutions of systems of the above form based on
the law of large numbers and the central limit theorem for the underlying Poisson random
measure ξ. For example, suppose that λ_n(v, t) = nλ(n^{−1}v, t) (where v ∈ D_{R^3}[0, ∞)) and
that ξn is a Poisson random measure with mean measure nm × ν. Let Nn be the activation
process, Un the work input process, and Ln the active source process, and define
    X_n(t) = n^{−1} N_n(t),   Y_n(t) = n^{−1} U_n(t),   Z_n(t) = n^{−1} L_n(t).
Then we can write

    X_n(t) = n^{−1} ξ_n(B_n(t))
    Y_n(t) = n^{−1} \int_{B_n(t)} u(r ∧ γ_t^n(s))\, ξ_n(ds × du × dr)    (2.1)
    Z_n(t) = X_n(t) − n^{−1} ξ_n(A_n(t))

where

    B_n(t) = {(s, u, r) : s ≤ \int_0^t λ(X_n, Y_n, Z_n, z)\, dz}
    A_n(t) = {(s, u, r) : s ≤ \int_0^{t−r} λ(X_n, Y_n, Z_n, z)\, dz}
    \int_0^{β_n(s)} λ(X_n, Y_n, Z_n, z)\, dz = s

and

    γ_t^n(s) = t − β_n(s).
Using the fact that n^{−1} ξ_n(A) → m × ν(A) in probability for each Borel set A and uniformity
results of Stute (1976), under appropriate uniqueness conditions, it is not difficult to show
that (X_n, Y_n, Z_n) converges to the solution of the system

    X(t) = \int_0^t λ(X, Y, Z, s)\, ds
    Y(t) = \int_0^t µ(t − s) λ(X, Y, Z, s)\, ds    (2.2)
    Z(t) = \int_0^t λ(X, Y, Z, s) ν_τ(t − s, ∞)\, ds

where µ(t) = \int_{D_R[0,∞)×[0,∞)} u(r ∧ t)\, ν(du × dr) = E[X_i(τ_i ∧ t)] and ν_τ is the distribution of τ_i,
so that ν_τ(t, ∞) = P{τ_i > t} = ν(D_R[0, ∞) × (t, ∞)). In particular, we have the following
theorem.
Theorem 2.1 Suppose that there exists a finite, nondecreasing function K such that
    |λ(x_1, y_1, z_1, t) − λ(x_2, y_2, z_2, t)| ≤ K(t) \sup_{s≤t}(|x_1(s) − x_2(s)| + |y_1(s) − y_2(s)| + |z_1(s) − z_2(s)|).

Then there exists a unique solution to (2.2) and

    \sup_{s≤t}(|X_n(s) − X(s)| + |Y_n(s) − Y(s)| + |Z_n(s) − Z(s)|) → 0
in probability.
Proof. Observing, for example, that m × ν(B_n(t)) = \int_0^t λ(X_n, Y_n, Z_n, s)\, ds, we can rewrite
(2.1) as

    X_n(t) = n^{−1} ξ̃_n(B_n(t)) + \int_0^t λ(X_n, Y_n, Z_n, s)\, ds
    Y_n(t) = n^{−1} \int_{B_n(t)} u(r ∧ γ_t^n(s))\, ξ̃_n(ds × du × dr)    (2.3)
             + \int_0^t µ(t − s) λ(X_n, Y_n, Z_n, s)\, ds    (2.4)
    Z_n(t) = n^{−1} ξ̃_n(B_n(t) − A_n(t)) + \int_0^t ν_τ(t − s, ∞) λ(X_n, Y_n, Z_n, s)\, ds,

where ξ̃_n(B) = ξ_n(B) − m × ν(B). The terms involving ξ̃_n go to zero uniformly on bounded
time intervals, and the theorem follows by the Lipschitz assumption and Gronwall's inequality.
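To make the fluid system (2.2) concrete, it can be stepped numerically. The sketch below (Python) takes the on/off intensity λ(X, Y, Z, s) = λ_0(L_0 − Z(s)) of Example 1.2 with unit-rate sources and Exp(1) active periods, so that µ(t) = 1 − e^{−t} and ν_τ(t, ∞) = e^{−t}; these concrete choices, and the simple left-point rule, are illustrative assumptions.

```python
import math

def fluid_solve(lam0, L0, T, n):
    """Left-point discretization of the fluid system (2.2) with
    lam(X, Y, Z, s) = lam0 * (L0 - Z(s)), mu(t) = 1 - exp(-t),
    nu_tau(t, inf) = exp(-t)."""
    dt = T / n
    lam = []                      # intensity lam(s_j) on the grid
    X, Y, Z = [0.0], [0.0], [0.0]
    for k in range(n):
        lam.append(lam0 * (L0 - Z[-1]))
        t = (k + 1) * dt
        X.append(X[-1] + lam[-1] * dt)
        # Volterra convolutions Y(t) = int_0^t mu(t-s) lam(s) ds, etc.
        Y.append(dt * sum(l * (1.0 - math.exp(-(t - j * dt)))
                          for j, l in enumerate(lam)))
        Z.append(dt * sum(l * math.exp(-(t - j * dt))
                          for j, l in enumerate(lam)))
    return X, Y, Z
```

With these choices Z solves Z' = λ_0(L_0 − Z) − Z, so Z(t) → λ_0 L_0/(1 + λ_0); for λ_0 = 1 and L_0 = 10 the computed path levels off near 5, which gives a quick sanity check on the discretization.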
To derive the corresponding central limit theorem, define X̃_n(t) = √n(X_n(t) − X(t)),
Ỹ_n(t) = √n(Y_n(t) − Y(t)), Z̃_n(t) = √n(Z_n(t) − Z(t)), and

    Ξ_n(A) = n^{−1/2}(ξ_n(A) − n m × ν(A)).

Note that Ξ_n(A) ⇒ Ξ(A) where Ξ is Gaussian white noise with E[Ξ(A)Ξ(B)] = m × ν(A ∩ B).
Then

    X̃_n(t) = Ξ_n(B_n(t)) + \int_0^t √n(λ(X_n, Y_n, Z_n, s) − λ(X, Y, Z, s))\, ds
    Ỹ_n(t) = \int_{B_n(t)} u(r ∧ γ_t^n(s))\, Ξ_n(ds × du × dr)    (2.5)
             + \int_0^t µ(t − s) √n(λ(X_n, Y_n, Z_n, s) − λ(X, Y, Z, s))\, ds
    Z̃_n(t) = Ξ_n(B_n(t) − A_n(t)) + \int_0^t ν_τ(t − s, ∞) √n(λ(X_n, Y_n, Z_n, s) − λ(X, Y, Z, s))\, ds.
Assume that there exist K_1 and K_2 such that λ(v, t) ≤ K_1 + K_2 \sup_{s≤t} |v(s)| and that λ is
differentiable in an appropriate sense. For example, we can assume that Dλ : D_{R^6}[0, ∞) →
D_R[0, ∞) satisfies

    |λ(v + ṽ, t) − λ(v, t) − Dλ(v, ṽ, t)| ≤ K_3 \sup_{s≤t} |ṽ(s)|^2.    (2.6)

In particular, if λ(v, t) = h(v(t)), then Dλ(v, ṽ, t) = ∇h(v(t)) · ṽ(t).
Theorem 2.2 Suppose that for each T > 0, \int u(r ∧ T)^2 ν(du × dr) < ∞ and there exists
C > 0 such that for 0 ≤ h ≤ 1,

    \sup_{t≤T} \int (u(r ∧ (t + h)) − u(r ∧ t))^2 (1 + u(r ∧ t)^2)\, ν(du × dr) ≤ Ch

and

    \sup_{h≤t≤T} \int (u(r ∧ (t + h)) − u(r ∧ t))^2 (u(r ∧ t) − u(r ∧ (t − h)))^2\, ν(du × dr) ≤ Ch^2.

Suppose that λ is bounded and satisfies (2.6) and that Dλ satisfies the Lipschitz condition

    |Dλ(v_1, ṽ_1, t) − Dλ(v_2, ṽ_2, t)| ≤ K(t) \sup_{s≤t}(|v_1(s) − v_2(s)| + |ṽ_1(s) − ṽ_2(s)|).
Then (X̃_n, Ỹ_n, Z̃_n) ⇒ (X̃, Ỹ, Z̃) where

    X̃(t) = Ξ(B(t)) + \int_0^t Dλ(X, Y, Z, X̃, Ỹ, Z̃, z)\, dz
    Ỹ(t) = \int_{B(t)} u(r ∧ γ_t(s))\, Ξ(ds × du × dr) + \int_0^t µ(t − z) Dλ(X, Y, Z, X̃, Ỹ, Z̃, z)\, dz    (2.7)
    Z̃(t) = Ξ(B(t) − A(t)) + \int_0^t ν_τ(t − z, ∞) Dλ(X, Y, Z, X̃, Ỹ, Z̃, z)\, dz.
Proof. The proof is similar to the arguments in Kurtz (1983) and Solomon (1987). Note that
(2.5) can be viewed as a system of integral equations driven by the random processes R_n^1(t) =
Ξ_n(B_n(t)), R_n^2(t) = \int_{B_n(t)} u(r ∧ γ_t^n(s))\, Ξ_n(ds × du × dr), and R_n^3(t) = Ξ_n(B_n(t) − A_n(t)).
By the Lipschitz assumptions on Dλ and Gronwall's inequality, the result will follow by the
continuous mapping theorem, provided we verify the functional convergence of (R_n^1, R_n^2, R_n^3).

Let E = D_R[0, ∞) × [0, ∞), Λ_n(t) = \int_0^t λ(X_n, Y_n, Z_n, s)\, ds, and Λ(t) = \int_0^t λ(X, Y, Z, s)\, ds.
If we take V(s, u, r) = I_{[0,Λ_n(t))}(s) − I_{[0,Λ(t))}(s), then

    Ξ_n(B_n(t)) − Ξ_n(B(t)) = \int_{[0,t]×E} V(s−, u, r)\, Ξ_n(ds × du × dr)
and by (6.3),

    E[(Ξ_n(B_n(t)) − Ξ_n(B(t)))^2] = E[m × ν(B_n(t) △ B(t))] = E[|Λ_n(t) − Λ(t)|].
We also have

    E[( \int_{B_n(t)} u(r ∧ γ_t^n(s))\, Ξ_n(ds × du × dr) − \int_{B(t)} u(r ∧ γ_t(s))\, Ξ_n(ds × du × dr) )^2]
      = E[ \int ( I_{B_n(t)}(s, u, r) u(r ∧ γ_t^n(s)) − I_{B(t)}(s, u, r) u(r ∧ γ_t(s)) )^2 m × ν(ds × du × dr) ]
      ≤ \int_E u^2(r ∧ t)\, ν(du × dr)\, E[|Λ_n(t) − Λ(t)|]
        + E[ \int_{[0,∞)×E} I_{[0,Λ(t)]}(s) (u(r ∧ γ_t^n(s)) − u(r ∧ γ_t(s)))^2 m × ν(ds × du × dr) ]

with a similar inequality holding for Ξ_n(B_n(t) − A_n(t)) − Ξ_n(B(t) − A(t)). The right sides of
these inequalities go to zero by the law of large numbers and Theorem 2.1, and the convergence
of the finite dimensional distributions of (R_n^1, R_n^2, R_n^3) follows; that is, replace B_n(t), A_n(t),
and γ_t^n by their deterministic limits and apply the ordinary central limit theorem. (See
Theorem 6.1.)
Since the limits are continuous, to obtain relative compactness of the sequence {(R_n^1, R_n^2, R_n^3)},
it is sufficient to check relative compactness of each component. We employ results of
Chenčov (1956) (see Ethier and Kurtz (1986), Theorem 3.8.8) summarized in Theorem 6.2.
We restrict our attention to

    R_n^2(t) = \int_{B_n(t)} u(r ∧ γ_t^n(s))\, Ξ_n(ds × du × dr)
             = \int_{[0,t]×E} I_{[0,Λ_n(t)]}(s) u(r ∧ γ_t^n(s))\, Ξ_n(ds × du × dr).

The estimates in the other cases are similar. We apply Corollary 6.8 with U(t) = R_n^2(t + h) − R_n^2(t),
V(t) = R_n^2(t) − R_n^2(t − h), and C(X, Y, M) = c√h, for appropriate c. Note that
M = Ξ_n and η_k = n^{1−k/2} ν, so

    \int_0^t \int_E ( I_{[0,Λ_n(t+h)]}(s) u(r ∧ γ_{t+h}^n(s)) − I_{[0,Λ_n(t)]}(s) u(r ∧ γ_t^n(s)) )^2 ν(du × dr)\, ds
      ≤ \int_0^t \int_E I_{(Λ_n(t),Λ_n(t+h)]}(s) u^2(r ∧ γ_{t+h}^n(s))\, ν(du × dr)\, ds
        + \int_0^t \int_E I_{[0,Λ_n(t)]}(s) (u(r ∧ γ_t^n(s)) − u(r ∧ γ_{t+h}^n(s)))^2 ν(du × dr)\, ds
      ≤ \int u^2(r ∧ t)\, ν(du × dr)\, (Λ_n(t + h) − Λ_n(t)) + tCh
      ≤ ( \int u^2(r ∧ t)\, ν(du × dr)\, ‖λ‖_∞ + tC ) h
gives (6.6) for i = 2 and j = 0. The calculation for i = 0 and j = 2 is the same. For
i = j = 2, we have
    \int_0^t \int_E ( I_{[0,Λ_n(t+h)]}(s) u(r ∧ γ_{t+h}^n(s)) − I_{[0,Λ_n(t)]}(s) u(r ∧ γ_t^n(s)) )^2
        × ( I_{[0,Λ_n(t)]}(s) u(r ∧ γ_t^n(s)) − I_{[0,Λ_n(t−h)]}(s) u(r ∧ γ_{t−h}^n(s)) )^2 ν(du × dr)\, ds
      ≤ \int_0^t \int_E I_{(Λ_n(t−h),Λ_n(t)]}(s) (u(r ∧ γ_{t+h}^n(s)) − u(r ∧ γ_t^n(s)))^2 u^2(r ∧ γ_t^n(s))\, ν(du × dr)\, ds
        + \int_0^t \int_E I_{[0,Λ_n(t−h)]}(s) (u(r ∧ γ_{t+h}^n(s)) − u(r ∧ γ_t^n(s)))^2 (u(r ∧ γ_t^n(s)) − u(r ∧ γ_{t−h}^n(s)))^2 ν(du × dr)\, ds
      ≤ (Λ_n(t) − Λ_n(t − h)) Ch + tCh^2
      ≤ (C‖λ‖_∞ + tC) h^2.
The estimates for i = 1 and j = 2 and for i = 2 and j = 1 follow as in Remark 6.9, and
Corollary 6.8 gives the inequalities

    E[(R_n^2(h) − R_n^2(0))^2] ≤ c_0 h

and

    E[(R_n^2(t + h) − R_n^2(t))^2 (R_n^2(t) − R_n^2(t − h))^2] ≤ c_0 h^2

for some c_0 > 0 independent of n. Similar estimates hold for R_n^1 and R_n^3, and Theorem 6.2
gives the relative compactness of {R_n^k}. As noted, the asymptotic continuity then ensures
the relative compactness of {(R_n^1, R_n^2, R_n^3)}, and convergence of {(X̃_n, Ỹ_n, Z̃_n)} follows by the
continuous mapping theorem and the uniqueness of the solution of the limiting system (2.7).
3 Models with stationary increments
We now assume that the arrival rate λ is a constant and that the input process has been
running forever. Let ξ be the Poisson random measure on (−∞, ∞) × D_R[0, ∞) × [0, ∞)
with mean measure λ m × ν. For t_1 < t_2, let U(t_1, t_2) be the work input into the system
during the time interval (t_1, t_2]. Then, letting E = D_R[0, ∞) × [0, ∞), we can write

    U(t_1, t_2) = \int_{(−∞,t_1]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s)))\, ξ(ds × du × dr)
                + \int_{(t_1,t_2]×E} u(r ∧ (t_2 − s))\, ξ(ds × du × dr)
where the first term gives the work input into the system in the time interval (t_1, t_2] by
sources that activated at or before time t_1 and the second term is the work input by sources
that activate during the time interval (t_1, t_2]. Note that U is additive in the sense that
U(t_1, t_3) = U(t_1, t_2) + U(t_2, t_3) for t_1 < t_2 < t_3, and the amount of work input into the
system has stationary increments in the sense that the distribution of U(t_1 + t, t_2 + t) does
not depend on t. It follows that E[U(t_1, t_2)] = α(t_2 − t_1) where α = λ \int_E u(r)\, ν(du × dr). We
can simplify notation if we define u(s) = 0 for s < 0, and we have

    U(t_1, t_2) − α(t_2 − t_1) = \int_{(−∞,t_2]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s)))\, ξ̃(ds × du × dr)

where ξ̃(A) = ξ(A) − λ m × ν(A).
In a simple service model with a single server working at “rate” one and employing U as
the work input, queued work would satisfy

    Q(t) = Q(0) + U(0, t) − t + Λ(t)    (3.1)

where Λ measures the idle time of the server, and assuming the server works at maximal
rate whenever there is work to be done,

    Λ(t) = 0 ∨ \sup_{s≤t}(s − U(0, s) − Q(0)).
See, for example, Harrison (1985), Section 2.2. A heavy traffic limit for such a model involves
rescaling the work measurements and the arrival and service rates under the assumption that
the workload arrival and service rates are asymptotically the same. Letting λ and ν (and
hence α) depend on n, (3.1) becomes

    σ_n^{−1} Q_n(t) = σ_n^{−1} Q_n(0) + σ_n^{−1}(U_n(0, t) − α_n t) + σ_n^{−1}(α_n − β_n) t + σ_n^{−1} Λ_n(t),

and σ_n^{−1} Q_n converges in distribution provided σ_n^{−1}(α_n − β_n) converges and σ_n^{−1}(U_n(0, t) − α_n t)
converges in distribution.
Consequently, we consider the behavior of

    V_n(t_1, t_2) = σ_n^{−1}(U_n(t_1, t_2) − α_n(t_2 − t_1))
                 = \int_{(−∞,t_2]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s)))\, Ξ_n(ds × du × dr)

where Ξ_n(A) = σ_n^{−1}(ξ_n(A) − λ_n m × ν_n(A)) and σ_n and λ_n tend to infinity. Note that
Var(Ξ_n(A)) = σ_n^{−2} λ_n m × ν_n(A). We assume that ν_n(D_R[0, ∞) × {0}) = 0 and that

    lim_{n→∞} σ_n^{−2} λ_n ν_n = ζ    (3.2)

in a “somewhat vague” topology on M(D_R[0, ∞) × (0, ∞)), that is,

    σ_n^{−2} λ_n \int_{D_R[0,∞)×(0,∞)} h(u, r)\, ν_n(du × dr) → \int_{D_R[0,∞)×(0,∞)} h(u, r)\, ζ(du × dr)    (3.3)

for all bounded continuous functions h on D_R[0, ∞) × (0, ∞) for which there exists an r_0 > 0
such that h(u, r) = 0 for all r < r_0. In addition, we assume that the measures on E defined by

    γ_n(B) = σ_n^{−2} λ_n \int_B u(r)^2 ∧ 1\, ν_n(du × dr)

converge weakly to the measure

    γ(B) = \int_B u(r)^2 ∧ 1\, ζ(du × dr).
Formally, these assumptions imply that V_n converges in distribution to V defined by

    V(t_1, t_2) = \int_{(−∞,t_2]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s)))\, W(ds × du × dr)    (3.4)

where W is Gaussian white noise on (−∞, ∞) × E corresponding to m × ζ. The variance of
V(t_1, t_2) is

    E[V(t_1, t_2)^2] = \int_{(−∞,t_2]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s)))^2\, ds\, ζ(du × dr)    (3.5)

again recalling our convention that u(s) = 0 for s < 0.
Theorem 3.1 Let ϕ be nonnegative and convex on [0, ∞) with ϕ(0) = 0 and lim_{x→∞} ϕ(x)/x =
∞. Suppose that for each T > 0,

    \sup_n λ_n \int ϕ(σ_n^{−2} u(r ∧ T)^2)\, ν_n(du × dr) < ∞,    (3.6)

    lim_{t→∞} \sup_n σ_n^{−2} λ_n \int_{D_R[0,∞)×[t,∞)} \int_0^r (u(r ∧ (T + s)) − u(r ∧ s))^2\, ds\, ν_n(du × dr) = 0,    (3.7)

and there exists C > 0 such that for 0 ≤ h ≤ 1,

    \sup_n \int_E \int_0^r (u(r ∧ (h + s)) − u(r ∧ s))^2\, ds\, σ_n^{−2} λ_n ν_n(du × dr) ≤ Ch    (3.8)

and

    \sup_n \int_E \int_0^r (u(r ∧ (s + h)) − u(r ∧ s))^2 (u(r ∧ s) − u(r ∧ (s − h)))^2\, ds\, σ_n^{−4} λ_n ν_n(du × dr) ≤ Ch^2.    (3.9)

Suppose that the mapping (s, u, r) → u(r ∧ s) from [0, ∞) × D_R[0, ∞) × [0, ∞) to R is continuous
a.e. m × ζ. Then V_n(0, ·) ⇒ V(0, ·) in the Skorohod topology on D_R[0, ∞).
Remark 3.2 For fixed t_1 and t_2, convergence in distribution of V_n(t_1, t_2) is essentially a
central limit theorem for a “triangular array”. Conditions (3.6) and (3.7) give the “uniform
integrability” that such results require. (See Theorem 6.1.) In particular, if λ_n = n, σ_n = √n,
and ν_n ≡ ν, then \int_E u(r)^2 ν(du × dr) < ∞ implies (3.6) and (3.7) for appropriate choice of
ϕ. If, in addition, the input process has finite second moments and stationary, independent
increments, the remaining conditions follow.
Proof. Let t_1 ≤ t_2. Note that

    E[V_n(0, t_1) V_n(0, t_2)] = \int_E \int_0^r (u(r ∧ (t_2 + s)) − u(r ∧ s))(u(r ∧ (t_1 + s)) − u(r ∧ s))\, ds\, σ_n^{−2} λ_n ν_n(du × dr)
      → \int_E \int_0^r (u(r ∧ (t_2 + s)) − u(r ∧ s))(u(r ∧ (t_1 + s)) − u(r ∧ s))\, ds\, ζ(du × dr),

where the convergence follows from the convergence assumption (3.2), the convergence of
γ_n to γ, the a.e. continuity assumption on the mapping (s, u, r) → u(r ∧ s), and the uniform
integrability assumptions (3.6) and (3.7). The convergence of the finite dimensional
distributions of V_n(0, ·) then follows from Theorem 6.1. The relative compactness of {V_n(0, ·)}
follows from Theorem 6.2 by essentially the same argument as in Theorem 2.2.
4 A representation of fractional Brownian motion.
Fractional Brownian motion is a mean-zero Gaussian process Z with stationary increments
satisfying E[(Z(t_2) − Z(t_1))^2] = c(t_2 − t_1)^α for some c > 0 and 1 < α < 2. (Of course, if
α = 1, Z is an ordinary Brownian motion.) If we assume the source in the previous section
broadcasts at rate 1, that is, X(s) ≡ s, s ≥ 0, and if we let W be Gaussian white noise on
(−∞, ∞) × [0, ∞) corresponding to

    ds ζ(dr) = λ r^{−β}\, ds\, dr,

where 2 < β < 3, then (3.4) becomes

    V(t_1, t_2) = \int_{(−∞,t_1]×[0,∞)} (r ∧ (t_2 − s) − r ∧ (t_1 − s))\, W(ds × dr)
                + \int_{(t_1,t_2]×[0,∞)} (r ∧ (t_2 − s))\, W(ds × dr)    (4.1)
and (3.5) gives

    E[V(t_1, t_2)^2] = \int_{−∞}^{t_1} \int_0^∞ (r ∧ (t_2 − s) − r ∧ (t_1 − s))^2 λ r^{−β}\, dr\, ds
                     + \int_{t_1}^{t_2} \int_0^∞ (r ∧ (t_2 − s))^2 λ r^{−β}\, dr\, ds
      = \int_0^∞ \int_0^∞ (r ∧ (t_2 − t_1 + u) − r ∧ u)^2 λ r^{−β}\, dr\, du + \int_0^{t_2−t_1} \int_0^∞ (r ∧ u)^2 λ r^{−β}\, dr\, du
      = \int_0^∞ \int_u^{t_2−t_1+u} (r − u)^2 λ r^{−β}\, dr\, du + \int_0^∞ \int_{t_2−t_1+u}^∞ (t_2 − t_1)^2 λ r^{−β}\, dr\, du
        + \int_0^{t_2−t_1} \int_0^u λ r^{2−β}\, dr\, du + \int_0^{t_2−t_1} \int_u^∞ u^2 λ r^{−β}\, dr\, du
      = ( λ/((β − 1)(4 − β)) + λ/((β − 1)(β − 2)) + λ/((3 − β)(4 − β)) + λ/((β − 1)(4 − β)) ) (t_2 − t_1)^{4−β},

and hence V is fractional Brownian motion with α = 4 − β.
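The constants in this computation are easy to check numerically. The sketch below (Python, with λ = 1 and the illustrative choice β = 2.5) evaluates the first term, \int_0^∞ \int_u^{t+u} (r − u)^2 r^{−β} dr du, by a midpoint rule in u, using the elementary antiderivative of the inner integrand, and compares it with t^{4−β}/((β − 1)(4 − β)).

```python
def first_term_numeric(beta, T, U=400.0, n=40000):
    """Midpoint-rule evaluation of
        I = int_0^inf int_u^(T+u) (r - u)^2 r^(-beta) dr du
    (truncating the outer integral at u = U), using the closed-form
    antiderivative of (r - u)^2 r^(-beta) in r."""
    def inner(u):
        def H(r):
            # antiderivative of r^(2-beta) - 2u r^(1-beta) + u^2 r^(-beta)
            return (r**(3 - beta) / (3 - beta)
                    - 2.0 * u * r**(2 - beta) / (2 - beta)
                    + u * u * r**(1 - beta) / (1 - beta))
        return H(T + u) - H(u)
    du = U / n
    return sum(inner((j + 0.5) * du) for j in range(n)) * du
```

The other three constants can be verified in the same way, or by Monte Carlo simulation of the input model itself.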
To see that ζ can arise as a limit as in (3.2), let λ_n = n^α with α > β − 1, ν_n(dr) =
(β − 1)n(nr + 1)^{−β} dr, and σ_n = n^{(α−β+1)/2}. It is easy to check that (3.2) holds and that γ_n ⇒ γ.
For (3.6), take ϕ(x) = x^2. The integral on the left of (3.7) is bounded by

    \int_t^∞ T^2 (β − 1) r^{−β+1}\, dr,

which is convergent since β > 2. Since r ∧ (h + s) − r ∧ s ≤ r ∧ h, the left side of (3.8) is
bounded by

    (β − 1) \int_0^h r^{−β+3}\, dr + h^2 (β − 1) \int_h^∞ r^{−β+1}\, dr = ( (β − 1)/(4 − β) + (β − 1)/(β − 2) ) h^{4−β},

which, since 4 − β > 1, implies the relative compactness in C_R[0, ∞) by the Kolmogorov
criterion without checking (3.9).
For a more complex example, let ν be the distribution of (X, τ) where X has stationary,
ergodic increments (for example, X can be a compound Poisson process) and τ is independent
of X and satisfies

    lim_{r→∞} r^{β−1} P{τ > r} = c,

that is, τ is in the domain of attraction of a stable law of index β − 1. Assume that
\sup_{t≥1} E[ϕ(t^{−2} X(t)^2)] < ∞, where ϕ is as in Theorem 3.1, and that for some C > 0,
Var(X(t)) < Ct for 0 ≤ t ≤ 1 and

    E[(X(t_3) − X(t_2))^2 (X(t_2) − X(t_1))^2] ≤ C(t_3 − t_2)(t_2 − t_1).

Let ν_n be the distribution of (n^{−1} X(n·), n^{−1} τ). Note that this transformation corresponds to
rescaling time and workload measurements and that n^{−1} X(nt) → mt a.s. and in L^2. Then
with h as in (3.3) and σ_n^{−2} λ_n = n^{β−1},

    lim_{n→∞} σ_n^{−2} λ_n \int h(u, r)\, ν_n(du × dr) = lim_{n→∞} σ_n^{−2} λ_n E[h(n^{−1} X(n·), n^{−1} τ)]
      = \int_0^∞ h(m·, r) c(β − 1) r^{−β}\, dr,

and the estimates in Theorem 3.1 are easy to check.
5 Variations of the model.

5.1 Admission control.
The estimates used in proving relative compactness in the convergence theorems, Theorems
2.2 and 3.1, depend on the integrand being “predictable” with respect to the filtration
generated by the Poisson random measures. Similar estimates will also hold for more general
predictable integrands. For example, we can associate with each source another variable Zi ,
say with values in [0, 1]. Now let ξ be the Poisson random measure on [0, ∞) × DR [0, ∞) ×
[0, ∞) × [0, 1] with mean measure λ m × ν̂. Let U^R satisfy

    U^R(t_1, t_2) = \int_{(−∞,t_2]×E×[0,1]} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s))) I_{[0,R(s)]}(z)\, ξ(ds × du × dr × dz)
where R(t) ∈ [0, 1] depends on the process up to time t (for example, on the number of
active sources and/or the workload backlog). Note that if the ith (attempted) activation
occurs at time Si , but R(Si ) < Zi , then the source contributes no work to the system. The
introduction of R allows one to model an admission control policy. The value of Zi can
be thought of as specifying the priority level of the source. If R(s) = 1, any source that
attempts to activate at time s is allowed to input work into the system. If R(s) = 0, any
source that attempts to activate at time s is rejected. Otherwise, whether or not a source is
accepted depends on its “priority level” Zi .
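The admission mechanism is just thinning of the activation stream by the uniform marks. A minimal sketch (Python; the concrete cap-type policy used in the test is an assumption for illustration, and here R is evaluated through the current number of admitted, still-active sources):

```python
import random

def admit_with_thinning(lam, T, R_of_count, dur_mean, seed=0):
    """Attempted activations form a rate-lam Poisson process on (0, T];
    the attempt at time S_i carries a mark Z_i ~ U[0,1] and is admitted
    iff Z_i <= R(S_i).  Returns the admitted (start, end) intervals
    (exponential active periods, an illustrative choice)."""
    rng = random.Random(seed)
    accepted = []                      # admitted (start, end) intervals
    t = 0.0
    while True:
        t += rng.expovariate(lam)
        if t > T:
            break
        z = rng.random()               # priority mark Z_i
        active = sum(1 for s, e in accepted if s <= t < e)
        if z <= R_of_count(active):    # admit iff Z_i <= R(S_i)
            accepted.append((t, t + rng.expovariate(1.0 / dur_mean)))
    return accepted
```

With the hard-cap policy R(k) = 1 for k < 5 and 0 otherwise, no source is ever admitted while five others are active, which is easy to verify on the output.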
For simplicity, let ν̂_n = ν_n × m. As before, letting R_n = 1 − β_n^{−1} R̃_n and setting Ξ_n(A) =
σ_n^{−1}(ξ_n(A) − λ_n m × ν̂_n(A)), we have

    V_n^R(t_1, t_2) = σ_n^{−1}(U_n^R(t_1, t_2) − α_n(t_2 − t_1))
      = \int_{(−∞,t_2]×E×[0,1]} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s))) I_{[0,R_n(s)]}(z)\, Ξ_n(ds × du × dr × dz)
      − \int_{(−∞,t_2]×E} (u(r ∧ (t_2 − s)) − u(r ∧ (t_1 − s))) R̃_n(s) (λ_n/(β_n σ_n))\, ν_n(du × dr)\, ds.

For example, under the conditions of Remark 3.2, if β_n = √n and R̃_n is a continuous function
of the normalized number of active sources, a limit theorem analogous to Theorem 3.1 will
hold.
5.2 Regenerative sources.
As noted previously, the models considered above do not include many models of on/off
sources. Suppose that there is a fixed number n of potential sources and that each of the
n sources inputs work over a period of time of length distributed as τ and that during that
period the cumulative work is distributed as X. We assume that at the end of each such
time period, the source immediately begins a new period with the same statistics and that
the behavior in different periods is independent (hence the regenerative terminology). An
on/off source that inputs work at a constant rate c during the on period satisfies
    X(t) = c(t ∧ τ^{(1)}),   τ = τ^{(1)} + τ^{(2)},
where τ (1) is the length of the on (active) portion of the cycle and τ (2) is the length of the
off (inactive) portion of the cycle.
Let {(X_i, τ_i)} be iid with the same distribution as (X, τ). Define ξ_n([0, z] × A) =
\sum_{i=1}^{[nz]} δ_{(X_i,τ_i)}(A). For k = 1, . . . , n, let τ_k^0 denote the first cycle completion (regeneration) time
after time 0 for the kth source, and let X_k^0(t), 0 ≤ t ≤ τ_k^0, denote the cumulative work input
by the kth source in the time interval [0, t]. Let

    U_n(t) = \sum_{k=1}^n X_k^0(t ∧ τ_k^0) + \int_0^t X_{N_n(s)}(τ_{N_n(s)} ∧ (t − s))\, dN_n(s)

and

    N_n(t) = \sum_{k=1}^n I_{[τ_k^0,∞)}(t) + \int_0^t I_{[τ_{N_n(s)},∞)}(t − s)\, dN_n(s),

where N_n(t) counts the total number of cycle completions in the time interval [0, t] (each
source may have completed more than one cycle) and U_n(t) denotes the total work input
during the time interval [0, t]. U_n and N_n can be represented in terms of ξ_n, and results
analogous to Theorems 2.2 and 3.1 can be proved using the central limit theorem for ξ_n.
These results will be discussed elsewhere.
6 Appendix.
The central limit theorem for integrals against Poisson random measures plays an important
role in the limit theorems considered in this paper. If {ξ_n} is a sequence of Poisson random
measures on U with mean measures η^n, ξ̃_n(A) = ξ_n(A) − η^n(A), and f^n is a sequence of
functions on U, then the characteristic function of

    Z^n = \int_U f^n(u)\, ξ̃_n(du)

is

    E[e^{iθZ^n}] = exp{ \int_U (e^{iθf^n(u)} − 1 − iθf^n(u))\, η^n(du) }

and the following version of the Lindeberg–Feller central limit theorem holds.
Theorem 6.1 For k = 1, . . . , d, n = 1, 2, . . ., let f_k^n ∈ L^2(η^n) and

    Z_k^n = \int_U f_k^n(u)\, ξ̃_n(du).

Suppose

    lim_{n→∞} \int_U f_k^n(u) f_l^n(u)\, η^n(du) = σ_{kl}^2

and for each ε > 0 and k,

    lim_{n→∞} \int_U I_{\{|f_k^n(u)| > ε\}} (f_k^n(u))^2\, η^n(du) = 0.

Then Z^n ⇒ Z, where Z is normal with mean zero and covariance matrix σ^2 = ((σ_{kl}^2)).
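Theorem 6.1 can be illustrated by simulation. The sketch below (Python) samples Z = ∫ f dξ − ∫ f dη for a Poisson random measure ξ on [0, 1] with mean measure η = µ · (Lebesgue) and f(u) = u, and checks that the empirical mean and variance of Z match 0 and ∫ f^2 dη = µ/3; the specific f and µ are illustrative choices.

```python
import random

def sample_Z(rng, mu):
    """One sample of Z = sum_i f(U_i) - mu * E[f] with f(u) = u, where
    the U_i are the points of a Poisson random measure on [0,1] with
    mean measure mu * Lebesgue (N ~ Poisson(mu) i.i.d. uniform points)."""
    # Poisson count via unit-rate interarrival times.
    n, t = 0, rng.expovariate(1.0)
    while t < mu:
        n += 1
        t += rng.expovariate(1.0)
    return sum(rng.random() for _ in range(n)) - mu * 0.5

rng = random.Random(0)
mu = 50.0
samples = [sample_Z(rng, mu) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean * mean
```

Here mean should be near 0 and var near µ/3 ≈ 16.7, the variance of the Gaussian limit.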
Because the processes of interest in Sections 2 and 3 involve stochastic convolutions, it
is difficult or impossible to employ standard martingale estimates in order to verify relative
compactness in the proofs of convergence in distribution. Consequently, we exploit classical
results of Chenčov (1956) (see Ethier and Kurtz (1986), Theorem 3.8.8) which involve estimating moments of products of increments of the processes. The following simplified version
of those results is sufficient for our purposes.
Theorem 6.2 Let {X_n} be a sequence of cadlag R^d-valued processes. Suppose that for each
t ≥ 0, {X_n(t)} is relatively compact (in the sense of convergence in distribution), that for
each ε > 0,

    lim_{t→0} \sup_n P{|X_n(t) − X_n(0)| > ε} = 0,    (6.1)

and that for each T > 0 there exist C_T > 0 and θ > 1 such that for 0 ≤ h ≤ 1 and
h ≤ t ≤ T,

    E[(X_n(t + h) − X_n(t))^2 (X_n(t) − X_n(t − h))^2] ≤ C_T h^θ.

Then {X_n} is relatively compact in the sense of convergence in distribution in the Skorohod
topology.
Remark 6.3 Note that to verify the relative compactness of {Xn (t)}, it is sufficient to
verify supn E[|Xn (t)|a ] < ∞ for some a > 0, and to verify (6.1), it is sufficient to show that
limt→0 supn E[|Xn (t) − Xn (0)|a ] = 0.
The necessary moment estimates can be obtained using results on integrals against orthogonal martingale random measures. Let E be a complete, separable metric space with
Borel sets B(E), and let U ⊂ B(E) be closed under finite unions and satisfy A ∈ U and
B ∈ B(E) implies A ∩ B ∈ U. M is an {Ft }-orthogonal martingale random measure
on E if for each A ∈ U, M (A, ·) is an {Ft }-martingale and for disjoint A1 , . . . , Am ∈ U,
M(A_1, ·), . . . , M(A_m, ·) are orthogonal martingales (that is, the product of any two is a martingale)
and \sum_{i=1}^m M(A_i, t) = M(∪_i A_i, t) a.s. If M(A, ·) is square integrable for each A ∈ U,
then there exists a random measure η̃ on E × [0, ∞) such that the Meyer process for M(A, ·)
and M(B, ·) is given by ⟨M(A, ·), M(B, ·)⟩_t = η̃(A ∩ B × [0, t]). For simplicity, we assume
that

    η̃(A × [0, t]) = \int_0^t η(s, A)\, ds,
where η is an M(E)-valued process adapted to {F_t}. Note that if

    M(A, t) = σ^{−1}(N(A × [0, t]) − ν(A)t)

where N is a Poisson random measure on E × [0, ∞) with mean measure ν × m, then M
is an orthogonal martingale random measure with U = {A ∈ B(E) : ν(A) < ∞} and
η̃ = σ^{−2} ν × m, that is, η(s, A) = σ^{−2} ν(A); and if W is Gaussian white noise on E × [0, ∞)
with Var(W(C)) = ν × m(C), then M(A, t) = W(A × [0, t]) is an orthogonal martingale
random measure with U = {A ∈ B(E) : ν(A) < ∞} and η̃ = ν × m. For details on
integration against orthogonal martingale random measures see Walsh (1986) or Kurtz and
Protter (1995).
Lemma 6.4 Let η be a deterministic measure on E and U = {A ∈ B(E) : η(A) < ∞}, and
let M be an orthogonal martingale random measure on E adapted to {F_t} such that M(A, ·) is
square integrable for each A ∈ U and ⟨M(A, ·), M(B, ·)⟩_t = η(A ∩ B)t, A, B ∈ U. For l > 2,
let [M(A, ·)]_t^l = \sum_{s≤t} (M(A, s) − M(A, s−))^l (that is, the sum is over all discontinuities) and
suppose that for 2 < l ≤ l_0, there exist measures η_l such that

    [M(A, ·)]_t^l − η_l(A)t

is a martingale, and define η_2 ≡ η. (Recall that for any A with η(A) < ∞, [M(A, ·)]_t − η(A)t
is a martingale.)

Fix m, 2 ≤ m ≤ l_0, and let X and Y be cadlag, L^2(η)-valued processes such that

    E[ ( \int_0^t \int_E (|X(u, s)|^k + |Y(u, s)|^k)\, η_k(du)\, ds )^{m/k} ] < ∞    (6.2)

for 2 ≤ k ≤ m. Define

    U(t) = \int_{E×[0,t]} X(u, s−)\, M(du × ds),   V(t) = \int_{E×[0,t]} Y(u, s−)\, M(du × ds).

Then for k + l ≤ m,

    E[U^k(t) V^l(t)] = \sum_{i≤k, j≤l, 2≤i+j} \binom{k}{i} \binom{l}{j} E[ \int_{E×[0,t]} U^{k−i}(s−) V^{l−j}(s−) X^i(u, s−) Y^j(u, s−)\, η_{i+j}(du)\, ds ].

In particular,

    E[U(t)V(t)] = E[ \int_{E×[0,t]} X(u, s−) Y(u, s−)\, η(du)\, ds ].    (6.3)
Remark 6.5 If M is a centered Poisson process, then ηk = η for all k ≥ 2. If M (A, ·) is
continuous for all A ∈ U, then ηk = 0 for k > 2.
Proof. U and V are martingales, and Itô's formula implies

    U^k(t) V^l(t) = ∫_0^t k U^{k−1}(s−) V^l(s−) dU(s) + ∫_0^t l U^k(s−) V^{l−1}(s−) dV(s)
        + (1/2) ∫_0^t k(k − 1) U^{k−2}(s−) V^l(s−) d[U]_s
        + ∫_0^t kl U^{k−1}(s−) V^{l−1}(s−) d[U, V]_s
        + (1/2) ∫_0^t l(l − 1) U^k(s−) V^{l−2}(s−) d[V]_s
        + Σ_{i≤k, j≤l, 2<i+j} (k choose i)(l choose j) Σ_{s≤t} U^{k−i}(s−) V^{l−j}(s−) (ΔU(s))^i (ΔV(s))^j,        (6.4)
where ΔU(s) = U(s) − U(s−). For 2 < i + j ≤ m, the orthogonality and the moment
assumptions imply

    Σ_{s≤t} (ΔU(s))^i (ΔV(s))^j − ∫_{E×[0,t]} X^i(u, s−) Y^j(u, s−) η_{i+j}(du) ds

is a martingale, as are

    [U]_t − ∫_{E×[0,t]} X^2(u, s−) η(du) ds,    [U, V]_t − ∫_{E×[0,t]} X(u, s−) Y(u, s−) η(du) ds,

and the analogous centering of [V]_t. The desired identity then follows by taking expectations
of both sides of (6.4). For the moment estimates needed to justify this argument, see Section
2 of Kurtz and Protter (1995).
Corollary 6.6 Let k be even, and let

    C(X, M) = max_{2≤i≤k} E[ ( ∫_0^t | ∫_E X^i(s−, u) η_i(du) | ds )^{k/i} ],

and let K be the (unique) positive solution of

    K = Σ_{i=2}^{k} (k choose i) K^{(k−i)/k}.

Then

    E[U^k(t)] ≤ K C(X, M).                (6.5)
Remark 6.7 For a similar estimate on E[|U (t)|k ] for k odd, see Section 2 of Kurtz and
Protter (1995).
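The constant K of Corollary 6.6 is easy to compute numerically: the map K ↦ Σ_{i=2}^{k} (k choose i) K^{(k−i)/k} is increasing and concave on (0, ∞) with value 1 at K = 0 (only the i = k term survives), so it has a unique positive fixed point and iterating the map from any positive start converges to it. A short sketch (the function name is ours):

```python
from math import comb

def solve_K(k, iters=200):
    # Fixed-point iteration for K = sum_{i=2}^k C(k, i) K^{(k-i)/k}
    # (Corollary 6.6).  The right-hand side is increasing and concave
    # in K with value 1 at K = 0, so the positive fixed point is
    # unique and the iteration converges from any positive start.
    K = 1.0
    for _ in range(iters):
        K = sum(comb(k, i) * K ** ((k - i) / k) for i in range(2, k + 1))
    return K

K4 = solve_K(4)
residual = K4 - sum(comb(4, i) * K4 ** ((4 - i) / 4) for i in range(2, 5))
print(K4, abs(residual) < 1e-9)
```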
Proof. Since |U(t)|^j is a submartingale for j ≥ 1,

    E[U^k(t)] ≤ Σ_{2≤i≤k} (k choose i) E[ |U(t)|^{k−i} ∫_0^t | ∫_E X^i(s, u) η_i(du) | ds ]
        ≤ Σ_{2≤i≤k} (k choose i) E[U^k(t)]^{(k−i)/k} E[ ( ∫_0^t | ∫_E X^i(s, u) η_i(du) | ds )^{k/i} ]^{i/k}
        ≤ Σ_{2≤i≤k} (k choose i) E[U^k(t)]^{(k−i)/k} C(X, M)^{i/k},

where the second inequality is Hölder's inequality with exponents k/(k − i) and k/i. Dividing
both sides of this inequality by C(X, M), we see that x = E[U^k(t)]/C(X, M) satisfies
x ≤ Σ_{i=2}^{k} (k choose i) x^{(k−i)/k}, so x ≤ K and (6.5) must hold.
Corollary 6.8 Suppose in addition to (6.2), for i + j ≥ 2, i, j ≤ 2,

    ∫_0^t | ∫_E X^i(s−, u) Y^j(s−, u) η_{i+j}(du) | ds ≤ C^{i+j}(X, Y, M)    a.s.        (6.6)

Then E[|U(t)|], E[|V(t)|] ≤ C(X, Y, M), E[U^2(t)], E[V^2(t)] ≤ C^2(X, Y, M), and

    E[U^2(t) V^2(t)] ≤ 11 C^4(X, Y, M).
Remark 6.9 Note that, by the Cauchy-Schwarz inequality applied first in u and then in s,

    ∫_0^t | ∫_E X^2(s−, u) Y(s−, u) η_3(du) | ds
        ≤ ∫_0^t ( ∫_E X^2(s−, u) η_3(du) )^{1/2} ( ∫_E X^2(s−, u) Y^2(s−, u) η_3(du) )^{1/2} ds
        ≤ ( ∫_0^t ∫_E X^2(s−, u) η_3(du) ds )^{1/2} ( ∫_0^t ∫_E X^2(s−, u) Y^2(s−, u) η_3(du) ds )^{1/2},

so, for example, in the Poisson case, the estimates for (i, j) = (2, 0), (0, 2) and (2, 2) imply
the estimates for (2, 1) and (1, 2).
Proof. We have

    E[U^2(t)] = E[ ∫_{E×[0,t]} X^2(s−, u) η_2(du) ds ] ≤ C^2(X, Y, M),

and similarly for E[V^2(t)], which in turn implies the inequalities for E[|U(t)|] and E[|V(t)|].
Using the fact that |U(t)|, |V(t)|, U^2(t) and V^2(t) are submartingales and that |U(s)V(s)| ≤
(1/2)(U^2(s) + V^2(s)), we have

    E[U^2(t) V^2(t)]
        = Σ_{i≤2, j≤2, 2≤i+j} (2 choose i)(2 choose j) E[ ∫_{E×[0,t]} U^{2−i}(s−) V^{2−j}(s−) X^i(s−, u) Y^j(s−, u) η_{i+j}(du) ds ]
        ≤ Σ_{j=0}^{2} (2 choose j) E[ |V(t)|^{2−j} ∫_0^t | ∫_E X^2(s−, u) Y^j(s−, u) η_{2+j}(du) | ds ]
        + Σ_{i=0}^{1} (2 choose i) E[ |U(t)|^{2−i} ∫_0^t | ∫_E X^i(s−, u) Y^2(s−, u) η_{i+2}(du) | ds ]
        + 2 E[ (U^2(t) + V^2(t)) ∫_0^t | ∫_E X(s−, u) Y(s−, u) η_2(du) | ds ]
        ≤ 11 C^4(X, Y, M).
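The constant 11 is just the tally of the coefficients produced by the three groups of terms above, bounding each expectation factor by 1, C, or C^2 and each time integral by the appropriate power of C(X, Y, M) via (6.6). A quick bookkeeping check:

```python
from math import comb

# Coefficients behind the bound E[U^2(t) V^2(t)] <= 11 C^4 in
# Corollary 6.8: each group contributes C^4 times the weights below.
i_eq_2_terms = sum(comb(2, j) for j in range(3))  # (2,0), (2,1), (2,2): 1 + 2 + 1
j_eq_2_terms = sum(comb(2, i) for i in range(2))  # (0,2), (1,2): 1 + 2
cross_term = 2 * 2  # (1,1): coefficient 4, |UV| <= (U^2 + V^2)/2, E[U^2] + E[V^2] <= 2 C^2
print(i_eq_2_terms + j_eq_2_terms + cross_term)   # 11
```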
Corollary 6.10 If XY = 0 and

    max{ ∫_0^t | ∫_E X^2(s, u) η(du) | ds, ∫_0^t | ∫_E Y^2(s, u) η(du) | ds } ≤ C̃(X, Y, M),

then

    E[U^2(t) V^2(t)] ≤ 2 C̃^2(X, Y, M).

Proof. Note that E[U^2(t)], E[V^2(t)] ≤ C̃(X, Y, M). By Lemma 6.4 and the argument used
in the proof above,

    E[U^2(t) V^2(t)] = E[ ∫_{E×[0,t]} U^2(s−) Y^2(s−, u) η(du) ds ]
        + E[ ∫_{E×[0,t]} V^2(s−) X^2(s−, u) η(du) ds ]
        ≤ E[U^2(t)] C̃(X, Y, M) + E[V^2(t)] C̃(X, Y, M),

which gives the desired result.
References

Chenčov, N. N. (1956). Weak convergence of stochastic processes whose trajectories have no discontinuities of the second kind and the heuristic approach to the Kolmogorov-Smirnov tests. Theory Probab. Appl. 1, 140-149.

Cutland, Nigel J., Kopp, P. Ekkehard, and Willinger, Walter (1993). Stock price returns and the Joseph effect: A fractional version of the Black-Scholes model. (preprint)

Ethier, Stewart N. and Kurtz, Thomas G. (1986). Markov Processes: Characterization and Convergence. Wiley, New York.

Foley, R. D. (1982). The non-homogeneous M/G/∞ queue. OPSEARCH 19, 40-48.

Garcia, Nancy Lopes (1995). Birth and death processes as projections of higher dimensional Poisson processes. (to appear)

Harrison, J. Michael (1985). Brownian Motion and Stochastic Flow Systems. Wiley, New York.

Iglehart, Donald L. (1973). Weak convergence of compound stochastic process, I. Stochastic Process. Appl. 1, 11-31.

Klüppelberg, C. and Mikosch, T. (1995a). Explosive Poisson shot noise processes with application to risk reserves. Bernoulli (to appear).

Klüppelberg, C. and Mikosch, T. (1995b). Delay in claim settlement and ruin probability approximations. Scand. Act. J. (to appear).

Kurtz, Thomas G. (1983). Gaussian approximations for Markov chains and counting processes. Bulletin of the International Statistical Institute, Proceedings of the 44th Session, Invited Paper Vol. 1, 361-376.

Kurtz, Thomas G. and Protter, Philip (1995). Weak convergence of stochastic integrals and differential equations: Infinite dimensional case. CIME School in Probability. (to appear)

Solomon, Wiremu (1987). Representation and approximation of large population age distributions using Poisson random measures. Stoch. Proc. Appl. 26, 237-255.

Stute, W. (1976). On a generalization of the Glivenko-Cantelli theorem. Z. Wahrsch. Verw. Geb. 35, 167-175.

Walsh, John (1986). An introduction to stochastic partial differential equations. Lect. Notes in Math. 1180, 265-439.

Willinger, Walter; Taqqu, Murad; Leland, Will E. and Wilson, Daniel V. (1995). Self-similarity in high-speed packet traffic: Analysis and modeling of Ethernet traffic measurements. Statistical Science 10, 67-85.

Willinger, Walter; Taqqu, Murad; Sherman, Robert and Wilson, Daniel V. (1995). Self-similarity through high-variability: Statistical analysis of Ethernet LAN traffic at the source level.