Murphree, E. S. "Transient Cumulative Processes."

TRANSIENT CUMULATIVE PROCESSES

by

Emily Stough Murphree

This research was supported in part by the Office of Naval Research under Grant N00014-76-C-0550.

EMILY STOUGH MURPHREE. Transient Cumulative Processes. (Under the direction of WALTER L. SMITH.)
Suppose (X_1, Y_1) has the improper distribution G(x,y) and X_1 is a positive lifetime whose distribution F is such that F(∞) = w < 1. Independent vectors (X_1,Y_1), (X_2,Y_2), ... having distribution G are sampled until an infinite lifetime is chosen, when the renewal process is said to "die." Let N(t) be the number of lifetimes observed by time t and let A(t) = 1 if X_1 < ∞, ..., X_{N(t)+1} < ∞, and A(t) = 0 otherwise. When A(t) = 1, define the Type B cumulative process

    W(t) = Σ_{j=1}^{N(t)} Y_j.

To ensure W(t) is defined, expectations and probabilities of interest are conditioned upon {A(t) = 1}. Two sharply delineated situations arise. Although many standard renewal theoretic results have direct analogs in the first, they do not in the second.

In the first case, there is some σ > 0 making

    ∫∫ e^{σx} G(dx,dy) = 1,

and a new d.f.

    G̃(x,y) = ∫_0^x ∫_{-∞}^y e^{σu} G(du,dv)

is defined. Under general conditions on the product moments of G̃, E[W(t)^k | A(t) = 1] is a kth degree polynomial in t plus a remainder converging to zero at a rate depending upon assumptions about G. These same conditions ensure that the kth conditional cumulant of W(t) is A_k t + B_k plus a similar remainder.

Results about the conditional asymptotic normality of W(t) are also obtained.

In the second case, the tail of F is a function of slow growth; a technical assumption implies

    lim_{t→∞} P{N(t) = n | A(t) = 1} > 0

and

    lim_{t→∞} E[N(t)^k | A(t) = 1] = a_k < ∞.

Several examples of transient cumulative processes are considered, and the assumption that each X_i has the same defect 1 - w is relaxed.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS

NOTATION

1. INTRODUCTION
   1.1 RENEWAL THEORY
   1.2 CUMULATIVE PROCESSES
   1.3 TRANSIENT RENEWAL PROCESSES
   1.4 EXAMPLES OF TRANSIENT RENEWAL AND CUMULATIVE PROCESSES

2. PRELIMINARY RESULTS ON TRANSIENT PROCESSES
   2.1 INTRODUCTION
   2.2 THE TAIL OF H
   2.3 THE RELATIONSHIP BETWEEN H(∞)-H(t) AND F(∞)-F(t)

3. THE SUB-EXPONENTIAL CASE
   3.1 DEFINITIONS AND BACKGROUND
   3.2 PROPERTIES OF TRANSIENT RENEWAL PROCESSES WHEN w^{-1}F(x) ∈ S

4. THE EXPONENTIAL CASE
   4.1 THE PROPER DISTRIBUTION F̃
   4.2 THE ASYMPTOTIC INDEPENDENCE OF ξ_t AND G(t)
   4.3 RESULTS AND EXAMPLES

5. THE CUMULANTS OF TRANSIENT RENEWAL PROCESSES
   5.1 PRELIMINARY RESULTS
   5.2 THE CONDITIONAL MOMENTS
   5.3 THE CONDITIONAL CUMULANTS OF N(t)

6. THE CONDITIONAL CUMULANTS OF CUMULATIVE PROCESSES
   6.1 INTRODUCTION
   6.2 THE CONDITIONAL CUMULANTS OF W(t)
   6.3 THE DERIVATION OF A_n AND B_n

7. PROCESSES WITH VARIABLE DEFECTS

APPENDIX

BIBLIOGRAPHY
ACKNOWLEDGEMENTS

I would like to thank Professor Walter L. Smith for suggesting my thesis topic and directing my research. He has been a lively source of ideas and insights throughout my dissertation struggles. I appreciate Professors Janet Begun, Richard Ehrhardt, David Guilkey, and Douglas Kelly for serving on my thesis committee. I have benefitted from the skillful teaching of several members of the Mathematics and Statistics Departments at the University of North Carolina. I have especially enjoyed Professor Robert Mann's encouragement and friendship during my undergraduate and graduate years in Chapel Hill.

I thank the John Motley Morehead Foundation for generously supporting my graduate education.

Finally, I appreciate Linda Tingen's careful typing of the manuscript.
NOTATION

    L{J(t)} = J*(s) = ∫_0^∞ e^{-st} dJ(t)

    L_0{J(t)} = J°(s) = ∫_0^∞ e^{-st} J(t) dt

Note: All functions J are assumed to be zero on (-∞, 0).

    F(t)*G(t) = ∫_0^t F(t-τ) dG(τ), the Stieltjes convolution of F and G.

    F^{(n)}(t) = ∫_0^t F(t-τ) dF^{(n-1)}(τ), the n-fold Stieltjes convolution of F with itself (n = 2, 3, ...).

    U(t) = P{0 ≤ t}.

    F̃(x) = ∫_0^x e^{σy} dF(y)  and  G̃(x,y) = ∫_0^x ∫_{-∞}^y e^{σu} G(du,dv),

whenever there is a σ making F̃ and G̃ proper distributions. Any expression of the form σ_r, κ_s, σ_rs, γ, etc. has its usual meaning with the understanding that expectations have been taken with respect to F̃ and G̃.

C is the class of distributions F such that F^{(k)} has an absolutely continuous component for some finite k.

L_1 is the class of Lebesgue integrable functions.

S is the class of distributions G on (0,∞) such that G(0+) = 0, G(x) < 1 for 0 < x < ∞, and

    lim_{x→∞} [1 - G^{(2)}(x)] / [1 - G(x)] = 2.
CHAPTER 1

INTRODUCTION

1.1 RENEWAL THEORY

A renewal process is a sequence of independent, identically distributed positive random variables X_1, X_2, .... Let X_i have distribution function F; assume F(0) < 1 and write μ_k = E X_i^k ≤ ∞. Typically, the X's are interpreted as waiting times for some prescribed event to occur. If the event occurs at time t, we begin waiting all over again for the next occurrence, and the process beginning at t is a probabilistic replica of the process which began at time 0.

The X's could be lifetimes of pieces of equipment and the recurrent event the failure of the equipment. At time 0, the first piece is installed; it fails at time X_1 and is instantly replaced. X_n is the lifetime of the nth piece of equipment, while the partial sum S_n = Σ_{i=1}^n X_i is the time of the nth renewal. Let N(t) be the integer k such that S_k ≤ t < S_{k+1}; N(t) is the number of events by time t.

A great deal of research in renewal theory has centered around the renewal function H(t) = EN(t), for knowledge of the properties of H allows one to answer most questions arising about a renewal process. H(t) may be written

    H(t) = EN(t) = E Σ_{n=1}^∞ χ(S_n ≤ t) = Σ_{n=1}^∞ F^{(n)}(t),        (1.1.1)

where F^{(n)}(t) denotes the n-fold convolution of F with itself. H satisfies the integral equation of renewal theory

    H(t) = F(t) + ∫_0^t H(t-τ) dF(τ).        (1.1.2)
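The integral equation (1.1.2) also lends itself to direct numerical solution, which is a convenient way to get a feel for H. The sketch below is an illustrative aside (not from the thesis); it discretizes the convolution with a simple Riemann sum, and the grid step and the exponential example are arbitrary choices. For exponential lifetimes of rate 1 the renewal function is exactly H(t) = t, which serves as a check.

```python
import math

def renewal_function(F, f, T, h):
    """Approximate H on [0, T] from H(t) = F(t) + int_0^t H(t-u) f(u) du,
    using a Riemann sum with step h (a rough sketch, not production code)."""
    n = int(round(T / h))
    H = [0.0] * (n + 1)
    for i in range(1, n + 1):
        conv = h * sum(H[i - j] * f(j * h) for j in range(1, i + 1))
        H[i] = F(i * h) + conv
    return H

# Exponential(1) lifetimes: F(t) = 1 - e^{-t}; here H(t) = t exactly.
F = lambda t: 1.0 - math.exp(-t)
f = lambda t: math.exp(-t)
H = renewal_function(F, f, T=5.0, h=0.01)
```

With h = 0.01 the endpoint H[-1] comes out close to 5, and refining h shrinks the error, in line with (1.1.3) below.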
In the usual literature, F is a proper distribution. That is, lim_{t→∞} F(t) = 1. In this case the asymptotic behavior of N(t) and H(t) is well known.

One of the earliest results concerning H(t) is the Elementary Renewal Theorem (Feller, 1941), which states

    H(t)/t → 1/μ_1,        (1.1.3)

where here and elsewhere we interpret 1/μ_1 as 0 when μ_1 = ∞.

The waiting times {X_i} are d-lattice random variables if

    Σ_{n=1}^∞ P{X_i = nd} = 1        (1.1.4)

and d is the largest number for which (1.1.4) holds. If the X's are lattice random variables, the renewal process is discrete; if not, it is continuous. In all the work which follows, we shall assume the process is continuous. There is usually a parallel theorem for discrete processes to any theorem about continuous processes.
Blackwell's Theorem (1948) is an important generalization of the Elementary Renewal Theorem for continuous processes. It states that for any fixed a > 0,

    H(t+a) - H(t) → a/μ_1  as t → ∞.        (1.1.5)

Blackwell's Theorem was extended by the Key Renewal Theorem, proved by W. L. Smith in 1954 and stated in its most general form in 1961.

Key Renewal Theorem (Smith 1961). If F(x) is a non-lattice distribution and Q(x) is Riemann integrable in every finite interval and satisfies

    Σ_{n=0}^∞ max_{n≤x≤n+1} |Q(x)| < ∞,        (1.1.6)

then

    ∫_0^x Q(x-τ) dH(τ) → (1/μ_1) ∫_0^∞ Q(τ) dτ  as x → ∞.        (1.1.7)

Feller calls functions satisfying the conditions of this theorem "directly Riemann integrable."

The Key Renewal Theorem does not require the finiteness of any moments of F; however, by assuming μ_2 < ∞, Smith (1954) proved

    H(t) - t/μ_1 = μ_2/(2μ_1²) - 1 + o(1)  as t → ∞        (1.1.8)

by applying the theorem to a judiciously chosen function Q. The theorem was instrumental in his proof that

    Var(N(t)) = [(μ_2 - μ_1²)/μ_1³] t + o(t)  as t → ∞.        (1.1.9)

The fact that the first two cumulants of N(t) are asymptotically linear functions in t led Smith to investigate higher order cumulants. Let C be the class of distribution functions F such that for some finite k, F^{(k)} has an absolutely continuous component.
Theorem (Smith 1959). If F ∈ C and μ_{n+p+1} < ∞, p ≥ 0, then there exist constants a_n and b_n such that the nth cumulant of N(t) is given by

    k_n(t) = a_n t + b_n + A(t)/(1+t)^p        (1.1.10)

where A(t) is of bounded total variation, is o(1) as t → ∞, satisfies the condition

    A(t) - A(t-a) = o(t^{-1})  as t → ∞        (1.1.11)

for every fixed a > 0, and when p ≥ 1 has the additional property that A(t)/(1+t) belongs to the class L_1.
In addition to N(t), there are two other random variables associated with time which require our attention. ξ_t = S_{N(t)+1} - t is called the forward delay and represents the waiting time from t until the next event. η_t = t - S_{N(t)} is the backward delay, the elapsed time since the last event. η_t + ξ_t = X_{N(t)+1}, the lifetime spanning time t. When μ_1 < ∞, limiting distributions for η_t and ξ_t can be calculated via the Key Renewal Theorem. One can show that

    P{ξ_t ≤ x} = F(t+x) - ∫_0^t [1 - F(t+x-τ)] dH(τ)

and hence

    lim_{t→∞} P{ξ_t ≤ x} = 1 - (1/μ_1) ∫_x^∞ [1-F(τ)] dτ
                         = (1/μ_1) ∫_0^x [1-F(τ)] dτ ≡ K(x).        (1.1.12)

Similarly, if μ_1 < ∞,

    lim_{t→∞} E e^{-θξ_t} = [1 - F*(θ)]/(θμ_1) = K*(θ).        (1.1.13)

For the backward delay, we find

    P{η_t ≤ x} = U(x-t)[1-F(t)] + ∫_{t-x}^t [1-F(t-τ)] dH(τ)

where U(t) = P{0 ≤ t}. Again assuming μ_1 < ∞,

    lim_{t→∞} P{η_t ≤ x} = (1/μ_1) ∫_0^x [1-F(τ)] dτ = K(x).        (1.1.14)

The forward and backward delays have the same asymptotic distributions.
Just as limiting distributions for the forward and backward delays can be derived, there are asymptotic distributional results about N(t). Using the relationship P{N(t) ≥ n} = P{S_n ≤ t}, Feller (1941) proved that if μ_2 < ∞ and σ² = μ_2 - μ_1², then

    P{ N(t) ≤ t/μ_1 + a σ (t/μ_1³)^{1/2} } → Φ(a)  as t → ∞.        (1.1.15)

Feller's proof dealt specifically with discrete renewal processes, but his argument can be extended to cover the continuous case.
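For a concrete illustration of these moment asymptotics (an aside, not part of the thesis), one can simulate N(t) directly. With exponential(1) lifetimes N(t) is Poisson(t), so its mean and variance are both exactly t, agreeing with (1.1.3) and (1.1.9) where μ_1 = 1, μ_2 = 2, and σ² = 1. The horizon t = 50 and the sample size are arbitrary.

```python
import random
import statistics

random.seed(7)

def renewal_count(t, sample_lifetime):
    """Number of renewals S_n <= t along one sample path."""
    s, n = 0.0, 0
    while True:
        s += sample_lifetime()
        if s > t:
            return n
        n += 1

t = 50.0
counts = [renewal_count(t, lambda: random.expovariate(1.0)) for _ in range(4000)]
mean_n = statistics.mean(counts)
var_n = statistics.variance(counts)
# both should land near t = 50 in this Poisson special case
```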
1.2 CUMULATIVE PROCESSES

Cumulative processes are a natural extension of renewal processes. A cumulative process is a stochastic process built up in a special way from processes of random length called tours. A random tour consists of an ordered pair (X(ω), ξ(t,ω)) defined on a probability space (Ω, ℰ, P), where X is a non-negative random variable and ξ(·,ω) is a function defined for 0 ≤ t ≤ X(ω) taking its values in some abstract space 𝒳. Let F be the distribution of X; assume F(0) < 1 and F(∞) = 1.

Let (Ω_i, ℰ_i, P_i), i ≥ 1, be independent copies of the probability space and let P̃ be the product measure defined on Π_{i=1}^∞ ℰ_i. Suppose θ_1, θ_2, ... is a sequence of tours where each θ_i = (X(ω_i), ξ(t,ω_i)) has domain Ω_i and is chosen in accordance with the probability measure P_i. Writing X_i = X(ω_i), we see that {X_i} is a renewal process. We can construct a cumulative process as follows. We require that W(t) be of bounded variation in every finite t-interval with probability one and that the random variables

    Y_i* = ∫_{S_{i-1}}^{S_i} |dW(t)|

also be iid. Let Y_i = ξ(X_i, ω_i); then

    W(t) = Σ_{i=1}^{N(t)} Y_i + ξ(t - S_{N(t)}, ω_{N(t)+1}).

At time t the process is therefore the sum of N(t) iid random variables plus an extra piece depending upon the tour in progress at time t.

Write κ_r = E Y_i^r and κ_r* = E Y_i*^r when these moments exist. Let σ_x² = Var(X_i), σ_y² = Var(Y_i), and ρ_xy = Cov(X_i, Y_i).
Smith (1955) has shown that

    W(t)/t → κ_1/μ_1  a.s.  if μ_1 < ∞, κ_1* < ∞,        (1.2.1)

and

    EW(t) = (κ_1/μ_1) t + o(t)  if μ_1 < ∞, κ_1* < ∞.        (1.2.2)

Define

    γ = σ_y² - 2ρ_xy (κ_1/μ_1) + σ_x² (κ_1/μ_1)².

Then, if μ_2 < ∞ and κ_2* < ∞,

    Var(W(t)) = (γ/μ_1) t + o(t),        (1.2.3)

    lim_{t→∞} P{ (W(t) - κ_1 N(t)) / (σ_y (t/μ_1)^{1/2}) ≤ a } = Φ(a),        (1.2.4)

and

    lim_{t→∞} P{ (W(t) - κ_1 t/μ_1) / (γt/μ_1)^{1/2} ≤ a } = Φ(a).        (1.2.5)

Equations (1.2.2) and (1.2.3) suggest that the cumulants of W(t) may be asymptotically linear in t just as the cumulants of N(t) are. Smith has shown this for two special kinds of cumulative processes. W(t) is a Type A cumulative process if

    W(t) = Σ_{k=1}^{N(t)+1} Y_k.        (1.2.6)

It is a Type B process if

    W(t) = Σ_{k=1}^{N(t)} Y_k.        (1.2.7)

The advantage of dealing with Type A and B processes rather than the more general process W(t) is that we thereby avoid questions about the behavior of the process between regeneration points.
Theorem (Smith 1979). Suppose W(t) is either a Type A or Type B cumulative process. For integer n ≥ 1, assume E X_1^{n+p+1} < ∞, E|Y_1|^n < ∞, and E X_1^{r+p}|Y_1|^s < ∞ for r ≤ n, s ≤ n, r + s ≤ n + 1, and p ≥ 0. Then the nth cumulant of W(t) is

    k_n(t) = A_n t + B_n + A(t)/(1+t)^p.

The function A(t) is of bounded total variation, is o(1) as t → ∞, and if p ≥ 1, A(t)/(1+t) belongs to the class L_1.
1.3 TRANSIENT RENEWAL PROCESSES

All of the quoted properties of renewal and cumulative processes depend on the assumption that lim_{x→∞} F(x) = 1. When lim_{x→∞} F(x) = w < 1, they no longer hold. We allow X to take the value "∞" with probability 1 - w, and when X_n = ∞ we say the renewal process "dies" at time S_{n-1}. The probability an infinite lifetime eventually occurs is

    P{X_n = ∞, some n} = Σ_{n=1}^∞ P{X_1, ..., X_{n-1} all finite; X_n = ∞}
                       = (1-w) Σ_{n=1}^∞ w^{n-1} = 1.        (1.3.1)

The process dies with probability one and is called a transient renewal process for that reason. If X_1, X_2, ..., X_{N(t)+1} are all finite, we say the process is "alive" at time t. Let

    A(t) = 1 if the process is alive at t, and 0 otherwise,

and q(t) = P{A(t) = 1}. Then

    q(t) = Σ_{n=0}^∞ P{A(t) = 1, N(t) = n}
         = w - F(t) + ∫_0^t [w - F(t-τ)] dH(τ)
         = w - (1-w)H(t).        (1.3.2)

As usual, H(t) = Σ_{n=1}^∞ F^{(n)}(t), and the integral equation H(t) = F(t) + ∫_0^t H(t-τ) dF(τ) is still true despite the transient nature of the process. Note, however, that

    lim_{t→∞} H(t) = Σ_{n=1}^∞ w^n = w/(1-w).        (1.3.3)

We only expect to see w/(1-w) renewals before the process dies.
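The count behind (1.3.3) is geometric: the number of finite lifetimes before the first infinite one satisfies P{N = n} = (1-w)w^n, whose mean is w/(1-w). A quick seeded simulation (an illustrative aside; the value w = 0.7 is arbitrary) confirms this:

```python
import random

random.seed(3)
w = 0.7   # each lifetime is finite with probability w

def renewals_before_death():
    """Count finite lifetimes until the first infinite one appears."""
    n = 0
    while random.random() < w:
        n += 1
    return n

trials = 100_000
mean_n = sum(renewals_before_death() for _ in range(trials)) / trials
# should be near w/(1-w) = 7/3, cf. (1.3.3)
```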
The fact that q(t) → 0 as t → ∞ explains why the usual renewal theoretic results fail to hold when F is defective. W(t) cannot be defined satisfactorily and "asymptotic normality" makes no sense in this case. Because H(t) is not asymptotically linear in t but converges to a finite limit instead, a Key Renewal type result is not possible. Rather than finding

    ∫_0^x Q(x-τ) dH(τ) → (1/μ_1) ∫_0^∞ Q(τ) dτ,

we find

    ∫_0^x Q(x-τ) dH(τ) → 0  as x → ∞

for all functions Q satisfying the conditions of the Key Renewal Theorem. Informally, for large t, dH(t) acts like "0·dt" and not like "μ_1^{-1}·dt."
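A closed-form example (not worked in the text, but useful for orientation): take F(t) = w(1 - e^{-λt}), a defective exponential. Then F*(s) = wλ/(λ+s), so H* = F*/(1-F*) gives a renewal density wλe^{-λ(1-w)t}, hence H(t) = (w/(1-w))(1 - e^{-λ(1-w)t}) and, by (1.3.2), q(t) = w e^{-λ(1-w)t}. The sketch below checks the closed form against a direct discretization of the integral equation; the parameter values are arbitrary.

```python
import math

w, lam = 0.6, 1.0
F = lambda t: w * (1.0 - math.exp(-lam * t))          # defective distribution
f = lambda t: w * lam * math.exp(-lam * t)            # its density, total mass w
H_exact = lambda t: (w / (1 - w)) * (1.0 - math.exp(-lam * (1 - w) * t))

h, n = 0.01, 1000                                      # grid on [0, 10]
H = [0.0] * (n + 1)
for i in range(1, n + 1):
    H[i] = F(i * h) + h * sum(H[i - j] * f(j * h) for j in range(1, i + 1))

err = max(abs(H[i] - H_exact(i * h)) for i in range(n + 1))
q = lambda t: w - (1 - w) * H_exact(t)                 # survival probability (1.3.2)
```

Here q(t) = w e^{-λ(1-w)t} exactly, so q(t+s)/q(t) = e^{-λ(1-w)s}; this distribution belongs to the exponential case studied in Chapter 2.2 and Chapter 4 (indeed ∫ e^{σx} dF(x) = 1 with σ = λ(1-w)).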
1.4 EXAMPLES OF TRANSIENT RENEWAL AND CUMULATIVE PROCESSES

1. Time Until Ruin in Collective Risk Theory

Let the renewal process {X_i} represent times between claims against an insurance company and let Y_j be the value of the jth claim. Assume that in the absence of claims, reserves increase at the constant rate c > 0 and that the company has initial assets u. The risk reserve at time t is

    R(t) = u + ct - Σ_{j=1}^{N(t)} Y_j.

The company is "ruined" at time t if R(t) < 0. Define

    τ* = inf{t : R(t) < 0}.        (1.4.1)

Cramér (1955), von Bahr (1974), Siegmund (1975), and others have studied τ*, the time of ruin. Let

    W(t) = Σ_{j=1}^{N(t)} Y_j - ct;

W(t) is a cumulative process with increments Y_j - cX_j. Ruin can occur only at some regeneration point S_n when W(t) exceeds all of its previous values. Let L_1, L_2, ... be the ladder random variables constructed from the increments Y_j - cX_j. That is,

    L_1 = Σ_{k=1}^{I_1} (Y_k - cX_k)

if I_1 is the smallest integer making L_1 positive. For n = 2, 3, ...,

    L_n = Σ_{k=I_1+···+I_{n-1}+1}^{I_1+···+I_n} (Y_k - cX_k),

where I_n is the smallest integer making L_n positive. L_n is the sum of I_n iid random variables; the L's are iid because the I's are.

Associated with the L's is the renewal process {Z_i}, with Z_1 = Σ_{k=1}^{I_1} X_k and

    Z_n = Σ_{k=I_1+···+I_{n-1}+1}^{I_1+···+I_n} X_k,  n = 2, 3, ....

Let M(t) be the renewal count for the process {Z_i} and let W_Z(t) be the increasing cumulative process built from the ladder variables. Then

    P{τ* ≤ t} = P{ Σ_{k=1}^{M(t)} L_k > u }.        (1.4.2)

If E(Y_1 - cX_1) < 0, Σ_{k=1}^n (Y_k - cX_k) attains a maximum with probability one and then drifts toward -∞. This means {Z_i} and W_Z(t) are transient processes. In Chapter 4 we will suggest a way of coping with (1.4.2); our results and methods are similar to those of von Bahr and Siegmund.
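An illustrative simulation (not from the thesis): in the classical Cramér-Lundberg special case with Poisson(λ) claim arrivals, exponential(μ) claim sizes, and premium rate c > λ/μ, the ultimate ruin probability has the well-known closed form ψ(u) = (λ/(cμ)) e^{-(μ - λ/c)u}. A seeded Monte Carlo over the claim instants agrees with it; the parameter values and the "safe barrier" cutoff are arbitrary choices.

```python
import math
import random

random.seed(11)
lam, mu, c, u = 1.0, 1.0, 2.0, 1.0   # arrival rate, claim rate, premium, capital

def ruined(barrier=60.0):
    """Follow the reserve at claim instants until ruin or a safe barrier."""
    r = u
    while True:
        r += c * random.expovariate(lam)      # premium income between claims
        r -= random.expovariate(mu)           # claim payment
        if r < 0:
            return True
        if r > barrier:                       # drift is positive; treat as survival
            return False

trials = 20_000
est = sum(ruined() for _ in range(trials)) / trials
exact = (lam / (c * mu)) * math.exp(-(mu - lam / c) * u)
```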
2. Waiting Times for Long Gaps

Suppose the renewal process {X_i} is stopped at the first appearance of an interval longer than L which is free of renewals. Let F be the proper distribution of the X's and define W to be the waiting time for such an interval. Let V(t) = P{W ≤ t}; V(t) = 0 for t ≤ L. For t > L, the event {W ≤ t} can occur in one of two ways. Either X_1 itself exceeds L, or X_1 ≤ L and the search for a long gap begins again from the new starting point X_1 and the event {W ≤ t - X_1} occurs. Thus

    V(t) = 1 - F(L) + ∫_0^L V(t-τ) dF(τ),  t > L.        (1.4.3)

We can write

    V(t) = v(t) + ∫_0^t V(t-τ) dG(τ),        (1.4.4)

where G is a defective distribution defined by

    G(t) = F(t) for t ≤ L,  G(t) = F(L) for t > L,

and

    v(t) = 0 for t ≤ L,  v(t) = 1 - F(L) for t > L.

Hence

    V(t) = v(t) + ∫_0^t v(t-τ) dH_G(τ).        (1.4.5)

Feller (1971) uses the notion of waiting for a large gap to model the problem of a pedestrian trying to cross a stream of traffic. Let the renewal process {X_i} be the gaps between successive cars. In order to cross the street with safety, the pedestrian must wait for a gap of more than L seconds, say. The distribution of his waiting time is given by (1.4.5). V(t) may also be interpreted as the probability that the maximum lifetime or partial lifetime observed by time t exceeds L. Lamperti (1961) has studied the problem from this point of view.
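To make (1.4.4) concrete (an illustrative aside; the choices of exponential gaps, L = 1, and the grids are arbitrary), one can compare a direct simulation of W with a numerical solution of the defective renewal equation V = v + G*V. In the simulation, W = S_{n-1} + L where n is the index of the first gap exceeding L.

```python
import math
import random

random.seed(5)
L = 1.0
F = lambda t: 1.0 - math.exp(-t)          # proper gap distribution, rate 1
f = lambda t: math.exp(-t)

def waiting_time():
    """Time at which the first renewal-free interval of length L is complete."""
    s = 0.0
    while True:
        x = random.expovariate(1.0)
        if x > L:
            return s + L
        s += x

t_query = 5.0
mc = sum(waiting_time() <= t_query for _ in range(20_000)) / 20_000

# V(t) = v(t) + int_0^t V(t-u) dG(u), where dG = f on [0, L] and 0 beyond
h = 0.005
n = int(round(t_query / h))
Lsteps = int(round(L / h))
V = [0.0] * (n + 1)
for i in range(1, n + 1):
    t = i * h
    v = 1.0 - F(L) if t > L else 0.0
    conv = h * sum(V[i - j] * f(j * h) for j in range(1, min(i, Lsteps) + 1))
    V[i] = v + conv
diff = abs(V[-1] - mc)
```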
3. Lost Telephone Calls

Suppose that calls arriving at a telephone trunkline form a Poisson process with intensity λ. A call is placed at time 0. The lengths of conversations are independent random variables with common distribution F. Calls arriving during a busy period are lost; we are interested in the waiting time W for the first lost call. We may consider the renewal process {X_i} where X_i is the time between the (i-1)st and ith calls so long as no calls arrive during the busy period caused by the (i-1)st call; otherwise X_i = ∞ and the process stops. The busy periods associated with the process have distribution

    G(x) = ∫_0^x e^{-λτ} dF(τ).        (1.4.6)

The event {X_1 ≤ x} occurs when the call begun at time 0 lasts for some time τ ≤ x and a new call is received in the remaining time x - τ. Hence

    J(x) = P{X_1 ≤ x} = ∫_0^x [1 - e^{-λ(x-τ)}] dG(τ)        (1.4.7)

and

    lim_{x→∞} J(x) = G(∞) = F*(λ) = w < 1.        (1.4.8)

We can study W, the waiting time for the first lost call, by studying the transient process {X_i}. Let q(t) be the probability the X process is alive at time t. Then

    P{W > t} = q(t) = w - (1-w)H_J(t)        (1.4.9)

by (1.3.2). Of course, we know that P{W > t} → 0 as t → ∞, but it would be interesting to know the rate of this convergence. Results obtained in Chapters 3 and 4 address this question. This example is from problem 17, Chapter 6.13 of Feller (1971).
4. Generalized Type II Geiger Counters

Particles arriving at a generalized Type II Geiger counter constitute a Poisson process with intensity λ. The nth particle locks the counter for a time T_n and annuls the after-effects of all preceding particles. Suppose the T's have common distribution B and let Y_i be the length of the ith locked period. Define Z(t) = P{Y_1 > t}. The event {Y_1 > t} can result either because T_1 > t and no particles arrive in (0,t], or because a particle arrives at some time τ < T_1, τ ≤ t, and the locked period begun at τ exceeds t - τ. Thus

    Z(t) = e^{-λt}[1-B(t)] + ∫_0^t λe^{-λτ}[1-B(τ)] Z(t-τ) dτ.        (1.4.10)

We may regard

    F(t) = ∫_0^t [1-B(τ)] λe^{-λτ} dτ        (1.4.11)

as a defective distribution and write

    Z(t) = e^{-λt}[1-B(t)] + ∫_0^t Z(t-τ) dF(τ),        (1.4.12)

which implies

    Z(t) = e^{-λt}[1-B(t)] + ∫_0^t e^{-λ(t-τ)}[1-B(t-τ)] dH_F(τ).        (1.4.13)

To investigate the distribution of the locked periods, we must deal with the transient renewal function H_F(t). Also, if we are interested in the renewal process {X_i} where X_i represents the time between the beginnings of the (i-1)st and ith blocked periods, we see that

    P{X_1 ≤ x} = ∫_0^x [1 - e^{-λ(x-τ)}] d[1 - Z(τ)]        (1.4.14)

because X_1 is the sum of the length of a locked period and the waiting time for the arrival of the first particle after the counter has become unlocked. Study of {X_i} requires coping with F and H_F. This example is from problem 15, Chapter 11.10 of Feller (1971).
5. Age Dependent Branching Processes

A particle born at time 0 lives some random time and then splits into k new particles with probability q_k, k = 0, 1, .... Its lifetime has distribution G; assume G(0+) = 0 and G(∞) = 1. The new particles develop independently of one another and of their time of birth; they have the same lifetime distribution and splitting probabilities as the first one.

Let Z(t) be the number of particles at time t and p_r(t) = P{Z(t) = r}. If Z(t) = 0, then Z(t+s) = 0 for all s ≥ 0; the branching process becomes extinct. Define the generating functions

    h(s) = Σ_{n=0}^∞ q_n s^n  and  F(s,t) = Σ_{r=0}^∞ p_r(t) s^r.        (1.4.15)

Bellman and Harris (1948) initiated the study of age dependent branching processes and derived the equation

    F(s,t) = ∫_0^t h[F(s,t-y)] dG(y) + s[1-G(t)].        (1.4.16)

Levinson (1960) proved that (1.4.16) has a solution F(s,t) which is a generating function for each t and is the unique bounded solution.

To investigate A(t), the expected number of particles at time t, suppose a = h'(1) < ∞. Then

    A(t) = ∂F(s,t)/∂s |_{s=1} = a ∫_0^t A(t-y) dG(y) + 1 - G(t).        (1.4.17)

Bondarenko (1960) has shown that if a ≤ 1, p_0(t) → 1 as t → ∞. That is, the process becomes extinct with probability one. In particular, consider the case a < 1. We may regard aG(y) as a defective distribution with corresponding renewal function H_a(y). Hence

    A(t) = 1 - G(t) + ∫_0^t [1-G(t-τ)] dH_a(τ).        (1.4.18)

Depending upon our assumptions about the tail of G, we can find different asymptotic estimates of A(t) by using the results of Chapters 3 and 4. These estimates agree with those of Vinogradov (1964) and Chistyakov (1964).

To study p_0(t) and u(t) = 1 - p_0(t), we need only notice that p_0(t) = F(0,t). Thus equation (1.4.16) yields

    p_0(t) = ∫_0^t h[p_0(t-y)] dG(y).        (1.4.19)

Vinogradov and Chistyakov have derived asymptotic expressions for u(t) from (1.4.19) by expanding h in a power series around 1 and applying methods like those we will discuss in Chapters 3 and 4.
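A quick numerical illustration of (1.4.17) (an aside, not from the thesis): with exponential(1) lifetimes and mean offspring a < 1, the expected population solves A(t) = e^{-t} + a ∫_0^t A(t-y) e^{-y} dy, whose exact solution is A(t) = e^{-(1-a)t}. The discretized check below uses the arbitrary value a = 0.5.

```python
import math

a = 0.5                      # mean number of offspring, subcritical case
g = lambda y: math.exp(-y)   # lifetime density for G(y) = 1 - e^{-y}

h, n = 0.01, 1000            # grid on [0, 10]
A = [1.0] + [0.0] * n        # A(0) = 1
for i in range(1, n + 1):
    t = i * h
    conv = h * sum(A[i - j] * g(j * h) for j in range(1, i + 1))
    A[i] = math.exp(-t) + a * conv

A_exact = lambda t: math.exp(-(1 - a) * t)
err = max(abs(A[i] - A_exact(i * h)) for i in range(n + 1))
```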
CHAPTER 2

PRELIMINARY RESULTS ON TRANSIENT PROCESSES

2.1 INTRODUCTION

Our goal is to develop a theory for transient renewal and cumulative processes which parallels that for standard renewal and cumulative processes. To ensure that it makes sense to talk about the behavior of a transient process at time t, we will condition expectations and probabilities of interest upon {A(t) = 1}. We will study questions of the following type:

1) Does P{A(t+s) = 1 | A(t) = 1} have a limit as t → ∞?

2) Do P{ξ_t ≤ x | A(t) = 1} and P{η_t ≤ x | A(t) = 1} have limits which are proper distributions in x?

3) Is E[W(t)^k | A(t) = 1] asymptotically a kth degree polynomial in t?

4) Is W(t) conditionally normal?

A brief examination of several of these questions will indicate the types of expressions we must cope with if we are to answer these queries satisfactorily. Notice that

    P{A(t+s) = 1 | A(t) = 1} = q(t+s)/q(t) = [H(∞) - H(t+s)] / [H(∞) - H(t)].

It is not difficult to derive the conditional distribution of the backward delay. We have that

    P{η_t ≤ x, A(t) = 1} = Σ_{n=0}^∞ P{η_t ≤ x, A(t) = 1, N(t) = n}
        = U(x-t)[w-F(t)] + Σ_{n=1}^∞ ∫_{t-x}^t [w-F(t-τ)] dF^{(n)}(τ)
        = U(x-t)[w-F(t)] + ∫_{t-x}^t [w-F(t-τ)] dH(τ).

Therefore

    P{η_t ≤ x | A(t) = 1} = { U(x-t)[w-F(t)] + ∫_{t-x}^t [w-F(t-τ)] dH(τ) } / { (1-w)[H(∞)-H(t)] }.        (2.1.1)

Taking W(t) = N(t) and k = 1 in question 3 leads us to investigate E[N(t) | A(t) = 1]. But

    E N(t)A(t) = E Σ_{n=1}^∞ χ(S_n ≤ t, A(t) = 1) = ∫_0^t q(t-τ) dH(τ).

Hence

    E[N(t) | A(t) = 1] = ∫_0^t [H(∞)-H(t-τ)] dH(τ) / [H(∞)-H(t)].        (2.1.2)

These examples demonstrate that to answer the questions posed we must study F(∞)-F(t), H(∞)-H(t), and their relationship.
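As a check on (2.1.2), consider again the defective exponential F(t) = w(1 - e^{-λt}) as a deliberately simple test case (an aside, not from the thesis). There q(t) = w e^{-σt} and dH(t) = wλ e^{-σt} dt with σ = λ(1-w), so ∫_0^t q(t-u) dH(u) = w²λ t e^{-σt} and E[N(t) | A(t) = 1] = wλt exactly: conditioned on survival, renewals keep arriving at a linear rate, as one hopes for in the exponential case.

```python
import math

w, lam = 0.6, 1.0
sig = lam * (1 - w)
q = lambda t: w * math.exp(-sig * t)              # survival probability
h_dens = lambda t: w * lam * math.exp(-sig * t)   # renewal density dH/dt

def cond_mean_n(t, n=4000):
    """Midpoint-rule evaluation of (2.1.2): int_0^t q(t-u) dH(u) / q(t)."""
    step = t / n
    total = sum(q(t - (j + 0.5) * step) * h_dens((j + 0.5) * step)
                for j in range(n)) * step
    return total / q(t)

value = cond_mean_n(4.0)     # should equal w * lam * t = 2.4
```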
2.2 THE TAIL OF H

We begin with a helpful lemma.

Lemma 2.1. q(t+s) ≥ q(t)q(s) for all t, s > 0.

Proof: Let G_t(u) = P{ξ_t ≤ u, A(t) = 1}. Then

    q(t+s) = P{A(t+s) = 1} = P{A(t+s) = 1, ξ_t ≤ s} + P{A(t+s) = 1, ξ_t > s}
           = ∫_0^s q(s-u) dG_t(u) + P{A(t+s) = 1, ξ_t > s}
           ≥ q(s)[G_t(s) + P{A(t) = 1, ξ_t > s}] = q(s)q(t).  □

It is well known (see, for example, Chapter 1 of Galambos and Kotz (1978)) that if

    ψ(s) = lim_{t→∞} q(t+s)/q(t)

exists, then either

    (i) ψ(s) = e^{-σs} for some σ > 0, or
    (ii) ψ(s) is either 0 or 1 for all s > 0.

In fact, Lemma 2.1 shows that in case (ii), ψ(s) ≡ 1 because ψ(s) ≥ q(s) > 0.
If lim_{t→∞} f(t+s)/f(t) = 1 for all fixed s, f is called a function of moderate growth. All such functions have the property that

    f(t) ~ k exp( ∫_0^t a(u) du ),        (2.2.1)

where a(u) → 0 as u → ∞. See Smith (1972).

Hence if ψ(s) exists, either H(∞)-H(t) or e^{σt}[H(∞)-H(t)] is a function of moderate growth. When case (i) obtains, if we fix ε > 0, there is a T(ε) such that

    e^{σt}[H(∞)-H(t)] < e^{εt}  for all t ≥ T.        (2.2.2)

Thus if we choose any c < σ and fix ε < σ - c, then e^{ct}[H(∞)-H(t)] → 0 as t → ∞. This ensures that both H and F have moments of all orders.

We shall see in Chapter 4 that σ is an important constant and

    ∫_0^∞ e^{σx} dF(x) ≤ 1.

If the equality holds, we can salvage many standard renewal theoretic results by conditioning arguments. However, if the inequality is strict, much less can be proved. But regardless of whether F*(-σ) = 1 or F*(-σ) < 1, we have the following lemma.
Lemma 2.2. Suppose φ is an integrable function on [0,x]. If

    q(t+s)/q(t) → e^{-σs},

then

    lim_{t→∞} ∫_{t-x}^t φ(t-τ) dH(τ) / q(t) = (σ/(1-w)) ∫_0^x e^{σy} φ(y) dy.        (2.2.3)

Proof: Establish a partition of [t-x, t]: t-x = y_0 < y_1 < ··· < y_N = t, with the y_j at fixed distances from t, and let

    φ_j = min_{y_j ≤ τ ≤ y_{j+1}} φ(t-τ)  and  Φ_j = max_{y_j ≤ τ ≤ y_{j+1}} φ(t-τ).

Then

    Σ_{j=0}^{N-1} φ_j [H(y_{j+1})-H(y_j)]/q(t) ≤ ∫_{t-x}^t φ(t-τ) dH(τ)/q(t) ≤ Σ_{j=0}^{N-1} Φ_j [H(y_{j+1})-H(y_j)]/q(t),

and, since q(t) = (1-w)[H(∞)-H(t)],

    [H(y_{j+1})-H(y_j)]/q(t) = { [H(∞)-H(y_j)] - [H(∞)-H(y_{j+1})] } / { (1-w)[H(∞)-H(t)] }.

As t → ∞,

    [H(∞)-H(y_j)] / [H(∞)-H(t)] → e^{σ(t-y_j)}  and  [H(∞)-H(y_{j+1})] / [H(∞)-H(y_j)] → e^{-σ(y_{j+1}-y_j)},

so

    [H(y_{j+1})-H(y_j)]/q(t) → [1 - e^{-σ(y_{j+1}-y_j)}] e^{σ(t-y_j)} / (1-w);

this convergence is uniform since t-x ≤ y_j ≤ t. Thus for any ε > 0 we can choose t large enough that

    (1/(1-w)) Σ_{j=0}^{N-1} φ_j { [1-e^{-σ(y_{j+1}-y_j)}] e^{σ(t-y_j)} - ε }
        ≤ ∫_{t-x}^t φ(t-τ) dH(τ)/q(t)
        ≤ (1/(1-w)) Σ_{j=0}^{N-1} Φ_j { [1-e^{-σ(y_{j+1}-y_j)}] e^{σ(t-y_j)} + ε }.

Letting the mesh of the partition approach zero and noting that φ(y)e^{σy} is necessarily integrable on [0,x], we conclude

    lim_{t→∞} ∫_{t-x}^t φ(t-τ) dH(τ)/q(t) = (σ/(1-w)) ∫_0^x e^{σy} φ(y) dy.  □
Corollary 2.1. If q(t+s)/q(t) → e^{-σs}, then

    lim_{t→∞} P{η_t ≤ x | A(t) = 1} = (σ/(1-w)) ∫_0^x e^{σy}[w-F(y)] dy ≡ J(x).        (2.2.4)

Proof: From (2.1.1), we have

    P{η_t ≤ x | A(t) = 1} = { U(x-t)[w-F(t)] + ∫_{t-x}^t [w-F(t-τ)] dH(τ) } / q(t).

The result follows directly from Lemma 2.2, since U(x-t) = 0 once t > x.  □

Corollary 2.2. If q(t+s)/q(t) → e^{-σs}, then

    lim_{t→∞} P{ξ_t ≤ x | A(t) = 1} = (σ e^{-σx}/(1-w)) ∫_0^x e^{σy}[1-F(y)] dy ≡ L(x).        (2.2.5)

Proof:

    P{ξ_t ≤ x, A(t) = 1} = Σ_{n=0}^∞ P{ξ_t ≤ x, A(t) = 1, N(t) = n}
        = F(t+x) - F(t) + Σ_{n=1}^∞ ∫_0^t [F(t+x-τ) - F(t-τ)] dF^{(n)}(τ)
        = F(t+x) - F(t) + ∫_0^t [F(t+x-τ) - F(t-τ)] dH(τ)
        = F(t+x) - F(t) + [H(t+x) - F(t+x)] - ∫_t^{t+x} F(t+x-τ) dH(τ) - [H(t) - F(t)]
        = ∫_t^{t+x} [1 - F(t+x-τ)] dH(τ).

Therefore

    lim_{t→∞} P{ξ_t ≤ x | A(t) = 1}
        = lim_{t→∞} { ∫_t^{t+x} [1-F(t+x-τ)] dH(τ) / q(t+x) } · { q(t+x)/q(t) }        (2.2.6)
        = (σ/(1-w)) ∫_0^x e^{σy}[1-F(y)] dy · e^{-σx}.  □

We can rewrite the limiting distribution as

    L(x) = (1/(1-w)) [ 1 - F(x) - e^{-σx} + e^{-σx} ∫_0^x e^{σy} dF(y) ].        (2.2.7)
It is worth noting that J and L are distinct distributions. In proper renewal theory, the forward and backward delays share the same limiting distribution. In the transient setting, however, η_t is asymptotically greater in distribution than ξ_t.

Lemma 2.3. L(x) ≥ J(x) for all x ≥ 0.

Proof: Using the fact that ∫_0^∞ e^{σz} dF(z) ≤ 1 and that e^{σz} ≥ e^{σx} for z ≥ x, we have

    ∫_0^x e^{σz} dF(z) + e^{σx}[w-F(x)] ≤ 1,

and hence

    (1 - e^{-σx}) ∫_0^x e^{σz} dF(z) ≤ 1 - e^{-σx} - e^{σx}[w-F(x)] + [w-F(x)].

Integration by parts in (2.2.4) and (2.2.7) gives

    (1-w)J(x) = e^{σx}[w-F(x)] - w + ∫_0^x e^{σz} dF(z),
    (1-w)L(x) = 1 - F(x) - e^{-σx} + e^{-σx} ∫_0^x e^{σz} dF(z),

so that

    (1-w)[L(x) - J(x)] = (1+w) - F(x) - e^{-σx} - e^{σx}[w-F(x)] - (1-e^{-σx}) ∫_0^x e^{σz} dF(z) ≥ 0

by the inequality above. Hence L(x) ≥ J(x).  □
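For the defective exponential F(t) = w(1 - e^{-λt}) with σ = λ(1-w) (a concrete aside, not from the text), both limits have closed forms: J(x) = 1 - e^{-λwx} and L(x) = 1 - e^{-σx} + e^{-σx} J(x), so L - J = (1 - e^{-σx})(1 - J(x)) ≥ 0, in line with Lemma 2.3. The sketch below evaluates the defining integrals numerically and checks these identities; the parameter values are arbitrary.

```python
import math

w, lam = 0.6, 1.0
sig = lam * (1 - w)

def simpson(g, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0

F = lambda y: w * (1.0 - math.exp(-lam * y))

def J(x):   # limiting backward-delay d.f., (2.2.4)
    return (sig / (1 - w)) * simpson(lambda y: math.exp(sig * y) * (w - F(y)), 0.0, x)

def L(x):   # limiting forward-delay d.f., (2.2.5)
    return (sig * math.exp(-sig * x) / (1 - w)) * simpson(
        lambda y: math.exp(sig * y) * (1.0 - F(y)), 0.0, x)

xs = [0.5, 1.0, 2.0, 5.0]
gap = min(L(x) - J(x) for x in xs)          # should be >= 0 (Lemma 2.3)
err = max(abs(J(x) - (1 - math.exp(-lam * w * x))) for x in xs)
```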
Much of proper renewal theory draws heavily upon the fact that H(t) is asymptotically linear in t. In the transient situation, if H(∞) - H(t) is a function of moderate growth, conditioning on {A(t) = 1} will not help us develop an analogous theory. This is because E[N(t) | A(t) = 1] is not asymptotically linear.

Lemma 2.4. If H(∞) - H(t) is a function of moderate growth, then

    E[N(t) | A(t) = 1] / t → 0.

Proof: From (2.1.2) we have

    E[N(t) | A(t) = 1] = ∫_0^t [q(t-τ)/q(t)] dH(τ).

Fix ε > 0. There exists an N(ε) such that [H(x+1)-H(x)]/[H(∞)-H(x)] < ε for all x ≥ N. By Lemma 2.1, q(t-τ)/q(t) ≤ 1/q(τ). Thus

    E[N(t) | A(t) = 1] = ∫_0^N [q(t-τ)/q(t)] dH(τ) + ∫_N^t [q(t-τ)/q(t)] dH(τ)
                       ≤ [q(t-N)/q(t)] H(N) + ∫_N^t dH(τ)/q(τ).

But

    ∫_N^t dH(τ)/q(τ) ≤ Σ_{j=N}^{[t]} ∫_j^{j+1} dH(τ)/q(τ)
                     ≤ (1/(1-w)) Σ_{j=N}^{[t]} [H(j+1)-H(j)] / [H(∞)-H(j+1)]
                     ≤ εt / ((1-ε)(1-w)),

since [H(∞)-H(j+1)] ≥ (1-ε)[H(∞)-H(j)]. The first term on the right remains bounded as t → ∞, so

    E[N(t) | A(t) = 1] / t → 0

since ε is arbitrary.  □
2.3 THE RELATIONSHIP BETWEEN H(∞)-H(t) AND F(∞)-F(t)

We now turn to the relationship between the moments of F and H and between the rates at which the tails of F and H approach 0. Let μ_k = ∫_0^∞ x^k dF(x) ≤ ∞ and ν_k = ∫_0^∞ x^k dH(x) ≤ ∞.

The integral equation of renewal theory yields a simple relation between the Laplace-Stieltjes transforms of F and H:

    H*(s) = F*(s) / [1 - F*(s)].        (2.3.1)

Lemma 2.5. H and F have the same number of finite moments.

Proof: Suppose ∫_0^∞ x^k dF(x) < ∞. Then (d^k/ds^k) F*(s) exists and is finite for s ≥ 0. Hence

    (d^k/ds^k) H*(s) = Σ_{j=0}^k (k choose j) [(d^j/ds^j) F*(s)] · (d^{k-j}/ds^{k-j}) [1/(1-F*(s))]

exists and is finite for s ≥ 0. (Note that F*(s) ≤ w < 1 for s ≥ 0.) Therefore ν_k = (-1)^k (d^k/ds^k) H*(s) |_{s=0} < ∞. Conversely, since H = F + Σ_{n≥2} F^{(n)}, we have ∫ x^k dF ≤ ∫ x^k dH, so ν_k < ∞ implies μ_k < ∞.  □

From expression (2.3.1), we see that ν_k is a linear combination of terms of the form c μ_{k_1} ··· μ_{k_m} with k_1 + ··· + k_m = k. In fact,

    ν_k = μ_k/(1-w)² + terms involving products of lower order moments of F.

Because of this form, one might expect that if μ_{r+1} = ∞, then

    lim_{x→∞} ∫_0^x y^r [H(∞)-H(y)] dy / ∫_0^x y^r [F(∞)-F(y)] dy = 1/(1-w)².

We will show that this is indeed the case.
We begin with a variation on Feller's (1963) ratio limit theorem.

Theorem 2.1. Let a(x) and b(x) be non-negative and non-increasing functions of x ∈ [0,∞) and define φ(x) = x^k a(x) and ψ(x) = x^k b(x), k ≥ 0. Suppose φ°(s) and ψ°(s) exist for all s > 0, and write

    Φ(x) = ∫_0^x φ(y) dy  and  Ψ(x) = ∫_0^x ψ(y) dy,

with Φ(x) → ∞ and Ψ(x) → ∞ as x → ∞. Then

    Φ(x)/Ψ(x) → α  as x → ∞  (α < ∞)

if and only if

    φ°(s)/ψ°(s) → α  as s ↓ 0.
Proof I: Assume φ°(s)/ψ°(s) → α as s ↓ 0. For c > 0 define

    φ_t^c(x) = e^{-cx} t φ(tx) / φ°(c/t)  and  ψ_t^c(x) = e^{-cx} t ψ(tx) / ψ°(c/t),

each of which integrates to one over [0,∞). Let

    Λ = lim sup_{t→∞} Φ(t)/Ψ(t)

and choose a sequence {t_j} → ∞ with Φ(t_j)/Ψ(t_j) → Λ. By Helly's Selection Theorem there is a subsequence {t_{j_n}} along which

    ∫_0^x φ_{t_{j_n}}^c(y) dy → F_1(x)  and  ∫_0^x ψ_{t_{j_n}}^c(y) dy → F_2(x),

where F_1 and F_2 are non-decreasing right-continuous functions bounded between 0 and 1. The continuity theorem for Laplace transforms then implies

    φ°((c+s)/t_{j_n}) / φ°(c/t_{j_n}) → ∫_0^∞ e^{-sx} dF_1(x)  and  ψ°((c+s)/t_{j_n}) / ψ°(c/t_{j_n}) → ∫_0^∞ e^{-sx} dF_2(x).        (2.3.2)

We want to be sure that the limits in (2.3.2) are strictly positive for all s ≥ 0. Since a is non-increasing,

    ∫_0^x e^{-cy} y^k a(ty) dy ≥ a(tx) ∫_0^x e^{-cy} y^k dy  and  ∫_x^∞ e^{-cy} y^k a(ty) dy ≤ a(tx) ∫_x^∞ e^{-cy} y^k dy,

so that

    ∫_0^x φ_t^c(y) dy ≥ 1 - [∫_x^∞ e^{-cy} y^k dy] / [∫_0^x e^{-cy} y^k dy] = 2 - Γ(k+1) / [c^{k+1} ∫_0^x e^{-cy} y^k dy],

which is strictly positive for sufficiently large x. Therefore F_1(x) > 0 for all large x, and hence ∫ e^{-sx} dF_1(x) > 0 for all s ≥ 0. By exactly the same argument the limit for ψ is strictly positive.

From (2.3.2) we see

    [φ°((c+s)/t_{j_n}) / ψ°((c+s)/t_{j_n})] · [ψ°(c/t_{j_n}) / φ°(c/t_{j_n})] → ∫_0^∞ e^{-sx} dF_1(x) / ∫_0^∞ e^{-sx} dF_2(x).

But by assumption the left side converges to α·(1/α) = 1, so F_1 = F_2. Comparing Φ(t_{j_n}) and Ψ(t_{j_n}) with the transforms φ°(c/t_{j_n}) and ψ°(c/t_{j_n}) through this common limit — using e^{-cz} ≤ e^{-cy} ≤ 1 for 0 ≤ y ≤ z and ψ(zu) ≤ z^k ψ(u) for z ≥ 1 — leads, for any c > 0 and any z > 1, to

    Λ ≤ e^c z^{k+1} α.

But c is arbitrary and z is arbitrarily close to 1, so Λ ≤ α. By reversing the roles of φ and ψ we find lim inf_{t→∞} Φ(t)/Ψ(t) ≥ α, and therefore

    lim_{t→∞} Φ(t)/Ψ(t) = α.
Proof II:
We now assume that $\lim_{x\to\infty}\phi(x)/\psi(x) = \alpha$.  Let
$$A = \lim_{n\to\infty}\frac{\phi^{\circ}(s_n)}{\psi^{\circ}(s_n)}$$
along a sequence $s_n \downarrow 0$, and let $\{t_j\}$, $t_j \to \infty$, be the corresponding scales.  We can select a subsequence $\{t_{i_n}\}$ such that
$$\frac{\phi(x\,t_{i_n})}{\phi(t_{i_n})} \to F_1(x) \quad\text{and}\quad \frac{\psi(x\,t_{i_n})}{\psi(t_{i_n})} \to F_2(x) \quad\text{for } 0 \le x \le 1,$$
because $\dfrac{\phi(xt)}{\phi(t)}$ and $\dfrac{\psi(xt)}{\psi(t)}$, as functions of x, are d.f.'s on $[0,1]$.  Let x be a continuity point of both $F_1$ and $F_2$.  Thus
$$F_1(x) = \lim_{n\to\infty}\frac{\phi(x\,t_{i_n})}{\psi(x\,t_{i_n})}\cdot\frac{\psi(x\,t_{i_n})}{\psi(t_{i_n})}\cdot\frac{\psi(t_{i_n})}{\phi(t_{i_n})} = \alpha\,F_2(x)\,\alpha^{-1} = F_2(x), \tag{2.3.3}$$
and we write $F(x)$ for the common limit.
We want to be certain that $F^{*}(s) > 0$.  Note that
$$\phi\big(\tfrac{x}{2}\big) = \int_0^{x/2} y^k a(y)\,dy = \big(\tfrac{1}{2}\big)^{k+1}\int_0^{x} v^k a\big(\tfrac{v}{2}\big)\,dv \ge \big(\tfrac{1}{2}\big)^{k+1}\int_0^{x} v^k a(v)\,dv = \big(\tfrac{1}{2}\big)^{k+1}\phi(x).$$
Hence
$$F\big(\tfrac{1}{2}\big) = \lim_{n\to\infty}\frac{\phi(t_{i_n}/2)}{\phi(t_{i_n})} \ge \big(\tfrac{1}{2}\big)^{k+1} > 0,$$
and so $F^{*}(s) > 0$.  By the continuity theorem,
$$\frac{\int_0^{\infty} e^{-cx}\,d\phi(x\,t_{i_n})}{\phi(t_{i_n})} \to F^{*}(c) \quad\text{and}\quad \frac{\int_0^{\infty} e^{-cx}\,d\psi(x\,t_{i_n})}{\psi(t_{i_n})} \to F^{*}(c),$$
and their ratio converges to 1.  Thus we see
$$\frac{\int_0^{\infty} e^{-cx}\,t_{i_n}^{k+1}x^k a(x\,t_{i_n})\,dx}{\int_0^{\infty} e^{-cx}\,t_{i_n}^{k+1}x^k b(x\,t_{i_n})\,dx}\cdot\frac{\psi(t_{i_n})}{\phi(t_{i_n})} \to 1. \tag{2.3.4}$$
Consider
$$\int_1^2 e^{-cx}x^k a(x\,t_{i_n})\,dx.$$
Let $x = 2u$.  Then
$$\int_1^2 e^{-cx}x^k a(x\,t_{i_n})\,dx = 2^{k+1}\int_{1/2}^1 e^{-2cu}u^k a(2u\,t_{i_n})\,du.$$
But $e^{-2cu} \le e^{-c/2}e^{-cu}$ for $u \ge \tfrac12$, and $a$ is nonincreasing, so this is
$$\le 2^{k+1}e^{-c/2}\int_{1/2}^1 e^{-cu}u^k a(u\,t_{i_n})\,du \le 2^{k+1}e^{-c/2}\int_0^1 e^{-cu}u^k a(u\,t_{i_n})\,du,$$
and the same argument applies to each dyadic interval $[2^m, 2^{m+1}]$.
Thus
$$\frac{\int_0^{\infty} e^{-cx}\,t_{i_n}^{k+1}x^k a(x\,t_{i_n})\,dx}{\int_0^{\infty} e^{-cx}\,t_{i_n}^{k+1}x^k b(x\,t_{i_n})\,dx} \le \Big\{1 + 2^{k+1}e^{-c/2}\Big[1 + \sum_{m=1}^{\infty}(2^{k+1})^m e^{-(2^m-1)c}\Big]\Big\}\,\frac{\int_0^1 e^{-cx}x^k a(x\,t_{i_n})\,dx}{\int_0^{1} e^{-cx}x^k b(x\,t_{i_n})\,dx}.$$
From (2.3.4), we conclude
$$A \le \Big\{1 + 2^{k+1}e^{-c/2}\Big[1 + \sum_{m=1}^{\infty}(2^{k+1})^m e^{-(2^m-1)c}\Big]\Big\}\alpha.$$
By letting $c \to \infty$, we see $A \le \alpha$.  Reversing the roles of $\phi$ and $\psi$, we will discover
$$\lim_{s\downarrow 0}\frac{\phi^{\circ}(s)}{\psi^{\circ}(s)} \ge \alpha,$$
and the conclusion follows. □
In order to apply Theorem 2.1, we derive a relation between
the tails of H and F.
$$H(t) = F(t) + \int_0^t F(t-\tau)\,dH(\tau)$$
$$\Rightarrow\quad H(\infty)-H(t) = F(\infty)-F(t) + w\big[H(\infty)-H(t)+H(t)\big] - \int_0^t F(t-\tau)\,dH(\tau).$$
Thus
$$(1-w)\big[H(\infty)-H(t)\big] = F(\infty)-F(t) + \int_0^t \big[w-F(t-\tau)\big]\,dH(\tau)$$
and hence
$$(1-w)\int_0^\infty e^{-st}\big[H(\infty)-H(t)\big]\,dt = \int_0^\infty e^{-st}\big[F(\infty)-F(t)\big]\,dt + \int_0^\infty e^{-st}\int_0^t \big[F(\infty)-F(t-\tau)\big]\,dH(\tau)\,dt$$
$$= \int_0^\infty e^{-st}\big[F(\infty)-F(t)\big]\,dt + \int_0^\infty e^{-s\tau}\int_\tau^\infty e^{-s(t-\tau)}\big[F(\infty)-F(t-\tau)\big]\,dt\,dH(\tau).$$
Therefore
$$(1-w)\int_0^\infty e^{-st}\big[H(\infty)-H(t)\big]\,dt = \Big\{\int_0^\infty e^{-st}\big[F(\infty)-F(t)\big]\,dt\Big\}\Big\{1+\int_0^\infty e^{-st}\,dH(t)\Big\} \tag{2.3.5}$$
$$\Rightarrow\quad \lim_{s\downarrow 0}\frac{\int_0^\infty e^{-st}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}\big[F(\infty)-F(t)\big]\,dt} = \frac{1+H(\infty)}{1-w} = \frac{1}{(1-w)^2}. \tag{2.3.6}$$
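Identity (2.3.5) can be verified numerically on a worked example.  The following sketch assumes a hypothetical defective distribution $F(t) = w(1-e^{-t})$, for which the transient renewal function is available in closed form, $H(t) = \frac{w}{1-w}(1-e^{-(1-w)t})$; both sides of (2.3.5) then equal $w/(s+1-w)$.

```python
import math

# Numerical check of (2.3.5), assuming the hypothetical defective
# distribution F(t) = w(1 - e^{-t}), whose renewal function is
# H(t) = w/(1-w) * (1 - e^{-(1-w)t}), so H(inf) = w/(1-w).
w, s = 0.4, 0.7

def tail_F(t): return w * math.exp(-t)                       # F(inf) - F(t)
def tail_H(t): return w / (1 - w) * math.exp(-(1 - w) * t)   # H(inf) - H(t)
def h(t):      return w * math.exp(-(1 - w) * t)             # H'(t)

def integral(f, T=60.0, n=200_000):
    # trapezoidal rule on [0, T]; the integrands decay exponentially
    step = T / n
    total = 0.5 * (f(0.0) + f(T))
    for i in range(1, n):
        total += f(i * step)
    return total * step

lhs = (1 - w) * integral(lambda t: math.exp(-s * t) * tail_H(t))
rhs = integral(lambda t: math.exp(-s * t) * tail_F(t)) * \
      (1 + integral(lambda t: math.exp(-s * t) * h(t)))
print(lhs, rhs)   # both close to w/(s + 1 - w) = 0.4/1.3
```

The agreement of the two sides for any fixed s > 0 is exactly what (2.3.6) exploits as s decreases to 0.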
This is true regardless of whether F and H have finite first moments.  Suppose $\mu_n < \infty$ but $\mu_{n+1} = \infty$ ($n = 0, 1, \ldots$).  Then
$$(1-w)\Big(-\frac{d}{ds}\Big)^n\int_0^\infty e^{-st}\big[H(\infty)-H(t)\big]\,dt = \Big(-\frac{d}{ds}\Big)^n\Big[\Big\{\int_0^\infty e^{-st}\big[F(\infty)-F(t)\big]\,dt\Big\}\Big\{1+\int_0^\infty e^{-st}\,dH(t)\Big\}\Big]$$
by (2.3.5).  Thus
$$\frac{\int_0^\infty t^n e^{-st}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty t^n e^{-st}\big[F(\infty)-F(t)\big]\,dt} = \frac{1+\int_0^\infty e^{-st}\,dH(t)}{1-w} + \sum_{j=0}^{n-1}(\text{cross terms involving lower moments}),$$
and the cross terms remain bounded as $s \downarrow 0$ while the $t^n$ integrals diverge.  Therefore
$$\lim_{s\downarrow 0}\frac{\int_0^\infty t^n e^{-st}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty t^n e^{-st}\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}. \tag{2.3.7}$$
Hence if $\mu_{n+1}$ is the first infinite moment of F ($n = 0, 1, \ldots$), then by (2.3.7) and Theorem 2.1,
$$\lim_{x\to\infty}\frac{\int_0^x y^n\big[H(\infty)-H(y)\big]\,dy}{\int_0^x y^n\big[F(\infty)-F(y)\big]\,dy} = \frac{1}{(1-w)^2}.$$
Lemma 2.6
Suppose $\int_0^\infty t^{n+1}\,dF(t) = \infty$ ($n = 0, 1, \ldots$).  Then
$$\lim_{x\to\infty}\frac{\int_0^x t^m\big[H(\infty)-H(t)\big]\,dt}{\int_0^x t^m\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}$$
for every integer $m \ge n$.

Proof:
From the previous discussion we know that there is an integer $n_0 \le n$ such that
$$\int_0^\infty t^{n_0+1}\,dF(t) = \infty$$
and
$$\lim_{s\downarrow 0}\frac{\int_0^\infty e^{-st}t^{n_0}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}t^{n_0}\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}.$$
As a result,
$$\frac{\int_0^\infty e^{-st}t^{n_0}\big[H(\infty)-H(t)\big]\,dt - \frac{1}{(1-w)^2}\int_0^\infty e^{-st}t^{n_0}\big[F(\infty)-F(t)\big]\,dt}{\int_0^\infty e^{-st}t^{n_0+1}\big[F(\infty)-F(t)\big]\,dt} \to 0 \quad\text{as } s \to 0;$$
i.e., the difference of the $t^{n_0}$ integrals is $o(1)$ relative to the diverging $t^{n_0+1}$ integral.  Differentiating once more and using this estimate, we find
$$\frac{\int_0^\infty e^{-st}t^{n_0+1}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}t^{n_0+1}\big[F(\infty)-F(t)\big]\,dt} = \frac{\int_0^\infty e^{-st}t^{n_0}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}t^{n_0}\big[F(\infty)-F(t)\big]\,dt} + o(1),$$
and therefore
$$\lim_{s\downarrow 0}\frac{\int_0^\infty e^{-st}t^{n_0+1}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}t^{n_0+1}\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}.$$
By induction,
$$\lim_{s\downarrow 0}\frac{\int_0^\infty e^{-st}t^{m}\big[H(\infty)-H(t)\big]\,dt}{\int_0^\infty e^{-st}t^{m}\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}$$
for every integer $m \ge n$.  And now Theorem 2.1 ensures the result. □
Lemma 2.7
$$\liminf_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} \ge \frac{1}{(1-w)^2}.$$

Proof:
$$H(\infty)-H(x) = \sum_{n=1}^\infty\big[w^n - F^{(n)}(x)\big] = \sum_{n=1}^\infty\big[w^n - F(x)^n + F(x)^n - F^{(n)}(x)\big]$$
$$= \frac{w}{1-w} - \frac{F(x)}{1-F(x)} + \frac{F(x)}{1-F(x)} - H(x) \ge \frac{w}{1-w} - \frac{F(x)}{1-F(x)} = \frac{w-F(x)}{(1-w)(1-F(x))},$$
since $F^{(n)}(x) \le F(x)^n$ implies $H(x) \le \dfrac{F(x)}{1-F(x)}$.  Therefore
$$\liminf_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} \ge \lim_{x\to\infty}\frac{1}{(1-w)(1-F(x))} = \frac{1}{(1-w)^2}. \qquad\Box$$
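The inequality chain above can be spot-checked on an example where everything is explicit.  A minimal sketch, assuming the hypothetical defective distribution $F(x) = w(1-e^{-x})$, whose renewal function gives $H(\infty)-H(x) = \frac{w}{1-w}e^{-(1-w)x}$ exactly:

```python
import math

# Spot check of Lemma 2.7's inequality
#   H(inf) - H(x) >= (w - F(x)) / [(1-w)(1-F(x))]
# for the hypothetical defective F(x) = w(1 - e^{-x}), where
# H(inf) - H(x) = w/(1-w) * e^{-(1-w)x} in closed form.
w = 0.4
ratios = []
for x in [0.5, 1.0, 2.0, 5.0, 10.0]:
    tail_H = w / (1 - w) * math.exp(-(1 - w) * x)
    F = w * (1 - math.exp(-x))
    bound = (w - F) / ((1 - w) * (1 - F))
    assert tail_H >= bound
    ratios.append(tail_H / (w - F))   # the ratio appearing in the lemma
print(ratios[-1])   # far above 1/(1-w)^2 = 2.78 for large x
```

In this light-tailed example the ratio in fact diverges, comfortably above the lower bound $1/(1-w)^2$; the sub-exponential case of Chapter 3 is precisely the case where the bound is attained as a limit.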
Lemma 2.8
If $\mu_j = \infty$ for some $j \ge 1$ and the limit exists, then
$$\lim_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} = \frac{1}{(1-w)^2}.$$

Proof:
From Lemma 2.6, we have
$$\lim_{x\to\infty}\frac{\int_0^x t^{j-1}\big[H(\infty)-H(t)\big]\,dt}{\int_0^x t^{j-1}\big[F(\infty)-F(t)\big]\,dt} = \frac{1}{(1-w)^2}.$$
Since both the numerator and the denominator diverge, this can be true only if
$$\lim_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} = \frac{1}{(1-w)^2}. \qquad\Box$$
In the next chapter, we shall see that if
$$\lim_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} = \frac{1}{(1-w)^2},$$
then conditioning on $\{A(t) = 1\}$ has no ultimate effect on the behavior of the renewal process.  Thus this last lemma tells us that if F has an infinite moment and is well enough behaved to ensure
$$\lim_{t\to\infty}\frac{H(\infty)-H(t)}{F(\infty)-F(t)}$$
exists, then it is hopeless to try to develop a theory paralleling standard renewal theory by conditioning on $\{A(t) = 1\}$.
CHAPTER 3
THE SUB-EXPONENTIAL CASE

3.1  DEFINITIONS AND BACKGROUND

We need to know when
$$\lim_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} = \frac{1}{(1-w)^2}$$
and the ramifications of this equality.  Some preliminary definitions are required.

Let G be a proper distribution on $(0,\infty)$ such that $G(0^+) = 0$ and $G(x) < 1$ for $0 < x < \infty$.  G belongs to S, the sub-exponential class of distributions, if
$$\lim_{x\to\infty}\frac{1-G^{(2)}(x)}{1-G(x)} = 2. \tag{3.1.1}$$
The class S was introduced by Chistyakov (1964); he found it useful in studying branching processes.
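The defining ratio (3.1.1) can be estimated numerically.  A minimal sketch, assuming a hypothetical Pareto member of S with $G(x) = 1-(1+x)^{-2}$; the two-fold convolution tail is computed by trapezoidal integration:

```python
# Numerical illustration of the defining property (3.1.1), assuming a
# hypothetical Pareto distribution G(x) = 1 - (1+x)^(-2), a standard
# member of S.  The two-fold convolution tail is
#   1 - G^(2)(x) = 1 - integral_0^x G(x - y) g(y) dy.
a = 2.0

def G(x):
    return 1.0 - (1.0 + x) ** (-a)

def g(x):                      # density of G
    return a * (1.0 + x) ** (-a - 1.0)

def conv_tail(x, n=400_000):
    step = x / n
    total = 0.5 * (G(x) * g(0.0) + G(0.0) * g(x))
    for i in range(1, n):
        y = i * step
        total += G(x - y) * g(y)
    return 1.0 - step * total

x = 200.0
ratio = conv_tail(x) / (1.0 - G(x))
print(ratio)   # close to 2, as (3.1.1) requires
```

At finite x the ratio sits slightly above 2 (here roughly 2.02), converging to 2 as $x \to \infty$.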
Lemma A (Chistyakov)
If $G \in S$, then $\displaystyle\lim_{x\to\infty}\frac{1-G(x+s)}{1-G(x)} = 1$ for all $s < \infty$.

Because $1-G(x)$ is a function of moderate growth,
$$1-G(x) \sim k\,\exp\Big\{-\int_0^x a(u)\,du\Big\},$$
where $a(u) \ge 0$ and $a(u) \to 0$.  Hence
$$e^{st}\big[1-G(t)\big] \to \infty \quad\text{as } t \to \infty$$
for all $s > 0$.  The fact that the tail of G converges to 0 more slowly than any exponential function explains the label "sub-exponential."
Lemma B (Chistyakov)
If $G \in S$, then
$$\lim_{x\to\infty}\frac{1-G^{(k)}(x)}{1-G(x)} = k. \tag{3.1.2}$$

Athreya and Ney (1972) have also used the class S to deal with branching processes and have proved the following useful lemma.

Lemma C (Athreya and Ney)
If $G \in S$, then given any $\varepsilon > 0$, there is a $D_\varepsilon < \infty$ such that
$$\frac{1-G^{(n)}(x)}{1-G(x)} \le D_\varepsilon(1+\varepsilon)^n$$
for all n and x.
Teugels (1975) used these lemmas to prove an important theorem for transient renewal processes.

Theorem A (Teugels)
If $F(\infty) = w < 1$, the following statements are equivalent:
(i) $w^{-1}F \in S$;
(ii) $H(\infty)^{-1}H \in S$;
(iii) $\displaystyle\lim_{x\to\infty}\frac{H(\infty)-H(x)}{F(\infty)-F(x)} = \frac{1}{(1-w)^2}$.
Chistyakov's result on the expected number of particles at time t in an age-dependent branching process is now an easy consequence of Theorem A.  From equation (1.4.18) we have
$$A(t) = 1-G(t) + \int_0^t\big[1-G(t-\tau)\big]\,dH_a(\tau),$$
where $H_a$ is the transient renewal function corresponding to the defective distribution $aG$.  Evaluating the integral, we find
$$\frac{A(t)}{1-G(t)} \to \frac{1}{1-a} \quad\text{as } t \to \infty \tag{3.1.3}$$
if and only if $G \in S$.
We shall see that when $w^{-1}F(x) \in S$, the resulting transient renewal process has remarkable properties.  It is worth noting, therefore, that distributions of this type can be quite simple in form.  For example, $w^{-1}F(x) \in S$ if
(i) $w-F(x) \sim x^{-a}L(x)$, $a \ge 0$ and L is slowly varying;
(ii) $w-F(x) \sim k\,\exp\{-x^{\beta}\}$, $0 < \beta < 1$;
(iii) $w-F(x) \sim k\,\exp\{-x(\log x)^{-\beta}\}$, $\beta > 1$.
However, if $\beta < 1$ in (iii), $w^{-1}F(x) \notin S$.  (See Teugels (1975).)
3.2  PROPERTIES OF TRANSIENT RENEWAL PROCESSES WHEN $w^{-1}F(x) \in S$

Conditioning on $\{A(t) = 1\}$ focuses on only those processes alive at time t.  One feels intuitively that if the process is alive at t, it should be behaving like an ordinary renewal process.  Intuition is far from correct, however, when $w^{-1}F(x)$ is a sub-exponential distribution.  In fact, many limit results are the same regardless of whether one conditions on $\{A(t) = 1\}$.
Lemma 3.1
If $w^{-1}F(x) \in S$, then
$$\lim_{t\to\infty}P\{N(t)=k \mid A(t)=1\} = \lim_{t\to\infty}P\{N(t)=k\} = w^k(1-w).$$

Proof:
$$P\{N(t)=k,\ A(t)=1\} = \int_0^t\big[w-F(t-\tau)\big]\,dF^{(k)}(\tau) = wF^{(k)}(t) - F^{(k+1)}(t). \tag{3.2.1}$$
Thus
$$P\{N(t)=k \mid A(t)=1\} = \frac{wF^{(k)}(t)-F^{(k+1)}(t)}{(1-w)\big[H(\infty)-H(t)\big]} = \frac{w^{k+1}-F^{(k+1)}(t) - w\big[w^k-F^{(k)}(t)\big]}{(1-w)\big[H(\infty)-H(t)\big]}$$
$$= \Big\{\frac{w^{k+1}-F^{(k+1)}(t)}{w-F(t)} - \frac{w\big[w^k-F^{(k)}(t)\big]}{w-F(t)}\Big\}\,\frac{w-F(t)}{(1-w)\big[H(\infty)-H(t)\big]}$$
$$\to \big\{(k+1)w^k - kw^k\big\}(1-w) = w^k(1-w), \tag{3.2.2}$$
by Lemma B and Theorem A.  But also
$$P\{N(t)=k\} = F^{(k)}(t)-F^{(k+1)}(t) \to w^k - w^{k+1} = w^k(1-w). \tag{3.2.3} \qquad\Box$$
If N(t) is the renewal count for a proper renewal process, then $\lim_{t\to\infty}P\{N(t)=k\} = 0$ for all fixed k.  Thus this simple lemma warns us that even those transient processes alive at time t do not behave like proper processes.  The fact that N(t) is not growing large as t increases is reflected in the asymptotic properties of the forward and backward delays and in $X_{N(t)+1}$, the lifetime spanning time t.
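Lemma 3.1's limit can be illustrated by simulation.  A minimal sketch with hypothetical parameters: lifetimes are infinite with probability $1-w$ and otherwise Pareto (so $w^{-1}F \in S$), and $A(t)=1$ means the lifetime spanning t is finite:

```python
import random

# Monte Carlo illustration of Lemma 3.1 (hypothetical parameters):
# each lifetime is infinite with probability 1-w, otherwise Pareto
# with tail (1+x)^(-1.5), so w^{-1}F is sub-exponential.  Conditional
# on A(t) = 1, P{N(t) = k} should approach w^k (1-w).
random.seed(1)
w, a, t, trials = 0.5, 1.5, 30.0, 500_000

def lifetime():
    if random.random() > w:
        return float("inf")                                # process dies
    return (1.0 - random.random()) ** (-1.0 / a) - 1.0     # Pareto draw

alive = []                                 # N(t) on paths with A(t) = 1
for _ in range(trials):
    s, n = 0.0, 0
    while True:
        x = lifetime()
        if s + x > t:                      # this lifetime spans time t
            if x != float("inf"):
                alive.append(n)            # A(t) = 1
            break
        s, n = s + x, n + 1

p0 = alive.count(0) / len(alive)
print(len(alive), p0)   # p0 should be roughly 1 - w = 0.5
```

Even conditional on survival, the most likely outcome is that not a single renewal has occurred by t, with probability near $1-w$: exactly the degenerate behavior the surrounding lemmas describe.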
Lemma 3.2
$$P\{X_{N(t)+1} > t \mid A(t)=1\} \to 1 \quad\text{as } t \to \infty.$$

Proof:
$$P\{X_{N(t)+1} > t,\ A(t)=1\} = P\{N(t)=0,\ A(t)=1\} + \sum_{n=1}^\infty P\{N(t)=n,\ X_{n+1}>t,\ A(t)=1\}$$
$$= w-F(t) + \sum_{n=1}^\infty\int_0^t\big[w-F(t)\big]\,dF^{(n)}(\tau) = \big[w-F(t)\big]\big[1+H(t)\big]. \tag{3.2.4}$$
Hence
$$\lim_{t\to\infty}P\{X_{N(t)+1} > t \mid A(t)=1\} = \lim_{t\to\infty}\Big[\frac{w-F(t)}{H(\infty)-H(t)}\Big]\frac{1+H(t)}{1-w} = (1-w)^2\cdot\frac{1}{(1-w)^2} = 1. \tag{3.2.5} \qquad\Box$$
Therefore, if the process is alive at some distant time t, this
seems to be because one of the lifetimes begun prior to the instant
t is itself longer than t.
Evidently we have not observed a large
number of moderate lifetimes; processes built up of many such
"reasonable" X's would have died before reaching time t.
Given this result about $X_{N(t)+1} = \eta_t + \zeta_t$, it is not surprising that the forward and backward delays have degenerate limiting distributions.  From (2.2.6),
$$\lim_{t\to\infty}P\{\zeta_t \le x \mid A(t)=1\} \le \lim_{t\to\infty}\frac{H(t+x)-H(t)}{(1-w)\big[H(\infty)-H(t)\big]} = 0 \quad\text{for all fixed } x.$$
Similarly, (2.1.1) yields
$$\lim_{t\to\infty}P\{\eta_t \le x \mid A(t)=1\} = \lim_{t\to\infty}\int_{t-x}^{t}\frac{\big[w-F(t-\tau)\big]\,dH(\tau)}{(1-w)\big[H(\infty)-H(t)\big]} \le \frac{w}{1-w}\lim_{t\to\infty}\frac{H(t)-H(t-x)}{H(\infty)-H(t)} = 0$$
for all fixed x.

Before studying $\zeta_t$ and $\eta_t$ more closely, we prove a simple lemma.
Lemma 3.3
Suppose $G \in S$.  Then for any $\varepsilon > 0$ and integer k, there exists $\Delta(\varepsilon,k)$ such that
$$\limsup_{t\to\infty}\int_{\Delta}^{t}\frac{1-G(t-\tau)}{1-G(t)}\,dG^{(k)}(\tau) < \varepsilon. \tag{3.2.6}$$

Proof:
Choose $\Delta(\varepsilon,k)$ such that $G^{(k)}(\Delta) > 1-\varepsilon$.  We know
$$\lim_{t\to\infty}\frac{G^{(k)}(t)-G^{(k+1)}(t)}{1-G(t)} = \lim_{t\to\infty}\int_0^t\frac{1-G(t-\tau)}{1-G(t)}\,dG^{(k)}(\tau) = 1 \tag{3.2.7}$$
and
$$\liminf_{t\to\infty}\int_0^{\Delta}\frac{1-G(t-\tau)}{1-G(t)}\,dG^{(k)}(\tau) \ge G^{(k)}(\Delta) > 1-\varepsilon. \qquad\Box$$
Lemma 3.4
Suppose $w^{-1}F(x) \in S$.  Then
$$\lim_{t\to\infty}P\{\eta_t \le t-c \mid A(t)=1,\ N(t)\ge 1\} = \frac{H(\infty)-H(c)}{H(\infty)}.$$

Proof:
$$P\{\eta_t \le t-c,\ A(t)=1\} = \int_c^t\big[w-F(t-\tau)\big]\,dH(\tau), \tag{3.2.8}$$
while
$$P\{A(t)=1,\ N(t)\ge 1\} = \int_0^t\big[w-F(t-\tau)\big]\,dH(\tau) = F(t)-(1-w)H(t). \tag{3.2.9}$$
Note that
$$\lim_{t\to\infty}\frac{w-(1-w)H(t)}{F(t)-(1-w)H(t)} = \Big\{\lim_{t\to\infty}\Big(1 - \frac{w-F(t)}{w-(1-w)H(t)}\Big)\Big\}^{-1} = \big\{1-(1-w)\big\}^{-1} = w^{-1}. \tag{3.2.10}$$
Thus
$$P\{\eta_t \le t-c \mid A(t)=1,\ N(t)\ge 1\} = \frac{w-(1-w)H(t)}{F(t)-(1-w)H(t)}\int_c^t\frac{\big[w-F(t-\tau)\big]\,dH(\tau)}{(1-w)\big[H(\infty)-H(t)\big]}. \tag{3.2.11}$$
Choose $\varepsilon > 0$.  Then there is some $\Delta(\varepsilon) > c$ such that
$$\limsup_{t\to\infty}\int_{\Delta}^{t}\frac{w-F(t-\tau)}{(1-w)\big[H(\infty)-H(t)\big]}\,dH(\tau) < \varepsilon$$
by Lemma 3.3.
Hence, for large t,
$$\frac{\big[w-F(t-c)\big]\big[H(\Delta)-H(c)\big]}{(1-w)\big[H(\infty)-H(t)\big]} \le \int_c^t\frac{\big[w-F(t-\tau)\big]\,dH(\tau)}{(1-w)\big[H(\infty)-H(t)\big]} \le \frac{\big[w-F(t-\Delta)\big]\big[H(\Delta)-H(c)\big]}{(1-w)\big[H(\infty)-H(t)\big]} + \varepsilon. \tag{3.2.12}$$
But
$$\lim_{t\to\infty}\frac{w-F(t-s)}{(1-w)\big[H(\infty)-H(t)\big]} = 1-w$$
for all fixed s, and $\varepsilon$ is arbitrarily small while $\Delta$ is as large as we like.  Therefore
$$\lim_{t\to\infty}\int_c^t\frac{\big[w-F(t-\tau)\big]\,dH(\tau)}{(1-w)\big[H(\infty)-H(t)\big]} = (1-w)\big[H(\infty)-H(c)\big]. \tag{3.2.13}$$
The conclusion follows from (3.2.10), (3.2.11), and (3.2.13). □
Lemma 3.5
If $w-F(x) \sim x^{-a}L(x)$, then
$$\lim_{t\to\infty}P\{\zeta_t \le \lambda t \mid A(t)=1\} = 1-(1+\lambda)^{-a}.$$

Proof:
By Theorem A, $H(\infty)-H(x) \sim \dfrac{x^{-a}L(x)}{(1-w)^2}$.  Now
$$P\{\zeta_t \le \lambda t \mid A(t)=1\} = \frac{H(t+\lambda t)-H(t)}{H(\infty)-H(t)} + \int_t^{(1+\lambda)t}\frac{\big[w-F(t+\lambda t-y)\big]\,dH(y)}{(1-w)\big[H(\infty)-H(t)\big]}. \tag{3.2.14}$$
Now
$$\lim_{t\to\infty}\frac{H(t+\lambda t)-H(t)}{H(\infty)-H(t)} = 1 - \lim_{t\to\infty}\frac{H(\infty)-H(t+\lambda t)}{H(\infty)-H(t)} = 1-(1+\lambda)^{-a}, \tag{3.2.15}$$
while
$$\lim_{t\to\infty}\int_t^{(1+\lambda)t}\frac{\big[w-F(t+\lambda t-y)\big]\,dH(y)}{(1-w)\big[H(\infty)-H(t)\big]} \le \lim_{t\to\infty}\frac{H(\infty)-H(t+\lambda t)}{(1-w)\big[H(\infty)-H(t)\big]}\int_t^{t+\lambda t}\frac{\big[H(\infty)-H(t+\lambda t-y)\big]\,dH(y)}{H(\infty)-H(t+\lambda t)} = 0$$
by Lemma 3.3. (3.2.16) □
To study the moments of N(t), we begin by finding expressions for these moments.  The factorial polynomial $x_{(n)}$ is defined as
$$x_{(n)} = x(x-1)\cdots(x-n+1); \qquad x_{(0)} = 1 \quad (n = 0, 1, 2, \ldots).$$
Stirling's numbers of the second kind are used to express $x^n$ as a linear combination of factorial powers of x no higher than the nth:
$$x^n = \sum_{k=1}^{n} t_{nk}\,x_{(k)}.$$
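The identity just stated is easy to check by machine.  A short sketch computing the text's $t_{nj}$ (Stirling numbers of the second kind) from their standard recurrence; the helper names are illustrative, not from the text:

```python
# Sanity check of x^n = sum_j S(n,j) x_(j), where S(n,j) are Stirling
# numbers of the second kind (the text's t_{nj}), computed from the
# recurrence S(n,j) = j*S(n-1,j) + S(n-1,j-1).
def stirling2(n, j):
    if n == j == 0:
        return 1
    if n == 0 or j == 0:
        return 0
    return j * stirling2(n - 1, j) + stirling2(n - 1, j - 1)

def falling(x, j):                 # factorial polynomial x_(j)
    p = 1
    for i in range(j):
        p *= x - i
    return p

x, n = 7, 5
lhs = x ** n
rhs = sum(stirling2(n, j) * falling(x, j) for j in range(1, n + 1))
print(lhs, rhs)   # 16807 16807
```

This is exactly the substitution used in the proof of Lemma 3.6 below, with x replaced by the renewal count n.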
Lemma 3.6
a) $\displaystyle EN(t)^k A(t) = \sum_{j=1}^{k} j!\,t_{kj}\big[wH^{(j)}(t)-(1-w)H^{(j+1)}(t)\big]$ and
b) $\displaystyle EN(t)^k = \sum_{j=1}^{k} j!\,t_{kj}\,H^{(j)}(t)$,
where the $t_{kj}$'s are Stirling's numbers of the second kind.

Proof:
a)
$$EN(t)^k A(t) = \sum_{n=1}^\infty n^k\int_0^t\big[w-F(t-\tau)\big]\,dF^{(n)}(\tau) = \sum_{n=1}^\infty n^k\big[wF^{(n)}(t)-F^{(n+1)}(t)\big] \tag{3.2.17}$$
$$\Rightarrow\quad L\{EN(t)^k A(t)\} = \big[w-F^*(s)\big]\sum_{n=1}^\infty n^k F^*(s)^n. \tag{3.2.18}$$
But $n^k = \sum_{j=1}^k t_{kj}\,n_{(j)}$, so we can write
$$L\{EN(t)^k A(t)\} = \big[w-F^*(s)\big]\sum_{j=1}^{k} t_{kj}\sum_{n=j}^{\infty} n_{(j)}F^*(s)^n = \big[w-F^*(s)\big]\sum_{j=1}^{k} t_{kj}\,F^*(s)^j\sum_{n=j}^{\infty} n_{(j)}F^*(s)^{n-j}$$
$$= \big[w-F^*(s)\big]\sum_{j=1}^{k} j!\,t_{kj}\,F^*(s)^j\big[1-F^*(s)\big]^{-(j+1)} = \sum_{j=1}^{k} j!\,t_{kj}\big[w-F^*(s)\big]H^*(s)^j\big[1-F^*(s)\big]^{-1}. \tag{3.2.19}$$
From the equation $H^*(s) = F^*(s)\big[1+H^*(s)\big]$, we see that $\big(1-F^*(s)\big)\big(1+H^*(s)\big) = 1$.  Hence
$$\big[w-F^*(s)\big]\big[1-F^*(s)\big]^{-1} = w-(1-w)H^*(s). \tag{3.2.20}$$
Substituting the expression in (3.2.20) into (3.2.19), we find
$$L\{EN(t)^k A(t)\} = \sum_{j=1}^{k} j!\,t_{kj}\big[w-(1-w)H^*(s)\big]H^*(s)^j,$$
and hence
$$EN(t)^k A(t) = \sum_{j=1}^{k} j!\,t_{kj}\big[wH^{(j)}(t)-(1-w)H^{(j+1)}(t)\big]. \tag{3.2.21}$$
b)
$$EN(t)^k = \sum_{n=1}^\infty n^k\int_0^t\big[1-F(t-\tau)\big]\,dF^{(n)}(\tau) = \sum_{n=1}^\infty n^k\big[F^{(n)}(t)-F^{(n+1)}(t)\big]. \tag{3.2.22}$$
Therefore
$$L\{EN(t)^k\} = \big[1-F^*(s)\big]\sum_{n=1}^\infty n^k F^*(s)^n = \sum_{j=1}^{k} j!\,t_{kj}\,H^*(s)^j \quad\text{from (3.2.19)}.$$
Hence
$$EN(t)^k = \sum_{j=1}^{k} j!\,t_{kj}\,H^{(j)}(t). \tag{3.2.23} \qquad\Box$$
Theorem 3.1
$$\lim_{t\to\infty}E\big[N(t)^k \mid A(t)=1\big] = \lim_{t\to\infty}EN(t)^k = \sum_{j=1}^{k} j!\,t_{kj}\Big(\frac{w}{1-w}\Big)^j$$
if and only if $w^{-1}F(x) \in S$.
Proof:
(i) Suppose $w^{-1}F(x) \in S$.  Then $H(\infty)^{-1}H(x) \in S$ and by Lemma B,
$$\frac{H^{(n)}(\infty)-H^{(n)}(t)}{H(\infty)-H(t)} \to n\Big(\frac{w}{1-w}\Big)^{n-1}, \quad n = 1, 2, \ldots \tag{3.2.24}$$
Thus, with n = k+1,
$$\frac{(1-w)\big[H^{(k+1)}(\infty)-H^{(k+1)}(t)\big]}{H(\infty)-H(t)} = \frac{(k+1)w^k}{(1-w)^{k-1}} + o(1), \tag{3.2.25}$$
while multiplying (3.2.24) with n = k by w gives
$$\frac{w\big[H^{(k)}(\infty)-H^{(k)}(t)\big]}{H(\infty)-H(t)} = \frac{k\,w^k}{(1-w)^{k-1}} + o(1). \tag{3.2.26}$$
Since $wH^{(k)}(\infty) = (1-w)H^{(k+1)}(\infty)$, subtracting (3.2.26) from (3.2.25), we find
$$\frac{wH^{(k)}(t)-(1-w)H^{(k+1)}(t)}{H(\infty)-H(t)} = \frac{w^k}{(1-w)^{k-1}} + o(1). \tag{3.2.27}$$
Therefore
$$\lim_{t\to\infty}E\big[N(t)^k \mid A(t)=1\big] = \lim_{t\to\infty}\sum_{j=1}^{k} j!\,t_{kj}\,\frac{wH^{(j)}(t)-(1-w)H^{(j+1)}(t)}{(1-w)\big[H(\infty)-H(t)\big]} = \sum_{j=1}^{k} j!\,t_{kj}\Big(\frac{w}{1-w}\Big)^j = \lim_{t\to\infty}EN(t)^k$$
by (3.2.21) and (3.2.23).
(ii) Now suppose
$$\lim_{t\to\infty}E\big[N(t)^k \mid A(t)=1\big] = \sum_{j=1}^{k} j!\,t_{kj}\Big(\frac{w}{1-w}\Big)^j.$$
Taking k = 1, we have
$$\frac{\int_0^t\big[H(\infty)-H(t-\tau)\big]\,dH(\tau)}{H(\infty)-H(t)} \to \frac{w}{1-w}$$
by (2.1.2).  That is,
$$\frac{H(\infty)H(t)-H^{(2)}(t)}{H(\infty)-H(t)} \to \frac{w}{1-w}.$$
But then
$$\frac{H(\infty)^2-H^{(2)}(t)}{H(\infty)-H(t)} = \frac{H(\infty)^2-H(\infty)H(t)}{H(\infty)-H(t)} + \frac{H(\infty)H(t)-H^{(2)}(t)}{H(\infty)-H(t)} \to \frac{w}{1-w} + \frac{w}{1-w} = \frac{2w}{1-w},$$
which is equivalent to saying $H(\infty)^{-1}H(t) \in S$.  Theorem A implies $w^{-1}F(t) \in S$ as well. □
Smith (1959) has shown that when $\{X_i\}$ is a proper renewal process, $E[N(t)^k]$ is asymptotically a kth degree polynomial in t.  However, when $\{X_i\}$ is transient and $w^{-1}F(x) \in S$, the moments of N(t) converge to finite values, even when we restrict our attention to moments of processes alive at t.  This is an important discrepancy.  For example, many of the standard renewal theoretic results cited in Chapter 1 are direct outgrowths of the Key Renewal Theorem, which depends in turn on the approximate linearity of H(t) = EN(t).  Our efforts to recover these standard results in the transient setting by conditioning on $\{A(t) = 1\}$ are completely fruitless when $w^{-1}F(x) \in S$ because $E[N(t) \mid A(t)=1] \to \frac{w}{1-w}$.

In the sub-exponential case, conditioning on $\{A(t) = 1\}$ seems to have little effect on the long term properties of the transient process.  We have seen, for example, that
$$\lim_{t\to\infty}\big(P\{N(t)=k \mid A(t)=1\} - P\{N(t)=k\}\big) = 0$$
and
$$\lim_{t\to\infty}\big(E\big[N(t)^k \mid A(t)=1\big] - E\big[N(t)^k\big]\big) = 0.$$
This is particularly surprising in light of later chapters, where
we shall see that when the tail of F decreases exponentially, the
conditioning scheme is very effective in developing a theory paralleling the standard renewal theory.
CHAPTER 4
THE EXPONENTIAL CASE

4.1  THE PROPER DISTRIBUTION $\bar F$

Suppose
$$\lim_{t\to\infty}\frac{q(t+s)}{q(t)} = \psi(s) = e^{-\alpha s}, \quad \alpha > 0.$$
Then, as we recall from Chapter 2, $e^{\alpha t}\big[H(\infty)-H(t)\big]$ is a function of moderate growth and
$$\int_0^\infty e^{ct}\big[H(\infty)-H(t)\big]\,dt < \infty \quad\text{for all } c < \alpha.$$
The tail of H (and of F) now falls off exponentially, in contrast with the sub-exponential case examined in the last chapter.
We can take $-s = c < \alpha$ in (2.3.5) and conclude
$$(1-w)\int_0^\infty e^{ct}\big[H(\infty)-H(t)\big]\,dt = \Big\{\int_0^\infty e^{ct}\big[F(\infty)-F(t)\big]\,dt\Big\}\Big\{1+\int_0^\infty e^{ct}\,dH(t)\Big\},$$
or
$$\int_0^\infty e^{ct}\big[F(\infty)-F(t)\big]\,dt = \frac{(1-w)\int_0^\infty e^{ct}\big[H(\infty)-H(t)\big]\,dt}{1+\int_0^\infty e^{ct}\,dH(t)}. \tag{4.1.1}$$
Now either
a) $\displaystyle\lim_{c\uparrow\alpha}\int_0^\infty e^{ct}\big[H(\infty)-H(t)\big]\,dt < \infty$, or
b) $\displaystyle\lim_{c\uparrow\alpha}\int_0^\infty e^{ct}\big[H(\infty)-H(t)\big]\,dt = \infty$.
When a) holds,
$$\lim_{c\uparrow\alpha}\int_0^\infty e^{ct}\big[F(\infty)-F(t)\big]\,dt = \int_0^\infty e^{\alpha t}\big[F(\infty)-F(t)\big]\,dt \tag{4.1.2}$$
by the Monotone Convergence Theorem.  However, if the integral diverges and b) obtains, then
$$\lim_{c\uparrow\alpha}\int_0^\infty e^{ct}\big[F(\infty)-F(t)\big]\,dt = \frac{1-w}{\alpha}.$$
Of course
$$\int_0^\infty e^{\alpha t}\big[F(\infty)-F(t)\big]\,dt = \frac{F^*(-\alpha)-w}{\alpha},$$
and hence either
a) $\displaystyle F^*(-\alpha) = \frac{w+\alpha(1-w)I}{1+\alpha(1-w)I} < 1$, where $I = \int_0^\infty e^{\alpha t}\big[H(\infty)-H(t)\big]\,dt < \infty$, or
b) $F^*(-\alpha) = 1$.
Lemma 4.1
If $q(t) \sim ke^{-\alpha t}$, then $F^*(-\alpha) = 1$.

Proof:
If $q(t) = (1-w)\big[H(\infty)-H(t)\big] \sim ke^{-\alpha t}$, then $\int_0^\infty e^{\alpha t}\big[H(\infty)-H(t)\big]\,dt = \infty$, and the result follows from the previous discussion. □

Suppose that $F^*(-\alpha) = 1$.  Then we can define the proper distribution $\bar F$ by
$$d\bar F(x) = e^{\alpha x}\,dF(x). \tag{4.1.3}$$
Write $\bar E$ and $\bar P$ for expectations and probabilities with respect to this distribution; let $\bar\mu_k = \bar E X^k$.
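When no closed form is available, the tilting parameter $\alpha$ defined by $F^*(-\alpha) = 1$ can be found numerically.  A sketch with a hypothetical defective distribution $F = w\cdot\text{Exponential}(\lambda)$, for which $\int e^{\alpha x}\,dF(x) = w\lambda/(\lambda-\alpha)$ and the exact root is $\alpha = \lambda(1-w)$:

```python
# Bisection for the tilting parameter alpha solving F*(-alpha) = 1,
# assuming the hypothetical defective F = w * Exponential(lam).
# Here F*(-alpha) = w*lam/(lam - alpha), so the exact root is
# alpha = lam*(1 - w) = 0.4.
w, lam = 0.6, 1.0

def mgf(alpha):                        # integral of e^{alpha x} dF(x)
    return w * lam / (lam - alpha)

lo, hi = 0.0, lam - 1e-9               # mgf rises from w to +inf on (0, lam)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mgf(mid) < 1.0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
print(alpha)   # about 0.4
```

Bisection suffices because $\int e^{\alpha x}\,dF(x)$ is increasing in $\alpha$ wherever it is finite.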
We say that a stochastic process G(t) is natural if its distribution depends only on t and $X_1, X_2, \ldots, X_{N(t)+1}$.  Cumulative processes are therefore natural.  $\bar G(t)$ is a proper process defined for all t if $X_1 \sim \bar F$; it is transient if $X_1 \sim F$.  The proper and transient processes are said to be equivalent and their expected values are related:
$$\bar E\,e^{-\alpha S_{N(t)+1}}\bar G(t) = \sum_{n=0}^{\infty}\int_{\{N(t)=n\}} e^{-\alpha(x_1+\cdots+x_{n+1})}\,G(t, x_1, \ldots, x_{n+1})\,d\bar F(x_1)\cdots d\bar F(x_{n+1}). \tag{4.1.4}$$
In the right hand side of (4.1.4), G(t) is a transient process.  However, the expression is well-defined because on each set $\{N(t)=n\}$, $x_1+\cdots+x_n \le t < \infty$ and $x_{n+1} < \infty$ since we are integrating with respect to $d\bar F(x_{n+1})$.  Thus the process G is alive at time t.  Remembering that $S_{N(t)+1} = t+\zeta_t$, we can rewrite (4.1.4) as
$$e^{-\alpha t}\,\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t) = E\,G(t)A(t). \tag{4.1.5}$$
Lemma 4.2
If $F^*(-\alpha) = 1$ and $\bar\mu_1 < \infty$, then
$$q(t) \sim \frac{(1-w)\,e^{-\alpha t}}{\alpha\bar\mu_1}.$$

Proof:
Let $G(t) \equiv 1$ in (4.1.5).  Then
$$e^{-\alpha t}\,\bar E\,e^{-\alpha\bar\zeta_t} = E\,A(t) = q(t). \tag{4.1.6}$$
From (1.1.13),
$$\lim_{t\to\infty}\bar E\,e^{-\alpha\bar\zeta_t} = \bar K^*(\alpha) = \frac{1-\bar F^*(\alpha)}{\alpha\bar\mu_1} = \frac{1-w}{\alpha\bar\mu_1}. \qquad\Box$$
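Lemma 4.2 can be confirmed on the hypothetical example $F = w\cdot\text{Exponential}(1)$: there $\alpha = 1-w$, the tilted distribution is Exponential(w), $\bar\mu_1 = 1/w$, and $q(t) = we^{-(1-w)t}$ exactly, which matches the predicted constant $(1-w)/(\alpha\bar\mu_1) = w$.

```python
import math

# Check of Lemma 4.2 on the hypothetical example F = w*Exponential(1):
# alpha = 1-w, the tilted distribution is Exponential(w), so
# mu_bar_1 = 1/w, and the lemma predicts
#   q(t) = (1-w)[H(inf)-H(t)] ~ (1-w)/(alpha*mu_bar_1) * e^{-alpha t}.
# For this F, q(t) = w*e^{-(1-w)t} exactly.
w = 0.3
alpha = 1.0 - w
mu_bar_1 = 1.0 / w
const = (1.0 - w) / (alpha * mu_bar_1)
for t in [1.0, 5.0, 20.0]:
    q = w * math.exp(-(1.0 - w) * t)          # exact value of q(t)
    assert abs(q - const * math.exp(-alpha * t)) < 1e-12
print(const)   # equals w = 0.3
```

For this light-tailed example the asymptotic relation is an identity, so the check is exact up to floating-point error.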
We shall assume that $F^*(-\alpha) = 1$ throughout the remainder of this chapter and in the next two chapters.  Equations (4.1.5) and (4.1.6) yield
$$E\big[G(t) \mid A(t)=1\big] = \frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)}{\bar E\,e^{-\alpha\bar\zeta_t}}. \tag{4.1.7}$$
Suppose G(t) is a cumulative process.  Then the right hand side of (4.1.7) involves $\bar G(t)$, a proper process built up from $\bar N(t)$ independent random variables and some extra piece unlikely to affect the distribution of $\bar G(t)$ substantively for large t, and $\bar\zeta_t$, the time from t until the next event.  In a loose sense, $\bar\zeta_t$ is determined by local X's around time t while $\bar G(t)$ depends on all tours up to time t.  We are led to consider the circumstances when
$$\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)}{\bar E\,e^{-\alpha\bar\zeta_t}} - \bar E\,\bar G(t) \to 0 \quad\text{as } t \to \infty, \tag{4.1.8}$$
for when (4.1.8) holds, we have
$$E\big[G(t) \mid A(t)=1\big] - \bar E\,\bar G(t) \to 0 \quad\text{as } t \to \infty, \tag{4.1.9}$$
and we can apply our considerable knowledge about $\bar E\,\bar G(t)$ to the transient process.
4.2  THE ASYMPTOTIC INDEPENDENCE OF $\bar\zeta_t$ AND $\bar G(t)$

The notions of sluggish events and processes are helpful in finding conditions guaranteeing that (4.1.9) holds.  An event A(t) is sluggish if $P\{\chi(A(t)) \ne \chi(A(t+\tau))\} \to 0$ as $t \to \infty$ for all fixed $\tau > 0$.  A(t) is natural if $\chi(A(t))$ is a natural process.  The following unpublished theorem is a major step toward the desired conditions assuring the asymptotic independence of $\bar G(t)$ and $\bar\zeta_t$.

Theorem B (Smith)
Suppose A(t) is a sluggish and natural event.  Then if $\bar\mu_1 < \infty$,
$$\bar P\{\bar\zeta_t \le x,\ A(t)\} - \bar K(x)\,\bar P\{A(t)\} \to 0 \quad\text{as } t \to \infty. \tag{4.2.1}$$
Proof:
Let $G_{A_t}(y) = \bar P\{\bar\zeta_t \le y,\ A(t)\}$ and choose $\varepsilon > 0$.  Then
$$\bar P\{\bar\zeta_{t+\tau} \le x,\ A(t)\} = \int_0^{\tau}\bar P\{\bar\zeta_{\tau-y} \le x\}\,dG_{A_t}(y) + \int_{\tau}^{\tau+x}dG_{A_t}(y). \tag{4.2.2}$$
We know that when $\bar\mu_1 < \infty$, $\lim_{t\to\infty}\bar P\{\bar\zeta_t \le x\} = \bar K(x)$, a continuous distribution function.  Thus the convergence is uniform with respect to x.  Hence there is some $T(\varepsilon)$ such that
$$\big|\bar P\{\bar\zeta_t \le x\} - \bar K(x)\big| < \varepsilon$$
whenever $t \ge T$.  Let $\tau = 2T$ in (4.2.2).  Then
$$\bar P\{\bar\zeta_{t+2T} \le x,\ A(t)\} = \int_0^{2T}\bar P\{\bar\zeta_{2T-y} \le x\}\,dG_{A_t}(y) + \int_{2T}^{2T+x}dG_{A_t}(y). \tag{4.2.3}$$
But for sufficiently large T the contribution from $\{\bar\zeta_t > T\}$ is less than $\varepsilon$, and thus
$$\bar P\{\bar\zeta_{t+2T} \le x,\ A(t)\} \le \big[\bar K(x)+\varepsilon\big]\bar P\{A(t)\} + \varepsilon(T). \tag{4.2.4}$$
(4.2.3) also yields
$$\bar P\{\bar\zeta_{t+2T} \le x,\ A(t)\} \ge \big[\bar K(x)-\varepsilon\big]G_{A_t}(T) = \big[\bar K(x)-\varepsilon\big]\big[\bar P\{A(t)\} - \bar P\{A(t),\ \bar\zeta_t > T\}\big]. \tag{4.2.5}$$
Therefore
$$\big|\bar P\{\bar\zeta_{t+2T} \le x,\ A(t)\} - \bar K(x)\,\bar P\{A(t)\}\big| \le \varepsilon(T), \tag{4.2.6}$$
where $\varepsilon(T) \to 0$ as $T \to \infty$.  Because A(t) is sluggish,
$$\bar P\{\bar\zeta_{t+2T} \le x,\ A(t)\} - \bar P\{\bar\zeta_{t+2T} \le x,\ A(t+2T)\} \to 0$$
and
$$\bar P\{A(t)\} - \bar P\{A(t+2T)\} \to 0.$$
Therefore
$$\bar P\{\bar\zeta_{t} \le x,\ A(t)\} - \bar K(x)\,\bar P\{A(t)\} \to 0;$$
that is, (4.2.1) holds. □
Corollary 4.1
If A(t) is a sluggish and natural event and $\bar\mu_1 < \infty$, then
$$\frac{\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t))}{\bar E\,e^{-\alpha\bar\zeta_t}} - \bar P\{A(t)\} \to 0. \tag{4.2.7}$$

Proof:
Choose $\varepsilon > 0$.  There exists an N such that if $\bar\zeta_t \ge N$,
$$e^{-\alpha\bar\zeta_t} < \varepsilon. \tag{4.2.8}$$
Let $B_N = \{\bar\zeta_t \le N\}$; then
$$\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t)\cap B_N^c) < \varepsilon. \tag{4.2.9}$$
Establish a partition $0 = x_0 < x_1 < \cdots < x_n = N$ with mesh m.
By Theorem B,
$$\lim_{t\to\infty}\Big\{\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t)\cap B_N) - \sum_{j=0}^{n-1}e^{-\alpha x_{j+1}}\bar P\{A(t)\}\big[\bar K(x_{j+1})-\bar K(x_j)\big]\Big\} \ge 0 \tag{4.2.10}$$
and
$$\lim_{t\to\infty}\Big\{\sum_{j=0}^{n-1}e^{-\alpha x_{j}}\bar P\{A(t)\}\big[\bar K(x_{j+1})-\bar K(x_j)\big] - \bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t)\cap B_N)\Big\} \ge 0. \tag{4.2.11}$$
Letting $m \to 0$, we conclude
$$\lim_{t\to\infty}\big[\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t)\cap B_N) - \bar P\{A(t)\}\bar K^*(\alpha)\big] = 0. \tag{4.2.12}$$
(4.2.8), (4.2.9), (4.2.12), and the arbitrary size of $\varepsilon$ imply
$$\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t)) - \bar P\{A(t)\}\bar K^*(\alpha) \to 0.$$
The conclusion follows because when $\bar\mu_1 < \infty$, $\bar E\,e^{-\alpha\bar\zeta_t} \to \bar K^*(\alpha)$. □

A direct application of this corollary produces the following lemma.
Lemma 4.3
Let W(t) be a cumulative process such that $\bar E\,Y_1 = \bar K_1 < \infty$ and $\bar\mu_1 < \infty$.  Then
$$P\Big\{\Big|\frac{W(t)}{t} - \frac{\bar K_1}{\bar\mu_1}\Big| > \varepsilon \ \Big|\ A(t)=1\Big\} \to 0 \quad\text{as } t \to \infty. \tag{4.2.13}$$

Proof:
Let $A(t) = \big\{\big|\frac{\bar W(t)}{t} - \frac{\bar K_1}{\bar\mu_1}\big| > \varepsilon\big\}$.  A(t) is sluggish because
$$\bar P\{\chi(A(t)) \ne \chi(A(t+\tau))\} \le \bar P\{A(t)\} + \bar P\{A(t+\tau)\} \to 0 \quad\text{as } t \to \infty,$$
since $\frac{\bar W(t)}{t} \to \frac{\bar K_1}{\bar\mu_1}$ a.s. $(\bar P)$ when $\bar\mu_1 < \infty$ and $\bar K_1 < \infty$.  Thus
$$P\Big\{\Big|\frac{W(t)}{t} - \frac{\bar K_1}{\bar\mu_1}\Big| > \varepsilon \ \Big|\ A(t)=1\Big\} = \frac{\bar E\,e^{-\alpha\bar\zeta_t}\chi(A(t))}{\bar E\,e^{-\alpha\bar\zeta_t}} = \bar P(A(t)) + o(1) \to 0 \tag{4.2.14}$$
by Corollary 4.1. □
To translate the asymptotic independence of $\bar\zeta_t$ and A(t) into an independence result for $\bar\zeta_t$ and some process $\bar G(t)$, we define sluggish processes.  A real valued process G(t) is sluggish if for all x outside a set E of Lebesgue measure 0 and for all fixed $\tau > 0$,
$$P\{G(t) \le x < G(t+\tau)\} + P\{G(t) > x \ge G(t+\tau)\} \to 0.$$

Lemma 4.4
If G(t) has a limiting proper distribution, then G(t) is sluggish if and only if $G(t+\tau)-G(t) \xrightarrow{P} 0$ as $t \to \infty$.
Proof:
(i) Suppose G(t) is sluggish and has limiting distribution J.  Fix $\varepsilon > 0$.  There exists some $N(\varepsilon)$ such that $\pm N$ are continuity points of J and $J(N)-J(-N) > 1-\varepsilon$.  Let $\Lambda = (-N, N]$.  Then
$$P\{|G(t+\tau)-G(t)| > \varepsilon\} \le P\{|G(t+\tau)-G(t)| > \varepsilon,\ G(t+\tau)\in\Lambda,\ G(t)\in\Lambda\} + P\{G(t+\tau)\in\Lambda^c\} + P\{G(t)\in\Lambda^c\}. \tag{4.2.15}$$
Let $-N = x_0 < x_1 < \cdots < x_M = N$ be a partition of $\Lambda$ with $x_j \in E^c$ and $x_{j+1}-x_j < \frac{\varepsilon}{2}$.  Then
$$P\{|G(t+\tau)-G(t)| > \varepsilon,\ G(t+\tau)\in\Lambda,\ G(t)\in\Lambda\} \le \sum_{j=0}^{M-1}P\{G(t+\tau)\in(x_j, x_{j+1}],\ G(t)\notin(x_j, x_{j+1}]\}$$
$$\le \sum_{j=0}^{M-1}P\{G(t) \le x_j < G(t+\tau)\} + \sum_{j=0}^{M-1}P\{G(t+\tau) \le x_j < G(t)\} \to 0 \quad\text{as } t \to \infty \tag{4.2.16}$$
since G is sluggish.  Also
$$\limsup_{t\to\infty}\big[P\{G(t+\tau)\in\Lambda^c\} + P\{G(t)\in\Lambda^c\}\big] \le 2\varepsilon. \tag{4.2.17}$$
$\varepsilon$ is arbitrary; thus (4.2.15), (4.2.16), and (4.2.17) imply $G(t+\tau)-G(t) \xrightarrow{P} 0$.
(ii) Now suppose G(t) has limiting distribution J and $G(t+\tau)-G(t) \xrightarrow{P} 0$.  Define $E = \{x: J(x) \ne J(x^-)\}$; E has Lebesgue measure 0.  Let $x \in E^c$ and choose $\varepsilon > 0$.
$$P\{G(t) \le x < G(t+\tau)\} = P\{G(t) \le x\} - P\{G(t) \le x,\ G(t+\tau) \le x\}$$
$$\le P\{G(t) \le x\} - P\{G(t) \le x-\varepsilon,\ |G(t+\tau)-G(t)| < \varepsilon\}$$
$$= P\{G(t) \le x\} - P\{G(t) \le x-\varepsilon\} + P\{G(t) \le x-\varepsilon,\ |G(t+\tau)-G(t)| \ge \varepsilon\}$$
$$\le P\{G(t) \le x\} - P\{G(t) \le x-\varepsilon\} + P\{|G(t+\tau)-G(t)| \ge \varepsilon\} \tag{4.2.18}$$
$$\to J(x) - J(x-\varepsilon) \quad\text{by assumption.}$$
But x is a continuity point of J and $\varepsilon$ is arbitrary.  Thus $P\{G(t) \le x < G(t+\tau)\} \to 0$ as $t \to \infty$; the argument for $P\{G(t) > x \ge G(t+\tau)\}$ is exactly the same. □
Lemma 4.5
Suppose $\bar G(t)$ is a sluggish and natural process such that for every $\varepsilon > 0$ there is a $\Delta(\varepsilon)$ making
$$\bar E\,|\bar G(t)|\,\chi(|\bar G(t)| \ge \Delta) < \varepsilon$$
for all sufficiently large t.  If $\bar\mu_1 < \infty$, then
$$\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)}{\bar E\,e^{-\alpha\bar\zeta_t}} - \bar E\,\bar G(t) \to 0 \quad\text{as } t \to \infty. \tag{4.2.19}$$

Proof:
Fix $\varepsilon$ and choose a corresponding $\Delta$; let $B_\Delta = (-\Delta, \Delta]$.  Then
$$\Big|\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)}{\bar E\,e^{-\alpha\bar\zeta_t}} - \frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta)}{\bar E\,e^{-\alpha\bar\zeta_t}}\Big| = \Big|\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta^c)}{\bar E\,e^{-\alpha\bar\zeta_t}}\Big| \le \varepsilon\big[\bar K^*(\alpha)+o(1)\big]^{-1}. \tag{4.2.20}$$
Now establish a partition $-\Delta = x_0 < x_1 < \cdots < x_N = \Delta$ having mesh m.  Then
$$\sum_{n=0}^{N-1}x_n\,\frac{\bar E\,e^{-\alpha\bar\zeta_t}\chi(x_n < \bar G(t) \le x_{n+1})}{\bar E\,e^{-\alpha\bar\zeta_t}} \le \frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta)}{\bar E\,e^{-\alpha\bar\zeta_t}} \le \sum_{n=0}^{N-1}x_{n+1}\,\frac{\bar E\,e^{-\alpha\bar\zeta_t}\chi(x_n < \bar G(t) \le x_{n+1})}{\bar E\,e^{-\alpha\bar\zeta_t}}. \tag{4.2.21}$$
Because the events $A_n = \{x_n < \bar G(t) \le x_{n+1}\}$ are sluggish, Corollary 4.1 implies
$$\lim_{t\to\infty}\Big[\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta)}{\bar E\,e^{-\alpha\bar\zeta_t}} - \sum_{n=0}^{N-1}x_n\,\bar P\{x_n < \bar G(t) \le x_{n+1}\}\Big] \ge 0 \tag{4.2.22}$$
and
$$\lim_{t\to\infty}\Big[\sum_{n=0}^{N-1}x_{n+1}\,\bar P\{x_n < \bar G(t) \le x_{n+1}\} - \frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta)}{\bar E\,e^{-\alpha\bar\zeta_t}}\Big] \ge 0.$$
Letting the mesh m approach 0, we find
$$\lim_{t\to\infty}\Big[\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar G(t)\chi(\bar G(t)\in B_\Delta)}{\bar E\,e^{-\alpha\bar\zeta_t}} - \bar E\,\bar G(t)\chi(\bar G(t)\in B_\Delta)\Big] = 0. \tag{4.2.23}$$
(4.2.20) and (4.2.23) guarantee the result. □

The following lemma is useful for checking that the conditions of Lemma 4.5 are fulfilled.
Lemma 4.6
Let $\bar G(t)$ be a natural process having proper distribution $J_t$.  If $J_t \xrightarrow{w} J$ and
$$\int|x|\,dJ_t(x) \to \int|x|\,dJ(x) < \infty, \tag{4.2.24}$$
then for every $\varepsilon > 0$ there is a $\Delta(\varepsilon)$ making
$$\int_{\{|x|>\Delta\}}|x|\,dJ_t(x) < \varepsilon$$
for all sufficiently large t.

Proof:
Fix $\varepsilon > 0$.  There is a $T_1$ such that
$$\Big|\int|x|\,dJ_t(x) - \int|x|\,dJ(x)\Big| < \varepsilon \quad\text{for } t \ge T_1 \tag{4.2.25}$$
by (4.2.24).  There is a $\Delta(\varepsilon)$ such that
$$\int_{\{|x|>\Delta\}}|x|\,dJ(x) < \varepsilon, \tag{4.2.26}$$
where $\pm\Delta$ are continuity points of J.  Define $g(x) = |x|$ for $|x| \le \Delta$ and $g(x) = \Delta$ otherwise.  g is a bounded continuous function and thus
$$\int g(x)\,dJ_t(x) \to \int g(x)\,dJ(x).$$
Thus for $t \ge T_2$,
$$\Big|\int g(x)\,dJ_t(x) - \int g(x)\,dJ(x)\Big| < \varepsilon. \tag{4.2.27}$$
In particular, for $t \ge T_3$,
$$\Big|\int\big[|x|-g(x)\big]\,dJ_t(x) - \int\big[|x|-g(x)\big]\,dJ(x)\Big| < 2\varepsilon \tag{4.2.28}$$
by (4.2.27) and (4.2.25);
$$\Rightarrow\quad \Big|\int_{\{|x|>\Delta\}}\big[|x|-\Delta\big]\,dJ_t(x) - \int_{\{|x|>\Delta\}}\big[|x|-\Delta\big]\,dJ(x)\Big| < 2\varepsilon \tag{4.2.29}$$
$$\Rightarrow\quad \int_{\{|x|>\Delta\}}\big[|x|-\Delta\big]\,dJ_t(x) < 3\varepsilon \tag{4.2.30}$$
for $t \ge T_4$, by (4.2.26) and (4.2.29).  Since $|x| \le 2\big[|x|-\Delta\big]$ for $|x| > 2\Delta$, the conclusion follows with $\Delta(\varepsilon)$ suitably enlarged. □
4.3  RESULTS AND EXAMPLES

We now use the technical lemmas of the last section to derive some results about transient cumulative processes.  We also return to the "time until ruin" problem discussed in Chapter 1.

Lemma 4.7
Suppose W(t) is a cumulative process such that $\bar K_2 < \infty$ and $\bar\mu_2 < \infty$.  Then
$$\bar\eta_1(t) = \frac{\bar W(t) - \frac{\bar K_1}{\bar\mu_1}t}{\sigma\sqrt{t}} \tag{4.3.1}$$
is a sluggish process.
Proof:
From (1.2.4), we know that $\bar P\{\bar\eta_1(t) \le a\} \to \Phi(a)$, and hence by Lemma 4.4 it is sufficient to prove that $\bar\eta_1(t+\tau)-\bar\eta_1(t) \xrightarrow{\bar P} 0$.  Now
$$\bar\eta_1(t+\tau)-\bar\eta_1(t) = \frac{\bar W(t+\tau)-\bar W(t) - \frac{\bar K_1}{\bar\mu_1}\tau}{\sigma\sqrt{t+\tau}} - \bar\eta_1(t)\Big(1-\sqrt{\tfrac{t}{t+\tau}}\Big). \tag{4.3.2}$$
Hence, splitting according to which piece exceeds $\varepsilon/3$,
$$\bar P\{|\bar\eta_1(t+\tau)-\bar\eta_1(t)| > \varepsilon\} \le A + B + C, \tag{4.3.3}$$
where A, B, and C are the probabilities that the increment of $\bar W$ over $(t, t+\tau]$, the term $\bar K_1\big[\bar N(t+\tau)-\bar N(t)\big]$ arising from the drift correction, and the term involving $\bar\eta_1(t)$, respectively, exceed $\varepsilon/3$ after normalization by $\sigma\sqrt{t+\tau}$.
In his paper introducing the concept of cumulative processes, Smith (1955) proved that if $\bar\mu_1 < \infty$ and $\bar K_2 < \infty$,
$$\text{(i)}\quad \frac{\bar W(t)-\bar W(S_{\bar N(t)})}{\sqrt{t}} \to 0 \ \text{a.s.}\ (\bar P) \qquad\text{and}\qquad \text{(ii)}\quad \frac{\bar W(S_{\bar N(t)+1})-\bar W(t)}{\sqrt{t}} \to 0 \ \text{a.s.}\ (\bar P).$$
These two results coupled with the fact that
$$\frac{\bar W(S_{\bar N(t+\tau)})-\bar W(S_{\bar N(t)+1})}{\sqrt{t+\tau}} = \frac{\sum_{j=\bar N(t)+2}^{\bar N(t+\tau)}Y_j}{\bar N(t+\tau)-\bar N(t)-1}\cdot\frac{\bar N(t+\tau)-\bar N(t)-1}{\sqrt{t+\tau}} \to 0 \ \text{a.s.}\ (\bar P)$$
by the Strong Law of Large Numbers ensure that $A \to 0$.  $B \to 0$ by the Strong Law also.  Finally, $C \to 0$ by the asymptotic normality of $\bar\eta_1(t)$. □
Lemma 4.8
Suppose W(t) is a cumulative process such that $\bar\mu_2 < \infty$ and $\bar K_2 < \infty$.  Then
$$\bar\eta_2(t) = \frac{\bar W(t) - \frac{\bar K_1}{\bar\mu_1}t}{\sqrt{\gamma t/\bar\mu_1}} \tag{4.3.4}$$
is sluggish.

Proof:
(1.2.5) and obvious modifications to the argument in Lemma 4.7 will suffice. □
Theorem 4.1
Let W(t) be a cumulative process such that $\bar\mu_1 < \infty$ and $\bar K_2 < \infty$.  Then
(i)
$$P\Big\{\Big|\frac{W(t)}{t} - \frac{\bar K_1}{\bar\mu_1}\Big| > \varepsilon \ \Big|\ A(t)=1\Big\} \to 0; \tag{4.3.5}$$
(ii) if, in addition, $\bar\mu_2 < \infty$,
$$P\Big\{\frac{W(t) - \frac{\bar K_1}{\bar\mu_1}t}{\sigma\sqrt{t}} \le a \ \Big|\ A(t)=1\Big\} \to \Phi(a). \tag{4.3.6}$$

Proof:
Using the notation $\bar\eta_j(t)$ established in Lemmas 4.7 and 4.8, $\{\bar\eta_j(t) \le a\}$ is a sluggish event if condition j holds; j = 1, 2.  Hence
$$P\{\bar\eta_j(t) \le a \mid A(t)=1\} = \frac{\bar E\,e^{-\alpha\bar\zeta_t}\chi(\bar\eta_j(t) \le a)}{\bar E\,e^{-\alpha\bar\zeta_t}} = \bar P\{\bar\eta_j(t) \le a\} + o(1) \to \Phi(a)$$
by Corollary 4.1. □
Let us re-examine the "time until ruin" problem.  We found that
$$P\{\tau^* \le t\} = P\Big\{\sum_{k=1}^{M(t)}L_k > u\Big\},$$
where M(t) is the renewal count for the transient renewal process $\{Z_i\}$, so that $\sum_{k=1}^{M(t)}L_k$ is a transient cumulative process.  Let $J(z,\ell)$ be the joint distribution of $Z_1$ and $L_1$ and suppose there is a $\sigma$ making
$$\int\!\!\int e^{\sigma z}\,dJ(z,\ell) = 1.$$
Define $d\bar J(z,\ell) = e^{\sigma z}\,dJ(z,\ell)$.  If the corresponding second moments are finite, then Theorem 4.1 applies to this cumulative process, and we obtain
$$P\{\tau^* \ge t\} \sim \frac{C\,e^{-\sigma t}}{1-w}, \tag{4.3.7}$$
where $1-w = P\{Z_1 = \infty\}$ and C is a constant determined by $\bar J$.  Equation (4.3.7) agrees with Cramér's estimate for the time until ruin (1955).
Theorem 4.2
Suppose W(t) is a cumulative process such that $\bar\mu_2 < \infty$ and $\bar K_2 < \infty$.  Then
$$E\Big[\Big(W(t) - \frac{\bar K_1 t}{\bar\mu_1}\Big)^2 \ \Big|\ A(t)=1\Big] = \frac{\gamma t}{\bar\mu_1} + o(t).$$

Proof:
Define $\bar\eta_2(t)$ as before.  Then
$$\bar P\{\bar\eta_2(t)^2 \le a\} \to 2\Phi(\sqrt{a})-1, \tag{4.3.8}$$
a proper limiting distribution.
Smith (1955) has shown that $\bar E\,\bar\eta_2(t)^2 \to 1$, which is, of course, the first moment of the limiting distribution of $\bar\eta_2(t)^2$.  As $\bar\eta_2(t)$ is a sluggish process, $\bar\eta_2(t)^2$ is too.  Therefore, by Lemmas 4.5 and 4.6,
$$\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar\eta_2(t)^2}{\bar E\,e^{-\alpha\bar\zeta_t}} - \bar E\,\bar\eta_2(t)^2 \to 0 \quad\text{as } t \to \infty. \tag{4.3.9}$$
Hence
$$E\big[\bar\eta_2(t)^2 \mid A(t)=1\big] \to 1. \qquad\Box$$
In light of this result, it is reasonable to ask whether
$$\operatorname{Var}\big[W(t) \mid A(t)=1\big] = \frac{\gamma t}{\bar\mu_1} + o(t). \tag{4.3.10}$$
We have
$$E\Big[\Big\{W(t) - E\big(W(t)\mid A(t)=1\big) + E\big(W(t)\mid A(t)=1\big) - \frac{\bar K_1 t}{\bar\mu_1}\Big\}^2 \ \Big|\ A(t)=1\Big]$$
$$= E\Big[\big\{W(t) - E\big(W(t)\mid A(t)=1\big)\big\}^2 \ \Big|\ A(t)=1\Big] + \Big\{E\big(W(t)\mid A(t)=1\big) - \frac{\bar K_1 t}{\bar\mu_1}\Big\}^2.$$
If we can show that
$$\Big\{E\big(W(t)\mid A(t)=1\big) - \frac{\bar K_1 t}{\bar\mu_1}\Big\}^2 = o(t),$$
then we will know
$$\operatorname{Var}\big[W(t) \mid A(t)=1\big] = \frac{\gamma t}{\bar\mu_1} + o(t).$$
In the next two chapters we will derive the asymptotic forms of the conditional cumulants of W(t); in the process we will discover conditions under which (4.3.10) does indeed hold.

CHAPTER 5
THE CUMULANTS OF TRANSIENT RENEWAL PROCESSES

5.1  PRELIMINARY RESULTS

We shall show that when the proper distribution $\bar F$ exists, the kth conditional cumulant of N(t) is asymptotically linear in t, just as is the kth cumulant of the renewal count of a proper process.  To deal with cumulants, Smith (1959) had to assume that (the then proper) $F \in C$, the class of distributions for which $F^{(n)}$ has an absolutely continuous component for some finite n.  We will need $\bar F \in C$ and will see that it is sufficient that $F(\infty)^{-1}F \in C$.

Lemma 5.1
Suppose $d\bar F(x) = e^{\alpha x}\,dF(x)$.  Then
$$d\bar F^{(n)}(x) = e^{\alpha x}\,dF^{(n)}(x). \tag{5.1.1}$$
Proof:
$$\bar F^*(s) = F^*(s-\alpha) \quad\Rightarrow\quad \bar F^*(s)^n = F^*(s-\alpha)^n; \tag{5.1.2}$$
that is,
$$\int_0^\infty e^{-sx}\,d\bar F^{(n)}(x) = \int_0^\infty e^{-(s-\alpha)x}\,dF^{(n)}(x).$$
By the uniqueness theorem for Laplace Transforms, $d\bar F^{(n)}(x) = e^{\alpha x}\,dF^{(n)}(x)$. □

Lemma 5.2
If $F(\infty)^{-1}F \in C$, then $\bar F \in C$.

Proof:
This is clear from Lemma 5.1, for if $F^{(n)}(x)$ has an absolutely continuous component $\int_0^x f_n(y)\,dy$, then $\bar F^{(n)}(x)$ has an absolutely continuous component $\int_0^x e^{\alpha y}f_n(y)\,dy$. □
As a first step toward an examination of the conditional cumulants, let us consider
$$\frac{\bar E\,e^{-\alpha\bar\zeta_t}\bar N(t)^k}{\bar E\,e^{-\alpha\bar\zeta_t}}, \quad k = 1, 2, \ldots$$

Lemma 5.3
$$\bar E\,e^{-\alpha\bar\zeta_t}\bar N(t)^k = e^{\alpha t}\sum_{n=1}^\infty n^k\big[wF^{(n)}(t)-F^{(n+1)}(t)\big].$$

Proof:
By Lemma 5.1 and (4.1.5),
$$\bar E\,e^{-\alpha\bar\zeta_t}\bar N(t)^k = e^{\alpha t}\,E\,N(t)^k A(t) = e^{\alpha t}\sum_{n=1}^\infty n^k\big[wF^{(n)}(t)-F^{(n+1)}(t)\big]. \tag{5.1.3}$$
In the same way,
$$\bar E\,e^{-\alpha\bar\zeta_t} = e^{\alpha t}\int_t^\infty dF(\tau) + e^{\alpha t}\int_0^t\int_{t-\tau}^\infty dF(y)\,dH(\tau) = e^{\alpha t}\big[w-(1-w)H(t)\big]. \tag{5.1.4}$$
Since
$$EN(t)^k = \sum_{n=1}^\infty n^k\big[F^{(n)}(t)-F^{(n+1)}(t)\big], \tag{5.1.5}$$
we may also write
$$\bar E\,e^{-\alpha\bar\zeta_t}\bar N(t)^k = e^{\alpha t}\sum_{n=1}^\infty n^k\big[wF^{(n)}(t)-wF^{(n+1)}(t)-(1-w)F^{(n+1)}(t)\big] = e^{\alpha t}\Big[w\,EN(t)^k - (1-w)\sum_{n=1}^\infty n^k F^{(n+1)}(t)\Big]. \tag{5.1.6} \qquad\Box$$
Define
$$\Lambda_k(t) = \bar E\,\bar N(t)_{(k)} = \bar E\,\bar N(t)\big[\bar N(t)-1\big]\cdots\big[\bar N(t)-k+1\big]. \tag{5.1.7}$$

Lemma 5.4
$$\Lambda_k(t) = k!\,\bar H^{(k)}(t).$$

Proof:
$$\Lambda_k(t) = \sum_{j=k}^\infty j_{(k)}\int_0^t\int_{t-\tau}^\infty d\bar F(y)\,d\bar F^{(j)}(\tau), \tag{5.1.8}$$
so
$$\int_0^\infty e^{-st}\Lambda_k(t)\,dt = \sum_{j=k}^\infty j_{(k)}\int_0^\infty\int_0^\infty\int_{\tau}^{y+\tau}e^{-st}\,dt\,d\bar F(y)\,d\bar F^{(j)}(\tau)$$
$$= s^{-1}\big(1-\bar F^*(s)\big)\sum_{j=k}^\infty j_{(k)}\bar F^*(s)^j = s^{-1}k!\,\bar H^*(s)^k = \frac{k!\,\bar F^*(s)^k}{s\big[1-\bar F^*(s)\big]^k}. \tag{5.1.9}$$
Since $\Lambda_k(t)$ is nondecreasing, its Laplace-Stieltjes Transform exists and is
$$\Lambda_k^*(s) = s\int_0^\infty e^{-st}\Lambda_k(t)\,dt = k!\,\bar H^*(s)^k. \tag{5.1.10}$$
The uniqueness theorem for Laplace Transforms implies the result. □
Combining Lemmas 5.3 and 5.4, we discover
$$E\big[N(t)_{(k)} \mid A(t)=1\big] = \frac{k!\int_0^t \bar E\,e^{-\alpha\bar\zeta_{t-\tau}}\,d\bar H^{(k)}(\tau)}{\bar E\,e^{-\alpha\bar\zeta_t}}. \tag{5.1.11}$$
We know about the properties of $\bar H^{(k)}$ from Smith (1959), but we need to find the rate at which $\bar E\,e^{-\alpha\bar\zeta_t}$ converges to $\bar K^*(\alpha)$.  The next lemma provides valuable information on this point.

Lemma 5.5
Suppose $m \ge 1$, $\bar\mu_{m+1} < \infty$, and $\bar F \in C$.  Then
$$\bar E\,e^{-\alpha\bar\zeta_t} - \bar K^*(\alpha) = o(t^{-m}).$$

Proof:
$$\bar E\,e^{-\alpha\bar\zeta_t} = \int_t^\infty e^{-\alpha(x-t)}\,d\bar F(x) + \int_0^t\int_{t-\tau}^\infty e^{-\alpha(x+\tau-t)}\,d\bar F(x)\,d\bar H(\tau). \tag{5.1.12}$$
Note that, splitting the range of $\tau$ into unit intervals,
$$\int_0^t e^{-\alpha(t-\tau)}\,d\bar H(\tau) \le \sum_{j=0}^{[t]}e^{-\alpha j}\big[\bar H(t-j)-\bar H(t-j-1)\big] \le K\sum_{j=0}^{\infty}e^{-\alpha j} = K\big(1-e^{-\alpha}\big)^{-1}, \tag{5.1.13}$$
where K is a finite upper bound on $\bar H(u+1)-\bar H(u)$; this bound exists by a well-known inequality.
Hence
$$\bar K^*(\alpha) = \frac{1}{\bar\mu_1}\int_0^\infty\int_0^y e^{-\alpha(y-x)}\,dx\,d\bar F(y) \tag{5.1.14}$$
$$= \frac{1}{\bar\mu_1}\int_0^t\int_0^y e^{-\alpha u}\,du\,d\bar F(y) + \frac{1}{\bar\mu_1}\int_t^\infty\int_0^y e^{-\alpha(y-x)}\,dx\,d\bar F(y) = \frac{1}{\bar\mu_1}\int_0^t\int_0^y e^{-\alpha u}\,du\,d\bar F(y) + o(t^{-m-1}). \tag{5.1.15}$$
Therefore
$$\big|\bar E\,e^{-\alpha\bar\zeta_t} - \bar K^*(\alpha)\big| \le \int_0^{t/2}\sum_{j=0}^\infty e^{-\alpha j}\Big|\bar H(t+j+1-x)-\bar H(t+j-x)-\frac{1}{\bar\mu_1}\Big|\,d\bar F(x) + K\big(1-e^{-\alpha}\big)^{-1}\int_{t/2}^t d\bar F(x) + o(t^{-m-1}). \tag{5.1.16}$$
On the range $0 \le x \le \frac{t}{2}$,
$$\Big|\bar H(t+j+1-x)-\bar H(t+j-x)-\frac{1}{\bar\mu_1}\Big| = o(t^{-m})$$
by Theorems 3 and 4 of Smith (1959), since $\bar\mu_{m+1} < \infty$.  Thus the first summand in (5.1.16) is $o(t^{-m})$ and the second is $o(t^{-m-1})$.  The result follows. □
In (5.1.4) we found that $\bar E\,e^{-\alpha\bar\zeta_t} = e^{\alpha t}\big[w-(1-w)H(t)\big]$.  This function is of bounded variation in every finite interval and thus its Laplace-Stieltjes Transform exists.

Lemma 5.6
$$L\{\bar E\,e^{-\alpha\bar\zeta_t}\} = \frac{s\big[\bar F^*(s)-w\big]}{(\alpha-s)\big[1-\bar F^*(s)\big]}.$$

Proof:
$$\int_0^\infty e^{-st}\,\bar E\,e^{-\alpha\bar\zeta_t}\,dt = \int_0^\infty\int_t^\infty e^{-st}e^{-\alpha(x-t)}\,d\bar F(x)\,dt + \int_0^\infty\int_0^t\int_{t-\tau}^\infty e^{-st}e^{-\alpha(y+\tau-t)}\,d\bar F(y)\,d\bar H(\tau)\,dt$$
$$= (\alpha-s)^{-1}\big[\bar F^*(s)-w\big]\big[1+\bar H^*(s)\big] = \frac{\bar F^*(s)-w}{(\alpha-s)\big[1-\bar F^*(s)\big]}, \tag{5.1.17}$$
since $\bar H^*(s) = \dfrac{\bar F^*(s)}{1-\bar F^*(s)}$. □
Returning to expression (5.1.11), we have
[Ee
-or; t-T - -or; t
-Ee
]dH(k)(T).
-or;
t
Ee
(5.1.18)
From Smith (1959), H(k)(t) is asymptotically a kth degree polynomial
in t.
Given Lemma 5.5, it is reasonable to expect E[N(t)kIA(t)=l]
(and hence E[N(t)kIA(t)=l]) to be a kth degree polynomial in t whose
leading coefficient is the leading coefficient in k!H(k)(t) if
~k+l < ~
Our arguments to this effect depend upon the following
lemma.
Lemma D (Smith 1959)

A necessary and sufficient condition for the proper distribution F of a non-negative random variable to have its first k moments finite is that there exist another distribution function F_{(k)} such that

    F*(s) = 1 − μ_1 s + (μ_2/2!)s² − ⋯ + (μ_{k-1}/(k−1)!)(−s)^{k-1} + (μ_k/k!)(−s)^k F*_{(k)}(s)        (5.1.19)

for real s ≥ 0.  In fact, F*(s) = 1 − μ_1 s + ⋯ + (μ_r/r!)(−s)^r F*_{(r)}(s) for 1 ≤ r ≤ k, where F_{(r)}, the rth derived distribution of F, has k−r finite moments.  The jth moment of F_{(r)} is a rational function of μ_1, …, μ_{r+j}.

Theorem 5.1

Suppose F ∈ C and μ̄_2 < ∞.  Then

    E[N(t) | A(t)=1] = t/μ̄_1 + μ̄_2/μ̄_1² + 1/(σμ̄_1) − (2−w)/(1−w) + o(1).
Proof:

From (5.1.11) we know that

    E[N(t) | A(t)=1] = ∫₀^t Ee^{-σζ_{t-τ}} dH̄(τ) / Ee^{-σζ_t}.        (5.1.20)

Consider

    ω(t) = ∫₀^t Ee^{-σζ_{t-τ}} dH̄(τ) − K̄*(σ) H̄(t),        (5.1.21)

so that, by Lemma 5.6,

    ω*(s) = s[F̄*(s)−w] H̄*(s) / ((σ−s)[1−F̄*(s)]) − K̄*(σ) H̄*(s)
          = H̄*(s) [ s[F̄*(s)−w] / ((σ−s)[1−F̄*(s)]) − K̄*(σ) ].        (5.1.22)

By Lemma D,

    F̄*(s) = 1 − μ̄_1 s F̄*_{(1)}(s),

and expanding the bracketed difference to first order in s, using F̄*(s) = 1 − μ̄_1 s + (μ̄_2/2!)s² F̄*_{(2)}(s), gives

    lim_{s→0} ω*(s) = ((1−w)/σ) [ μ̄_2/(2μ̄_1²) + 1/(σμ̄_1) − 1/(1−w) ] lim_{s→0} sH̄*(s).

But

    lim_{s→0} sH̄*(s) = lim_{s→0} sF̄*(s)/(1−F̄*(s)) = lim_{s→0} F̄*(s)/(μ̄_1 F̄*_{(1)}(s)) = 1/μ̄_1.        (5.1.23)

Therefore

    lim_{t→∞} ω(t) = ((1−w)/(σμ̄_1)) [ μ̄_2/(2μ̄_1²) + 1/(σμ̄_1) − 1/(1−w) ]        (5.1.24)

and hence

    E[N(t) | A(t)=1] = K̄*(σ){ H̄(t) + μ̄_2/(2μ̄_1²) + 1/(σμ̄_1) − 1/(1−w) } / Ee^{-σζ_t} + o(1).        (5.1.25)

From Smith (1954),

    H̄(t) = t/μ̄_1 + μ̄_2/(2μ̄_1²) − 1 + o(1),

and Lemma 5.5 guarantees t[Ee^{-σζ_t} − K̄*(σ)] → 0.  Thus

    E[N(t) | A(t)=1] = t/μ̄_1 + μ̄_2/μ̄_1² + 1/(σμ̄_1) − (2−w)/(1−w) + o(1).  □
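Theorem 5.1 can be checked numerically in a case where everything is explicit.  Take defective exponential lifetimes (an illustrative choice, not from the text): each X_j is infinite with probability 1−w and Exp(λ) otherwise.  Then σ = λ(1−w), the tilted distribution F̄ is Exp(λw), and conditional on A(t)=1 the count N(t) is Poisson with mean λwt, so the constant terms in the theorem cancel and E[N(t)|A(t)=1] = λwt exactly.  A minimal Monte Carlo sketch (λ = 1, w = 0.6, t = 8 are arbitrary illustrative values):

```python
import random

def simulate_conditional_mean(lam, w, t, trials, seed=1):
    """Monte Carlo estimate of E[N(t) | A(t)=1] for a transient renewal
    process whose lifetimes are infinite with probability 1-w and
    Exp(lam) otherwise (a defective-exponential F)."""
    rng = random.Random(seed)
    total, accepted = 0, 0
    for _ in range(trials):
        s, n = 0.0, 0
        while True:
            if rng.random() >= w:        # infinite lifetime drawn: process dies
                alive = False
                break
            x = rng.expovariate(lam)     # finite lifetime
            if s + x > t:                # lifetime straddles t: N(t)=n, A(t)=1
                alive = True
                break
            s += x
            n += 1
        if alive:
            accepted += 1
            total += n
    return total / accepted

est = simulate_conditional_mean(lam=1.0, w=0.6, t=8.0, trials=200_000)
# theory for this defective-exponential case: E[N(t)|A(t)=1] = lam*w*t = 4.8
```

With these parameters the survival probability P{A(t)=1} = w e^{-λ(1−w)t} is about 0.025, so roughly 5,000 of the 200,000 paths are accepted and the estimate should land within a few percent of λwt = 4.8.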
The method employed in Theorem 5.1 can be used to derive forms for higher cumulants.  However, the computations involved become increasingly burdensome as the order increases.  For example, to find the conditional variance, we begin with E[N(t)(N(t)−1) | A(t)=1].  If μ̄_3 < ∞, we can show that E[N(t)(N(t)−1) | A(t)=1] is asymptotically 2H̄^{(2)}(t) corrected by a multiple of H̄(t) whose coefficient is built from μ̄_2/(2μ̄_1²), 1/(σμ̄_1), and 1/(1−w).        (5.1.26)–(5.1.27)

The argument depends heavily upon Lemma D and the distributions F̄_{(1)}, F̄_{(2)}, and F̄_{(3)}.  Combining this result with Theorem 5.1 and Smith's work on the cumulants of proper renewal processes, we discover

    Var[N(t) | A(t)=1] = k̄_2(t) + ρ_2 + o(1),        (5.1.28)

where k̄_2(t) is the second cumulant of the associated proper renewal process and the constant ρ_2, whose terms include 1/(σ²μ̄_1²) and −w/(1−w)², is derived independently in Corollary 5.2.

In the next sections we provide a preferable method of deriving the cumulants.  We prove they are asymptotically linear in t and find properties of their error terms.
5.2  THE CONDITIONAL ψ-MOMENTS

Smith discovered asymptotic formulas for the cumulants of a renewal process by studying what he called the ψ-moments of the process:

    Φ_r(t) = E{[N(t)+1][N(t)+2]⋯[N(t)+r]}.        (5.2.1)

These moments have generating function

    Ψ_t(ζ) = E[ 1/(1−ζ)^{N(t)+1} ].

If μ_n < ∞, then

    Ψ_t(ζ) = 1 + Σ_{j=1}^n Φ_j(t) ζ^j/j! + o(ζ^n)   as ζ → 0        (5.2.2)

and log Ψ_t(ζ) admits the expansion

    log Ψ_t(ζ) = Σ_{j=1}^n ψ_j(t) ζ^j/j! + o(ζ^n).        (5.2.3)

The coefficients ψ_j(t) are called ψ-cumulants of N(t).  By working with ψ-moments and ψ-cumulants rather than conventional moments and cumulants, one bypasses certain computational complexities.  The rth conventional cumulant k_r(t) is a linear combination of ψ_1(t), …, ψ_r(t):

    k_n(t) = Σ_{j=0}^{n-1} (−1)^j t_{n,n-j} ψ_{n-j}(t),   n ≥ 2,        (5.2.4)

where the t_{n,k}'s are Stirling's numbers of the second kind.  See Smith (1959) for details.
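The passage (5.2.4) from ψ-cumulants to conventional cumulants is easy to check numerically.  The sketch below (my own illustration, using an arbitrary three-point distribution for N) computes the ψ-moments Φ_r = E[(N+1)⋯(N+r)], extracts the ψ-cumulants from the logarithm of the series (5.2.2), applies the Stirling-number combination, and compares with cumulants computed directly:

```python
def stirling2(n, k):
    """Stirling number of the second kind S(n, k), by the standard recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# An arbitrary distribution for N on {0, 1, 2} (illustrative only).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def rising(k, r):
    """(k+1)(k+2)...(k+r)."""
    out = 1
    for i in range(1, r + 1):
        out *= k + i
    return out

# psi-moments Phi_r = E[(N+1)(N+2)...(N+r)] for r = 1, 2, 3.
Phi = {r: sum(p * rising(k, r) for k, p in pmf.items()) for r in (1, 2, 3)}

# psi-cumulants: coefficients of zeta^j/j! in log(1 + sum Phi_j zeta^j/j!).
a1, a2, a3 = Phi[1], Phi[2] / 2, Phi[3] / 6      # plain series coefficients
b1 = a1
b2 = a2 - a1 ** 2 / 2
b3 = a3 - a1 * a2 + a1 ** 3 / 3
psi = {1: b1, 2: 2 * b2, 3: 6 * b3}

# Conventional cumulants via (5.2.4): k_n = sum_j (-1)^j S(n, n-j) psi_{n-j}.
def k_from_psi(n):
    return sum((-1) ** j * stirling2(n, n - j) * psi[n - j] for j in range(n))

# Direct cumulants of N for comparison.
m1 = sum(k * p for k, p in pmf.items())
var = sum((k - m1) ** 2 * p for k, p in pmf.items())
k3 = sum((k - m1) ** 3 * p for k, p in pmf.items())
```

Here k_from_psi(2) reproduces Var N and k_from_psi(3) the third cumulant (0.49 and −0.048 for this pmf), confirming the Stirling-number combination for n = 2, 3.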
We will study the conditional ψ-moments and ψ-cumulants of transient renewal processes and then relate our results to the ordinary conditional cumulants, which we specify up to an error term.  Let

    Φ_{nc}(t) = E[(N(t)+1)⋯(N(t)+n) | A(t)=1] = [Ee^{-σζ_t} ⋆ Φ̄_n](t) / Ee^{-σζ_t}        (5.2.5)

by Lemma 5.3, where Φ̄_n(t) is the nth ψ-moment of the proper renewal process generated by F̄ and ⋆ denotes Stieltjes convolution.
Consider

    L{[Ee^{-σζ_t} ⋆ Φ̄_n](t)} = s[w−F̄*(s)] Φ̄*_n(s) / ((s−σ)[1−F̄*(s)])        (5.2.6)

by Lemma 5.6.  Our goal is to rewrite (5.2.6) as a linear combination of Φ̄*_n(s), Φ̄*_{n-1}(s), …, Φ̄*_1(s), because Smith carefully studied the properties of Φ̄_n(t), …, Φ̄_1(t) in his 1959 paper.
Suppose n is a positive integer, p is nonnegative, and μ̄_{n+p+1} < ∞.  Assume temporarily that s > σ.  Note that, after an integration by parts,

    [w−F̄*(s)]/(s−σ) = ∫₀^∞ e^{-(s-σ)x}[w−F(x)] dx = ∫₀^∞ e^{-sx}{e^{σx}[w−F(x)]} dx = ∫₀^∞ e^{-sx} L(x) dx,        (5.2.7)

where L(x) = e^{σx}[w−F(x)].  Since

    lim_{x→∞} e^{σx}[w−F(x)] = 0,

L(x) is of bounded variation on [0,∞), and

    ∫₀^∞ y^{n+p+1} d(−L(y)) = (n+p+1) ∫₀^∞ y^{n+p} L(y) dy = (n+p+1) ∫₀^∞ y^{n+p} e^{σy}[w−F(y)] dy < ∞.        (5.2.8)

Therefore s[w−F̄*(s)]/(s−σ) = sL⁰(s) = L*(s) is the Laplace–Stieltjes transform of a function of bounded variation on [0,∞) having n+p+1 absolute moments, defined throughout the half plane R(s) ≥ 0.  The integrability of L(x) and the identity L*(s) = sL⁰(s) imply that L*(s) = O(|s|) as |s| → 0; thus Theorem G of the appendix applies with m = n+p and q = 1.  It follows that, for s > 0,

    s[w−F̄*(s)]/(s−σ) = L*(s) = D̂_1*(s) − D̂_2*(s),        (5.2.9)

where D̂_1 and D̂_2 are bounded nondecreasing functions whose normalizations D_1 and D_2 are distributions on [0,∞) having n+p finite moments.  Notice that the right hand side of (5.2.9) is an analytic function throughout the half plane R(s) > 0.  Thus equation (5.2.9) must hold for R(s) > 0.
Let the first n moments of D_1 and D_2 be v_{11}, v_{12}, …, v_{1n} and v_{21}, v_{22}, …, v_{2n}.  We can write

    D_j*(s) = 1 − v_{j1}s + (v_{j2}/2!)s² − ⋯ + (v_{jn}/n!)(−s)^n D*_{j(n)}(s),        (5.2.10)

where D_{j(n)} is the nth derived distribution of D_j, j = 1, 2; D_{j(n)} has p finite moments.  Define

    Q(s) = 1 − F̄*(s) = μ̄_1 s − (μ̄_2/2!)s² + ⋯ − (μ̄_n/n!)(−s)^n − (μ̄_{n+1}/(n+1)!)(−s)^{n+1} F̄*_{(n+1)}(s).        (5.2.11)

We can rewrite D_j*(s) as

    D_j*(s) = 1 + Λ_{j1}Q + ⋯ + Λ_{jn}Q^n + R_{jn}*(s),        (5.2.12)

where the Λ_{jk}'s are chosen to make the coefficient of s^k in Λ_{j1}Q + ⋯ + Λ_{jn}Q^n equal to the coefficient v_{jk}(−1)^k/k! of s^k in D_j*(s), 1 ≤ k ≤ n.  Let

    L_j*(s) = 1 + Λ_{j1}Q + ⋯ + Λ_{jn}Q^n,   j = 1, 2.

Lemma 5.7

L_j(t) is a linear combination of distribution functions on [0,∞) having n+p+1 finite moments.
Proof:

    L_j*(s) = 1 + Σ_{k=1}^n Λ_{jk} Q^k
            = 1 + Σ_{k=1}^n Λ_{jk} [1−F̄*(s)]^k
            = 1 + Σ_{k=1}^n Λ_{jk} + Σ_{k=1}^n Λ_{jk} Σ_{ℓ=1}^k (k choose ℓ)(−1)^ℓ F̄*(s)^ℓ.

Since F̄*(s)^ℓ is the transform of the ℓ-fold convolution F̄^{(ℓ)}, inverting gives

    L_j(t) = [1 + Σ_{k=1}^n Λ_{jk}] U(t) + Σ_{k=1}^n Λ_{jk} Σ_{ℓ=1}^k (k choose ℓ)(−1)^ℓ F̄^{(ℓ)}(t),        (5.2.13)

where U(t) is the unit step at the origin.  We remark that the moments of F̄^{(ℓ)} of order n+p+1 are finite for ℓ ≥ 1 since μ̄_{n+p+1} < ∞.  Thus L_j(t) is a linear combination of distribution functions on [0,∞) having n+p+1 finite moments.  □

As a result of the last lemma,

    L_j*(s) = 1 + a_{j1}s + ⋯ + a_{jn}s^n + a_{j,n+1} s^{n+1} L*_{j(n+1)}(s),        (5.2.14)

where L_{j(n+1)}(t) is a linear combination of distributions having p finite moments.  But from the definition of the Λ_{jk}'s we must have a_{jk} = (−1)^k v_{jk}/k!, 1 ≤ k ≤ n.
Thus

    L_j*(s) = 1 − v_{j1}s + ⋯ + (v_{jn}/n!)(−s)^n + a_{j,n+1} s^{n+1} L*_{j(n+1)}(s).        (5.2.15)

From (5.2.10) and (5.2.15) we conclude

    R_{jn}*(s) = D_j*(s) − L_j*(s)
               = (v_{jn}/n!)(−s)^n [D*_{j(n)}(s) − 1] − a_{j,n+1} s^{n+1} L*_{j(n+1)}(s).        (5.2.16)

Now define R*(s) by

    s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]) = A_0 + A_1Q(s) + ⋯ + A_nQ(s)^n + R*(s).        (5.2.17)

By equations (5.2.9) and (5.2.12), R*(s) is a linear combination of the R_{jn}*(s).

Lemma E (Smith 1959)

    Φ̄*_n(s) = n! Q(s)^{-n}.

Lemma 5.8

If μ̄_{n+p+1} < ∞, p ≥ 0, and F ∈ C, then

    s[w−F̄*(s)] Φ̄*_n(s) / ((s−σ)[1−F̄*(s)])
        = Σ_{k=0}^n n_{(k)} A_k Φ̄*_{n-k}(s) + n! R*(s) / [1−F̄*(s)]^n,

where n_{(k)} = n!/(n−k)!.

Proof:  The expression follows directly from expression (5.2.17) and Lemma E.  □
When we invert the transforms in Lemma 5.8, we will find Ee^{-σζ_t} ⋆ Φ̄_n(t) is a linear combination of Φ̄_n(t), …, Φ̄_1(t), a constant term, plus an error term.  We will prove the error term belongs to a particular class of functions.

Let m ≥ 0 and ξ(t) be a function on [0,∞).  We say ξ(t) belongs to the class B(m) if and only if it is of bounded total variation and ∫₀^∞ (1+t)^m |dξ(t)| < ∞.  It is clear that if ξ(t) ∈ B(m) and ξ(∞) = 0, we may write

    ξ(t) = Λ(t)/(1+t)^m,        (5.2.18)

where Λ(t) is of bounded variation, tends to zero as t approaches infinity, and, if m ≥ 1, Λ(t)/(1+t) belongs to the class L_1.
Lemma 5.9

If μ̄_{n+p+1} < ∞, p ≥ 0, and F ∈ C, then

    R*(s) / [1−F̄*(s)]^n = ξ*(s),

where ξ(t) ∈ B(p) and ξ(∞) = 0.

Proof:

It is sufficient to show that R_{jn}*(s)/[1−F̄*(s)]^n has the described properties.  Identify p with m and n with q in Theorem G.  We have

1)  F ∈ C and μ̄_{p+1} < ∞;

2)  R_{jn}(t) = D_j(t) − L_j(t) is a function of bounded variation on [0,∞) having n+p absolute moments, since D_j and L_j are such functions;

3)  R_{jn}*(s) = o(|s|^n) as |s| → 0, by equation (5.2.16).

Theorem G implies

    R_{jn}*(s) / [1−F̄*(s)]^n = ξ_j*(s)

with ξ_j(t) ∈ B(p), and ξ_j(∞) = 0 since

    lim_{|s|→0} R_{jn}*(s) / [1−F̄*(s)]^n = 0.  □
We now want to find A_k explicitly for k = 0, 1, 2, 3 and indicate its general form for higher values of k.  We learned in equation (5.2.17) that

    s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]) = A_0 + A_1Q(s) + ⋯ + A_nQ(s)^n + R*(s)

when μ̄_{n+p+1} < ∞.  First we rewrite

    s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]) = Σ_{j=0}^n c_j s^j + o(s^n).        (5.2.19)

We have derived c_j for j ≤ 4 by using properties of the derived distributions of F̄; in particular

    c_0 = (1−w)/(σμ̄_1),

while c_1, …, c_4 are similar but lengthier rational expressions in σ, w, and μ̄_1, …, μ̄_5.  There are no conceptual difficulties in deriving c_5, c_6, …, but the arithmetic involved becomes increasingly tedious.

Notice that c_k is obtained from the first k derivatives of the transform at s = 0, and hence c_k is a rational function of the first k+1 moments of F̄, σ, and w.  That is, c_k = γ̄_{k+1}, where γ̄_{k+1} represents any rational function of μ̄_1, μ̄_2, …, μ̄_{k+1} (together with σ and w).

Now to find the A's, we equate coefficients of powers of s in the equation

    Σ_{j=0}^n A_j Q(s)^j + o(s^n) = Σ_{j=0}^n c_j s^j + o(s^n).        (5.2.20)
We find

    A_0 = (1−w)/(σμ̄_1),

    A_1 = (1−w)μ̄_2/(2σμ̄_1³) + (1−w)/(σ²μ̄_1²) − 1/(σμ̄_1),

and correspondingly lengthier expressions for A_2 and A_3 involving μ̄_3, μ̄_4 and higher powers of σ and μ̄_1.  In general, A_k = γ̄_{k+1}, since c_k = γ̄_{k+1} and the coefficient of s^k in

    Q(s)^k = [μ̄_1 s F̄*_{(1)}(s)]^k

is of the form γ̄_k.
Lemma F (Smith 1959)

If μ̄_{n+p+1} < ∞, p ≥ 0, and F ∈ C, then

    Φ̄_n(t) = γ̄_1 t^n + γ̄_2 t^{n-1} + ⋯ + γ̄_{n+1} + Λ(t)/(1+t)^p,

where Λ(t) is of bounded total variation, is o(1) as t → ∞, satisfies the condition

    Λ(t) − Λ(t−a) = o(t^{-1})   as t → ∞

for every fixed a, and, when p ≥ 1, has the additional property that Λ(t)/(1+t) ∈ L_1.
Theorem 5.2

If μ̄_{n+p+1} < ∞, p ≥ 0, and F ∈ C, then

    E[(N(t)+1)(N(t)+2)⋯(N(t)+n) | A(t)=1] = γ̄_1 t^n + γ̄_2 t^{n-1} + ⋯ + γ̄_{n+1} + o(t^{-p}).

Proof:

    E[(N(t)+1)⋯(N(t)+n) | A(t)=1] = Φ_{nc}(t) = [Ee^{-σζ_t} ⋆ Φ̄_n](t) / Ee^{-σζ_t}

by Lemmas 5.5, 5.8, and 5.9.  Define ℓ_k = (σμ̄_1/(1−w)) A_k = A_k/A_0.  Then

    Φ_{nc}(t) = Σ_{k=0}^n n_{(k)} ℓ_k Φ̄_{n-k}(t) + o(t^{-p})        (5.2.21)
              = γ̄_1 t^n + γ̄_2 t^{n-1} + ⋯ + γ̄_{n+1} + o(t^{-p})

by Lemma F.  □
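In the defective-exponential case used earlier (an illustrative example, not from the text), N(t) given A(t)=1 is Poisson with mean m = λwt, so the second conditional ψ-moment is exactly E[(N+1)(N+2) | A(t)=1] = m² + 4m + 2, a quadratic in t with leading coefficient (λw)² = 1/μ̄_1², as Theorem 5.2 predicts.  A short deterministic check, summing the conditional distribution P{N(t)=n, A(t)=1} ∝ w^{n+1}(λt)^n/n! directly:

```python
def second_psi_moment(lam, w, t, nmax=200):
    """E[(N(t)+1)(N(t)+2) | A(t)=1] for defective-exponential lifetimes,
    computed from the weights w^(n+1) (lam t)^n / n! via a stable recursion."""
    wt = w                       # n = 0 term: w^1 (lam t)^0 / 0!
    total = 0.0
    acc = 0.0
    for n in range(nmax):
        total += wt
        acc += (n + 1) * (n + 2) * wt
        wt *= w * lam * t / (n + 1)   # next weight, avoiding overflow
    return acc / total

lam, w, t = 1.0, 0.6, 8.0
m = lam * w * t
val = second_psi_moment(lam, w, t)
# val agrees with m**2 + 4*m + 2 = 44.24 to floating-point accuracy
```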
5.3  THE CONDITIONAL CUMULANTS OF N(t)

In the last section we learned that when μ̄_{n+p+1} < ∞, p ≥ 0, the nth conditional ψ-moment of N(t),

    Φ_{nc}(t) = E[(N(t)+1)⋯(N(t)+n) | A(t)=1],

is asymptotically an nth degree polynomial in t.  Let

    Ψ_t^c(ζ) = E[ 1/(1−ζ)^{N(t)+1} | A(t)=1 ]

be the conditional ψ-moment generating function.  When μ̄_{n+p+1} < ∞, we can write

    Ψ_t^c(ζ) = 1 + Σ_{j=1}^n Φ_{jc}(t) ζ^j/j! + o(ζ^n)   as ζ → 0        (5.3.1)

and

    log Ψ_t^c(ζ) = Σ_{j=1}^n ψ_{jc}(t) ζ^j/j! + o(ζ^n)   as ζ → 0.        (5.3.2)

The coefficient ψ_{nc}(t) is the nth conditional ψ-cumulant of N(t); discovering its asymptotic form is our next task.  We have that ψ_{nc}(t) is the coefficient of (n!)^{-1}ζ^n in

    log{ 1 + Σ_{j=1}^n Φ_{jc}(t) ζ^j/j! },
which, by equation (5.2.21), is

    log{ 1 + Σ_{j=1}^n (ζ^j/j!) Σ_{k=0}^j j_{(k)} ℓ_k Φ̄_{j-k}(t) + Σ_{j=1}^n ζ^j r_j(t)/(1+t)^{n+p-j} },        (5.3.3)

where r_j(t) → 0 as t → ∞.  Let

    c_j(t) = Σ_{k=0}^j j_{(k)} ℓ_k Φ̄_{j-k}(t).

From the Taylor expansion for log(1+z), we know ψ_{nc}(t) is the coefficient of (n!)^{-1}ζ^n in

    { Σ_{j=1}^n c_j(t) ζ^j/j! + Σ_{j=1}^n r_j(t) ζ^j/(1+t)^{n+p-j} }
      − (1/2){ ⋯ }² + ⋯ + (−1)^{n+1}(1/n){ ⋯ }^n        (5.3.4)

    = { the pure powers of Σ_{j=1}^n c_j(t) ζ^j/j! }
      + terms involving mixtures of powers of c_j(t)ζ^j/j! and Σ_{j=1}^n r_j(t)ζ^j/(1+t)^{n+p-j}.        (5.3.5)

We want to show that the coefficient of ζ^n in the mixture of powers can be represented by r(t)/(1+t)^p, with r(t) → 0 as t → ∞.  A typical summand in the mixture is

    { Σ_{j=1}^n c_j(t) ζ^j/j! }^m { Σ_{k=1}^n r_k(t) ζ^k/(1+t)^{n+p-k} }^q,        (5.3.6)

where m ≥ 1, q ≥ 1, m + q ≤ n.  We now bound the absolute values of the coefficients of ζ^j and ζ^k individually.  We know that |c_j(t)| ≤ M t^j
and |r_k(t)| ≤ r(t), where r(t) = o(1) as t → ∞.  Thus the coefficient of (n!)^{-1}ζ^n in (5.3.6) is smaller in absolute value than its coefficient in

    M^m r(t)^q { Σ_{j=1}^n (tζ)^j }^m { (1+t)^{-(n+p)} Σ_{k=1}^n (ζ(1+t))^k }^q.        (5.3.7)

The coefficient of (n!)^{-1}ζ^n in (5.3.7) is at most a constant multiple of r(t)^q t^n/(1+t)^{n+p}, which is of the required form r̃(t)/(1+t)^p with r̃(t) → 0.  Therefore ψ_{nc}(t) is the coefficient of (n!)^{-1}ζ^n in

    Σ_{j=1}^n c_j(t) ζ^j/j! − (1/2){ Σ_{j=1}^n c_j(t) ζ^j/j! }² + ⋯ + (−1)^{n+1}(1/n){ Σ_{j=1}^n c_j(t) ζ^j/j! }^n        (5.3.8)

plus a term of the form o(t^{-p}).  But we recognize the coefficient of (n!)^{-1}ζ^n in (5.3.8) to be identical to its coefficient in

    log{ 1 + Σ_{j=1}^n c_j(t) ζ^j/j! }
        = log{ 1 + Σ_{j=1}^n (ζ^j/j!) Σ_{k=0}^j j_{(k)} ℓ_k Φ̄_{j-k}(t) }
        = log{ 1 + Σ_{j=1}^n ζ^j Σ_{k=0}^j ℓ_k Φ̄_{j-k}(t)/(j−k)! }.        (5.3.9)
Recall that ℓ_0 = (σμ̄_1/(1−w)) A_0 = 1.  Thus expression (5.3.9) is simply

    log{ Σ_{j=0}^n ζ^j Σ_{k=0}^j ℓ_k Φ̄_{j-k}(t)/(j−k)! }.        (5.3.10)

Since Σ_{k=0}^j ℓ_k Φ̄_{j-k}(t)/(j−k)! is a term in the convolution sequence of {ℓ_k}_{k=0}^n and {Φ̄_k(t)/k!}_{k=0}^n, the coefficient we seek is the coefficient of (n!)^{-1}ζ^n in

    log{ Σ_{k=0}^n ℓ_k ζ^k } + log{ Σ_{k=0}^n Φ̄_k(t) ζ^k/k! }.        (5.3.11)

We recognize

    Σ_{k=0}^n Φ̄_k(t) ζ^k/k! = 1 + Σ_{k=1}^n Φ̄_k(t) ζ^k/k!

as the first n+1 terms in Ψ̄_t(ζ), the usual ψ-moment generating function.  Hence

    ψ_{nc}(t) = η_n + ψ̄_n(t) + o(t^{-p}),        (5.3.12)

where η_n is the coefficient of (n!)^{-1}ζ^n in

    log{ Σ_{j=0}^n ℓ_j ζ^j }        (5.3.13)

and ψ̄_n(t) is the nth ψ-cumulant of a proper renewal process whose lifetimes have distribution F̄.  We can conclude η_n = γ̄_{n+1} from (5.3.13).
Theorem 5.3

If n is a positive integer, p ≥ 0, F ∈ C, and μ̄_{n+p+1} < ∞, then the nth conditional cumulant of N(t) is

    k_{nc}(t) = k̄_n(t) + ρ_n + o(t^{-p}) = γ̄_n t + γ̄_{n+1} + o(t^{-p}).

The constant ρ_n is

    ρ_n = η_1   if n = 1;
    ρ_n = Σ_{j=0}^{n-1} (−1)^j t_{n,n-j} η_{n-j}   if n ≥ 2.

Proof:

We know that

    ψ_{nc}(t) = η_n + ψ̄_n(t) + o(t^{-p}).        (5.3.14)

The relationship cited in (5.2.4) between ψ-cumulants and conventional cumulants also holds between their conditional counterparts.  That is,

    k_{nc}(t) = Σ_{j=0}^{n-1} (−1)^j t_{n,n-j} ψ_{(n-j)c}(t),   n ≥ 2.

Hence

    k_{1c}(t) = {ψ̄_1(t) − 1} + η_1 + o(t^{-p}) = k̄_1(t) + η_1 + o(t^{-p})        (5.3.15)

and, for n ≥ 2,

    k_{nc}(t) = Σ_{j=0}^{n-1} (−1)^j t_{n,n-j} ψ̄_{n-j}(t) + Σ_{j=0}^{n-1} (−1)^j t_{n,n-j} η_{n-j} + o(t^{-p})
              = k̄_n(t) + ρ_n + o(t^{-p}).        (5.3.16)

Finally, we appeal to Smith's 1959 theorem on the cumulants of renewal processes, which was quoted in Chapter 1.  It ensures k̄_n(t) = γ̄_n t + γ̄_{n+1} + o(t^{-p}) given our hypotheses.  □
In order to derive the correction factor ρ_n of Theorem 5.3, we must find η_n, the coefficient of (n!)^{-1}ζ^n in

    log{ Σ_{j=0}^n ℓ_j ζ^j }.

By definition, ℓ_j = (σμ̄_1/(1−w)) A_j, and thus

    ℓ_0 = 1,
    ℓ_1 = μ̄_2/(2μ̄_1²) + 1/(σμ̄_1) − 1/(1−w),

with ℓ_2 and ℓ_3 given by similar but lengthier rational expressions in σ, w, and μ̄_1, …, μ̄_4; in general, ℓ_j = γ̄_{j+1}.  Now

    log{ Σ_{j=0}^n ℓ_j ζ^j } = log{ 1 + Σ_{j=1}^n ℓ_j ζ^j }
        = Σ_{j=1}^n ℓ_j ζ^j − (1/2){ Σ_{j=1}^n ℓ_j ζ^j }² + (1/3){ Σ_{j=1}^n ℓ_j ζ^j }³ − ⋯.

Hence

    η_1 = ℓ_1,
    η_2 = 2ℓ_2 − ℓ_1²,
    η_3 = 6ℓ_3 − 6ℓ_1ℓ_2 + 2ℓ_1³,

and, in general, η_n = γ̄_{n+1}.  Using the previously derived values of the ℓ's, we obtain explicit expressions for η_1, η_2, and η_3, each a rational function of σ, w, and the moments μ̄_j.
Corollary 5.1

If μ̄_{2+p} < ∞, p ≥ 0, and F ∈ C, then

    E[N(t) | A(t)=1] = t/μ̄_1 + μ̄_2/μ̄_1² + 1/(σμ̄_1) − (2−w)/(1−w) + o(t^{-p}).

Proof:

    k̄_1(t) = H̄(t) = t/μ̄_1 + μ̄_2/(2μ̄_1²) − 1 + o(t^{-p})

by Smith (1959), and k_{1c}(t) = k̄_1(t) + η_1 + o(t^{-p}) with

    η_1 = ℓ_1 = μ̄_2/(2μ̄_1²) + 1/(σμ̄_1) − 1/(1−w);

thus the result follows.  Note that this formula agrees with the one derived in section 5.1 by different methods.  □
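The constants in Corollary 5.1 can be checked against the defective-exponential case, where the conditional mean is exactly t/μ̄_1 = λwt, so the constant term must vanish.  A short exact-arithmetic check (λ = 1 and w = 3/5 are arbitrary illustrative values):

```python
from fractions import Fraction

lam = Fraction(1)
w = Fraction(3, 5)

# Defective-exponential ingredients: F(x) = w(1 - e^{-lam x}) gives
# sigma = lam(1-w), and the tilted distribution F-bar is Exp(lam w).
sigma = lam * (1 - w)
mu1 = 1 / (lam * w)          # first moment of F-bar
mu2 = 2 / (lam * w) ** 2     # second moment of F-bar

constant = mu2 / mu1 ** 2 + 1 / (sigma * mu1) - (2 - w) / (1 - w)
# constant == 0: the polynomial part of E[N(t)|A(t)=1] reduces to lam*w*t
```

The cancellation (here 2 + 3/2 − 7/2 = 0) is exact for any 0 < w < 1 in this family, consistent with N(t) being conditionally Poisson(λwt).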
Corollary 5.2

If μ̄_{3+p} < ∞, p ≥ 0, and F ∈ C, then

    Var[N(t) | A(t)=1] = k̄_2(t) + ρ_2 + o(t^{-p}).

Proof:

    ρ_2 = η_2 − η_1,

and substituting the expressions for η_1 and η_2 yields a rational function of σ, w, μ̄_1, μ̄_2, μ̄_3 whose terms include 1/(σ²μ̄_1²) and −w/(1−w)²; by Smith (1959), k̄_2(t) is asymptotically linear in t.  The result follows and agrees with the independently derived equation in (5.1.28).  □

If we are interested in higher cumulants we can proceed in this same way.  For example,

    k_{3c}(t) = k̄_3(t) + ρ_3 + o(t^{-p})        (5.3.17)

whenever μ̄_{4+p} < ∞ and F ∈ C.  The form for k̄_3(t) can be found by the methods outlined in Smith (1959), and the correction factor ρ_3 = η_3 − 3η_2 + η_1 has already been derived.  We will confirm this last formula in Chapter 6.
CHAPTER 6

THE CONDITIONAL CUMULANTS OF CUMULATIVE PROCESSES

6.1  INTRODUCTION

We want to investigate E[W(t)^n | A(t)=1] and the conditional cumulants of the transient cumulative process W(t).  We know that

    E[W(t)^n | A(t)=1] = Ee^{-σζ_t}W(t)^n / Ee^{-σζ_t}.

In the last chapter we were concerned with the special (Type B) cumulative process

    W(t) = N(t) = Σ_{j=1}^{N(t)} 1

and found that in this case k_{nc}(t) = γ̄_n t + γ̄_{n+1} + o(t^{-p}).  The fact that Ee^{-σζ_t}W(t)^n is the Stieltjes convolution of two familiar functions explains the success of our Laplace transform attack.  In this chapter we shall restrict ourselves to general Type B processes W(t), for we shall find that the same convolution structure is present whenever E|Y_1|^n < ∞, and the Laplace transform is again effective.

Suppose W(t) is built from the iid sequence (X_1,Y_1), (X_2,Y_2), …, where G(x,y) is the joint defective distribution of (X_1,Y_1).  Let F(x) = G(x,∞) and assume F(∞) = w < 1.  As in the last two chapters, we assume there is some σ > 0 making

    F̄(x) = ∫₀^x e^{σu} dF(u)   and   Ḡ(x,y) = ∫₀^x ∫_{-∞}^y e^{σu} G(du,dv)

proper distributions.  Define

    Ḡ*(s,θ) = E e^{-sX_1+iθY_1},

the expectation taken with respect to Ḡ, and let

    μ̄_r = μ̄_{r0} = EX_1^r,   κ_s = μ̄_{0s} = EY_1^s,

with μ̄_{rs} denoting the corresponding product moments EX_1^rY_1^s, all with respect to Ḡ.
Lemma 6.1

If E|Y_1|^n < ∞, then

    Ee^{-σζ_t}W(t)^n = [Ee^{-σζ_t} ⋆ EW̄(t)^n](t)

whenever W(t) is a Type B cumulative process; here W̄(t) denotes the cumulative process built from the proper distribution Ḡ, and ⋆ is Stieltjes convolution.

Proof:

Throughout the proof, expectations are taken with respect to the proper distributions F̄ and Ḡ, and S_n = X_1 + ⋯ + X_n with S_0 = 0.  First,

    L⁰{Ee^{-σζ_t+iθW(t)}} = ∫₀^∞ e^{-st} Ee^{-σζ_t+iθW(t)} dt.

We can reverse the order of integration to find

    L⁰{Ee^{-σζ_t+iθW(t)}}
      = E Σ_{n=0}^∞ ∫_{S_n}^{S_{n+1}} e^{-(s-σ)t} e^{-σS_{n+1} + iθΣ_{j=1}^n Y_j} dt
      = (s−σ)^{-1} E Σ_{n=0}^∞ e^{-σS_{n+1} + iθΣ_{j=1}^n Y_j} e^{-(s-σ)S_n} (1 − e^{-(s-σ)X_{n+1}})
      = (s−σ)^{-1} E Σ_{n=0}^∞ e^{-sS_n + iθΣ_{j=1}^n Y_j} [e^{-σX_{n+1}} − e^{-sX_{n+1}}]
      = (s−σ)^{-1} Σ_{n=0}^∞ E[e^{-sS_n + iθΣ_{j=1}^n Y_j}] E[e^{-σX_{n+1}} − e^{-sX_{n+1}}],

since (X_n,Y_n) is independent of (X_m,Y_m) for n ≠ m,

      = (s−σ)^{-1} [F̄*(σ) − F̄*(s)] Σ_{n=0}^∞ Ḡ*(s,θ)^n
      = [w − F̄*(s)] / ((s−σ)[1 − Ḡ*(s,θ)]).        (6.1.1)

Next we consider

    L⁰{Ee^{iθW̄(t)}} = ∫₀^∞ e^{-st} E e^{iθΣ_{j=1}^{N̄(t)} Y_j} dt
      = E Σ_{n=0}^∞ e^{iθΣ_{j=1}^n Y_j} ∫_{S_n}^{S_{n+1}} e^{-st} dt
      = s^{-1} Σ_{n=0}^∞ E e^{-sS_n + iθΣ_{j=1}^n Y_j} E(1 − e^{-sX_{n+1}})
      = [1 − F̄*(s)] / (s[1 − Ḡ*(s,θ)]).        (6.1.2)

From Lemma 5.6 we recall

    L{Ee^{-σζ_t}} = s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]),

and therefore

    L⁰{Ee^{-σζ_t+iθW(t)}} = L{Ee^{-σζ_t}} L⁰{Ee^{iθW̄(t)}}.        (6.1.3)

If E|Y_1|^n < ∞, differentiating n times with respect to θ at θ = 0 yields

    L⁰{Ee^{-σζ_t}W(t)^n} = L{Ee^{-σζ_t}} L⁰{EW̄(t)^n}.        (6.1.4)

EW̄(t)^n is of bounded variation in every finite interval; hence its Laplace–Stieltjes transform exists and we have

    L⁰{Ee^{-σζ_t}W(t)^n} = L{Ee^{-σζ_t}} L⁰{EW̄(t)^n} = L⁰{[Ee^{-σζ_t} ⋆ EW̄(t)^n](t)}.        (6.1.5)

By the uniqueness theorem for Laplace transforms we conclude the lemma.  □
To study the properties of E[W(t)^n | A(t)=1], we study the Laplace transform of its numerator, which we have just found to be

    L{Ee^{-σζ_t}} L⁰{EW̄(t)^n}.

We examined L{Ee^{-σζ_t}} in the last chapter and thus focus our attention on L⁰{EW̄(t)^n}.  A special notation for W̄(t)^n will help us calculate its transform.  Write

    W̄(t) = Σ_{j=1}^∞ Z_j(t) Y_j,   where Z_j(t) = 1 if S_j ≤ t and 0 otherwise,

so that

    W̄(t)^n = [ Σ_{j=1}^∞ Z_j(t) Y_j ]^n.

Consider the particular case n = 3.  Since Z_j(t)Z_k(t) = Z_j(t) for j > k,

    W̄(t)³ = Σ_{j≥1} Z_j(t)Y_j³ + 3 Σ_{j>k≥1} Z_j(t)Y_j²Y_k + 3 Σ_{j>k≥1} Z_j(t)Y_jY_k²
            + 6 Σ_{j>k>ℓ≥1} Z_j(t)Y_jY_kY_ℓ.

In general,

    W̄(t)^n = Σ c(p_1,…,p_k) L_t(p_1,…,p_k),        (6.1.6)

where a typical summand is

    L_t(p_1,…,p_k) = Σ_{r_1>⋯>r_k≥1} Z_{r_1}(t) Y_{r_1}^{p_1} ⋯ Y_{r_k}^{p_k},

Σ_{j=1}^k p_j = n, and c is a constant depending upon (p_1,…,p_k).  This notation is adapted from notation Smith developed for Type A processes in his 1979 technical report.  Let

    S(p_1,…,p_k) = L⁰{E L_t(p_1,…,p_k)}.        (6.1.7)

Thus

    L⁰{EW̄(t)^n} = S(n) + nS(n−1,1) + ⋯ + n!S(1,…,1).        (6.1.8)

Consider the term S(p_1,…,p_k).  Let q = max(p_1,…,p_k) and assume E|Y_1|^q < ∞.
We have

    S(p_1,…,p_k) = ∫₀^∞ e^{-st} E Σ_{r_1>⋯>r_k≥1} Y_{r_1}^{p_1} ⋯ Y_{r_k}^{p_k} Z_{r_1}(t) dt
                 = E Σ_{r_1>⋯>r_k≥1} Y_{r_1}^{p_1} ⋯ Y_{r_k}^{p_k} ∫_{S_{r_1}}^∞ e^{-st} dt
                 = s^{-1} E Σ_{r_1>⋯>r_k≥1} Y_{r_1}^{p_1} ⋯ Y_{r_k}^{p_k} e^{-sS_{r_1}}.        (6.1.9)

We could interchange the order of integration in the course of these manipulations because, if we replace each term Y_{r_j}^{p_j} by its absolute value, the resulting k-fold sum is convergent.  Define

    C_p(s) = E Y_j^p e^{-sX_j},   j = 1, 2, ….

Summing over the indices r_k < ⋯ < r_1, each tuple with r_1 = r contributes C_{p_1}(s)⋯C_{p_k}(s) F̄*(s)^{r-k}, and there are (r−1 choose k−1) such tuples; since Σ_{r≥k} (r−1 choose k−1) F̄*(s)^{r-k} = [1−F̄*(s)]^{-k},

    S(p_1,…,p_k) = C_{p_1}(s) ⋯ C_{p_k}(s) / (s [1−F̄*(s)]^k).        (6.1.10)
Therefore L{Ee^{-σζ_t}W(t)^n} is composed of terms like

    s[w−F̄*(s)] C_{p_1}(s)⋯C_{p_k}(s) / ((s−σ)[1−F̄*(s)]^{k+1}).        (6.1.11)

If we assume E|Y_1|^q < ∞ and EX_1^{k+p}|Y_1|^q < ∞, where q = max(p_1,…,p_k), what are the properties of this term?  In Chapter 5 we found that when μ̄_{k+p+1} < ∞, the remainder R(t) appearing there is a function of bounded variation on [0,∞) having k+p absolute moments, and R*(s) = o(|s|^k) as |s| → 0.  If we write

    B*(s) = s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]),

then A_0 = B*(0) = γ̄_1, and for j ≥ 1, A_j is a rational function of the first j moments of B(x) and F̄(x).  As the jth moment of B(x) is a rational function of μ̄_1, …, μ̄_{j+1}, we have A_j = γ̄_{j+1}, j ≥ 0.
We now express C_{p_r}(s) in powers of Q.  First,

    C_{p_r}(s) = E Y_1^{p_r} e^{-sX_1} = ∫₀^∞ e^{-sx} ∫_{-∞}^∞ y^{p_r} Ḡ(dx,dy) = L{C_{p_r}(x)},        (6.1.12)

where C_{p_r}(x) = ∫₀^x ∫_{-∞}^∞ y^{p_r} Ḡ(du,dy).  The assumption EX_1^{k+p}|Y_1|^q < ∞ implies C_{p_r}(x) is a function of bounded variation on [0,∞) having k+p absolute moments.  Therefore Smith's Lemma D guarantees that C_{p_r}(s) admits an expansion in powers of s through order k whose remainder involves the transform of C_{p_r(k)}(x), a function of bounded variation on [0,∞) having p absolute moments.        (6.1.13)

By the same argument applied to

    B*(s) = s[w−F̄*(s)] / ((s−σ)[1−F̄*(s)]),

we can rewrite C_{p_r}(s) in powers of Q(s):

    C_{p_r}(s) = μ̄_{0p_r} + γ̄_{1p_r}Q + ⋯ + γ̄_{kp_r}Q^k + O*_{p_r}(s),        (6.1.14)

where O_{p_r}(t) is a function of bounded variation on [0,∞) having k+p absolute moments and O*_{p_r}(s) = o(|s|^k) as |s| → 0.  The coefficient γ̄_{jp_r} is a rational function of μ̄_{0p_r} and the first j moments of C_{p_r}(x).  That is, γ̄_{jp_r} is a rational function of κ_{p_r}, μ̄_1, …, μ̄_j, and μ̄_{1p_r}, …, μ̄_{jp_r}.

Expression (6.1.11) consists, in part, of the product C_{p_1}(s)⋯C_{p_k}(s).  Multiplying out the expansions (6.1.14), we obtain

    C_{p_1}(s)⋯C_{p_k}(s) = (a polynomial of degree k in Q(s)) + D_k*(s),        (6.1.15)

where, as usual, D_k(t) is a function of bounded variation on [0,∞) having k+p absolute moments and D_k*(s) = o(|s|^k) as |s| → 0.
Hence

    s[w−F̄*(s)] C_{p_1}(s)⋯C_{p_k}(s) / ((s−σ)[1−F̄*(s)])
      = [A_0 + A_1Q + ⋯ + A_kQ^k + R_k*(s)] [C_{p_1}(s)⋯C_{p_k}(s)]
      = δ_1 + δ_2Q + ⋯ + δ_{k+1}Q^k + R*(s),        (6.1.16)

where R(t) is a function of bounded variation on [0,∞) having k+p absolute moments.  In addition, R*(s) = o(|s|^k) as |s| → 0.  The coefficients δ_1, …, δ_{k+1} are rational functions of μ̄_1, …, μ̄_{k+1}, κ_1, …, κ_q, and μ̄_{rs} for r = 1,…,k and s = 1,…,q.
As in the last chapter, dividing (6.1.16) by Q(s)^k shows

    s[w−F̄*(s)] C_{p_1}(s)⋯C_{p_k}(s) / ((s−σ)[1−F̄*(s)]^{k+1})
        = Σ_{j=1}^{k+1} (δ_j/(k−j+1)!) Φ̄*_{k-j+1}(s) + ξ*(s),        (6.1.17)

where the function ξ(t) can be expressed as ξ(t) = Λ(t)/(1+t)^p, with Λ(t) of bounded variation, tending to zero as t approaches infinity, and, if p ≥ 1, Λ(t)/(1+t) belonging to the class L_1.

Thus far we have focussed our attention on just one of the summands in Ee^{-σζ_t}W(t)^n.  If we are to make statements similar to (6.1.17) about all the terms making up Ee^{-σζ_t}W(t)^n, we must broaden our assumptions.  Equation (6.1.17) depends on the suppositions:

1)  μ̄_{k+p+1} < ∞;
2)  E|Y_1|^q < ∞;
3)  EX_1^{k+p}|Y_1|^q < ∞.

We need 1, 2, and 3 to hold for all possible partitions (p_1,…,p_k) of n.  The largest possible value of q arises when k = 1 and q = p_1 = n.  Thus we must assume E|Y_1|^n < ∞ and μ̄_{n+p+1} < ∞.  As for the assumption EX_1^{k+p}|Y_1|^q < ∞, note that q ≤ n+1−k.  Hence we require

    EX_1^{r+p}|Y_1|^s < ∞   for r+s ≤ n+1, s ≤ n, and r ≤ n.
Lemma 6.2

Let n be a positive integer and p ≥ 0.  Suppose μ̄_{n+p+1} < ∞, E|Y_1|^n < ∞, and EX_1^{r+p}|Y_1|^s < ∞ for r+s ≤ n+1, s ≤ n, and r ≤ n.  Then

    Ee^{-σζ_t}W(t)^n = Σ_{k=1}^{n+1} δ_k Φ̄_{n+1-k}(t) + Λ(t)/(1+t)^p,

where Λ(t) is of bounded variation, tends to zero as t approaches infinity, and, if p ≥ 1, Λ(t)/(1+t) belongs to L_1.  The coefficients δ_1, …, δ_{n+1} are rational functions of μ̄_1, …, μ̄_{n+1}, κ_1, …, κ_n, and the product moments μ̄_{rs} for r+s ≤ n+1, s ≤ n, and r ≤ n.
Theorem 6.1

Suppose F ∈ C, n is a positive integer, and p ≥ 0.  If μ̄_{n+p+1} < ∞, E|Y_1|^n < ∞, and EX_1^{r+p}|Y_1|^s < ∞ for r+s ≤ n+1, s ≤ n, and r ≤ n, then E[W(t)^n | A(t)=1] is asymptotically an nth degree polynomial in t, with error o(t^{-p}).  The coefficients δ_1, δ_2, …, δ_{n+1} entering this polynomial are rational functions of μ̄_1, …, μ̄_{n+1}, κ_1, …, κ_n, and the product moments μ̄_{rs} for r+s ≤ n+1, s ≤ n, and r ≤ n.
Proof:

    E[W(t)^n | A(t)=1]
      = { Σ_{k=1}^{n+1} δ_k Φ̄_{n+1-k}(t) + Λ(t)/(1+t)^p } { σμ̄_1/(1−w) + o(t^{-n-p}) }        (6.1.18)

by Lemma 5.5 and Lemma 6.2.  Since F ∈ C and μ̄_{n+p+1} < ∞, the conditions of Lemma F are fulfilled.  Hence

    Φ̄_r(t) = γ̄_1 t^r + ⋯ + γ̄_{r+1} + o(t^{-n+r-p})        (6.1.19)

for 1 ≤ r ≤ n.  The result follows easily from (6.1.18) and (6.1.19).  □
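As with N(t), the defective-exponential case gives a quick numerical illustration.  If the lifetimes are infinite with probability 1−w and Exp(λ) otherwise, and each Y_j is drawn independently of X_j, then conditional on A(t)=1 the sum W(t) is a compound Poisson variable: E[W(t)|A(t)=1] = λwt·EY_1, a first-degree polynomial in t as Theorem 6.1 predicts, and Var[W(t)|A(t)=1] = λwt·EY_1².  A Monte Carlo sketch with illustrative parameters (λ = 1, w = 0.6, t = 8, Y_j uniform on (0,2); my own construction, not from the text):

```python
import random

def simulate_W(lam, w, t, trials, seed=2):
    """Sample W(t) = sum of Y_j over completed renewals, conditioned on
    the transient renewal process surviving past t (A(t)=1)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        s, wsum = 0.0, 0.0
        while True:
            if rng.random() >= w:            # infinite lifetime: path dies, discard
                break
            x = rng.expovariate(lam)
            if s + x > t:                    # survives past t; keep W(t)
                samples.append(wsum)
                break
            s += x
            wsum += rng.uniform(0.0, 2.0)    # Y_j, independent of X_j
    return samples

samples = simulate_W(1.0, 0.6, 8.0, trials=200_000)
mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)
# theory: mean ≈ lam*w*t*E[Y] = 4.8, variance ≈ lam*w*t*E[Y^2] = 6.4
```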
6.2  THE CONDITIONAL CUMULANTS OF W(t)

We have learned that under certain conditions on the moments and product moments of (X_1,Y_1), E[W(t)^n | A(t)=1] is asymptotically an nth degree polynomial in t.  We now turn from the conditional moments to the conditional cumulants of W(t).  Define

    M_{θc}(t) = E[e^{iθW(t)} | A(t)=1].        (6.2.1)

M_{θc}(t) is the conditional moment generating function of W(t).  If E|Y_1|^n < ∞,

    M_{θc}(t) = 1 + Σ_{j=1}^n m_{jc}(t) (iθ)^j/j! + o(θ^n)   as θ → 0,        (6.2.2)

where m_{jc}(t) = E[W(t)^j | A(t)=1].  Let

    K_{θc}(t) = log M_{θc}(t) = Σ_{j=1}^n k_{jc}(t) (iθ)^j/j! + o(θ^n)   as θ → 0.        (6.2.3)

K_{θc}(t) is the conditional cumulant generating function of W(t).  The nth conditional cumulant is

    k_{nc}(t) = Σ C(p_1,…,p_k) m_{p_1 c}(t) ⋯ m_{p_k c}(t),        (6.2.4)

where the summation is over all partitions (p_1,…,p_k) of n and C(p_1,…,p_k) is a constant.  Under the conditions of Theorem 6.1, each m_{p_r c}(t) is a p_rth degree polynomial in t plus o(t^{-p}), for p_r = 1, 2, …, n.  Hence

    k_{nc}(t) = δ_1 t^n + δ_2 t^{n-1} + ⋯ + δ_n t + δ_{n+1} + o(t^{-p}).        (6.2.5)

As in Theorem 6.1, the coefficients δ_1, …, δ_{n+1} are rational functions of certain moments and product moments of (X_1,Y_1); they do not depend on the particular form of the distribution G(x,y).  While (6.2.5) is valid whenever μ̄_{n+p+1} < ∞, E|Y_1|^n < ∞, and EX_1^{r+p}|Y_1|^s < ∞ for r+s ≤ n+1, s ≤ n, and r ≤ n, we can make the extra assumption that EX_1^{n+1}|Y_1|^{n+1} < ∞.  Then, if we can find the values of δ_1, …, δ_{n+1}
in this restricted case, these will also be the coefficients' values in the more general case.  Assuming that EX_1^{n+1}|Y_1|^{n+1} < ∞, we can say that δ_1, …, δ_{n+1} are rational functions of μ̄_{rs} for r and s in {0,1,…,n+1}.  As rational functions of a point in [(n+2)²−1]-dimensional Euclidean space, the δ's are determined uniquely by their values in any sphere in this space.  For every positive integer n, we shall find a sphere in which δ_1, …, δ_{n-1} vanish identically.  This will tell us they are also 0 outside the sphere and will allow us to conclude the nth conditional cumulant is

    k_{nc}(t) = δ_n t + δ_{n+1} + o(t^{-p})

whenever the conditions of Theorem 6.1 are fulfilled.  We will also find the functional forms of δ_n and δ_{n+1}.

Our arguments are based on the properties of M_{θc}(t).  We know

    M_{θc}(t) = Ee^{-σζ_t+iθW(t)} / Ee^{-σζ_t}.

Write N_θ(t) = Ee^{-σζ_t+iθW(t)} and consider

    N_θ⁰(s) = [w−F̄*(s)] / ((s−σ)[1−Ḡ*(s,θ)])        (6.2.6)

by equation (6.1.1).  Set m = (n+2)² − 1.  In his 1979 technical report on the cumulants of proper cumulative processes, Smith discussed sequences {(X_j,Y_j)} "generated by a λ-model."  For such a model, the proper
distribution G(x,y) of (X_1,Y_1) depends solely on the positive parameters λ_1, …, λ_m; also, moments and product moments of all orders exist.  He showed that there is a sphere in the m-dimensional space of vectors having coordinates

    (μ̄_{10}, …, μ̄_{n+1,0}, …, μ̄_{n+1,n+1})

such that these product moments are generated by a λ-model in which λ_1 > λ_2 > ⋯ > λ_m.  In these circumstances, he proved that the equation

    Ḡ*(z,θ) = 1        (6.2.7)

has m distinct roots z_1(θ), …, z_m(θ) which are analytic functions of the complex variable θ in some neighborhood of 0.  Smith found

    z_1(0) = 0,

and, for θ in a sufficiently small neighborhood of 0,

    R{z_j(θ)} < −λ_1^{-1},   j = 2, 3, …, m.        (6.2.8)

Thus from Smith's results, we know there is an m-dimensional sphere in the space of vectors (μ̄_{10}, …, μ̄_{n+1,0}, …, μ̄_{n+1,n+1}) such that these moments are generated by an appropriate λ-model having the properties already outlined.  Let z_1(θ), …, z_m(θ) be the roots of

    Ḡ*(z,θ) = 1.        (6.2.9)

In this case we can expand N_θ⁰(z) in partial fractions when θ is in a suitable neighborhood of 0:

    N_θ⁰(z) = [w−F̄*(z)] / ((z−σ)[1−Ḡ*(z,θ)]) = Σ_{j=1}^m Z_j(θ)/(z−z_j(θ)),        (6.2.10)
- 132 -
where
w-F-* (z.(6))
=
Z. (6)
----"J"--
J
(z.(6)-0)
J
n
IT
k=l
k#j
_
(6.2.11)
(Z.(6)-Zk(6))
J
Note that for 6 in a sufficiently small neighborhood of 0,
I
R{Zj(6)} <
for j = 1,"', m and Zj(6) # zk(6) for j # k.
We can easily invert N~(Z) to find
Z. (6) t
n
= I
Na(t)
Z.(6)e J
But since R{z.(6)}
-A -1 for j=2,3,···,m, we can conclude that
<
J
1
Zl(6)t
N6 (t) = Zl (6)e
where the
a
(6.2.12)
J
j=l
-tA -1
+ O(e
1 )
(6.2.13)
term can be shown to be uniform for all sufficiently
small 161
Therefore
-tA -1
-or;
= zl(6)t
+
log Zl (6) - log Ee
t
+
O(e
1
).
(6.2.14)
This shows that
-tA -1
k
nc
= An t
(t)
+
Bn
+
O(e
1
)
whenever G(x,y) is based on an appropriate A-model.
(6.2.15)
The constant
An is from the expansion
I
j=l
00
zl(6)
=
A
j
(ia) j
j!
(6.2.16)
e-
- 133 -
while B is from
n
00
log Zl (8) = B +
O
(i6)j
j!
J
I
B.
j=l
(6.2.17)
Theorem 6.2

Suppose F ∈ C and n is a positive integer.  If μ̄_{n+p+1} < ∞ and EX_1^{r+p}|Y_1|^s < ∞ for r+s ≤ n+1, r ≤ n, s ≤ n, and p ≥ 0, then

    k_{nc}(t) = A_n t + B_n + o(t^{-p}).

A_n and B_n are rational functions of μ̄_1, …, μ̄_{n+1}, κ_1, …, κ_n, and μ̄_{rs} for r+s ≤ n+1, s ≤ n, and r ≤ n.

Proof:

From equation (6.2.5) we have k_{nc}(t) = δ_1t^n + ⋯ + δ_{n+1} + o(t^{-p}), and we know the δ's are determined by their values in any sphere.  Thus it is enough to consider the δ's resulting from a sphere whose points are generated by a λ-model.  But in such a sphere, δ_1, …, δ_{n-1} vanish identically and the values of δ_n and δ_{n+1} are given by equations (6.2.16) and (6.2.17).  This proves the theorem.  □

6.3  THE DERIVATION OF A_n AND B_n
Smith found that if W(t) is a proper Type B cumulative process generated by a λ-model, then k_n(t), the nth cumulant of W(t), is

    k_n(t) = A_n t + B_n + O(e^{-tλ_1^{-1}}),

where the generating function for the A_n's is precisely (6.2.16).  Thus to find A_n we need only refer to Smith (1979) and assume the joint distribution of (X_1,Y_1) is Ḡ(x,y).  However, Smith assumed EY_1 = 0 when deriving A_n.  As we wish to avoid this assumption, we rederive these coefficients.  Since z_1(θ) is a root of the equation Ḡ*(z,θ) = 1, we have

    0 = Σ_{n=1}^∞ E[iθY − z_1(θ)X]^n / n!.        (6.3.1)

Using the expansion z_1(θ) = Σ_{j=1}^∞ A_j(iθ)^j/j!, we find

    0 = (iθ)κ_1 − μ̄_1 Σ_{j=1}^∞ A_j(iθ)^j/j! + (iθ)²κ_2/2
        − (iθ)μ̄_{11} Σ_{j=1}^∞ A_j(iθ)^j/j! + (μ̄_2/2)[ Σ_{j=1}^∞ A_j(iθ)^j/j! ]² + ⋯.

Since the coefficient of (iθ)^j/j! is identically 0 in this expression, we conclude

    A_1 = κ_1/μ̄_1,
    A_2 = κ_2/μ̄_1 − 2μ̄_{11}κ_1/μ̄_1² + μ̄_2κ_1²/μ̄_1³,

and similarly for A_3.  Values of A_n for n ≥ 4 can be calculated by this same method.
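The expansion (6.2.16) can be illustrated in a toy case of my own construction (not from the text): take X̄ ~ Exp(β) and Y ≡ y₀ constant.  Then Ḡ*(z,θ) = e^{iθy₀}β/(β+z), and the root of Ḡ*(z,θ) = 1 is z₁(θ) = β(e^{iθy₀} − 1) in closed form, so A₁ = βy₀ = κ₁/μ̄₁ and A₂ = βy₀² = κ₂/μ̄₁, in agreement with the formulas above (here μ̄₁₁ = μ̄₁κ₁ and μ̄₂ = 2μ̄₁², so the last two terms of A₂ cancel).  A numerical finite-difference check:

```python
import cmath

beta, y0 = 2.0, 1.5          # illustrative parameters
mu1 = 1.0 / beta             # mean of X-bar ~ Exp(beta)

def z1(theta):
    """Explicit root of G*(z, theta) = exp(i theta y0) * beta/(beta+z) = 1."""
    return beta * (cmath.exp(1j * theta * y0) - 1)

# Confirm z1 really solves the root equation at a sample theta.
check = cmath.exp(1j * 0.3 * y0) * beta / (beta + z1(0.3))   # equals 1 exactly

# Coefficients of z1(theta) = A1*(i theta) + A2*(i theta)^2/2 + ... by central differences.
h = 1e-3
A1 = ((z1(h) - z1(-h)) / (2j * h)).real
A2 = ((z1(h) - 2 * z1(0.0) + z1(-h)) / (1j * h) ** 2).real
# expected: A1 = beta*y0 = 3.0 and A2 = beta*y0**2 = 4.5
```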
To find B_n we must solve the equation

    log Z_1(θ) = B_0 + Σ_{j=1}^∞ B_j (iθ)^j/j!.        (6.3.2)

From equation (6.2.10),

    Z_1(θ) = lim_{z→z_1(θ)} [z−z_1(θ)] N_θ⁰(z)
           = lim_{z→z_1(θ)} [w−F̄*(z)][z−z_1(θ)] / ((z−σ)[1−Ḡ*(z,θ)])
           = ( [w−F̄*(z_1)] / (z_1−σ) ) · ( 1 / (−K(θ)) ),        (6.3.3)

where K(θ) = ∂Ḡ*(s,θ)/∂s |_{s=z_1(θ)} is as in equation (4.13) of Smith (1979).  We expand the two factors in (6.3.3) separately.  First,

    [w−F̄*(z_1)] / (z_1−σ) = ((1−w)/σ) · ( (F̄*(z_1)−w)/(1−w) ) · ( σ/(σ−z_1) ).        (6.3.4)

As the generating function for the B_n's is based on a λ-model, the distribution function F̄ has moments of all orders, so F̄*(z_1(θ)) may be expanded in powers of iθ by substituting the series for z_1(θ).
Therefore

    (F̄*(z_1(θ))−w)/(1−w) = Σ_{j=0}^∞ α_j (iθ)^j/j!,

where α_0 = 1 and each α_j is a rational function of w, the moments μ̄_i, and the coefficients A_1, …, A_j; in particular

    α_1 = −μ̄_1A_1/(1−w),
    α_2 = −μ̄_1A_2/(1−w) + μ̄_2A_1²/(1−w),

and so on (α_3 and α_4 involve μ̄_3, μ̄_4 and A_3, A_4).  It follows that

    log[ (F̄*(z_1(θ))−w)/(1−w) ] = Σ_{j=1}^∞ α_j' (iθ)^j/j!,        (6.3.6)

with

    α_1' = α_1 = −μ̄_1A_1/(1−w),
    α_2' = α_2 − α_1² = −μ̄_1A_2/(1−w) + μ̄_2A_1²/(1−w) − μ̄_1²A_1²/(1−w)²,

and similarly for α_3' and α_4'.
Cl.
4'
3J..I- 22A- 14
(1-w)
2
Now we expand \sigma/(\sigma - \bar z_1) in powers of (i\theta):

    \frac{\sigma}{\sigma - \bar z_1} = \sum_{k=0}^\infty \Big( \frac{\bar z_1}{\sigma} \Big)^k
    = 1 + \frac{1}{\sigma} \sum_{j=1}^\infty A_j \frac{(i\theta)^j}{j!}
      + \frac{1}{\sigma^2} \big[ A_1^2 (i\theta)^2 + A_1 A_2 (i\theta)^3 + \cdots \big]
      + \frac{1}{\sigma^3} \big[ A_1^3 (i\theta)^3 + \cdots \big] + \cdots
    = \sum_{j=0}^\infty \beta_j \frac{(i\theta)^j}{j!} ,          (6.3.7)

where \beta_0 = 1.
The remaining coefficients are

    \beta_1 = \frac{A_1}{\sigma} , \qquad
    \beta_2 = \frac{A_2}{\sigma} + \frac{2A_1^2}{\sigma^2} , \qquad
    \beta_3 = \frac{A_3}{\sigma} + \frac{6A_1 A_2}{\sigma^2} + \frac{6A_1^3}{\sigma^3} ,

    \beta_4 = \frac{A_4}{\sigma} + \frac{8A_1 A_3}{\sigma^2} + \frac{6A_2^2}{\sigma^2} + \frac{36A_1^2 A_2}{\sigma^3} + \frac{24A_1^4}{\sigma^4} .

Since \sigma/(\sigma - \bar z_1) = \sum_{j=0}^\infty \beta_j (i\theta)^j / j!,

    \log \frac{\sigma}{\sigma - \bar z_1} = \sum_{j=1}^\infty \beta_j' \frac{(i\theta)^j}{j!} ,

where

    \beta_1' = \frac{A_1}{\sigma} , \qquad
    \beta_2' = \frac{A_2}{\sigma} + \frac{A_1^2}{\sigma^2} , \qquad
    \beta_3' = \frac{A_3}{\sigma} + \frac{3A_1 A_2}{\sigma^2} + \frac{2A_1^3}{\sigma^3} ,

    \beta_4' = \frac{A_4}{\sigma} + \frac{4A_1 A_3}{\sigma^2} + \frac{3A_2^2}{\sigma^2} + \frac{12A_1^2 A_2}{\sigma^3} + \frac{6A_1^4}{\sigma^4} .
Combining results, we find

    \log\Big( \frac{w - \bar F^*(\bar z_1)}{\bar z_1 - \sigma} \Big)
    = \log\Big( \frac{\bar F^*(\bar z_1) - w}{1-w} \Big) + \log\Big( \frac{\sigma}{\sigma - \bar z_1} \Big) + \log\Big( \frac{1-w}{\sigma} \Big)          (6.3.8)

    = \sum_{j=1}^\infty C_j' \frac{(i\theta)^j}{j!} + \log\Big( \frac{1-w}{\sigma} \Big) ,          (6.3.9)

where C_j' = \bar\alpha_j' + \beta_j' by equations (6.3.6) and (6.3.8).
We must also expand -K(\theta) in powers of i\theta. From Smith (1979), p. 40, we have

    -K(\theta) = \sum_{j=1}^\infty \frac{[-\bar z_1(\theta)]^{j-1}}{(j-1)!}\, \gamma_j(\theta) ,          (6.3.10)

where

    \gamma_j(\theta) = \sum_{n=0}^\infty \frac{\bar\mu_{jn}}{n!} (i\theta)^n .          (6.3.11)

Substituting the expansion of \bar z_1(\theta) and collecting powers of i\theta gives

    -K(\theta) = \sum_{j=0}^\infty k_j \frac{(i\theta)^j}{j!} ,          (6.3.12)

where

    k_0 = \bar\mu_1 ,

    k_1 = \bar\mu_{11} - \bar\mu_2 A_1 ,

    k_2 = \bar\mu_{12} - \bar\mu_2 A_2 - 2\bar\mu_{21} A_1 + \bar\mu_3 A_1^2 ,

    k_3 = \bar\mu_{13} - \bar\mu_2 A_3 - 3\bar\mu_{21} A_2 - 3\bar\mu_{22} A_1 + 3\bar\mu_3 A_1 A_2 + 3\bar\mu_{31} A_1^2 - \bar\mu_4 A_1^3 ,

and k_4 is obtained in the same way. Therefore

    \log\Big( \frac{-K(\theta)}{\bar\mu_1} \Big) = \sum_{j=1}^\infty k_j' \frac{(i\theta)^j}{j!} ,

where

    k_1' = \frac{1}{\bar\mu_1}\big[ \bar\mu_{11} - \bar\mu_2 A_1 \big] ,

    k_2' = \frac{k_2}{\bar\mu_1} - \frac{1}{\bar\mu_1^2}\big[ \bar\mu_{11} - \bar\mu_2 A_1 \big]^2 ,

    k_3' = \frac{k_3}{\bar\mu_1} - \frac{3 k_1 k_2}{\bar\mu_1^2} + \frac{2 k_1^3}{\bar\mu_1^3} .
Returning to equation (6.3.3), we have

    \log \bar Z_1(\theta) = \log\Big( \frac{w - \bar F^*(\bar z_1)}{\bar z_1 - \sigma} \Big) - \log(-K(\theta))

    = \sum_{j=1}^\infty C_j' \frac{(i\theta)^j}{j!} + \log\Big( \frac{1-w}{\sigma} \Big) - \sum_{j=1}^\infty k_j' \frac{(i\theta)^j}{j!} - \log(\bar\mu_1)

    = \sum_{j=1}^\infty B_j \frac{(i\theta)^j}{j!} + \log\Big( \frac{1-w}{\sigma \bar\mu_1} \Big) ,          (6.3.13)

and

    B_j = C_j' - k_j' .
In particular,

    B_1 = -\frac{\bar\mu_1 A_1}{1-w} + \frac{A_1}{\sigma} - \frac{1}{\bar\mu_1}\big[ \bar\mu_{11} - \bar\mu_2 A_1 \big] ,

and B_2 and B_3 follow from the expressions for \bar\alpha_j', \beta_j', and k_j' given above. Values of B_n for n \ge 4 can be derived by careful arithmetic.
-
-
As a check on our derivation of A and B , we compute their
n
n
values for the special case Y. = 1. The formula
J
k (t)
nc
= Ant
+
B
n
+
o(t- p )
should then reduce to the formula for the nth conditional cumulant
of N(t) found in Chapter S.
When Y_j \equiv 1, \bar\kappa_s = 1 for all s and \bar\mu_{rs} \equiv \bar\mu_r. The first three A_n reduce to

    A_1 = \frac{1}{\bar\mu_1} , \qquad
    A_2 = \frac{\bar\mu_2}{\bar\mu_1^3} - \frac{1}{\bar\mu_1} , \qquad
    A_3 = \frac{1}{\bar\mu_1} - \frac{3\bar\mu_2}{\bar\mu_1^3} - \frac{\bar\mu_3}{\bar\mu_1^4} + \frac{3\bar\mu_2^2}{\bar\mu_1^5} ,

and the first three B_n reduce correspondingly. Thus if \rho_{n+1} < \infty and F \in C_\sigma, then for n = 1, 2, 3 the expansion A_n t + B_n + o(t^{-p}) coincides, term by term, with the nth conditional cumulant of N(t). Since these formulas are the same as those derived in Chapter 5, we believe the cited values of A_n and B_n are correct.
Finally, we return to a question raised in Chapter 4. We assumed \bar\mu_2 < \infty and K_2^* < \infty and asked whether Var[W(t) \mid A(t)=1] grows linearly in t. We concluded that the statement is true if

    E[W(t) \mid A(t)=1] - \frac{\bar\kappa_1 t}{\bar\mu_1} = o(\sqrt{t}) .

Suppose that F \in C_\sigma and W(t) is a Type B process. The assumption that \bar\mu_2 < \infty and K_2^* < \infty, together with Theorem 6.2, yields

    E[W(t) \mid A(t)=1] = A_1 t + B_1 + o(1) .

Since A_1 = \bar\kappa_1/\bar\mu_1, the result is true for these special cumulative processes. Of course, if we are willing to assume more about the moments and product moments of (X_1, Y_1), we can specify the error term much more precisely. It is worthwhile to note that A_2, the coefficient of t in Var[W(t) \mid A(t)=1] under the conditions of Theorem 6.2, agrees with the value obtained earlier, as we know it should. This is a further check on our calculations.
CHAPTER 7

PROCESSES WITH VARIABLE DEFECTS

Although we have assumed that each lifetime X_i has the same defect 1-w, there may be cases in which it is reasonable to suppose that the defect depends on the number of lifetimes preceding X_i. Let \{w_j\} be a sequence in (0,1] and suppose \{X_i\} is a sequence of independent positive random variables such that

    P\{X_i \le x\} = w_i J(x) ,

where J is a proper distribution on [0,\infty) and J(0+) < 1. Note that

    P\{S_2 \le t\} = w_1 w_2 J^{(2)}(t) ,

and in general

    P\{S_n \le t\} = \Big( \prod_{j=1}^n w_j \Big) J^{(n)}(t) .          (7.1)
It follows immediately that

    H(t) = \sum_{n=1}^\infty \Big( \prod_{j=1}^n w_j \Big) J^{(n)}(t)          (7.2)

and

    H(\infty) = \sum_{n=1}^\infty \prod_{j=1}^n w_j \le \infty .

If the w_j's are not bounded above by some w < 1, it is entirely possible that H(\infty) = \infty. Also, unlike the situation in which w_j \le w, the probability that the process is alive at time t does not necessarily converge to 0.
We find

    q(t) = P\{A(t) = 1\}
         = w_1[1-J(t)] + \sum_{n=1}^\infty \int_0^t w_{n+1}[1-J(t-\tau)] \Big( \prod_{j=1}^n w_j \Big) dJ^{(n)}(\tau)
         = \sum_{n=0}^\infty \Big( \prod_{j=1}^{n+1} w_j \Big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] .          (7.3)
Lemma 7.1

    q(t) = w_1 - \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) J^{(n)}(t) .          (7.4)

Proof:

Write \Pi_n = \prod_{j=1}^n w_j and take J^{(0)}(t) \equiv 1. Since \Pi_n is bounded and J^{(n)}(t) \to 0 as n \to \infty for fixed t, the right hand side of (7.4) equals

    w_1 - \sum_{n=1}^\infty (\Pi_n - \Pi_{n+1}) J^{(n)}(t)
    = \Pi_1 J^{(0)}(t) + \sum_{n=1}^\infty \Pi_{n+1} J^{(n)}(t) - \sum_{n=1}^\infty \Pi_n J^{(n)}(t)
    = \sum_{n=0}^\infty \Pi_{n+1} \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]
    = q(t)

by (7.3).  \square
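Lemma 7.1 is easy to check numerically. The sketch below (not part of the thesis) evaluates (7.3) and (7.4) for J = Exp(1), where J^{(n)} is the Erlang(n,1) distribution, using the illustrative defect sequence w_j = 1 - 1/(2j^2); both the sequence and the truncation level nmax are assumptions made only for this example.

```python
import math

def erlang_cdf(n, t):
    """J^(n)(t) for J = Exp(1): the n-fold convolution is Erlang(n, 1).
    By convention J^(0)(t) = 1 for t >= 0."""
    if n == 0:
        return 1.0
    term, s = 1.0, 1.0                     # running term t^i / i!
    for i in range(1, n):
        term *= t / i
        s += term
    return 1.0 - math.exp(-t) * s

def q_by_73(w, t, nmax=200):
    """q(t) via (7.3): sum over n of prod(w_1..w_{n+1}) [J^(n) - J^(n+1)]."""
    total, prod = 0.0, 1.0
    for n in range(nmax):
        prod *= w(n + 1)                   # product of w_1, ..., w_{n+1}
        total += prod * (erlang_cdf(n, t) - erlang_cdf(n + 1, t))
    return total

def q_by_74(w, t, nmax=200):
    """q(t) via (7.4): w_1 - sum of (1-w_{n+1}) prod(w_1..w_n) J^(n)."""
    total, prod = w(1), 1.0
    for n in range(1, nmax):
        prod *= w(n)                       # product of w_1, ..., w_n
        total -= (1.0 - w(n + 1)) * prod * erlang_cdf(n, t)
    return total

# illustrative defect sequence with sum (1 - w_j) < infinity
w = lambda j: 1.0 - 0.5 / j**2
```

The two formulas agree to within truncation and rounding error, and both decrease in t.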
Formula (7.4) shows the monotonicity of q(t) much more clearly than (7.3). Because \prod_{j=1}^n w_j is nonincreasing in n, it converges to some limit \ell \ge 0. In fact, since we are assuming w_j > 0 for all j, \ell is positive if and only if \sum_{j=1}^\infty (1-w_j) < \infty.
Lemma 7.2

    \lim_{t\to\infty} q(t) = \lim_{n\to\infty} \prod_{j=1}^n w_j = \ell .

Proof:

From (7.3),

    q(t) \ge \ell \sum_{n=0}^\infty \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] = \ell \big[ 1 - \lim_{n\to\infty} J^{(n)}(t) \big] = \ell .          (7.6)

Now choose \epsilon > 0. There exists some N(\epsilon) such that \prod_{j=1}^{n+1} w_j < \ell + \epsilon whenever n \ge N. Thus

    \lim_{t\to\infty} q(t) \le \lim_{t\to\infty} \sum_{n=0}^{N-1} \Big( \prod_{j=1}^{n+1} w_j \Big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]
      + \lim_{t\to\infty} (\ell + \epsilon) \sum_{n=N}^\infty \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]
    \le \lim_{t\to\infty} \big[ 1 - J^{(N)}(t) \big] + \ell + \epsilon = \ell + \epsilon .          (7.7)

Since \epsilon is arbitrary, \lim_{t\to\infty} q(t) = \ell .  \square
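Under the same illustrative assumptions as before (J = Exp(1) and the hypothetical defect sequence w_j = 1 - 1/(2j^2), chosen only for the example), one can watch q(t) descend to \ell = \prod w_j, as Lemma 7.2 asserts:

```python
import math

def erlang_cdf(n, t):
    """J^(n)(t) for J = Exp(1); J^(0)(t) = 1."""
    if n == 0:
        return 1.0
    term, s = 1.0, 1.0
    for i in range(1, n):
        term *= t / i
        s += term
    return 1.0 - math.exp(-t) * s

w = lambda j: 1.0 - 0.5 / j**2             # sum (1 - w_j) < inf, so ell > 0

def q(t, nmax=500):
    """q(t) computed from formula (7.4) of Lemma 7.1."""
    total, prod = w(1), 1.0
    for n in range(1, nmax):
        prod *= w(n)
        total -= (1.0 - w(n + 1)) * prod * erlang_cdf(n, t)
    return total

# the limit ell = prod w_j, truncated; for this sequence
# ell = sin(pi/sqrt(2)) / (pi/sqrt(2)) ~ 0.358
ell = 1.0
for j in range(1, 200000):
    ell *= w(j)

gaps = [q(t) - ell for t in (5.0, 20.0, 80.0)]   # shrinks toward 0
```

Each term of (7.20)-type sums is positive and decreasing in t, so the gap q(t) - \ell decreases monotonically to 0.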
Lemma 7.3

Suppose \sum_{n=1}^\infty \big( \prod_{j=1}^n w_j \big) = \infty. Then

    P\{A(t+s) = 1 \mid A(t) = 1\} \to 1 \quad \text{as } t \to \infty .

Proof:

If \prod_{j=1}^\infty w_j = \ell > 0, then by Lemma 7.2, \lim_{t\to\infty} q(t+s)/q(t) = 1. Suppose then that \lim_{n\to\infty} \prod_{j=1}^n w_j = 0. We will first show that P\{N(t)=k \mid A(t)=1\} \to 0 for fixed k.
We have

    P\{N(t)=k \mid A(t)=1\}
    = \frac{ \big( \prod_{j=1}^{k+1} w_j \big) \big[ J^{(k)}(t) - J^{(k+1)}(t) \big] }{ \sum_{n=0}^\infty \big( \prod_{j=1}^{n+1} w_j \big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] }
    \le \frac{ J^{(k)}(t) - J^{(k+1)}(t) }{ 1 - J^{(k+1)}(t) + \sum_{n=k+1}^\infty \big( \prod_{j=k+2}^{n+1} w_j \big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] } .          (7.8)

Observe that

    1 = (1 - w_{n+2}) + \sum_{m=n+3}^\infty (1-w_m) \prod_{p=n+2}^{m-1} w_p , \qquad n = 1, 2, \ldots,

since \lim_{n\to\infty} \prod_{j=1}^n w_j = 0. Hence

    \Big( \prod_{j=k+2}^{n+1} w_j \Big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]
    = \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] \Big\{ (1-w_{n+2}) \prod_{j=k+2}^{n+1} w_j
      + \sum_{m=n+3}^\infty (1-w_m) \Big( \prod_{j=k+2}^{n+1} w_j \Big) \prod_{p=n+2}^{m-1} w_p \Big\}
    = \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] \sum_{m=n+2}^\infty (1-w_m) \prod_{j=k+2}^{m-1} w_j .

Thus it follows that

    \sum_{n=k+1}^\infty \Big( \prod_{j=k+2}^{n+1} w_j \Big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]          (7.9)

    = \sum_{n=k+1}^\infty \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] \sum_{m=n+2}^\infty (1-w_m) \prod_{j=k+2}^{m-1} w_j
    = \sum_{m=k+3}^\infty (1-w_m) \Big( \prod_{j=k+2}^{m-1} w_j \Big) \sum_{n=k+1}^{m-2} \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]
    = \sum_{m=k+3}^\infty (1-w_m) \Big( \prod_{j=k+2}^{m-1} w_j \Big) \big[ J^{(k+1)}(t) - J^{(m-1)}(t) \big] .          (7.10)

Also,

    1 - J^{(k+1)}(t) = \big[ 1 - J^{(k+1)}(t) \big] \Big\{ (1-w_{k+2}) + \sum_{m=k+3}^\infty (1-w_m) \prod_{j=k+2}^{m-1} w_j \Big\} .
Therefore,

    1 - J^{(k+1)}(t) + \sum_{n=k+1}^\infty \Big( \prod_{j=k+2}^{n+1} w_j \Big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big]          (7.11)
    = (1-w_{k+2}) \big[ 1 - J^{(k+1)}(t) \big] + \sum_{m=k+2}^\infty (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big) \big[ 1 - J^{(m)}(t) \big] ,

and we have

    P\{N(t)=k \mid A(t)=1\} \le \frac{ 1 - J^{(k+1)}(t) }{ (1-w_{k+2})[1-J^{(k+1)}(t)] + \sum_{m=k+2}^\infty (1-w_{m+1}) \big( \prod_{j=k+2}^m w_j \big) [1 - J^{(m)}(t)] } .          (7.12)

Now define a_m = [m/(k+1)], the greatest integer in m/(k+1). Since J^{(m)}(t) \le J^{(a_m(k+1))}(t) \le [J^{(k+1)}(t)]^{a_m},

    1 - J^{(m)}(t) \ge 1 - [J^{(k+1)}(t)]^{a_m} .          (7.13)

Combining (7.12) and (7.13), we see

    P\{N(t)=k \mid A(t)=1\}
    \le \frac{ 1 - J^{(k+1)}(t) }{ (1-w_{k+2})[1-J^{(k+1)}(t)] + \sum_{m=k+2}^\infty (1-w_{m+1}) \big( \prod_{j=k+2}^m w_j \big) \{ 1 - [J^{(k+1)}(t)]^{a_m} \} }

    = \frac{ 1 }{ (1-w_{k+2}) + \sum_{m=k+2}^\infty (1-w_{m+1}) \big( \prod_{j=k+2}^m w_j \big) \{ 1 + J^{(k+1)}(t) + \cdots + [J^{(k+1)}(t)]^{a_m - 1} \} } .          (7.14)
Note that

    \sum_{m=k+2}^\infty (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big) a_m \ge c \sum_{m=k+2}^\infty m (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)

for some c > 0. But
    \sum_{m=k+2}^\infty m (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)
    = (k+1) \sum_{m=k+2}^\infty (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)
      + \sum_{m=k+2}^\infty (m-k-1)(1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)
    = (k+1) \sum_{m=k+2}^\infty (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)
      + \sum_{l=k+2}^\infty \sum_{m=l}^\infty (1-w_{m+1}) \prod_{j=k+2}^m w_j .          (7.15)

Now since \prod_{j=1}^n w_j \to 0,

    \sum_{m=l}^\infty (1-w_{m+1}) \prod_{j=k+2}^m w_j = \prod_{j=k+2}^l w_j ,

and so

    \sum_{m=k+2}^\infty m (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big)
    = (k+1) \sum_{m=k+2}^\infty (1-w_{m+1}) \Big( \prod_{j=k+2}^m w_j \Big) + \sum_{l=k+2}^\infty \Big( \prod_{j=k+2}^l w_j \Big) = \infty

by assumption. Letting t \to \infty in (7.14), the bracketed sums tend to a_m and the denominator diverges; therefore

    \lim_{t\to\infty} P\{N(t)=k \mid A(t)=1\} = 0 .          (7.16)
Now we use this fact to show \lim_{t\to\infty} q(t+s)/q(t) = 1 for fixed s. Choose \epsilon > 0 and fix s. There exists some N(\epsilon, s) such that 1 - J^{(n)}(s) > 1 - \epsilon for all n \ge N. Also, since \sum_{n=1}^\infty \big( \prod_{j=1}^n w_j \big) = \infty, w_j \to 1, and there is some K(N) making

    \prod_{j=k+1}^{k+1+N} w_j > 1 - \epsilon

whenever k \ge K. Then

    \frac{q(t+s)}{q(t)} = \frac{P\{A(t+s)=1\}}{P\{A(t)=1\}}
    \ge \frac{P\{A(t+s)=1, N(t) \ge K\}}{P\{N(t) \ge K, A(t)=1\}} \cdot \frac{P\{N(t) \ge K, A(t)=1\}}{P\{A(t)=1\}} .

We already know

    \frac{P\{N(t) \ge K, A(t)=1\}}{P\{A(t)=1\}} \to 1 ,

and now consider

    P\{A(t+s)=1, N(t) \ge K\} = \sum_{n=K}^\infty P\{A(t+s)=1, N(t)=n\}
    = \sum_{n=K}^\infty P\{A(t+s)=1 \mid N(t)=n, A(t)=1\}\, P\{N(t)=n, A(t)=1\} .          (7.17)

Given N(t)=n and A(t)=1, the process certainly survives to t+s if the lifetimes begun after t accommodate the interval (t, t+s]; hence

    P\{A(t+s)=1, N(t) \ge K\}
    \ge \sum_{n=K}^\infty P\{N(t)=n, A(t)=1\} \sum_{m=0}^\infty \Big( \prod_{j=n+1}^{n+m+1} w_j \Big) \big[ J^{(m)}(s) - J^{(m+1)}(s) \big]
    \ge \sum_{n=K}^\infty P\{N(t)=n, A(t)=1\} \sum_{m=0}^N \Big( \prod_{j=n+1}^{n+1+N} w_j \Big) \big[ J^{(m)}(s) - J^{(m+1)}(s) \big]
    > (1-\epsilon) \sum_{n=K}^\infty P\{N(t)=n, A(t)=1\} \big[ 1 - J^{(N+1)}(s) \big] .          (7.18)

Hence

    \lim_{t\to\infty} \frac{q(t+s)}{q(t)} \ge (1-\epsilon)^2 .

Since \epsilon is arbitrary this proves the result.  \square
In Chapter 3 we learned that if J(x) \in S, the class of subexponential distributions, and if P\{X_i \le x\} = wJ(x), then

    \lim_{t\to\infty} \frac{q(t+s)}{q(t)} = 1 .

Our next lemma generalizes this result.

Lemma 7.4

Suppose w_n \le w < 1 for all n and J(x) \in S. Then

    \lim_{t\to\infty} \frac{q(t+s)}{q(t)} = 1 .
Proof:

Since w_n \le w < 1 for all n, \prod_{j=1}^n w_j \to 0 and the telescoping identity gives

    w_1 = \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) .          (7.19)

Combining (7.19) and expression (7.4) for q(t), we discover

    q(t) = \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) \big[ 1 - J^{(n)}(t) \big] .          (7.20)

Hence

    \frac{q(t+s)}{q(t)} = \frac{1-J(t+s)}{1-J(t)} \cdot
    \frac{ \sum_{n=1}^\infty (1-w_{n+1}) \big( \prod_{j=1}^n w_j \big) \frac{1-J^{(n)}(t+s)}{1-J(t+s)} }
         { \sum_{n=1}^\infty (1-w_{n+1}) \big( \prod_{j=1}^n w_j \big) \frac{1-J^{(n)}(t)}{1-J(t)} } .          (7.21)

By Lemmas A, B, and C in Chapter 3, we know that

    \frac{1-J(t+s)}{1-J(t)} \to 1 ; \qquad \frac{1-J^{(n)}(t)}{1-J(t)} \to n ;

and for any \epsilon > 0 there is some D(\epsilon) < \infty making

    \frac{1-J^{(n)}(t)}{1-J(t)} \le D (1+\epsilon)^n .
Thus if we choose \epsilon to make w(1+\epsilon) < 1,

    \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) \frac{1-J^{(n)}(t)}{1-J(t)}
    \le D \sum_{n=1}^\infty (1-w_{n+1}) \big[ w(1+\epsilon) \big]^n < \infty ,

and the Dominated Convergence Theorem implies

    \lim_{t\to\infty} \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) \frac{1-J^{(n)}(t)}{1-J(t)}
    = \sum_{n=1}^\infty (1-w_{n+1}) \Big( \prod_{j=1}^n w_j \Big) n .          (7.22)

Therefore

    \lim_{t\to\infty} \frac{q(t+s)}{q(t)} = 1 .  \square
The next lemma generalizes Lemma 3.1.

Lemma 7.5

Suppose w_n \le w < 1 for all n and J \in S. Then

    \lim_{t\to\infty} P\{N(t)=k \mid A(t)=1\} = \frac{ \prod_{j=1}^{k+1} w_j }{ \sum_{n=0}^\infty \prod_{j=1}^{n+1} w_j } .

Proof:

When J(x) \in S,

    \frac{ J^{(k)}(t) - J^{(k+1)}(t) }{ J^{(n)}(t) - J^{(n+1)}(t) }
    = \frac{ \big[ J^{(k)}(t) - J^{(k+1)}(t) \big] / [1-J(t)] }{ \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] / [1-J(t)] } .
But

    \frac{ J^{(r)}(t) - J^{(r+1)}(t) }{ 1-J(t) } = \frac{ 1-J^{(r+1)}(t) }{ 1-J(t) } - \frac{ 1-J^{(r)}(t) }{ 1-J(t) } \to (r+1) - r = 1 \quad \text{as } t \to \infty

for r = 1, 2, \ldots. Thus

    \frac{ J^{(k)}(t) - J^{(k+1)}(t) }{ J^{(n)}(t) - J^{(n+1)}(t) } \to 1 \quad \text{as } t \to \infty .          (7.23)

Now

    P\{N(t)=k \mid A(t)=1\} = \frac{ \big( \prod_{j=1}^{k+1} w_j \big) \big[ J^{(k)}(t) - J^{(k+1)}(t) \big] }{ \sum_{n=0}^\infty \big( \prod_{j=1}^{n+1} w_j \big) \big[ J^{(n)}(t) - J^{(n+1)}(t) \big] }
    = \frac{ \prod_{j=1}^{k+1} w_j }{ \sum_{n=0}^\infty \big( \prod_{j=1}^{n+1} w_j \big) \frac{ J^{(n)}(t) - J^{(n+1)}(t) }{ J^{(k)}(t) - J^{(k+1)}(t) } } .          (7.24)

We know that

    \Big( \prod_{j=1}^{n+1} w_j \Big) \frac{ J^{(n)}(t) - J^{(n+1)}(t) }{ J^{(k)}(t) - J^{(k+1)}(t) } \le D \big[ w(1+\epsilon) \big]^{n+1} [1 + o(1)] .          (7.25)

The Dominated Convergence Theorem applies to the denominator of (7.24) and the result follows.  \square
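For constant defects w_n = w the limit in Lemma 7.5 is geometric: \prod_{j=1}^{k+1} w_j / \sum_n \prod_{j=1}^{n+1} w_j = (1-w)w^k. A Monte Carlo sketch with the subexponential Pareto lifetime law J(x) = 1 - 1/x illustrates this; the values of t, w, and the replication count are arbitrary choices for the experiment, and the agreement is only approximate at finite t.

```python
import random

def simulate_N_given_alive(t, w, reps, seed=0):
    """Empirical conditional law of N(t) given A(t)=1 for constant defect w.
    Each lifetime is infinite with probability 1-w (the process dies there);
    otherwise it is Pareto(1): J(x) = 1 - 1/x for x >= 1, a subexponential law."""
    rng = random.Random(seed)
    counts, alive = {}, 0
    for _ in range(reps):
        s, n = 0.0, 0
        while True:
            if rng.random() >= w:          # infinite lifetime covers t: dead
                break
            x = 1.0 / rng.random()         # Pareto(1) draw
            if s + x > t:                  # finite lifetime covers t: alive
                alive += 1
                counts[n] = counts.get(n, 0) + 1
                break
            s += x
            n += 1
    return counts, alive
```

With w = 1/2 the limiting conditional law of N(t) is geometric, P\{N(t)=k \mid A(t)=1\} \to (1/2)^{k+1}.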
We will consider some of the consequences when \prod_{j=1}^\infty w_j = \ell > 0. As noted earlier, for this product to be nonzero, the w_j's must approach one quickly enough to make \sum_{j=1}^\infty (1-w_j) < \infty. It is easy to see that when \ell > 0, \lim_{t\to\infty} P\{N(t)=k \mid A(t)=1\} = 0 for all fixed k. Thus if the renewal process is alive at some time t and t is large, then with high probability the process has survived many lifetimes; intuition suggests the future development of the process will be much like that of a process whose lifetimes have proper distribution J.
Lemma 7.6

Suppose \prod_{j=1}^\infty w_j = \ell > 0. Then

    E[N(t+s) - N(t) \mid A(t)=1] - \big[ H_J(t+s) - H_J(t) \big] \to 0 \quad \text{as } t \to \infty ,

where H_J(t) = \sum_{n=1}^\infty J^{(n)}(t) is the renewal function of the proper distribution J.

Proof:

    E[N(t+s) - N(t) \mid A(t)=1] = \frac{ \sum_{n=1}^\infty \big( \prod_{j=1}^n w_j \big) \big[ J^{(n)}(t+s) - J^{(n)}(t) \big] }{ q(t) } .          (7.26)

Fix \epsilon > 0. There exists an N(\epsilon) such that \ell \le \prod_{j=1}^n w_j < \ell + \epsilon whenever n \ge N. Thus

    E[N(t+s) - N(t) \mid A(t)=1]
    \le \frac{ \sum_{n=1}^{N-1} \big[ J^{(n)}(t+s) - J^{(n)}(t) \big] + (\ell + \epsilon) \sum_{n=N}^\infty \big[ J^{(n)}(t+s) - J^{(n)}(t) \big] }{ q(t) } .          (7.27)

Similarly,

    E[N(t+s) - N(t) \mid A(t)=1] \ge \frac{ \ell \big[ H_J(t+s) - H_J(t) \big] }{ \ell + o(1) } .          (7.28)

Since q(t) \to \ell and the first sum in (7.27) tends to 0, the result follows.  \square
In order to study more general questions, suppose G(t) is a natural process depending on the defective random variables X_1, X_2, \ldots, X_{N(t)+1}. Let F_i(x) = w_i J(x) = P\{X_i \le x\}. Then

    E\big[ G(t)\, \chi(A(t)=1) \big]
    = \sum_{n=0}^\infty \int_{\{N(t)=n\}} G(t, x_1, \ldots, x_{n+1}) \Big( \prod_{j=1}^{n+1} w_j \Big) dJ(x_1) \cdots dJ(x_{n+1})
    = E_J \Big[ G(t) \prod_{j=1}^{N(t)+1} w_j \Big] ,          (7.29)

where E_J indicates integration with respect to the proper distribution J. Taking G(t) \equiv 1, we find

    q(t) = E_J \Big( \prod_{j=1}^{N(t)+1} w_j \Big)          (7.30)

and

    E[G(t) \mid A(t)=1] = \frac{ E_J \big[ G(t) \prod_{j=1}^{N(t)+1} w_j \big] }{ E_J \big( \prod_{j=1}^{N(t)+1} w_j \big) } .          (7.31)
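Identity (7.31) can be checked by simulation: averaging G(t) over surviving defective paths must agree with reweighting proper-J paths by \prod w_j. A sketch with G(t) = N(t), J = Exp(1), and the illustrative sequence w_j = 1 - 1/(2j^2) (all choices of this example, not of the thesis):

```python
import random

w = lambda j: 1.0 - 0.5 / j**2             # defect sequence with prod w_j > 0

def direct_estimate(t, reps, rng):
    """Simulate the defective process; average N(t) over paths alive at t."""
    tot, alive = 0.0, 0
    for _ in range(reps):
        s, n, dead = 0.0, 0, False
        while True:
            if rng.random() >= w(n + 1):   # lifetime n+1 is infinite
                dead = True
                break
            x = rng.expovariate(1.0)
            if s + x > t:                  # finite lifetime covers t
                break
            s += x
            n += 1
        if not dead:
            alive += 1
            tot += n
    return tot / alive

def reweighted_estimate(t, reps, rng):
    """Formula (7.31): E[N(t)|A(t)=1] = E_J[N(t) prod w_j] / E_J[prod w_j]."""
    num = den = 0.0
    for _ in range(reps):
        s, n = 0.0, 0
        while True:                        # proper Exp(1) renewal process
            x = rng.expovariate(1.0)
            if s + x > t:
                break
            s += x
            n += 1
        p = 1.0
        for j in range(1, n + 2):          # product of w_1, ..., w_{N(t)+1}
            p *= w(j)
        num += n * p
        den += p
    return num / den
```

The two estimators target the same quantity and should agree to within Monte Carlo error.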
Lemma 7.7

Suppose |G(t)| \le M. If \prod_{j=1}^\infty w_j = \ell > 0, then

    E[G(t) \mid A(t)=1] - E_J G(t) \to 0 \quad \text{as } t \to \infty .

Proof:

With no loss of generality, we may assume that G(t) \ge 0. Choose \epsilon > 0. There exists an N(\epsilon) such that if n \ge N, \big| \prod_{j=1}^{n+1} w_j - \ell \big| < \epsilon/M. Let B_t = \{N(t) \ge N\}. By assumption,

    E_J \Big[ G(t) \prod_{j=1}^{N(t)+1} w_j \Big] = E_J \Big[ G(t) \Big( \prod_{j=1}^{N(t)+1} w_j \Big) \chi(B_t) \Big] + o(1) .          (7.32)

Also,

    \Big| E_J \Big[ G(t) \Big( \prod_{j=1}^{N(t)+1} w_j \Big) \chi(B_t) \Big] - \ell\, E_J \big[ G(t) \chi(B_t) \big] \Big| \le \epsilon          (7.33)

and

    E_J \Big( \prod_{j=1}^{N(t)+1} w_j \Big) = \ell + o(1) \quad \text{as } t \to \infty .          (7.34)

Therefore,

    E[G(t) \mid A(t)=1] = \frac{ E_J \big[ G(t) \prod_{j=1}^{N(t)+1} w_j \big] }{ E_J \big( \prod_{j=1}^{N(t)+1} w_j \big) } \le E_J[G(t)] + \epsilon + o(1) ,          (7.35)

and a matching lower bound holds. As \epsilon is arbitrary, the lemma is proved.  \square
Applications of Lemma 7.7

Let \mu_k(J) = \int_0^\infty x^k dJ(x). Suppose W(t) is a cumulative process based upon \{(X_i, Y_i)\}, where X_i \sim w_i J and E_J Y_1^r = \kappa_r(J). If \kappa_1^*(J) < \infty and \mu_1(J) < \infty, let

    G(t) = \chi\Big( \Big| \frac{W(t)}{t} - \frac{\kappa_1(J)}{\mu_1(J)} \Big| > \epsilon \Big) .

If \prod_{j=1}^\infty w_j = \ell > 0, it follows that

    P\Big\{ \Big| \frac{W(t)}{t} - \frac{\kappa_1(J)}{\mu_1(J)} \Big| > \epsilon \,\Big|\, A(t)=1 \Big\}
    - P_J\Big\{ \Big| \frac{W(t)}{t} - \frac{\kappa_1(J)}{\mu_1(J)} \Big| > \epsilon \Big\} \to 0 ,

and hence

    P\Big\{ \Big| \frac{W(t)}{t} - \frac{\kappa_1(J)}{\mu_1(J)} \Big| > \epsilon \,\Big|\, A(t)=1 \Big\} \to 0 \quad \text{as } t \to \infty .          (7.36)

If \kappa_2^*(J) < \infty and \mu_2(J) < \infty, define

    G(t) = \chi\Big( \frac{ W(t) - [\kappa_1(J)/\mu_1(J)]\, t }{ \gamma \sqrt{t/\mu_1(J)} } \le a \Big) ,

where \gamma^2 = E_J\Big[ \Big( Y_1 - \frac{\kappa_1(J)}{\mu_1(J)} X_1 \Big)^2 \Big]. Whenever \prod_{j=1}^\infty w_j = \ell > 0, Lemma 7.7 implies

    P\Big\{ \frac{ W(t) - [\kappa_1(J)/\mu_1(J)]\, t }{ \gamma \sqrt{t/\mu_1(J)} } \le a \,\Big|\, A(t)=1 \Big\} \to \Phi(a) \quad \text{as } t \to \infty .          (7.37)
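The conditional law of large numbers (7.36) is easy to see in simulation. Below, a hypothetical cumulative process with X_i ~ Exp(1) (so \mu_1(J) = 1), Y_i = X_i^2 (so \kappa_1(J) = E_J X^2 = 2), and defects w_j = 1 - 1/(2j^2) with \prod w_j > 0; on paths alive at a large t, W(t)/t clusters near \kappa_1(J)/\mu_1(J) = 2. All distributional choices are assumptions of the example.

```python
import random

w = lambda j: 1.0 - 0.5 / j**2             # prod w_j = ell > 0

def sample_ratio(t, rng):
    """One defective path: return W(t)/t if alive at t, else None.
    Lifetimes X_i ~ Exp(1), rewards Y_i = X_i^2, defect w_j at step j."""
    s, wsum, n = 0.0, 0.0, 0
    while True:
        if rng.random() >= w(n + 1):       # infinite lifetime: dead at t
            return None
        x = rng.expovariate(1.0)
        if s + x > t:                      # alive; W(t) sums the first N(t) Y's
            return wsum / t
        s += x
        wsum += x * x
        n += 1

rng = random.Random(7)
ratios = [r for r in (sample_ratio(200.0, rng) for _ in range(3000)) if r is not None]
```

Roughly a fraction \ell of the paths survive to t, and their average ratio approximates 2.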
Lemma 7.7 only applies to bounded natural processes.
The next
lemma gives a crude indication of the behavior of more general
processes.
Lemma 7.8

Suppose G(t) is a natural process such that E_J \big[ G(t) \chi(N(t) \le N) \big] \to 0 for every fixed N. If, in addition, \prod_{j=1}^\infty w_j = \ell > 0, then

    E[G(t) \mid A(t)=1] \sim E_J G(t) .

Proof:

Again we may assume G(t) \ge 0. Choose \epsilon > 0. There exists some N(\epsilon) such that \big| \prod_{j=1}^{n+1} w_j - \ell \big| < \epsilon whenever n \ge N. Let B_t = \{N(t) \ge N\}. Then

    E_J \Big[ G(t) \prod_{j=1}^{N(t)+1} w_j \Big]
    = E_J \Big[ G(t) \Big( \prod_{j=1}^{N(t)+1} w_j \Big) \chi(B_t) \Big] + E_J \Big[ G(t) \Big( \prod_{j=1}^{N(t)+1} w_j \Big) \chi(B_t^c) \Big] .

The second term is o(1) by assumption, and thus

    \ell\, E_J[G(t)] + o(1) \le E_J \Big[ G(t) \prod_{j=1}^{N(t)+1} w_j \Big] \le (\ell + \epsilon)\, E_J[G(t)] + o(1) .

The result follows.  \square
Application of Lemma 7.8

Suppose G(t) = N(t)^k. If \prod_{j=1}^\infty w_j = \ell > 0, then Lemma 7.8 implies

    E[N(t)^k \mid A(t)=1] \sim E_J[N(t)^k] .          (7.38)

If \mu_{k+1}(J) < \infty, this indicates E[N(t)^k \mid A(t)=1] \sim t^k / \mu_1(J)^k. Of course, as we indicated earlier, the assumption that \prod_{j=1}^\infty w_j = \ell > 0 is a very strong one.
Rather than assume w_j \to 1 with a certain speed, it may be more reasonable to suppose that w_j \to w < 1. As before, let F_j(x) = w_j J(x) = P\{X_j \le x\}. If there exists a \sigma > 0 making

    w \int_0^\infty e^{\sigma x} dJ(x) = 1 ,          (7.39)

then

    d\bar F(x) = w e^{\sigma x} dJ(x)          (7.40)

defines a proper distribution \bar F(x). Let G(t) be natural and let \bar E indicate integration with respect to the proper distribution \bar F(x).
Then, writing \zeta_t = S_{N(t)+1} - t for the forward recurrence time, and noting that dF_j(x) = (w_j/w) e^{-\sigma x} d\bar F(x),

    E\big[ G(t)\, \chi(A(t)=1) \big]
    = \sum_{n=0}^\infty \int_{\{N(t)=n\}} G(t, x_1, \ldots, x_{n+1}) \Big( \prod_{j=1}^{n+1} \frac{w_j}{w} \Big) e^{-\sigma t} e^{-\sigma \zeta_t}\, d\bar F(x_1) \cdots d\bar F(x_{n+1})
    = e^{-\sigma t}\, \bar E \Big[ G(t)\, e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]          (7.41)

and thus

    q(t) = e^{-\sigma t}\, \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] .          (7.42)

Therefore

    E[G(t) \mid A(t)=1] = \frac{ \bar E \big[ G(t)\, e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} (w_j/w) \big] }{ \bar E \big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} (w_j/w) \big] } .          (7.43)
Lemma 7.9

Suppose \prod_{j=1}^\infty (w_j/w) = \bar\ell > 0. If there is a \sigma making d\bar F(x) = w e^{\sigma x} dJ(x) a proper distribution, and if \bar\mu_1 = \int_0^\infty x\, d\bar F(x) < \infty, then

    \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] \to \bar\ell\, \frac{1-w}{\sigma \bar\mu_1} \quad \text{as } t \to \infty .

Proof:

    \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    = \frac{w_1}{w} \int_t^\infty e^{-\sigma(x-t)} d\bar F(x)
      + \sum_{n=1}^\infty \Big( \prod_{j=1}^{n+1} \frac{w_j}{w} \Big) \int_0^t \int_{t-\tau}^\infty e^{-\sigma(y+\tau-t)} d\bar F(y)\, d\bar F^{(n)}(\tau) .          (7.44)

Choose \epsilon > 0. There exists an N(\epsilon) such that \prod_{j=1}^{n+1} (w_j/w) < \bar\ell + \epsilon whenever n \ge N; bounding those products by \bar\ell + \epsilon in (7.44) yields

    \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    \le \frac{w_1}{w} \int_t^\infty e^{-\sigma(x-t)} d\bar F(x)
      + \sum_{n=1}^{N-1} \Big( \prod_{j=1}^{n+1} \frac{w_j}{w} \Big) \int_0^t \int_{t-\tau}^\infty e^{-\sigma(y+\tau-t)} d\bar F(y)\, d\bar F^{(n)}(\tau)
      + (\bar\ell + \epsilon) \sum_{n=N}^\infty \int_0^t \int_{t-\tau}^\infty e^{-\sigma(y+\tau-t)} d\bar F(y)\, d\bar F^{(n)}(\tau) .          (7.45)

The first two terms are bounded by (w_1/w)[1 - \bar F(t)] and \sum_{n=1}^{N-1} \big( \prod_{j=1}^{n+1} w_j/w \big) \big[ \bar F^{(n)}(t) - \bar F^{(n+1)}(t) \big], both of which tend to 0, while the Key Renewal Theorem applies to the last term. Hence

    \lim_{t\to\infty} \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] \le (\bar\ell + \epsilon)\, \frac{1-w}{\sigma \bar\mu_1} .          (7.46)

Similarly, one can show

    \lim_{t\to\infty} \bar E \Big[ e^{-\sigma \zeta_t} \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] \ge \bar\ell\, \frac{1-w}{\sigma \bar\mu_1} .

This proves the result.  \square
Lemma 7.10

Assume the proper distribution \bar F exists. Also suppose G(t) is a sluggish and natural process such that for every \epsilon > 0 there exists some \Delta(\epsilon) making

    \bar E \big[ G(t)\, \chi(|G(t)| > \Delta) \big] < \epsilon

for all sufficiently large t. Then if \prod_{j=1}^\infty (w_j/w) = \bar\ell > 0,

    E[G(t) \mid A(t)=1] - \bar E G(t) \to 0 \quad \text{as } t \to \infty .

Proof:

Fix \epsilon > 0. There exists some N(\epsilon) making \big| \prod_{j=1}^{n+1} (w_j/w) - \bar\ell \big| < \epsilon whenever n \ge N. Let B_t = \{N(t) \ge N\}. Then

    \bar E \Big[ e^{-\sigma\zeta_t} G(t) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    = \bar E \Big[ e^{-\sigma\zeta_t} G(t) \chi(B_t) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    + \bar E \Big[ e^{-\sigma\zeta_t} G(t) \chi(B_t^c) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] .          (7.47)

But

    \bar E \Big[ e^{-\sigma\zeta_t} G(t) \chi(B_t^c) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    \le M\, \bar E \big[ G(t)\, \chi(|G(t)| > \Delta) \big] + \Delta\, \bar P(B_t^c) < \epsilon ,          (7.48)

where M is an upper bound on \prod_{j=1}^m (w_j/w) for m = 1, \ldots, N, if we choose \Delta correctly and if t is sufficiently large. Also,

    \bar\ell\, \bar E \big[ e^{-\sigma\zeta_t} G(t) \chi(B_t) \big] \le \bar E \Big[ e^{-\sigma\zeta_t} G(t) \chi(B_t) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big] .          (7.49)

Therefore

    -\epsilon + \bar\ell\, \bar E \big[ e^{-\sigma\zeta_t} G(t) \big]
    \le \bar E \Big[ e^{-\sigma\zeta_t} G(t) \prod_{j=1}^{N(t)+1} \frac{w_j}{w} \Big]
    \le (\bar\ell + \epsilon)\, \bar E \big[ e^{-\sigma\zeta_t} G(t) \big] + \epsilon          (7.50)

for sufficiently large t. Hence

    E[G(t) \mid A(t)=1] - \frac{ \bar E \big[ e^{-\sigma\zeta_t} G(t) \big] }{ (1-w)/(\sigma\bar\mu_1) } \to 0          (7.51)

by Lemma 7.9 and expression (7.50). Now Lemma 4.5 implies

    E[G(t) \mid A(t)=1] - \bar E G(t) \to 0 .  \square
This lemma allows us to extend several of the results in Chapter 4 to the situation in which the w_j's are not necessarily identical but do converge to some w quickly enough to make \prod_{j=1}^\infty (w_j/w) = \bar\ell > 0. For example, by linking Lemma 7.10 and Theorem 4.1, we find that if \bar\mu_1 < \infty and K^* < \infty, then

    (i) \quad P\big\{ W(t) - \bar\kappa_1 N(t) \le a\, \bar\alpha \sqrt{t/\bar\mu_1} \,\big|\, A(t)=1 \big\} \to \Phi(a) ,          (7.52)

and if \bar\mu_2 < \infty,

    (ii) \quad the same conclusion holds with the centering and scaling of Theorem 4.1(ii), again conditionally on \{A(t)=1\}, with limit \Phi(a) .          (7.53)

Linking Theorem 4.2 and Lemma 7.10 yields

    E\Big[ \Big( W(t) - \frac{\bar\kappa_1 t}{\bar\mu_1} \Big)^2 \,\Big|\, A(t)=1 \Big] = o(t^2)          (7.54)

if \bar\mu_2 < \infty and K_2^* < \infty. Thus we see that the assumption that all lifetimes have the same defect w can be relaxed somewhat without overturning our results.
APPENDIX

Theorem G

Suppose that q is a positive integer, m \ge 0, and

1) F \in B(m+1);

2) J(t) is a function of bounded variation on [0,\infty) having m + q absolute moments, and J^*(s) = O(|s|^q) as |s| \to 0;

3) \Lambda^*(s) is defined for R(s) \ge 0 by

    \Lambda^*(s) = \frac{J^*(s)}{[1-F^*(s)]^q}

and \Lambda^*(0) is defined to make \Lambda^*(s) continuous.

Then \Lambda(t) \in B(m), the class of functions of bounded variation on [0,\infty) having m absolute moments.

Proof:

This theorem is a special case of Theorem 1 in Smith (1966). Smith's theorem is couched in terms of Fourier-Stieltjes transforms but can be adapted to the current Laplace-Stieltjes setting. In this instance we take a_1 = \gamma = q and M(x) = x^m.
BIBLIOGRAPHY

Athreya, K. B. and Ney, P. E. (1972). Branching Processes. New York: Springer-Verlag.

Bellman, R. and Harris, T. E. (1948). On the theory of age-dependent stochastic branching processes. Proceedings of the National Academy of Sciences, 34, 601-604.

Blackwell, D. (1948). A renewal theorem. Duke Mathematical Journal, 15, 145-150.

Bondarenko, O. N. (1960). Age-dependent Branching Processes. Ph.D. Dissertation, Moscow State University.

Chistyakov, V. P. (1964). A theorem on sums of independent positive random variables. Theory of Probability and its Applications, 9, 640-648.

Cramér, H. (1955). Collective Risk Theory. Stockholm: Försäkringsbolaget Skandia.

Feller, W. (1941). On the integral equation of renewal theory. Annals of Mathematical Statistics, 12, 243-267.

Feller, W. (1963). On the classical Tauberian theorems. Jubilee volume of Archiv der Mathematik, 14, 317-322.

Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Volume 2, Second Edition. New York: Wiley.

Galambos, J. and Kotz, S. (1978). Characterizations of Probability Distributions. Lecture Notes in Mathematics, No. 675. Berlin: Springer-Verlag.

Lamperti, J. (1961). A contribution to renewal theory. Proceedings of the American Mathematical Society, 12, 724-731.

Levinson, N. (1960). Limiting theorems for age-dependent branching processes. Illinois Journal of Mathematics, 4, 100-118.

Siegmund, D. (1975). The time until ruin in collective risk theory. Mitteilungen der Vereinigung schweizerischer Versicherungsmathematiker, No. 2, 157-165.

Smith, W. L. (1954). Asymptotic renewal theorems. Proceedings of the Royal Society of Edinburgh, Series A, 64, 9-48.

Smith, W. L. (1955). Regenerative stochastic processes. Proceedings of the Royal Society, Series A, 232, 6-31.

Smith, W. L. (1959). On the cumulants of renewal processes. Biometrika, 46, 1-29.

Smith, W. L. (1961). On some general renewal theorems for non-identically distributed variables. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2, 467-514. Berkeley: University of California Press.

Smith, W. L. (1966). A theorem on functions of characteristic functions and its applications to some renewal theoretic random walk problems. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2, Part 2, 265-309. Berkeley: University of California Press.

Smith, W. L. (1972). On the tails of queueing-time distributions. Institute of Statistics Mimeo Series No. 830, University of North Carolina at Chapel Hill.

Smith, W. L. (1979). On the cumulants of cumulative processes. Institute of Statistics Mimeo Series No. 1257, University of North Carolina at Chapel Hill.

Teugels, J. (1975). The class of subexponential distributions. Annals of Probability, 3, 1000-1011.

Vinogradov, O. P. (1964). On an age-dependent branching process. Theory of Probability and its Applications, 9, 131-136.

von Bahr, B. (1974). Ruin probabilities expressed in terms of ladder height distributions. Scandinavian Actuarial Journal, No. 4, 190-204.