Murthy, V. K. (1961). On the General Renewal Process.

ON THE GENERAL RENEWAL PROCESS

by

V. K. Murthy

University of North Carolina
This research was supported by the Office of
Naval Research under Contract No. Nonr-855(09)
for research in probability and statistics at
Chapel Hill. Reproduction in whole or in part
is permitted for any purpose of the United
States Government.

Institute of Statistics
Mimeograph Series No. 293
April, 1961
ACKNOWLEDGEMENTS
I wish to express my deep sense of gratitude to Professor
Walter L. Smith for suggesting the problem to me and providing
guidance and encouragement during the course of my investigation.
My thanks are due to the Office of Naval Research for financial
assistance. My heartfelt thanks go to Mrs. Doris Gardner for her
patient and quick typing.

Finally, I wish to express my deep indebtedness to Professor
Harold Hotelling, who is responsible for my deciding to come to
Chapel Hill for higher studies and research in Statistics.
TABLE OF CONTENTS
                                                              Page

Acknowledgements  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .   ii
Introduction  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .   iv
Notation  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .   vii

PART

I.   ON THE CUMULANTS OF A GENERAL RENEWAL PROCESS  .  .  .     1

     1.0.  Summary  .  .  .  .  .  .  .  .  .  .  .  .  .  .    1
     1.1.  Some notation and preliminary lemmas  .  .  .  .     1
     1.2.  The Asymptotic form of φ̃_n(t)  .  .  .  .  .  .     7
     1.3.  The Asymptotic behaviour of the ψ-cumulants  .  .   22
     1.4.  The formal calculations  .  .  .  .  .  .  .  .     31
     1.5.  Check on the formal calculations  .  .  .  .  .     44
     1.6.  Cumulants of the Equilibrium Process  .  .  .  .    47

II.  ESTIMATION  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .   51

     2.0.  Summary  .  .  .  .  .  .  .  .  .  .  .  .  .  .   51
     2.1.  The Statistic and its properties  .  .  .  .  .     51
     2.2.  Asymptotic normality of the statistic V̂(τ)
           when the underlying process is Poisson  .  .  .     72
     2.3.  Some difficulties in certain non-null
           distributions of the statistic V̂(τ)  .  .  .  .    77

III. SOME GENERALIZATIONS OF THE RENEWAL PROCESS  .  .  .  .   81

     3.0.  Summary  .  .  .  .  .  .  .  .  .  .  .  .  .  .   81
     3.1.  The Asymptotic mean and variance of the
           random variable Y_t  .  .  .  .  .  .  .  .  .  .   82
     3.2.  The Asymptotic mean and variance of the one-
           dimensional version of Hammersley's random
           variable Y_t  .  .  .  .  .  .  .  .  .  .  .  .    93
     3.3.  The multidimensional cumulative process
           treated as a special case of the one-
           dimensional cumulative process  .  .  .  .  .  .
INTRODUCTION
This dissertation considers some problems that arise in Smith's
expository article "On Renewal theory and its ramifications" [21]
and the discussion thereon. In his article Smith deals very
comprehensively with the subject, from its historical background
right up to the present stage. He also discusses several applied
aspects of renewal theory. He does not consider the inferential
problems, since this aspect was discussed at some length by Cox
[3] and Cox and Smith [4, 5]. It was Feller who recognized and
championed the methods of renewal theory in his papers [7, 8],
where he deals with what we call discrete renewal processes.
Later notable contributions have been made, among others, by
Blackwell [1, 2], Takacs [23], Kendall, D. G. [13, 14], Karlin
[11, 12], Cox [3, 4, 5], Hammersley [9, 10] and Smith [17, 18,
19, 20, 22].
This dissertation is divided into three parts. In the first
part, the results of Smith [20] on the cumulants of a renewal
process are extended to the case of a general renewal process.
We first establish the asymptotic representation theorems for the
φ-moments and ψ-cumulants, and thereby for the cumulants of N_t,
for a general renewal process. The table of the first eight
cumulants of a renewal process is then extended to the case of a
General Renewal process. We finally prove in this section a
theorem which leads to a check on the calculations.
As a particular
application of our general results we derive the cumulants of the
"Equilibrium Process".
In the second part of this dissertation an estimate is
proposed for the estimation of the variance-time curve of a
renewal process. Assuming "equilibrium", it is shown that this
estimator is asymptotically unbiased for any underlying renewal
process. Under the null hypothesis that the underlying renewal
process is random (Poisson process), the variance and
autocovariance function of the estimator are computed, and the
consistency and asymptotic normality of the estimator are
established. The latter result provides a large sample test of
significance for randomness of the underlying process, other
tests based on the likelihood ratio criterion having been
discussed earlier by Sukhatme [24]. Finally, several difficulties
which one encounters in obtaining useful non-null distributions
of the estimator are discussed.
In the third part of this paper we have shown that a certain
"generalization" of renewal theory proposed by Hammersley [10]
is essentially included in Smith's theory of cumulative processes.
We have shown that a separate treatment for the multidimensional
version of Hammersley's random variable is unnecessary, and that
the one dimensional case of his random variable differs from the
r.v. considered by Smith [22] in his theory of cumulative
processes in that Smith's r.v. contains just one additional term
of the renewal sequence. This simple difference takes the degree
of complication a step further in Hammersley's case. Besides
showing how it is necessary to make a certain "global" assumption
of Smith, we have finally demonstrated that, apart from the minor
difference in the one dimensional version of the generalizations
considered by both, which is a matter of definition, the formulae
for the asymptotic mean and variance of Smith, which have been
carried to a further degree of accuracy in this part, will meet
the requirements of Hammersley's multidimensional version
completely.
NOTATION

Standard Symbol     Meaning of the Symbol

r.v.                random variable
R.P.                Renewal Process
G.R.P.              General Renewal Process
L.T.                Laplace Transform
L-S.T.              Laplace-Stieltjes Transform
o(T)                A quantity which, when divided by T, tends
                    to zero as T tends to its limit
O(T)                A quantity of the order of magnitude of T
i.e.                that is
∴                   therefore
∈                   belongs to
→                   tends to
⇒                   converges in distribution to
B.V.                Bounded variation
w.r.t.              with respect to
R.H.S.              Right hand side
L.H.S.              Left hand side
N(0, σ²)            Normal distribution with mean 0 and
                    variance σ²
≃                   is asymptotically equal to
PART I
ON THE CUMULANTS OF A GENERAL RENEWAL PROCESS
1.0. Summary
In this part the results of Smith [20] on the cumulants of a
Renewal Process are extended to the case of a General Renewal
Process. After establishing the asymptotic representation
theorems for the φ-moments and ψ-cumulants of a General Renewal
Process, the table of the first eight cumulants of a Renewal
Process has been extended to the case of a General Renewal
Process. A theorem is proved leading to a check on the
calculations. As a particular case of the General Renewal
Process, the cumulants of the "Equilibrium Process" are obtained.
1.1. Some notation and Preliminary Lemmas:

Let {x_i}, i = 1, 2, ..., be an infinite sequence of
independent, non-negative, identically distributed random
variables which are not all zero with probability one, and let
their distribution function be F(x). Unless otherwise stated it
is assumed that F(x) is such that, for some k, its k-th
convolution F_k(x) has an absolutely continuous component. Let
μ_r = E(x_i^r), if this moment exists. Such a sequence of r.v.'s
is called a renewal process (R.P.). Further, let x_0 be a
non-negative r.v., independent of the x_i (i ≥ 1), with
distribution function K(x), K(x) not necessarily identical with
F(x), and let ν_r = E(x_0^r), if this moment exists. Then the
augmented sequence {x_i}, i = 0, 1, ..., is called a General
Renewal process (G.R.P.).

For the R.P., let S_k = x_1 + ... + x_k; N_t is defined as the
maximum suffix k for which S_k ≤ t, subject to the condition
N_t = 0 if x_1 > t. In the obvious renewal application the x_i
represent the lifetimes of articles being renewed, and N_t is
the number of renewals made by time t, subject to the original
object having been installed at time t = 0. On the other hand,
one considers a G.R.P. when the initial installation was not
known to be at t = 0, but only to be at some point chosen on the
positive semiaxis of the time scale in accordance with a certain
probability distribution. In particular, x_0 of the G.R.P. might
be the "residual lifetime" of the article in use at the
arbitrarily chosen origin t = 0.

For a G.R.P., let S_0 = x_0, S_k = x_0 + x_1 + ... + x_k
(k = 0, 1, ...), and for all t ≥ 0 define the random variable
N_t as the greatest integer k such that S_{k-1} ≤ t.

In this chapter the random variable N_t associated with a
G.R.P. is studied following the lines of the study by Smith [20]
of the random variable N_t associated with an R.P. We deal only
with a continuous G.R.P.; that is, we suppose there is no λ > 0
such that with probability one every x_n is divisible by λ. It
will be convenient to introduce another random variable Ñ_t,
defined as the maximum suffix k such that S_k < t, for t > 0.
It will be seen in Lemma 2 that the studies of N_t and Ñ_t are
equivalent, and we therefore concentrate on Ñ_t rather than N_t,
as its study is less complicated.
Let us introduce some moment formulae w.r.t. the r.v. N_t;
when these moments involve Ñ_t instead of N_t, the corresponding
symbol ~ is put on the moments. Let m_r(t) and K_r(t) denote
respectively the r-th conventional moment and cumulant of N_t.
The corresponding φ-moment of order n is

    φ_n(t) = E{(N_t + 1)(N_t + 2) ... (N_t + n)},

with generating function

    Φ_t(ζ) = Σ_{n=0}^∞ φ_n(t) ζ^n/n! = E (1 - ζ)^{-(N_t + 1)},

and the ψ-cumulant of order n is obtained by a formal power
series expansion of the cumulant generating function

    Ψ_t(ζ) = log Φ_t(ζ) = Σ_{n=1}^∞ ψ_n(t) ζ^n/n! .

Let M_t(z) = E e^{zN_t} and K_t(z) = log M_t(z). Then it can be
shown (Smith [20]) that

    M_t(z) = e^{-z} Φ_t(1 - e^{-z}),

and hence that

    K_t(z) = -z + Ψ_t(1 - e^{-z}).

By these formulae, the ψ-cumulants of N_t can be expressed in
terms of the conventional cumulants and vice versa.
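The relation between the two generating functions can be verified numerically. The following sketch checks K_t(z) = -z + Ψ_t(1 - e^{-z}) at a few points, using a small toy law for N_t; the pmf values are hypothetical and serve only to test the identity:

```python
import math

# toy distribution for N_t at a fixed t (hypothetical numbers,
# used only to exercise the identity)
pmf = {0: 0.3, 1: 0.4, 2: 0.2, 3: 0.1}

def M(z):
    # conventional moment generating function  E e^{zN}
    return sum(p * math.exp(z * n) for n, p in pmf.items())

def phi(zeta):
    # phi-moment generating function  E (1 - zeta)^{-(N+1)}
    return sum(p * (1 - zeta) ** (-(n + 1)) for n, p in pmf.items())

for z in (-0.5, 0.1, 0.7):
    lhs = math.log(M(z))                         # K_t(z)
    rhs = -z + math.log(phi(1 - math.exp(-z)))   # -z + Psi_t(1 - e^{-z})
    assert abs(lhs - rhs) < 1e-12
print("K_t(z) = -z + Psi_t(1 - e^{-z}) verified on a toy law")
```

The identity is exact, since at ζ = 1 - e^{-z} the weight (1 - ζ)^{-(N+1)} reduces to e^{z(N+1)}.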
In what follows we will assume that the functions we are
dealing with vanish identically for negative values of their
argument. If Ω(t) is any function of bounded variation (B.V.)
in every finite interval, then we write

    Ω*(s) = ∫_0^∞ e^{-st} dΩ(t)

for the Laplace-Stieltjes transform (L-S.T.) of Ω(t). If Ω(t)
is any function which is integrable in every finite interval,
then we write

    Ω°(s) = ∫_0^∞ e^{-st} Ω(t) dt

for the Laplace transform (L.T.) of Ω(t). All the transforms
that arise in this chapter will exist for ℜs > 0. Also, when
Ω(t) has both an L-S.T. and an L.T., then Ω*(s) = s Ω°(s). We
will make repeated use of the following lemma of Smith [20].

Lemma A. A necessary and sufficient condition for the
distribution function F(x) of a non-negative r.v. to have its
first k moments finite is that there exists another distribution
function F_(k)(x) such that

    F*(s) = 1 - μ_1 s + μ_2 s²/2! - ...
            + (-s)^{k-1} μ_{k-1}/(k-1)! + (-s)^k μ_k F*_(k)(s)/k! .
Let η_1, η_2, ..., η_n stand for finite rational functions of
μ_1, μ_2, ..., μ_n and 1/μ_1, and let γ_1, γ_2, ..., γ_n denote
finite rational functions of μ_1, ..., μ_n, ν_1, ..., ν_n and
1/μ_1: any multinomial expression in the μ's, ν's and 1/μ_1 with
numerical coefficients and with maximum degree of any term not
greater than n. We shall now prove

Lemma 1. If ν_n < ∞ and n ≥ 1, then

    φ̃*_n(s) = K*(s) φ'*_n(s) = n! K*(s) / [1 - F*(s)]^n ,

where φ'_n(t) is the corresponding φ-moment of the R.P.

Proof: By definition,

    φ̃_n(t) = E{(Ñ_t + 1)(Ñ_t + 2) ... (Ñ_t + n)},

where Ñ_t is the maximum k such that S_k < t, and

    Pr[S_k < t ≤ S_{k+1}] = K(t) ⋆ F_k(t) - K(t) ⋆ F_{k+1}(t),

where F_j(t) denotes the j-th convolution of F(x) and ⋆
represents the Stieltjes convolution.
Therefore

    φ̃_n(t) = Σ_{k=0}^∞ (k+1)(k+2)...(k+n)
              [K(t) ⋆ F_k(t) - K(t) ⋆ F_{k+1}(t)].

Taking L-S.T. on both sides we get

    φ̃*_n(s) = K*(s) [1 - F*(s)] Σ_{k=0}^∞ (k+1)(k+2)...(k+n) [F*(s)]^k

             = n! K*(s) / [1 - F*(s)]^n ,

since Σ_{k=0}^∞ (k+1)(k+2)...(k+n) x^k = n!/(1-x)^{n+1} for
|x| < 1. Hence the lemma is proved.
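The series identity used in the last step can be checked mechanically; a short sympy sketch, with an arbitrary truncation order K:

```python
import sympy as sp

x = sp.symbols('x')
K = 20  # arbitrary truncation order for the power-series comparison

for n in range(1, 5):
    # partial sum of (k+1)(k+2)...(k+n) x^k, truncated at k = K
    partial = sum(sp.rf(k + 1, n) * x**k for k in range(K + 1))
    closed = sp.factorial(n) / (1 - x) ** (n + 1)
    # both sides agree as power series up to order K
    diff = sp.series(closed, x, 0, K + 1).removeO() - partial
    assert sp.expand(diff) == 0
print("sum_k (k+1)...(k+n) x^k = n!/(1-x)^(n+1) checked for n = 1..4")
```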
Lemma 2.

    φ_n(t) = n! [ 1 + Σ_{j=1}^n φ̃_j(t)/j! ] .

Proof: By definition

    φ_n(t) = E{(N_t + 1)(N_t + 2) ... (N_t + n)},

and, since N_t = 0 when x_0 > t, while for k ≥ 1

    Pr[N_t = k] = Pr[S_{k-1} ≤ t < S_k]
                = K(t) ⋆ F_{k-1}(t) - K(t) ⋆ F_k(t),

taking L-S.T. on both sides gives

    φ*_n(s) = n! [1 - K*(s)]
              + K*(s) [1 - F*(s)] Σ_{k=1}^∞ (k+1)(k+2)...(k+n) [F*(s)]^{k-1}

            = n! [1 - K*(s)] + n! K*(s)
              + n! K*(s) Σ_{j=1}^n [1 - F*(s)]^{-j}

            = n! [ 1 + Σ_{j=1}^n φ̃*_j(s)/j! ] ,

by Lemma 1. Inverting the transforms,

    φ_n(t) = n! [ 1 + Σ_{j=1}^n φ̃_j(t)/j! ] .
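Lemma 2 reduces, with N_t = Ñ_t + 1 in the continuous case, to the polynomial identity (k+2)(k+3)...(k+n+1)/n! = 1 + Σ_{j=1}^n (k+1)...(k+j)/j!; taking expectations at k = Ñ_t recovers the lemma. A sympy sketch of this identity:

```python
import sympy as sp

k = sp.symbols('k')

for n in range(1, 8):
    # phi_n/n! with N = k + 1, i.e. (k+2)(k+3)...(k+n+1)/n!
    lhs = sp.prod([k + 2 + i for i in range(n)]) / sp.factorial(n)
    # 1 + sum of phi~_j/j! with N~ = k
    rhs = 1 + sum(sp.prod([k + 1 + i for i in range(j)]) / sp.factorial(j)
                  for j in range(1, n + 1))
    assert sp.expand(lhs - rhs) == 0
print("phi_n/n! = 1 + sum_j phi~_j/j! checked for n = 1..7")
```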
The importance of the above result is that it shows that the
asymptotic behavior of the moments φ_n(t) can easily be inferred
from the asymptotic behavior of the moments φ̃_n(t).

1.2. The Asymptotic form of φ̃_n(t).

We shall first obtain the asymptotic form of φ̃_n(t); then, by
using Lemma 2, the asymptotic form of φ_n(t) can be written down
immediately.
From Lemma 1, we have

(1.2.1)    φ̃*_n(s) = φ'*_n(s) K*(s),

and we have therefore

(1.2.2)    φ̃_n(t) = ∫_0^t φ'_n(t-x) dK(x).

If μ_{n+1} < ∞, we have from Theorem 1 of Smith [20] that

(1.2.3)    φ'_n(t) = η_1 t^n + η_2 t^{n-1} + ... + η_{n+1} + Λ_n(t),

Λ_n(t) ∈ B, B being the class of functions λ(t) which are of
bounded variation, tend to zero as t → +∞, and are such that

    λ(t) - λ(t-a) = o(t^{-1})  as  t → ∞,  for every a > 0.

Substituting from (1.2.3) in (1.2.2) we get

(1.2.4)    φ̃_n(t) = Σ_{j=1}^{n+1} η_j ∫_0^t (t-x)^{n+1-j} dK(x)
                     + ∫_0^t Λ_n(t-x) dK(x).

Taking L.T. on both sides of (1.2.4), we get

(1.2.5)    φ̃°_n(s) = Σ_{j=1}^{n+1} η_j (n+1-j)! K*(s)/s^{n+2-j}
                      + Λ°_n(s) K*(s).

From (1.2.5) it is evident that we should assume ν_n < ∞ in
order to account for the entire polynomial part of φ̃_n(t). But,
from Lemma A, for ν_n < ∞ it is both necessary and sufficient
that

(1.2.6)    K*(s) = 1 + γ_1 s + γ_2 s² + ... + γ_{n-1} s^{n-1}
                   + γ_n s^n K*_(n)(s),

where K_(n)(t) is a distribution function and γ_j = (-1)^j ν_j/j! .
Substituting (1.2.6) into (1.2.5) and multiplying out gives

(1.2.7)    φ̃°_n(s) = Σ_{j=1}^{n+1} (n+1-j)! γ_{j,j-1}/s^{n+2-j}
                      + Σ_{j=1}^n γ_{n+1-j,j} [K*_(j)(s) - 1]/s
                      + Λ°_n(s) K*(s),

where each γ_{j,j-1} (with γ_{1,0} = η_1) and each γ_{n+1-j,j}
is a finite rational function of the μ's, ν's and 1/μ_1 of the
kind described above. Hence

(1.2.8)    φ̃_n(t) = η_1 t^n + γ_{2,1} t^{n-1} + ...
                     + γ_{n,n-1} t + γ_{n+1,n} + ω(t),

where

(1.2.9)    ω(t) = γ_{n,1} [K_(1)(t) - 1] + γ_{n-1,2} [K_(2)(t) - 1]
                  + ... + γ_{1,n} [K_(n)(t) - 1]
                  + ∫_0^t Λ_n(t-x) dK(x).

We will now prove that ω(t) belongs to the class B if we further
assume ν_{n+1} < ∞.
Lemma 1.2.1. If the first moment of a distribution function F(x)
is finite, then F(x) - 1 belongs to the class B.

Proof: Clearly F(x) - 1 is of bounded variation and tends to
zero as x → ∞. To prove that

    x [F(x) - F(x-α)] → 0  as  x → +∞,  for every α > 0,

we notice that

    x [F(x) - F(x-α)] ≤ x [1 - F(x-α)] = x ∫_{x-α}^∞ dF(t)
                      ≤ ∫_{x-α}^∞ (t + α) dF(t)
                      = ∫_{x-α}^∞ t dF(t) + α ∫_{x-α}^∞ dF(t).

The right hand side tends to zero as x → ∞, and the left hand
side is always non-negative. Hence

    Lt_{x→∞} x [F(x) - F(x-α)] = 0,

and our lemma is proved.
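As an illustration, a sympy sketch with F exponential (a choice made here only for concreteness; the exponential law has all moments finite, so the lemma and its generalization below both apply):

```python
import sympy as sp

x, a = sp.symbols('x alpha', positive=True)
F = 1 - sp.exp(-x)   # exponential distribution function (for x > 0)

# the extra requirement of class B:  x [F(x) - F(x - alpha)] -> 0
expr = x * (F - F.subs(x, x - a))
assert sp.limit(expr, x, sp.oo) == 0

# the generalized form with x^n, for the first few n
assert all(sp.limit(x**n * (F - F.subs(x, x - a)), x, sp.oo) == 0
           for n in (2, 3))
print("x^n [F(x) - F(x - alpha)] -> 0 for the exponential law")
```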
In fact, if μ_n < ∞, then

    Lt_{x→∞} x^n [F(x) - F(x-α)] = 0  for  α > 0;

this result is a straightforward generalization of the above
lemma.
Since ν_n < ∞, the means of K_(1)(t), ..., K_(n-1)(t) are all
finite, and if we further assume ν_{n+1} < ∞ the mean of K_(n)(t)
is also finite; applying Lemma 1.2.1 we have

(1.2.10)    γ_{1,n} [K_(n)(t) - 1] ∈ B.

Smith [20] has proved that if μ_{n+1} < ∞, then

(1.2.11)    Λ_n(t) ∈ B.

Using (1.2.11) we discover that if

    φ(t) = ∫_0^t Λ(t-x) dK(x),    Λ(t) ∈ B,

then

(1.2.12)    φ(t) - φ(t-α) = ∫_0^{t-α} [Λ(t-x) - Λ(t-α-x)] dK(x)
                            + ∫_{t-α}^t Λ(t-x) dK(x),

and it is not difficult to show that every term on the right
hand side of (1.2.12) tends to zero as t → ∞, whence

(1.2.13)    φ(t) ∈ B.

Combining (1.2.10) and (1.2.13) we obtain that if μ_{n+1} < ∞
and ν_{n+1} < ∞, then

    ω(t) ∈ B,

and we have therefore proved the following theorem.
Theorem 1.1. If μ_{n+1} < ∞ and ν_{n+1} < ∞, then as t → ∞,

    φ̃_n(t) = η_1 t^n + γ_{2,1} t^{n-1} + ... + γ_{n,n-1} t
              + γ_{n+1,n} + ω(t),

where ω(t) belongs to the class B.

The corresponding theorem for φ_n(t) of a G.R.P. follows from
Lemma 2 and the closure property of B under linear combinations.
Notice that in all three of the moments φ'_n(t), φ̃_n(t) and
φ_n(t) the coefficient of t^n is the same η_1; it is in fact
1/μ_1^n, where μ_1 is the first moment of the distribution F(x).
Now, for a discussion of the asymptotic behavior of the
cumulants of a G.R.P., we need additional information concerning
the remainder term in Theorem 1.1. We will achieve this by
putting additional restrictions on the μ's and ν's. We have from
(1.2.7) that

(1.2.14)    φ̃°_n(s) = n! η_1/s^{n+1} + (n-1)! γ_{2,1}/s^n + ...
                       + γ_{n,n-1}/s² + γ_{n+1,n}/s + ω°(s),

and we have therefore, for the Laplace transform of ω(t),

(1.2.15)    ω°(s) = γ_{n,1} [K*_(1)(s) - 1]/s
                    + γ_{n-1,2} [K*_(2)(s) - 1]/s + ...
                    + γ_{1,n} [K*_(n)(s) - 1]/s + Λ°_n(s) K*(s).
We will now prove the following basic lemma.

Lemma 1.2.2. If μ_{n+p} < ∞ and ν_{n+p} < ∞, p > 0, then, as
s → 0+,

    (-d/ds)^p φ̃°_n(s) = (n+p)! η_1/s^{n+p+1}
                         + (n+p-1)! γ_{2,1}/s^{n+p} + ...
                         + (p+1)! γ_{n,n-1}/s^{p+2}
                         + p! γ_{n+1,n}/s^{p+1} + o(1/s).

Proof: Differentiating (1.2.14) p times on both sides, the
polynomial part differentiates term by term into the terms
displayed in the lemma, since (-d/ds)^p s^{-m}
= m(m+1)...(m+p-1) s^{-(m+p)}. For the remainder, we show that
if the first p moments of any distribution function F(x) of a
non-negative random variable are finite, then

(1.2.17)    (-d/ds)^p [F*(s)/s] = p!/s^{p+1} + o(1/s).

To begin with, we remark that since the first p moments of F(x)
are finite, we have from Lemma A

    F*(s)/s = 1/s - μ_1 + (μ_2/2!) s - ...
              + (-1)^p (μ_p/p!) s^{p-1} F*_(p)(s),

where F_(p)(t) is a distribution function. Differentiating p
times on both sides of this equation yields

(1.2.18)    (-d/ds)^p [F*(s)/s] = p!/s^{p+1}
                + (-1)^p (μ_p/p!) (-d/ds)^p [s^{p-1} F*_(p)(s)].

But for any function of bounded variation it follows from
Smith [20] that

(1.2.19)    (a)  Ω*(p)(s) = o(s^{-p})  as  s → 0+,
            (b)  (-d/ds)^p [s^{p-1} F*_(p)(s)] = o(1/s).

From (1.2.18) and (b) above, (1.2.17) follows. If we now apply
(1.2.17) to each of the terms γ [K*_(j)(s) - 1]/s occurring in
ω°(s), and use the fact that K_(1), K_(2), ... have their first
p moments finite (in view of ν_{n+p} < ∞), the leading
contributions p!/s^{p+1} cancel and each such term is o(1/s);
and

    (-d/ds)^p [Λ°_n(s) K*(s)] = o(1/s)

by (1.2.19). The lemma is therefore proved.
From Lemma 1 we have that

(1.2.24)    φ̃*_n(s) = φ'*_n(s) K*(s).

If we divide both sides by s, we find that

(1.2.25)    φ̃°_n(s) = φ'°_n(s) K*(s),

and hence, by Leibniz's rule, that

(1.2.26)    (-1)^p φ̃°(p)_n(s)
              = Σ_{r=0}^p (p over r) (-1)^{p-r} φ'°(p-r)_n(s)
                (-1)^r K*(r)(s).

Let us write

(1.2.27)    K*[r](s) = (-1)^r K*(r)(s)/ν_r .

Then K*[r] is the L-S.T. of a distribution function, and if K(t)
has its first n+p moments finite, then K*[r], r = 0, 1, ..., p,
has its first n+p-r moments finite. Upon using (1.2.27), (1.2.26)
can be written as

(1.2.28)    (-1)^p φ̃°(p)_n(s)
              = Σ_{r=0}^p (p over r) ν_r (-1)^{p-r} φ'°(p-r)_n(s)
                K*[r](s),

with ν_0 = 1 and K*[0] = K*.
Now, we have from Smith [20] that if μ_{n+p+1} < ∞, p ≥ 0, then

(1.2.29)    (-1)^p φ'°(p)_n(s) = (n+p)! η_1/s^{n+p+1} + ...
                                 + p! η_{n+1}/s^{p+1} + Ω°(s),

where Ω°(s) is the L.T. of a function which belongs both to the
class B and to the class L, L being the class of functions λ(t)
such that λ(t)/(1+t) belongs to L_1(0, ∞). We denote the
intersection of B and L by R.
Applying (1.2.29) repeatedly in (1.2.28), and using the fact
that

    φ'°_n(s) = n! η_1/s^{n+1} + (n-1)! η_2/s^n + ... + η_{n+1}/s
               + Λ°_n(s),

we obtain an expansion of (-1)^p φ̃°(p)_n(s) in powers of 1/s,
together with remainder terms of the forms Ω°_j(s) K*[r](s) and
Λ°_n(s) K*[r](s). Comparing this expansion with Lemma 1.2.2 and
using Lemma A, we can identify the coefficients and obtain

(1.2.31)    (-1)^p φ̃°(p)_n(s) = (n+p)! η_1/s^{n+p+1}
                + (n+p-1)! γ_{2,1}/s^{n+p} + ...
                + (p+1)! γ_{n,n-1}/s^{p+2} + p! γ_{n+1,n}/s^{p+1}
                + Ω°(s),

where, from the identification of terms,

(1.2.32)    Ω(t) is a sum of terms of the forms c [K_(j)(t) - 1]
            and ∫_0^t λ(t-x) dK[r](x), λ ∈ B,

the c's being numerical multiples of the γ's. It is clear from
(1.2.32) that Ω°(s) is the Laplace transform of a function of
B.V. which tends to zero at ∞. Since (-1)^p φ̃°(p)_n(s) is the
L.T. of t^p φ̃_n(t), inversion of (1.2.31) gives

(1.2.33)    t^p φ̃_n(t) = t^p [η_1 t^n + γ_{2,1} t^{n-1} + ...
                          + γ_{n,n-1} t + γ_{n+1,n}] + Ω(t).

The terms in (1.2.32) whose first moments are not finite in
accordance with the assumptions made so far are K_(n+p) and
K[p], and all these have their means finite if we further assume
μ_{n+p+1} < ∞ and ν_{n+p+1} < ∞. Also, if F(x) is a distribution
function with a finite mean μ_1, then F(x) - 1 ∈ L_1(0, ∞), as
is seen from the fact that

(1.2.34)    ∫_0^∞ [1 - F(x)] dx = μ_1 .

In view of Lemma 1.2.1, each term c [K_(j)(t) - 1] of Ω(t)
belongs both to B and to L_1(0, ∞). The convolution terms are
B.V.; we have shown in (1.2.12) that they belong to B, and if
λ(t) ∈ L_1(0, ∞) then ∫_0^t λ(t-x) dK(x) also belongs to
L_1(0, ∞). We have thus proved:
(A)  If μ_{n+p+1} < ∞, ν_{n+p+1} < ∞, n > 0, p > 0, then

(1.2.37)    φ̃_n(t) = η_1 t^n + γ_{2,1} t^{n-1} + ...
                      + γ_{n,n-1} t + γ_{n+1,n} + Ω(t)/t^p ,

where Ω(t) ∈ B;

(B)  if μ_{n+p+1} < ∞ and ν_{n+p+1} < ∞, n > 0, p > 0, then

(1.2.38)    φ̃_n(t) = η_1 t^n + γ_{2,1} t^{n-1} + ...
                      + γ_{n,n-1} t + γ_{n+1,n} + Λ(t),

where Λ(t) belongs to B and also to L_1(0, ∞).

Comparing (A) and (B) we must have

(1.2.39)    Ω(t)/t^p = Λ(t),

or, equivalently, the remainder may be written as λ(t)/(1+t)^p
with λ(t) ∈ R.
Theorem 1.2.
I
+
"- (t)
I
,
where
"- (t)
(l+t )p
Let us now recall the relationship (Lemma 2)
(1.2.40)
~
n
n
(t)
=
.---"
l+j,h
n~
between
I
~),
j~
I
$
n
(t)
and ~ (t)
n
of the G.R.P.
€
H
22
In view of the additive property of the class
R and
for a G.R.P.
the result corresponding to Theorem 1.2
(le2.~O),
may be
stated as
Theorem 1.2
t
If
>
p
r n,n-l t
+
where
"-( t)
E
'I n+l,n
"-( t)
+
(i;t~)p
R.
1.3. The Asymptotic behavior of the ψ-cumulants:

ψ_n(t) is a linear combination, with numerical weights, of a
finite number of terms like

(1.3.1)    φ_{p_1}(t) φ_{p_2}(t) ... φ_{p_r}(t),
           p_1 + p_2 + ... + p_r = n.

Using (1.2.41) in the individual terms of (1.3.1), and also the
property that if λ_1(t) ∈ B and λ_2(t) ∈ R then
λ_1(t) λ_2(t) ∈ R, we get

(1.3.2)    ψ_n(t) = γ'_1 t^n + γ'_{2,1} t^{n-1} + ...
                    + γ'_{n,n-1} t + γ'_{n+1,n} + λ(t)/(1+t)^p ,

where λ(t) ∈ R.

In the corresponding representation for ψ_n(t) of the Renewal
Process, Smith [21] has shown that all the coefficients of the
non-linear terms in t vanish identically. Before establishing
that the same thing is true for ψ_n(t) of the General Renewal
Process, we will demonstrate the truth by computing the first
three ψ's of a G.R.P.

In what follows the finiteness of the appropriate moments is
assumed, the remainder term omitted, and the sign "≈" is used to
represent the approximation.
ψ_1(t):  Assumption: μ_2 < ∞, ν_1 < ∞.

We have by definition, from Lemma 1,

    φ̃°_1(s) = K*(s)/[s(1 - F*(s))].

But

    K*(s)/[s(1 - F*(s))]
        = [1 - ν_1 s + ...] / [μ_1 s² (1 - (μ_2/2μ_1) s + ...)]
        = 1/(μ_1 s²) + (μ_2/(2μ_1²) - ν_1/μ_1)(1/s) + ... ,

so that

    φ̃_1(t) ≈ t/μ_1 + μ_2/(2μ_1²) - ν_1/μ_1 .

If K(x) = F(x) we get the corresponding result of a Renewal
Process. Also

    m̃_1(t) = E Ñ_t = φ̃_1(t) - 1
            ≈ t/μ_1 - 1 + μ_2/(2μ_1²) - ν_1/μ_1 .
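The expansion above can be reproduced mechanically; a sympy sketch, keeping only the moment terms needed for the two leading coefficients (the symbols μ_i, ν_i are those of the text):

```python
import sympy as sp

s = sp.symbols('s')
m1, m2, n1 = sp.symbols('mu1 mu2 nu1', positive=True)

# truncated transforms (Lemma A):
Fstar = 1 - m1 * s + m2 * s**2 / 2   # F*(s) up to the mu2 term
Kstar = 1 - n1 * s                   # K*(s) up to the nu1 term

# transform of phi~_1(t) = E(N~_t + 1):  K*/(s (1 - F*))   (Lemma 1)
ser = sp.expand(sp.series(Kstar / (s * (1 - Fstar)), s, 0, 1).removeO())

# 1/s^2 coefficient gives the slope 1/mu1; 1/s gives the constant
assert sp.simplify(ser.coeff(s, -2) - 1 / m1) == 0
assert sp.simplify(ser.coeff(s, -1) - (m2 / (2 * m1**2) - n1 / m1)) == 0
print("phi~_1(t) ~ t/mu1 + mu2/(2 mu1^2) - nu1/mu1")
```

In the renewal case ν_1 = μ_1 the constant reduces to μ_2/(2μ_1²) - 1 + 1, i.e. to Smith's value.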
ψ_2(t):  Assumption: μ_3 < ∞, ν_2 < ∞.

By definition

    ψ_2(t) = φ_2(t) - φ_1²(t),

and from Lemma 2

    φ_2(t) = 2 + 2 φ̃_1(t) + φ̃_2(t).

Now, from Lemma 1,

    φ̃°_2(s) = 2 K*(s)/[s(1 - F*(s))²],

and expanding as before,

    φ̃_2(t) ≈ t²/μ_1² + (2μ_2/μ_1³ - 2ν_1/μ_1²) t
              + 3μ_2²/(2μ_1⁴) - 2μ_3/(3μ_1³) - 2ν_1 μ_2/μ_1³
              + ν_2/μ_1² .

Substituting for φ̃_1(t) and φ̃_2(t), the coefficient of t²
cancels and we get

    ψ_2(t) ≈ (μ_2/μ_1³) t + 5μ_2²/(4μ_1⁴) - 2μ_3/(3μ_1³)
             + ν_2/μ_1² - ν_1 μ_2/μ_1³ - ν_1²/μ_1² + 1 .

Also

    Var[N_t] = φ_2(t) - φ_1(t) - φ_1²(t) = ψ_2(t) - φ_1(t)

             ≈ (μ_2 - μ_1²)/μ_1³ · t + 5μ_2²/(4μ_1⁴)
               - 2μ_3/(3μ_1³) - μ_2/(2μ_1²) + ν_2/μ_1²
               - ν_1 μ_2/μ_1³ - ν_1²/μ_1² + ν_1/μ_1 .

If we put ν_1 = μ_1 and ν_2 = μ_2 in the above we get the
variance-time relation of the renewal process obtained earlier
by Smith.
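The variance-time slope can be checked by the same device: expanding the transforms of φ̃_1 and φ̃_2 from Lemma 1 and forming the variance, the coefficient of t comes out as (μ_2 - μ_1²)/μ_1³, free of the moments of K(x), in agreement with the fact that K(x) contributes only to the constant term. A sympy sketch:

```python
import sympy as sp

s, t = sp.symbols('s t')
m1, m2, m3, n1, n2 = sp.symbols('mu1 mu2 mu3 nu1 nu2', positive=True)

Fstar = 1 - m1*s + m2*s**2/2 - m3*s**3/6   # F*(s), truncated (Lemma A)
Kstar = 1 - n1*s + n2*s**2/2               # K*(s), truncated

# transforms of phi~_1 = E(N~+1) and phi~_2 = E(N~+1)(N~+2)  (Lemma 1)
p1 = sp.expand(sp.series(Kstar / (s * (1 - Fstar)), s, 0, 1).removeO())
p2 = sp.expand(sp.series(2 * Kstar / (s * (1 - Fstar)**2), s, 0, 1).removeO())

def poly_part(expr):
    # invert the Laurent terms:  1/s^(k+1)  <->  t^k/k!
    return sp.expand(sum(expr.coeff(s, -(k + 1)) * t**k / sp.factorial(k)
                         for k in range(4)))

EN = poly_part(p1) - 1                       # E N~_t   (polynomial part)
EN2 = sp.expand(poly_part(p2) - 3*EN - 2)    # E N~_t^2 from (N~+1)(N~+2)
var = sp.expand(EN2 - EN**2)

# the t^2 terms cancel; the slope is (mu2 - mu1^2)/mu1^3, free of nu's
assert sp.simplify(var.coeff(t, 2)) == 0
assert sp.simplify(var.coeff(t, 1) - (m2 - m1**2) / m1**3) == 0
print("Var N~_t ~ (mu2 - mu1^2) t / mu1^3 + const")
```

Substituting the exponential moments μ_2 = 2μ_1², μ_3 = 6μ_1³, ν_1 = μ_1, ν_2 = 2μ_1² reduces the whole polynomial to t/μ_1, the Poisson case.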
ψ_3(t):  Assumption: μ_4 < ∞, ν_3 < ∞.

By definition

    ψ_3(t) = φ_3(t) - 3 φ_1(t) φ_2(t) + 2 φ_1³(t),

and from Lemma 2

    φ_3(t) = 6 + 6 φ̃_1(t) + 3 φ̃_2(t) + φ̃_3(t),

with, from Lemma 1,

    φ̃°_3(s) = 6 K*(s)/[s(1 - F*(s))³].

Expanding the transforms as before and substituting, we find
that the coefficients of t³ and t² in ψ_3(t) vanish identically,
leaving a representation of the form

    ψ_3(t) ≈ γ_3 t + γ'_{4,3} ,

in which the coefficient of t does not involve the moments of
K(x). Also, as in the cases of ψ_1(t) and ψ_2(t), we find that
the contribution due to K(x) to the constant term in ψ_3(t) is
by way of an additive function of the μ's and ν's.
We will now prove that the above observations are true for
ψ_n(t) of a General Renewal Process. We will also show that the
coefficient of t in the asymptotic representation of ψ_n(t) does
not involve the moments of K(x), and finally that the
contribution to the constant term due to K(x) is by way of an
additive function of the μ's and ν's.

We have proved in Lemma 2 that

(1.3.3)    φ_n(t)/n! = 1 + φ̃_1(t)/1! + φ̃_2(t)/2! + ...
                       + φ̃_n(t)/n! ,

so that

(1.3.4)    φ_{r+1}(t)/(r+1)! = φ_r(t)/r! + φ̃_{r+1}(t)/(r+1)! .

Multiplying both sides of (1.3.4) by z^{r+1} and summing w.r.t.
r from 0 to ∞, we get, for the generating functions
Φ*_s(z) = Σ_{n=0}^∞ φ*_n(s) z^n/n! and
Φ̃*_s(z) = Σ_{n=0}^∞ φ̃*_n(s) z^n/n! (φ_0 = φ̃_0 = 1),

(1.3.6)    Φ*_s(z) = Φ̃*_s(z)/(1 - z).

Furthermore, from Lemma 1,

(1.3.7)    Φ̃*_s(z) = 1 + Σ_{n=1}^∞ K*(s) z^n/[1 - F*(s)]^n
                    = 1 + K*(s) z/[1 - F*(s) - z].

If we now let Q = 1 - F*(s) and express K* in the form

(1.3.8)    K*(s) = 1 + λ_1 Q + λ_2 Q² + ... ,

then, on substituting from (1.3.8) into (1.3.7), we discover
that

(1.3.9)    Φ̃*_s(z) = Λ(z) Φ'*_s(z) + R*(s, z),

where Φ'*_s(z) = Σ_{n=0}^∞ z^n/Q^n = Q/(Q - z) is the
corresponding generating function of the R.P.,

    Λ(z) = 1 + λ_1 z + λ_2 z² + ... ,

and

(1.3.10)    R*(s, z) = Σ_{n=0}^∞ z^n Σ_{j=n+1}^∞ λ_j Q^{j-n} .
Now Q → 0 as s → 0, and term-by-term examination of (1.3.10)
shows that

(1.3.11)    Lt_{s→0} R*(s, z) = 0,

and hence, for the corresponding remainder function,

(1.3.12)    Lt_{t→∞} R(t, z) = 0.

Thus we have

(1.3.13)    Φ̃_t(z) = Λ(z) Φ'_t(z) + R(t, z),

where Φ'_t(z) is the φ-moment generating function of the R.P.,
so that

(1.3.15)    Φ̃_t(z) ≈ Λ(z) Φ'_t(z),

and, by (1.3.6),

(1.3.16)    Φ_t(z) ≈ Λ(z) Φ'_t(z)/(1 - z).

Upon taking logarithms on both sides of (1.3.16), we find

(1.3.17)    Ψ_t(z) = Ψ'_t(z) + log Λ(z) - log (1 - z),

where Ψ_t(z) is the ψ-cumulant generating function of the G.R.P.
and Ψ'_t(z) is the ψ-cumulant generating function of the R.P.
Let γ_{n,n} denote n! times the coefficient of z^n in the power
series expansion of log Λ(z) - log (1 - z). Then from (1.3.17)
we obtain

(1.3.18)    ψ_n(t) = ψ'_n(t) + γ_{n,n} .

Smith [20] has shown that if μ_{n+p+1} < ∞, p ≥ 0, then

(1.3.19)    ψ'_n(t) = γ_n t + γ_{n+1} + λ(t)/(1+t)^p ,
            λ(t) ∈ R.

Substituting (1.3.19) in (1.3.18) we find

(1.3.20)    ψ_n(t) = γ_n t + γ_{n+1} + γ_{n,n} + λ(t)/(1+t)^p .

Comparing (1.3.20) and (1.3.2), we see that the rational
functions γ'_1, γ'_{2,1}, ..., γ'_{n-1,n-2} (the coefficients of
t^n, ..., t²) vanish identically, that the coefficient of t is
γ_n and does not involve any moments of K(x), and finally that

(1.3.21)    γ'_{n+1,n} = γ_{n+1} + γ_{n,n} .

This shows that the contribution due to being a G.R.P. is by
way of an addition to the constant term in the asymptotic
representation of the ψ-cumulant. We have thus proved

Theorem 1.3.1. If μ_{n+p+1} < ∞, ν_{n+p+1} < ∞, n > 0, p > 0,
then

(1.3.22)    ψ_n(t) = γ_n t + γ_{n+1} + γ_{n,n} + λ(t)/(1+t)^p ,

where λ(t) ∈ R.
1.4. The formal calculations:

Assuming μ_1 = 1, Smith [20] has tabulated the first eight γ_n
and γ_{n+1} of (1.3.22). To complete Smith's table for a G.R.P.,
in other words to extend the table of the ψ-cumulants to the
G.R.P., we will now tabulate the first eight γ_{n,n},
n = 1, 2, ..., 8, under the assumption μ_1 = 1.

λ_1, λ_2, ..., λ_n, ... are the coefficients of (1 - F*),
(1 - F*)², ..., (1 - F*)^n, ... in the following power series
representation of K*:

(1.4.1)    K* = 1 + λ_1 (1 - F*) + λ_2 (1 - F*)² + ... .

Denoting the moments about the origin of the distribution K(x)
by ν's and the corresponding moments of F(x) by μ's, and setting
μ_1 = 1, we obtain, by equating the coefficients of the
successive powers of s on either side of the relation (1.4.1),
the following:
(1.4.2)
    −ν_1  =  λ_1 ,
    ν_2/2  =  −(μ_2/2) λ_1 + λ_2 ,
    −ν_3/6  =  (μ_3/6) λ_1 − μ_2 λ_2 + λ_3 ,
    ν_4/24  =  −(μ_4/24) λ_1 + (μ_3/3 + μ_2^2/4) λ_2 − (3μ_2/2) λ_3 + λ_4 ,
    .  .  .  .  .  .  .  .
and so on, up to the coefficient of s^8.

Solving (1.4.2) for the λ's we get

(1.4.3)
    λ_1  =  −ν_1 ,
    λ_2  =  (ν_2 − ν_1 μ_2)/2 ,
    λ_3  =  (ν_1 μ_3 − ν_3)/6 + μ_2 (ν_2 − ν_1 μ_2)/2 ,
    .  .  .  .  .  .  .  .
with λ_4, ..., λ_8 following in the same triangular fashion, as successively lengthier polynomials in the ν's and μ's.

The first eight a_n (n = 1,2,...,8), where a_n is the coefficient of z^n/n! in the power series expansion of log[1 + Λ(z)], Λ(z) = λ_1 z + λ_2 z^2 + ..., are given by

(1.4.4)
    a_1  =  λ_1 ,
    a_2/2!  =  λ_2 − λ_1^2/2 ,
    a_3/3!  =  λ_3 − λ_1 λ_2 + λ_1^3/3 ,
    a_4/4!  =  λ_4 − (λ_1 λ_3 + λ_2^2/2) + λ_1^2 λ_2 − λ_1^4/4 ,
    .  .  .  .  .  .  .  .
and so on up to a_8.
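The coefficient matching in (1.4.1)-(1.4.3) is triangular in the λ's, since the lowest-order term of (1 − F*)^m is (μ_1 s)^m, and so is easy to verify mechanically. The following is a small modern sketch (the function names are ours, not part of the text) that recovers the first three λ's from arbitrary rational moment values with μ_1 = 1:

```python
from fractions import Fraction as Fr
from math import factorial

def series_mul(a, b, n):
    # multiply two power series given as coefficient lists (index = power of s), truncated at s^n
    c = [Fr(0)] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= n:
                c[i + j] += ai * bj
    return c

def one_minus_F_star(mu, n):
    # Taylor coefficients of 1 - F*(s) = mu_1 s - mu_2 s^2/2! + mu_3 s^3/3! - ...
    return [Fr(0)] + [(-1) ** (k - 1) * mu[k] / factorial(k) for k in range(1, n + 1)]

def K_star(nu, n):
    # Taylor coefficients of K*(s) = 1 - nu_1 s + nu_2 s^2/2! - ...
    return [Fr(1)] + [(-1) ** k * nu[k] / factorial(k) for k in range(1, n + 1)]

def solve_lambdas(mu, nu, n):
    # equate coefficients of s, s^2, ..., s^n in (1.4.1); triangular because the
    # lowest-order term of (1 - F*)^m is (mu_1 s)^m
    q = one_minus_F_star(mu, n)
    powers = {1: q}
    for m in range(2, n + 1):
        powers[m] = series_mul(powers[m - 1], q, n)
    target = K_star(nu, n)
    lam = [Fr(0)] * (n + 1)
    for k in range(1, n + 1):
        acc = sum(lam[m] * powers[m][k] for m in range(1, k))
        lam[k] = (target[k] - acc) / powers[k][k]
    return lam

# arbitrary rational moment values with mu_1 = 1, as in the text
mu = [None, Fr(1), Fr(3), Fr(10), Fr(42)]
nu = [None, Fr(2), Fr(7), Fr(30), Fr(140)]
lam = solve_lambdas(mu, nu, 4)
```

The exact rational arithmetic makes the check independent of the particular sample values chosen.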
Now

(1.4.5)    γ_{n,n} = a_n + (n−1)! .

Substituting for the λ's from (1.4.3) in (1.4.4), we have finally, from (1.4.5),

    γ_{1,1}  =  1 − ν_1 ,

    γ_{2,2}  =  ν_2 − ν_1 μ_2 − ν_1^2 + 1 ,

    γ_{3,3}  =  ν_1 μ_3 − ν_3 + 3ν_2 μ_2 − 3ν_1 μ_2^2 + 3ν_1 ν_2 − 3ν_1^2 μ_2 − 2ν_1^3 + 2 ,

    γ_{4,4}  =  ν_4 − ν_1 μ_4 − 4ν_2 μ_3 − 6ν_3 μ_2 + 15ν_2 μ_2^2 − 15ν_1 μ_2^3 + 10ν_1 μ_2 μ_3 + 4ν_1^2 μ_3 + 18ν_1 ν_2 μ_2 − 15ν_1^2 μ_2^2 − 3ν_2^2 − 4ν_1 ν_3 + 12ν_1^2 ν_2 − 12ν_1^3 μ_2 − 6ν_1^4 + 6 ,

and similarly γ_{5,5}, ..., γ_{8,8}, which are successively lengthier polynomials of the same kind in the ν's and μ's, the expansion of γ_{8,8} ending with the constant term + 5040 = 7! .
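A useful sanity check on the tabulated γ_{n,n} (used again in Section 1.5 below) is that they all vanish when F(x) = K(x), i.e. when the ν's coincide with the μ's and μ_1 = 1. The sketch below (formula names are ours) verifies this for the first three entries, exactly, at arbitrary rational moment values:

```python
from fractions import Fraction as Fr

def gamma11(nu, mu):
    return 1 - nu[1]

def gamma22(nu, mu):
    return nu[2] - nu[1] * mu[2] - nu[1] ** 2 + 1

def gamma33(nu, mu):
    return (nu[1] * mu[3] - nu[3] + 3 * nu[2] * mu[2] - 3 * nu[1] * mu[2] ** 2
            + 3 * nu[1] * nu[2] - 3 * nu[1] ** 2 * mu[2] - 2 * nu[1] ** 3 + 2)

# F(x) = K(x) with mu_1 = 1: the nu's coincide with the mu's
mu = [None, Fr(1), Fr(5, 2), Fr(9), Fr(55)]
nu = mu
vals = [gamma11(nu, mu), gamma22(nu, mu), gamma33(nu, mu)]
```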
1.5.  Check on the Formal Calculations.

In the work so far, apart from the assumption of finiteness of the appropriate moments, it has been implicitly assumed that the distribution function F(x) is such that for some k its k-fold convolution F_k(x) has an absolutely continuous component.
Let G_n be the class of functions F(x) which can be written as

(1.5.1)    F(x) = 1 − (1/n) [ e^{−x/v_1} + ... + e^{−x/v_n} ] ,    x > 0 ,

where the v's are real and positive, and let G = ∪_n G_n. Smith has shown that if F(x) ∈ G, then (i) F*(s) is analytic in some open neighborhood of s = 0, and (ii) there is a unique function z_1(s) defined on this neighborhood, vanishing at s = 0, and satisfying the relation

    s = 1 − F*(z_1(s)) ,

and further

(1.5.2)    z_1(s) = Σ_{n=1}^∞ (α_n/n!) s^n ,

where the α_n are the γ_n of (1.3.22). We will now prove that if F(x) ∈ G, then γ_{n,n} can be expressed in terms of the α_n (= γ_n) and the cumulants of K(x), and this relationship provides a check on the calculations of the previous section. We prove

Theorem 1.5.1.  If F(x) ∈ G, then each a_n (and hence each γ_{n,n}) can be expressed in terms of the α_1, ..., α_n of (1.5.2) and the cumulants of K(x), by comparing coefficients in (1.5.3) below.
Proof:  We have

    K*(s) = Σ_{n=0}^∞ λ_n [1 − F*(s)]^n = 1 + Λ(Q(s)) ,

where Q(s) = 1 − F*(s) and Λ(z) = λ_1 z + λ_2 z^2 + ... . Also

    log [1 + Λ(z)] = Σ_{n=1}^∞ (a_n/n!) z^n ,

and therefore

    log K*(s) = Σ_{n=1}^∞ (a_n/n!) [Q(s)]^n .

But

    log K*(s) = Σ_{n=1}^∞ (−1)^n (κ_n/n!) s^n ,

where κ_n is the nth cumulant of K(x). Putting s = z_1(s) in the above and making use of (1.5.1) and (1.5.2), we get

(1.5.3)    Σ_{m=1}^∞ (−1)^m (κ_m/m!) [z_1(s)]^m = Σ_{n=1}^∞ (a_n/n!) s^n .

Comparing the coefficients of the first eight powers of s on either side of (1.5.3), we get

(1.5.4)    a_1, a_2, ..., a_8 ,

each a_n being expressed as a polynomial, with the usual composition-of-series coefficients, in the cumulants κ_m of K(x) and the α_1, ..., α_n of (1.5.2).
Now γ_{n,n} = a_n + (n−1)!. Smith [16] has tabulated the first eight α's, and Kendall [15] has tabulated the cumulants. Using these two tables, all the individual terms in the γ_{n,n} tabulated in the previous section for n = 1,2,...,8 are checked, and they all passed the check. Another obvious check is that γ_{n,n} = 0 when F(x) = K(x), which is verified in the case of the γ_{n,n}.
1.6.  Cumulants of the Equilibrium Process.

Consider a Renewal process which has been in progress for a long time. Fix a point t on the time axis (it is not a matter of necessity that an event of the Renewal process happen at the arbitrarily chosen point t on the time axis). Let us denote by Y(t) the time that elapses from t on to the next succeeding event of the Renewal process. Let K_t(x) denote the distribution function of Y(t). What we now observe from the instant t onwards is a general Renewal process whose "residual life time" distribution is K_t(x), the intervals between events being independently identically distributed random variables with the same distribution function F(x). Cox and Smith [6] have shown that

(1.6.1)    lim_{t→∞} K_t(x) = (1/μ_1) ∫_0^x [1 − F(y)] dy = K(x), say.

In other words, if a Renewal process with distribution function F(x) of the intervals between successive events has been in progress a long time, and if observations are made starting from an arbitrary origin t, then what we are actually observing is a General Renewal Process whose "Residual life time" distribution is given by the K(x) of (1.6.1). This particular G.R.P. is called the "Equilibrium process". We can now write down the cumulants of the Equilibrium process as a particular case of the General Renewal Process. We put a small "e" on γ_{n,n} to stand for the equilibrium process. We have from Section 1.5

(1.6.2)    γ^e_{n,n} = a^e_n + (n−1)! ,

where the a^e_n are given by (1.5.4) with the modification that the κ's should be replaced by the κ^e's of the K(x) given by (1.6.1). We will tabulate the first eight κ^e's assuming μ_1 = 1.
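The moments of the equilibrium distribution (1.6.1) satisfy ν^e_n = μ_{n+1}/((n+1)μ_1), by integration by parts; the κ^e table below rests on this. A quick exact check for the uniform distribution on (0,1) (the choice of F is ours, purely for illustration):

```python
from fractions import Fraction as Fr

# Uniform F on (0,1): mu_n = 1/(n+1), so mu_1 = 1/2.
mu = {n: Fr(1, n + 1) for n in range(1, 8)}
mu1 = mu[1]

# Moments of K(x) = (1/mu_1) * integral_0^x (1 - F(y)) dy, computed directly:
# nu^e_n = (1/mu_1) * integral_0^1 x^n (1 - x) dx.
def nu_e(n):
    return (Fr(1, n + 1) - Fr(1, n + 2)) / mu1

checks = [nu_e(n) == mu[n + 1] / ((n + 1) * mu1) for n in range(1, 6)]
```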
(1.6.3)
    κ^e_1 = μ_2/2

    κ^e_2 = μ_3/3 − μ_2^2/4

    κ^e_3 = μ_4/4 − μ_2 μ_3/2 + μ_2^3/4

    κ^e_4 = μ_5/5 − μ_2 μ_4/2 − μ_3^2/3 + μ_2^2 μ_3 − (3/8) μ_2^4

    κ^e_5 = μ_6/6 − μ_2 μ_5/2 − (5/6) μ_3 μ_4 + (5/4) μ_2^2 μ_4 + (5/3) μ_2 μ_3^2 − (5/2) μ_2^3 μ_3 + (3/4) μ_2^5

    κ^e_6 = μ_7/7 − μ_2 μ_6/2 − μ_3 μ_5 + (3/2) μ_2^2 μ_5 − (5/8) μ_4^2 + 5 μ_2 μ_3 μ_4 − (15/4) μ_2^3 μ_4 + (10/9) μ_3^3 − (15/2) μ_2^2 μ_3^2 + (15/2) μ_2^4 μ_3 − (15/8) μ_2^6

    κ^e_7 = μ_8/8 − μ_2 μ_7/2 − (7/6) μ_3 μ_6 + (7/4) μ_2^2 μ_6 − (7/4) μ_4 μ_5 + 7 μ_2 μ_3 μ_5 − (21/4) μ_2^3 μ_5 + (35/8) μ_2 μ_4^2 + (35/6) μ_3^2 μ_4 − (105/4) μ_2^2 μ_3 μ_4 + (105/8) μ_2^4 μ_4 − (35/3) μ_2 μ_3^3 + 35 μ_2^3 μ_3^2 − (105/4) μ_2^5 μ_3 + (45/8) μ_2^7

    κ^e_8 = μ_9/9 − μ_2 μ_8/2 − (4/3) μ_3 μ_7 + 2 μ_2^2 μ_7 − (7/3) μ_4 μ_6 + (28/3) μ_2 μ_3 μ_6 − 7 μ_2^3 μ_6 − (7/5) μ_5^2 + 14 μ_2 μ_4 μ_5 + (28/3) μ_3^2 μ_5 − 42 μ_2^2 μ_3 μ_5 + 21 μ_2^4 μ_5 + (35/3) μ_3 μ_4^2 − (105/4) μ_2^2 μ_4^2 − 70 μ_2 μ_3^2 μ_4 + 140 μ_2^3 μ_3 μ_4 − (105/2) μ_2^5 μ_4 − (70/9) μ_3^4 + (280/3) μ_2^2 μ_3^3 − 175 μ_2^4 μ_3^2 + 105 μ_2^6 μ_3 − (315/16) μ_2^8
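The entries of (1.6.3) are polynomial identities in the μ's and can be checked exactly by converting the equilibrium moments ν^e_n = μ_{n+1}/(n+1) to cumulants with the standard moment-cumulant recursion. A sketch (helper names are ours) checking the first four entries:

```python
from fractions import Fraction as Fr
from math import comb

def cumulants_from_moments(m, N):
    # standard recursion: m_n = sum_{k=1}^{n} C(n-1, k-1) kappa_k m_{n-k}, with m_0 = 1
    kap = {}
    for n in range(1, N + 1):
        s = sum(comb(n - 1, k - 1) * kap[k] * m[n - k] for k in range(1, n))
        kap[n] = m[n] - s
    return kap

# arbitrary rational values for the mu's (mu_1 = 1 as in the text)
mu = {n: Fr(n * n + 1, 2) for n in range(2, 10)}
# equilibrium moments: nu^e_n = mu_{n+1}/(n+1)
m = {n: mu[n + 1] / (n + 1) for n in range(1, 9)}
kap = cumulants_from_moments(m, 4)
```

Because both sides are polynomials in the μ's, equality at arbitrary rational values is a meaningful check.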
Substituting (1.6.3) in (1.5.4) and (1.5.4) in (1.6.2), we obtain for the first four γ^e_{n,n} the following:

    γ^e_{1,1} = 1 − μ_2/2 ,

    γ^e_{2,2} = 1 + μ_3/3 − (3/4) μ_2^2 ,

and correspondingly for γ^e_{3,3} and γ^e_{4,4}.
PART II
ESTIMATION
2.0.  Summary

Empirical investigations on the estimation of the variance-time curve of a Renewal process were carried out earlier by Cox and Smith [6]. In this part, a statistic is proposed as an estimate of the variance-time curve at a given point on the time-axis, based on observations in a given interval of a Renewal process. "Equilibrium" is assumed. Some sampling properties are derived when the basic renewal process is arbitrary. Under the Null Hypothesis that the underlying renewal process is purely random (Poisson Process), the consistency and asymptotic normality of the statistic are established; the latter result incidentally provides a test of significance for randomness of the underlying renewal process. Some difficulties involved in certain non-null distributions of the statistic are discussed.
2.1.  The Statistic and Properties

The variance-time curve is the graph of the variance of N_t plotted against t, the time parameter. Neglecting the remainder, which tends to zero as t → ∞, the explicit expression for the variance-time curve in terms of the moments of the underlying renewal process is given, assuming equilibrium, by

(2.1.1)    V(t) = Var(N_t) = α t + β′ ,

where

    α = (μ_2 − μ_1^2)/μ_1^3    and    β′ = (3μ_2^2 − 2μ_1 μ_3)/(6μ_1^4) .
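For the Poisson process (exponential intervals with rate λ, so μ_n = n!/λ^n) the slope and intercept in (2.1.1) reduce to α = λ and β′ = 0, consistent with Var(N_t) = λt exactly. A small exact check (variable names are ours):

```python
from fractions import Fraction as Fr
from math import factorial

# Exponential intervals with rate lam: mu_n = n!/lam^n
lam = Fr(3)
mu = {n: Fr(factorial(n)) / lam ** n for n in range(1, 4)}

alpha = (mu[2] - mu[1] ** 2) / mu[1] ** 3
beta_prime = (3 * mu[2] ** 2 - 2 * mu[1] * mu[3]) / (6 * mu[1] ** 4)
```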
Suppose observations on the process are made in the interval (0,T). We propose as estimate of V(τ), 0 < τ < T, the statistic V̂(τ), defined thus:

(2.1.3)    V̂(τ) = (1/(T−τ)) ∫_0^{T−τ} (N(t+τ,t))^2 dt − (τ^2/T^2) N^2(0,T) ,

where N(t+τ,t) denotes the number of events in the interval (t, t+τ] and N(0,T) the number of events in (0,T].
Let N^e_t denote the number of events of an Equilibrium process in (0,t]. Taking expectations on both sides of (2.1.3) we obtain

(2.1.4)    E(V̂(τ)) = E(N^e_τ)^2 − (τ^2/T^2) E(N^e_T)^2 + o(1), as T → ∞.

But we have, from page 23 of Part I, that E(N_t) has the Laplace-Stieltjes transform

    K*/(1 − F*) .

However, assuming equilibrium, K* = (1 − F*)/(μ_1 s), and so the transform of E(N^e_t) is 1/(μ_1 s). Hence

    E(N^e_t) = t/μ_1    exactly,

and thus, since the squared-mean terms τ^2/μ_1^2 cancel,

    E(N^e_τ)^2 − (τ^2/T^2) E(N^e_T)^2 = Var(N^e_τ) − (τ^2/T^2) Var(N^e_T) = V(τ) + o(1) .

We have, therefore, that

(2.1.6)    E(V̂(τ)) = V(τ) + o(1) .

In other words, V̂(τ) is an asymptotically unbiased estimate of V(τ), for any arbitrary underlying renewal process.
In the particular case when the underlying renewal process is Poisson with parameter λ, we obtain

(2.1.7)    E(V̂(τ)) = E(N^e_τ)^2 − (τ^2/T^2) E(N^e_T)^2
                   = (λτ + λ^2 τ^2) − (τ^2/T^2)(λT + λ^2 T^2)
                   = λτ − λτ^2/T
                   = λτ (1 − τ/T) .

It is evident from (2.1.7) that [T/(T−τ)] V̂(τ) is an exactly unbiased estimate of V(τ) in this case, and this estimator is also asymptotically unbiased in the general case.
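The exact mean λτ(1 − τ/T) in (2.1.7) can be confirmed by simulation: the integral in (2.1.3) is piecewise constant in t, with breakpoints at the event times e and at e − τ, so V̂(τ) can be evaluated exactly per realization. A seeded Monte Carlo sketch (all helper names are ours):

```python
import random

def poisson_events(lam, T, rng):
    # event times of a Poisson process of rate lam on (0, T)
    out, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t >= T:
            return out
        out.append(t)

def v_hat(events, tau, T):
    # exact value of (1/(T-tau)) * integral_0^{T-tau} N(t+tau,t)^2 dt - (tau/T)^2 N(0,T)^2;
    # N(t+tau,t) is constant between consecutive breakpoints {e} and {e - tau}
    brk = sorted(set([0.0, T - tau]
                     + [e for e in events if e < T - tau]
                     + [e - tau for e in events if e - tau > 0.0]))
    integral = 0.0
    for a, b in zip(brk, brk[1:]):
        mid = (a + b) / 2.0
        n = sum(1 for e in events if mid < e <= mid + tau)
        integral += n * n * (b - a)
    return integral / (T - tau) - (tau / T) ** 2 * len(events) ** 2

rng = random.Random(12345)
lam, tau, T, reps = 2.0, 1.0, 10.0, 4000
mean = sum(v_hat(poisson_events(lam, T, rng), tau, T) for _ in range(reps)) / reps
# expected value: lam * tau * (1 - tau/T) = 1.8
```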
Evidently,

(2.1.8)    Cov(V̂(τ), V̂(τ+h)) = E[(V̂(τ) − EV̂(τ))(V̂(τ+h) − EV̂(τ+h))]

for arbitrary h. However,

    V̂(τ) − EV̂(τ) = (1/(T−τ)) ∫_0^{T−τ} (N^2(t+τ,t) − E N^2(t+τ,t)) dt − (τ^2/T^2)(N^2(0,T) − E N^2(0,T)) ,

and if we write

    ΔN^2(t+τ,t) = N^2(t+τ,t) − E N^2(t+τ,t) ,    ΔN^2(T) = N^2(0,T) − E N^2(0,T) ,

then we have that

(2.1.9)    (V̂(τ) − EV̂(τ))(V̂(τ+h) − EV̂(τ+h)) = I_1 − I_2 − I_3 + I_4 ,

where

(2.1.10)
    I_1 = (1/((T−τ)(T−τ−h))) ∫_{t=0}^{T−τ} ∫_{t′=0}^{T−τ−h} ΔN^2(t+τ,t) ΔN^2(t′+τ+h,t′) dt dt′ ,

    I_2 = (τ^2/(T^2(T−τ−h))) ∫_0^{T−τ−h} ΔN^2(T) ΔN^2(t+τ+h,t) dt ,

    I_3 = ((τ+h)^2/(T^2(T−τ))) ∫_0^{T−τ} ΔN^2(T) ΔN^2(t+τ,t) dt ,

    I_4 = (τ^2 (τ+h)^2/T^4) (ΔN^2(T))^2 .
Upon taking expectations on both sides of (2.1.9), we discover that

(2.1.11)    Cov(V̂(τ), V̂(τ+h)) = E(I_1) − E(I_2) − E(I_3) + E(I_4) .

We will evaluate E(I_2), E(I_3) and E(I_4) first, and then obtain the more complicated E(I_1).
Clearly,

(2.1.12)    E(I_2) = (τ^2/(T^2(T−τ−h))) ∫_0^{T−τ−h} Cov(N^2(T), N^2(t+τ+h,t)) dt .

[Diagram: the interval (0,T] split into I_1 = (0,t], I_2 = (t, t+τ+h] and I_3 = (t+τ+h, T].] Let N_I stand for the observed number of events in the interval I. Then

(2.1.13)    Cov(N^2(T), N^2(t+τ+h,t)) = Cov((N_{I_1} + N_{I_2} + N_{I_3})^2, N_{I_2}^2) .

So far we do not know how to evaluate expressions like E(N_{I_1}^2 N_{I_2}^2) for an arbitrary underlying R.P. However, when the underlying renewal process is purely random (Poisson), since events in disjoint intervals are independent, we can evaluate the terms of the R.H.S. of (2.1.13). Thus, when the underlying process is Poisson, (2.1.13) reduces to

(2.1.14)    Cov(N^2(T), N^2(t+τ+h,t)) = Var(N^2_{τ+h}) ,

which does not depend on t, and so

(2.1.15)    E(I_2) = (τ^2/T^2) Var(N^2_{τ+h}) .
Similarly, when the underlying process is Poisson, we find

(2.1.16)    E(I_3) = ((τ+h)^2/T^2) Var(N^2_τ) ,

(2.1.17)    E(I_4) = (τ^2 (τ+h)^2/T^4) Var(N^2(0,T)) .

On putting μ = λt, we have, for a Poisson process,

(2.1.18)
    E(N_t) = μ
    E(N_t^2) = μ + μ^2
    E(N_t^3) = μ + 3μ^2 + μ^3
    E(N_t^4) = μ + 7μ^2 + 6μ^3 + μ^4 .

Thus

    Var(N_t^2) = E(N_t^4) − (E(N_t^2))^2 = μ + 6μ^2 + 4μ^3 ,

and from this it follows that

(2.1.19)    E(I_4) = (τ^2 (τ+h)^2/T^4)(λT + 6λ^2 T^2 + 4λ^3 T^3)
                   = 4λ^3 τ^2(τ+h)^2/T + 6λ^2 τ^2(τ+h)^2/T^2 + λ τ^2(τ+h)^2/T^3 .
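The Poisson moments in (2.1.18) follow from the recurrence E(N^{n+1}) = μ Σ_k C(n,k) E(N^k), a consequence of the identity E[N f(N)] = μ E[f(N+1)]. A short exact check of the fourth moment and of Var(N^2) (function name is ours):

```python
from fractions import Fraction as Fr
from math import comb

def poisson_raw_moments(mu, N):
    # recurrence E X^{n+1} = mu * sum_k C(n,k) E X^k, with E X^0 = 1
    m = [Fr(1)]
    for n in range(N):
        m.append(mu * sum(comb(n, k) * m[k] for k in range(n + 1)))
    return m

results = {}
for val in (Fr(1, 2), Fr(3), Fr(7, 5)):
    m = poisson_raw_moments(val, 4)
    results[val] = (m[4] == val + 7 * val ** 2 + 6 * val ** 3 + val ** 4,
                    m[4] - m[2] ** 2 == val + 6 * val ** 2 + 4 * val ** 3)
```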
Before we calculate the more complicated expression Cov(V̂(τ), V̂(τ+h)), we will obtain Var(V̂(τ)), which is (2.1.11) with h = 0. The only quantity still to be calculated for obtaining Var(V̂(τ)) is E(I_1). We have

(2.1.20)    E(I_1) = (1/(T−τ)^2) ∫_0^{T−τ} ∫_0^{T−τ} Cov(N^2(t+τ,t), N^2(t′+τ,t′)) dt dt′ .

Case I:  a)  t > t′ + τ. The intervals (t, t+τ] and (t′, t′+τ] are then disjoint, and in this case we evidently have

    Cov(N^2(t+τ,t), N^2(t′+τ,t′)) = 0 .

b)  t′ < t < t′ + τ. Splitting the two intervals at their common part (t, t′+τ], so that N(t+τ,t) and N(t′+τ,t′) share the count over the common part, and using the independence of Poisson counts over disjoint intervals, one obtains, when the underlying process is Poisson,

    Cov(N^2(t+τ,t), N^2(t′+τ,t′)) = Var(N^2_{t′−t+τ}) ,

where N_s denotes the number of events in an interval of length s.

Case II:  a)  t′ > t + τ. In this case the covariance again vanishes.

b)  t < t′ < t + τ. In this case one obtains, as before,

    Cov(N^2(t+τ,t), N^2(t′+τ,t′)) = Var(N^2_{t−t′+τ}) .
Now

(2.1.21)    ∫_0^{T−τ} ∫_0^{T−τ} Cov(N^2(t+τ,t), N^2(t′+τ,t′)) dt dt′

is, by symmetry,

(2.1.22)    2 ∫_{t′=0}^{T−2τ} ∫_{t=t′}^{t′+τ} Var(N^2_{t′−t+τ}) dt dt′ + 2 ∫_{t′=T−2τ}^{T−τ} ∫_{t=t′}^{T−τ} Var(N^2_{t′−t+τ}) dt dt′ .

[Diagram: the domain of integration is the strip |t − t′| < τ of the square (0, T−τ)^2.]

If we write

(2.1.23)    ψ(t,t′) = Var(N^2_{t′−t+τ}) = λ(τ+t′−t) + 6λ^2(τ+t′−t)^2 + 4λ^3(τ+t′−t)^3 ,

a cubic polynomial in (t′ − t), its coefficients

(2.1.24)    A = λτ + 6λ^2 τ^2 + 4λ^3 τ^3 ,  B = λ + 12λ^2 τ + 12λ^3 τ^2 ,  C = 6λ^2 + 12λ^3 τ ,  D = 4λ^3 ,
            E = −(12λ^2 + 24λ^3 τ) ,  F = 12λ^3

(the last two arising on expanding in powers of t and t′ separately) make the integrals in (2.1.22) elementary. The inner integral of the first term is

    ∫_{t=t′}^{t′+τ} ψ(t,t′) dt = λτ^2/2 + 2λ^2 τ^3 + λ^3 τ^4 ,

independent of t′, and hence

(2.1.25)    2 ∫_{t′=0}^{T−2τ} ∫_{t=t′}^{t′+τ} Var(N^2_{t′−t+τ}) dt dt′ = (T − 2τ)(λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4) .
Hence it remains to evaluate

(2.1.26)    ∫_{t′=T−2τ}^{T−τ} ∫_{t=t′}^{T−τ} Var(N^2_{t′−t+τ}) dt dt′ .

On integrating ψ(t,t′) by means of its primitives and substituting for A, B, C, D, E, F from (2.1.24), after a straightforward and laborious simplification the right hand side of (2.1.26) reduces to

(2.1.27)    λτ^3/3 + 3λ^2 τ^4/2 + 4λ^3 τ^5/5 ,

and, on substituting (2.1.27) and (2.1.25) in (2.1.22), we obtain

(2.1.28)    (1/(T−τ)^2) ∫_0^{T−τ} ∫_0^{T−τ} Cov(N^2(t+τ,t), N^2(t′+τ,t′)) dt dt′ = (λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4)/T + o(1/T) .
Upon combining (2.1.15), (2.1.16), (2.1.19) and (2.1.28), we obtain

(2.1.29)    Var(V̂(τ)) = (λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4)/T + 4λ^3 τ^4/T + o(1/T)
                      = (λτ^2 + 4λ^2 τ^3 + 6λ^3 τ^4)/T + o(1/T) ,    as T → ∞.

From (2.1.29) it is evident that Var(V̂(τ)) tends to zero as T tends to ∞, and combining this result with the asymptotic unbiasedness of V̂(τ), we have the following.
Theorem 2.1.  When the underlying renewal process is Poisson, V̂(τ) is a consistent estimator of V(τ).

We shall now obtain the correlogram of the process V̂(τ), namely, Cov(V̂(τ), V̂(τ+h)). Our main problem hinges upon evaluating
    E(I_1) = (1/((T−τ)(T−τ−h))) ∫_{t=0}^{T−τ} ∫_{t′=0}^{T−τ−h} Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)) dt dt′ .

To calculate Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)), we consider the following cases.
Case I:  a)  t′ > t + τ. In this case the intervals are disjoint, and the covariance is zero for an underlying Poisson process.

b)  t < t′ < t + τ. In this case, for an underlying Poisson process,

    Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)) = Var(N^2_{t−t′+τ}) .

Case II:  a)  t > t′ + τ + h. In this case the covariance vanishes.

b)  t′ + h < t < t′ + τ + h. In this case

    Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)) = Var(N^2_{t′−t+τ+h}) .

c)  t′ < t < t′ + h. In this case the interval (t, t+τ] lies wholly inside (t′, t′+τ+h], and

    Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)) = Var(N^2_τ) .
Hence

(2.1.31)    E(I_1) = (1/((T−τ)(T−τ−h))) [ ∫∫_{D_1} Var(N^2_{t−t′+τ}) dt dt′ + ∫∫_{D_2} Var(N^2_τ) dt dt′ + ∫∫_{D_3} Var(N^2_{t′−t+τ+h}) dt dt′ ] ,

where the domains of integration D_1, D_2 and D_3 are the parts of the rectangle 0 < t < T−τ, 0 < t′ < T−τ−h in which, respectively, t − τ < t′ < t, t < t′ < t + h, and t + h < t′ < t + τ + h. [Diagram of the three diagonal strips omitted.] Now, exactly as in the calculation leading to (2.1.28),

(2.1.33)    (1/((T−τ)(T−τ−h))) ∫∫_{D_1} Var(N^2_{t−t′+τ}) dt dt′ = (λτ^2/2 + 2λ^2 τ^3 + λ^3 τ^4)/T + o(1/T) .

Also, the area of D_2 is h(T−τ−h) + O(1), and so

(2.1.32)    (1/((T−τ)(T−τ−h))) ∫∫_{D_2} Var(N^2_τ) dt dt′ = h Var(N^2_τ)/T + o(1/T) .
Now

(2.1.34)    ∫∫_{D_3} Var(N^2_{t′−t+τ+h}) dt dt′ = ∫_{t′=0}^{T−2τ−h} ∫_{t=t′+h}^{t′+τ+h} Var(N^2_{t′−t+τ+h}) dt dt′ + (a corner term).

If we put t″ = t′ + h, then (2.1.34) becomes

(2.1.35)    ∫_{t″=h}^{T−2τ} ∫_{t=t″}^{t″+τ} Var(N^2_{t″−t+τ}) dt dt″ + ∫_{t″=T−2τ}^{T−τ} ∫_{t=t″}^{T−τ} Var(N^2_{t″−t+τ}) dt dt″ .

But, from the calculations immediately preceding (2.1.25), we obtain

(2.1.36)    ∫_{t=t″}^{t″+τ} Var(N^2_{t″−t+τ}) dt = λτ^2/2 + 2λ^2 τ^3 + λ^3 τ^4 ,

and the second part of (2.1.35) is, in view of (2.1.27), given by

(2.1.37)    λτ^3/3 + 3λ^2 τ^4/2 + 4λ^3 τ^5/5 .
Thus

(2.1.38)    (1/((T−τ)(T−τ−h))) ∫∫_{D_3} Var(N^2_{t′−t+τ+h}) dt dt′ = (λτ^2/2 + 2λ^2 τ^3 + λ^3 τ^4)/T + o(1/T) .

Upon combining (2.1.33), (2.1.38), (2.1.32) and (2.1.31) we obtain

(2.1.39)    (1/((T−τ)(T−τ−h))) ∫_0^{T−τ} ∫_0^{T−τ−h} Cov(N^2(t+τ,t), N^2(t′+τ+h,t′)) dt dt′
            = (λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4 + h Var(N^2_τ))/T + o(1/T) .

Upon combining (2.1.39), (2.1.15), (2.1.16) and (2.1.19) we obtain

(2.1.40)    Cov(V̂(τ), V̂(τ+h)) = (λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4 + 4λ^3 τ^2(τ+h)^2 + h Var(N^2_τ))/T + o(1/T) .

Note that, when h = 0, (2.1.40) reduces to the formula for Var(V̂(τ)) obtained in (2.1.29).
2.2.  Asymptotic Normality of the Statistic V̂(τ) when the Underlying Process is Poisson.

We have seen that

(2.2.1)    V̂(τ) = V̂_1(τ) − V̂_2(τ) ,

where

(2.2.2)    V̂_1(τ) = (1/(T−τ)) ∫_0^{T−τ} (N(t+τ,t))^2 dt

and

(2.2.3)    V̂_2(τ) = (τ^2/T^2) N^2(0,T) .

When the underlying R.P. is Poisson we have proved that

(2.2.4)    E(V̂_1(τ)) = λτ + λ^2 τ^2 ,    Var(V̂_1(τ)) = (λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4)/T + o(1/T) ,

and

(2.2.5)    E(V̂_2(τ)) = λ^2 τ^2 + λτ^2/T ,    Var(V̂_2(τ)) = 4λ^3 τ^4/T + o(1/T) ,

and finally

    Cov(V̂_1(τ), V̂_2(τ)) = o(1/T) .

We will establish the asymptotic normality of V̂(τ) in several steps.

Step I:  V̂_1(τ) is asymptotically normal. Let

(2.2.6)    W_t = ∫_0^{t−τ} (N(u+τ,u))^2 du .
Consider the development of the underlying Poisson process on the non-negative time axis. Let us say the event "E" happens at t = t_0 if there are no Poisson events in the interval (t_0 − τ, t_0). Assume "E" happened at the origin O. It is clear that the intervals separating the occurrences of the events "E" are independent and identically distributed, and the points on the time axis corresponding to the occurrences of "E" mark the development of a renewal process, say (T_n). Let

    T_n = t_1 + t_2 + ... + t_n .

When the underlying renewal process is Poisson we will show that W_t defined by (2.2.6) is a cumulative process in the sense of Smith [21,22] with respect to the regeneration points (T_n). Let us write

(2.2.7)    Y_n = W_{T_n} − W_{T_{n−1}} = ∫_{T_{n−1}−τ}^{T_n−τ} (N(t+τ,t))^2 dt .

Since the numbers of Poisson events occurring in disjoint intervals are mutually independently distributed, and since the Y_n's are made up of Poisson events in disjoint intervals, the lengths of which are independently identically distributed, the Y_n's are therefore independently identically distributed. Also, since the number of Poisson events in any finite interval is finite with probability one, the W_t of (2.2.6) is with probability one of bounded variation in every finite t-interval. Since W_t is non-negative, condition (C_3) of Smith [21] is also satisfied. We have thus proved that W_t is a cumulative process w.r.t. the regeneration points T_n.
When E(Y_n^2) < ∞, Smith [21,22] has proved that

(2.2.8)    √t (W_t/t − ξ/μ) →_D N(0, θ^2)  as  t → ∞ ,

where ξ is the mean of a Y_n and μ that of a regeneration interval. On combining (2.2.8) with (2.2.4) we find that

(2.2.9)    √T (V̂_1(τ) − λτ − λ^2 τ^2) →_D N(0, θ^2)  as  T → ∞ ,

where

(2.2.10)    θ^2 = λτ^2 + 4λ^2 τ^3 + 2λ^3 τ^4 .
Step II:  W′_t = N(0,t) is a cumulative process w.r.t. the same sequence of regeneration points (T_n). For the increments Y′_n = N(T_{n−1}, T_n) are evidently independently identically distributed r.v.'s for an underlying Poisson process. Therefore, according to the theorem of Smith [22] on the joint asymptotic normality of cumulative processes referred to the same sequence of regeneration points, we have that V̂_1(τ) and N(0,T) are asymptotically distributed according to a bivariate normal law.
Step III:  √T [(V̂_2(τ))^{1/2} − λτ] is asymptotically normally distributed. For (V̂_2(τ))^{1/2} = (τ/T) N(0,T), where under the present hypotheses N(0,T) is distributed in Poisson form with parameter μ = λT. But

(2.2.12)    (N(0,T) − λT)/√(λT) →_D N(0,1)  as  T → ∞ ,

from which the desired result is evident.
Step IV:

(2.2.13)    √T [V̂_1(τ) − λτ − λ^2 τ^2] − 2λτ √T [(V̂_2(τ))^{1/2} − λτ]

is asymptotically normally distributed, being a linear combination of the jointly normal variables of Steps I-III.

Step V:

(2.2.14)    √T [V̂_2(τ) − λ^2 τ^2] = 2λτ √T [(V̂_2(τ))^{1/2} − λτ] + o_p(1) ,

where o_p(1) is a quantity which converges to zero in probability. To see this, we refer to Corollary 3, page 225, of Mann and Wald [18]. Taking there r = 1, j = 1, and X_n = X′_n, a Poisson variable with parameter λ_n → ∞, we have

    (X′_n − λ_n)/√λ_n →_D N(0,1) ,

so that (X′_n − λ_n)/√λ_n = O_p(1). With g(x) = x^p, p real, and η(x,a) = g′(a)(x − a), the Corollary then states that

(2.2.15)    (X′_n^p − λ_n^p)/(p λ_n^{p−1/2}) = (X′_n − λ_n)/λ_n^{1/2} + o_p(1) .

Upon putting p = 2 in the above, (2.2.14) is evident.

Step VI:

(2.2.16)    √T [V̂_1(τ) − V̂_2(τ) − λτ(1 − τ/T)]

is asymptotically normal. For, the difference between (2.2.13), which is asymptotically normal, and (2.2.16) above, converges to zero in probability, as is proved in Step V.
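The linearization in Step V can be illustrated numerically: with g(x) = x^2 the remainder of (2.2.15) is (X − λ)^2/(2λ^{3/2}), which has mean 1/(2√λ) and so vanishes as λ grows. A seeded Monte Carlo sketch (the sampler name is ours; Knuth's product method is adequate for moderate λ):

```python
import random
from math import exp

def poisson_knuth(lam, rng):
    # Knuth's product-of-uniforms sampler; fine for moderate lam
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(2024)
lam, reps = 400.0, 2000
# remainder of the linearization of g(x) = x^2 at a = lam, scaled as in (2.2.15):
# ((X^2 - lam^2) - 2*lam*(X - lam)) / (2*lam**1.5) = (X - lam)^2 / (2*lam**1.5)
mean_rem = sum((poisson_knuth(lam, rng) - lam) ** 2 for _ in range(reps)) / reps / (2 * lam ** 1.5)
# expected size: 1/(2*sqrt(lam)) = 0.025
```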
We have therefore proved

Theorem 2.2.  When the underlying renewal process is Poisson, then

    √T (V̂(τ) − λτ(1 − τ/T)) / (λτ^2 + 4λ^2 τ^3 + 6λ^3 τ^4)^{1/2}

is asymptotically N(0,1). Theorem 2.2 incidentally provides a test of significance for the randomness of an underlying renewal process.
2.3.  Some Difficulties Involved in Certain Non-null Distributions of the Statistic V̂(τ).

Cox and Smith [5] considered the following problem. Suppose we observe a stochastic process which is assumed to be the superposition of N independent renewal processes, each having the same underlying distribution function F(x). How to estimate the number of underlying renewal processes? Assuming equilibrium, and writing C^2 for the squared coefficient of variation of the underlying R.P., then as t → ∞,

(2.3.1)    V_p(t) ~ C^2 t λ_p ,

where λ_p^{−1} = μ_1/N is the mean interval between events in the pooled output, and V_p(t) is the variance of N_t for the pooled output. Cox and Smith further assumed F(x) to be of the χ^2-type, in which case the intercept of the variance-time asymptote is likewise an explicit function of C^2 ((2.3.2)), and then equated the empirical (observed) slope of the asymptote, and the intercept of the asymptote, to the corresponding theoretical ones given in (2.3.1) and (2.3.2), to obtain estimates of C^2 and of N.
In order to obtain confidence limits for C^2 and N, one needs to know the distribution of V̂(τ), say, when the underlying process is of the χ^2-type. Even in this simple case, we do not know how to evaluate terms like

    E(N^2(t+τ,t) N^2(t′+τ,t′))

for arbitrary t and t′. Furthermore, little is known about the joint distribution of events in different intervals, not necessarily consecutive. With this in mind, let us define "estimability" of functions of events in different intervals on the time axis as follows.
Definition:  Let I_1, I_2, ..., I_k be a set of consecutive intervals on the time axis, and let N_{I_j}, j = 1,2,...,k, denote the number of events in the interval I_j of a certain renewal process. Equilibrium may be assumed. Let C be the class of intervals which consists of I_1, I_2, ..., I_k and the intervals which are unions of any consecutive of them. A function

    N_{I_{ℓ_1}}^{m_1} N_{I_{ℓ_2}}^{m_2} ... N_{I_{ℓ_r}}^{m_r}

of N_{I_{ℓ_1}}, ..., N_{I_{ℓ_r}}, where m_1, m_2, ..., m_r are non-negative integers and I_{ℓ_1}, ..., I_{ℓ_r} are any r of the I_j, j = 1,2,...,k, is said to be "estimable" if it can be identically represented as a linear combination, with constant coefficients (being functionally independent of the N's), of powers of the N_I, where I ∈ C.
Thus, if a function N_{I_{ℓ_1}}^{m_1} ... N_{I_{ℓ_r}}^{m_r} is estimable, its expected value can be represented as a finite linear combination, with constant coefficients, of the expected values of powers of the N_I, where I ∈ C. These expected values of the powers of N_I, I ∈ C, are explicitly known in terms of the moments of the underlying renewal process. These functions are therefore aptly termed "estimable", in that their expected values for any arbitrary underlying renewal process can be explicitly written down without the complication of running into intractable distribution problems.

Examples of Estimable Functions:
Consider the two intervals I_1 = (0, τ_1] and I_2 = (τ_1, τ_1 + τ_2]. In this case the class C consists of the three intervals I_1, I_2 and I_0 = (0, τ_1 + τ_2]. Squaring both sides of N_{I_0} = N_{I_1} + N_{I_2} and rearranging, we get

    N_{I_1} N_{I_2} = (1/2)(N_{I_0}^2 − N_{I_1}^2 − N_{I_2}^2) .

Hence N_{I_1} N_{I_2} is estimable, and

    E(N_{I_1} N_{I_2}) = (1/2)(E(N_{I_0}^2) − E(N_{I_1}^2) − E(N_{I_2}^2)) .

Similarly, cubing both sides of N_{I_0} = N_{I_1} + N_{I_2},

    N_{I_1}^2 N_{I_2} + N_{I_1} N_{I_2}^2 = (1/3)(N_{I_0}^3 − N_{I_1}^3 − N_{I_2}^3) .

Hence the symmetric function of N_{I_1} and N_{I_2} on the L.H.S. of the above is estimable, and

    E(N_{I_1}^2 N_{I_2} + N_{I_1} N_{I_2}^2) = (1/3)(E(N_{I_0}^3) − E(N_{I_1}^3) − E(N_{I_2}^3)) .
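The two representations above are pure algebraic identities in the counts, valid realization by realization, which is exactly what makes the expectations tractable. A trivial exhaustive check over a grid of integer counts:

```python
# identities behind the estimability examples: with s = a + b (the count over I_0),
# ab = (s^2 - a^2 - b^2)/2  and  a^2 b + a b^2 = (s^3 - a^3 - b^3)/3
ok = True
for a in range(6):
    for b in range(6):
        s = a + b
        ok &= (a * b == (s ** 2 - a ** 2 - b ** 2) // 2)
        ok &= (a * a * b + a * b * b == (s ** 3 - a ** 3 - b ** 3) // 3)
```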
Examples of Nonestimable Functions:  The function N_{I_1}^2 N_{I_2} is not estimable. For, if it were estimable, then constants, not functionally dependent on the N_I, I ∈ C, should exist such that N_{I_1}^2 N_{I_2} is identically a linear combination of powers of the N_I. This is possible only when some coefficient, e.g. c = 1/3, is allowed to depend on the N_I's themselves, which contradicts the requirement that the c's should be functionally independent of the N_I's. Similarly N_{I_1}^2 N_{I_2}^2 is not estimable.

However, in the particular case when the underlying process is Poisson, since events in disjoint intervals are independently distributed, all the above functions are estimable. For example, in formula (2.1.13) on page 56, we come across several nonestimable functions on the R.H.S. in the case of an arbitrary underlying renewal process.
PART III

SOME GENERALIZATIONS OF THE RENEWAL PROCESS

3.0.  Summary

This part is more or less expository in nature, the main object being to show that a certain generalization of renewal processes proposed by Hammersley [12], though unnecessarily complicated in the form considered by him, is an easy extension of Smith's theory of cumulative processes. By slightly changing the definition of Hammersley's proposed generalization we will show that Smith's formulae for the asymptotic mean and variance of a cumulative process, extended by us to a further degree of accuracy, can be directly applied to yield the asymptotic mean and variance of the random variable considered in the generalization. Also, to examine Hammersley's generalization in the light of Smith's theory of cumulative processes, we need to modify slightly the random variable considered by Smith [22]. Furthermore, we will demonstrate the necessity of the "global" assumption, namely, that the basic distribution function F(x) is such that for some k its k-th convolution has an absolutely continuous component. Both Smith and we have found this assumption necessary in order to obtain the asymptotic representation theorems for the cumulants of the renewal process and the general renewal process respectively.
3.1.  The Asymptotic Mean and Variance of the Random Variable Y^S_t.

Let T_n = t_1 + t_2 + ... + t_n, n = 1,2,..., where T_n is the time instant corresponding to the occurrence of an event of the renewal process and the t_n are independently identically distributed random variables. Let W_t be a cumulative process defined w.r.t. the sequence of regeneration points (T_n), so that

(3.1.1)    Y_n = W_{T_n} − W_{T_{n−1}} ,  n = 1,2,... ,

are independently identically distributed random variables. Let

(3.1.2)    κ_r = E(Y_n^r)    and    μ_r = E(t_n^r) .

Smith [21] has considered the random variable

(3.1.3)    Y^S_t = Σ_{i=1}^{N_t+1} Y_i ,

N_t being the number of events up to time t of the R.P. Smith [21,22] has shown that, for the random variable Y^S_t,

(A)    if κ_1 < ∞, then E(Y^S_t) = (κ_1/μ_1) t + o(t)  as  t → ∞ ;

(B)    if κ_2 < ∞, μ_2 < ∞, and μ_{11} = E(t_i Y_i) < ∞, then

    Var(Y^S_t) = (κ_2 + κ_1^2 μ_2/μ_1^2 − 2κ_1 μ_{11}/μ_1)(t/μ_1) + o(t)  as  t → ∞ .

Before we consider Hammersley's random variable Y^H_t, we need to extend formulae (A) and (B) to a further degree of accuracy.
Let G(t,y) denote the joint distribution of t_i and Y_i. Let

(3.1.4)    η^S_t(y) = Pr(Y^S_t ≤ y) ;

in other words, η^S_t denotes the distribution function of Y^S_t. Then it may be seen that

(3.1.5)    η^S_t(y) = [G(∞,y) − G(t,y)] + ∫_{t′=0}^t ∫_{y′} η^S_{t−t′}(y − y′) d_{t′,y′} G(t′,y′) .

Let

(3.1.6)    G*_s(ω) = ∫_{t=0}^∞ ∫_{y=−∞}^∞ e^{−st+iωy} d_{t,y} G(t,y) ,

and write

(3.1.7)    η^{S*}_s(y) = s ∫_0^∞ e^{−st} η^S_t(y) dt

for the Laplace-Stieltjes transform of η^S_t(y). It is easily seen that, for a fixed s > 0, η^{S*}_s(y) is a distribution function in y. Also

(3.1.8)    ψ^S_s(ω) = ∫_{−∞}^∞ e^{iωy} d_y η^{S*}_s(y)

is a characteristic function. From (3.1.6) we see that G*_0(ω) is the characteristic function of the random variable Y_n, and if we write F(t) = G(t,∞) for the distribution function of t_n, then G*_s(0) = F*(s), the Laplace-Stieltjes transform of F(t).
Applying the transform (3.1.6) on both sides of (3.1.5), we discover that

(3.1.9)    ψ^S_s(ω) = [G*_0(ω) − G*_s(ω)] + ψ^S_s(ω) G*_s(ω) ,

and hence

(3.1.10)    ψ^S_s(ω) = (G*_0(ω) − G*_s(ω))/(1 − G*_s(ω)) .

If κ_1 < ∞, then for Re(s) > 0 the R.H.S. is differentiable w.r.t. ω, and we obtain

(3.1.11)    ∫_{−∞}^∞ y d_y η^{S*}_s(y) = (1/i) [∂ψ^S_s(ω)/∂ω]_{ω=0} ,

whence

(3.1.12)    ∫_{−∞}^∞ y d_y η^{S*}_s(y) = κ_1/(1 − F*(s)) .

If μ_2 < ∞, we know from a basic lemma of Smith that there exists another distribution function F_{(2)}(t) such that

(3.1.13)    F*(s) = 1 − μ_1 s + (μ_2 s^2/2) F*_{(2)}(s) ,

and hence

    κ_1/(1 − F*(s)) = κ_1/(μ_1 s) + κ_1 μ_2/(2μ_1^2) + o(1) ,  as  s → +0 .
But the L.H.S. of (3.1.12) is the L-S.T. of

(3.1.14)    E(Y^S_t) = ∫_{−∞}^∞ y d_y η^S_t(y) .

Hence, if κ_1 < ∞ and μ_2 < ∞, then

(A′)    E(Y^S_t) = (κ_1/μ_1) t + κ_1 μ_2/(2μ_1^2) + o(1) ,  as  t → ∞ .

Now for Re(s) > 0 the R.H.S. of (3.1.10) is twice differentiable w.r.t. ω, and we find, as earlier shown by Smith [22],

(3.1.15)    ∫_{−∞}^∞ y^2 d_y η^{S*}_s(y) = κ_2/(1 − F*(s)) + 2κ_1 R*(s)/(1 − F*(s))^2 ,

where R*(s) is the L-S.T. of the function of B.V.

    R(t) = ∫_{−∞}^∞ y d_y G(t,y) .

Now if μ_{11} < ∞, then the L-S.T. R*(s) of R(t) can be written as
    R*(s) = κ_1 − μ_{11} s + (μ_{21}/2) s^2 + o(s^2) ,  as  s → +0 ,

where μ_{21} = E(t_n^2 Y_n). Also, if μ_3 < ∞, the representation (3.1.13) can be carried one stage further. Combining these expansions in (3.1.15), we obtain, if κ_2 < ∞, μ_3 < ∞ and μ_{21} < ∞,

(3.1.20)    ∫_{−∞}^∞ y^2 d_y η^{S*}_s(y) = 2κ_1^2/(μ_1^2 s^2) + (κ_2/μ_1 + 2κ_1^2 μ_2/μ_1^3 − 2κ_1 μ_{11}/μ_1^2)(1/s) + O(1) ,  as  s → +0 .

Noticing that the L.H.S. of (3.1.20) is the L-S.T. of E[(Y^S_t)^2], we discover that

(3.1.21)    E[(Y^S_t)^2] = (κ_1^2/μ_1^2) t^2 + (κ_2/μ_1 + 2κ_1^2 μ_2/μ_1^3 − 2κ_1 μ_{11}/μ_1^2) t + c_1 + o(1) ,  as  t → +∞ ,

where c_1 is an explicitly computable constant.
Now when κ_1 < ∞ and μ_2 < ∞, we have proved that

(3.1.22)    E(Y^S_t) = (κ_1/μ_1) t + κ_1 μ_2/(2μ_1^2) + o(1)  as  t → ∞ ,

whose behaviour as t → +∞ we do not, so far, know more precisely: if we square both sides of (3.1.22) we obtain a term like t × o(1). We now appeal to the cumulant representation theorem of Smith [20] for the R.P., proved under the global assumption, that if μ_{n+p+1} < ∞, p > 0, then

    ψ_n(t) = γ_n t + γ_{n,n} + Λ(t)/(1+t)^p ,  where Λ(t) ∈ R .

Since ψ_1(t) is the renewal function, and since μ_3 < ∞ allows p = 1, the remainder in (3.1.22) is actually O(1/(1+t)), and hence, under the global assumption, the term t × o(1) above tends to zero as t → +∞. Thus, if F(x) satisfies the global assumption and if μ_3 < ∞ and κ_1 < ∞, then

(3.1.23)    [E(Y^S_t)]^2 = (κ_1^2/μ_1^2) t^2 + (κ_1^2 μ_2/μ_1^3) t + κ_1^2 μ_2^2/(4μ_1^4) + o(1) ,  as  t → ∞ .

Subtracting (3.1.23) from (3.1.21) we find that, if F(x) satisfies the global assumption (we omit this statement hereafter to avoid tedious repetition), and if κ_2 < ∞, μ_3 < ∞ and μ_{21} < ∞, then

(B′)    Var(Y^S_t) = (κ_2 + κ_1^2 μ_2/μ_1^2 − 2κ_1 μ_{11}/μ_1)(t/μ_1) + c + o(1) ,  as  t → ∞ ,

where c is an explicit constant in the κ's and μ's. Thus (A′) and (B′) are extensions of Smith's (A) and (B) to an additional degree of accuracy. If we put Y_n ≡ 1, (A′) and (B′) reduce to Smith's original formulae for the expectation and variance of N_t, which has also incidentally provided us with a check on our formulae (A′) and (B′).
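Formula (A′) is easy to check by simulation in a case where the exact answer is known. With exponential(1) intervals and Y_n = t_n we have κ_1 = μ_1 = 1 and μ_2 = 2, so (A′) predicts E(Y^S_t) = t + 1 + o(1); indeed Y^S_t = T_{N_t+1}, and for the Poisson process E(T_{N_t+1}) = t + 1 exactly. A seeded sketch (the function name is ours):

```python
import random

def y_t_s(t, rng):
    # Y^S_t with Y_n = t_n: the sum of intervals up to and including the first T_n > t
    s = 0.0
    while True:
        s += rng.expovariate(1.0)
        if s > t:
            return s

rng = random.Random(7)
t, reps = 10.0, 20000
mean = sum(y_t_s(t, rng) for _ in range(reps)) / reps
# expected value: t + 1 = 11.0
```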
3.2.  The Asymptotic Mean and Variance of the One-dimensional Version of Hammersley's Random Variable.

Let (t_n, Y_n(p×1)), n = 1,2,..., be a sequence of independently, identically distributed random variables with the same distribution function G(t, y(p×1)), where t_n is a one-dimensional non-negative random variable while Y_n(p×1) is a p-dimensional random vector. Then Hammersley [12] considers the random variable

(3.2.1)    η_1(t) = Σ_{n=1}^{N_t} Y_n(p×1) ,

where N_t, as before, is the number of events up to a specified time instant t w.r.t. the renewal process (t_n). In this section we will only consider the one-dimensional version of Hammersley's generalization (3.2.1), and in the next section we will show that by a certain "linearization" technique we can at once obtain the results corresponding to the asymptotic covariances of the components of Y_n(p×1) from the one-dimensional results of this section. We will denote the one-dimensional version of the η_1(t) of (3.2.1) by

(3.2.2)    Y^H_t = Σ_{n=1}^{N_t} Y_n ,

the (Y_n) being independently and identically distributed random variables.

Notice that the only difference between the Y^S_t of Smith and the Y^H_t of Hammersley is that while Smith takes the summation from 1 to N_t + 1, Hammersley takes it from 1 to N_t. This simple difference introduces a lot of complication in the calculations, as will be shown below. This complication has been foreseen by the former author, but evidently not by the latter. We will adhere to the notation of the previous section. As before, G(t,y) denotes the joint distribution of t_i and Y_i.
Let
Then
=
1 - G(t,co
Applying the transform (3.1.6) on both sides of (3.2.3) , we
find
1 - G*(o)
s
+
G*(OO) •
s
Hence
=
1
G*(o)
s
1 - G* (00)
s
Bearing in mind that the L.H.S. of the above is the characteristic function of the distribution function ${}_sV^H(y)$, and that for $\mathrm{Re}(s) > 0$ the R.H.S. is differentiable w.r.t. $\omega$, we find that the first absolute moment of ${}_sV^H(y)$ is finite and that

$$\int_{-\infty}^{\infty} y \, d_y \, {}_sV^H(y) = \frac{\left(1 - G_s^*(0)\right)\left[\dfrac{\partial}{\partial(i\omega)}\, G_s^*(\omega)\right]_{\omega=0}}{\left(1 - G_s^*(0)\right)^2} = \frac{R^*(s)}{1 - F^*(s)},$$

where $R^*(s)$ is the L.-S.T. of $R(t) = \int_{-\infty}^{\infty} y \, d_y G(t,y)$. Thus

(3.2.5)  $\int_{-\infty}^{\infty} y \, d_y \, {}_sV^H(y) = \dfrac{R^*(s)}{1 - F^*(s)} = \dfrac{\mu_1}{\kappa_1 s} + \dfrac{\mu_1(\kappa_2 + \kappa_1^2) - 2\kappa_1\mu_{11}}{2\kappa_1^2} + o(1)$ as $s \to 0+$.

(Note we have assumed $\kappa_2 < \infty$, $\mu_1 < \infty$ and $\mu_{11} < \infty$; the calculations are then as before.)
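As a quick check on the transform identity in (3.2.5): for exponential inter-event times $F^*(s) = \lambda/(\lambda+s)$, and if $y_n$ is independent of $t_n$ with mean $\mu_1$ then $R^*(s) = \mu_1\lambda/(\lambda+s)$, so that $R^*(s)/(1 - F^*(s)) = \mu_1\lambda/s$, the L.-S.T. of $\mu_1\lambda t$. The sketch below (a modern verification, not in the original; the rate and mean are arbitrary rational choices) confirms this in exact arithmetic:

```python
from fractions import Fraction as F

lam = F(3)      # rate of the exponential inter-event times (assumed)
mu1 = F(5, 2)   # mean reward E(y_n), with y_n independent of t_n

def f_star(s):
    # L.-S. transform of the exponential distribution
    return lam / (lam + s)

def r_star(s):
    # transform of R(t) = mu1 * F(t) when y_n and t_n are independent
    return mu1 * lam / (lam + s)

# R*(s) / (1 - F*(s)) should collapse to mu1 * lam / s at every s > 0
checks = [r_star(s) / (1 - f_star(s)) == mu1 * lam / s
          for s in (F(1, 2), F(1), F(7, 3))]
```

Exact rational arithmetic avoids any floating-point tolerance in the comparison.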
Since the L.H.S. of (3.2.5) is the L.-S.T. of $E(Y_t^H)$, we find that if $\kappa_2 < \infty$, $\mu_1 < \infty$ and $\mu_{11} < \infty$, then

(3.2.6)  $E(Y_t^H) = \dfrac{\mu_1 t}{\kappa_1} + \dfrac{\mu_1(\kappa_2 + \kappa_1^2) - 2\kappa_1\mu_{11}}{2\kappa_1^2} + o(1)$ as $t \to \infty$.
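A check on the constant term in (3.2.6): for exponential inter-event times $\kappa_2 = \kappa_1^2$, and if $y_n$ is independent of $t_n$ then $\mu_{11} = \mu_1\kappa_1$, so the constant term vanishes and $E(Y_t^H) = \mu_1 t/\kappa_1$ exactly, the familiar compound-Poisson mean. A brief exact-arithmetic sketch (modern and illustrative; the rate and mean are arbitrary choices):

```python
from fractions import Fraction as F

def mean_constant(kappa1, kappa2, mu1, mu11):
    # constant term of (3.2.6):
    #   mu1*(kappa2 + kappa1^2)/(2*kappa1^2) - mu11/kappa1
    return mu1 * (kappa2 + kappa1**2) / (2 * kappa1**2) - mu11 / kappa1

lam = F(4)                 # exponential rate (assumed for the check)
kappa1 = 1 / lam           # mean inter-event time
kappa2 = kappa1**2         # variance of an exponential equals its squared mean
mu1 = F(7, 3)              # E(y_n), with y_n independent of t_n
mu11 = mu1 * kappa1        # E(y_n t_n) under independence

c = mean_constant(kappa1, kappa2, mu1, mu11)
```

The vanishing constant is exactly the compound-Poisson fact $E(Y_t^H) = \lambda\mu_1 t$.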
Similarly,

$$\left[\frac{\partial^2}{\partial(i\omega)^2}\, \chi_s^{H*}(\omega)\right]_{\omega=0} = \frac{Q^*(s)}{1 - F^*(s)} + \frac{2\left(R^*(s)\right)^2}{\left(1 - F^*(s)\right)^2},$$

where $Q^*(s)$ is the L.-S.T. of

$$Q(t) = \int_{-\infty}^{\infty} y^2 \, d_y G(t,y),$$

a function of bounded variation. Hence

(3.2.7)  $\int_{-\infty}^{\infty} y^2 \, d_y \, {}_sV^H(y) = \dfrac{Q^*(s)}{1 - F^*(s)} + \dfrac{2\left(R^*(s)\right)^2}{\left(1 - F^*(s)\right)^2}.$

After a straightforward calculation we find, writing $\mu_r'$ for the raw moment $E(t_n^r)$,

(3.2.8)  $E\left[(Y_t^H)^2\right] = \dfrac{\mu_1^2 t^2}{\kappa_1^2} + \left(\dfrac{\mu_2}{\kappa_1} + \dfrac{2\mu_1^2\mu_2'}{\kappa_1^3} - \dfrac{4\mu_1\mu_{11}}{\kappa_1^2}\right)t + \dfrac{\mu_2\mu_2'}{2\kappa_1^2} - \dfrac{\mu_{21}}{\kappa_1} + \dfrac{3\mu_1^2{\mu_2'}^2}{2\kappa_1^4} - \dfrac{2\mu_1^2\mu_3'}{3\kappa_1^3} - \dfrac{4\mu_1\mu_{11}\mu_2'}{\kappa_1^3} + \dfrac{2\mu_{11}^2 + 2\mu_1\mu_{12}}{\kappa_1^2} + o(1)$ as $t \to \infty$.
We finally discover, combining (3.2.8) and (3.2.6), that if $\kappa_2 < \infty$, $\mu_3' < \infty$, $\mu_2 < \infty$, $\mu_{21} < \infty$ and $\mu_{12} < \infty$, then

(3.2.9)  $\mathrm{Var}(Y_t^H) = t\left(\dfrac{\mu_2}{\kappa_1} + \dfrac{\mu_1^2\mu_2'}{\kappa_1^3} - \dfrac{2\mu_1\mu_{11}}{\kappa_1^2}\right) + \dfrac{\mu_2\mu_2'}{2\kappa_1^2} - \dfrac{\mu_{21}}{\kappa_1} + \dfrac{5\mu_1^2{\mu_2'}^2}{4\kappa_1^4} - \dfrac{2\mu_1^2\mu_3'}{3\kappa_1^3} - \dfrac{3\mu_1\mu_{11}\mu_2'}{\kappa_1^3} + \dfrac{\mu_{11}^2 + 2\mu_1\mu_{12}}{\kappa_1^2} + o(1)$ as $t \to \infty$,

where, as before, $\mu_r' = E(t_n^r)$ (so that $\mu_2' = \kappa_2 + \kappa_1^2$ and $\mu_3' = \kappa_3 + 3\kappa_1\kappa_2 + \kappa_1^3$).
In the special case $y_n \equiv 1$, both $E(Y_t^H)$ and $\mathrm{Var}(Y_t^H)$ reduce to Smith's formulae for the asymptotic mean and variance of $N_t$. By taking the summation from $1$ to $N_t$ we become involved in the complication of dealing with $Q^*(s)$, which could easily have been avoided by taking the summation from $1$ to $N_t + 1$, as in Smith's case.
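The reduction claimed above can be checked mechanically: with $y_n \equiv 1$ (so $\mu_1 = \mu_2 = 1$, $\mu_{11} = \mu_{21} = \kappa_1$, $\mu_{12} = \mu_2'$) and exponential inter-event times, (3.2.9) should give $\mathrm{Var}(N_t) = \lambda t$ with vanishing constant, since $N_t$ is then Poisson. A sketch in exact arithmetic (modern and illustrative; `var_coeffs` is a hypothetical helper, not the thesis's notation):

```python
from fractions import Fraction as F

def var_coeffs(k1, m2p, m3p, mu1, mu2, mu11, mu12, mu21):
    """Leading t-coefficient and constant term of (3.2.9).

    k1 = E(t), m2p = E(t^2), m3p = E(t^3) are raw moments of t_n;
    mu_rs = E(y^r t^s) are mixed raw moments.
    """
    lead = mu2 / k1 + mu1**2 * m2p / k1**3 - 2 * mu1 * mu11 / k1**2
    const = (mu2 * m2p / (2 * k1**2) - mu21 / k1
             + F(5, 4) * mu1**2 * m2p**2 / k1**4
             - F(2, 3) * mu1**2 * m3p / k1**3
             - 3 * mu1 * mu11 * m2p / k1**3
             + (mu11**2 + 2 * mu1 * mu12) / k1**2)
    return lead, const

lam = F(2)
k1, m2p, m3p = 1 / lam, 2 / lam**2, 6 / lam**3   # exponential raw moments
# y_n identically 1:
lead, const = var_coeffs(k1, m2p, m3p, 1, 1, k1, m2p, k1)
```

For the Poisson case the leading coefficient collapses to the rate $\lambda$ and the constant term cancels exactly.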
3.3. The multidimensional cumulative process treated as a special case of the one-dimensional cumulative process

Consider the multidimensional version $\boldsymbol{\eta}_1(t)$ of $Y_t^H$, given by (3.2.1). Suppose we want to obtain the asymptotic covariance of any two components $\eta_{1j}(t)$ and $\eta_{1k}(t)$, $j \neq k$, of the $p$-dimensional vector $\boldsymbol{\eta}_1(t)$.
Let

(3.3.1)  $\eta_{1j}(t) = \sum_{n=1}^{N_t} y_n^{(j)}$

and

(3.3.2)  $\eta_{1k}(t) = \sum_{n=1}^{N_t} y_n^{(k)}$,

where $y_n^{(j)}$ and $y_n^{(k)}$ are the $j$-th and $k$-th components of the vector $\mathbf{y}_n(p\times 1)$. Consider

(3.3.3)  $\eta(t) = \eta_{1j}(t) + \eta_{1k}(t) = \sum_{n=1}^{N_t} y_n,$

where $y_n = y_n^{(j)} + y_n^{(k)}$.
Clearly the $(y_n)$ are independently, identically distributed random variables, since the $\left(\mathbf{y}_n(p\times 1)\right)$ are independently, identically distributed; furthermore, the summation in (3.3.1), (3.3.2) and (3.3.3) is over the number of events w.r.t. the same renewal process $(t_i)$. Hence, taking the variance of both sides of (3.3.3), we obtain

(3.3.4)  $\mathrm{Var}\left(\eta(t)\right) = \mathrm{Var}\left(\eta_{1j}(t)\right) + \mathrm{Var}\left(\eta_{1k}(t)\right) + 2\,\mathrm{cov}\left(\eta_{1j}(t), \eta_{1k}(t)\right),$

and so

(3.3.5)  $\mathrm{cov}\left(\eta_{1j}(t), \eta_{1k}(t)\right) = \dfrac{1}{2}\left[\mathrm{Var}\left(\eta(t)\right) - \mathrm{Var}\left(\eta_{1j}(t)\right) - \mathrm{Var}\left(\eta_{1k}(t)\right)\right].$
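The identity (3.3.5) is simply the polarization of the variance and holds for any square-integrable pair; a short numerical sketch (modern and illustrative, using population variance over a small fixed data set):

```python
def var(xs):
    # population variance of a finite list
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov_via_variances(xs, ys):
    # cov(X, Y) = [Var(X + Y) - Var(X) - Var(Y)] / 2, as in (3.3.5)
    return (var([x + y for x, y in zip(xs, ys)]) - var(xs) - var(ys)) / 2

def cov_direct(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1.0, 2.0, 4.0], [0.0, 3.0, 5.0]
a, b = cov_via_variances(xs, ys), cov_direct(xs, ys)
```

Both routes give the same covariance, which is all that (3.3.5) asserts.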
In the notation of Hammersley let us write

$$\mu_{\alpha\beta\gamma} = E\left[\left(y_n^{(j)}\right)^\alpha \left(y_n^{(k)}\right)^\beta t_n^\gamma\right]$$

and

$$p = \frac{1}{\mu_{001}}.$$

Then our formula (3.2.9) for $\mathrm{Var}\left(\eta(t)\right)$ becomes, in the above notation,

(3.3.6)  $\mathrm{Var}\left(\eta(t)\right) = t\left[p\left(\mu_{200} + \mu_{020} + 2\mu_{110}\right) - 2p^2\left(\mu_{100} + \mu_{010}\right)\left(\mu_{101} + \mu_{011}\right) + p^3\left(\mu_{100} + \mu_{010}\right)^2\mu_{002}\right]$
$\quad + \dfrac{5p^4\left(\mu_{100} + \mu_{010}\right)^2\mu_{002}^2}{4} - \dfrac{2p^3\left(\mu_{100} + \mu_{010}\right)^2\mu_{003}}{3} - 3p^3\left(\mu_{100} + \mu_{010}\right)\left(\mu_{101} + \mu_{011}\right)\mu_{002}$
$\quad + p^2\left(\mu_{101} + \mu_{011}\right)^2 + 2p^2\left(\mu_{100} + \mu_{010}\right)\left(\mu_{102} + \mu_{012}\right) + \dfrac{p^2\left(\mu_{200} + \mu_{020} + 2\mu_{110}\right)\mu_{002}}{2} - p\left(\mu_{201} + \mu_{021} + 2\mu_{111}\right) + o(1)$ as $t \to \infty$.
Similarly, $\mathrm{Var}\left(\eta_{1j}(t)\right)$ becomes

(3.3.7)  $\mathrm{Var}\left(\eta_{1j}(t)\right) = t\left(p\mu_{200} - 2p^2\mu_{100}\mu_{101} + p^3\mu_{100}^2\mu_{002}\right) + \dfrac{5p^4\mu_{100}^2\mu_{002}^2}{4} - \dfrac{2p^3\mu_{100}^2\mu_{003}}{3} - 3p^3\mu_{100}\mu_{101}\mu_{002} + p^2\mu_{101}^2 + 2p^2\mu_{100}\mu_{102} + \dfrac{p^2\mu_{002}\mu_{200}}{2} - p\mu_{201} + o(1),$

and

(3.3.8)  $\mathrm{Var}\left(\eta_{1k}(t)\right) = t\left(p\mu_{020} - 2p^2\mu_{010}\mu_{011} + p^3\mu_{010}^2\mu_{002}\right) + \dfrac{5p^4\mu_{010}^2\mu_{002}^2}{4} - \dfrac{2p^3\mu_{010}^2\mu_{003}}{3} - 3p^3\mu_{010}\mu_{011}\mu_{002} + p^2\mu_{011}^2 + 2p^2\mu_{010}\mu_{012} + \dfrac{p^2\mu_{002}\mu_{020}}{2} - p\mu_{021} + o(1)$ as $t \to \infty$.
Combining (3.3.5), (3.3.6), (3.3.7) and (3.3.8) we find

(3.3.9)  $\mathrm{cov}\left(\eta_{1j}(t), \eta_{1k}(t)\right) = pt\left(\mu_{110} - p\mu_{100}\mu_{011} - p\mu_{010}\mu_{101} + p^2\mu_{100}\mu_{010}\mu_{002}\right) + \dfrac{5p^4\mu_{100}\mu_{010}\mu_{002}^2}{4} - \dfrac{2p^3\mu_{100}\mu_{010}\mu_{003}}{3} - \dfrac{3p^3}{2}\left(\mu_{100}\mu_{011} + \mu_{010}\mu_{101}\right)\mu_{002} + p^2\mu_{101}\mu_{011} + p^2\left(\mu_{100}\mu_{012} + \mu_{010}\mu_{102}\right) + \dfrac{p^2\mu_{002}\mu_{110}}{2} - p\mu_{111} + o(1)$ as $t \to \infty$.
Using our formula (B′) for $\mathrm{Var}(Y_t^S)$, we find, after simplification, the corresponding expression (3.3.10) for $\mathrm{cov}\left(\eta_{1j}^S(t), \eta_{1k}^S(t)\right)$.
Both (3.3.9) and (3.3.10) reduce to Smith's original limit for $\mathrm{Var}(N_t)$ when $\mathbf{y}_n(p\times 1) \equiv \mathbf{1}$, and this has also incidentally provided a check on the formulae. Actually there is nothing special about formula (3.3.9) which is not in (3.3.10), and what we have essentially demonstrated is that our extended formulae A′ and B′ (which are extensions of Smith's formulae A and B) can be exploited to meet completely the requirements of Hammersley's proposed generalization, which is rather a complication, as Smith very aptly terms it in his symposium paper [21], than a generalization in any real sense.
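One more mechanical check, in the spirit of the $\mathbf{y}_n \equiv \mathbf{1}$ reduction above: if the $j$-th and $k$-th components are taken identical, then (3.3.9) must coincide with (3.3.7), since $\mathrm{cov}(X, X) = \mathrm{Var}(X)$. The sketch below (modern and illustrative; the moment values are arbitrary rationals, since the coincidence is an algebraic identity) evaluates both expressions with the $k$-moments mirroring the $j$-moments:

```python
from fractions import Fraction as F

def var_j(p, m):
    # Var(eta_1j(t)) from (3.3.7): returns (t-coefficient, constant term)
    lead = p*m['200'] - 2*p**2*m['100']*m['101'] + p**3*m['100']**2*m['002']
    const = (F(5, 4)*p**4*m['100']**2*m['002']**2
             - F(2, 3)*p**3*m['100']**2*m['003']
             - 3*p**3*m['100']*m['101']*m['002']
             + p**2*m['101']**2 + 2*p**2*m['100']*m['102']
             + p**2*m['002']*m['200']/2 - p*m['201'])
    return lead, const

def cov_jk(p, m):
    # cov(eta_1j(t), eta_1k(t)) from (3.3.9)
    lead = p*(m['110'] - p*m['100']*m['011'] - p*m['010']*m['101']
              + p**2*m['100']*m['010']*m['002'])
    const = (F(5, 4)*p**4*m['100']*m['010']*m['002']**2
             - F(2, 3)*p**3*m['100']*m['010']*m['003']
             - F(3, 2)*p**3*(m['100']*m['011'] + m['010']*m['101'])*m['002']
             + p**2*m['101']*m['011']
             + p**2*(m['100']*m['012'] + m['010']*m['102'])
             + p**2*m['002']*m['110']/2 - p*m['111'])
    return lead, const

# arbitrary rational moments, with the k-component a copy of the j-component
m = {'100': F(2), '101': F(3), '102': F(5), '200': F(7), '201': F(4),
     '002': F(9, 2), '003': F(11)}
m.update({'010': m['100'], '011': m['101'], '012': m['102'],
          '020': m['200'], '021': m['201'],
          '110': m['200'], '111': m['201']})
p = F(1, 3)
ok = var_j(p, m) == cov_jk(p, m)
```

Because the check is an identity in the $\mu_{\alpha\beta\gamma}$, it holds for any rational inputs, which makes it a convenient guard against transcription slips in the long displays above.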
BIBLIOGRAPHY

[1]  Blackwell, D., "A Renewal Theorem," Duke Math. J., 15, 145-150, (1948).
[2]  Blackwell, D., "Extension of a Renewal Theorem," Pacific J. Math., 3, 315-320, (1953).
[3]  Cox, D. R., "Some Statistical Methods Connected with Series of Events," J. R. Statist. Soc. B, 17, 129-164, (1955).
[4]  Cox, D. R., and Smith, W. L., "The Superposition of Several Strictly Periodic Sequences of Events," Biometrika, 40, 1-11, (1953).
[5]  Cox, D. R., and Smith, W. L., "A Direct Proof of a Fundamental Theorem of Renewal Theory," Skand. Akt., 36, 139-150, (1953).
[6]  Cox, D. R., and Smith, W. L., "On the Superposition of Renewal Processes," Biometrika, 41, 91-99, (1954).
[7]  Feller, W., "On Probability Problems in the Theory of Counters," Courant Anniversary Volume, 105-115, (1948).
[8]  Feller, W., "Fluctuation Theory of Recurrent Events," Trans. Amer. Math. Soc., 67, 98-119, (1949).
[9]  Hammersley, J. M., Discussion in [21], (1958).
[10] Hammersley, J. M., "On Counters with Random Deadtime, I," Proc. Camb. Phil. Soc., 49, 623-637, (1953).
[11] Karlin, S., Discussion in [21], (1958).
[12] Karlin, S., "On the Renewal Equation," Pacific J. Math., 5, 229-257, (1955).
[13] Kendall, D. G., "Some Problems in the Theory of Dams," J. R. Statist. Soc. B, 19, 207-212, (1957).
[14] Kendall, D. G., "Some Problems in the Theory of Queues," J. R. Statist. Soc. B, 13, 151-173, (1951).
[15] Kendall, M. G., The Advanced Theory of Statistics, Vol. I, London: Charles Griffin.
[16] Mann, H. B., and Wald, A., "On Stochastic Limit and Order Relationships," Ann. Math. Statist., 14, 217-226, (1943).
[17] Smith, W. L., "Asymptotic Renewal Theorems," Proc. Roy. Soc. Edinb. A, 64, 9-48, (1954).
[18] Smith, W. L., "On Renewal Theory, Counter Problems and Quasi-Poisson Processes," Proc. Camb. Phil. Soc., 53, 175-193, (1957).
[19] Smith, W. L., "Extensions of a Renewal Theorem," Proc. Camb. Phil. Soc., 51, 629-638, (1955).
[20] Smith, W. L., "On the Cumulants of Renewal Processes," Biometrika, 46, 1-29, (1959).
[21] Smith, W. L., "Renewal Theory and its Ramifications," J. R. Statist. Soc. B, 20, 243-302, (1958).
[22] Smith, W. L., "Regenerative Stochastic Processes," Proc. Roy. Soc. A, 232, 6-31, (1955).
[23] Sukhatme, P. V., "On the Analysis of k Samples from Exponential Populations with Especial Reference to the Problem of Random Intervals," Stat. Res. Memoirs, 1, 94-112, (1936).
[24] Takács, L., Bibliography in [21].