ON DISTRIBUTION FUNCTION - MOMENT RELATIONSHIPS
IN A STATIONARY POINT PROCESS.
by
M. R. Leadbetter *
University of North Carolina
Department of Statistics
Chapel Hill, N. C.
R. J. Serfling
Florida State University
Institute of Statistics Mimeo Series No. 611
FEBRUARY 1969
*
Research supported by the Office of Naval Research, Contract Nonr 855(09).
On Distribution Function - Moment Relationships
in a Stationary Point Process.
by
M. R. Leadbetter *
University of North Carolina
R. J. Serfling
Florida State University
SUMMARY.

Let G_n(t) denote the distribution function for the time to the n-th event in a stationary regular point process, and F_n(t) the corresponding distribution function, conditional on an event having occurred "at" t = 0. (I.e., F_n(t) represents the distribution of the time from an "arbitrary event" to the n-th subsequent event.) Let N(0, t) denote the number of events in (0, t). Relationships between the distribution functions G_n, F_n, and the moments (unconditional and conditional) of N(0, t) are obtained. Specifically, series are given for G_n(t) in terms of factorial moments of N(0, t), and for F_n(t) in terms of such moments, conditional on the occurrence of an event "at" t = 0. In particular, a series given in [2] for F_n(t) is obtained as a corollary.
1. INTRODUCTION.

We shall, throughout, consider a stationary point process, and denote by N(s, t) the number of events in the (semiclosed) interval (s, t]. It will further be assumed that the mean number of events per unit time EN(0, 1) = λ is finite, and that there is zero probability of the occurrence of multiple events (hence the process is regular (orderly) in the sense that

(1)    Pr\{N(0, t) > 1\} = o(t)

as t → 0). Write

    G_n(t) = Pr\{N(0, t) \ge n\},    n = 0, 1, 2, ....
This is the probability that the n-th event after time zero occurs no later than time t, i.e., G_n(t) is the distribution function for the time to the n-th event after time zero (which is a well defined random variable).
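For orientation, it may help to keep in mind the purely illustrative special case of a Poisson process of rate λ, which is not needed for any of the arguments below. There N(0, t) has the Poisson distribution with mean λt, so that

    G_n(t) = \sum_{j=n}^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!}
           = \frac{\lambda^n}{(n-1)!} \int_0^t s^{n-1} e^{-\lambda s}\, ds,

the Gamma(n, λ) distribution function, as one expects for the sum of n independent exponential intervals.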
We now modify G_n(t) to define a corresponding probability, but now conditioned by the occurrence of an event "at" time zero in the following precise sense:

(2)    F_n(t) = \lim_{\delta \to 0} Pr\{N(0, t) \ge n \mid N(-\delta, 0) \ge 1\},

and we shall write

(3)    F_n(t) = Pr\{N(0, t) \ge n \mid \text{Event at } 0\},

it being understood that the right hand side of (3) is defined as the limit occurring in Equation (2); i.e., the zero probability condition that an event occurs precisely at time zero is replaced by the positive probability condition N(-δ, 0) ≥ 1, and the limit is taken as δ → 0.

The limit in (2) does in fact exist and moreover F_n(t) is a distribution function for each n. Further, we may interpret F_n(t) as the distribution function for the interval from time zero to the n-th event after time zero, given an event occurred "at" time zero. Alternatively F_n(t) may be regarded as the distribution function for the time between an "arbitrary event" and the n-th subsequent event. For a discussion of these points, we refer to [1].
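(In the illustrative Poisson case mentioned above, the lack of memory gives F_n(t) = G_n(t) for every n. More generally, for a stationary renewal process with inter-event distribution function F, one has F_n = F^{*n}, the n-fold convolution of F, whereas G_n in general is not of this form.)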
Corresponding to (1) and (2), we define

(4)    v_n(t) = Pr\{N(0, t) = n\},

(5)    u_n(t) = Pr\{N(0, t) = n \mid \text{Event at } 0\}.
Then for fixed t, v_n(t) is the probability distribution of the non-negative integer valued random variable N(0, t), whereas u_n(t) is the distribution of N(0, t), conditional on the occurrence of an event "at" time zero. Correspondingly, we may define moments of N(0, t) (where they exist) and conditional moments given an event "at" time zero. It is the purpose of this paper to explore relationships between the distribution functions G_n(t), F_n(t) and these moments.

Specifically, we will use factorial moments β_k(t), α_k(t) defined as follows (writing N for N(0, t)):

(6)    \beta_k(t) = E\{N(N-1) \cdots (N-k+1)\},

(7)    \alpha_k(t) = E\{N(N-1) \cdots (N-k+1) \mid \text{Event at } 0\}

(i.e., strictly, \alpha_k(t) = \lim_{\delta \to 0} E\{N(N-1) \cdots (N-k+1) \mid N(-\delta, 0) \ge 1\}, in accordance with the remarks after Equation (3)). Clearly we have

(8)    \beta_k(t)/k! = \sum_{n=k}^{\infty} \binom{n}{k} v_n(t).

In Section 2, we shall prove some results which will include the existence of the limit in (7), and a corresponding formula for α_k(t),

(9)    \alpha_k(t)/k! = \sum_{n=k}^{\infty} \binom{n}{k} u_n(t),

as one would intuitively expect.
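(As a check in the illustrative Poisson case, where u_n(t) = v_n(t) = e^{-\lambda t}(\lambda t)^n/n!, both (8) and (9) reduce to the familiar identity

    \sum_{n=k}^{\infty} \binom{n}{k} e^{-\lambda t} \frac{(\lambda t)^n}{n!} = \frac{(\lambda t)^k}{k!},

so that \beta_k(t) = \alpha_k(t) = (\lambda t)^k there.)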
In Section 3, we relate

    (a)  α_k(t) with β_{k+1}(t),

    (b)  G_n(t) with β_1(t), β_2(t), ...,

    (c)  F_n(t) with α_1(t), α_2(t), ...,

and hence, from (a) and (c),

    (d)  F_n(t) with β_1(t), β_2(t), ..., β_k(t), ...,

viz., the relationship given as Equation (21) below.
The relationship (Eqn (21)) given under (d) is that given previously in [2], generalizing results of Longuet-Higgins [3] concerning zero crossing problems. However, the point of view in this present paper is that it is most natural to relate the unconditional distribution functions G_n(t) to the unconditional moments β_k(t), and the conditional distribution functions F_n(t) to the conditional moments α_k(t). Then the previously known results of [2] follow as a corollary by use of the relationship given under (a).

In essence, the results of Section 3 depend on Section 2 only through the derivation of Equation (9) (Theorem 3). Thus if (9) is taken for the definition of α_k(t) (rather than (7)), the Section 3 results may be obtained independently of Section 2. However, it does appear to us to be both natural and important to make the (Section 2) identification between the expressions (7) and (9).

Finally, we note that in [2], assumptions concerning the radius of convergence of the generating function \sum_k v_k(t) z^k were made in order to justify the calculations involved. In the present work, we use somewhat different (though closely related) assumptions. A brief comparison of these approaches is made in Section 4, where further results are also indicated.
2. THE CONDITIONAL FACTORIAL MOMENTS α_k(t).

In this section, we prove two theorems which have independent interest and from which Equation (9) follows. A special case concerns the "renewal function" H(x) = \sum_{n=1}^{\infty} F_n(x).

THEOREM 1. Let t_i (i = 1, 2, ...) denote the position of the i-th event after time zero. Then with the above notation, for x > 0, h > 0,

(10)    \sum_{n=k}^{\infty} \binom{n-1}{k-1} F_n(x)
        = (\lambda h)^{-1} E\Big\{ \sum_{t_i \in (0,h)} \binom{N(t_i, t_i + x)}{k} \Big\}.

Proof: By additivity of the mean and stationarity, there is no loss in generality in taking h = 1, and we do so. Write A_{i,n} = 1 if N(t_i, t_i + x) ≥ n, A_{i,n} = 0 otherwise, and

    N_{n,x} = \sum_{t_i \in (0,1)} A_{i,n}.

That is, N_{n,x} is the number of t_i ∈ (0, 1) such that N(t_i, t_i + x) ≥ n. It follows from [1, Section 5, Lemma] that F_n(x) = \lambda^{-1} E N_{n,x}, and hence, since \sum_{n=k}^{N} \binom{n-1}{k-1} = \binom{N}{k},

    \sum_{n=k}^{\infty} \binom{n-1}{k-1} F_n(x)
    = \lambda^{-1} \sum_{n=k}^{\infty} \binom{n-1}{k-1} E N_{n,x}
    = \lambda^{-1} E\Big\{ \sum_{n=k}^{\infty} \binom{n-1}{k-1} \sum_{t_i \in (0,1)} A_{i,n} \Big\}
    = \lambda^{-1} E\Big\{ \sum_{t_i \in (0,1)} \sum_{n=k}^{\infty} \binom{n-1}{k-1} A_{i,n} \Big\}
    = \lambda^{-1} E\Big\{ \sum_{t_i \in (0,1)} \binom{N(t_i, t_i + x)}{k} \Big\}.

Thus the required result follows.
COROLLARY. The "renewal function" H(x) = \sum_{n=1}^{\infty} F_n(x) satisfies

    H(x) = (\lambda h)^{-1} E\Big\{ \sum_{t_i \in (0,h)} N(t_i, t_i + x) \Big\}.
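As an informal numerical sketch of the Corollary (purely illustrative, and assuming a Poisson process of rate λ, for which H(x) = λx), the right hand side may be estimated by simulation. The helper poisson_events and the parameter values below are hypothetical choices made only for this check.

    import random

    def poisson_events(lam, T):
        # Event times of a rate-lam Poisson process on (0, T], generated via exponential gaps.
        times, t = [], 0.0
        while True:
            t += random.expovariate(lam)
            if t > T:
                return times
            times.append(t)

    lam, h, x, reps = 2.0, 10.0, 1.5, 2000
    acc = 0.0
    for _ in range(reps):
        ev = poisson_events(lam, h + x)
        for ti in ev:
            if ti <= h:
                # N(t_i, t_i + x]: events strictly after t_i and no later than t_i + x
                acc += sum(1 for s in ev if ti < s <= ti + x)

    estimate = acc / (lam * h * reps)
    print(estimate, lam * x)   # the two numbers should be close

The agreement of the two printed numbers is only a consistency check for this special case; the Corollary itself requires no simulation.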
The next result relates the k-th conditional factorial moment of N(0, x) to the right hand side of (10), under the condition that the (k+1)-st moment of N(0, t) is finite.

THEOREM 2. If EN^{k+1}(0, t) < ∞ (for all t), then for h > 0, x > 0,

(11)    \lim_{\delta \to 0} E\Big\{ \binom{N(0, x)}{k} \,\Big|\, N(-\delta, 0) \ge 1 \Big\}
        = (\lambda h)^{-1} E\Big\{ \sum_{t_i \in (0,h)} \binom{N(t_i, t_i + x)}{k} \Big\}.

Proof: Again, we may take h = 1. For j = 1, 2, ..., m let

    X_{j,m} = \binom{N(j/m,\, j/m + x)}{k}  if  N\big((j-1)/m,\, j/m\big) \ge 1,   and   X_{j,m} = 0  otherwise,

remembering that N(s, t) refers to the number of events in the semiclosed interval (s, t]. For t_i ∈ (0, 1), let j = j(i, m) be that integer such that t_i ∈ ((j-1)/m, j/m]. Now (with probability one), for fixed t_i and m sufficiently large (depending on the particular "sample point" ω), we have

    N(t_i,\, j/m) = N(t_i + x,\, j/m + x) = 0,

so that N(j/m, j/m + x) = N(t_i, t_i + x). Considering all of the (finite number of) t_i ∈ (0, 1) in this way, we see that

(12)    \sum_{j=1}^{m} X_{j,m} \to \sum_{t_i \in (0,1)} \binom{N(t_i, t_i + x)}{k}

with probability one as m → ∞ (the two sides being in fact equal for m sufficiently large). Now

    \sum_{j=1}^{m} X_{j,m} \le N(0, 1) \binom{N(0, 1 + x)}{k} \le N^{k+1}(0, 1 + x),

which has, by assumption, a finite mean, and thus from (12), by dominated convergence,

    E\Big\{ \sum_{t_i \in (0,1)} \binom{N(t_i, t_i + x)}{k} \Big\}
    = \lim_{m \to \infty} \sum_{j=1}^{m} E X_{j,m}
    = \lim_{m \to \infty} m\, E\Big\{ \binom{N(0, x)}{k} X_{1/m} \Big\}
    = \lambda \lim_{m \to \infty} E\Big\{ \binom{N(0, x)}{k} \,\Big|\, N(-1/m, 0) \ge 1 \Big\},

where X_{\delta} = 1 when N(-\delta, 0) \ge 1 and X_{\delta} = 0 otherwise (so that, by stationarity, E X_{j,m} = E\{\binom{N(0,x)}{k} X_{1/m}\} for each j), and the last step uses the fact that

    Pr\{X_{1/m} = 1\} = Pr\{N(-1/m, 0) \ge 1\} \sim \lambda/m   as m → ∞.

Thus the conclusion of the theorem follows with δ → 0 through values of the form 1/m. For δ → 0 in an arbitrary manner, the result follows by approximating δ^{-1} by its integer part [δ^{-1}].
Specifically, if m = [δ^{-1}], then 1/(m+1) < δ ≤ 1/m, and hence

(13)    X_{1/(m+1)} \le X_{\delta} \le X_{1/m}.

Multiplying by \binom{N(0, x)}{k}/\delta and taking expectations, we see that

(14)    [\delta(m+1)]^{-1} (m+1)\, E\Big\{ \binom{N(0, x)}{k} X_{1/(m+1)} \Big\}
        \le \delta^{-1} E\Big\{ \binom{N(0, x)}{k} X_{\delta} \Big\}
        \le (\delta m)^{-1}\, m\, E\Big\{ \binom{N(0, x)}{k} X_{1/m} \Big\}.

Using the conclusion of the theorem for δ of the form 1/m, together with the fact that δm → 1 as δ → 0, and (13), it follows that the outer members of (14) each converge to

    E\Big\{ \sum_{t_i \in (0,1)} \binom{N(t_i, t_i + x)}{k} \Big\},

yielding the conclusion of the theorem.

We note that the assumption of the existence of the (k+1)-st moment, made for all t in the theorem, need only be made for some t > 0, since it is well known that this implies existence for all t. (This is an easy consequence of Minkowski's inequality and stationarity.)
Using the two results above, we now give precise conditions for the validity of (9).

THEOREM 3. Suppose that for some fixed t > 0 (and hence for all t), EN^{k+1}(0, t) < ∞ for some k. Then Equation (9) holds for the k-th conditional factorial moment α_k(t) of N(0, t) (α_k(t) being defined by (7)). That is,

    \alpha_k(t)/k! = \sum_{n=k}^{\infty} \binom{n}{k} u_n(t),

where u_n(t) = Pr\{N(0, t) = n \mid \text{Event at } 0\} in the usual limiting sense.

Proof: By (7) and Theorems 1 and 2, we have

    \alpha_k(t)/k! = \lim_{\delta \to 0} E\Big\{ \binom{N(0, t)}{k} \,\Big|\, N(-\delta, 0) \ge 1 \Big\}

(15)               = \sum_{n=k}^{\infty} \binom{n-1}{k-1} F_n(t)
                   = \sum_{n=k}^{\infty} \binom{n-1}{k-1} \sum_{m=n}^{\infty} u_m(t)
                   = \sum_{m=k}^{\infty} u_m(t) \sum_{n=k}^{m} \binom{n-1}{k-1}
                   = \sum_{m=k}^{\infty} \binom{m}{k} u_m(t),

as required.

As a corollary following from Equation (15) (with k = 1), we have for the "renewal function" H(x),

(16)    H(x) = \sum_{n=1}^{\infty} F_n(x) = \lim_{\delta \to 0} E\{N(0, x) \mid N(-\delta, 0) \ge 1\},

provided EN^{2}(0, t) < ∞ for some t > 0.
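(As a check in the illustrative Poisson case, the independence of increments makes the conditioning in (16) irrelevant, so the right hand side is simply E N(0, x) = λx; and indeed H(x) = \sum_{n \ge 1} Pr\{N(0, x) \ge n\} = E N(0, x) = λx there.)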
3. SERIES EXPRESSIONS FOR F_n(t) AND G_n(t).

In this section, we first relate the conditional and unconditional factorial moments of N(0, t), and then give series expressions for F_n(t), G_n(t) in terms of these moments. We shall use Equation (9) (shown in Theorem 3) to relate α_k(t) with the u_n's (and hence with the F_n's). As noted previously, the results of this section may be made independent of Section 2 if one defines α_k(t) by means of (9) instead of (7).

First we state three simple lemmas (the first of which has in fact already been proved in the course of Theorem 3). In the first two lemmas, we let \{w_k : k = 0, 1, 2, ...\} be a probability distribution,

    H_n = \sum_{j=n}^{\infty} w_j   and   \gamma_k/k! = \sum_{n=k}^{\infty} \binom{n}{k} w_n,

the latter being the corresponding k-th factorial moment.

Lemma 1.

    \gamma_k/k! = \sum_{n=k}^{\infty} \binom{n-1}{k-1} H_n   (\le \infty).

For the right hand side may be written as

    \sum_{n=k}^{\infty} \binom{n-1}{k-1} \sum_{m=n}^{\infty} w_m
    = \sum_{m=k}^{\infty} w_m \sum_{n=k}^{m} \binom{n-1}{k-1}
    = \sum_{m=k}^{\infty} \binom{m}{k} w_m   (positive terms),

as required.

Lemma 2.

    H_n = \sum_{k=n}^{\infty} (-1)^{k-n} \binom{k-1}{n-1} \gamma_k/k!,

if this series converges absolutely.
Proof: By Lemma 1, the right hand side may be written as

    \sum_{k=n}^{\infty} (-1)^{k-n} \binom{k-1}{n-1} \sum_{j=k}^{\infty} \binom{j-1}{k-1} H_j
    = \sum_{j=n}^{\infty} H_j \sum_{k=n}^{j} (-1)^{k-n} \binom{k-1}{n-1} \binom{j-1}{k-1},

the change in summation order being justified since the replacement of each term of the double series by its modulus yields a convergent series, by assumption. But the right hand side of this may be written (using the identity \binom{k-1}{n-1}\binom{j-1}{k-1} = \binom{j-1}{n-1}\binom{j-n}{k-n}) as

    \sum_{j=n}^{\infty} \binom{j-1}{n-1} H_j \sum_{k=0}^{j-n} (-1)^{k} \binom{j-n}{k} = H_n,

since \sum_{k=0}^{j-n} (-1)^{k} \binom{j-n}{k} is one or zero according as j = n or j > n.
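A small numerical sketch of Lemma 2 (purely illustrative; the Poisson weights, the parameter value and the helper names tail_direct and tail_from_moments are assumptions made only for this check): for a Poisson distribution with mean mu, the k-th factorial moment is gamma_k = mu**k, and the series of the lemma should recover the tails H_n.

    import math

    mu = 2.0   # Poisson mean; gamma_k = mu**k for this distribution

    def tail_direct(n, mu, terms=200):
        # H_n = Pr{X >= n} computed directly from the Poisson weights
        return sum(math.exp(-mu) * mu**j / math.factorial(j) for j in range(n, n + terms))

    def tail_from_moments(n, mu, kmax=80):
        # Lemma 2: H_n = sum_{k>=n} (-1)^(k-n) C(k-1, n-1) gamma_k / k!
        return sum((-1)**(k - n) * math.comb(k - 1, n - 1) * mu**k / math.factorial(k)
                   for k in range(n, kmax))

    for n in (1, 2, 5):
        print(n, tail_direct(n, mu), tail_from_moments(n, mu))   # the two columns should agree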
Lemma 3. For all t ≥ 0 and n = 1, 2, ...,

    u_n(t) = \lambda^{-1} D^{+} G_{n+1}(t),

in which D^{+} denotes the right hand derivative; hence also

    G_{n+1}(t) = \lambda \int_0^t u_n(s)\, ds.

This is shown by a simple transformation of the "Palm Formulae" and restates the results of [1, Section 3.1].
We now relate the conditional and unconditional factorial moments of N(0, t).

THEOREM 4. For k = 1, 2, ...,

(17)    \beta_{k+1}(t)/(k+1)! = \lambda \int_0^t \alpha_k(s)/k!\, ds.

Hence, if EN^{k+1}(0, t) < ∞ for some t > 0 (and hence for all t > 0), then \beta_{k+1}(t) is absolutely continuous with density \lambda(k+1)\alpha_k(t). Further, under this finiteness restriction, \beta_{k+1}(t) has, for all t ≥ 0, the right hand derivative

(18)    D^{+}\beta_{k+1}(t) = \lambda(k+1)\alpha_k(t).

Proof: By Lemmas 1 and 3,

    \beta_{k+1}(t)/(k+1)! = \sum_{n=k+1}^{\infty} \binom{n-1}{k} G_n(t)
                          = \lambda \sum_{n=k+1}^{\infty} \binom{n-1}{k} \int_0^t u_{n-1}(s)\, ds
                          = \lambda \int_0^t \Big\{ \sum_{n=k}^{\infty} \binom{n}{k} u_n(s) \Big\}\, ds,

by virtue of the positivity of the functions involved. Thus the first statement of the theorem follows from Theorem 3 (Equation (9)). (18) follows from (17) by differentiating the integral on the right, using the easily proved right-continuity of α_k(t) (e.g., from (9)).
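(As a check in the illustrative Poisson case, β_{k+1}(t) = (λt)^{k+1} and α_k(s) = (λs)^k, so (17) reads (λt)^{k+1}/(k+1)! = λ \int_0^t (λs)^k/k!\, ds, which is immediate, while (18) reduces to (k+1)λ(λt)^k = λ(k+1)(λt)^k.)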
The main results of this section are given in the following theorem, and provide series expressions for F_n(t) and G_n(t) in terms of the α_k(t) and β_k(t). Specifically, the theorem gives conditions for the validity of the following equations (where n = 1, 2, 3, ...):

(19)    F_n(t) = \sum_{k=n}^{\infty} (-1)^{k-n} \binom{k-1}{n-1} \alpha_k(t)/k!,

(20)    G_n(t) = \sum_{k=n}^{\infty} (-1)^{k-n} \binom{k-1}{n-1} \beta_k(t)/k!.

Equation (19) relates the conditional distribution function F_n(t) to the conditional factorial moments α_k(t), whereas (20) correspondingly relates the (unconditioned) G_n(t) to the (unconditioned) β_k(t).

THEOREM 5. Whenever either of the series in (19) or (20) converges absolutely, the corresponding equality holds. Further, if the series in (19) converges absolutely, so does that in (20) (and hence both equalities hold). Finally, if the series in (20) converges absolutely for 0 ≤ t < s, then the series in (19) also converges absolutely, and hence both equalities hold, for 0 ≤ t < s.

Proof:
The first statement follows at once from Lemma 2 (using Equation (9) of Theorem 3).

To prove the remaining statements, we first assume that the series in (19) converges absolutely, and prove that this is also true for (20). In fact, by Theorem 4, the positivity of the functions involved, and the monotonicity of α_{k-1}(u) in u,

    \sum_{k=n}^{\infty} \binom{k-1}{n-1} \beta_k(t)/k!
    = \lambda \sum_{k=n}^{\infty} \binom{k-1}{n-1} \int_0^t \frac{\alpha_{k-1}(u)}{(k-1)!}\, du
    \le \lambda t \sum_{k=n-1}^{\infty} \binom{k}{n-1} \frac{\alpha_k(t)}{k!}
    < \infty,

since \binom{k}{n-1}\big/\binom{k-1}{n-1} = k/(k-n+1) \to 1 as k → ∞ and

    \sum_{k=n}^{\infty} \binom{k-1}{n-1} \frac{\alpha_k(t)}{k!}

is assumed to converge. Thus the desired conclusion follows.

Finally, we assume the series in (20) converges absolutely for 0 ≤ t < s and show that this also holds for (19). For, again by Theorem 4 and positivity,

    \lambda \int_0^t \sum_{k=n}^{\infty} \binom{k-1}{n-1} \frac{\alpha_{k-1}(u)}{(k-1)!}\, du
    = \sum_{k=n}^{\infty} \binom{k-1}{n-1} \frac{\beta_k(t)}{k!} < \infty

for any 0 ≤ t < s. The integrand is finite almost everywhere, and hence everywhere in the interval (0, s) by monotonicity of the α_k(u); from this we have, for 0 ≤ u < s,

    \sum_{k=n-1}^{\infty} \binom{k}{n-1} \frac{\alpha_k(u)}{k!} < \infty.

Using the fact that \binom{k-1}{n-1}\big/\binom{k}{n-1} \to 1 as k → ∞, it follows that (19) converges absolutely, as required, when 0 ≤ t < s.
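(For orientation, the case n = 1 of (20), G_1(t) = \sum_{k \ge 1} (-1)^{k-1} \beta_k(t)/k!, is simply the familiar factorial moment expansion of 1 - Pr\{N(0, t) = 0\}, obtained from the probability generating function \sum_k v_k(t) z^k = \sum_k \beta_k(t)(z-1)^k/k! at z = 0 whenever the latter expansion is valid; cf. Section 4.)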
It is clear that we can use Theorem 4 again to express G_n(t) in terms of the α_k(t) and F_n(t) in terms of the β_k(t). The expression of F_n(t) in terms of the β_k(t) is of particular interest for application to certain zero crossing problems. This expression (which was obtained by somewhat different methods in [2]) is

(21)    F_n(t) = \lambda^{-1} \sum_{k=n+1}^{\infty} (-1)^{k-n-1} \binom{k-2}{n-1} \frac{\beta_k'(t)}{k!},

in which \beta_k'(t) is written for the right hand derivative D^{+}\beta_k(t). Equation (21) is valid whenever (19) holds, and thus in particular whenever the series in (21) converges absolutely. Again, the series (21) converges absolutely, and equality holds, for t < s whenever the series in (20) converges absolutely for t < s.
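(As a check in the illustrative Poisson case, β_k(t) = (λt)^k gives β_k'(t) = kλ(λt)^{k-1}, and (21) becomes

    F_n(t) = \sum_{k=n+1}^{\infty} (-1)^{k-n-1} \binom{k-2}{n-1} \frac{(\lambda t)^{k-1}}{(k-1)!}
           = \sum_{j=n}^{\infty} (-1)^{j-n} \binom{j-1}{n-1} \frac{(\lambda t)^{j}}{j!},

the same series as (19) and (20) there, as it must be since F_n = G_n for that process.)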
4. COMPARISON WITH PREVIOUS RESULTS.

In [2], use was made of the probability generating function \sum_k v_k(t) z^k of N(0, t) to obtain conditions for the validity of (21). Specifically, let ρ_t be the radius of convergence of this generating function. Then it was shown in [2] that if ρ_t > 2, the series (21) converges absolutely, and hence so also do (19) and (20). Conversely, the absolute convergence of (19), (20) or (21) (for some n) implies ρ_t ≥ 2, and in fact is a slightly stronger assumption than this, but apparently does not necessarily imply ρ_t > 2. However, if ρ_s = 2 and if (20) is absolutely convergent at t = s, it follows from Theorem 5 that (19) and (21) are valid for t < s, but may conceivably not hold at t = s.

If ρ_t < 2 then, from the above remarks, neither (19) nor (21) can converge absolutely. However, if 1 < ρ_t ≤ 2, other expressions can be found for F_n(t) in terms of the α_k(t). An investigation (involving also P. Imrey) has shown that as long as ρ_t > 1, then, for example,

(22)    F_n(t) = \lambda^{-1} \sum_{k=n+1}^{\infty} \frac{1}{(1+q)^{k+1}}
                 \sum_{s=n+1}^{k} (-1)^{s-k-1} \binom{k}{s} \binom{s-2}{n-1} q^{k-s}\, \frac{\beta_s'(t)}{s!}

for any q chosen such that q > (ρ_t - 1)^{-2} - 1. The proof of (22) may be accomplished by means of Euler's "q-transform". It is planned to develop this and other expressions in a later paper.
REFERENCES

[1]  Leadbetter, M. R., "On streams of events and mixtures of streams", J. Roy. Stat. Soc. (B) 28 (1966), pp. 218-227.

[2]  Leadbetter, M. R., "On the distribution of the times between events in a stationary stream of events", Institute of Statistics Mimeo Series No. 524, University of North Carolina Statistics Department. Also to appear in J. Roy. Stat. Soc. (B) (1969).

[3]  Longuet-Higgins, M. S., "The distribution of intervals between zeros of a stationary random function", Phil. Trans. Roy. Soc. 254 (1962), pp. 557-599.