Meyer, Richard M. "Some Poisson-type limit theorems for sequences of dependent rare events."

SOME POISSON-TYPE LIMIT THEOREMS FOR SEQUENCES
OF DEPENDENT RARE EVENTS, WITH APPLICATIONS
by
Richard M. Meyer
University of North Carolina
Institute of Statistics Mimeo Series No. 529
Partially Supported by National Institutes of Health,
National Institute of General Medical Sciences Grant No. GM-10397
DEPARTMENT OF STATISTICS
University of North Carolina
Chapel Hill, N. C.
ACKNOWLEDGEMENT

The author is deeply appreciative of the opportunity to have the creative and valuable guidance of Professor W. J. Hall in the writing of this thesis. His reading of and comments on the text have contributed a great deal to its present form.
Thanks also to Profs. S. Ikeda, M. R. Leadbetter and J. Th. Runnenburg for many stimulating conversations concerning, in particular, Chapters 2 and 3. Appreciation is expressed to the entire faculty. Thanks are due to the Department of Statistics and the National Aeronautics and Space Administration for financial support during studies. Thanks to Mrs. Linda Burgess for her nice typing of the manuscript, and to the secretarial staff of the Department for their aid.
TABLE OF CONTENTS

                                                                        Page
ACKNOWLEDGEMENT                                                           ii
INTRODUCTION AND SUMMARY                                                  iv
I    POISSON-TYPE LIMIT THEOREMS FOR SEQUENCES OF m-DEPENDENT,
     f(n)-DEPENDENT AND STRONGLY MIXING 'RARE' EVENTS                      1
     1.0  Summary                                                          1
     1.1  Introduction and Preliminary Results                             1
     1.2  A Poisson-Bernoulli-Type Limit Result for f(n)-dependent
          events                                                          12
     1.3  A Multivariate Analogue of Theorem 1.1.2                        24
     1.4  Some Generalizations of a result due to Loynes [12]             31
II   ASYMPTOTIC STRUCTURE OF A QUANTUM-BIOPHYSICS OF VISION MODEL         49
     2.0  Summary                                                         49
     2.1  Introduction                                                    49
     2.2  The Limit Problem for non-random n vs. non-random t             53
     2.3  Solution of the Limit Problem for known cases                   56
     2.4  The Distribution of 'Scores'                                    71
     2.5  The Limit Problem for some new situations                       81
III  ASYMPTOTIC DISTRIBUTION OF THE NUMBER OF UPCROSSINGS OF A HIGH
     LEVEL BY CERTAIN DISCRETE-TIME STOCHASTIC PROCESSES                  87
     3.0  Summary                                                         87
     3.1  Introduction and Preliminary Comments Concerning Continuous
          Time Normal Stationary Processes                                87
     3.2  Related Results for discrete time processes that are not
          necessarily normal                                              91
APPENDIX A:  REMARKS CONCERNING NON-RANDOM t AND NON-RANDOM n            108
APPENDIX B:  SOME RESULTS FOR POSITIVE RANDOM VARIABLES                  114
APPENDIX C:  MULTIVARIATE ANALOGUE OF THE BONFERRONI INEQUALITIES        118
BIBLIOGRAPHY                                                             125
INTRODUCTION AND SUMMARY

The main objective of this dissertation is to develop and apply generalizations of some well-known results, valid for sequences of independent events (or random variables), to sequences of dependent events (or random variables). In particular we consider some generalizations of such results as the Poisson limit of the binomial, the Poisson limit for Poisson-Bernoulli trials, and a Borel-Cantelli lemma to the case of dependent events (or random variables). These generalizations turn out to have many interesting and diverse applications. We consider two applications; one relates to a probability model for the quantum-biophysics of vision (essentially the initiation of periods when k or more servers are busy in an M/G/∞ or G/M/∞ queueing system), the other relates to the problem of upcrossings of a high level by certain discrete-time stochastic processes.

In Chapter 1 we develop some general theorems for sequences of certain types of dependent events. In particular, we extend the Poisson limit theorem (for sequences of independent and identical binomial events) to f(n)-dependent and certain strongly mixing sequences of events. Furthermore, the Poisson limit for Poisson-Bernoulli trials is also extended to such sequences of events. A dependent analogue of a Borel-Cantelli lemma is used in the process of establishing the above result. The motivation for many of these results is the work of Watson [19] and Loynes [12], both of which deal specifically with discrete-time stationary stochastic processes. By using a multivariate analogue of the Bonferroni Inequalities, also developed in this dissertation, we establish multivariate analogues of the Watson and Loynes results, and these have applications in certain investigations concerning asymptotic independence of groups of dependent events.

Chapter 2 deals with a very specific probability model for the quantum-biophysics of vision considered, for example, by Ikeda [10]. This work, in fact, originally motivated a closer analysis of f(n)-dependent events, and this led to the work of Chapter 1. This chapter is essentially an application of Corollary 1.1.3.1 (i.e. essentially a Poisson-type result for dependent events). The work provides an approach to the asymptotic behavior of the model not before considered, and it provides some new and interesting results.

In Chapter 3 we apply the general results of Chapter 1 to obtain some new and interesting results on the asymptotic properties of upcrossings of a high level by certain discrete-time stochastic processes. The results, valid for a broad class of discrete-time processes, are similar to those obtained by Cramér and Leadbetter [4] for certain continuous-time normal stationary processes.
CHAPTER I

POISSON-TYPE LIMIT THEOREMS FOR SEQUENCES OF m-DEPENDENT,
f(n)-DEPENDENT AND STRONGLY MIXING 'RARE' EVENTS

1.0  Summary

In this chapter we establish several Poisson-type limit theorems for 'rare' events which are not necessarily independent. These results are generalizations of well-known (see Feller [5], von Mises [13], and Hodges and LeCam [7]) Poisson-type limit theorems for sequences of independent binomial (multinomial) events. Specific results are given for certain sequences of m-dependent, f(n)-dependent and strongly mixing 'rare' events. For example, under mild assumptions on the probabilities of f(n)-dependent 'rare' events, it is shown that the probability that exactly k among n of them occur is asymptotically Poisson as n → ∞. The results obtained will be used to obtain useful generalizations of a result due to Watson [19] dealing with the asymptotic distribution of extreme values of certain m-dependent strictly stationary stochastic processes. Results related to those obtained by Loynes [12] concerning strongly mixing processes will also be given.
1.1  Introduction and preliminary results

There has been considerable interest in extending the validity of certain limit results, known to hold under certain 'ideal' conditions, to situations where these 'ideal' conditions are not satisfied. In this chapter we shall, in particular, be concerned with extending the validity of certain Poisson-type limit results for 'rare' events to 'non-ideal' situations which allow for some dependence.

In a sense, such results serve to establish the robustness of the asymptotic result under deviations from 'ideal' conditions. Of course, the ultimate goal of such investigations is to find necessary and sufficient conditions under which the limit result of interest is valid. Needless to say, such a task is generally very difficult. Frequently, too, it is impossible to evaluate the 'closeness' of the asymptotic result to the finite situation.

The following investigation was originally suggested by a theorem of Watson [19] concerning the limiting behavior of extreme values for certain m-dependent strictly stationary stochastic processes. We shall begin by stating Watson's original theorem. First we state two basic definitions.
Definition 1.1.1  A sequence of p-dimensional random vectors {X_i (i=1,2,...)} is termed strictly stationary if, (i) for every pair of non-negative integers n and h, (ii) for every choice i_1, i_2, ..., i_n of n distinct indices, and (iii) for every choice S_1, S_2, ..., S_n of n p-dimensional (Borel) sets, we have that

(1.1.1)    P\Big( \bigcap_{j=1}^{n} \{ X_{i_j} \in S_j \} \Big) = P\Big( \bigcap_{j=1}^{n} \{ X_{i_j + h} \in S_j \} \Big).

Definition 1.1.2  A sequence of p-dimensional random vectors {X_i (i=1,2,...)} is termed f(n)-dependent (f(n) a non-negative integer ≤ n) provided the following condition is satisfied: s − r > f(n) implies that the two sets of random vectors (X_1, X_2, ..., X_r) and (X_s, X_{s+1}, ..., X_n) are independent.
Definition 1.1.2 of f(n)-dependence, introduced by Hoeffding and Robbins [8], implies that any number of blocks of successive terms from {X_i, i=1,2,...} are independent whenever the largest subscript is not larger than n and successive blocks are separated by more than f(n) indices. An important special case is when f(n) = m for all n (n ≥ m). In such a case the sequence {X_i (i=1,2,...)} is termed m-dependent.
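To make the m-dependent special case concrete, the following small sketch (an illustration added here, not part of the original text; the construction and parameter values are assumptions) builds an m-dependent strictly stationary sequence as a standardized moving average of i.i.d. normal variables, so that terms whose indices differ by more than m share no underlying variables and are therefore independent.

    import numpy as np

    def moving_average_sequence(n, m, rng):
        """X_i = (Z_i + ... + Z_{i+m}) / sqrt(m+1) with Z_j i.i.d. N(0,1).

        X_i and X_j involve disjoint sets of Z's when |i - j| > m, so the
        sequence is m-dependent (and strictly stationary, with N(0,1) marginals).
        """
        z = rng.standard_normal(n + m)
        return np.convolve(z, np.ones(m + 1), mode="valid") / np.sqrt(m + 1)

    rng = np.random.default_rng(0)
    x = moving_average_sequence(n=100_000, m=3, rng=rng)
    # Sample autocorrelation is positive up to lag m and essentially zero beyond:
    for lag in range(1, 6):
        c = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        print(f"lag {lag}: sample correlation {c:+.3f}")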
We now state the theorem due to Watson [19].

Theorem 1.1.1  Let {x_i (i=1,2,...)} be an m-dependent strictly stationary sequence of random variables which are unbounded above, and which have the property that

(1.1.2)    \lim_{c \to \infty} \frac{1}{P(x_1 > c)} \max_{0 < |i-j| \le m} P(x_i > c,\ x_j > c) = 0.

Then if ξ = n P{x_i > c_n(ξ)} for ξ > 0 fixed,

(1.1.3)    \lim_{n \to \infty} P\{ x_i \le c_n(\xi)\ (i=1,2,...,n) \} = e^{-\xi}.
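The conclusion (1.1.3) can be checked by simulation. The sketch below (added for illustration; the sequence, sample sizes and ξ are assumptions, not the author's) uses the m-dependent Gaussian moving-average sequence described above, for which (1.1.2) holds since jointly normal variables with correlation less than one are asymptotically independent in their extreme tails. It chooses c_n(ξ) so that n P{x_i > c_n(ξ)} = ξ and estimates P{max of x_1,...,x_n ≤ c_n(ξ)}, which should be close to e^{-ξ}.

    import numpy as np
    from math import exp
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, m, xi, reps = 2000, 3, 1.0, 3000
    c_n = norm.ppf(1.0 - xi / n)      # marginal is N(0,1), so n*P(x_i > c_n) = xi

    hits = 0
    for _ in range(reps):
        z = rng.standard_normal(n + m)
        x = np.convolve(z, np.ones(m + 1), mode="valid") / np.sqrt(m + 1)
        hits += (x.max() <= c_n)

    print(f"estimated P(max <= c_n) = {hits / reps:.3f},  e^-xi = {exp(-xi):.3f}")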
This result, already a generalization of a well-known result for sequences of iid random variables, may be expanded in several directions. First, Watson's theorem may be generalized so as to include m-dependent vector-valued strictly stationary sequences, along with 'extremal sets' more general than half-infinite intervals. Second, the limiting probabilities of events such as "x_i > c_n(ξ) for exactly k among i = 1,2,...,n" may be derived. In addition, the frequency with which the x_i's fall in the cells of a multichotomy rather than a dichotomy may be considered. Finally, more general types of dependence may be considered, and the requirement of stationarity may be relaxed.

A closer examination of the above situation suggests that a fruitful approach is to abandon reference to a stochastic process altogether, and instead deal only with events. Such an approach does, in fact, lead to general results about Poisson-type limit theorems, of which the Watson theorem and its generalizations become special cases. We shall have need of the analogue of Definition 1.1.2 for events. For completeness we state it here.

Definition 1.1.3  A sequence of events {E_i (i=1,2,...)} is termed f(n)-dependent provided the following condition is satisfied: s − r > f(n) implies that the two collections of events (E_1,...,E_r) and (E_s,...,E_n) are mutually independent.

The remarks following Definition 1.1.2 also specialize to the case at hand.

In order to provide a proper setting for treating Poisson-type limit theorems for 'rare' events, for each n we introduce a basic probability space (Ω_n, 𝒜_n, P_n) and a sequence of events {A_i^n (i=1,2,...)} therein. For notational convenience we shall suppress the index n on the probability measure P_n, and we shall, for example, write P(A_i^n) instead of the (proper) expression P_n(A_i^n). There may or may not be a relationship between the various probability spaces. It is felt that no confusion will arise from our notational simplification.

It is well known that if, for each n, the events {A_i^n (i=1,...,n)} are independent and P(A_i^n) = ξ/n (ξ > 0; i=1,2,...,n), then for any non-negative integer k, P(exactly k among A_i^n (i=1,...,n) occur) → e^{-ξ} ξ^k/k! as n → ∞. This is a simple example of the Poisson limit of the binomial distribution. It is not clear, however, how robust such a result is. That is, how much and what kind of dependence can exist amongst the events {A_i^n (i=1,2,...,n)} and still have the same Poisson limit result? It will be seen that the following theorem, concerning f(n)-dependent events, gives a sufficient condition. A generalization of Watson's theorem is a special case.
Our proof is patterned after Watson [19].
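Before turning to the dependent case, the independent benchmark just described is easy to check numerically. The short sketch below (illustrative only; n and ξ are arbitrary choices) compares the Binomial(n, ξ/n) probabilities of "exactly k occur" with the Poisson limit e^{-ξ} ξ^k/k!.

    from math import comb, exp, factorial

    n, xi = 200, 2.0
    p = xi / n
    for k in range(6):
        binom = comb(n, k) * p**k * (1 - p)**(n - k)
        limit = exp(-xi) * xi**k / factorial(k)
        print(f"k={k}:  binomial {binom:.5f}   Poisson limit {limit:.5f}")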
Theorem 1.1.2  For each n (n ≥ m) let {A_i^n (i=1,2,...,n)} be a sequence of m-dependent events such that for some ξ > 0 we have, as n → ∞,

(1.1.4)    a_n = \max_{j=1,...,n} | P(A_j^n) - \xi/n | = o(1/n),

and

(1.1.5)    b_n = \max_{0 < |i-j| \le m;\ i,j=1,2,...,n} P(A_i^n \mid A_j^n) = o(1).

Then, as n → ∞,

(1.1.6)    P(exactly k amongst A_i^n (i=1,...,n) occur) → e^{-\xi} \xi^k / k!.
Proof.  Let S_r^n = \sum P(A_{i_1}^n \cdots A_{i_r}^n), the summation extending over all r-tuples (i_1,...,i_r) such that 1 ≤ i_1 < ⋯ < i_r ≤ n. We first consider the case k = 0. By the Bonferroni Inequalities (see, for example, Feller [5] p. 100) we have for any even integer ℓ, 0 ≤ ℓ ≤ n,

(1.1.7)    B_{\ell,0}^n \le P\Big( \bigcap_{i=1}^{n} \bar{A}_i^n \Big) \le U_{\ell,0}^n,

where

(1.1.8)    B_{\ell,0}^n = \sum_{j=0}^{\ell-1} (-1)^j S_j^n

and

(1.1.9)    U_{\ell,0}^n = \sum_{j=0}^{\ell} (-1)^j S_j^n    (with S_0^n = 1).

We have from (1.1.4) that S_1^n = \sum_{i=1}^{n} P(A_i^n) → ξ as n → ∞.

Now we consider S_2^n. By m-dependence we have

(1.1.10)    S_2^n = \sum_{i=1}^{n-1} \sum_{j=i+1}^{m'} P(A_i^n A_j^n) + \sum_{i=1}^{n-m-1} \sum_{j=m'+1}^{n} P(A_i^n) P(A_j^n),

where m' = min(n, i+m). From (1.1.5) we have that, as n → ∞,

(1.1.11)    \sum_{i=1}^{n-1} \sum_{j=i+1}^{m'} P(A_i^n A_j^n) \le nm \max_{0 < |i-j| \le m} P(A_i^n A_j^n) \le nm\,[\xi/n + o(1/n)]\,b_n \to 0.

Further, the uniformity of condition (1.1.4) guarantees that, as n → ∞,

(1.1.12)    \sum_{i=1}^{n-m-1} \sum_{j=m'+1}^{n} P(A_i^n) P(A_j^n) \sim \tfrac{1}{2}(n-m)(n-m-1)\,[\xi/n + o(1/n)]^2 \to \xi^2/2!.

Thus S_2^n → ξ^2/2!.

The general term S_r^n (r > 2) contains \binom{n}{r} terms. Let e_j^n (j=0,1,...,r) denote that subset of these terms in which exactly j of the A_i^n's appearing differ in subscript by more than m from their nearest neighbor. (Assume n is large enough for all cases to be possible, and observe that e_r^n = e_{r-1}^n.) Letting N_j^n denote the number of elements in e_j^n, it is known (see Watson [19]) that N_j^n = O(n^{j+1}) for j ≤ r−1 and N_r^n = N_{r-1}^n \sim n^r/r! as n → ∞. Using m-dependence and (1.1.4) it then follows that the contribution to S_r^n of the elements in e_j^n (j ≤ r−1) is bounded by

(1.1.13)    K_j\, n^{j+1} [\xi/n + o(1/n)]^j\, b_n,

where K_j is a constant independent of n. From (1.1.5) the expression in (1.1.13) tends to zero as n → ∞. However, the contribution to S_r^n of the elements in e_r^n is, asymptotically,

(1.1.14)    \frac{n^r}{r!}\,[\xi/n + o(1/n)]^r \to \xi^r/r!.

Thus, for r ≥ 2, S_r^n → ξ^r/r! as n → ∞, and so for every even integer ℓ we have established that

(1.1.15)    \sum_{j=0}^{\ell-1} \frac{(-\xi)^j}{j!} \le \liminf_{n \to \infty} P\Big( \bigcap_{i=1}^{n} \bar{A}_i^n \Big) \le \limsup_{n \to \infty} P\Big( \bigcap_{i=1}^{n} \bar{A}_i^n \Big) \le \sum_{j=0}^{\ell} \frac{(-\xi)^j}{j!}.

Since ℓ is arbitrary,

(1.1.16)    \lim_{n \to \infty} P\Big( \bigcap_{i=1}^{n} \bar{A}_i^n \Big) = e^{-\xi}.

For k > 0, again using the Bonferroni inequalities, it follows that for any even integer ℓ such that k+ℓ ≤ n,

(1.1.17)    B_{\ell,k}^n \le P\{exactly k amongst A_i^n (i=1,...,n) occur\} \le U_{\ell,k}^n,

where

(1.1.18)    B_{\ell,k}^n = S_k^n - \binom{k+1}{k} S_{k+1}^n + \cdots + (-1)^{\ell-1} \binom{k+\ell-1}{k} S_{k+\ell-1}^n,

and

(1.1.19)    U_{\ell,k}^n = S_k^n - \binom{k+1}{k} S_{k+1}^n + \cdots + (-1)^{\ell} \binom{k+\ell}{k} S_{k+\ell}^n.

We have already shown that S_{k+j}^n → ξ^{k+j}/(k+j)! as n → ∞, and so

(1.1.20)    \lim_{n \to \infty} B_{\ell,k}^n = \sum_{j=0}^{\ell-1} (-1)^j \binom{k+j}{k} \frac{\xi^{k+j}}{(k+j)!} = \frac{\xi^k}{k!} \sum_{j=0}^{\ell-1} \frac{(-\xi)^j}{j!}

and similarly \lim_{n \to \infty} U_{\ell,k}^n = (\xi^k/k!) \sum_{j=0}^{\ell} (-\xi)^j/j!. Thus, since ℓ is arbitrary,

P{exactly k amongst A_i^n (i=1,...,n) occur} → e^{-\xi} \xi^k / k!  as n → ∞.

With this the proof is completed.
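A Monte Carlo check of Theorem 1.1.2 can be run on a concrete m-dependent example (a sketch added here under assumed parameters, not the author's computation): with X_i the standardized Gaussian moving average used earlier and A_i^n = {X_i > c_n(ξ)}, the events are m-dependent with P(A_i^n) = ξ/n exactly, conditions (1.1.4) and (1.1.5) hold, and the empirical frequencies of "exactly k occur" should approach the Poisson(ξ) probabilities.

    import numpy as np
    from scipy.stats import norm, poisson

    def exceedance_counts(n, m, xi, reps, rng):
        """For each replication, count how many of the m-dependent rare events
        A_i = {X_i > c_n(xi)} occur, where X is a standardized moving average
        of i.i.d. normals and the threshold gives P(A_i) = xi/n exactly."""
        c_n = norm.ppf(1.0 - xi / n)
        counts = np.empty(reps, dtype=int)
        for r in range(reps):
            z = rng.standard_normal(n + m)
            x = np.convolve(z, np.ones(m + 1), mode="valid") / np.sqrt(m + 1)
            counts[r] = int((x > c_n).sum())
        return counts

    rng = np.random.default_rng(2)
    counts = exceedance_counts(n=5000, m=3, xi=2.0, reps=3000, rng=rng)
    for k in range(6):
        print(f"k={k}: empirical {np.mean(counts == k):.3f}   "
              f"Poisson {poisson.pmf(k, 2.0):.3f}")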
As one might suspect, the conditions (1.1.4) and (1.1.5) can be still further weakened and yet have (1.1.6) remain valid. We shall pursue this point later. For the moment, we shall concentrate on some immediate consequences of this particular (robust) version of the Poisson limit to the binomial.

First, it is clear that Watson's theorem is a special case of Theorem 1.1.2. If {x_i (i=1,2,...)} is an m-dependent strictly stationary sequence of random variables, unbounded above, define A_j^n = {x_j > c_n(ξ)}, where c_n(ξ) is chosen so that ξ/n = P{x_j > c_n(ξ)} (ξ > 0). If the hypotheses of Watson's Theorem 1.1.1 are satisfied, the sets {A_j^n (j=1,2,...,n)} satisfy the conditions of Theorem 1.1.2. In fact, we state two vector-valued versions (one stationary, one not necessarily stationary) of Watson's original result. They will be of use in Chapter II.
Corollary 1.1.2.1  Let {Y_i (i=1,2,...)} be a strictly stationary sequence of p-dimensional, m-dependent random vectors. Further, let S_n = S_n(ξ) (ξ > 0, n=1,2,...) be a sequence of p-dimensional (Borel) sets such that as n → ∞

(1.1.21)    P(Y_1 \in S_n) = \xi/n + o(1/n)

and

(1.1.22)    \max_{0 < |i-j| \le m} P(Y_i \in S_n \mid Y_j \in S_n) \to 0.

Then P{Y_i ∈ S_n for exactly k amongst i = 1,2,...,n} → e^{-ξ} ξ^k/k! as n → ∞.
Note that Theorem 1.1.2 applies to sequences of m-dependent events all of whose elements need not have equal probability. Condition (1.1.4) requires only that the differences amongst the probabilities of the events in the sequence be uniformly o(1/n) as n → ∞. Thus, Corollary 1.1.2.1 has an analogue for sequences of p-dimensional m-dependent random vectors that are not necessarily stationary. For completeness we state this analogue.
Corollary 1.1.2.2  Let {Y_i (i=1,2,...)} be a sequence of p-dimensional, m-dependent random vectors. Let S_n = S_n(ξ) (ξ > 0; n=1,2,...) be a sequence of p-dimensional (Borel) sets such that as n → ∞

(1.1.23)    a_n' = \max_{i=1,...,n} | P(Y_i \in S_n) - \xi/n | = o(1/n)

and

(1.1.24)    b_n' = \max_{0 < |i-j| \le m;\ i,j=1,...,n} P(Y_i \in S_n \mid Y_j \in S_n) = o(1).

Then, as n → ∞, P{Y_i ∈ S_n for exactly k amongst i = 1,2,...,n} → e^{-ξ} ξ^k/k!.
The case where {A_i^n (i=1,2,...)} are mutually independent suggests that Theorem 1.1.2 may be valid under certain weaker assumptions. In particular, the following result will be of use in later applications.

Theorem 1.1.3  For each n (n ≥ m) let {A_i^n (i=1,2,...,N)} be a sequence of m-dependent events such that for some ξ > 0, as n → ∞,

(1.1.25)    a_n' = \max_{j=1,...,N} | P(A_j^n) - \xi/n | = o(1/n),

(1.1.26)    b_n' = \max_{0 < |i-j| \le m;\ i,j=1,2,...,N} P(A_i^n \mid A_j^n) = o(1),

and

(1.1.27)    N = n + o(n).

Then, as n → ∞,

(1.1.28)    P(exactly k amongst A_i^n (i=1,...,N) occur) → e^{-\xi} \xi^k / k!.

Proof.  It is easily seen that the proof of Theorem 1.1.2 carries through when n is replaced by N = n + o(n) in the appropriate places.
It will be of use in Chapter II to have the following related result.

Corollary 1.1.3.1  Let {Y_i (i=1,2,...)} be a stationary sequence of p-dimensional m-dependent random vectors. Further, let S_n = S_n(ξ) (ξ > 0; n=1,2,...) be a sequence of p-dimensional (Borel) sets such that as n → ∞

(1.1.29)    P(Y_1 \in S_n) = \xi/n + o(1/n),

(1.1.30)    b_n' = \max_{0 < |i-j| \le m} P(Y_i \in S_n \mid Y_j \in S_n) = o(1),

and

(1.1.31)    N \sim n.

Then, as n → ∞,

(1.1.32)    P{Y_i ∈ S_n for exactly k amongst i = 1,2,...,N} → e^{-\xi} \xi^k / k!.
The following result elaborates the asymptotic Poisson process character of the occurrences of events of type A_i^n. It is of interest in later applications, and also in suggesting future results in the f(n)-dependence and strong-mixing cases.
Theorem 1.1.4  For each n let {A_i^n (i=1,2,...)} be a sequence of m-dependent events satisfying the following conditions. For some ξ > 0 and as n → ∞,

(1.1.33)    a_n = \max_{i=1,2,...} | P(A_i^n) - \xi/n | = o(1/n),

and

(1.1.34)    b_n = \max_{0 < |i-j| \le m;\ i,j=1,2,...} P(A_i^n \mid A_j^n) = o(1).

Further, let s and t (0 ≤ s < t) be any two real numbers. Then, as n → ∞,

P_n^k(s,t) = P(exactly k amongst A_{[sn]+1}^n, A_{[sn]+2}^n, ..., A_{[tn]}^n occur) → e^{-\xi(t-s)} [\xi(t-s)]^k / k!.

Proof.  For simplicity we shall treat sn and tn as integers. This avoids certain notational difficulties, and the theorem is valid whether or not this is the case. Let N = (t−s)n, and B_j^N = A_{sn+j}^n, j = 1,2,...,N. Now using (1.1.33) we have

(1.1.35)    N a_N' = N \max_{j=1,...,N} | P(B_j^N) - \xi(t-s)/N | = N \max_{j=1,...,N} | P(A_{sn+j}^n) - \xi/n | \le (t-s)\, n\, a_n \to 0 as n → ∞.

Furthermore, as n → ∞ (N → ∞),

b_N' = \max_{0 < |i-j| \le m;\ i,j=1,...,N} P(B_i^N \mid B_j^N) \le \max_{0 < |i-j| \le m;\ i,j=1,2,...} P(A_i^n \mid A_j^n) \to 0

by (1.1.34). Thus, via Theorem 1.1.2 the desired result is established.
We note that under the assumptions of Theorem 1.1.4, P_n^k(0, t/ξ) → e^{-t} t^k/k!. Thus, judged in terms of 'time units' of length n/ξ, the waiting time until the k-th occurrence of an event of the type A_i^n has a distribution function F_k^n(t) satisfying the relationship

(1.1.36)    P\{Time to k-th event > t\} = 1 - F_k^n(t) = \sum_{j=0}^{k-1} P_n^j(0, t/\xi).

Since the right-hand side of (1.1.36) tends to a limit as n → ∞, F_k^n(t) has a limit, say F_k(t), for every fixed k ≥ 1. Thus, in terms of 'time units' of length n/ξ, the distribution function F_k(t) of the (asymptotic) waiting time until the k-th occurrence of an event of type A_i^n is that of a random variable with a Gamma(k) distribution, viz.

(1.1.37)    F_k(t) = 1 - \sum_{j=0}^{k-1} e^{-t} t^j / j!    (t ≥ 0).

Theorem 1.1.4 could have been stated in slightly more general terms. For example, if δ_j = (δ_{1j}, δ_{2j}), j = 1,2,...,r, are r disjoint bounded intervals of [0,∞) and θ = \sum_{j=1}^{r} (δ_{2j} - δ_{1j}), then under the assumptions of Theorem 1.1.4 the limiting probability that exactly k amongst A_i^n with subscripts i in \bigcup_{j=1}^{r} \{[n δ_{1j}]+1, ..., [n δ_{2j}]\} occur is e^{-θξ} (θξ)^k / k!. We refer ahead to Corollary 1.3.1.2 for a more refined result.
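The identity behind (1.1.36) and (1.1.37), namely that the asymptotic waiting time to the k-th occurrence has the Gamma(k) distribution function 1 − Σ_{j<k} e^{-t} t^j/j!, can be verified directly, as in the sketch below (illustrative; the values of k and t are arbitrary).

    from math import exp, factorial
    from scipy.stats import gamma

    def F_k(t, k):
        """Gamma(k) distribution function via (1.1.37): 1 - sum_{j<k} e^-t t^j / j!."""
        return 1.0 - sum(exp(-t) * t**j / factorial(j) for j in range(k))

    for k in (1, 2, 5):
        for t in (0.5, 1.0, 3.0):
            print(f"k={k}, t={t}:  (1.1.37) gives {F_k(t, k):.6f}   "
                  f"Gamma(k) cdf {gamma.cdf(t, a=k):.6f}")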
1.2  A Poisson-Bernoulli-type limit result for f(n)-dependent events

In the preceding Section, the Poisson limit of the binomial was extended to include certain sequences of m-dependent events. These extensions permitted the individual events in the sequence to have differing probabilities, yet the differences were required to be, in a sense, uniformly o(1/n) as n → ∞.

In the present section we show that, by examining the structure of f(n)-dependent events more closely, an analogue of the Poisson-Bernoulli limit theorem (see Feller [5] p. 264, Hodges and LeCam [7]) can be proved for certain sequences of f(n)-dependent events. Two versions of such a result will be stated. In addition, an extension of the results of the preceding section to sequences of f(n)-dependent events is available upon specializing the results.
Von Mises [13] proved that if for each n {A_i^n; i=1,2,...} is a sequence of mutually independent events such that, as n → ∞, a_n = \sum_{i=1}^{n} P(A_i^n) → a < ∞ and \max_i P(A_i^n) → 0, then P{exactly k amongst A_i^n (i=1,...,n) occur} → e^{-a} a^k/k! as n → ∞. Hodges and LeCam [7] proved a more powerful result, still for independent events, that provided a bound on the difference between the limit value and the actual finite-stage value of the above probability.

The work of this section provides for f(n)-dependent events a result intermediate to the Von Mises and Hodges and LeCam results. Unfortunately, no bound is available for the difference between the limiting probability and its value at any finite stage.
Theorem 1.2.1  For each n, let {A_i^n (i=1,2,...,n)} be a sequence of f(n)-dependent events. Let p_i^n = P(A_i^n) (i=1,2,...,n) and define a_n = \sum_{i=1}^{n} p_i^n. Suppose that as n → ∞

(1.2.1)    p_n = \max_{1 \le i \le n} p_i^n = O(1/n),

(1.2.2)    f(n) = o(n),

(1.2.3)    c_n^{(1)} = \max_{I_1^n} P(A_{i_2}^n \mid A_{i_1}^n) = o(1/f(n))  and  c_n^{(r)} = \max_{I_r^n} P(A_{i_{r+1}}^n \mid A_{i_1}^n \cdots A_{i_r}^n) = O(1/f(n)), r = 2,3,...,

where I_r^n, r = 1,2,..., is the set {(i_1,...,i_{r+1}) | 1 ≤ i_1 < ⋯ < i_{r+1} ≤ n; i_{j+1} − i_j ≤ f(n); i_j = 1,2,...,n; j = 1,2,...,r}.

Then, as n → ∞,

(1.2.4)    P(exactly k amongst A_i^n (i=1,...,n) occur) \sim e^{-a_n} a_n^k / k!.
Proof.  We prove the theorem for the special case where a_n = a < ∞; n = 1,2,.... It will be seen that the proof carries through, with appropriate modifications, for the general case, since (1.2.1) implies that {a_n} is a bounded sequence.

First we prove the theorem for the case k = 0. Let

(1.2.5)    P_k^n = P(exactly k amongst A_i^n (i=1,2,...,n) occur),

and

(1.2.6)    S_r^n = \sum P(A_{i_1}^n \cdots A_{i_r}^n),  the summation extending over all r-tuples with 1 ≤ i_1 < ⋯ < i_r ≤ n.

Using the Bonferroni inequalities we have, for any even integer ℓ, 0 ≤ ℓ ≤ n,

(1.2.7)    \sum_{j=0}^{\ell-1} (-1)^j S_j^n \le P_0^n \le \sum_{j=0}^{\ell} (-1)^j S_j^n.

We propose to show that S_r^n → a^r/r! as n → ∞. Certainly this is true for r = 1, since S_1^n = \sum_{i=1}^{n} p_i^n = a.

Consider now S_2^n. By definition

(1.2.8)    S_2^n = \Big( \sum_{N} + \sum_{F} \Big) P(A_i^n A_j^n),

where

(1.2.9)    N = \{(i,j) \mid 1 \le i < j \le n,\ j-i \le f(n)\}  and  F = \{(i,j) \mid 1 \le i < j \le n,\ j-i > f(n)\}.

Now \sum_{N} P(A_i^n A_j^n) = \sum_{N} P(A_j^n \mid A_i^n) P(A_i^n) \le p_n \sum_{N} P(A_j^n \mid A_i^n) \le n p_n f(n) c_n^{(1)} → 0 as n → ∞ via (1.2.1) and (1.2.3). Furthermore, from f(n)-dependence,

\sum_{F} P(A_i^n A_j^n) = \sum_{F} p_i^n p_j^n = \tfrac{1}{2}\Big[ a^2 - \sum_{i=1}^{n} (p_i^n)^2 \Big] - \sum_{N} p_i^n p_j^n.

Now using (1.2.1) and (1.2.2), \tfrac{1}{2} \sum_{i=1}^{n} (p_i^n)^2 \le \tfrac{1}{2} p_n a → 0, while \sum_{N} p_i^n p_j^n \le n f(n) p_n^2 → 0 as n → ∞. Therefore S_2^n → a^2/2! as n → ∞.

We now consider S_r^n, r > 2. Recall,

(1.2.10)    S_r^n = \sum_{C} P(A_{i_1}^n \cdots A_{i_r}^n) = \Big( \sum_{F} + \sum_{N} \Big) P(A_{i_1}^n \cdots A_{i_r}^n),

where

(1.2.11)    C = \{(i_1,...,i_r) \mid 1 \le i_1 < \cdots < i_r \le n\},  F = C \cap \{(i_1,...,i_r) \mid i_2 - i_1 > f(n), ..., i_r - i_{r-1} > f(n)\},  and  N = C - F.

We shall show that, as n → ∞, (a) \sum_{F} P(A_{i_1}^n \cdots A_{i_r}^n) → a^r/r!, and (b) \sum_{N} P(A_{i_1}^n \cdots A_{i_r}^n) → 0.

We first prove (a). Note that via f(n)-dependence,

(1.2.12)    \sum_{F} P(A_{i_1}^n \cdots A_{i_r}^n) = \sum_{F} p_{i_1}^n \cdots p_{i_r}^n = \Big( \sum_{C} - \sum_{N} \Big) p_{i_1}^n \cdots p_{i_r}^n,

and via symmetry

(1.2.13)    \sum_{C} p_{i_1}^n \cdots p_{i_r}^n = \frac{1}{r!} \Big( a^r - \sum_{k=2}^{r} \sum_{D_k} p_{i_1}^n \cdots p_{i_r}^n \Big),

where D_k denotes the set of (ordered) r-tuples of indices in which exactly k of the i_j's are equal. For fixed k (2 ≤ k ≤ r) and some constant K_k independent of n,

(1.2.14)    \sum_{D_k} p_{i_1}^n \cdots p_{i_r}^n \le K_k\, n^{r-k+1} p_n^{r} \to 0  as n → ∞

via (1.2.1). Thus to establish (a) it remains only to show that \sum_{N} p_{i_1}^n \cdots p_{i_r}^n → 0 as n → ∞. To do this, we introduce the following notation. Let Λ = {λ = (λ_1,...,λ_{r-1}) | λ_i = 0,1; i=1,...,r−1; Σ_i λ_i < r−1}. For each λ ∈ Λ let N_λ be the following subset of N:

(1.2.15)–(1.2.16)    N_λ = \{(i_1,...,i_r) \in N \mid i_j - i_{j-1} > f(n)\ \text{if}\ λ_{j-1} = 1\ \text{and}\ i_j - i_{j-1} \le f(n)\ \text{if}\ λ_{j-1} = 0;\ j = 2,...,r\}.

Clearly, N = \bigcup_{λ \in Λ} N_λ, and this is a (disjoint) union of 2^{r-1} − 1 terms. We shall show that for each λ ∈ Λ, \sum_{N_λ} p_{i_1}^n \cdots p_{i_r}^n → 0 as n → ∞. By symmetry it will suffice to show that for fixed but arbitrary k, 0 ≤ k ≤ r−2,

(1.2.17)    \sum_{N_{λ'}} p_{i_1}^n \cdots p_{i_r}^n \to 0  as n → ∞,  where  λ' = (1,...,1,0,...,0) with k ones followed by r−1−k zeros.

Now, assuming n is large enough, the indices may be summed out successively; each index that follows a gap constrained to be at most f(n) ranges over at most f(n) values, while each of the remaining indices ranges over at most n values ((1.2.18)–(1.2.20)). A geometric argument (analogous to the case r = 2) can then be used to show that

(1.2.21)    \sum_{N_{λ'}} p_{i_1}^n \cdots p_{i_r}^n \le a^k\, n p_n\, [p_n f(n)]^{r-k-1} \to 0  as n → ∞,

via (1.2.1) and (1.2.2). With this, we have established (a).

To establish (b) we again make use of the above decomposition of the set N and write

(1.2.22)    \sum_{N} P(A_{i_1}^n \cdots A_{i_r}^n) = \sum_{λ \in Λ} \sum_{N_λ} P(A_{i_1}^n \cdots A_{i_r}^n).

We shall show that for each fixed λ ∈ Λ, \sum_{N_λ} P(A_{i_1}^n \cdots A_{i_r}^n) → 0 as n → ∞. An appeal to symmetry again shows that it is sufficient to prove that, for fixed but arbitrary k, 0 ≤ k ≤ r−2,

(1.2.23)    \sum_{N_{λ'}} P(A_{i_1}^n \cdots A_{i_r}^n) \to 0  as n → ∞,  where  λ' = (1,...,1,0,...,0) with k ones followed by r−1−k zeros.

By f(n)-dependence we have

(1.2.24)    \sum_{N_{λ'}} P(A_{i_1}^n \cdots A_{i_r}^n) = \sum_{N_{λ'}} p_{i_1}^n \cdots p_{i_k}^n\, P(A_{i_{k+1}}^n \cdots A_{i_r}^n)

(where the product p_{i_1}^n \cdots p_{i_k}^n is understood to be unity if k = 0). Since P(A_{i_{k+1}}^n \cdots A_{i_r}^n) = P(A_{i_{k+2}}^n \cdots A_{i_r}^n \mid A_{i_{k+1}}^n) P(A_{i_{k+1}}^n), it is easily seen that, summing over the clustered indices,

(1.2.25)    \sum \cdots \sum P(A_{i_{k+1}}^n \cdots A_{i_r}^n) \le n p_n \prod_{j=1}^{r-k-1} [ f(n)\, c_n^{(j)} ].

Thus, by (1.2.25), (1.2.1) and (1.2.3) we have

(1.2.26)    \sum_{N_{λ'}} P(A_{i_1}^n \cdots A_{i_r}^n) \le a^k\, n p_n \prod_{j=1}^{r-k-1} [ f(n)\, c_n^{(j)} ] \to 0  as n → ∞.

With this, (b) is established.

From this point on, the arguments presented in the proof of Theorem 1.1.2 (from (1.1.15) onward) apply word for word. We may therefore consider the proof for k = 0, and general k, accomplished.
It will be seen that the conditional probabilities in (1.2.3) can be 'much larger' than the unconditional probabilities, though they must tend to zero as n → ∞. Furthermore, setting f(n) = m (n ≥ m) gives a Poisson-Bernoulli limit theorem for m-dependent events. However, a somewhat 'better' result can be obtained when f(n) = m. In order to be complete we now state this theorem. It is related in form to Von Mises' [13] theorem.
Theorem 1.2.2  For each n, let {A_i^n (i=1,2,...,n)} be a sequence of m-dependent events. Let p_i^n = P(A_i^n) (i=1,2,...,n) and define a_n = \sum_{i=1}^{n} p_i^n. Suppose that as n → ∞,

(1.2.27)    {a_n} is a bounded sequence,

(1.2.28)    p_n = \max_{1 \le i \le n} p_i^n = o(1),

and

(1.2.29)    b_n = \max_{0 < |i-j| \le m;\ i,j=1,2,...,n} P(A_i^n \mid A_j^n) = o(1).

Then, as n → ∞,

(1.2.30)    P\{exactly k amongst A_i^n (i=1,2,...,n) occur\} \sim e^{-a_n} a_n^k / k!.
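The form of the limit in (1.2.30), with mean a_n = Σ p_i^n rather than a common ξ, is easily checked in the independent baseline case m = 0, where the exact distribution of the number of occurrences (a Poisson-binomial distribution) can be computed by convolution. The sketch below is illustrative only; the unequal probabilities are made-up values.

    import numpy as np
    from math import exp, factorial

    def poisson_binomial_pmf(p):
        """Exact distribution of the number of occurrences among independent
        events with probabilities p, computed by successive convolution."""
        dist = np.array([1.0])
        for pi in p:
            dist = np.convolve(dist, [1.0 - pi, pi])
        return dist

    n = 300
    p = (2.5 / n) * (1.0 + 0.5 * np.sin(np.arange(n)))   # unequal, all O(1/n)
    a_n = p.sum()
    dist = poisson_binomial_pmf(p)
    for k in range(6):
        approx = exp(-a_n) * a_n**k / factorial(k)
        print(f"k={k}: exact {dist[k]:.4f}   e^(-a_n) a_n^k / k! {approx:.4f}")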
The proof of this theorem is similar to that of Theorem 1.2.1. It will be noted that the major difference between the present theorem and Theorem 1.2.1 specialized to f(n) = m is the order of p_n = \max_{1 \le i \le n} p_i^n. In the present theorem we require only that p_n = o(1). Since this does not imply that {a_n} is bounded, we need assumption (1.2.27).

Theorem 1.2.2 is to be compared with the Von Mises theorem for independent events. That theorem (requiring that a_n → a as n → ∞) does not need (1.2.29) because of independence. The Hodges-LeCam theorem does not require {a_n} to be bounded, nor does it require (1.2.29), since the events considered are independent as well.

Some strengthening of Theorem 1.2.2 is possible if the sequence {a_n} properly diverges to +∞. We propose to show that if in the preceding theorem a_n → ∞ as n → ∞, then the conclusion (1.2.30) remains valid when the expression e^{-a_n} a_n^k / k! is replaced by its limiting value of zero. In order to do this, we need the following lemma. It will be seen that it is in one sense an m-dependent analogue of a well-known Borel-Cantelli lemma (see Feller [5], p. 189).
Lemma 1.2.1  For each n (n ≥ m), let {A_i^n (i=1,2,...,n)} be a sequence of m-dependent events. Let p_i^n = P(A_i^n) (i=1,2,...,n) and a_n = \sum_{i=1}^{n} p_i^n. If a_n → ∞ as n → ∞, then for any non-negative integer k,

(1.2.31)    P\{exactly k amongst A_i^n (i=1,...,n) occur\} \to 0.

Proof.  We first prove the lemma for the case m = 0 (i.e., for independent events). Now by independence, as n → ∞,

(1.2.32)    0 \le P\{no A_i^n (i=1,...,n) occurs\} = \prod_{i=1}^{n} (1 - p_i^n) \le e^{-\sum_{i=1}^{n} p_i^n} = e^{-a_n} \to 0.

Thus, P{at least one A_i^n (i=1,...,n) occurs} → 1 as n → ∞.

For each n ≥ 2 it is possible to divide the sequence {A_i^n (i=1,2,...,n)} into two subsequences, say {A_{α_i}^n (i=1,...,N_1^n)} and {A_{β_i}^n (i=1,2,...,N_2^n)}, in such a manner that a_n^{(α)} = \sum_{i=1}^{N_1^n} p_{α_i}^n → ∞ and a_n^{(β)} = \sum_{i=1}^{N_2^n} p_{β_i}^n → ∞ as n → ∞. Then, as above,

P\{no A_{α_i}^n (i=1,2,...,N_1^n) occurs\} \le \prod_i (1 - p_{α_i}^n) \le e^{-a_n^{(α)}} \to 0,  and  P\{no A_{β_i}^n (i=1,...,N_2^n) occurs\} \le e^{-a_n^{(β)}} \to 0.

Thus, combining these two results, we have as n → ∞,

(1.2.33)    P{at least two A_i^n (i=1,2,...,n) occur} ≥ P{at least one A_{α_i}^n and at least one A_{β_i}^n occur} = 1 − P{no A_{α_i}^n or no A_{β_i}^n occurs} ≥ 1 − P{no A_{α_i}^n occurs} − P{no A_{β_i}^n occurs} → 1.

This procedure can be carried out inductively for any positive integer. In particular P{at least (k+1) A_i^n (i=1,2,...,n) occur} → 1 as n → ∞, so that P{exactly k amongst A_i^n (i=1,2,...,n) occur} → 0 as n → ∞ for any non-negative integer k.

We now consider m > 0. Assuming that n is large enough, for each n extract from {A_i^n (i=1,2,...,n)} a subsequence {A_{n_i}^n (i=1,2,...,N_n)} with the following properties:

(1.2.34)    n_i - n_{i-1} > m;\ i=1,2,...,N_n,  and  a_n' = \sum_{i=1}^{N_n} p_{n_i}^n \to \infty  as n → ∞.

(A proof of the fact that such a subsequence can always be selected is contained in Lemma 1.2.2.) Obviously, N_n → ∞ as n → ∞. By m-dependence the events in the subsequence {A_{n_i}^n (i=1,2,...,N_n)} are mutually independent. We may therefore apply the results of the preceding case (m = 0) to them. Clearly for any non-negative integer ℓ,

(1.2.35)    P\{at least ℓ amongst A_{n_i}^n (i=1,2,...,N_n) occur\} \le P\{at least ℓ amongst A_i^n (i=1,...,n) occur\}.

Moreover, from the preceding case m = 0, it can be deduced that

(1.2.36)    P\{at least ℓ amongst A_{n_i}^n (i=1,2,...,N_n) occur\} \to 1  as n → ∞.

This, in conjunction with (1.2.35), serves to establish the Lemma.
As mentioned in the proof of the preceding Lemma, we need the following result.

Lemma 1.2.2  For each n, let {p_i^n (i=1,2,...,n)} be a set of real numbers such that 0 ≤ p_i^n ≤ 1 and a_n = \sum_{i=1}^{n} p_i^n → ∞ as n → ∞. It is always possible to extract from each {p_i^n (i=1,2,...,n)} a subset {p_{n_i}^n (i=1,2,...,N_n)} with the property that for some fixed positive integer m, n_i − n_{i-1} > m, i=1,2,..., and a_n' = \sum_{i=1}^{N_n} p_{n_i}^n → ∞ as n → ∞.

Proof.  We give a constructive proof. For each n, let b_n = max(a_n^{(1)}, a_n^{(2)}), where a_n^{(1)} = \sum_i p_{2i-1}^n and a_n^{(2)} = \sum_i p_{2i}^n; that is, b_n is the maximum of the sums of the odd-numbered terms and even-numbered terms in {p_i^n (i=1,2,...,n)}. If b_n = a_n^{(1)}, let c_n = max(b_n^{(11)}, b_n^{(12)}), where b_n^{(11)} is the sum of the terms in {p_i^n (i=1,...,n)} which have subscripts ≡ 1 (mod 4) and b_n^{(12)} is the sum of the terms in {p_i^n (i=1,...,n)} which have subscripts ≡ 3 (mod 4). If b_n = a_n^{(2)}, let c_n = max(b_n^{(21)}, b_n^{(22)}), where b_n^{(21)} is the sum of the terms in {p_i^n (i=1,2,...,n)} which have subscripts ≡ 2 (mod 4) and b_n^{(22)} is the sum of the terms in {p_i^n (i=1,2,...,n)} which have subscripts ≡ 0 (mod 4). This procedure is carried out a total of r times, where r is the smallest integer such that 2^r > m+1. By this construction we arrive at a subsequence {p_{n_i}^n (i=1,2,...,N_n)} of {p_i^n (i=1,2,...,n)} for each n, with the property that n_i − n_{i-1} = 2^r > m; i=1,2,...,N_n, and, since at each of the r steps the retained sum is at least half of the preceding one, a_n' = \sum_{i=1}^{N_n} p_{n_i}^n ≥ a_n / 2^r → ∞ as n → ∞. With this the Lemma is established, and Theorem 1.2.2 is shown to remain valid when {a_n} properly diverges to +∞.
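The constructive selection in Lemma 1.2.2 amounts to repeatedly keeping the residue class (odd or even positions) with the larger sum; after r steps the retained indices are spaced 2^r > m+1 apart and still carry at least a_n / 2^r of the total mass. The sketch below is one concrete reading of that construction, with made-up input values.

    def extract_spaced_subset(p, m):
        """Select indices spaced more than m apart whose probabilities still sum
        to at least sum(p) / 2**r, where r is the least integer with 2**r > m + 1.

        A sketch of the repeated-halving construction in Lemma 1.2.2.
        """
        r = 0
        while 2 ** r <= m + 1:
            r += 1
        indices = list(range(len(p)))
        for _ in range(r):
            evens = indices[0::2]      # every other index of the current list
            odds = indices[1::2]
            # keep whichever class carries the larger total
            if sum(p[i] for i in evens) >= sum(p[i] for i in odds):
                indices = evens
            else:
                indices = odds
        return indices

    p = [min(1.0, 0.01 * (1 + (i % 7))) for i in range(200)]
    idx = extract_spaced_subset(p, m=3)
    gaps = [j - i for i, j in zip(idx, idx[1:])]
    print("minimum gap:", min(gaps),
          " retained sum:", round(sum(p[i] for i in idx), 3),
          " total sum:", round(sum(p), 3))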
From the statement of Theorem 1.2.1 it is easily seen that an f(n)-dependence version of Theorem 1.1.2 is available.
Since in many cases
f(n)-dependence is a more realistic assumption than m-dependence, such
a result is of interest.
Theorem 1.2.3  For each n, let {A_i^n (i=1,2,...,n)} be a sequence of f(n)-dependent events. Suppose that for some ξ > 0 we have, as n → ∞,

(1.2.33)    a_n' = \max_{j=1,...,n} | P(A_j^n) - \xi/n | = o(1/n),

(1.2.34)    f(n) = o(n),

and

(1.2.35)    c_n^{(1)} = \max_{I_1^n} P(A_{i_2}^n \mid A_{i_1}^n) = o(1/f(n))  and  c_n^{(r)} = \max_{I_r^n} P(A_{i_{r+1}}^n \mid A_{i_1}^n A_{i_2}^n \cdots A_{i_r}^n) = O(1/f(n)), r ≥ 2,

where I_r^n = {(i_1, i_2, ..., i_{r+1}) | 1 ≤ i_1 < ⋯ < i_{r+1} ≤ n; i_{j+1} − i_j ≤ f(n); i_j = 1,2,...,n; j = 1,2,...,r}. Then, as n → ∞,

(1.2.37)    P(exactly k amongst A_i^n; i=1,2,...,n occur) → e^{-\xi} \xi^k / k!.

Proof.  We note that (1.2.33) implies that a_n = \sum_{i=1}^{n} P(A_i^n) → ξ as n → ∞. Certainly \max_{j=1,...,n} P(A_j^n) = O(1/n). Thus the conditions of Theorem 1.2.1 are satisfied and (1.2.37) follows directly.
We remark that under assumptions such as (1.2.34) and (1.2.35),
f(n)-dependent versions of the results of Section 1.1 are almost
immediately available.
It will be necessary to state these extensions
directly.
1.3  A Multivariate Analogue of Theorem 1.1.2

As mentioned in Section 1.1, Theorem 1.1.2 is essentially a generalization (to m-dependent events) of the Poisson Limit theorem for sequences of independent binomial trials. A logical extension of Theorem 1.1.2 is, therefore, to an m-dependent equivalent of sequences of independent multinomial trials. Some care must be taken in stating such a theorem, and hence it may at first appear a bit cumbersome. However, some applications of this theorem prove quite simple and interesting. We shall attempt to state the theorem in the most general form practical. Following the pattern of the previous sections of this chapter, we shall defer a discussion of some specific applications to later chapters. As before, for each n we postulate the existence of a basic probability space (Ω_n, 𝒜_n, P_n) in which the collection of sequences {A_{ij}^n (j=1,...,n)} (i=1,2,...,r) discussed in the following theorems are event sequences. For notational convenience we shall suppress the index n on the probability measure P_n.
We state a basic form of the multivariate analogue of Theorem 1.1.2 as Theorem 1.3.1.

Theorem 1.3.1  For each n let {A_{1j}^n (j=1,2,...,n)}, {A_{2j}^n (j=1,2,...,n)}, ..., {A_{rj}^n (j=1,2,...,n)} be r sequences of mutually m-dependent events. That is, events A_{i_1,j_1}^n and A_{i_2,j_2}^n are independent provided only that |j_2 − j_1| > m, regardless of the values of i_1 and i_2. Suppose that for some ξ_i > 0; i=1,2,...,r we have, as n → ∞,

(1.3.1)    a_n^{(i)} = \max_{j=1,...,n} | P(A_{ij}^n) - \xi_i/n | = o(1/n);  i=1,2,...,r,

and

(1.3.2)    b_n = \max_{S} P(A_{i_1 j_1}^n \mid A_{i_2 j_2}^n) = o(1),

where S is the set defined as S = {(i_1,j_1),(i_2,j_2) | (i_1,j_1) ≠ (i_2,j_2), |j_2 − j_1| ≤ m, i_1,i_2 = 1,...,r; j_1,j_2 = 1,...,n}. Then, for any choice of r non-negative integers m_1, m_2, ..., m_r we have, as n → ∞,

(1.3.3)    P\Big( \bigcap_{i=1}^{r} \{exactly m_i amongst A_{ij}^n (j=1,2,...,n) occur\} \Big) \to \prod_{i=1}^{r} \big( e^{-\xi_i} \xi_i^{m_i} / m_i! \big).

Proof.  We prove the theorem for the case r = 2. The changes necessary to establish the result for general r are straightforward and mainly notational. Let P_{[m_1,m_2]}^n denote the probability that exactly m_1 events amongst {A_{1j}^n (j=1,2,...,n)} occur and exactly m_2 events amongst {A_{2j}^n (j=1,2,...,n)} occur. Further, for 0 ≤ k ≤ n, 0 ≤ h ≤ n define the quantity

(1.3.4)    S_{k,h}^n = \sum_{C^n} P(A_{1 i_1}^n \cdots A_{1 i_k}^n A_{2 j_1}^n \cdots A_{2 j_h}^n),

where C^n is the set consisting of all distinct (k+h)-tuples of integers of the form (i_1,...,i_k; j_1,...,j_h); 1 ≤ i_1 < ⋯ < i_k ≤ n; 1 ≤ j_1 < ⋯ < j_h ≤ n. It is known (see, for example, Fréchet [6]) that

(1.3.5)    P_{[m_1,m_2]}^n = \sum_{k=m_1}^{n} \sum_{h=m_2}^{n} (-1)^{k+h-(m_1+m_2)} \binom{k}{m_1} \binom{h}{m_2} S_{k,h}^n,

where S_{k,h}^n is understood to be zero if k + h > n. We now proceed to evaluate the asymptotic value of (1.3.5) as n → ∞ by evaluating the asymptotic behavior of (1.3.4) as n → ∞.

Consider the terms contributing to the sum S_{k,h}^n. Let C_{s,t}^n (s=0,1,...,k; t=0,1,...,h) denote the subset of terms in S_{k,h}^n in which exactly s events of type {A_{1j}^n (j=1,2,...,n)} and exactly t events of type {A_{2j}^n (j=1,2,...,n)} appearing differ in (second) subscript by more than m from their nearest neighbor of either type (assuming n is sufficiently large for all cases to be possible). Let N_{s,t}^n denote the number of elements in the class C_{s,t}^n (and note that C_{k-1,h}^n = C_{k,h-1}^n = C_{k,h}^n). As in Theorem 1.1.2 it can be shown that N_{s,t}^n = O(n^{s+t+1}) for s+t ≤ k+h−1 and N_{s,t}^n \sim n^{k+h}/k!h! as n → ∞. Therefore, from mutual m-dependence and (1.3.1) it follows that the contribution to S_{k,h}^n of terms in C_{s,t}^n (s+t ≤ k+h−1) is bounded by

(1.3.6)    K_{s,t}\, n^{s+t+1} [\xi_1/n + o(1/n)]^s [\xi_2/n + o(1/n)]^t\, b_n,

where K_{s,t} is a constant independent of n. The expression in (1.3.6) tends to zero as n → ∞ by (1.3.2). However, the contribution to S_{k,h}^n of terms in C_{k,h}^n is, via mutual m-dependence and (1.3.1), asymptotically

(1.3.7)    \frac{n^{k+h}}{k!\,h!} [\xi_1/n + o(1/n)]^k [\xi_2/n + o(1/n)]^h \to \xi_1^k \xi_2^h / k!\,h!.

Thus S_{k,h}^n → ξ_1^k ξ_2^h / k!h! as n → ∞.

Using a multivariate analogue of the Bonferroni Inequalities (see Appendix C), we have for any non-negative integer ℓ, ℓ ≤ [n/2],

(1.3.8)    P_{[m_1,m_2]}^n \le \sum_{t=0}^{2\ell} (-1)^t \sum_{\substack{i+j = t+m_1+m_2 \\ m_1 \le i \le \ell+m_1,\ m_2 \le j \le \ell+m_2}} \binom{i}{m_1} \binom{j}{m_2} S_{i,j}^n,

and

(1.3.9)    P_{[m_1,m_2]}^n \ge \sum_{t=0}^{2\ell+1} (-1)^t \sum_{\substack{i+j = t+m_1+m_2 \\ m_1 \le i \le \ell+1+m_1,\ m_2 \le j \le \ell+1+m_2}} \binom{i}{m_1} \binom{j}{m_2} S_{i,j}^n.

Thus, as n → ∞, we have by (1.3.7) and (1.3.8),

(1.3.10)    \limsup_{n \to \infty} P_{[m_1,m_2]}^n \le \frac{\xi_1^{m_1}}{m_1!} \frac{\xi_2^{m_2}}{m_2!} \sum_{t=0}^{2\ell} (-1)^t \sum_{\substack{i+j=t \\ 0 \le i \le \ell,\ 0 \le j \le \ell}} \frac{\xi_1^i \xi_2^j}{i!\,j!},

and similarly by (1.3.7) and (1.3.9),

(1.3.11)    \liminf_{n \to \infty} P_{[m_1,m_2]}^n \ge \frac{\xi_1^{m_1}}{m_1!} \frac{\xi_2^{m_2}}{m_2!} \sum_{t=0}^{2\ell+1} (-1)^t \sum_{\substack{i+j=t \\ 0 \le i \le \ell+1,\ 0 \le j \le \ell+1}} \frac{\xi_1^i \xi_2^j}{i!\,j!}.

Since ℓ is arbitrary, and the expressions on the right-hand side of (1.3.10) and (1.3.11) both tend to a common limit (as ℓ → ∞), we conclude that P_{[m_1,m_2]}^n tends to this common limit as n → ∞. Hence, as n → ∞,

(1.3.12)    P_{[m_1,m_2]}^n \to \big( e^{-\xi_1} \xi_1^{m_1} / m_1! \big) \big( e^{-\xi_2} \xi_2^{m_2} / m_2! \big).

With this the proof of the theorem is complete.
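The r = 2 case of Theorem 1.3.1 can be illustrated by simulation (a sketch under assumed parameters, not the author's example): two exceedance-event sequences are driven by overlapping moving-average windows of the same i.i.d. normals, so events from the two sequences are mutually dependent only over a bounded range of index separations, and the joint counts should factor asymptotically into a product of Poisson probabilities.

    import numpy as np
    from scipy.stats import norm, poisson

    rng = np.random.default_rng(3)
    n, m, xi1, xi2, reps = 4000, 3, 1.0, 1.5, 4000
    c1, c2 = norm.ppf(1 - xi1 / n), norm.ppf(1 - xi2 / n)
    w = np.ones(m + 1) / np.sqrt(m + 1)

    joint = np.zeros((6, 6))
    for _ in range(reps):
        z = rng.standard_normal(n + m + 1)
        x = np.convolve(z[:-1], w, mode="valid")   # windows Z_i ... Z_{i+m}
        y = np.convolve(z[1:], w, mode="valid")    # windows Z_{i+1} ... Z_{i+m+1}
        k, h = int((x > c1).sum()), int((y > c2).sum())
        if k < 6 and h < 6:
            joint[k, h] += 1
    joint /= reps

    for k in range(3):
        for h in range(3):
            print(f"P(K={k}, H={h}): empirical {joint[k, h]:.3f}  "
                  f"product {poisson.pmf(k, xi1) * poisson.pmf(h, xi2):.3f}")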
We shall have use for several results that follow directly as corollaries from the above theorem. Rather than postponing these until a later chapter, we state them here.
Corollary 1.3.1.1  For each n, let {A_{ij}^n (j=1,2,...,N_i)}, i=1,2,...,r, be r sequences of mutually m-dependent events. Let N_i, ξ_i > 0, λ_i, and 0 ≤ c_1^{(i)} ≤ c_2^{(i)} ≤ 1, i=1,2,...,r, be such that as n → ∞,

(1.3.13)    a_n' = \max_{j=1,...,N_i} | P(A_{ij}^n) - \xi_i/n | = o(1/n);  i=1,2,...,r,

(1.3.14)    b_n' = \max P(A_{i_1 j_1}^n \mid A_{i_2 j_2}^n) = o(1),

and

(1.3.15)    N_i \sim \lambda_i n,  i=1,2,...,r.

Then, for each choice of r non-negative integers m_1, m_2, ..., m_r we have that as n → ∞,

(1.3.16)    P\Big( \bigcap_{i=1}^{r} \{exactly m_i amongst A_{ij}^n with j = [c_1^{(i)} N_i]+1, ..., [c_2^{(i)} N_i] occur\} \Big) \to \prod_{i=1}^{r} \big( e^{-\theta_i} \theta_i^{m_i} / m_i! \big),

where θ_i = λ_i ξ_i (c_2^{(i)} − c_1^{(i)}), i=1,2,...,r.
The statement of the corollary looks complicated because it is intended to serve an omnibus purpose. With λ_i = 1; i=1,2,...,r, the corollary is seen to be closely related to Theorem 1.1.4. In fact, an easy generalization of the proof of Theorem 1.1.4 suffices to establish the present corollary under this special condition. With c_1^{(i)} = 0, c_2^{(i)} = 1, i=1,...,r, the present corollary is seen to be a multivariate analogue of Theorem 1.1.3. In fact it amounts to a slightly more general version of Theorem 1.1.3 when specialized to the univariate case. In any event, the result is proved easily by choosing N = \max_{i=1,...,r} N_i and reducing this case to the previous case. Combining the two cases produces no difficulties, and the corollary follows.

As remarked in Section 1.1, the results of Theorem 1.1.4 can be refined. We are now in a position to provide that refinement. An easy application of Corollary 1.3.1.1 yields the following result.
Corollary 1.3.1.2  For each n, let {A_i^n (i=1,2,...)} be a sequence of m-dependent events such that for some ξ > 0, as n → ∞,

(1.3.16)    a_n = \max_{j=1,2,...} | P(A_j^n) - \xi/n | = o(1/n),

and

(1.3.17)    b_n = \max_{0 < |i-j| \le m;\ i,j=1,2,...} P(A_i^n \mid A_j^n) = o(1).

Further, let δ_j = (δ_{1j}, δ_{2j}); j=1,2,...,r be r bounded, non-overlapping intervals of (0,∞) (either closed, open or semi-open), and let m_j; j=1,2,...,r be r non-negative integers. Then, as n → ∞,

(1.3.18)    P\Big( \bigcap_{j=1}^{r} \{exactly m_j amongst A_i^n with i = [\delta_{1j} n]+1, ..., [\delta_{2j} n] occur\} \Big) \to \prod_{j=1}^{r} \big( e^{-\xi(\delta_{2j}-\delta_{1j})} [\xi(\delta_{2j}-\delta_{1j})]^{m_j} / m_j! \big).

Proof.  This corollary follows from Corollary 1.3.1.1. Define A_{ij}^n = A_{[\delta_{1i} n]+j}^n; j=1,2,...,[(\delta_{2i}-\delta_{1i})n] = N_i; i=1,2,...,r. It will be seen that the sets, so defined, satisfy the conditions of Corollary 1.3.1.1 with ξ_i = ξ and λ_i = δ_{2i} − δ_{1i}; i=1,2,...,r. It will be seen that the other conditions of Corollary 1.3.1.1 are satisfied and that Corollary 1.3.1.2 is therefore immediate.
At this point it is convenient to summarize some of the results of the preceding sections. In order to do this, we introduce the following definition.

Definition 1.3.1  Let {U_n(t); 0 ≤ t ≤ T} (T ≤ ∞; n=1,2,...) be a sequence of counting processes defined on the interval [0,T]. For any reals s, t (0 ≤ s < t ≤ T) let U_n(s,t) = U_n(t) − U_n(s) designate the number of 'counts' in the interval [s,t] for the process {U_n(t); 0 ≤ t ≤ T}. The sequence {U_n(t); 0 ≤ t ≤ T} (n=1,2,...) is said to converge weakly to a Poisson process of intensity ξ on the interval [0,T] provided the following conditions are satisfied: (i) for any real s, t, h (0 ≤ s < t ≤ t+h ≤ T) and non-negative integer k, P{U_n(s,t) = k} → e^{-ξ(t-s)} [ξ(t-s)]^k / k! and P{U_n(s,t) = k} − P{U_n(s+h, t+h) = k} → 0 as n → ∞, and (ii) for any real numbers 0 ≤ s < t ≤ v < w ≤ T and pair of non-negative integers k, ℓ, P{U_n(s,t) = k, U_n(v,w) = ℓ} → [e^{-ξ(t-s)} [ξ(t-s)]^k / k!] [e^{-ξ(w-v)} [ξ(w-v)]^ℓ / ℓ!] as n → ∞.

In other words, Definition 1.3.1 requires that over the interval [0,T] the process U_n(t) has, asymptotically, stationary independent increments and Poisson marginals.
The above definition now enables us to summarize many of the preceding results in a single theorem concerning an asymptotic Poisson process.
Theorem 1.3.2  For each n (n ≥ m) let {A_i^n (i=1,2,...,n)} be a sequence of m-dependent events such that for some ξ > 0 we have, as n → ∞,

(1.3.19)    a_n = \max_{j=1,2,...,n} | P(A_j^n) - \xi/n | = o(1/n),

and

(1.3.20)    b_n = \max_{0 < |i-j| \le m} P(A_i^n \mid A_j^n) = o(1).

Let U_n(t) (0 ≤ t ≤ 1; n=1,2,...) be the number of events A_i^n that occur with 1 ≤ i ≤ [nt]. Then, as n → ∞, the sequence of counting processes {U_n(t); 0 ≤ t ≤ 1} (n=1,2,...) converges weakly over [0,1] to a Poisson process of intensity ξ.

This theorem follows from Theorems 1.1.2 and 1.3.1. It will prove quite convenient for the purposes of future applications to have results summarized in the above form. A similar summary form should be possible for results involving sequences of f(n)-dependent events, both stationary and non-stationary. However, we shall not do this here.
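Definition 1.3.1 and Theorem 1.3.2 can be visualized by simulating the counting process U_n(t) for the m-dependent exceedance events used above (an illustration with assumed parameters, not the author's example): the counts in two disjoint sub-intervals of [0,1] should be approximately independent Poisson variables with means proportional to the interval lengths.

    import numpy as np
    from scipy.stats import norm, poisson

    rng = np.random.default_rng(4)
    n, m, xi, reps = 5000, 3, 2.0, 3000
    c_n = norm.ppf(1 - xi / n)
    w = np.ones(m + 1) / np.sqrt(m + 1)

    # counts in (0, 0.4] and (0.5, 1.0]: U_n(0, 0.4) and U_n(0.5, 1.0)
    a = np.zeros(reps, dtype=int)
    b = np.zeros(reps, dtype=int)
    for r in range(reps):
        z = rng.standard_normal(n + m)
        x = np.convolve(z, w, mode="valid")
        exceed = x > c_n
        a[r] = exceed[: int(0.4 * n)].sum()
        b[r] = exceed[int(0.5 * n):].sum()

    mu_a, mu_b = xi * 0.4, xi * 0.5
    print("mean counts:", a.mean(), b.mean(), " vs ", mu_a, mu_b)
    print("corr(a, b):", np.corrcoef(a, b)[0, 1])   # should be near 0
    for k in range(3):
        print(f"P(U_n(0,0.4)={k}): empirical {np.mean(a == k):.3f}  "
              f"Poisson {poisson.pmf(k, mu_a):.3f}")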
1.4  Some generalizations of a result due to Loynes [12]
As was pointed out in Sections 1.1 and 1.2, the Poisson limit for independent binomial events can be extended to m-dependent, and in general f(n)-dependent, events. Furthermore, 'multivariate' Poisson limit results are also available for classes of such dependent events. It is natural to inquire whether the Poisson limit result can be established for sequences of events which are dependent in a more extensive manner than m- or f(n)-dependent events. In particular, we would like to establish such a Poisson limit result (as found in Theorem 1.1.2) for sequences of events in which each event conceivably depends upon each other event (in some specified manner). Naturally, one would expect that such dependence would have to 'die off' quite rapidly.

In this section we present a result quite closely related to a theorem given by Loynes [12] which dealt with the maximum of a stationary stochastic process. We shall state Loynes' result, and then state and prove a related result involving sequences of events rather than a stochastic process. It will be seen that such an approach is quite fruitful.

First, we shall need the following definition of uniform (strong) mixing.
Definition 1.4.1  A discrete-time stochastic process {x_i (i=1,2,...)} is termed uniformly (strongly) mixing (with mixing function g) if whenever A ∈ ℬ(x_1,...,x_m) and B ∈ ℬ(x_{m+k}, x_{m+k+1}, ...) for some m, then |P(AB) − P(A)P(B)| ≤ g(k). Here ℬ(...) denotes the Borel field of events generated by the indicated random variables, and g(k) is a function tending to zero as k → ∞.

We now state Loynes' theorem.

Theorem 1.4.1 (Loynes)  Let {x_i (i=1,2,...)} be a uniformly mixing strictly stationary stochastic process with mixing function g. Let c_n = c_n(ξ) (ξ > 0; n=1,2,...) satisfy the relation P{x_i > c_n} ≤ ξ/n ≤ P{x_i ≥ c_n}. If there are sequences of integers {p_m} and {q_m}, m=1,2,..., satisfying the following conditions, as m → ∞,

(1.4.1)    m\, g(q_m) \to 0,  q_m/p_m \to 0,  p_{m+1}/p_m \to 1,

with the property that (writing p = p_m, q = q_m, t = m(p+q))

(1.4.2)    \sum_{i=1}^{p-1} \frac{p-i}{p}\, P(x_1 > c_t,\ x_{i+1} > c_t) / P(x_1 > c_t) \to 0

as m → ∞, then as n → ∞

(1.4.3)    P\{ x_i \le c_n(\xi)\ (i=1,...,n) \} \to e^{-\xi}.
It can be shown that Watson's original theorem follows from Loynes'
theorem by choosing the g
m
constant, independent of m.
What is not
clear is that with a slight strengthening of the hypothesis of Theorem
1.4.1 one can prove (in a different setting) a Poisson limit theorem
for events that are uniformly (strongly) mixing.
To be precise, we
specialize Definition 1.4.1 to sequences of events.
Definition 1.4.2
A sequence of events (A.(i=1,2, ••• )} is termed uni1
Je
formly (strongly) mixing (with mixing function g) if whenever li-jl ~ k,
]
then Ip(A.A.) - P(A ) P(A.)
i
1 ]
]
I <-
g(k), where g(k) -> 0 as k ->
00.
Accordingly, an event in a uniformly mixing sequence of events may
1
]
1
]
not be independent of any other event in the sequence.
However, the
dependence (judged by comparing joint and marginal probabilities 'dies
off' according to the function g(k) as k ->
00,
we now proceed to formu-
late an analogue of Loynes' result in an 'event-sequence' setting rather
than a stochastic process setting.
The extension will be applicable to
stochastic processes that are not necessarily stationary, although the
non-stationarity must be 'uniformly small' in a sense made precise by
]
(1.4.4) and (1.4.8) in the theorem.
By addition of a stronger uniform
mixing condition (we essentially require 'exponential'decrease in
dependence; see (1.4.5)), we are able to obtain a 'full-Poisson' result,
whereas Loynes' essentially obtained only the first term in the Poisson
J
34
distribution.
We now state and prove this result.
The cumbersome form
of the hypothesis is necessary to admit 'non-stationary' applications.
Theorem 1.4.1  For each n let {A_i^n (i=1,2,...,n)} be a uniformly mixing sequence of events (with mixing function g). Suppose that for some ξ > 0,

(1.4.4)    a_n = \max_{1 \le i \le n} | P(A_i^n) - \xi/n | = o(1/n)  as n → ∞.

Suppose further that there are sequences of integers {p_m} and {q_m} (m=1,2,...) such that as m → ∞,

(1.4.5)    m^r g(q_m) \to 0  for any fixed r > 0,

(1.4.6)    q_m/p_m \to 0  and  p_{m+1}/p_m \to 1,

and, writing p = p_m, q = q_m, t = m(p+q),

(1.4.7)    \sum_{k=0}^{m-1} \sum_{\ell < i < j \le u} P(A_i^t A_j^t) \to 0,  where ℓ = k(p+q), u = ℓ + p.

Finally suppose that if E_p^t is an event defined in terms of A_1^t, ..., A_p^t and P(E_p^t) ~ ξ/m as m → ∞, then

(1.4.8)    \max_{1 \le i \le m} | P[ T^{(p+q)(i-1)}(E_p^t) ] - \xi/m | = o(1/m),

where T^j(E) denotes the translate to the right (along the {A_i^t} sequence) by j of E. Then, as n → ∞,

(1.4.9)    P\{exactly k amongst A_i^n (i=1,...,n) occur\} \to e^{-\xi} \xi^k / k!.
blocks of size p and q alternately, beginning with the initial block
m
m
(1,2, ••• ,p) of size p.
m
m
Let P (Q ) denote those positive integers
m m
•
I
I
I
I
I'
I
I
_I
I
I
I
I_
35
falling into p (q ) blocks.
m
Supressing the index m (as in (1.4.1)),
m
definer.~ as the event "exactly k amongst A~; i=1,2, ••• ,t occur."
t
We first show that, asymptotically, if an event A. occurs, then
I
I
I
I
I
I
1
i
€
P •
n
Then, we show that if k amongst
Ie
I
€
Qm can be neglected in the limit.
~:;
1
i=l, ••• ,t occur, then, asymptoti-
cally, all k indices lie in separate p -blocks.
m
Finally, we use the
Bonferroni inequalities to obtain the asymptotic probability of the
t
event/}k defined as "exactly
k amongst A~;
i=1,2, ... , t occur and each
.
1
A: lies in a separate p -block."
1
m
Write, say,
(1.4.10)
where
(1.4.11)
f3t is the event "exactly k amongst A:(i=1,2, ••• ,t)
k
1
occur and all i
Ie
I
I
I
I
I
I
I
Thus, events A~ with i
P ", and
n
€
t
c;t is the event "exactly k amongst A.(i=1,2,
••• ,t)
k
1
occur and some i
Now from (1.4.4), as m ->
(1.4.12)
00,
I:
€
Qm• "
we have
peA:) "" mq (g/t) -> 0 •
• Q
1€
m
1
i<t
Thus P(f~) and P(~~) are identical in the limit.
(1.4.13)
P(8~)
=
Next write, say,
P(~~) + P(g.~),
where
(1.4.14)
~~ is the event "n~ occurs and all A~'s occuring have
indices in separate p -blocks,"
m
I
I
36
and
g ~ is the event ''1$~ occurs and some A~' s occurring have
_I
indices in the some p -block."
m
Via (1.4.7) we have, as m ->
P(g.~) <
(1.4.15)
t
P(~k)
Thus,
and
t
P~k)
I
I
I
I
I
I
00,
m-1
t
t
P(AiA.)
k=O f <i<j< u
J
L:
L:
-> 0 •
are identical in the limit.
We now proceed to evaluate the asymptotic value of
't
't
t
P(~k).
For
. ..1:
this purpose, we define an event rli:k h for which p(b k h) '" P(<<l..k h) as
,
"
m ->
00
as follows:
(1.4.16)
~
't
t
(Uk ,t is the event "exactly k amongst G.~ (i=1,2, ••• ,m)
occur," where
(1.4.17)
G~
_I
is the event "exactly one A~(j=(i-1) (p+q)+l, ••• ,
(i-1)(p+q)+p) occurs".
Using Bonferroni's inequalities we have, for any even integer £, £+k
(1.4.18)
(1.4.19)
Lt
= st
k,£
k
( l)£(k+£) st
with
= st _ (k+1) st
k
k
k+1 + ••• + k
k+£'
(1.4.20)
(1.4.21)
P(G
L:
1<i < •••<i <m
- 1
r-
t
••• G. )
t
i
~r
1
Via the uniform mixing property of {A~; i=1,2, ••• ] we have, for fixed
~
(1.4.22)
t
t
~1
~r
P(G •••• G. )
r
II P(G~ )
j=l
j
I "<
r g(qm) •
~
m,
I
I
I
I
I
I
I
_I
I
I
I
I_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
37
We now make use of assumption (1.4.8).
t
Consider the event G •
1
depends only upon the events A~(i=l, ••• ,p).
~
G~
~j
It
Furthermore, each event
is a translate of G~ by an amount (i .-1)(p+q).
J
Using Bonferroni's
inequalities again, we have
(1.4.23)
where
(1.4.24)
Now from (1.4.4) and (1.4.7) we have, as m ->
(1. 4. 25)
00,
m T~ ~ mp(s/t) -> sand m T~ -> O.
Therefore, combining (1.4.25) with (1.4.23) yields
(l~ 4. 26)
P(G~) ~ s/m as m ->
00
and so, by assumption (1.4.8)
(1. 4. 2 7)
m
max
j=1,2, ••• ,r
IP(G.t
~.
) - simI = o(l/m).
J
The above step now allows us to write (1.4.22) in the form
(1. 4. 28)
tt
G. )
IP(G 1••••
~r
1
- [~/m
+ o(l/m)]
r,
~ r
g(qm)'
and so from (1.4.21),
(1.4.29)
It is then obvious that as m ->
00,
(1.4.5) and (1.4.29) together yield
t
r,
the desired conclusion, namely that S -> sir. as m ->
r
00.
we have established that for any even integer t, as m ->
00,
With this
38
t
Lk,.e
(1.4.30)
t
Uk,.e
.e-1
->
j=O
.e
->
k
gk+ j
(k+j) (-l)j
(k+j)!
k
L:
(k~j) (-l)j
L:
j=O
.e-1
k!
(- g) j
] and
. I
J.
j=O
k
.e
=L [
gk+ j
(k+j)l
=L [
k!
L:
L:
. 0
J=
~J
J.
. I
which together with previous remarks serve to establish that
(1.4.31)
.
t
We have shown that P(t ) converges along a sequence
k
m ->
00.
t~m(p+q)
as
However, since an arbitrary integer n lies between two
consecutive values of t and property (1.4.6) holds, it is easily shown
that (1.4.31) implies (1.4.9).
With this the proof is completed.
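The block decomposition that drives the proof, alternating 'long' p_m-blocks (which carry the events) with 'short' q_m-blocks (whose contribution is negligible), is easy to write down explicitly. The sketch below is a hypothetical illustration of the bookkeeping, not the author's code: it partitions {1, ..., t}, t = m(p+q), and reports the total probability mass that (1.4.4) assigns to the q-block indices, the quantity bounded in (1.4.12).

    def blocks(m, p, q):
        """Partition {1, ..., m*(p+q)} into alternating p-blocks and q-blocks,
        beginning with a p-block, as in the proof of Theorem 1.4.1."""
        p_blocks, q_blocks, start = [], [], 1
        for _ in range(m):
            p_blocks.append(list(range(start, start + p)))
            q_blocks.append(list(range(start + p, start + p + q)))
            start += p + q
        return p_blocks, q_blocks

    m, p, q, xi = 50, 40, 5, 2.0
    t = m * (p + q)
    p_blocks, q_blocks = blocks(m, p, q)
    n_q = sum(len(b) for b in q_blocks)
    # Under (1.4.4), P(A_i^t) is about xi/t, so the q-block indices carry total
    # mass about m*q*xi/t = q*xi/(p+q), which is small when q/p is small:
    print("fraction of indices in q-blocks:", n_q / t)
    print("approximate neglected mass m*q*(xi/t):", m * q * xi / t)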
For completeness we state the following immediate corollary of Theorem 1.4.1. It will be of use in the sequel, and will be seen to be a vector-valued analogue of Loynes' original result. It will also illustrate how a stationarity assumption simplifies the conditions of Theorem 1.4.1.
Corollary 1.4.1.1  Let {Y_i (i=1,2,...)} be a uniformly mixing strictly stationary p-dimensional vector-valued stochastic process (with mixing function g). Suppose that S_n = S_n(ξ) (ξ > 0; n=1,2,...) is a sequence of p-dimensional (Borel) sets such that P(Y_1 ∈ S_n) = ξ/n + o(1/n). If there are sequences of integers {p_m} and {q_m} (in the notation of Theorem 1.4.1), satisfying (1.4.5) and (1.4.6) as m → ∞, and such that

(1.4.32)    \sum_{i=1}^{p-1} \frac{p-i}{p}\, P(Y_1 \in S_t,\ Y_{i+1} \in S_t) / P(Y_1 \in S_t) \to 0  as m → ∞,

then we have, as n → ∞,

(1.4.33)    P(Y_i ∈ S_n for exactly k amongst i=1,2,...,n) \to e^{-\xi} \xi^k / k!.

Proof.  If we define events A_i^n = {Y_i ∈ S_n}; i=1,2,...,n, it is easily seen that the stationarity hypothesis on {Y_i; i=1,2,...} assures that these events satisfy the conditions of Theorem 1.4.1.
As mentioned above, this corollary illustrates the simplifications brought about in the statement of Theorem 1.4.1 when the sequence of events {A_i^n; i=1,2,...} can be assumed stationary (by a stationary sequence of events we mean a sequence of events whose characteristic functions satisfy Definition 1.1.1). In this stationary case, both parts of condition (1.4.7) are the same, P(A_i^n) = P(A_j^n), i,j=1,2,...; and condition (1.4.8) is clearly satisfied. Furthermore, to stress the relationships between Theorems 1.4.1 and 1.1.2, we can rewrite condition (1.4.7) (for the stationary case) in terms of conditional probabilities; it can be written as

(1.4.34)    \frac{1}{p} \sum_{i=1}^{p-1} (p-i)\, P(A_{i+1}^m \mid A_1^m) = o(1)  as m → ∞ (p → ∞).

It is to be noticed that the expression in (1.4.34) is actually a weighted mean of probabilities of events conditioned on a fixed event, more weight given to probabilities of events 'close' to the conditioning event, less weight given to probabilities of events 'far' from the conditioning event.

Of course, the question immediately arises as to whether Theorem 1.4.1 has a multivariate analogue, similar to Theorem 1.3.1. The answer is affirmative, but to state such a theorem in the same generality as Theorem 1.4.1 would prove very cumbersome. As an alternative, we state a multivariate analogue of Theorem 1.4.1 only for the 'stationary' case.
This case is, in part, suggested with a future
application in mind, and it is to be understood that a more general
'non-stationary' version could be stated.
Theorem 1.4.2
For each n
let {A~j; j=1,2, ... }; i=1,2, ••• ,r be r
strongly mixing sequences of events (with common mixing function g) that
are also jointly stationary (in the sense that if E is an event defined
n
. J
in terms of some collection of the above events, Ai"
then
n
P(E) = P(T(E», where T(E) is the translate event in which each A.,
is
1.J
n
replaced by Ai,j+l).
Let
~i
>
0; i=1,2, ••• ,r be such that
(1.4.35)
Suppose further that there are sequences of integers (p) and (q ) such
m
that, as m ->
m
r
m g(qm) -> 0 for r > 0,
(1.4.37)
qm/pm -> 0 and pm+l/Pm -> 1
and writing P=Pm' q=qm' t=m(p+q)
(1.4.38)
1 p-l
tIt
I: (p-j) peA. 1 A.
'+l} = I m(i l ,i 2 )
p j=l
1. ,
1. ,J
1
2
-
-> 0;
Then, for any choice of r non-negative integers m ,m , ••• ,m , as n
r
l 2
r
(1. 4.39)
P(
n
i=l
n
(exactly m. amongst A. ,; j=1,2, ••• ,n occur})
1.
1.J
->
r
IT (e
i=l
-~i
mi
~i /m .. )
1.
•
_I
I
·1
I
I
I
I
_I
00,
(1.4.36)
I
I
~ 00,
I
I
I
I
I
I
I
_I
I
I
I
I_
41
Proof.
Assume without loss of generality that equality holds in
(1.4.35).
We prove the theorem for the special case r=2.
1.4.1, the integers are partitioned into two sets, P
I
I
I
I
I
I
Ie
I
·1
I
I
I
I
I
Ie
I
m
As in Theorem
and Q , and ~kt h
m
,
t
is defined as the event "exactly
k events amongst {A .. ; j=l, ••• ,t} and
.
~J
exactly hevents amongst {A
t
'
2j
; j=1,2, ... ,t} occur."
As before, we
intend to show that in the limit events {A~,; j=1,2, ••• ,t; i=1,2} with
~J
j E Q do not occur.
m
Furthermore, we show that, asymptotically, if k
events amongst {A~j; j=l, , ••• t} and h events amongst {A~j; j=l, , ••• t}
occur, then all the j-indices lie in separate p -blocks.
Finally, we
m
evaluate the limit value of P(J}~,h)' whereJ:t~;'h is th~ event "exactly
k events amongst {At,; j=1,2, ... t} and exactly hevents amongst
lJ
{A
t
2j
; j=1,2, ... ,t} occur and all j-indices lie in separate Pm-blocks."
Let
(1.4.40)
where
S~,h and ~ ~,h are the events defined, respectively, as "£~,h
occurs and all j-indices lie in Pm" and "E~,h occurs and some j-indices
lie in Q."
Now from (1.4.35), (1.4.37), and mutual stationarity it
m
follows that
(1.4.41)
t
P(~k,h) ~
I:
j E Q
j < tm
P(A t , U At )
Zj
1]
1
~
~l
+ -1.)
mq (t
t
as m ->
t
t
Thus P(Ek,h) and P(Sk,h) are the same in the limit.
t
0
co.
Now,
say,
(1.lt.42)
wherebk,h
->
andg~,h are the events defined, respectively, as '¥I~ , h
t
occurs and all of the {A.,; j=l,Z, ••• t; i=l,Z} occurring have j-indices
~J
42
t
t
in separate Pm-blocks" and 1'k,h occurs and some of the {A ij ;
j=l, , ••• t; i=1,2) occurring have j-indices in the same pm-blocks."
Now via (1.4.37), (1.4.38), and joint stationarity it follows that
t
p
t t
t t
t t
P<9 k ,h) < m : (p-i) {P(A llA1j ) + P(A21A2j ) + P(A llA2j )
j 2
(1.4.43)
t
t
+ P(A A1j »)
21
= ~t {Sl[Im(1,1) + I m(1,2)] + ~2[I m(2,1) + I m(2,2)])
->
t
Thus, P(Ek,h) and
t
P~k,h)
0 as m
are the same in the limit.
latter quantity, we define the following events.
->
I
I
_I
I
I
,I
I
00.
To evaluate the
t
Let G.; i=1,2, ••• ,m
1
I
I
_I
t
t
where1'1lk,h is the event "exactly k amongs t G.;
1
t
t
1
1
t
h amongst H.; i=1,2, ••• ,m, occur, but no G.H. ; i=1,2, ••• ,m occurs."
1
Now, using a multivariate analogue of Bonferroni's inequalities (see
Appendix C) we have for any integer £, 0
~
£
~
[m/2],
where
k+h+2[+1
(-1) t-(k+h) (i) (j)
L:
L:
t=k+h
i+j=t
k
h
t
s 1,]
..
~i~+£+l
~~+£+1
(1.4.46)
,
I
(1.4.44)
(1.4.45)
I
I
k+h+2£
t
Uk,h.
L:
t=k+h
L:
i+j=t
~i~k+£
~j~+£
(-1) t- (k+h) (ki) (hh
s~ . ,
1,]
,
I
I
I
_I
I
I
I
with
Ie
(1.4.47)
I
I
I
I
I
I
Ie
I
I
I
I
I
I
43
t
S•
. = I: P (G
C
1,J
t
a1
t
••• G
t
t
Hp,' •• Hp, );
a i ~1
~j
the set C is defined as C = {(a1~···,ai; ~l""'~j)1 1 ~ a 1 < ... < a i
<
m;l <
-
~1
< ..• <
~.
< m; ar F
J-
~
s
r=l, ••• ,i; s=l, ••• ,j).
Now by the
strong mixing property and joint stationarity it follows that for fixed
(a 1 , •••
,ai
; ~1"'" ~ j)
(1.4.48)
and so, by (1.4.36),
(1. 4. 49)
ISit , J.
as
m
->
00.
Using Bonferroni's inequalities,
(1. 4. 50)
where
(1.4.51)
t
p
t
t
T-r1 = I: P(A •. ) and T-r2 =
...
j=l
1J
...
Now clearly, from (1.4.35) and (1.4.38),
(1.4.52)
as m ->
and so we have as m ->
00
,
00,
I
(1.4.53)
Ie
Combining (1.4.53) with (1.4.49), we have, via assumption (1.4.36),
I
44
-,
(1.4.54)
With this, we have established that as m ->
00
I
I
I
k h
(1. 4. 55)
t
sls2 2£+1
Lk h ~ k'h' L
,
• • t=O
I:.
i+j=t
~i~£+l
~j~£+l
2£
L
L:
t=O
i+j=t
11 j!
~1~£
.0<.<£
_J_
t
t
Since £ is arbitrary, and both Lk,h,Uk,h tend to a common limit as
£ ->
00,
we conclude that P(~,h) tends to this common limit as well.
Thus, combining the above results we finally conclude that as m ->
00,
I
I
I"
_I
(1.4.56)
Since an arbitrary integer n lies between two consecutive values of t,
and property (1.4.37) holds, (1.4.56) implies the desired result, thereby concluding the proof.
It is clear how the mutual stationarity assumptions simplified the
proof of the above theorem as compared with Theorem 1.4.1.
Particular
simplification occurs at (1.4.41), (1.4.43) and specially (1.4.47)
through (1.4.54).
I
I
I
I
I
I
Now this assumption of mutual stationarity of the
event sequences {A~.; j=1,2, ••• ,n}; i=1,2, ••• ,r is satisfied in a par1.J
ticu1ar interesting class of situations related to stationary processes.
To illustrate this point we state a multivariate analogue (in terms of
stochastic processes) of Loynes' original result.
find an application in Chapter III.
This version will
"I
i
II
e,1
.1
,I
I
I
Ie
I
I
I
I
I
I
45
Corollary 1.4.2.1
Let (Y.(i=l,
, ••• ») be strongly mixing, strictly
-1.
stationary p-dimensiona1 vector-valued stochastic process, with mixing
function g.
Further, let
I
I
I
I
I
I
Ie
I
i=l, , ••• ,r;
n=1,2, ••• ) be r sequences ofp-dimensiona1 (Borel) sets such that
(1.4.57)
P(Y
-1
€
S.1.n (~.»
1.
~./n
1.
as n ->
00,
i=l, , ••• ,r •
Suppose also that there exist sequences of integers (p ), (q ) satism
m
fying (1.4.5) and (1.4.6) of Theorem 1.4.1.
A~. = (Y.
1.J
-J
€
s.1.n (~.)J
1.
Finally, defining
(i=1,2, ••• ,r; j=1,2, ••• ), suppose these events
satisfy condition (1.4.38) of Theorem 1.4.2.
Then, for any choice of
r non-negative integers m ,m , ••• ,m , we have as n ->
r
1 2
00
r
(1.4.58)
P(
n
(Y.
i=l -J
Ie
I
Sln(~l), ••• ,Srn(~r)' (~l>O;
€
S. (~.) for exactly m. amongst (j=1,2, ••• n) J)
1.n 1.
1.
->
Proof.
r -~. m.
TI (e 1.~.1./m.!) •
. 1
1.
1.
1.=
The only conditions that need verification are the mutual
stationarity and common strong mixing conditions for the event sequences
(A~.(j=1,2,••• »); i=1,2, ••• ,r as defined above. But, as mentioned in
1.J
the introduction to this corollary, the strong mixing and strict
stationarity properties of the underlying stochastic process
(X i (i=1,2, ••• »)
are precisely sufficient to insure that the corres-
ponding' conditions are satisfied for the event sequences.
It will prove useful to have an explicit statement of some
'asymptotic independence' results that can be obtained directly from
Theorem 1.4.2.
The results we have in mind are to be analogous to
Theorem 1.1.4, Corollary 1.3.1.1 and Corollary 1.3.1.2.
the following result.
We begin with
For convenience, these results will be stated
46
-,
for 'stationary' cases.
For each n, let [A~(i=1,2, ••• )} be a stationary uni-
Theorem 1.4.3
1
formly mixing sequence of events (with mixing function g).
that for some ~ > 0, P(A~) ~ ~/n,as n -> 00.
Suppose
Further, suppose that there
exist sequences of integers [p } and [q } satisfying (1.4.5) and (1.4.6).
m
m
Then provided
.1p
we have for 0
<
p
t
t
E (p-i) P(Al/A.) -> 0 as m -> 00 and N/n -> A as n -> 00,
j=2
J
s < t <00 P(exactly k amongst A~(i=[sN]+l,... , [tNJ) occur)
1
The multivariate analogue of the above theorem is now presented.
For each n, let [A~.(j=1,2,••• ,r be r mutually statio-
Theorem 1. 4. 4
1J
nary and mixing sequences of events, with COmmon mixing function g.
Let
~. > 0; i=1,2, ••• ,r be such that P(A~l) ~ ~./n; i=1,2, ••• ,r as n -> 00.
1
1
1
Further, suppose that there exist sequences of integers [p } and [q }
m
m
such that (1.4.36), (1.4.37), and (1.4.38) are satisfied.
Then for
0<
s. < t. < 00, N./n ~ A. (as n -> 00) (i=1,2, ••• ,r) and for any choice
1
1 .
1 1 ·
of r non-negative integers m ,m , ••• ,m , we have as n -> 00,
r
l 2
r
P(
n
i=l
n
[exactly m. amongst A.. (j = [S.N.]+l, ••• ,[t.N ])})
1
1J
1 1
1 i
->
~.A.(ti-s.);
111
1
where 5. =
Proof.
r
II (e
i=l
-5
m.
~.1/m.:).
1
I
I
1
i=1,2, ••• ,r.
I
I
I
I
I
I
_I
I
I
I'
I
I
For A. = 1; s.=O,t.=l; i=1,2, ••• ,r this is precisely the same
111
as Theorem 1.4.2.
The proof of the present theorem is merely an adapta-
tion of the proof of that theorem and will not be presented here.
Finally we are in a position to state an interesting theorem
explicitly
concerning asymptotic independence of the number of events
I
I
_I
I
I
47
occurring in disjoint intervals.
It will have an interesting app1ica-
tion in Chapter III.
Theorem 1.4.5
For each n, let (A~(i=1,2, ••• )} be a stationary,
1
strongly mixing sequence of events (with mixing function
n
~ ~/n
P(A )
1
as n -> 00 and that there exists sequences (p } and (q } so
m
m
that the condition of Theorem 1.4.3 are satisfied.
5~
(5
1i
,5
Suppose
g) •
2i
Further, let
); i=1,2, ••• ,r be r bounded, non-overlapping subintervals
of (0,00) (either closed, open, or semi-closed) and let m ,m , ••• ,m be
1 2
r
r non-negative integers.
r
(1.4.59)
P(
n
i=l
Then, as n -> 00
(exactly m amongst A~; j=[n5 ·]+1, ••• , [ n5 2 ' ]
i
1
J
. 11
r
J
occur})
Je
where 5. = 5 .-5 , ; i=1,2, ... ,r.
2 1
11
1
]
Proof.
]
1
-> II (e
i=l
-s.a.
1
This result follows from Theorem 1.4.4.
m.
1(~ 5 ) 1 1m
')
iii'
,
Define
n
n
A .. = A[~~
]+'; j=1,2,. .. , [ 5.n] = N.; i=1,2, ... ,r. It can be shown
1i J
1
1
that the sets so defined satisfy the conditions of Theorem 1.4.4 with
1J
~.
1
LlU
=
~
and
A.1 = 5 i'• i=l, , ••• ,r.
The other conditions of the theorem
are also satisfied, and the result is established.
For completeness we now state a 'multivariate waiting time' result.
Theorem 1.4.6
For each n, let (A~j; j=1,2, ••• }; i=1,2, ••• ,r be r
sequences of jointly stationary strongly mixing (function g) •
J
J
of events.
n
Suppose that as n -> 00, P(A
i1
)
~ ~i/n;
~i
Sequences
>·0; i=1,2, ••• ,r,
and that there exist sequences of integers (p } and (q } such that conm
m
ditions (1.4.36), (1.4.37) and (1.4.38) of Theorem 1.4.2 are satisfied.
Then if for r non-negative integers m ,m , ••• ,m and r real numbers
1 2
r
J
48
r
t ; 0 < t. < 00; i=1,2, ••• ,r, F(ml, ••• ,m ; tl, ••• ,t ) = P( n {waiting
i
- ~
r
r
i=l
time for m.th occurrence of A~.; j=1,2, ••• greater than [nti/ei]})
r
~ m.-l
-t ..
~J
E e ~t~/jJ) as n -> 00 •
-> II (1 . 0
~
i=l
J=
~
Proof.
This is, of course, a
si~ple
application of Theorem 1.4.5.
Again, as at the end of Section 1.3, we can summarize some of the
+
preceding section inthe form of a theorem concerning weak convergence
to a Poisson process.
We now state such a theorem for stationary
I
sequences of events.
Theorem 1.4.7
For each n let {A~(i=1,2, ••• ,n)} be a uniformly mixing
~
stationary sequence of events (with mixing function g).
n
for some e > 0, P(A.)
~
~
e/n as n ->
00,
Let U (t) (0
n
A~ that occur with 1
~
<
-
t
of Theorem 1.4.3 are
< 1; n=1-,2, ••• ) be the number of events
<
- i <
- [nt].
counting process {Un(t) ; 0
Suppose that
and that there exist sequences
of integers {p } and {q } such that the condition
m
m
satisfied.
~
t
Then, as n ->
~
00,
the sequence of
-t-
I} (n=1,2, ••• ) converges weakly over
[0,1] to a Poisson process of intensity e.
This theorem follows from the stationary version of Theorem 1.4.1
and Theorem 1.4.2.
is possible.
I
I
-I
I
I
I
Of course, a more general :non-stationary
However, we shall not pursue such a result here.
version
I
I
_I
I
I
I
I
I
I
I
I
I
I
1_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
CHAPTER II
ASYMPTOTIC STRUCTURE OF A QUANTUM BIOPHYSICS OF VISION MODEL
2.0
Summary
As an application of the results of the preceding chapter, we
consider a certain stochastic model for the quantum biophysics of
vision.
The limiting behavior of this model has been considered by
Ikeda [10], Isii [11], Yamamoto et ale [20], Bouman and Van Der Velden
[2] (and in a different context by van Elteren et ale [18]) and others.
In this chapter we describe and extend the known limiting results for
this model, and expose some of its rich limiting structure.
2.1
Introduction
We first introduce the stochastic model for the quantum biophysics
of vision that will be examined in this chapter.
One considers light
quanta being absorbed by (or, arriving at) the retina of the eye according to a positive renewal process [y.,
i=1,2, ••• }, that is, a process for
_1.
which the 'interarrival' times [x.
= _1.
Y.-Y.
l' i=1,2, ••• } (y 0 =0) form a
-1.
-1.sequence of independent and identically distributed random variables for
which P(x.=O)
= 0, i=1,2, ••••
-1.
Let us assume that the ith particle
(quanta), which arrives at time
Y , has a (random) lifetime
A kth
(~l)
i
~i;
i=1,2, ••••
order visual response is defined to occur if at least k
particles are simultaneously 'alive' for some interval of time.
Of
interest is the limiting behavior of the probability of m kth order
I
50
visual response in the time interval (o,tJ as t ->
in some manner.
00
In the model above, let !(t) represent the number of particles
simultaneously 'alive' at time t.
One way of representing !(t) is as
follows:
N(t)
= - E f(t-~, ~
(2.1.1)
), 0
--n
n=l
< t <
00
,
where !(t) is the number of particles arriving before or at time t, and
f(x,y) is a function defined for y > 0 as
(2.1.2)
f(x,y)
_ 11, if 0
0, if x
~ x ~ Y
< 0 or x >
y •
A typical realization of the process [X(t), 0
< t <
00)
might, in part,
appear as follows:
_I
I
I
I
I
I
I
_I
let)
~
b
5
4
-
3
'2.
I
I
-
- -
-
1--------...- ------------ -
.1
0
~I
FIGURE 2. 1. 1
Ikeda [lOJ, Isii [llJ, Yamamoto et ale [20J, Bouman and Van Der
Velden [2J and others have considered certain limiting behavior of such
a process in relation to a problem of visual response in certain experimental research in quantum-biophysics of vision.
It is easily seen that
a kth order visual response begins at time t' if and only if at time t'
I
I'
I;
I
I
I
I
_I
I~
I
I
Ie
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
51
the process K(t) attains (or exceeds) the level X=k from below.
The
response lasts until the process first drops below the level X=k.
Of
specific interest is the behavior of P(m kth order visual responses in
(O,t] as t ->
00
in some suitable manner (later to be specified).
For
convenience, this will be referred to as the 'Limit Problem' (for a nonrandom time interval).
Of course, the quantum-biophysics of vision model is not the only
situation where the process
K(t~
and the 'Limit Problem' might arise.
The process has, in fact, been considered in a different setting by
van Elteren et al. [18].
There, the authors consider a model of the
form as K(t) for the 'thickness' at time t of a strand of fiber being
woven from individual filaments.
The ith filament, entering the strand
at time V., has a (random) length T .•
~~
-~
The simplest form of the 'Limit Problem' is the case where the
~
[.!i; i=1,2, ••• } form a Poisson process of intensity
random variables z. =
-~
~(.!.-.!.
~
~-
1); i=1,2, ••• , with y
and identically distributed -- iid
unit mean), and the lifetimes
stants, (T.
-~
=T > 0;
[~i;
i=1,2, ••• ).
reference 5 in [10]) that for t ->
0
(i.e. where the
= 0, are independent
-- exponential random variables with
i=1,2, ••• } are identical positive conIn this case it has been shown (see,
00
with
~kt -> A (A a positive
constant)
pk(t) = P(exactly m kth order visual responses in (O,t])
(2.1.3)
m
->
where
Sc
--
;~k-l/(k_l).'.
/\,
e
-g ~ m1m! ,
F or
.
conven~ence,
1 3) as t h e
we re f er to (2 ••
'Peisson Limit Result' (for non-random t, [v.}
Poisson (II),
[T.} constant
~~
~-~
and equal).
52
The Limit Problem for iid random lifetimes has been studied.
a = ev-T1. <
00
With
replacing T in the above definition of ~, the Poisson Limit
I
I
_I
Result has been established for various special cases using different
techniques.
With the
(~.)
~
iid. exponential random variables (independent
of (X.) with meana,van E1teren et a1. [18] and Yamamoto [20] have, for
1
example, established (2.1.3); with the (T.) iid positive random vari~
ab1es (independent of (Xi) with finite second moment, Ikeda [9] established (2.1.3); with the
(~)
iid positive random variables (independent
of (~))with a finite moment of specified order 0, 1
< 0 < 2, Isii [11]
established (2.1.3); finally, under the sole assumption that the (T.)
-1
are positive random variables (independent of
moment a
=c T.
-1
(~)
with a· finite first
,Ikeda [10] established (2.1.3).
It is felt that the diverse techniques used to establish the
various cases of Poisson Limit Result, (2.1.3), have in a sense hidden
much of the rich limiting structure
of the process -X(t) defined by
.
(2.1.1).
Furthermore, these diverse techniques all rely heavily on the
'nice' properties of Poisson arrivals.
of approach.
We will pursue a different type
Using the results of Chapter 1 (in particular, the genera-
1ization of Watson's theorem found in Corollary 1.1.2.1) all previously
known cases of the Poisson Limit Result will be established.
In the
process, the limiting structure of the process !(t) is nicely exposed,
and the effects of the increasingly more general assumptions on the
(moments of)
on the
(~i)
(~i)
become apparent.
For example, the moment assumptions
will be seen to be reflected in the limiting behavior of the
lifetimes of individual particles measured in terms of the number of
subsequent particles they survive.
presented.
An interesting boundary case will be
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
1_
53
The results of Chapter 1 will also be used to consider the Limit
Problem for some new situations.
In particular, it will be shown that
an analogue of the Limit Result is valid if the {y.; i=1,2, ••• } form a
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
~
positive iid renewal process while the lifetimes are iid (and independ(~})
ent of
exponential random variables.
First, however, we shall discuss the relationship between a certain
aspect of our method of approach (non-random n), and the approaches
previously taken (non-random t).
2.2
The Limit Problem for non-random n vs. non-random t
In the Limit Problem, one might consider both the limiting behavior
of
k
q (n) = P(exact1y m kth order visual responses in first n
(2.2.1)
m
arrivals) as n
as well as
(2.2.2)
->
00,
k
P (t) = P(exact1y m kth order visual responses in (O,t]) as
m
t
->
00,
where by the "first n arrivals" in (2.2.1) we mean the "time interval
determined by the first n arrivals," viz. (O'YuJ. As one might suspect,
the limiting properties of (2.2.1) and (2.2.2) are closely related.
This is quite convenient, since the results of Chapter 1 lend themselves
more readily to use with pk(n) as defined in (2.2.2).
m
In this section
we shall pursue this relationship, so that results obtained about
(2.2.1) can be 'translated' into corresponding results about (2.2.2).
It will be observed that a visual response can begin only at an
arrival point, that is, only at one of the (random) time points
{y.; i=1,2, ... }.
~
points).
(It need not, however, terminate at one of these
Thus, since there is a one-to-one correspondence between the
54
number of visual responses in an interval and the number of starting
points for these responses, we need only consider the latter for the
Limit Problem.
~(t)
by an investigation of a related discrete time vector process.
Now i f '!!t is the (random) number of particles arriving in the time
interval (O,t], we may restate the Limit Problem (for non-random t) as
k
a determination of the limiting behavior of (2.2.3) qm (E-t) = P(exactly
m kth order visual responses in the first '!!t arrivals) as t ->
Thus,
00.
the Limit Problem for non-random t corresponds to the Limit Problem for
Similarly, the Limit Problem for non-random n corresponds
to the Limit Problem for a random t.
Now if, for example, the arrival
times resulted from an iid positive renewal process, one might suspect
that for large t, pk(t) = qk(n ) and qk(fn ) would be 'close' (in some
m
m- t
m -t
sense).
This, indeed, can be formalized and proved rigorously.
For
purposes of continuity, however, we shall just,state some results at
this point, and relegate a complete treatment to Appendix A.
Suppose that particles arrive according to a positive renewal
process
(~.
(i=1,2, ••• ) for which the 'interarrival times'
(x. = Y.-Yi 1; i=1,2, ••• )
-1
-1 -
-
random variables with mean
(~
0
l/~
= 0) form an iid sequence of positive
(~>
0).
Furthermore, assume that
particle lifetimes (~i; i=1,2, ••• ) form a sequence of iid random variables (independent of arrivals).
The results of Appendix A can be used
to establish the following special results that will have immediate use
I
I
I
I
I
I
_I
I
I
I
I
I
I
I:
in the seque 1.
Theorem 2.2.1
_I
For our purposes, this observation will subsequently
enable us to replace investigation of the continuous time process
a random n.
I
I
If 'interarrival times' (~i = ~i-~-l; i=1,2, ••• ) (~o=O)
I
I
I
I_
55
form an iid sequence of exponential random variables with mean
while lifetimes
[~i;
i=1,2, ••• } form a sequence of iid random variables
(independent of arrivals) with finite mean a, then for
I
lim
n
-I
I
I
I
I
n~
Ie
I
->
0"
~ .emk ->
qk(n)
00
A>
m
k-1
-> A
3
(The symbol = is to read 'exists and equals.')
Theorem2.2.2
If 'interrariva1 times' [x. =:t..-v. 1; i=1,2, ••• } (:t.=0)
-~
~ ~-
0
form an iid sequence of positive random variables with mean
1/~
(~
>
0)
and finite variance, while lifetimes [T.; i=1,2, ••• } form a sequence of
-~
iid exponential random variables, independent of arrivals, and with
mean a, then
Ie
I
I
I
I
I
I
I
1/~,
k
q (n)
lim
n
k-1
n
II
->
m
00
.
"G'(..1...) -> 0
j=l
Oil
3=.ek >'
m
~t
lim
t
->
00
k-1
II
j=l
"G'(..i.)
~
A
where G(·) is the Lap1ace-Stie1tjes transform of the distribution function of the random variable
IIX ••
I""'_~
It should be noted that Theorems 2.2.1 and 2.2.2 essentially
coincide when both 'interarrival times ' and lifetimes are exponential.
A
1
In this case G(s) = - •
s
As noted above, the proofs of Theorem 2.2.1 and 2.2.2 are relegated
to Appendix A.
For the moment, the important feature of these results
is that they formalize the early remarks of this section for two very
particular situations.
These situations will present themselves in the
following sections; in view of Theorems 2.2.1 and 2.2.2 we feel free to
treat the Limit Problem in these situations for a non-random n rather
56
than for a non-random t, since they provide the appropriate translation
of results from the former to the latter.
2.3
Solution of the Limit Problem for known cases
In this section we establish the 'Poisson Limit Result' for all
previously known cases.
The method of approach is based on the results
of the preceding chapter (in particular Corollary 1.1.3.1).
It will
later become apparent that the time spent in treating known cases of
the Limit Problem is well worth while; not only does our method expose
much of the rich limiting structure of the particular model at hand,
but also the arguments used can (and will) be transferred, almost
verbatum, to treat the Limit Problem for some new situations.
It is convenient to establish all results for the specific case
k=3 (i.e., to deal only with the Limit Problem for third-order visual
responses).
It will be seen that this leads to considerable notational
convenience.
The proofs will remain valid for a general k with the
appropriate (and obvious) notational changes.
Throughout this section
we shall use the notation (n -> 00) as an abbreviation of the statement
II
n ->'00,
~
I
V 0 so that
~
2
->" /\ (/\, >
0) ."
The Simplest form of the Limit Problem
The simplest (non-trivial) case of the Limit Problem is the situation where arrival times [y.; i=1,2, ••• ] form a Poisson process of
.
1
intensity Il (Le., the 'interarrival.times'· [x. = y.-v. l' i=1,2, ••• ]
.
(y
o
-1
1
~-
= 0) are independent exponential random variables with mean l/Il),
and particle lifetimes are constant and equal (to T
> 0). A typical
realization of the process !(t) (see Section 2.1) might, in part, appear
as follows:
I
I
_I
I
I
I
I
I
I
_I
I
I
I
I
I
I
I'
el
I
I
I
Ie
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
57
~(f.)
')(-z.
v
~3
:?i l{~ 5"
~ '"
d.----~"--____...,~~.--""-,~----""--____
<>1
1I1\\_---"
" Ii:=-. -.-.---',1'-
I
-
-------------
5
4
I------+----+---~~.--------jl--+----
31---- - - - l - - - - - i - - - - 1 r - - - - - - - + - - + - - - ' - - - - - j
2.. - - - - - - -..---- ..----.-1--
-.-----.. --.-+---j----
I------J----.t-----t------+-+----- -- +o
I
I
I
~ ... l
~_'3
I
I
, 1:... I
~i
! r~
~
,
1:"
!
i-_---=.o1.3_...
,
_ _~>
~(
, t
,
I
'£j :: 1:
I
I
:I
----.;~~
FIGURE 2.3.1
We now proceed to analyze the limiting behavior of the number of
3rd order visual responses in this simple model.
If {x.;
i=1,2, ••• }
-1
are the interarrival times of particles defined above, let
{z.;
i=1,2, ••• } be defined by -1
z.
~
exponential with unit mean.
=~
x.; i=1,2 ••••
-1
The ~
z.'s are iid
Define a 2-dimensional discrete time
stochastic process {Y.;
i=1,2, ••• } and a collection of 2-dimensional
-1
sets S
~
(2.3.1)
(~>
0) as follows:
Y. = (z.+z.
l4z.); i=2,3, ... ; S ={(x"y)
-1 --.J.... ~-1
-1
I x < lit,
I""
Y
<
IIt};
.. -.."O.
1"""""'--
Note that {Y.;
i=2,3, ••• } is a l-dependent strictly stationary vector
-1
stochastic process, and that as (n
(2.3.2)
P(Y.
-1
€
~
This is true, since as (n
(2.3.3)
nP(Y.
-1
€
~
S )
->
00) with
f ~T se-s
o
00) (with
~ = AT 2 /2Z), we have
~/n
S~ ) = P(z.+z.
1
-1 -1= n
->
~ = AT 2 /2! ,
< lIt, -1
z. < liT) = nP(z.+z.
1<
-1 -1I""
I""
ds = nil - e
-~T
(l~T)}
->
~.
~T)
58
(We remark that it is, in fact, not necessary to introduce the vector
process (Ii' i=1,2, ••• }.
However, in the next section it will be neces-
I
I
_I
sary to do such a thing; we go through the procedure here so as to
stress an analogy that will later become apparent.)
Next we note that
as (n -> 00), we have
(2.3.4)
P(Y.
-1
8IJ. , -1
Y. +1
€
8IJ. ) I PIY
\.:.
€
i
€
811..... )
-> 0 •
To establish this result, we observe that via (2.3.2)
o -<
(2.3.5)
as (n -> 00).
np(Y
- i
€
8IJ. , -1
Y'+ l
€
8IJ. ) = np(zi+zi
1
- -
<
IJ.T, zi+1+z.
-1
<
liT)
.....
Thus the process (Ii; i=2,3, ••• } and the sets 81J.(1J. > 0)
satisfy the conditions of Corollary 1.1.3.1 as (n -> 00), and we may
immediately conclude
(2.3.6)
lim
(n
pcr.
=
_I:
e
8
€
1
-> 00)
5
~
IJ.
for exactly m amongst i=2,3,.o •• n)
m
1m! •
We conclude by showing that, in the limit, the events "Y.
-1
€
8
IJ.
for
exactly m amongst i = 2,3, ••• ,n" and "m (3rd order) visual responses in
the first n arrivals" have equal probability.
T.
~
== T
>
!~i+1)
Y. S
S
0; i=1,2, ••• , the event -1
t
<
However, the event
~ €
81J. guarantees only that
3, so by itself does not insure that a third order visual
response begins at time
points.
occurs if and only if
3, and hence if and only if no third order visual response
begins at time Yi+1.
!~i+1) ~
Note that since
~+1.
The following figures illustrate these
In the first figure, (Figure 2.3.2; A,B) .the correspondence
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
I_
59
between the events Xi , S~ and !(Yi+l)
<
3 is illustrated; in the
2.3.3), both panels depict the event Y. € S ,
second figure (Figure
-~
~
yet only in the second panel (B) is the event 'a third order visual
I
I
I
I
I
I
response beginning at time Yi+l' depicted.
'bunching up' of events Xi€ S
~
Ie
I
creates the difficulties.
" . .!.....
""",..
.J. .,.
...., ..,-,.. .....,
CI:"'''-.
("Jl'~
...
2-
I-
I
I
I
I
I
I
(
I
I
:
I
I
I
I
----r-.
I~
"l'1,'45"",
I
J-~----_._._- ~ ...I
I
I
o '_--.1.---+----".-;-..--~.~ f;-
j
·t
I
o
)t
r
~'+i
1
~i
IJi-I
"C
.
)
(A)
Ie
I
I
I
I
I
I
I
Figure 2.3.3 illustrates how
.
!3if-1
)
( \3)
FIGURE 2.3.2
~i'l
~i
:tii...\
I~
I
I
I
I
I
I
••
I
I
I
I
I
4-- +--~-~-~--t--lI
3:
~
i
o
I~
I
I
~'eS
J
'I~-I
r--".
I ~
I
I I I
lH., 19i i11~
I
~i-"2.
t
r
't"
- - -...
=--~
c
(6)
(A)
FIGURE 2.3.3
However, we note that using (2.3.2)
t"-
-~
60
n-l
P( U f.~(~+l) ~ 3, ~(Yi) ~ 3 ) ~ nP(~(I.n) ~ 3, ~(I.n-l)
i=l
(2.3.7)
<
-
nP (z
·2+z 1
'.=n- -n-
<
WI',
nP(z
2+z 1
-n- -n-
<
IlT) • p{z
'.=n
z
1 + z
-n--n
<
> 3)
IlT) -> 0
Thus, in the limit, no 'bunching up' of events Y € S
Il
-i
occurs; the event Y € S corresponds to the event II~(Yi+l) ~ 3, II that
- i
Il
is, to a visual response (beginning) at time Yi+l.
Since (as noted
before) visual responses can begin only at arrival times, (2.3.6) is
equivalent to the Poisson Limit Result, viz.,
lim
(n
->
P(m 3rd order visual responses in first n arrivals)
-_I:. m
= e !is 1m!
co)
With this, we are done.
We shall, however, return in Section 2.4 for
another look at the distribution of particle lifetimes measured in a
different way.
A more general form of the Limit Problem
We now turn to a more general model that allows for randomness of
the lifetimes
(~1;
i=1,2, ••• }.
The model we consider has been con-
sidered by Ikeda [10] and is as follows:
as before the arrival times
(y.;
i=1,2, ••• } form a Poisson process with intensity Il, while now life1
times
(~i;
i=1,2, ••• } form a sequence of iid positive random variables
(independent of arrivals) with finite first moment a.
We now proceed to
establish the Poisson Limit Result for this case, which of course
includes the previous case.
As in the previous case, let
~i
=
Il~i;
i=1,2, ••• , where
(~i;
(~i'
_I
< IlT)
as (n -> co).
(2.3.8)
I
I
i=1,2, ••• } be defined by
i=1,2, ••• } are the 'interarrival times'
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
1-
I
°1
I_
61
for particles.
mean.
The z.'s are iid exponential random variables with unit
-~
Define the 2-dimensional vector valued discrete time stochastic
process [Y.; i=2,3, ••• } and the collection of 2-dimensional sets S
I
I
I
I
I
I
I'e
I
I
I
I
I
I
I
Ie
I
-~
(~
~
> 0), as follows:
(2.3.8)
S
~
[(x,y) I~, y<~} (~o).
Note that [Y.; i=2,3, ••• } is l-dependent and strictly stationary.
-~
Furthermore, letting F(t) =
P(~i ~
t) we have
(2.3.9)
Using the relation (see Ikeda [lOJ, p. 298)
(2.3.10)
ci /'2.1
00 00
= J J [l-F(x+y)}[l-F(x)}
o
dy dx •
0
an application of the dominated convergence theorem shows that as
(n
->
00)
(2.3.11)
Hence, as (n -> 00),
(2.3.12)
P(Y.
-~
€
S )
~
~ ~/n.
Furthermore, as (n -> 00)
(2.3.13)
P(Y. € S , Y'+ l € S )/P(Y. € S ) -> 0
-~
~
-~
~
-~
~
Since,
(2.3.14)
o<
nP(Y; €S , Yi + l € S ) < nP(Y. € S ) .P(z'+l <
.-~
fl
fl -~
~
-~
I-.t:r + )
- i l
->
0
62
as (n -> 00) via (2.3.12).
sets S
~
Thus, the process (Xi; i=2,3, ••• } and the
0) satisfy the conditions of Corollary 1.1.3.1, so
(~>
lim P(Y
(2.3.15)
(n ->00) - i
€
S
~
=e
for exactly m amongst i=2,3, ••• n)
-I:-m
S 1m! •
2.
What remains is to establish a correspondence (in the limit) between the
events "Xi
€
S~"
and "a third order visual response at time Xi+1;"
i=2,. •• •
Establishing the above correspondence in the (present) case of
random lifetimes is considerably more complicated than for the case of
constant and equal lifetimes.
The problem is that particle lifetimes
independently may vary widely, from very small to very large.
Neverthe-
less, the correspondence can be established, and the remainder of this
Our method of attack is to establish the following relations (in
the limit):
the two events "Y
-i
€
S
~
for exactly m amongst i=2,3, ••• ,n"
and "X(xi+1) :: 3 for exactly m amongst i-2,3, ... ,n" have
equal probability;
(2.3.17)
if X(Xt+1) :: 3 for some i=2,3, ••• n then
(2.3.18)
the event "X(Xi+1)
~
~(Xi)
< 3;
3 for all i=2,3, ••• ,n" occurs with
probability 1.
From the above relations one may then conclude that (in the limit),
(2.3.19)
_I
I
I
I
I
I
I
_I
section is devoted to this task.
(2.3.16)
I
I
the desired correspondence is valid, and hence the Poisson
Limit Result is true; i.e., the number of 3rd order
I
I
I
I
I
I
I
_I
I
I
1I
I
I
I
I
I
63
visual responses is finite and is distributed according
to the Poisson law with parameter
~;
(2.3.20)
visual responses of order greater than three never occur;
(2.3.21)
visual responses of order less than three occur infinitely
often (this follows from the m-dependent analogue of the
Bore1-Cante11i lemma found in Lemma 1.2.1).
Thus, in the limit, we picture the process X(t) (defined by 2.1.1)
as jumping infinitely often amongst the states X=r, r
quently to the state X=3, though never higher.
<
3, and infre-
.?"'" A
For large nand nl.
a typical realization of the process might appear, in part, as illustrated in Figure 2.2.4.
Ie
If
31----------,,,..-----.-,,-
I
2-
1
()
I
I
I
I
I
FIGURE 2.2.4
The number of jumps by the process to the state X=3 in along series of
trials of such a process would have approximately a Poisson distribution with
parameter~.
These jumps to X=3 are the
p~overbia1
'rare
events' to which Poisson theory so frequently applies.
To prove (2.3.16) we use the following definitions and Lemma.
Definition 2.3.1
The event Ai is said to occur if
X(~+l) ~
3;
i=2,3, ••• , that is if two or more particles arriving prior to time Yi+1
I
64
survive until time
~i+1.
Definition 2.3.2
The event Bi is said to occur if Xi
€ S~;
i=2,3, ••• ,
that is, if the two particles arriving immediately prior to time
~i+1
survive until time
~i+1.
Definition 2.3.3
The event C is defined by C = Ai B ; i=2,3, ••••
i
i
i
If we let,
A(j) be the event "exactly j amongst A (i=2,3, ••• ,n) occur,"
i
n
B(j) be the event "exactly j amongst B.(i=2,3, ••• ,n) occur,"
n
~
C(j) be the event "exactly j amongst C (i=2,3j ••• ,n) occur,"
i
n
we have the following relationship:
Lemma 2.3.1
The proof is simple.
Since B. C A. and C = A.B., we have
i
A. = B. LJ C., a disjoint union.
~
~
~
~
~.
~
~
~
Thus, m A.'s occur if and only if m-j
~
B. 's and j C.' s occur for some integer j, 2 ..~ j
~
~
n.
By virtue of the definitions of the above events, (2.3.16) will be
established if we can show that p(A(m»
n
- p(B(m»
n
fact, it will suffice to show that as (n -> 00),
(2.3.22)
From Lemma 2.3.1 we have,
(2.3.23)
_I
I
I
I
I
I
I
_I
m
= U B(m-j)
C(j)·
n ' m= 0 , 1 , ••• ,n- 1 •
n
j=O
Proof.
I
I
-> 0 as (n-> 00).
In
I
I
I
I
I
I
I
_I
I
I
I
II
I
II
I
I
I
65
Each of these last two events is contained in C(O).
n
Hence the above
2P(C~0)), and so it tends to zero when (2.3.22)
expression is at most
is satisfied.
m
To show that (2.3.22) holds, we first define a basic event, F ;
i
the event F~ (0 < i < m, m ~ 2) is said to occur if the (m-i)th particle
1.
survives until the arrival of the mth particle; i. e. , F~
1.
< lm-i-1}.
=
We now note that we may write
",(0)
(2.3.24)
C
n
=
n+1
U C
m=4 m-
l' (s ince C = <1», and
2
m-1
m
_ = U R., where
m1
j=3 J
(2.3.25)
C
Ie
I
II
I
I
I
I
I
Ie
I
and so on.
~
Replacing every event of the form F. by the sure event, we
1.
obtain from (2.3.25),
(2.3.26)
m
P (R .)
J
j-1
m m
L: P (F £ F .) •
J
- £=1
<
By noticing that
(2.3.27)
we obtain from (2.3.25)
(2.3.28)
66
00 j-1
= E- E
j=3 £=1
00 00 £ £·'-flt
JJ
0
[4 t e
(£-1)! ] [4
j-£-l j-£-l -fls
S
e] (
(J'-£-l)!
1-F ()
s }{ 1-F ( s+t ) }
I
I
_I
0
dsdt
where F(x) = P<.!i
j-1
(2.3.29)
~
x).
Thus, u,sing the identity
t£-l s j-£-l
1
j-2 j-2 £ j-2-£
(t+s)j-2
=
(j-2):
E
(£)t
s
=
(j-2)!
£:1 (£-l)!(j-£-l)!
£-0
we obtain from (2.3.28) and the principle of monotone convergence,
(2.3.30)
(l-F(s+t)} dsdt
= fl2
JooJoo
o
(l_e- fl (s+t»
(l-F(s») (l-F(s+t)} dsdt •
_I
0
Hence, by (2.3.24)
(2.3.31)
P(~(O»
n
00 00
~ ~2 J J (l_e- fl (s+t»(l-F(s)}{l-F(s+t)}dsdt •
o
0
Now as (n -> 00) the integrand in (2.3.24) tends, pointwise, to zero.
Since it is bounded by the integrable function 2(1-F(s)}{1-F(s+t)} (see
(2.3.10»
nfl
2
the integral tends to zero as (n -> 00) (and fl ~ 0).
Finally,
-> A < 00 as (n -> 00), thereby establishing (2.3.22).
Now to prove (2.3.17) it will suffice to show that as (n -> 00)
n
(2.3.32)
P[.U O~(YJ'+l) ::: 3, !(Y .) ::: 3}] -:;::. 0 ,
J
J=2
or, using the notation established above, that
n
(2.3.33)
I
I
I
I
I
I
p[ U (A'+ l
j=2 1
n
A.)]
1
->
0 •
I
I
I
I
I
I
I
_I
I
I
I
I_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
67
Now A. = B. U C. ,. hence A.
~
~
~
~
n
(2.3.34)
p[U (A.+
i=2 ~
n
A'+
~
l
n
n
1
A.}]
~
<
L: P(B.
~
i=2
n
B.+ ) +
~ 1
n
2 L: P(C.)
i=2
~
<
- n P(B . n B'+1 ) + 2P(C~n(+O)1 ) •
~
~
The first term on the (extreme) right-hand side of (2.3.34) tends to
zero by (2.3.14) and the second term tends to zero by (2.3.22).
Thus,
(2.3.33) and hence (2.3.17) is established.
Finally, we prove (2.3.18).
events:
We begin by defining the following
> 3 for some i=1,2, ••• ,n}, H: = (!(Ym+1) > 3},
Hn = (!(Yi+1)
8 = (the three particles arriving immediately prior to time Ym+1
m
survive until time Ym+1}' T = (some three particles arriving prior to
m
time Ym+1 survive until time Ym+1 }, and Urn = Tm 8m
First we note that since 8 C T
m
m
n
(2.3.35)
H
n
n
U
T
m=4
=
m
U (8
m=4
U U )
m
m
and hence,
n
(2.3.36)
P(H )
n
<
n
P(S) + L: P(U ) •
m=4
m
m=4
m
L:
We now proceed to show that P(H ) -> 0 as (n ->
n
(2.3.18).
.
00),
thereby establishing
To show that the first summation on the right-hand side of
(2.3.36) tends to zero, observe that for m ~ 4,
00
(2.3.37)
P(8 ) = ~3J
m
00
00
J J (l-F(S+t+U)}(l-F(s+t)}(l-F(s)}e-~(s+t+U)
000
dsdtdu, hence
68
m
2000000
P(S ) ~ ~[~ J J
m=4
moo
~
(2.3.38)
J
(l-F(s+t+u)}(l-F(s+t)}(l-F(s)}
0
e
as (n ->
00)
(and ~ ~ 0).
-II
~
(s+t+u)
dsdtdu] -> 0
We now show that the second summation on the
right hand side of (2.3.36) tends to zero as (n ->
00).
In a manner
similar to that employed previously, we write
(2.3.39)
m-1 m
U
= U T , where
m-1
j=4 j
(2.3.40)
t; = ~~;r~ UF~?;r~ UF~~?~ ,
T; =
and so on.
F??;r~~ u F?~?~~ u F?~;r~~
U... ,
~
Replacing every event of the form F. by the sure event, we
1
I
I
_I
I
I
I
I
I
I
_I
obtain
P (T~ <
(2.3.41)
J
~
~
l<.e<k<j
P (F~~~J. •
Using a 'trinomial' analogue of the arguments in (2.3.27) through
(2.3.30), one can easily show that
P(U _1 ) :5 ~2
m
00 00 00
J J J (1-e-~(s+t~)H1-F(s+t+U)H1-F(S+t)H1-F(S)}
000
dsdtdu
and hence
n
000000
~ P(U ) ~ ~2J J J (l-e-~(s+t+U)}(l-F(S+t+U)}(l-F(S+t)}(l-F(S)}
~4
m
000
.
dsdtdu -> 0
as (n ->
00),
thereby establishing (2.3.18).
I
I
I
I
I
I
I
_I
I
I
I
1_
69
With this, the correspondence between events "Y.
-1
€
S " and "a third
IJ.
order visual response at time .Y +1" has been established.
i
In addition
to the Poisson Limit Result, we can immediately apply the general
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
results of Chapter 1 to obtain the following more elaborate (and informative) result.
Theorem 2.3.1
Suppose that particles arrive according to a Poisson
process with intensity IJ., and particle lifetimes are independent (of
each other and arrivals) and identically distributed random variables
with mean a
<
00.
Let U(s,t) denote the number of visual responses in
the time interval defined by (.Y[sn]' .Y[tn]) Le., in the time interval
defined by the [sn]th through [tn]th (s
= U(O,t).
U(t)
< t) arrivals.
Then, U(t) has, asymptotically,
Poisson process with intensity
~
as (n -> 00).
Write
properties of a
In particular, for t=l,
as (n -> 00) we-have the Poisson Limit Result, viz.
P[m (third order) visual responses in first n arrivals}
->
Proof.
_I:
e !>~
m
1m!,.
The proof follows easily from the general results of Chapter 1.
We have established the equivalence (in the limit) of events of the type
"Xi
€
SIJ." and "a third order visual response at time 4+1'"
Therefore,
we need only show that V(t) = the number of occurrences of the event
Y. € S with i=1,2, ••• ,[nt], is asymptotically a Poisson process.
IJ.
-1
This
is, of course, an immediate consequence of the stationary version of
Theorem L L 2.
It should be observed that the preceding theorem provides a quite
complete asymptotic picture of the process U(t).
Naturally, as men-
tioned in the introduction, it would be o·f great value to have some idea
I
70
of the 'closeness' of the asymptotic result to the finite situation.
However, the bounds, approximations and limits involved (implicitly and
I
I
_I
explicitly) in obtaining Theorem 2.3.1 are, unfortunately, too numerous
and/or complex to permit such a determination.
Empirical evaluation by
Monte Carlo methods could be quite feasible, however.
We might also mention that one simple consequence of Theorem 3.2.1
is that for 0 < r < s < t <00
U(r,s) and U(s,t) are asymptotically
-> 00);
= u1 ,
U(s,t) = u )- P{U(r,s) = u ) p{U(s,t) = u ) -> 0 as (n -> 00). In other
2
1
2
independent random variables as (n
i.e., p{ U(r,s)
words, the numbers of visual responses in non-overlapping intervals of
time are asymptotically independent.
The requirement of non-overlapping
intervals is not crucial; the important point is that the intervals be
'asymptotically non-overlapping.'
For example, an application of
Theorem 1.3.1 establishes the following interesting result.
Corollary 2.3.1.1
as n
-> 00.
Let 0 < r
n
< sand
0 < t < u and t - s -> 0
-n
n
n
n
n
Then U(r ,s ) and U(t ,u ) are asymptotically independent
n n
n n
(and Poisson in distribution).
Thus, for example, with r n
= 0'n
s = 1 +1
n' t n = 1 - 1
n' un = 2,
we have that U(O, 1 + 1) and U(l n
1,
n
2) are asymptotically independent,
even though these random variables represent the number of light sensations in highly overlapping time intervals.
For completeness we mention that the waiting times (in units of n)
beweeen successive light sensations have, asymptotically, independent
exponential distributions with mean
1/~.
This, of course, is a conse-
quence of the asymptotic Poisson character of U(t).
In practice these
inter-sensation times might provide a convenient characteristic of the
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
1_
71
process X(t) to observe for purposes of comparing the finite model with
its (theoretical) asymptotic behavior.
Finally, we remark that the results of Theorem 2.3.1 are, in some
I
I
I
/
sense, complimentary to those obtained by Cramer and Leadbetter [4] for
certain continuous time normal stationary processes.
In this latter
work, the authors show that the stream of upcrossings of a high level
by the sample functions of such a fixed process, when suitably normalized, asymptotically the Poisson distribution.
In Theorem 2.3.1 we
deal, not with a fixed process and increasing level, rather we deal with
]
a fixed level (the threshold number of quanta necessary for a visual
response) and a changing process (i.e., a process with parameter
tending to a certain limit).
The results obtained are strikingly simi-
lar, as one would suspect would be the case.
1e
]
I
1
~
Some reflection will
convince one that an upcrossing of a Level for a discrete time stochastic process can be represented as a linear relation; limit results for
numbers of upcrossings may therefore be obtained on the one hand by
changing the level curve (the Cramer-Leadbetter
[4] approach) or by
'"
changing the parameters of the underlying process while keeping the
level fixed (the approach of Theorem 3.2.1).
We refer forward to
Chapter 3 for a further discussion of these and related points.
1
J
2.4
The distribution of 'scores'
In the model of the preceding section (viz. where arrivals formed
a Poisson process of intensity
J
1
J
~
and lifetimes were (random) iid),
individual particle lifetimes could very likely be quite large if, for
example,
1-1-6
~ T.
~-1
=
00
for 5
> 0, say.
Nevertheless, it was established
that in the limit light of no higher order than three occurs.
This
72
indeed seems to be a curious result, and leads one to look more closely
at the limiting behavior of the distribution of lifetimes of individual
particles.
To facilitate this investigation, we introduce the following
notion of a'-score.'
Definition 2.4.1
The 'score' of a particle is the number of arrivals
the particle survives (counting its own arrival as the first).
8(i)
will denote the score of the ith particle.
To illustrate the notion of a 'score,' we provide the following figure;
the first panel (A) represents a segment of a possible realization of
the process X(t); the second panel (B) represents the corresponding
record of 'scores.'
S(i)
~ +-----;~----­
!5 +----t-----.,---
q. +----j-----~.
3
- f - - - - - - - . _ _.-
T 1
2.
\ r---
o
f-pu-
--
-+I~'-I-I+----+-~l
1.
3
It
~ ",
PA~TlCu:
tll/MBQI\,,"
)
(6)
(A)
FIGURE 2.4.1
Clearly 8(i) can be any positive integer and, for example, 8(i) = 3 if
and only if (using the notation of the previous sections)
I
lei
I
I
I
I
I
I
el
I
I
I
I
I
I
I
_I
I
I
I
Ie
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
73
Yi+2
~
< Yi +3·
~i+Yi
We remark that one can reconstruct the pattern
(number, order) of visual responses of a particular realization of the
process 1(t) knowing only its sequence of scores.
response occurs at time
Ym
i f and only if S(m-i)
amongst i=1,2, ••• ,m-1, while S(m-1-i)
i=1,2, •.• ,m-2.
~
Thus, a visual
~
i+1 for at least two
i for at most one amongst
Accordingly, considering merely the pattern of scores
is really sufficient for purposes of the Limit Problem.
We now establish a simple result that will enable us to bring to
bear the results of Chapter 1 in a study of the asymptotic distribution
of scores.
Suppose particles arrive according to a Poisson process
Theorem 2.4.1
of intensity
~,
and lifetimes are iid (and independent of arrivals).
For fixed k> 1 the events [S(i)
~
k, i=1,2, ••• } are (k-2)-dependent.
Furthermore, it is possible to define a (k-2)-dependent strictly stationary (vector) stochastic process [Y~; i=1,2, ••• } and sets S
(~>
~
-1
0) so
k
that the event [Y.€ S } occurs if and only if S(i) >_ k.
-1
Proof.
---
Clearly S(i) ~ k if and only if Yi +k-1 - ~ ~ !i' which is
equivalent to
_z~+l
.L
interarriva1 time).
times
~
[~i;
+ ••• + -1
z.+k - 1
<
-
liT.
r---1
where -1
z. =
IIX.
(x. being the
J--1-1
Because the -1
z.'s are iid (and independent of 1ife-
i=1,2, ••• }), the events S(i)
~
k and S(j)
~
k (i
<
j) are
dependent if and only if i+1 ~ j+1 ~ i+j-1, or generally li-j/ ~ k-2.
If we now define a sequence of (1-dimensiona1) vector random variables
[y~;
i=1,2, ••• } and sets S
-1
~
(2.4.1)
(~>
0) as follows:
~i+1 + ••• + ~i+k-1
(......;;;;.~----;;;,....;;~)
T.
-1
; i=1,2, ... ; S
~
[xl X:S~},
the second assertion of the theorem follows, and we are done.
74
A similar theorem can be stated for events of the type [8(i)
i=1,2, •.• } or [8(i) = k; i=1,2, ••• }.
~
k;
However, for the moment it is most
convenient to work with the events described in Theorem 2.4.1.
cation of Corollary 1.1.3.1 to the asymptotic distribution of scores.
As a first example we consider the model considered in the beginning of Section 2.3 dealing with the case where lifetimes are constant
case, as (n ->
> 0) and arrivals are Poisson.
~),
(2.4.2)
and
P[8(i)
Observe that in this
I
~
V 0,
>
1
k] = r(k-1)
~T
o
J
s
k-1
-s
(UT)
e ds ""~(k'--~l~)~!
k-2
while it can also be shown that
(2.4.3)
max
I i- j I ~
P[8(i)
>
k, 8(j)
>
k]/P[8(i) > k] ->
k-2
°.
Therefore, an application of Corollary 1.1.3.1 in conjunction with
Theorem 2.4.1 yields the result that as (n -> 00).
P(m scores of k or more amongst first n particles)
(2.4.4)
->
where
~
=
AT 2 /2.
For k = 1,2"
the analogue of the Bore1-Cante11i
Lemma for dependent events embodied in Lemma 1.2.1 suffices to estab1ish that events 8(i)
~
k occur infinitely often.
Thus, in this sim-
p1est (non-trivial) case there is really little difference between the
limiting behavior of scores and the limiting behavior of visuR1
responses.
_I
It is to
be observed that this theorem 'sets the stage,' so to speak, for app1i-
and equal (to T
I
I
The next example treats a situation potentially quite
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
I
I
I
Ie
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
75
different in this respect.
As a second example we consider the more general model for the
Limit Problem, viz. the case where lifetimes (T.; i=l,2, ••• } are iid
-~
random variables with finite first moment a and arrivals from a Poisson
process independent of lifetimes.
If F(t) = P(T.
1
P[S(i) > k] = f(k-l)
(2.4.5)
-~
00
J
<
- t) and k>- 2,
[l-F(s)] Sk-2 ~k-l e-~s ds.
o
We consider two cases:
Case A:
~
2
T.
e,,-~
< 00.
This situation is similar to the preceding case.
We have
(2.4 •.6)
P[S(i) > 3 for some i=l,2, ••• n]
<
n
Z P[S(i)
i=l
= n~
2
>
3] = nP[S(n»3]
J00 [l-F(s)]~s 2e - ~ s
ds.
o
The integrand above tends pointwise to zero as (n-> 00), while it remains bounded by the integrable function s[l-F(s)].
Thus, via bounded
convergence,
P[S(i) > 3 for some i=l,2, ••• ,n] -> 0 as (n -> 00).
(2.4.7)
Hence, in the limit, no score higher than three occurs.
A more precise
result; e.g., an actual limit distribution for scores of three, say, is
not possible without further knowledge about F.
2
Case B: (;
T. = 00.
c.._~
previous cases.
This situation may be quite different from the
To display this difference we must consider in general
the asymptotic behavior of (2.4.5) as (n -> 00) and so ~ ~ O.
This
presents some problems, since the asymptotic behavior of (2.4.5) as
(n -> 00) is intimately related to the asymptotic behavior of l-F(t) as
I
I
I
76
t -> 00.
Now for particular distribution functions F one may actually
_I
attempt to evaluate the Limit of (2.4.5) as (n -> 00), and attempt to
verify the other conditions necessary for application of the results
of Corollary 1.1.3.1.
I
I
I
I
I
I
However, we are concerned with general results.
In the following discussion we present some (reasonable and
general) conditions on the asymptotic behavior of 1-F that will enable
u£ to obtain some general results.
First note that, apart from con-
stants, (2.4.5) is the Laplace transform of the function [l-F(s)]s
k-2
•
It is not possible to make general statements about the asymptotic
behavior of this transform as ~ ~ 0 (n
tions about the behavior of [l-F(s)]s
-> 00) without making some assump-
k-2
as s
-> 00.
(See Appendix B.)
An appropriate assumption is that
(2.4~8)
[1_F(s)]sk-2 ~ L(s) s-a
as s -> 00
where L(s) is a so-called function of slow growth.
non-negative real function defined for s
>
el
That is, L(s) is a
0 and bounded in every finite
interval such that for every fixed A > 0, L(AS)/L(s) -> 1 as s -> 00.
The constant
a is chosen so as to insure suitable moment conditions on
F are satisfied (here, we want
2
cT.
'" -1.
= 00, c:
T.
'"-1.
<
00).
In keeping with our
previous technique, we may occasionally proceed into the more detailed
analyses that follows for the specific case k=3.
The results are valid
(with appropriate and obvious changes) for a general k.
Let us, then, assume that as s -> 00,
(2.4.9)
[l-F(s)] s ~ L(S)S-a, 0 ~ a ~ 1 •
where L(s) is a function of slow growth.
If
a> 0 in (2.4.9), then
L
fT.
-:L
<
00; yet by appropriate choice of
a and/or L(s), one can have, for
I
I
I
I
I
I
I
_I
I
I
I
I_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
77
example,
5
~ 't'"
~-1.
=
for 5
00
e
> 2 and -1.
't'~ <
00
[1 - F(s)] ~ 1/s2 as s
(2.4.10)
< 2.
for 5
->
00
Such is the case when
•
Under the assumption of (2.4.9) we have the following result.
Theorem 2.4.2
with
Suppose particles arrive according to a Poisson process
intensity~,
while lifetimes
(~i;
i=1,2, ••• ) are iid (and independ-
ent of arrivals) with finite mean and common distribution function F
satisfying (2.4.9).
Then for any k
P[S(i)
(2.4.11)
> k] ~
lI
> a+2
a +l L(.!) r(k-a-l)
~
r (k-l)
as
t-"
Proof.
The theorem is given in Smith [17].
~
I
V 0 •
The result follows from a
general Abelian theorem for Laplace transforms by which we have, as
~ ~O,
00
(2.4.12)
_ ~
_
k-2 k-l -~s
P[S(i)2::k ] - r(k-l)f [1 F(s)]s
~
e
ds
o
We emphasize the fact that for all integers k
P[S(i) 2:: k] are of the same asymptotic order (as
> a+2 the quantities
~~
0).
In particular,
for k > a+2,
(2.4.13)
p[S(1.") = k] ~
l+a_(l) (r(k-a-2)
rCk-a-l»)
-L ~
(k-l) - (k) - as ~
r
~.
Therefore, for all integers k
r
Vi
0
•
> a+2, scores of order k have the same
asymptotic order of probability.
This point will be pursued further.
To illustrate the curious behavior of scores, we consider the
following very specific example.
Suppose that the distribution function
78
F of lifetimes {T.;
i=1,2, ••• } satisfies (2.4.10).
-1
noted, guarantees that e.!~
< co for
°< 2, while e.!~ = co for ° 2: 2.
Theorem 2.4.2 it follows that for k
P[S (i) _> k]
(2.4.14)
This assumption, as
From
I
I
I
I
I
I
j
2 r (k-3)
"" I-t • r (k-l)
as I-t V 0 •
I
(2.4.15)
I-t V 0 •
Thus, as (n -> co)
nP[S(i) 2: k]
r (k-3)
.
-> A r(k-l) , k 2: 4 ,
while for k = 3, nP[S(i) 2: 3] is unbounded as (n -> co).
It can be verified for k 2: 4, as (n
(2.4.17)
-> co)
P[S(i) 2: k, S(j)
max
o < Ii- j I :s
~
k]/p[S(i) 2: k]
-> 0 •
k-2
Hence Corollary 1.1.3.1 is applicable to the determination of the
asymptotic frequency of occurrence of events of the type S(i) > k;
k 2: 4.
In fact, from the aforementioned Corollary one may conclude
that for k
>4
lim
(2.4.18)
(n
-> co)
P[S(i)
> k for exactly m amongst i=1,2, ••• n]
-0
= e
where ok = Ar(k-3)/r(k-l).
k ok
ml m.' ,
Furthermore, the Borel-Cantelli-type result
contained in Theorem 1.2.1 insures that as (n -> co) events of the type
S(j) 2: k, j=1,2,3 occur infinitely often.
By using (2.4.13) we can
state the above results in a slightly different and more useful form.
For k
_I
> 3,
Furthermore,
(2.4.16)
I
I
> 4 and any non-negative integer m,
el
I
I
I
I
I
I
I
_I
I
I
I
I_
I
I
I
I
I
I
Ie
I
79
lim
(2.4.19)
(n
->
P[S(i) = k for exactly m amongst i=1,2, ••• ,m]
00)
=e
and for k
<
lim
(n
~k
=
m
,
~k/m.
3
(2.4.20)
where
-~k
~k
-
P[S(i) = k for exactly m amongst i=1,2, ••• ,n]
0,
-> 00)
~k+1 (~'s
are positive constants).
Thus, unlike the preceding cases, we have (a) scores of every
order occurring in the Lir.tt (in particular scores of order> 4 occur
marginally according
~o
a Poisson distribution), and (b) scores of order
three (and less) occur infinitely often.
Accordingly, the asymptotic
distribution of scores displays a structure significaIlt1y different
from that of visual responses (where the highest asymptbtic order was
three).
Intuitively, the difference lies in the fact that we may obtain
very long lived particles (judged in terms of scores), yet the particles
do not ' accumulate' closely enough to produce patterns yielding visual
I
responses of order higher than three.
I
I
I
I
I
segment of a possible 'score record' when lifetimes have a distribution
Ie
I
For purposes of illustration, the following diagram represents a
function satisfying (2.4.10).
The horizontal scale is greatly com-
pressed as compared to Figure 2.4.1 (B).
In effect Figure 2.4.2 is
obtained from the process K(t) by 'standing lifetimes on end,' measuring
vertical distance in terms of score rather than time.
80
_I
sll)
,,
i
5
If
31n-:;iTT-n'-T'T"rMm"TT1mn-r-rt.,-lr'TTTT"":rT"t".......- -
l
1-
---~,
o
FIGURE 2.4.2
In the limit, jumps to S=k (k
<
3) occur infinitely often, however the
orders of magnitude may be different.
In the case at hand we note that
> 3] ~ A log 1 , nP[S(i) > 2] ~ ~ (K a positive, finite conIl
Il
A
I
stant) and nP[S (i) ~ 1J ~""2 as·1l V 0, (n -> 00). Thus, amongst the
nP[S(i)
Il
first n particles one might expect the number of occurrences of, say,
the event S(i) ~ 2 to be O(~n), whereas one knows that amongst the first
n particles the number of occurrences of the event S(i)
~
> 1 is n,
exactly.
Of course, in the limit, jumps to S=k (k
events.'
Such jumps occur a finite number of times and (marginally)
4) are the 'rare
according to the Poisson law.
The above marginal results can be refined.
We have previously
established that the marginal distribution of the number of jumps to
S=k (k
~
I
I
4) is Poisson with parameter
~k
(in the limit).
The following
theorem extends this result to include a limiting joint distribution.
Theorem 2.4.3
Suppose particles arrive according to a Poisson process
with intensity
Il, and particle lifetimes {~i; i=1,2, ••• ) are iid (and
independent of arrivals) with a distribution function satisfying
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
I_
81
positive numbers, and k ,k , ••• ,k be r positive integers such that
1 2
r
<
Then,
k.
r
r
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
lim
(2.4.21)
(n
p[
-> 00)
n
[SCi) = k
t=l
t
for exactly m amongst
t
1=1,2, ••• ,[ttn ]}]
-(~ttt)
r
=
where
~t
Proof.
2.5
II
t=l
[e
(~ttt)
mt
/mt !}
is defined as in (2.4.20)
This is a rather direct application of Theorem 1.3.1.
The Limit Problem for some new situations
As was mentioned in the introduction to this chapter, our approach
to the Limit Problem (via the general results of Chapter 1)
enab1e~
to establish the Poisson Limit Result in some new situations.
us
It will
be noticed that many of the results of the preceding sections are
immediately ftransferab1e.'
We begin by establishing the following result, which essentially
gives the Poisson Limit Result in the case where the interarriva1 and
lifetime distributions considered in Theorem 2.3.1 are interchanged
(and slightly strengthened).
Theorem 2.5.1
Suppose that light quanta arrive at (random) time points
o :5 Yi < Y2 < •.• where Yi =
t (~(1~2+·· ·~i)
and the
~i' s
are iid
positive random variables with fixed distribution having unit mean and
finite variance, and
~
>
o.
Let particle lifetimes [T.;
i=1,2, ••• } be
-1
iid exponential random variables (independent of arrivals) with mean
Ct.
Then
82
(2.5.1)
lim
P(exactly m third order visual responses amongst first
[n -> 00]
arrivals) = e
where [n -> 00] denotes "n ->00, p.
-&
-0 m
0 1m!
0 in such a manner that
~ 1 ~2
~
00 -sx
.
nG(-) GI;-) -> 0," where G(s) = f e
dG(x) ~s the Laplace-Stieltjes
CX\l
CX\l
0
transform of the distribution function G(x) = P(x. < x), and 0 > 0 is a
-~
-
positive constant.
Remarks.
Before proving Theorem 2.5.1 we make some observations.
limiting condition [n
-> 00] is possible if and only
if
'G' is
A
the Laplace-
Stieltjes transform of a positive random variable, since by definition
P(X.
(2.5.2)
-~
=
0)
+ f
00 --1 x
e ~
dG(x)
o
The right hand side of (2.5.2) tends to zero as
P(x.
-~
~ -&
I
I
_I
I
I
I
I
I
I
_I
0 if and only if
= 0) = O. Furthermore, note that the limiting condition [n -> 00]
reduces essentially to the condition (n -> 00) previously consigered
(e.g., in Section 2.3) when arrivals are Poisson.
'G'(!) ~ ~
~
only if
as
~ -&
0 so that n~G(1
- )
~
~G(2)
CXj..L
n~2 -> a 20/2! = A as n -> 00,
- :--.
......
~
In that case,
and
as n - :--.
...... 00 and"to'" VI 0 -If
...
S>.
u
I
V
O.
Finally, we are following
a pattern established earlier by considering the particular case
Theorem 2.5.1, that is, third order visual responses.
k~3
in
It should be
clear that Theorem 2.5.1 could have been proved for general k; accordingly the appropriate interpretation of [n
k-l
.
in such a manner that n j~l 1het) -> 0."
Proof of Theorem 2.5.1
serve to establish that
-> ooJ
would be "n
-> 00, ~ -& 0
Via Theorem 2.2.2, a proof of this theorem will
I
I
I
I
I
I
I
_I
1-
I
I
I_
83
lim
(2.5.3)
[t
-> ooJ
P(exactly m third order visual responses in (O,t]>
= e
I
I
I
I
ooJ is interpreted to mean "t ->
where the symbol [t ->
k-1
-6 m
6 1m.! ,
III'"" Vi 0 in such
00,
.
6
a manner that "lJ.t IT G(...l....) -> 6." As a consequence, we deal exclusively
j=l CX!J.
with limits as [n -> ooJ . As before, define -~
z. = IIX.;
where
I'""_~
x. = y. -y. 1; i=1,2,... (y
-~
~
~-
= 0) are the'interarrival times.'
0
The ran-
dom variables (z.; i=1,2, ••• ) are iid with unit mean and finite
-~
varianc~
We now define a two-dimensional vector valued stochastic process
(Y.; i=2,3, ••• ) and a collection of 2-dimensional sets S (II
-~
IJ.
I'""
>
0) as
follows:
(2.5.4)
Z,+Zi
- -1
-~
= (T
-i-l
z.
' :r:-); i=2,3, •••
-~
S = (x,y)
jJ.
~
I x<jJ.,
y<jJ.}
(~O).
We now assert that the l-dependent vector stochastic process
(Y.; i=2,3, ••• ) and the sets
SjJ.
_~
(III'""
> 0) satisfy the conditions of
Corollary 1.1.3.1 under the limiting condition [n ->
ooJ.
To verify this
assertion, we first note that
(2.5.5)
nP(Y.
-~
E
S )
nP(z.+z.
1
-~ -~-
jJ.
00
= n
00
I I
n(j e
o
Furthermore,
/-.LT.
-~-
l'
-1(s+t) -1 s
e IJ.
e jJ.
dG(s)dG(t)
2
oo-s
=
<
jJ.
00
1 t
dG(s))(I e IJ.
o
dG(t) }
84
_I
-!v
J.(s+t) J.(t+v)
= 1 lIe ~
e ~
e ~ dG(s)dG(t)dG(v)
00 00 00
000
1
2
2
-s
00 - t
00 v
= (1 e ~ dG(s)}(1 e ~ dG(t)}(1 e ~ dG(v)}
00
0 0 0
80, via (2.5.5),
(2.5.7)
pry.
'-=-1
€ 8 , Y.+1 € S )/P(Y. € S ) ="2'(£)
~
~
-1
-1
~
~
-> 0 as [n ->
00].
Thus, via Corollary 1.1.3.1 we conclude that
(2.5.8)
lim
[n
->
P(Xi€ S
~
00]
for exactly m amongst
=
i=2,3,.~.,n)
m
e-5 5 /m! •
All that remains to show is that the analogies of (2.3.16),(2.3.17) and
(2.3.18) are satisfied for the case at hand.
Accordingly we will then
have established a correspondence (in the limit) between the events
".xi € SJ.& for exactly m amongst i=2,3, ••• ,n" and "exactly m third order
visual responses in the first 'n arrivals."
To verify that condition (2.3.16) is satisfied for the case of
hand, all we need show is that, in the notation of Section 2.3,
P(c(O)) -> 0 as [n ->
n
00].
00
(2.5.8)
p(c(O)) <n
n
Observe here,
j-1
P(x
< ~T£,
x +••• +x.
< ~T.)
+••• +x£
- 1
- 1
-J
-J
1· 1
00
j-1 00 00 --s -(s+t) £*
dG (s) dG(j-£)*(t)
= n I: I: l I e ~ e ~
j=3 £=1 o 0
00
j-1
= n I: ;t {"G'(1)}£ (G'(£) }j-£
~
~
j=3 =1
I:
I:
j-3 £=1
I
I
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
85
I_
I
I
I
->
establish the validity of the analogous (2.3.17) and (2.3.18).
not present the details.
~
(2.5.9)
-(
-I
-,
Arguments analogous to
those used in Section 2.3 can be transferred, almost word forword, to
Limit Result, viz.
]-
,
hence the analogue of (2.3.16) is satisfied.
~
]
° as [n -> 00]
With this, we have established the Poisson
lim
P(exact1y m third order visual responses in first
[n -> 00]
-5 m
n arrivals) = e 5 1m! •
In addition, the analogues of (2.3.19), (2.3.20) and (2.3.21) are, of
course, valid.
Hence, the asymptotic behavior of the process !(t) is
indeed the same as for the case considered in Section 2.3 and pictured
(in part) in Figure 2.2.4.
We remark, in conclusion, that so far, the
proof has made no explicit use of the fact that the -1.
x.'s have a finite
second moment.
However, we recall that Theorem 2.2.2 required this
condition in order that the 'non-random n' Poisson Limit Result imply
the corresponding 'non-random t' Poisson Limit Result.
J
]
-I
_I
1-1
We shall
(We refer to
Appendix A for an expanded discussion of related points.)
With this,
the theorem is established.
As an illustration of the preceding theorem, let us suppose that
the
~(s)
~i
(i=1,2, ••• ) have a Gamma distribution with parameter
A, so that
1
Noting that ~l) _ ~A as ~ ~ 0, we conclude that if
(l+s)A·
~
particle arrive at ti~e points v. = 1 (xl+••• +x.) (i=1,2, ••• ) and have
=
"'"1.
~
-
-1.
exponential (a) lifetimes, then the Poisson Limit Result (for the
86
number of third-order visual responses) is valid when [n -> 00] is taken
to mean
n
->
00,
j.l
~
0 in such a manner that nw) 2/\
->
5. 1I
I
I
_I
I
I
I
1
I:
I
_I:
I:
,l
I
I:
I
IJ
I,;
I'
s
_I"
I
I
I
1_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
CHAPTER III
ASYMPTOTIC DISTRIBUTION OF THE NUMBER OF UPCROSSINGS OF A
HIGH LEVEL BY CERTAIN DISCRETE-TIME STOCHASTIC PROCESSES
3.0
Summary
In Chapter 1 we developed a series of general theorems concerning
the asymptotic behavior of the probabilities of 'rare' events in
sequences of dependent (m-dependent, f(n)-dependent, strongly mixing)
binomial (multinomial) trials.
In this chapter we develop a particular
application of these results dealing with the asymptotic distribution
of the number of upcrossings of a high level by the sample function
corresponding to certain dependent discrete time stochastic processes.
/
The results are analogous to those obtained recently by Cramer and
Leadbetter [4] and Cram~r [3] for a broad class of continuous time,
strictly stationary normal stochastic processes.
It should be noted that Chapter 2 dealt with a study of the
(asymptotic) number of upcrossings of fixed integer levels by a particular continuous-time stochastic process.
The general (discrete) theory
of Chapter 1 could be applied because the level changes occurred only
at points of time determined by a discrete-time stochastic process.
3.1
Introduction and preliminary comments concerning continuous time
normal stochastic processes
We first develop some (known) definitions and results, pertaining
to continuous time normal processes, that will enable us to give the
I
88
recent result due to Cram&r and Leadbetter [4].
The purpose is pri-
mari1iy to motivate analogous definitions and results for the case
I
I
_I
we shall consider, and to display the similarity of the two results.
We shall use terminology found'in [4].
Let
~(t),
-00
<t <
00
be a real, normal and stationary process with
zero mean and unit variance.
Suppose that its cov.ariance function r{t)
satisfies the conditions
(3.1.1)
r{t)
=1
.-
7\2 2 7\4
t + 4!
n
4
t
-
4
+ o{t)
with finite 7\2 and 7\4 as t -> 0, and
(3.1.2)
r{t)
= o(t-O:)
for some a> 0, as t ->
one can assume that
~(t)
00.
(It is known [4] that (3.1.1) implies that
has a continuous sample function derivative,
and that (3.1.2) implies that
~(t)
is ergodic and that values of
~(t)
lying far apart on the time axis depend ,on1y weakly upon each other.)
We shall let U{s,t) denote the number of upcrossings of a level u in
the interval (s,t] by the process
of U{o,t).
(3.1.3)
~(t)~
When s=o, we write U{t) instead
Also write
I-L =
..[7\2 -u 2 /2
U{l) = - e
2rr
As remarked in [4], for a process
(see
~(t)
4 ).
satisfying the above con-
ditions, the 'stream' of upcrossings of a fixed level u is stationary
and regular.
Were the numbers of upcrossin,.occurring-in disjoint
time intervals also independent random variables, the three conditions
characterizing the Poisson process would be satisfied.
will in general not be the case, but as the level u ->
However, this
00,
the third
I
I
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I
I
I
I_
89
condition is satisfied aSymptotically.
Therefore, one might reasonably
(and quite justifiably) suspect that the stream of upcrossings is
asymptotically a Poisson process (in a specified way) as the level
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
u ->
00.
It appears natural to use
l/~
remark that
->
00
as u ->
00
l/~
as a scaling 'unit' of time.
by relation (3.1.3).
We
Accordingly,
...
Cramer and Leadbetter [4J establish the following result.
Theorem 3.1.1
Suppose that the normal and stationary process g(t)
satisfies conditions (3.1.1) and (3.1.2).
Let (al,b ), (a ,b ), ••• ,
2 2
l
(a ,b ) be r disjoint time intervals depending on u in such a way that
r r
b. - a. =
~
1
T./~
1
(T.1 > 0), rand T.,
••• ,T r being independent of u.
1
m ,m , ••• ,m be r non-negative integers independent of u.
r
l 2
r
r
-T. m.
P( n [U(a.,b.) = m.}) -> II (e 1 T .1/ m.!) as u -> 00
i=l
1
1
~
i=l
1
1
Let
Then
A proof of the above result is quite involved, yet straightforward.
It relies heavily upon the normality of the process
~(t).
There are several other asymptotic results related to Theorem 3.l.L
If in the univariate special case of Theorem 3.1.1 one considers m = 0
1
-z
and replaces the parameter T b y e , and taking as before
(3.1.4)
it follows that, for any fixed real z, the probability of no upcrossing
of the level u in the interval (O,T) satisfies
(3.1.5)
P(U(T) = 0)
->
-z
e-e
as u
->
00
•
A small lemma (see (4) p. 272) then yields the following results:
90
Corollary 3.1.1.1
P(
(3. L 6)
s(t) ~ u) -> e-e
max
-z
as u ->
00,
and hence
I
I
_I
0< t < T
(3. L 7)
P(
max
s(t))
o< t < T
< (2 log
~
k
T)
2
log
+
A2
- log 2rt + z
(2 log
_>
e-e
T)~
)
-z
Not quite so immediately we have another result.
Let us' choose as
a scaling time unit,
(see [4] ).
(3. L 8)
Let F(t) be the distribution function of the duration of an upwards
excursion above the (high) level u.
Then, we have the following result:
F(et)
(3. L 8)
->
1 - e
as u ->
the normality of s(t).
It is remarked in [4] that the above results for continuous time
normal stationary processes have their counterparts for discrete time
normal stationary processes, but with simpler proofs.
Let
e. (i=0,+1,+2
••• ) be a stationary sequence of normal random variables
--
~1
with zero means, unit variance, such that
(3. L
some ex
>
O.
For an analogue to the continuous time case, we
consider the connected path of straight line segments between the points
(i,!.), and the up and down crossings of this line with a given level u.
1
I
00
The proof of this theorem is not simple, and again depends heavily upon
for
I
_I
Theorem 3.1.2
_ (1I.) t 2
4
I
1
I
I
I
I
I
I
I
I
I
_I
1-
I
I
1_
91
Note that this path has a single upcrossing in the interval (j, j+l) if
and only if 1. 1 < u and !.
J-
J
> u.
The asymptotic distribution of the
number of upcrossings as well as the maximum of the discrete process
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
I
differ somewhat from the corresponding asymptotic distributions for the
continuous process.
As an example, we quote the following result, to
be compared with (3.1.7):
Theorem 3.1.3
For a discrete stationary normal process satisfying
(3.1. 9) we have
(3.1.10)
P( max
O<i<T
!i ~ (2 log T)~ + z -
<
i-\ log
log log
(2 log T)
-> e-e
3.2
4:rr)
2
-z
, as u
->
00
•
Related results for discrete time processes that are not
necessarily normal
In this section we shall apply some of the more general results of
Chapter 1 to obtain limit results for some not-necessarily-normal discrete time processes, results analogous to those of the preceding section for normal processes.
It is felt that, even though the class of
discrete time processes for which the conclusions are valid is somewhat
restricted, they are useful at least for displaying the robustness of
the results of the preceding section under deviation from normality.
We are motivated by the previous section in formulating the following
definitions.
Let {x.; i=0,±1,±2, ••• } be a discrete time stochastic process.
-~
We
shall associate with each realization of this process a (continuous)
sample function
{~(t),
-
00
<
t
<
joining, in sequence, the points
oo} obtained from {~i; i=0,±1,±2, ••• } by
(i'~i)
by straight line segments.
92
Thus, a particular sample function associated with the process might
(in part) appear as in the following figure:
I
I
_I
l(~)
FIGURE 3.2.1
Such a process might arise naturally from periodic phenomena, or as an
approximation to a continuous phenomenon.
~i
For example,
might repre-
sent the amount of water in a reservoir at time i in the history (or
projected future) of the reservoir.
Frequently it is desirable to have
some idea about the probability that such a process will exceed a certain (high) level, or drop below a certain (low) level.
These events
might correspond to catastrophies for the system being characterized by
the process.
In the example above, a high level might represent the
capacity of the reservoir behind a dam; a low level might correspond to
a minimal value necessary to produce, say, electric power.
We are
therefore faced with the problem of finding the probability that the
sample function
~(t)
(associated with
[~i
(i=O,±1,±2, ••• »)) sustains an
an upcrossing of a (high) level u or sustains a downcrossing of a (low)
level v.
Since a solution of the problem for upcrossings of
implies the solution of the problem for downcrossings of
restrict attention to the former of the two problems.
~(t)
-~(t),
we
In addition, a
I
I
I
I
I
I
_I
I
I
I,
I
I
I
I'
el
lu
I
I
1_
93
related problem is to determine the distribution of (sojourn) time
above a (high) level given that an upcrossing has occurred.
This might
well correspond to a 'duration of crisis' distribution when interpreted
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
I
physically.
It seems that in many situations, the specific assumption of normality for
[~i
(i=O,±1,±2, ••• )} might be difficult to justify on the
basis of evidence (or lack thereof) available.
However, it may be
possible to make some assumptions on certain conditional probabilities
such as P(x.+.
> ul -1.
x. > u) for large u.
-1. J
As it will be seen, such
assumptions are required in our approach.
Now, there is an upcrossing of a level u by the process
interval (i-l,i) if and only if
~i-l
< u and
~i
> u.
~(t)
in the
Thus, events of
the latter type are of specific interest when studying upcrossings.
For
studying such events, we introduce the following definition.
Definition 3.2.1
Corresponding to the stochastic process
[x.
(i=O,±l,±2, ••• )} we define a new (vector) stochastic process
-1.
[Xi (i=O,±l,±2, ••• )} and sets Su
(-00
(i=O,±1,±2, ••• ) and Su = [(x,y)/ x
<u<
00) as follows:
-1. -
-1.
< u, y> u}.
It is to be observed that the event -1.
Y.
€
Su occurs if and only if
there is an upcrossing of the level u by the process
val (i-l,i).
Y. = (x. l'x.)
-:L
~(t)
in the inter-
Thus our study of the number of upcrossings of u by
~(t)
in some interval of time, say (a,b), reduces to a study of the number
of events of type Xi
€
Su occurring with i = [a]+l, ••• ,[b]-l.
This, in
conjunction with the form of the theorems of Chapter 1, motivates our
next de finition.
94
The event A~ is defined as A~ = [Xi
Definition 3.2.2
€
Su 1
_I
(i=O ,±1,±2, ••• ).
We now propose to show that, if u ->
00
in a certain manner, and
the process [Xi (i=0'±1,±2, •••)} (or equivalently
[~(=O ,±1,±2, ... )
}
satisfies certain conditions, a limit distribution for the number of
upcrossings of u by
can be obtained.
~(t)
We begin with one of the simplest cases.
Suppose
[~i (i=0,±1,±2, ••• )} is a strictly stationary, m-dependent stochastic
process.
~1
and
If F(.) and F(·,.) denote the distribution functions of
(~1'~2)
respectively, let
(3.2. 1)
and suppose that sup [x: F(x)
< 1} =
+00 •
p(u) -> 0 as u ->
Let us choose a level u
(3.2.3)
n
->
nP(u ) ->
n
00
e as
00
•
in such a manner that for some
n ->
00
e>
0
•
From this point on we assume, without loss of generality,. that equality
holds in (3.2.3).
Theorem 3.2.1
We then have the following result.
Let [~i(i=O,±l,±2,••• )} be an m-dependent strictly
stationary stochastic process.
Suppose that a level u
n
(n=1,2, ••• )
tending to infinity, is chosen so that (3.2.3) is satisfied.
Suppose
that, in the notation of Definition 3.2.2,
u
(3.2.4)
I
I
I
I
I
I
el
Now,
(3.2.2)
I
I
b
n
=
u
n
max
p(A.nIA. ) = 0(1), as n ->
li-jl~ m+1
1
J
00
•
I
I
I
I
I
I
I
_I
I
I
I
Ie
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
9S
Then for every real t, 0 < t < 00, as n -> 00,
P{exact1y k upcrossings by
(3.2.5)
~(t)
of u
n
in the interval (o,nt)}
This is a relatively simple application of the stationary
For each n let {A~ (i=1,2, ••• )}be the
version of Theorem 1.1.4.
sequence of events where
process
{~i
u
A~ ~ Ai
n
~
(i=0,1,2, ••• ).
Since the
(i=0,±1,±2, ••• )} is m-dependent and stationary, the sequence
n
of events {Ai (i=O,l, ••• )} is (m+1)-dependent and stationary.
n
more, P(A ) =
i
via (3.2.4).
Further-
max
p(A~IA~) = 0(1) as n -> 00
li-jl~m+1
~ J
An appeal to the result of Theorem 1.1.4 with s=o shows
~/n
(i=1,2, ••• ), and
that P{exact1y k amongst A~ (i=O,l, ••• [tn])}
~
-> e-t~(t~)k/k!. The
remark preceding Definition 3.2.2 then serves to establish the theorem.
Now, the result obtained in Corollary 1.3.1.2 can be used to obtain
a result more refined than the preceding theorem.
We obtain the fo1-
~owing result which is to be compared with the Cram~r and Leadbetter
result found in Theorem 3.1.1.
Theorem 3.2.2
Let {x. (i=0,+1,+2, ••• )} be an m-dependent strictly
-
-~
-
stationary stochastic process, and suppose that a level u
n
(n=1,2, ••• )
tending to infinity is chosen so that (3.2.3) is satisfied with
~=1.
Let (a.,b.) (i=1,2, ••• ,r) be r disjoint, bounded intervals of (0,00) and
~
~
let m. (i=1,2, ••• ,r) be r non-negative integers, (both intervals and
~
integers independent of n).
Suppose that condition (3.2.4) of Theorem
Then, as n -> 00
3.2.1 is satisfied.
r
P(
n {exactly
i=l
mi upcrossings by
~
->
~(t)
of u
n
r
-T. m.
TI (e ~T.~/mi!) ,
i=l
~
in interval (a.n,b.n)})
~
~
96
where T. = b.-a
1.
Proof.
1.
i
(i=1,2, ••• ,r).
The proof of Theorem 3.2.2 is similar to the proof of Theorem
3.2.1 and is a direct consequence of Corollary 1.3.1.2.
Naturally a question immediately arises concerning the nature of,
in particular, assumption (3.2.4) in the preceding theorem.
tion thatb
u
n
n
= 0(1) as n ->
The assump-
is equivalent to the condition that, for
00
defined by (3.2.3),
max
0< li- j IStn+1
(3.2.6)
P (xi
1
~ -
< un , -1.
x. > un I-Jx. 1 < un , -J.
x. > un ) -> ·0
as n
->
00
In words, the condition requires that, given an upcrossing of a high
level u
n
has occurred, the probability of another upcrossing occurring
within (m+1) time units of the given upcrossing tends to zero as the
level tends to infinity.
Thus (3.2.4) can be interpreted as a condition
requiring that the occurrence of a rare event (an upcrossing of a high
level) at some time implies that no other rare event occurs at a neighboring time point (i.e., within (tn+1) time units).
for some comments relevant to this.
(See Newell [14]
He gives an example of a process
theorem valid for such processes.)
It is interesting to look more closely at the conditions of
(~i
(i=0,±1,±2, ••• )} is an
m-dependent normal stationary process with mean zero, variance one.
that case it can be verified easily that
(3.2.7)
P(u)
,... __1=---
,[2rr. u
2
e -u /2
.
as u ->
00
,
_I
I
I
I
I
I
I
_I
I
I
I·
not satisfying (3.2.6), and a general result analogous to Watson's
Theorem 3.2.1 in the particular case Where
I
I
In
I
I
I
I
_I
I
I
I
I_
97
and so according to (3.2.3), nand u
the equation
1
n(-)
(3.2.8)
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
are (asymptotically) related by
n
2
n
-u /2
1
e
,J2;r un
.
~
Furthermore it can be shown that-as u
n
->
00
b -> O.
'n
Thus the theorem
is valid for stationary m-dependent normal processes.
By observing
that (3.2.8) implies that
(3.2.9)
u
k
~ (2 log n)2
n
+
k
[log~ - ~ log 4rr - ~ log log n]/(2 log n)2
=
,
.en~
as n
->
00
(see, for example, Cramer and Leadbetter [4 ]), it follows that for
m-dependent normal stationary processes,
P(k upcrossings by
(3.2.10)
~(t) of .e~ in (o,tn)) -> e-~t(~t)k/k!
n
as n
->
00
•
If, in particular, we consider the above result for the special case
k=o, t=l and
-z
~=e
(-00 < z < 00), we have
P(no upcrossings by ~(t) of .e~ in (O,n))
(3.2.11)
n
Now if E
n
->
e
-e
-z
as n
->
denotes the event "no upcrossings by ~(t) of .e~ in (O,n)" we
n
may write
P(E ) = P(E and x < .e~) + P(E . and x > .e~)
n
n
-0
n
n
-0
n
(3.2.12)
= P(
maxx(t) < .e~) + P(E and x > .e~) •
O<t<n-n
n
-0
n
- -
The last term tends to zero as n -> 00 (.e~ -> 00) and so, via (3.2.11),
n
(3.2.13)
pi
max
o<t<n
~(t) ~
k
(2 log n)2 + [z - ~ log 4rr - ~ log log n]/
z
(2 log n)~)-> e-e-
00.
98
as n ->
00.
This is, of course, a well-known result, and can be derived
from results found in Watson [19], and is explicitly quoted in Berman
I
I
_I
[1].
Of course, it would be nice to have a characterization (in terms
of, say, distribution functions) of stationary m-dependent processes
that satisfy the conditions of Theorem 3.2.1.
difficult problem.
This appears to be a
Newell [14] remarks that for many processes arising
in application the conditions
ar~
satisfied; some of his results may
prove useful in the problem of characterization.
We shall now state some other results valid for m-dependent stationary processes satisfying the conditions of Theorem 3.2.1.
The first
result concerns waiting times between consecutive upcrossings.
be determined by (3.2.3) with
~
n
is
= 1, and let Fk(t) denote the probabiln
and the kth
< nt. We may then write
1 - F~(t) = P(U(o,nt)
(3.2.14)
n
n
ity that the waiting time between an upcrossing of u
successive upcrossing of u
Let u
u
< k-lIA_~)
where U(o,nt) denotes the number of upcrossings by
~(t)
of the level u
n
u
in (o,nt) and A_~ is as in Definition 3.2.2.
Provided the process
(x.
(i=O,+1,+2,
••• )} satisfies the conditions of Theorem 3.2.1, the
-I.
-events "U(o,nt) So k-l" and"U(m+l,nt) So k-l" are asymptotically equivam u
n
lent, since their difference is contained in the event U A. , and
i=o 1
mum
u
u
.
m
m
P( E A. ) < E P(A. ) = (m+l) P(A m) ~ (m+l)/n ->
as n -> 00 via
•
1
1
0
1=0
i=o
u
un
.
n
(3.2.3). Thus P(U(o,nt)So k-l/A_ l ) - P(u(m+l,nt) So k-l/A_ l ) ->
as
°
n
->
00.
°
I
I
I
I
I
I
.-I
I
I
I
I
I
I
_I
I
I
I
I_
99
u
But from m-dependence, the events "U(m+1,nt) :::; k-1" and A_
pendent.
Thus via (3.2.14) we have, as n
->
m
1
are inde-
00
u
I
I
I
I
I
I
Ie
I
I
I
I
I
I
I
Ie
I
1-F~(t) - P(U(m+1,nt) ~ k-1} = P (U(o,nt) :::; k-1IA_~}
(3.2.15)
- . P(U(m+1,nt) ~ k-1}
~
Via stationarity, p( u(m+1,nt)
->
°.
k-1} = P(U(0,nt-(m+1))
~
k-1}.
Now,
an application of the stationary, univariate of Corollary 1.3.1.1 with
A1 = t serves to prove that as n ->
00
P(U(0,nt-(m+1)) ~ k-1} - P(u(o,nt) :::; k-1}
(3.2.16)
->
°
while directly from Theorem 3.2.1 we have
(3.2.17)
P(U(o,nt)
< k-1} ->
k-1
L:
i=o
e
-t t:\.
"':"f.
1.
Thus, combining the results of the above paragraph we have the fo110wi~g
theorem:
Theorem 3.2.3
Let
(~i
(i=O,±1,±2, ••• )} be an m-dependent stationary
stochastic process satisfying the conditions of Theorem 3.2.1 [with
~ = 1 in (3.2.3)].
Let F~(t) denote the probability that the waiting
time between an upcrossing of u
u
n
is
<
-
nt.
Then as n
->
n
and the kth successive upcrossing of
-
00
(3.2.18)
k-1
+ (k-1)!] ,
t
that is, the (normed) waiting time has the asymptotic distribution of
the sum of k independent exponential (unit mean) random variables.
The result is not surprising.
It is another example lending justi-
fication and clarification to the assertion that the stream of upcrossings by x(t) of the (appropriately chosen) level un is asymptotically
100
1"
(or converges weakly to) a Poisson process (as n -> 00).
(See Definition
I
I
_I
1.3.1.)
As a final example of the asymptotic behavior of the stream of
upcrossings and downcrossings. by
~(t)
of a level u
n
(tending to 00) in
the m-dependent case, we consider upward excursions that were mentioned
in section 3.1 in connection with the continuous-time normal results of
Cramer and Leadbetter [4].
We first notice that for p 2: m+l, the
probability that an excursion above a level u
units tends to zero as u
n
-> 00.
n
lasts longer than p time
To prove this, observe that the prob-
ability of the above event, say E , is given by
p
(3.2.19)
Now from m-dependence,
p
2:
m+1.
(3.2.20)
~p+l-l
is independent of
~i-2
and
~-l
for
_I
Therefore,
n
P(E p ) = P(-px+i-l>U n ) P(x+.
2>Un , ••• ,x.>u
Ix. 2<un ,xi
l>un )
~ ~-~
n -1- < pIx
+. t>Un ) -> 0 as un -> 00 •
~p 1-
-
Therefore, in the limit, excursions above u
~
m+l.
n
tend to be of duration
Thus, in contrast with the result stated in Section 3.1, com-
pared with the 'size' of the time interval involved in the statement of
the limit distribution, viz (o,tn), the excursion lengths are
This, of course, is not unexpected.
is possible.
Level u
n
(3.2.21)
negligibl~
However, a more informative result
n
Let E ; q=1,2, ••• ,m+l denote the event "an excursion above
q
has duration
I
I
I
I
I
I
~
,
q time units."
Then
n
P(E ) =pIX.<U Ix. 2<U ,xi l>u ), and
~~ n -~n - n
l
n
q=2, ••• ,m+l.
P(Eq ) = Plx'+q
2<un ,xi
~~
- l<un , ••• ,x.>u
-1
n Ix.
-1- - l>Un ),
I
I
I
I
I
I
I
_I
I
I
I
101
n
q
Then, assuming P(E ) -> p , q=1,2, ••• ,m+l as n -> 00, the asymptotic
probability ,
~
j' that an excursion above un persists for a time interval
of length d, j < d
I
I
I
I
I
I
Ie
3
q
~
j+l; j=O,l, ••• ,m is given by
(3.2.22)
We have considered only m-dependent stationary processes so far.
However, the assumption of stationarity is not crucial.
The general
results in section 1.1 showed that certain limit results valid for
m-dependent stationary sequences of events remained valid for nonstationary sequences of events, provided the non-stationarity was, so
to speak, uniformly small.
Also, the results of section 1.2 (Theorem
1.2.2, for example) yielded analogous limit results in a setting that
did not require a uniform non-stationarity.
As one might suspect, these results may be applied in the problem
of upcrossings to yield results analogous to, say, Theorem 3.2.1, when
the stochastic process (~i; i=0,±1.±2, ••• } is m-dependent though not
3
necessarily stationary.
As an example we state the following result.
1
Theorem 3.2.4
(~i
(i=0,±1,±2, ••• )} be an m-dependent stochastic
Let
process, unbounded above.
u
n
Suppose that for some sequence of levels
-> 00, the events (A~~ (i=1,2, ••• )} defined by A~~ = (x.
l<u ,x.>u }
~n -~ n
with p~ = P(A~); i=1,2, ••• satisfy the following conditions as n -> 00:
J
J
J
]e
I
~
~
n
(3.2.23)
(3.2.24)
a
n
=
n
L: Pi = 0(1) ,
i=l
max
l<i<n
n
p~
...
= 0(1), and
102
(3.2.25)
b
Then as n
->
=
max
p(A~IA~) = 0(1) •
0< li-jl ~m+1
1
J
i,j=1,2" ••• ,m
_I
I
,
-
P{exact1y k upcrossings by x(t) of level u in (o,n)},
n
-a
n k/k ••
f
... e a
n
(3.2.26)
Proof.
00
n
,
This is almost an immediate' consequence of Theorem 1.2.2.
I
From
that theorem we can immediately conclude that (3.2.26) is true when the
expression on th~ left is replaced by P{exact1y k amongst A~(i=1,2, •• ,n)
1
occur}.
However the two expressions differ by an amount which is at
most equal to'P(x
-0
> un ) -> 0 as n -> ' 00
•
Let us assume that under some ideal conditions a certain
m-dependent stochastic process
{~i
(i=0,±1,±2, ••• )} is stationary, and
does in fact satisfy the conditions of Theorem 3.2.1.
However, let uS
suppose that we have reason to believe that we are dealing with a
process {Y
i
(i=0,±1,±2, ••• )}
such a manner that Y =
i
i, that is, Ig(i)
I~
tha~
~i+g(i),
M<
00;
is related to
{~i;
For example, the process,
otherwise stationary, might have a mean function Il(i)
i=0,~1,~2, •••
i=0,±1,±2, ••• } in
where g(i) is some bounded function of
i=0,±1,±2, ••••
=f~i;
that varies in such a manner so as to remain bounded.
It
is then not difficult to see that the conditions of Theorem 3.2.4 would
be satisfied, and we would expect the asymptotic Poisson character of
the frequency of upcrossings to remain valid.
I
I
I
I
As an application of this theorem, consider the following situation.
I
I
It is also to be expected
that a similar result would hold in the continuous case considered by
Cram~r and Leadbetter [4] if the normal process {Ht)-m(t),-oo<,t< oo}
_I
I
I
I
I
I
I
I
_I
I
I
I
I_
103
where stationary, where m(t) is some bounded function of t.
restricted to the discrete case, however.
We are
Needless to say, there are
many forms of non-stationarity under which the conditions of Theorem
I
I
I
I
I
I
3.2.4 might be satisfied, but we shall not consider this point any
further.
We shall now turn attention to applications of the results of
section 1.4 to the problem of upcrossings by a discrete process of a
high level.
As was pointed out in section 3.1, the results of Cram:r
and Leadbetter [4 ] indicate that a sufficient condition for the
asymptotic Poisson character of the stream of upcrossings of a high
level for a discrete time stationary normal process is that r
a ) as Inl
->
O(/n/-
00
for some
a> O. As can be proved from the results
of Berman [1], other sufficient conditions are that either r
00
Ie
I
I
I
I
I
I
I
Ie
I
f:(x.x)
(, -ern
n
n
log n -> 0
2
Z r < 00. These sufficient conditions have various
n=l n
interpretations (in terms of, for example, the Spectral distribution
as n ->
00,
or
function of the process; see Berman [1], p. 511), and give some intuitive notion of the 'rate' at which dependence in the process 'dies off.'
In this respect the sufficient conditions required for validity of the
results of this section may be less appealing.
This is to be expected
though, since in the non-normal case dependence generally cannot be
summarized by such a simple quantity as a correlation.
We begin with
the following result.
Theorem 3.2.5
Let (x. (i=0,+1,+2, ••• )} be a stochastic process,
-1.
--
unbounded above, and Su , (Y.
(i=0,+1,+2,
••• )} the sets and associated
-1.
-stochastic process defined in Definition 3.2.1.
sequence of levels u
n
->
00
(as n ->
00)
Suppose that for some
the conditions of Theorem 1.1.4
104
are satisfied with events (A~ (i=1,2, ••• )) defined by A~
Then, as n ->
(3.2.27)
Proof.
(Y.
-1
€
8
U
).
n
00,
P(exact1y k upcrossings by x(t) of u
n
in (o,n)} ->e -~ ~ k /k!.
Analogous to Theorem 3.2.4, the proof is almost an immediate
consequence of Theorem 1.4.1.
conclude that as n ->
00
From that theorem, we can immediately
,
.
P(exact1y k amongst A.n (i=1,2,
••• ,n) occur} -> e -~ ~ k /k! •
(3.2.28)
1
Now the probabilities on the left-hand sides of (3.2.27) and (3.2.28)
differ by an amount that is at most
P~
o
> u ) which tends to zero as
n
Of course, Theorem 3.2.5 is somewhat cumbersome and obtuse.
" clear when the conditions are satisfied.
not at all
It is
80, in order to
give a more appealing version, we specialize to the case of stationary
sequences.
At least in this case some intuitive feeling for the con-
ditions can be given.
Theorem 3.2.6
[~i
(A stationary version of Theorem 3.2.7)
(i=0,±1,±2, ••• )} be a strongly mixing strictly stationary stochastic
process (with mixing function g), and unbounded above.
u
n
Let
Let
(n=1,2, ••• ) be a sequence of levels tending to infinity in such a
manner that (3.2.3) is satisfied.
Further, suppose that there exist
sequences of integers (p ) and (q ) such that as m ->
m
m
r
(3.2.29)
m g(q -1) -> 0 for n > 0,
(3.2.20)
qm/pm -> 0 and Pm+1/ Pm -> 1 ,
m
00
I
I
_I
I
I
I
I
I
I
_I
I
I
I
I
I
,
I
I
_I
I
I
I
I_
I
I
I
I
I
I
Ie
I
I
I
I
I
I
'·1
Ie
I
105
and writing P=Pm' q=qm' t=m(p+q)
p-l
(3.2.21)
Let
I
(~.,b.);
~
1 ~
=
P
P i=l
(p-i) p(x.
> ut'x.
1
-~
-~ -
< u t Ix- < u t ,x- 0 > u t }=o(l).
-1
i=1,2, ••• ,r be r bounded disjoint intervals in (0,00) and
~
m.; i=1,2, ••• ,r be r non-negative integers, all independent of n.
~
Then
as n -> 00
r
P(
(3.2.32)
n
i=l
(exactly m. upcrossings of u by x(t) in (a.n,b.n)})
~
n
~
~
->
r
II (e
i=l
m.
~'t". ~ 1m. 1)
-'t"o
~
~
where't". = Hb.-a.); i=l, , ••• ,r •
~
Proof.
~
~
This theorem is an application of Theorem 1.4.5.
Let
[Yo (i=0,±1,±2, ••• ) ) be the stochastic process associated with
-~
[x. (i=0,±1,±2, ••• )} as in Definition 3.2.1, and observe that
-~
[Yo (i=O,±l,±2, ••• ) } is strongly mixing and stationary with mixing
-~
function g' defined by g'(k) = g(k-l).
by A~~ = [Yo
-~
€
Su } = [xi>
un , -~.
x. 1.< u}.
n
Define events [A~ (i=1,2, ••• )}
By virtue of the properties of
n
the process [Xi (i=0,±1,±2, ••• )} these events are stationary and
strongly mixing with mixing function g'.
For the sequences [Pn} and
[q } of integers that were postulated in the hypothesis, we have as
n
m
->
00,
(3.2.23)
r
r
m g'(q ) = m g(q -1) -> 0 for r > O.
m
m
Since (3.2.3) is satisfied, and (3.2.31) implies that the sequences of
sets [A~ (i=O,l, ••• )} satisfy the conditions of Theorem 1.4.5, the proof
~
is complete.
106
We now discuss the. conditions of the above theorem.
Condition
(3.2.29) requires that,judged in terms of the mixing function g(k), the
dependence in
{~i
difficult to interpret.
Condition (3.2.31) is a bit more
The quantity in (3.2.31) is a weighted average
of the conditional probabilities of an upcrossing of u
t
units (to the right), given an upcrossing has occurred.
within p time
Letting
a~ = P{~i > u t ' ~i-1 < UtI ~-l < u t ; ~o > u t }, we write (3.2.31) as
I
Thus, if as m ->
(3. .33)
mp. =
1 P
t
= - E (p-i)a <
P
p i=l
i 00
n-1
t
max
~2
a .•
i~p
,
max
i<i~p
1
t
= 0(-)
p
a. =
1
Also, without having to make reference to any sequence, it would be
sufficient that
m=
n
n
1
a. = 0(-) as n
max
l<i<n
1
n
->
00
•
Accordingly, we may state the following (weaker) form of Theorem 3.2.6.
Corollary 3.2.6.1
Suppose that
{~i;
i=O,±l,±2, ••• } is a stationary,
strongly mixing stochastic process, unbounded above, and with mixing
function g(k) satisfying
(3.2.35)
g(k) = o(k
-r
), r=1,2, •••
as k ->
00
•
If u ; n=1,2, ••• is a sequence of levels tending to infinity and chosen
n
so that as n ->
00
I
I
I
I
1
it would immediately follow that condition (3.2.31) would be satisfied.
(3;,2.34)
_I
(i=0,±1,±2, ••• )} decrease faster than any power of the
'distance' k between two events.
(3.2.32)
I
I
I
I
_I
I
I
I
I
I
I
I
_I
I··
i
1
I
I
I_
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
107
(3.2.36)
(3.2.37)
n
max P(x.
> un ,x.
1 < u n Ix
< un' -xl > un ) -> 0 ,
-~
-~ -0
l<i<n
then the conclusion of Theorem 3.2.6 is valid.
As one would expect, a 'waiting time' result is also available for
the situation at hand.
Via the univariate version of Theorem 1.4.6, we
have the following result.
Theorem 3.2.7
Suppose the conditions of Theorem 3.2.6 (or Corollary
3.2.6.1) are satisfied.
If F~(t) is the probability of the event "the
waiting time from an upcrossing of level u
n
by x(t) until the kth
-
successive upcrossing of un is -< nt , " then as n ->
(3.2.38)
00
,
+ ••• +
I
I
_I
APPENDIX A
REMARKS CONCERNING NON-RANDOM t AND NON-RANDOM n
The main purpose of this appendix is to establish the validty of·
Theorems 2.2.1 and 2.2.2.
As was brought out in Section 2.2, these
theorems represent the 'link' between our approach to the Limit Problem
(non-random n), and the approach taken, for example, by Ikeda [10] (nonThere are two limiting conditions of interest, (n ->00) and
random t).
[n -> 00].
We shall deal with these, while, as before, sometimes
restricting attention to the special case of k=3 (i.e. third-order light
sensations).
First, we shall show that for the model of Section 2.3 (i.e.,
Poisso~
arrivals of intensity
~
and iid lifetimes with means a
< 00), the
Poisson Limit Result for non-random n implies the (known) Poisson Limit
Result for non-random t.
Secorid, we shall deal with the model of Sec-
tion 2.5 (i.e., Poisson lifetimes with mean a and iid interarrival times
with mean
l/~).
We propose to show that the non-random n result con-
tained therein implies an appropriate non-random t Poisson Limit Result.
We begin with a restatement of the first theorem we wish to prove,
while making free use of the notation established in Section 2.2
Theorem 2.2.1
If interar:ival times (~i = Zi-Zi~l (i=1,2, ••• ») (Zo=O)
form an iid sequence of exponential random variables with mean
while lifetimes
(~i
l/~,
(i=1,2, ••• ») form a sequence of iid random variables
I
I
I
I
I
I
.-I
I
I
I
I
I
I
_I
I
I
I
I_
109
(independent of arrivals) with finite mean
lim
(A.1)
(n
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
where the symbol
->
g is
means "n
->
I
co, fl V
means "t
->
00,
, then for A
(t
co)
->
00
0
3= it k
k
P (t)
lim
>
m
m
to be read 'exists and equals.' Here, (n -> 00)
0 in such a manner that nfl
k-1
-> A,"
and (t
->
00)
fl~ 0 in such a manner that tflk-? A."
Our proof of Theorem 2.2.1 requires two lemmas.
The lemmas (as
should be noted) represent particular cases of general results involving
the relationship between renewal theory and the law of large numbers
Although this relationship would be interesting to pursue in generality,
we shall be content with the specific results in the following paragraphs.
Let us define two quantities n
o<
t
<
both n
t
u
00.
~
>
€
>
O.
We
shall~
t
k+€
= flt + (flt)2
, n
t
k+€
= flt - (flt)2 ;
it
without loss of generality, assume that
Furthermore, let E
and nit are integers.
t
= number of arrivals
in (O,t]; 0
<
Lemma A.1
If interarriva1 times (~i = Zi-~-l; i=1,2, ••• } (Zo=O) form
t
<
t
u
co.
an iid sequence of exponential random variables with mean l/fl, then
t
P(nn.KJ
(A.2)
<
n <
-t -
t
nu ) -> 1 as lit
-> co •
l'"'"
The proof is a simple application of Chebychev's inequality.
Proof.
Via that probability inequality we have
(A.3)
as flt
->
00 •
t
We now let n = flt and observe that, as flt -> co, nit = n + o(n) and
110
n
t
u
= n + o(n).
Assume, without loss of generality, that n is a non-
negative integer.
Note that (n
->
00) if and only if (t
-> 00) according
ook
k
k
If P (t) = E p.(t) and Q (n)
m
j=m J
m
to the above definition.
ook
E q.(ri), we
j=m J
have the following result.
Lemma A.2
Under the assumptions of Theorem 2.2.1,
(A.4)
lim
(n
Since
Proof.
~
->
k
~(n)
3= Lk
m
00)
>
lim
(t
->
00)
is the waiting time until the arrival of the nth
particle, letting n£
t
= n£,
pk(t) < P(m or more kth order visual responses in (o,t], and
m
-
(A.S)
~
:s P(m or more
£
:s
t)
+
P~
£
>
t)
kth order visual responses in first n£
arrivals, and
<
~
But
flt
P~
~
£
:s t)
= P(.!!t 2: n£)
->
1 as flt
->
> t) •
t) + P(Zu
£
£
00, and P(~
£
>
->
t)
0 as
00, so that from (A.S)
(A.6)
=n +
We now recall that n£
o(n) as (n -> 00).
Furthermore, in Chapter
2 it was shown that visual responses (light sensations) correspond to
events which satisfy the conditions of Theorem 1.1.2.
We may therefore
appeal to the results of Corollary 1.1.3.1 and conclude that
(A.7)
lim sup Qk(nn) = lim sup Qk(n) =
(n
->
00) m
.KI
Hence, via (A.6) and (A.7),
m
lim
(n
->
00)
~(n)
= L
k
m
I
I
_I
I
I
I
I
I
I
.-I
I
I
I
I
I
I'
el
Ie
I
I
I_
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
111
lim sup pk(t)
(A.8)
->
(t
On
00)
pk(t)
m
lim
->
(n
the other hand, letting n
(A.9)
<
m
n
u
Qk(n) =
00) m
t
u
> P(m or more kth' order visual responses in (o,t], and
~ 2: t 2: i;)
-
u
.e
> P(m or more kth order visual responses in first n.e
arrivals and ~ 2: t 2: ~ ).
.e
u
) = P(n.e ~ ~t ~ nu ) -> 1 and n.e = n + o(n) as
.e
-> 00 so that another appeal to Corollary 1.1.3.1 yields
But, P(~
2: t 2:
~
u
~t
lim inf pk(t)
(A. 10)
(t
->
00) m
,
>
lim
(n
->
00)
Combining the results of (A.lO) and (A.8) we have proved (A.4).
Proof of Theorem 2.2.1
Theorem 2.2.1 is now almost immediate.
All we
k
k
k
k
k
k
need note is that Pm(t) = Pm(t) - Pm+l(t) and qm(n) = Qm(n) - Qm+l(n).
With
.e~
=
L~
-
L~l'
Theorem 2.2.1 then follows from Lemma A.2.
We remark that the limiting condition (n -> 00) .or (t -> 00) is
actually used in two crucial steps in the above proof.
need more than
~t
->
00.
We do indeed
(The steps referred to are (A.7) and (A.lO)).
Of course, Theorem 2.2.2 is the analogue of Theorem 2.2.1 for the
case where interarrival times (~i = Xi-Xi-I; i=1,2, ••• } (Xo=O) form an
iid sequence of random variables with mean
(~.;
-1
a.
l/~,
and lifetimes
i=1,2, ••• } are iid exponential (independent of arrivals) with mean
We shall state and prove a theorem, Theorem 2.2.2', slightly more
general than Theorem 2.2.2.
112
Suppose finterarriva1 times' (~i = Zi-~-l; i=1,2, ••• }
Theorem 2.2.2'
(Z =0) form an iid sequence of positive random variables with mean
o
(~>
(~;
0), and lifetimes
of arrivals) with mean
-!t
a. Let H~ (t) = ~-t
t: n , where n is the number of
-t
If there is a function g (t) such that as t
k-1
~
0 with H (t) II ~.i.) -> a > 0 we have
~
j=l
CXi..t
g (t)
->
(A. 13)
g (t)
= o(H
~
P(nn<nt<n)
(A. 14)
.{/- - n
-> 00
00
~
I
(t»
-> 1, where nn.{/ =H~ (t)-g ~ (t) and nu = H~ (t)+g ~ (t),
then,
3
k
k
q (n) = P,
lim
m
m
[n -> 00]
>
:3
lim
[t
->
00]
pk(t) = p,k
m
m
\1, 0
where the symbol [n -> 00] is integrated to mean "n -> 00, ~ Y so that
k-1..6. .
n II G(..l.-) -> a > 0" and [t -> 00] is interpreted to mean fIt -> 00,
j=l
CXi..t
k-1.
~ -!t 0 so that H (t) II 'G'(-L) -> 0." Of course, -as above, 'G'{s) denotes
~
j=l
otJ.
the Lap1ace-Stie1tjes transform of the distribution function of ~~i.
Proof.
Proceeding as in the proof of Theorem 2.2.1 we conclude that by
defining n =
n
u
H~(t)
and assuming without
loss of generality that n, np,'
are integers,
lim
[t -> 00 ]
lim
(A. 15)
[n
-> 00]
We remark that since H (t) _
~t
as t
-> 00, one may replace the
former by the latter in the definition of [t
->
I
I
.1
I
I
I
I
I
11
from which the conclusion of the theorem is immediate.
~
I
I
I
(A. 12)
~
_I
i=l, , ••• } are iid exponential (independent
'arrivals' in (o,t].
and ~
1/~
I
I
~
in the theorem.
Of
I:
_I
I
I
I
I_
113
course, a question arises concerning the conditions under which the
hypotheses of Theorem 2.2.2'are fulfilled.
For in this case, we shall let g (t) = (V (t)}~ + E
J.l
J.l
0), where V (t) = var n • it is known (see Smith [15]) that
J.l
-t'
cient that
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
(E
>
We remark that it is suffi-
Ex~ < 00.
-1
V (t) ~ KJ.1t (K > 0) as t -> 00. Furthermore H (t) ~ J.lt -> 00, while
J.l
J.l
Int-tgt I
obviously g (t) = o(H (t». Finally P(nn;u <
n
<
n
)
= P(
)
-t u
(J"
J.l
J.l
!!t
~ (KJJ.t)E) -> 1 as [t -> 00]. Therefore, the condition of Theorem 2.2.2
'
are valid, and so its conclusion holds.
Theorem 2.2.2.
This is, of course, precisely
I
I
_I
APPENDIX B
SOME RESULTS FOR POSITIVE RANDOM VARIABLES
The results of this section apply to remarks found in Section 2.4.
We state and prove the following (known) result for positive random
variables.
Theorem B.l
Let F be the distribution function of a positive random
For real numbers r, s 2: 1 we have,
variable x.
(i)
f~r
(11)
x
r
<
00
[l-F(x)]
->
0 as x
For A > o,E~r 2:
Proof.
->
implies xr[l - F(x)]
->
00
0 as x
->
implies E~s
00
<
00
for s
< r.
A
J
Taking limits
o
A ->
(as
00)
inequali~y,
of both sides of the preceding
Formally, for 1
< s < r we may write for any
s
x
- -[l-F(x)] +
s
x
s-l
[l-F(X)]dX.
X
J
o
implies that for x
0
<x<
-
r
we obtain (i).
1 x s
00, J X dF(X) =
s
o
Now x [l-F(x)] -> 0 as x ->
> Xo x s-l [l-F(x)]
~
x
-(l+r-s)
00
, and hence the second
term on the right of the preceding equality represents a convergent
integral as x ->
00.
By hypothesis the first term on the right of the
preceding equality -> 0 as x ->
00.
x ~
thus proving (ii).
00
it is seen thatf~S
<
00,
Taking limits of both sides as
Corollary B.l
r
x [l-F(x)]
--> 0 as x ->
00
I
I
I
I
I
I
.-I
I
I
I
I
I
I
_I
I
I
I
I_
I
I
I
I
I
I
I.
I
I
I·
I
I
I
I
Ie
I
115
00
J
and
x
r-l
[l-F(x)]dx
<
imply
00
E~s <
00
for s
< r.
o
Now if, for a positive random variable
F, r
o
< sup(r:t~r <
as x ->
~
with distribution function
oo} , the above result insures that i>[l-F(x)]-> 0
r
for any 0
00
XT)[l-F(x) ]
->
00
< ro.
as x ->
One might be tempted to conclude that
00
<
for 0
r •
o
This is not the case in general
unless additional assumptions are made on the behavior of [l-F(x)] as
x ->
oo~
The following theorem, a modification of a result due to
Smith [16], demonstrates that without additional sssumptions on [l-F(x)],
little can be said about the asymptotic behavior of xT)[l-F(x)] (1) > r )
o
as x ->
00.
The moment assumptions made in the theorem are seen to be
relevant to the situation encountered in the discussion of asymototic
behavior of scores in Chapter II.
Theorem B.2
Let w be any increasing function and
3- =
{F: €x
<
00,
f~2 = oo} (that is, the class of all distribution function F of random
variables x with finite mean, infinite variance).
Then there exists
an F in j- such that
w
lim inf
x-> 00
w(x) [l-F (x)] = 0 •
w
(Note that if w(x) = x 0 (0 > 2), the theorem insures that for at least
O
one distribution function F, x [l-F(x)] does not tend to infinity.)
Proof.
The proof follows Smith [16].
Suppose w(x) > 1 for x > 1.
Let
{€ } be a monotone decreasing sequence of positive real numbers such
n
El
< 1 and En ~ 0 as n ->
00
·W(~efine ~1'~2'···' and Sn = ~1+~2+"·+~n
as follows: ~l=l and ~ 1 =
n_
n=2,3, •••• Thus
n+
E
(S n
) w (S n-l )
t )
( 50
w 1
1
n
w
~2 = - - > € > 1 > ~l' and for n > 1, ~ +1 =
>
=
El
1
n
En
En _ l
~
n
•
116
Now define
n
1 - Fw (x)
2
_I
w(S)
n
I
I
I
I
I
I
=
1 if x
I
Then 1-F (x) 'V 0 as x ->
w
00
< '1
•
€n
since
>
-2--==~-
n w(S )
n
Note that
Sn+1
00
f. F-x=l+
W
E
[l-F (x)]dx
f
w
n=l S
n
00
= 1
+
- 12
E
n=l
<
co
n
.-I
Furthermore,
Sn+1
co
E
n=l S
f
1
00
€
E
x[l-F(x)]dx = -
n
2
2 n=l n w(S )
n
n
co
€
00
> E
n
n=l
n
W
€
3".
.Sn
E
2
n=l n w(s )
Thus F
n
2
co
>
-
E
n=l
1
n
-=co
In addition
€
W(S ) [l-F (S )] = w(S )
n
w n
n
n
2
n
->
co
->
0 as n
->
I
I
I
00
w(S )
n
W
The results of Theorem B.2 show that if x,is a positive random
variable with distribution function F and
behavior of x 2i5 [1_F (x)] as x ->
w
co
f,! <
I
I
I
so, lim inf w(x) [l-F (x)] = 0, as was to be shown.
x
I
I
co,
E~2
=
co,
the
is not determined in general.
Thus
el
I
I
I
I_
117
to deal with the asymptotic behavior of expressions such as 2.12, it
seems reasonable (by virtue of necessity) to make some additional
assumptions concerning the asymptotic behavior of 1-F(x) as x ->
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
00.
The somewhat natural occurrence of a relation between the asymptotic
behavior of 1-F(x), (2.12) and functions of slow growth, both from a
probabilistic and mathematical standpoint, make the slow growth assumptions on 1-F(x) seem reasonable (see Smith [17], Feller [5]).
I
I
_I
APPENDIX C
MULTIVARIATE ANALOGUE OF THE BONFERRONI INEQUALITIES
Suppose we are given two finite classes of events, [A1' ••• '~)'
[B , ••• ,B }.
N
1
tities,
We are interested in upper and lower bounds on the
p[m,n] = Pr (exactly m A's and exactly n B's occur)
and
p(m,n ) = Pr (at least m A's and at least n B's occur);
o :::; m :::; M, 0:::; n :::; N.
S
m,n
It
Define
!:
... << i.m <<MN
1 :::; i <
1
1 :::; j1 <
In
P(A
i1
••• A. B.... B. ) •
1m Ji
In
-
is known (see Frtchet [6]) that
(C.1)
P
M
= L:
[m,n]
i=m
N
!:
j=n
(_l)i+j-(m+n) (i) (j) S.
m n
1,j
M+N
=
!:
L:
t=m+n
i + j = t
m<i<M
n:::;j:::;N
(-1) t- (m+n) (i) (j) S.
1,j
m n
from which it follows (by solving the linear system (C.1) ) that
M
N
L: (i) (j) p[. .]
(C.2)
S
= !:
m,n
1,J
i=m j=n m n
quan-
I
I
I
I
I
I
.-I
I
I
I
I
I
I
_I
I'
I
I
119
M+N
I_
( i) (j) P
.E
.E
t=m+n
+ j =t
m<i<M
i
m n
[i,j]"
n ~ j ~ N
I
I
I
I
In the following theorem we establish a sequence of upper and lower
bounds on p[m,n] based on the expansion in (C.1).
analogous to the Bonferroni inequalities (see Feller [5], p. 100) and
in fact reduce to them in case M = 0 or N = O.
]
-1
Theorem C.1
~
<.E
[m,n] - t=m+n
m+n+2£+1
p[m,n]
Proof.
(i)
>
.E
t=m+n
s.1,]. .
.E
i+j=t
We shall make use of the following combinatorial identities:
k
.E
(_l)',»-t (k-m) = (k-m-1)
'\) -m
t-m-1 '
(Feller [5], p. 61),
and
It will suffice to show that for £
~
m+n,
M+N
(C.3)
]
.E
.E
t=£
i+j=t
Using (C.2) we have the left-hand side of (C.3) equal to
J
N.
and
J
1
Je
~
s.1,]. ,
.E
i+j=t
(ii)
1
M, m ~ j
For £ = 0,1,2, ••• we have
m+n+2£
P
v=t
j
In Theorems C.1 and C.2,
the indices i and j are restricted to the range m ~ i
~
3
J.
These bounds are
M+N
(C.4)
.E
t=£
.E
i+j=t
120
M·
N
=.E
(~) (~) p
.E
1
i=m j=min(£-i,n)
J
[y,z]
)
M
.E
=
i=m
The range of summations for j has been extended to £-i to N since the
<
summand is zero if j
n.
Rearranging summation orders for z and j and
using (H) yields the following expression for (C.4):
M
M
.E
.E
~ [~
i=m y=i z=£-i
M
M
(_l)z-(£-i)
j=£-i
N
•
i
(z:n-1)( )(~)(z) p[
] by (i) ,
i=m y=i z=£-i £-1-n-l m 1 n
y,z
= .E
> o.
.E
.E
This completes the proof.
We have analogous results for p( . )_
m,n
(C.5)
P
(m,n)
M
N
= .E
.E
i=m j=n
First,
( _l)i+j-(m+n) (i-I) (j-l) s
m-l n-l . i,j
This is established by noting
M
P
(m,n)
=
N
.E
.E
y=m
z=n
M
N
p[ y,z ]' and using (C.l)
M
N
=.E.E.E
.E
y=m z=n i=y j=z
Interchanging the order of summation of both y and i and z and j, and
using identity (i), we obtain (C.5).
By solving the system of linear equations (C.5) we obtain,
I
I
_I
I
I
I
I
I
I
.-I
I
I
I
I
I
I
_I
I
I
I
Ie
I
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
II
121
(C.6)
S
m,n
=
M
N
L;
L;
( i-l) (j-l) P
m-l n-l
(i,j)·
i=m j=n
Again, a sequence of upper and lower bounds for p(
expansion (C.5) are available.
m,n
) based on the
They reduce to the well-known inequal-
ities (see Feller [5], p. 100) in case either M = 0 or N = O.
Theorem C.2
For £ = 0,1,2, ••• we have
P
(m,n)
<
m+n+2£
L;
L;
t=m+n
i+j=t
( _l)t-(m+n) (i-l) (j-l) s
m-l n-l
i, j
and
m+n+2£+1
P
Proof.
>
I:
(m,n)
t=m+n
s.1,]. .
L;
i+j=t
As in Theorem C.l, it suffices to show that for £
M+N
(C.7)
L;
1::
t=£
i+j=t
S
( _l)t-£ (i-l)(j-l)
m-l n-l
i,j
>0
~
m+n ,
•
Proceeding as before, and using (C.6), the left hand side of (C.7) is
equal to,
M+N
(C.8)
=
L;
L;
t=£
i+j=t
M
N
L;
L;
i=m j=minU-i,n)
(-1) i+j-£ (i-l) (j-l) (
m-l n-l
M
N
L;
L;
e-1) (~-l)
y=i z=j 1-1
J-l
P(y,z))
=
M
N
M
N
L;
L;
L;
L;
i=m j=£-i y=i z=j
(_l)i+j-£(i-l) (j-l) (~-l) (~-l) P
(y, z) •
m-l n-l 1-1 J-l
122
Interchanging the order of summation for z and j and using identities
(i) and (ii) yields the following expression for (C.B)
This completes the proof.
It is clear that Theorems C.1 and C.2 can be generalized to any
number of dimensions, that is, if (A
11
,···,A 1N }, (A21 , ••• ,A
1
••• , (~l, ••• ,Al_' } are k finite classes of events and p(
L\.l~ k
. 1<.
p[
]and 8
m1'···'~
any £
~
2N
},
2
m ' • •• , ~
)'
1
m1'···'~
are defined in the obvious way, then for
.
0
t-(Lm.)
(-1)
J
k i.
II ( J) 8.
.
. 1 m.
1 1 ,···,1 k
J=
J
···
Lm.+2£+1
P
P
and
>
[m1'···'~] -
<
(m1'''''~) -
t-(Lm.)
J
1:
t=Lm
(-1)
j
Lm.+2£
J~
t=~.
J
J
( - 1)
t-(Lm j )
k
II
i.
( J)
j=l
J
m.
k i.-1
II (J )8
j-1 mj - 1 1· 1 , •• , i k '
I
I
_I
I
I
I
I
I
I
.-I
I
I
I
I
I
I
el
Ii
I
I
II
I
I
I
I
I
I.
I
I
I
I
I
I
I
Ie
I
123
&.+2 +1
J
P
(m1'···'~)
>
L:
L:i =t
L:
t=&.
(-1)
t-(&.)
J
j
J
k
i. 1
II ( J- ) S .
• •
m _
1. ,···,1.
j-1
j 1
1
k
1:::;i 1::;N1
m
As an application of Theorem C.1 we consider the following problem.
n
n
n
n
Let {A , ••• ,A }, {B , ••• ,Bn } be two classes of n events.
1
n
1
for these events
n
p[a,~]
(C.9)
S~,~ -> ~la ~~/a!~! as n ->
as n -> oo?
pn
a,~
00.
Suppose that
What can be said about
Via Theorem C.1 we have, for any t > 0,
<
CX+~+2t
L:
t=a+A
I-'
L:
i+j=t
06i:::;n
~j:::;n
(C.10)
L:
n
. •
1.,]
S.
i+j=t
C1<i<n
~j:5n
From the hypothesis, as n ->
tend, respectively, to
00
the right hand sides of (C.9) and (C.10)
124
E
(C.11)
i+j=t
~i:::;n
t:?6J:::;n
I'
E
i+j r
O<i<n
D:::;j:::;n
and
E
(C.12)
i+j=t
(-1) t- (0:+13)
~ (i-a) ~ (j-I3)
1
2
(1-a) ! (j-I3) !
~i:::;n
~J:::;n
E
i+j=r
O<i<n
D:::;J:::;n
.,
i •' J.
Since £ is arbitrary and as £ -> 00 both (C.11) and ~.12) tend to
0:
13-~
(~1 e 1/0:!)(~ e 2/ 13 !), the conclusion is that P[O:,I3] also tends to
-s
this quantity as n ->
00.
I
I
-I
I,
I
I
I
1.1
I
I
I
I
I
I
I
_I
I
I
I
Ie
I
I
I
I
I
1
I.
I
I
I
I
I
1
I
I.
I
BIBLI~RAPHY
[1]
Berman, S. M. (1964) Limit theorems for the maximum term in
stationary sequences. Ann. Math. Stat. (35) 502-515.
[2]
Bouman, M. A. and H. A. van der Velden (1947) The two-quanta
explanation of the dependence of threshold values and visual
acuity on the visual angle at the time of observation. Jour.
Optic. Soc. Amer. (37) 908-919.
[3]
Cramer, H. (1966) On the intersections between the trajectories
of a normal stationary stochastic process and a high level.
Arkiv f~r Mathematik (6) 337-349.
[4]
Cramer, H. and M. R. Leadbetter (1966) Stationary and Related
Stochastic Processes; Sample Function Properties and Their
Applications. Wiley and Sons, New York.
[5]
Feller, W. (1960,1966) An Introduction to Probability Theory and
Its Applications. Vol I (2nd Ed.), Vol II. Wiley and Sons, New
York.
[ 6]
Fr6chet, M. (1940,1943) Les Probabiliti~s Associees &un System
d'Evenements Compatibles et Dependents. EXpos~s d'Analyse
Generale (Nos. 859 and 942) Hermann, Paris.
[7]
Hodges, J. L., Jr. and L. LeCam (1960) The Poisson approximation
to the Poisson Binomial distribution. Ann. Math. Stat. (31) 737740.
[8]
Hoeffding, W. and H. Robbins (1948) The central limit theorem
for dependent random variables. Duke Math. J. (15) 773-780.
[9]
Ikeda, S. (1963) Asymptotic equivalence of probability distributions with applications to some problems of asymptotic independence. Ann., Inst. Stat. Math. (15) 773-780.
[ 10]
Ikeda, S. (1965) On Bouman-Velden-Yamamoto's asymptotic evaluation formula for the.probability of visual response in a certain
experimental research in quantum-biophysics of vision. Ann. Inst.
Stat. Math. (17) 295-310.
[ 11]
Isii, K. (1963) On a limit theorem for a stochastic process
related to quantum-biophysics of vision. Ann. Inst. Stat. Math.
.
(15) 167-175.
/
,,-
126
[12 J Loynes, R. M. (1965) Extreme values in uniformly mixing stationary stochastic processes. Ann. Math. Stat. (36) 993-999.
[13J
"
Mises, R. von(1921)Uber
die Wahrschen1ichkeit se1tener
Ereignisse. Z. Angew. Math. Mech. (1) 121-124.
[14J
Newell, G. F. (1964) Asymptotic extremes for m-dependent random
variables. Ann. Math. Stat. (35) 1322-1325.
[15J
Smith, W.L. (1958) Renewal theory and its ramifications.
Roy. Stat. Soc., Sere B (20) 243-302.
[16J
Smith, W. L. (1960) Infinitesimal renewal processes. Contributions to Probability and Statistics, Stanford University Press.
[17]
Smith, W. L. (1964) Renewal Theory (unpublished notes) University
of North, Chapel Hill.
[18J
van E1teren, P. et a1. (1961) Een wachtprobleem voorkomende bij
dremp1ewaardemetingen aan het oog. Statistica Neer1andica (15)
385-401.
[ 19J
Watson, G. S. (1954) Extreme values in samples from m-dependent
stationary stochastic processes. Ann. Math. Stat. (25) 798-800.
J.
I
I
-I
I
I;
I
I
I
I
.1:
I~
I:,
I
I
I
I
I
~Il,
~
1
1