Roginsky, Allen Leonid (1989). On the Central Limit Theorems for the Renewal and Cumulative Processes.

ON THE CENTRAL LIMIT THEOREMS
FOR THE RENEWAL AND CUMULATIVE PROCESSES
by
Allen Leonid Roginsky *
A Dissertation submitted to the faculty of The University of North Carolina at Chapel Hill in partial
fulfillment of the requirements for the degree of
Doctor of Philosophy in the Department of Statistics.
Chapel Hill
1989
Approved by:
Advisor
Reader
Reader
* This research was performed while the author was on leave from
IBM Corporation.
ALLEN LEONID ROGINSKY. On the Central Limit Theorems for the Renewal
and Cumulative Processes. (Under the direction of WALTER L. SMITH.)
In our dissertation we derive the Central Limit Theorems (CLTs) with the
remainder terms for the renewal and cumulative processes.
A renewal process based on the sequence of random variables {X_n} is defined in one of the following ways:

1) N_1(t) = sup{n: S_j ≤ t, j = 1,2,...,n},
2) N_2(t) = number of n such that S_n ≤ t,
3) N_3(t) = sup{n: S_n ≤ t},

where S_n = X_1 + ... + X_n. If X_n > 0 then all three definitions are identical. In this case the renewal process is denoted N(t).
It is well known that a CLT for the S_n's allows an expansion if E|X_1|^k < ∞ for some k ≥ 3. The order of the remainder term in the CLT depends on k. We prove a similar CLT for N(t) when the X_n's are i.i.d. and positive. Next we allow the X_n's to have different distributions and to take negative values. We prove a CLT with the remainder term of the order of O(t^{-1/2}) for each N_i(t), i = 1,2,3, when the X_n's have uniformly bounded third moments and satisfy other "natural" conditions.
When all X_n's are i.i.d. and positive, a cumulative process can be defined as W(t) = Σ_{j=1}^{N(t)+1} Y_j, where Y_1, Y_2, ... are such random variables that the pairs (X_1,Y_1), (X_2,Y_2), ... are i.i.d. Under certain assumptions on the X's and Y's, a CLT is obtained for W(t). The order of the remainder term depends on the finiteness of the absolute moments of the Y's. An even better estimate is found for an average error in the CLT approximations for W(t).
TABLE OF CONTENTS

Chapter

ACKNOWLEDGEMENTS ... v

I. INTRODUCTION ... 1
   1.1. OVERVIEW ... 1
   1.2. SUMMARY ... 2

II. BACKGROUND ... 5
   2.1. INTRODUCTION ... 5
   2.2. SUMS OF INDEPENDENT RANDOM VARIABLES ... 6
   2.3. RENEWAL PROCESSES ... 11
   2.4. CUMULATIVE PROCESSES ... 16

III. CENTRAL LIMIT THEOREMS FOR THE RENEWAL PROCESSES ... 18
   3.1. AN EDGEWORTH EXPANSION ... 18
   3.2. AUXILIARY LEMMAS ... 21
   3.3. A CENTRAL LIMIT THEOREM FOR THE DIFFERENT TYPES OF THE RENEWAL PROCESSES ... 29
   3.4. THE NECESSITY OF CONDITIONS IN THE CENTRAL LIMIT THEOREM ... 43

IV. CENTRAL LIMIT THEOREMS FOR THE CUMULATIVE PROCESSES ... 49
   4.1. INTRODUCTION ... 49
   4.2. ESTIMATES OF THE TAILS OF THE DISTRIBUTION FUNCTION OF R(t) ... 51
   4.3. APPROXIMATIONS TO THE CHARACTERISTIC AND CUMULATIVE FUNCTIONS OF Y_1 ... 63
   4.4. AN ESTIMATE OF A CORRECTION TERM ARISING FROM AN APPROXIMATION OF N(t) WITH t/μ ... 70
   4.5. THE BEHAVIOR OF THE CHARACTERISTIC FUNCTION OF V(t) ... 73
   4.6. ESTIMATES OF THE REMAINDER TERM IN THE CENTRAL LIMIT THEOREMS FOR THE CUMULATIVE PROCESSES ... 82
   4.7. A CASE WHEN Y's HAVE A NON-ZERO MEAN ... 88
   4.8. ESTIMATES OF AN AVERAGE ERROR ... 96
   4.9. AREAS OF FURTHER RESEARCH ... 102

APPENDIX A ... 103
APPENDIX B ... 104
BIBLIOGRAPHY ... 108
ACKNOWLEDGEMENTS

I want to express my heartfelt gratitude to my advisor, Professor Walter L. Smith. Dr. Smith attracted my attention to a very interesting research topic and generously shared with me his ideas, inspiration and love for mathematics.
I would also like to thank the members of my thesis committee, Professors
Stamatis Cambanis, Edward Carlstein, Vidyadhar Kulkarni and Gordon Simons for
their important comments.
Ms. Lee Trimble typed this manuscript with care, patience and a high level of professionalism. Her work and advice are appreciated.
I am grateful to Ms. Frances Greenstein for her friendship and encouragement.
Finally, I thank my employer, the IBM Corporation, for giving me the necessary time off and for providing generous financial support for my graduate
education.
CHAPTER I
INTRODUCTION
1.1. OVERVIEW

A well-known Central Limit Theorem (CLT) for independent, identically distributed (i.i.d.) random variables X_1, X_2, ... requires only the existence of their second moments. It would be natural to investigate whether an improvement can be made when more information is available about the random variables.
It has been known for many years that when the k-th moment of the X's is finite, the CLT allows an asymptotic expansion with the error term of the order of n^{-(k-2)/2}. Similar results were obtained for independent non-identically distributed random variables. We will state the precise theorems in Chapter II.
It would be interesting to obtain such results for the renewal processes. A renewal process based on a sequence of random variables {X_n}, n = 1,2,..., is defined for t > 0 in one of the following ways:

(1.1.1)
1) N_1(t) = sup{n: S_j ≤ t, j = 1,2,...,n},
2) N_2(t) = number of n such that S_n ≤ t,
3) N_3(t) = sup{n: S_n ≤ t},

where S_n = X_1 + X_2 + ... + X_n.
It is clear that when X_n > 0, all three definitions are identical. In this case we will use the notation N(t) to describe the renewal process.
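As a concrete illustration (ours, not part of the original text; the step distributions are arbitrary illustrative choices), the following sketch computes N_1(t), N_2(t) and N_3(t) from a finite sample path. With positive steps the three counts coincide; with steps that may be negative the path can dip back below t after crossing it, so in general N_1(t) ≤ N_2(t) ≤ N_3(t) with possibly strict inequalities:

```python
import random

def renewal_counts(steps, t):
    """Compute N1(t), N2(t), N3(t) from a finite list of steps X_1,...,X_m.

    With S_n = X_1 + ... + X_n these are:
      N1 = sup{n: S_j <= t for all j <= n}   (stops at the first crossing)
      N2 = #{n: S_n <= t}
      N3 = sup{n: S_n <= t}                  (the last index at or below t)
    """
    s, sums = 0.0, []
    for x in steps:
        s += x
        sums.append(s)
    n1 = 0
    for j, sj in enumerate(sums, start=1):
        if sj > t:
            break
        n1 = j
    n2 = sum(1 for sj in sums if sj <= t)
    n3 = max((j for j, sj in enumerate(sums, start=1) if sj <= t), default=0)
    return n1, n2, n3

rng = random.Random(1)

# Positive steps: all three definitions agree.
pos = [rng.expovariate(1.0) for _ in range(200)]
print(renewal_counts(pos, 50.0))

# Steps that may be negative: N1 <= N2 <= N3, possibly strictly.
mixed = [rng.gauss(1.0, 2.0) for _ in range(200)]
print(renewal_counts(mixed, 50.0))
```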
When the X_n's are i.i.d. positive random variables with a finite mean μ and variance σ² > 0, the Central Limit Theorem for the renewal process associated with these X_n's can be derived with relative ease from the classical CLT for sequences of random variables.
The situation becomes much more complicated when the random variables are not identically distributed and, especially, when they take negative values. Our objective is to bring the theory of the Central Limit Theorems with remainder terms for the renewal processes to the same level as our knowledge of the CLTs for sequences of random variables.
Another area of interest is a cumulative process W(t). Here is its formal definition. Let {(X_n, Y_n)}, n = 1,2,..., be a sequence of i.i.d. bivariate random variables with a known joint distribution function F(x,y). Suppose that the X_n's are strictly positive with finite mean μ and variance σ² > 0 and that the Y_n's have finite mean λ and variance τ² > 0. Then

(1.1.2)  W(t) = Σ_{j=1}^{N(t)+1} Y_j

is a cumulative process. This process is known as a type A cumulative process. A type B cumulative process is defined almost like W(t) in (1.1.2) except that the summation ends at j = N(t). We will be interested in a type A process only.
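To make the definition concrete, here is a small simulation sketch (ours, not from the text; the particular distributions and the dependence Y_j = X_j + 1/2 are illustrative assumptions). It builds W(t) as in (1.1.2), summing one Y past the crossing of t, and checks that W(t)/t settles near λ/μ:

```python
import random

def cumulative_process(rng, t):
    """Simulate a type A cumulative process W(t) = sum_{j=1}^{N(t)+1} Y_j.

    Illustrative choice: X_j ~ exponential(1) (so mu = 1) and
    Y_j = X_j + 0.5 (so lambda = 1.5).  The pairs (X_j, Y_j) are i.i.d.
    even though X_j and Y_j are dependent within a pair.
    """
    s = 0.0   # S_n, running sum of the X's
    w = 0.0   # running sum of the Y's
    while True:
        x = rng.expovariate(1.0)
        y = x + 0.5
        w += y               # Y_j is always added ...
        if s + x > t:        # ... including the term j = N(t)+1 that crosses t
            return w
        s += x

rng = random.Random(7)
t = 10000.0
ratio = cumulative_process(rng, t) / t
print(ratio)   # close to lambda/mu = 1.5
```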
Smith (1955) proved an analogue of the CLT for W(t) assuming only that X_1 and Y_1 have finite second moments. We will consider possible generalizations of Smith's CLT when more is known about the X's and Y's. Again, our goal is to obtain improvements in the Central Limit Theorem for the cumulative processes similar to those obtained in the CLT for sequences of random variables.
1.2. SUMMARY
In Chapter II we put in a proper perspective the existing Central Limit Theorems for sequences of independent random variables and for the renewal and cumulative processes. Our investigation shows that much more advanced CLTs were proved for sequences of random variables than for the processes defined in (1.1.1) and (1.1.2). That provides the motivation to learn more about the renewal and cumulative processes and to prove more sophisticated Central Limit Theorems for them. We also present some other limit theorems discussed in recent papers on this subject.
In Chapter III we concentrate on the renewal processes. At first we consider a renewal process N(t) based on independent identically distributed positive (a.s.) X_n's with the r-th finite absolute moment (r ≥ 3). We obtain a Central Limit Theorem for N(t) with the remainder term of the same order as one would get in the CLT for the sequence of random variables with the same condition on the finiteness of their moments. The term that approximates the distribution function of the (normalized) random variable N(t) is more complex than that used in the classical Central Limit Theorem for sequences of independent random variables. We give an explicit formula for this term.
Next we turn to X_n's that are not identically distributed and that may take negative values, thus making all three definitions in (1.1.1) different. By imposing certain requirements on the first three moments of the X_n's we obtain (in Section 3.3) for N_1(t), N_2(t) and N_3(t) a Central Limit Theorem with the remainder term. Prior to that, in Section 3.2, we prove several lemmas that allow us to extend the theorem to cover N_2(t) and N_3(t), once it is proved for N_1(t). In Section 3.4 we investigate the necessity of the assumptions in the Central Limit Theorem.
Chapter IV is dedicated to the cumulative process. We impose the necessary assumptions on the distributions of the X's and Y's and prove (in Sections 4.2-4.5) that these assumptions lead to certain properties that the X's and Y's satisfy and that are critical in the proof of the Central Limit Theorems.
In Sections 4.6 and 4.7 we prove several different CLTs for W(t). The size of the remainder term depends on the order of the finite absolute moment of Y_1 (and hence of any Y_n).

Our next step is to estimate an average (over all possible values of t) error in the Central Limit Theorem approximations for W(t). Such results do not follow immediately from the estimates of the remainder terms obtained earlier in that chapter. We end Chapter IV with a discussion of possible directions for future research in this area.
The most common application of a cumulative process is when a machine is repaired at time zero at the cost of Y_1, then runs for X_1 units of time, then gets repaired at the cost of Y_2, etc. The random variables X_j and Y_j may be dependent. Then W(t) is the total cost of repairs by time t.

If we considered a type B process instead, then the machine would first run for time X_1, then get repaired at the cost of Y_1, and so on.
CHAPTER II
BACKGROUND
2.1. INTRODUCTION

In this chapter we review the literature on the Central Limit Theorem with the remainder term for sequences of independent random variables, for the renewal processes and for the cumulative processes. Most of the research previously done concentrated on the limiting distributions of the sums of independent random variables. We describe those results in Section 2.2.
The first systematic approach to this problem was developed by Cramér in his 1937 book. Later, Gnedenko and Kolmogorov (1954) wrote a monograph with a much deeper analysis of the Central Limit Theorem and other limit theorems for sequences of independent random variables. By that time it had been noticed (Feller, 1949) that similar techniques could be applied to obtain CLTs for the renewal processes. In the 1950's Smith obtained for the first time a Central Limit Theorem for the cumulative processes. In the 1960's and 1970's much more became known about the limiting distributions for sequences of random variables. Petrov's book (1975) provides an excellent summary of these results. Among other things, Petrov and others developed estimates for the probabilities of large deviations; these results will help us to obtain a Central Limit Theorem for the cumulative processes.
In the 1980's, a Central Limit Theorem with a remainder term was proved for the renewal processes (see Ahmad, 1981, and also Niculescu and Omey, 1985). These papers, however, contain errors. The results stated there are not true; they only hold with more restricted conclusions than the authors suggest. In Section 2.3 we give a more detailed description of these papers, and in Chapter 3 we prove much more general CLTs for the renewal processes which include those stated by Ahmad, Niculescu and Omey as special cases.
More than 30 years after Smith's original paper on the CLT for the cumulative processes was published, there was another publication on this subject. Murphee and Smith (1986) proved a CLT for a so-called transient cumulative process. See Section 2.4 for details.
2.2. SUMS OF INDEPENDENT RANDOM VARIABLES

The very first result improving the Central Limit Theorem for the sums of independent, identically distributed random variables with finite higher moments was apparently obtained by Cramér (see his book, 1937), who showed that if E|X_1|^k < ∞ for some integer k ≥ 3 and the characteristic function φ of X_1 satisfies the following condition (C):

(2.2.1) (C)  limsup_{|θ|→∞} |φ(θ)| < 1,

then

(2.2.2)  ‖ P{ (S_n − nEX_1)/(σ n^{1/2}) ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_ν(x)/n^{ν/2} ‖ = O(n^{−(k−2)/2}),

where S_n = X_1 + ... + X_n. Here Φ(x) is the distribution function of a standard normal random variable and ‖A(x)‖ is defined for any function A(x) as sup_{−∞<x<∞} |A(x)|. The functions Q_ν(x) are explicitly defined and can be expressed in terms of the Hermite polynomials of x and the cumulant moments of X_1. See Appendix A for the precise definitions of the functions Q_ν(x).
Formula (2.2.2) is often referred to as the 'Edgeworth expansion'. In particular, for k = 3, (2.2.2) means that if the third moment of X_1 is finite then

‖ P{ (S_n − nEX_1)/(σ n^{1/2}) ≤ x } − Φ(x) − Q_1(x)/n^{1/2} ‖ = O(n^{−1/2}).
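A quick numerical sanity check (ours, not in the original; we use the standard one-term form Q_1(x) = (κ_3/6)(1 − x²)φ(x), where κ_3 is the third cumulant of the standardized summand) can be carried out for exponential summands, whose sum has an exact distribution through the Poisson tail identity:

```python
import math

def phi(x):      # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):      # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def gamma_cdf_int(k, t):
    """P{S_k <= t} for S_k a sum of k i.i.d. exponential(1) variables,
    via P{Gamma(k,1) <= t} = 1 - sum_{i=0}^{k-1} e^{-t} t^i / i!."""
    term, tail = math.exp(-t), math.exp(-t)
    for i in range(1, k):
        term *= t / i
        tail += term
    return 1.0 - tail

# X_i ~ exponential(1): mu = sigma = 1, third cumulant kappa_3 = 2.
n, x = 16, 0.0
exact = gamma_cdf_int(n, n + x * math.sqrt(n))    # P{S_n <= n + x*sqrt(n)}
clt = Phi(x)
edgeworth = Phi(x) + (2.0 / (6.0 * math.sqrt(n))) * (1 - x * x) * phi(x)

# The one-term expansion is markedly closer than the plain CLT.
print(abs(exact - clt), abs(exact - edgeworth))
```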
Various modifications of (2.2.2) were discussed by Gnedenko and Kolmogorov (1954). Ibragimov (1967) addressed the issue of not only the sufficient but also the necessary conditions for equalities like (2.2.2) to hold. He also considered a case of fractional moments and showed that if EX_1 = 0 and condition (2.2.1) holds, then for 0 < δ < 1,
(2.2.3)  ‖ P{ S_n/(σ n^{1/2}) ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_ν(x)/n^{ν/2} ‖ = O(n^{−(k−2+δ)/2})

if and only if ∫_{|x|>z} |x|^k dF(x) = O(z^{−δ}), where F is the distribution function of X_1.
Deeper results for the case of i.i.d. X_n's were obtained by Osipov (1967, 1969, 1972). He considered a further improvement in the quality of the estimate of the remainder term when the value of the variable x in (2.2.2) is taken into account. Here is one of his theorems (not the most general one).

Theorem 2.2.1 (Osipov).

Suppose that (2.2.1) holds, EX_1 = 0 and E|X_1|^k < ∞ for some integer k ≥ 3. Then
(2.2.4)  | P{ S_n/(σ n^{1/2}) ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_ν(x)/n^{ν/2} | = O( 1 / ( n^{(k−2)/2} (1 + |x|)^k ) ).

In addition, Osipov obtained an estimate for the expression on the left hand side of (2.2.4) when the (2.2.1) condition does not hold. That last estimate is nontrivial only when the value of |x| is very large.
It is important to note that if the theorems mentioned above were stated for k = 3 only, they would not require the (2.2.1) condition to hold. This is related to the method of proof, where some estimates of the characteristic functions φ_n(θ) of certain random variables must be obtained. The method of estimation varies depending on the value of the parameter θ. It turns out, however, that for k = 3 there are no values of θ such that the estimation of φ_n(θ) requires the (2.2.1) condition to hold. That will be the case in our research also, even though our methods are different from those used by Cramér, Ibragimov and Osipov and the problems we consider are different from those considered by them. In Chapter III, the (2.2.1) condition will be required for the Edgeworth expansion, but will not be necessary in our analysis of the renewal process with the underlying random variables all having a finite third absolute moment.
In Chapter IV, assumption (2.2.1) in the Central Limit Theorems for the cumulative processes is not necessary to achieve the required precision in the remainder term since we do not assume the existence of absolute moments of Y_1 of order higher than 3.
Another way to generalize (2.2.2) is to allow the independent random variables X_1, X_2, ... to have different distributions. This problem was confronted by several authors including Petrov (1960), Statulevičius (1965), Bikelis (1966) and Pipiras (1970).

For our purposes it is sufficient to present here the following result due to Bikelis (1966).

Theorem 2.2.2 (Bikelis).

Let {X_n}, n = 1,2,..., be a sequence of independent random variables with zero means and finite absolute third moments. Denote σ_j² = Var X_j and B_n² = Σ_{j=1}^n σ_j². Then there exists an absolute constant C such that for all n ≥ 1,

(2.2.5)  | P{ S_n/B_n ≤ x } − Φ(x) | ≤ ( C / (1 + |x|)³ ) B_n^{−3} Σ_{j=1}^n E|X_j|³.
Petrov (1960) considered a case of X_n's having the k-th finite absolute moments. He proved that under certain conditions on the characteristic functions of the X_n's, the following equality holds:

(2.2.6)  ∫_{−∞}^{∞} |x|^l | P{ S_n/B_n ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_{νn}(x)/n^{ν/2} | dx = o( n^{−(k−2)/2} ),

where l = 0,1,...,L and the value of L ≥ 0 depends upon the behavior of the characteristic functions of X_1,...,X_n. The functions Q_{νn}(x) are defined similarly to the functions Q_ν(x). See Petrov's book (1975) for detail.
Note that, unlike (2.2.5), the estimate in (2.2.6) does not take into account the size of x. As far as we know, nobody has been able to combine (2.2.5) and (2.2.6) even for L = 0, i.e., there is no known estimate for

‖ (1 + |x|)^k { P{ S_n/B_n ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_{νn}(x)/n^{ν/2} } ‖

under some reasonable assumptions on the X_n's.
Let X_1, X_2, ... again be a sequence of independent, identically distributed random variables with zero means. Another interesting problem that was first stated and solved for sequences of random variables, and that we will try to extend into the area of the cumulative processes, is to find an estimate of an average error in the asymptotic expansion of the CLT. Of course, we are not interested in the trivial consequences of (2.2.2) or (2.2.6). Suppose E|X_1|^{k+δ} < ∞ for an integer k and 0 ≤ δ < 1. The object in question is the convergence of the series

(2.2.7)  Σ_{n=1}^{∞} n^{(k−2+δ)/2 − 1} ‖ P{ S_n/(σ n^{1/2}) ≤ x } − Φ(x) − Σ_{ν=1}^{k−2} Q_ν(x)/n^{ν/2} ‖.

Galstyan (1971) proved the following theorem.
Theorem 2.2.3 (Galstyan).

Suppose that condition (2.2.1) is satisfied and also that

(2.2.8)  E{ |X_1|^k log(1 + |X_1|) } < ∞  for δ = 0,
         E|X_1|^{k+δ} < ∞                for 0 < δ < 1.

Then the expression in (2.2.7) has a finite value.

Galstyan also showed that when δ ≠ 0, or when δ = 0 and k is even, condition (2.2.8) is necessary for the convergence of the series in (2.2.7). A result equivalent to Theorem 2.2.3 was obtained by Heyde and Leslie (1972). In both papers a method originally developed by Heyde (1967) for k = 2 was used.

Later in this dissertation we will prove a theorem for a cumulative process similar to Theorem 2.2.3.
There is another topic in the theory of the limiting distributions of the sums of independent random variables that we would like to mention here. It is called in the literature "the probabilities of large deviations." While we do not extend any of the results from this theory into the area of the renewal and cumulative processes, we will use the theorems that give the estimates for the probabilities of large deviations. The quantities of interest here are

(2.2.9)  (1 − F_n(x)) / (1 − Φ(x))  and  F_n(−x) / Φ(−x),

where X_1, X_2, ... are independent identically distributed random variables with zero means and F_n(x) = P{ S_n/(σ n^{1/2}) ≤ x }. We are interested in the behavior of the ratios in (2.2.9) when x depends on n and tends to +∞ as n → ∞. Petrov (1968) proved a rather general theorem that gives the estimates of the above ratios as n → ∞. We will present his result in a form which is most useful for our purposes.
Theorem 2.2.4 (Petrov).

If E e^{gX_1} < ∞ for some g > 0 and if x ≥ 0, x = O(n^{r/(2(r+2))}) for some positive integer r, then

(2.2.10)  (1 − F_n(x)) / (1 − Φ(x)) = exp{ (x³/n^{1/2}) λ^{[r]}(x/n^{1/2}) } (1 + O((x+1)/n^{1/2})),

(2.2.11)  F_n(−x) / Φ(−x) = exp{ −(x³/n^{1/2}) λ^{[r]}(−x/n^{1/2}) } (1 + O((x+1)/n^{1/2})),

where λ^{[r]}(t) = Σ_{k=0}^{r−1} a_k t^k, and the coefficients a_0, a_1, ..., a_{r−1} depend only on the cumulants of the random variable X_1.
We will use this theorem in Chapter IV to obtain a CLT for the cumulative
process.
2.3. RENEWAL PROCESSES

There have been a number of attempts to apply the knowledge of the asymptotic distributions for the sums of independent random variables to obtain a Central Limit Theorem for the renewal process. Feller (1949) noticed that for positive (a.s.) random variables X_1, X_2, ..., the following identity holds:

(2.3.1)  P{N(t) ≥ n} = P{S_n ≤ t}

for any t > 0 and any n = 0,1,.... When used in conjunction with the classical Central Limit Theorem, (2.3.1) easily leads to the following CLT for a renewal process based on a sequence of independent identically distributed positive (a.s.) random variables:

(2.3.2)  P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } → Φ(x)  as t → ∞,

for any real x.
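For exponential(1) inter-arrival times the renewal process is Poisson, so (2.3.2) can be checked exactly, without simulation. The sketch below (ours, not part of the text) evaluates the left side of (2.3.2) for μ = σ = 1 from the Poisson distribution and compares it with Φ(x):

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def poisson_cdf(mean, n):
    """P{N <= n} for N ~ Poisson(mean), summing the pmf in log space."""
    total = 0.0
    for i in range(n + 1):
        total += math.exp(-mean + i * math.log(mean) - math.lgamma(i + 1))
    return total

# X_i ~ exponential(1): mu = sigma = 1, and N(t) ~ Poisson(t).
t, x = 400.0, 0.5
threshold = t + x * math.sqrt(t)      # t/mu + x * sigma * t^{1/2} / mu^{3/2}
lhs = poisson_cdf(t, math.floor(threshold))
print(lhs, Phi(x))   # agreement up to a term of order t^{-1/2}
```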
Until the 1970's, no successful efforts were made to extend (2.3.2) to get a remainder term in the Central Limit Theorem, or to allow the X_n's to have different distributions or take negative values, so that all three definitions of the renewal process given in (1.1.1) would have to be considered. Instead, the researchers concentrated on the analysis of the behavior of the renewal function H(t) = EN(t) and of Var N(t), as t → ∞.
Feller (1941) and Doob (1948), using different methods, proved a so-called "elementary renewal theorem":

(2.3.3)  H(t)/t → μ^{−1}  as t → ∞.

Here μ = EX_1 and the theorem applies even when μ = ∞.
Blackwell (1948) showed that for a continuous renewal process,

(2.3.4)  H(t+α) − H(t) → α μ^{−1}  as t → ∞,

for any fixed α > 0. Smith (1954) showed that if the second moment μ₂ of X_1 is finite then

(2.3.5)  H(t) − t/μ → μ₂/(2μ²) − 1  as t → ∞.
Smith then used his methods to examine the asymptotic behavior of Var N(t). He showed that if μ₂ < ∞ then

Var N(t) = ((μ₂ − μ²)/μ³) t + o(t)  as t → ∞.

To obtain a more precise result he had to assume that F belongs to a class 𝒮, which means that there exists an integer k such that P{S_k ≤ x} is a distribution function with a non-null absolutely continuous component. When F ∈ 𝒮 and the third absolute moment μ₃ of X_1 is finite, then (Smith, 1959)
(2.3.6)  Var N(t) = ((μ₂ − μ²)/μ³) t + ( 5μ₂²/(4μ⁴) − 2μ₃/(3μ³) − μ₂/(2μ²) ) + o(1)  as t → ∞.
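As a consistency check (ours, not in the original), one can evaluate (2.3.6) for exponential inter-arrival times with mean one, where N(t) is Poisson(t) and Var N(t) = t exactly, so the linear coefficient must come out as 1 and the constant as 0:

```latex
% Exponential(1) case: \mu = 1, \mu_2 = 2, \mu_3 = 6.
\[
\frac{\mu_2 - \mu^2}{\mu^3} = \frac{2-1}{1} = 1, \qquad
\frac{5\mu_2^2}{4\mu^4} - \frac{2\mu_3}{3\mu^3} - \frac{\mu_2}{2\mu^2}
  = \frac{5\cdot 4}{4} - \frac{2\cdot 6}{3} - \frac{2}{2}
  = 5 - 4 - 1 = 0,
\]
% in agreement with \operatorname{Var} N(t) = t for the Poisson process.
```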
The issue of the Central Limit Theorem for the renewal processes was raised again by Hunter (1974), who considered a case of a two-dimensional renewal process, i.e., when {X_n^{(1)}, X_n^{(2)}}, n = 1,2,..., is a sequence of independent identically distributed bivariate positive random variables and N(t_1, t_2) is defined for any pair of real numbers t_1 and t_2 as

sup{n: X_1^{(1)} + ... + X_n^{(1)} ≤ t_1, X_1^{(2)} + ... + X_n^{(2)} ≤ t_2}.

Hunter obtained a Central Limit Theorem for it. In this dissertation we will only consider a one-dimensional renewal process, and while many authors have generalized the CLTs to cover the multi-dimensional cases, we will state their theorems (with appropriate notes) as they apply to the one-dimensional renewal processes.
Csenki (1979) considered a k-dimensional case. He also allowed the X_n's to take negative values, which created a difference between the three definitions of a renewal process which we gave in Chapter I. Csenki proved a Central Limit Theorem for N_1(t).
In his 1981 paper, Ahmad stated several Central Limit Theorems with remainder terms for the two-dimensional renewal processes. He allowed the random variables not to be necessarily positive. The following theorem summarizes his results as they appear in a one-dimensional form.

Theorem 2.3.1 (Ahmad).

If the X_n's are independent identically distributed random variables with mean μ > 0, variance σ² > 0 and such that E|X_1|³ < ∞, then the following equality holds:

(2.3.7)  sup_x | P{ (N_1(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } − Φ(x) | = O(t^{−1/2})  as t → ∞.
Ahmad's proof contains an error. On page 121 of his paper he claims that, with n = n(x,t) denoting the integer nearest to t/μ + xσt^{1/2}/μ^{3/2},

(2.3.8)  | x + (t − μn)/(σ n^{1/2}) | ≤ C t^{−1/2}

for x ≥ −(tμ)^{1/2}/(2σ) and some constant C. He later proceeds with an assumption that C does not depend on x. What is true, however, is that

(2.3.8')  | x + (t − μn)/(σ n^{1/2}) | ≤ C x² t^{−1/2}

for some C independent of x.
This mistake in the proof does not allow one to obtain a uniform (in x) bound for the difference between P{ (N_1(t) − t/μ)/(σ t^{1/2}/μ^{3/2}) ≤ x } and Φ(x). All that one can get is that for any real x,

(2.3.9)  | P{ (N_1(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } − Φ(x) | = O(t^{−1/2})  as t → ∞,

where the constant in O may depend on x.
We should note that even (2.3.9) represents a significant step forward from what was done before. A remainder term in the Central Limit Theorem for the renewal process was obtained.

In the same paper Ahmad stated another theorem similar to Theorem 2.3.1, except that N_3(t) was used instead of N_1(t). The proof of that other theorem is wrong too. It is based on the erroneous formula P{S_n ≤ t} = P{N_3(t) ≥ n}. (See page 119 of Ahmad's paper.) So for N_3(t) he did not get even a non-uniform version of the Central Limit Theorem.
Niculescu and Omey (1985) extended Ahmad's result to the case of a k-dimensional renewal process. Their methods are the same as Ahmad's and the errors are the same too. Hence, in fact, they only correctly obtained a result for N_1(t) with a bound for the error that is not uniform in x. In the same paper they also covered the case of fractional moments between 2 and 3, i.e., when E|X_1|^{2+δ} < ∞, 0 < δ ≤ 1. Their result in this case, when written in a one-dimensional form, would be

(2.3.10)  | P{ (N_1(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } − Φ(x) | = O(t^{−δ/2}),

where the constant in O may depend on x. Niculescu had previously written a paper (1984) where (2.3.10) was stated and correctly proved.
In that paper he did not make any effort to extend (2.3.10) to cover N_3(t) or to get a uniform error bound. In the very same paper he also proved an asymptotic CLT for X_n's that are not identically distributed, but he did not get a remainder term in this case.

Woodroofe and Keener (1987) used a completely different approach to prove a result that includes (2.3.9) as a special case. We discuss their paper in greater detail in Section 2.4.
As we can see from the above discussion, the Central Limit Theorem with a remainder term for a renewal process has been proved only when the renewal process is based on a sequence of independent identically distributed random variables with a finite absolute third moment, and the estimate of the remainder term is not uniform. Also, only one of the three possible definitions of the renewal process is covered. There is at present no result similar to the Edgeworth expansion (2.2.2) under the assumption of the finiteness of the higher moments of the X_n's. In Chapter III, we will prove theorems that expand the existing Central Limit Theorems for the renewal processes to cover the Edgeworth expansions. We will also allow the X_n's to be non-positive and not identically distributed random variables and consider all three definitions of the renewal process. That will bring the theory of the CLTs for the renewal processes to the level where such theory for the sums of independent random variables stands.
2.4. CUMULATIVE PROCESSES

Not many papers dealt with the CLT for the cumulative processes. Smith (1955) achieved the following result.

Theorem 2.4.1 (Smith).

Suppose X_1 has a finite first moment and Y_1 has a finite second moment. Assume also that λ ≡ EY_1 = 0. Then for any real x,

(2.4.1)  P{ W(t) / (τ (t/μ)^{1/2}) ≤ x } → Φ(x)  as t → ∞.
In the same paper, Smith has shown that the restriction λ = 0 can be avoided. The corollary to Theorem 2.4.1 shows what happens when λ ≠ 0.
Corollary 2.4.1 (Smith).

If, as in Theorem 2.4.1, EX_1 and EY_1² are both finite, then for any real x,

(2.4.2)  P{ (W(t) − λN(t)) / (τ (t/μ)^{1/2}) ≤ x } → Φ(x)  as t → ∞.

If, in addition, Var X_1 < ∞, then

(2.4.3)  P{ (W(t) − λt/μ) / (s (t/μ)^{1/2}) ≤ x } → Φ(x)  as t → ∞,

where s² = Var(Y_1 − (λ/μ)X_1) is assumed not to equal zero.
We like (2.4.3) better than (2.4.2) because it does not involve any random variables other than W(t).

Smith then extended his result to obtain a CLT for a multi-dimensional cumulative process. Murphee and Smith (1986) obtained results similar to (2.4.1), (2.4.2) and (2.4.3) for the cumulative processes with an improper distribution function F of X_1, i.e., when F(∞) < 1. They called them the "transient regenerative processes."
Woodroofe and Keener (1987) took a somewhat different approach to this problem. They considered a distribution of

(2.4.4)  W_0(t) = Σ_{n=1}^{T} Y_n,

where T is a "stopping time" random variable satisfying certain conditions together with the sequence of Y_n's. It turns out that if T = N(t) + 1, as it is in our definition of the cumulative process, then the value of Y_n is uniquely determined (as a result of those conditions on T and the Y_n's) by the value of X_n, n = 1,2,.... Hence W_0(t) = Σ_{n=1}^{N(t)+1} h(X_n), where h is a certain function of a real argument. That reduces a cumulative process to a variation of a renewal process. For their W_0(t), Woodroofe and Keener obtained a Central Limit Theorem with a remainder term of the order of o(t^{−1/2}).
As in the case of the renewal processes, there are results that describe the asymptotic behavior of EW(t). Smith (1955) showed that EW(t) = (λ/μ) t + o(t) as t → ∞. He also proved the following so-called "ergodic theorem."
Theorem 2.4.2 (Smith, 1955).

If λ, μ are both finite then

P{ lim_{t→∞} W(t)/t = λ/μ } = 1.
In this paper we are only concerned with improvements in the Central Limit Theorem for W(t).
CHAPTER III

CENTRAL LIMIT THEOREMS FOR THE RENEWAL PROCESSES

3.1. AN EDGEWORTH EXPANSION

In this section we prove a new Central Limit Theorem for a renewal process. By assuming the finiteness of the higher moments, we can achieve an appropriate improvement in the order of the remainder term, as was done for the sums of independent random variables. Throughout this section we assume that X_1, X_2, ... are independent identically distributed positive (a.s.) random variables with mean μ, variance σ² > 0 and such that E|X_1|^r < ∞ for some integer r ≥ 3.
Theorem 3.1.1.

If (2.2.1) holds for the characteristic function of X_1, then for x ≥ −(tμ)^{1/2}/σ,

(3.1.1)  P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } = Φ(−u(x,t)) − Σ_{ν=1}^{r−2} Q_ν(u(x,t)) / {k(x,t)}^{ν/2} + o( t^{−(r−2)/2} )

uniformly in x, where

k(x,t) = [t/μ + xσt^{1/2}/μ^{3/2}] + 1,  u(x,t) = (t − μ k(x,t)) / ( σ {k(x,t)}^{1/2} ),

the functions Q_ν(x) are defined in Appendix A and [a], as usual, stands for the integer part of a.
Note 1. It is not of any interest to consider a situation when x < −(tμ)^{1/2}/σ, because then

P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } = 0.
Note 2. It turns out that for r > 3, in order to achieve the required precision in the error term estimate, we need to use the functions k(x,t) and u(x,t) defined in the theorem instead of the variables x and t which one would expect to use looking at (2.2.2).
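To see the construction at work, the sketch below (ours, not part of the text) keeps only the leading term Φ(−u(x,t)) of the expansion, taking k(x,t) = [t/μ + xσt^{1/2}/μ^{3/2}] + 1 and u(x,t) as in the theorem. For exponential(1) inter-arrival times N(t) is Poisson(t), so the left side of (3.1.1) can be evaluated exactly:

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def poisson_cdf(mean, n):
    """P{N <= n} for N ~ Poisson(mean)."""
    return sum(math.exp(-mean + i * math.log(mean) - math.lgamma(i + 1))
               for i in range(n + 1))

# Exponential(1) inter-arrivals: mu = sigma = 1 and N(t) ~ Poisson(t).
t, x = 400.0, 0.5
k = math.floor(t + x * math.sqrt(t)) + 1     # k(x,t)
u = (t - k) / math.sqrt(k)                   # u(x,t)

lhs = poisson_cdf(t, k - 1)    # P{(N(t) - t)/sqrt(t) <= x} = P{N(t) < k}
print(lhs, Phi(-u))            # leading term of (3.1.1)
```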
Proof of Theorem 3.1.1.

The following equalities hold:

(3.1.2)  P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } = 1 − P{ N(t) ≥ k(x,t) }
         = 1 − P{ S_{k(x,t)} ≤ t }     (according to (2.3.1))
         = 1 − P{ Σ_{j=1}^{k(x,t)} X_j ≤ t }.
We will consider two separate cases.
Case 1: x ≥ −(tμ)^{1/2}/(2σ).

In this case, k(x,t) ≥ [t/2μ], hence 1/k(x,t) = O(t^{−1}) uniformly in x. For j = 1,2,..., let Z_j = (X_j − μ)/σ. Evidently,

1 − P{ Σ_{j=1}^{k(x,t)} X_j ≤ t } = 1 − P{ Σ_{j=1}^{k(x,t)} Z_j ≤ (t − μk(x,t))/σ }

= 1 − P{ (1/{k(x,t)}^{1/2}) Σ_{j=1}^{k(x,t)} Z_j ≤ u(x,t) }

(3.1.3)  = 1 − { Φ(u(x,t)) + Σ_{ν=1}^{r−2} Q_ν(u(x,t)) / {k(x,t)}^{ν/2} + o( 1/{k(x,t)}^{(r−2)/2} ) } =
(according to Theorem 2.2.1, which is applicable since condition (2.2.1) is satisfied for the characteristic function of Z_1; note that the rate of convergence to zero in o depends only on r and on the distribution of X_1)

= Φ(−u(x,t)) − Σ_{ν=1}^{r−2} Q_ν(u(x,t)) / {k(x,t)}^{ν/2} + o( t^{−(r−2)/2} )  as t → ∞,

uniformly in x. That proves (3.1.1) in this case.
Case 2: x < −(tμ)^{1/2}/(2σ).

In this case, u(x,t) > Ct^{1/2} for some C > 0. Since Q_ν(y) = exp(−y²/2) H_ν(y), where H_ν(y) is a polynomial of degree 3ν − 1 with coefficients depending on the first (ν+2) moments of X_1 only, we have

(3.1.4)  Σ_{ν=1}^{r−2} Q_ν(u(x,t)) / {k(x,t)}^{ν/2} = o( t^{−(r−2)/2} ).

Also, Φ(−u(x,t)) = o( t^{−(r−2)/2} ) when u(x,t) > Ct^{1/2}. Hence in order to prove (3.1.1) in this case, we only need to show that for x ≤ −(tμ)^{1/2}/(2σ),

(3.1.5)  P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } = o( t^{−(r−2)/2} ).
Denote y = −(tμ)^{1/2}/(2σ). Since x < y,

P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ x } ≤ P{ (N(t) − t/μ) / (σ t^{1/2}/μ^{3/2}) ≤ y }

(Case 1 now applies because y satisfies its restrictions)

= Φ(−u(y,t)) − Σ_{ν=1}^{r−2} Q_ν(u(y,t)) / {k(y,t)}^{ν/2} + o( t^{−(r−2)/2} )
as t → ∞, uniformly in x. The last equality holds since k(y,t) = [t/2μ] + 1 and u(y,t) > Ct^{1/2} for some C > 0.

That completes the proof of Theorem 3.1.1. □
In the remaining portion of this chapter we will concentrate on the renewal processes based on sequences of independent random variables with finite third absolute moments only (r = 3). It turns out that for such renewal processes we can obtain a Central Limit Theorem with the remainder term of the order of O(t^{−1/2}), while not using the functions k(x,t) and u(x,t) introduced earlier in this section. We will also allow the X_n's to have different distribution functions and to take negative values. The Central Limit Theorem will be proved for all three definitions of the renewal process: N_1(t), N_2(t) and N_3(t).
3.2. AUXILIARY LEMMAS
In this section we will prove four lemmas that will allow us later to derive the Central Limit Theorem for N₁(t) and then to use it in order to obtain one for N₃(t). From the definitions it follows immediately that N₁(t) ≤ N₃(t). We want to show that N₁(t) and N₃(t) do not differ very much in some sense. Lemma 3.2.1 allows us to relate the estimation of P{N₁(t) ≥ k} to that of P{S_k ≤ t}. Lemma 3.2.4 shows the connection between P{N₃(t) ≥ k} and P{S_k ≤ t}. Lemma 3.2.2 and Lemma 3.2.3 are needed to prove Lemma 3.2.4.
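The gap between the three definitions is easy to see on a concrete path once a negative step is allowed. The following sketch (the step values are ours, purely illustrative) computes all three counts directly from their definitions:

```python
# N1(t) = sup{n : S_j <= t for all j = 1..n}
# N2(t) = number of n with S_n <= t
# N3(t) = sup{n : S_n <= t}

def renewal_counts(steps, t):
    partial, s = [], 0.0
    for x in steps:
        s += x
        partial.append(s)
    n1 = 0
    for s_n in partial:            # stop at the first partial sum above t
        if s_n > t:
            break
        n1 += 1
    n2 = sum(1 for s_n in partial if s_n <= t)
    n3 = max((i + 1 for i, s_n in enumerate(partial) if s_n <= t), default=0)
    return n1, n2, n3

steps = [2.0, 3.0, -4.0, 1.0, 5.0]   # partial sums: 2, 5, 1, 2, 7
print(renewal_counts(steps, 4.0))    # N1 <= N2 <= N3, and here they all differ
```

For positive steps the three counts coincide, which is why the i.i.d. positive case of Section 3.1 could speak simply of N(t).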
Throughout this section the following notation and assumptions will apply: X₁, X₂, ... is a sequence of independent random variables (not necessarily identically distributed and not necessarily positive). We denote

μ_n = EX_n,   σ_n² = Var X_n,   Z_n = X_n − μ_n,   B_n² = Σ_{j=1}^n σ_j².

We will assume that there exist positive constants μ, σ, D, g and M such that the following conditions hold for all n ≥ 1:

(3.2.1)
(i) μ_n > σ,
(ii) |Σ_{j=1}^n μ_j − nμ| < D,
(iii) B_n² ≥ gn, and
(iv) E|Z_n|³ < M.

These conditions allow us to apply Lemma 2.1 from Ahmad's 1981 paper. Condition (iv) also implies that there exists M₁ > 0 such that σ_n² < M₁ for all n. We will be making references to M₁ later in the text.
Lemma 3.2.1.

The following equality holds:

(3.2.2) P{ max_{1≤j≤n} S_j ≤ x } = P{ S_n ≤ x } + O(n^{−1/2}), uniformly in x.

The proof of this lemma is given by Ahmad (1981); although we have mentioned that there are errors in his paper, this result is correct. He proves it as a part of the proof of Lemma 2.1 in his paper. See formula (2.7) there.
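Lemma 3.2.1 says that for a walk with positive drift, the maximum and the endpoint have nearly the same distribution. A Monte Carlo sanity check of the O(n^{−1/2}) gap, using Gaussian steps of mean 1 and the level x = n (all concrete choices here are ours, not the dissertation's):

```python
import random

def gap(n, reps=10_000, seed=7):
    """Estimate P{S_n <= n} - P{max_{1<=j<=n} S_j <= n} for steps 1 + N(0,1)."""
    rng = random.Random(seed)
    hits = 0   # paths with S_n <= n but max_j S_j > n
    for _ in range(reps):
        s, m = 0.0, float("-inf")
        for _ in range(n):
            s += 1.0 + rng.gauss(0.0, 1.0)
            m = max(m, s)
        hits += (s <= n < m)
    return hits / reps

g_small, g_large = gap(25), gap(225)
# the gap shrinks as n grows, consistent with the O(n^{-1/2}) rate
```

Intuitively, the maximum exceeds the endpoint only by an amount that stays bounded in probability, while the endpoint itself fluctuates on the scale n^{1/2}; the ratio of the two scales produces the n^{−1/2} rate.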
Lemma 3.2.2.

For any ε > 0 and any j ≥ 0,

(3.2.3) P{ (Z_{j+1} + ... + Z_{j+n})/n > ε } = O(n^{−2}),

where the constant in O does not depend on j.
Proof.
P{ (Z_{j+1} + ... + Z_{j+n})/n > ε }
= 1 − P{ (Z_{j+1} + ... + Z_{j+n})/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} ≤ nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} }

(3.2.4)
≤ [ 1 − Φ( nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} ) ]
+ | P{ (Z_{j+1} + ... + Z_{j+n})/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} ≤ nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} } − Φ( nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} ) |
= A + B, say.

Since σ_{j+1}² + ... + σ_{j+n}² ≤ nM₁, we have nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} ≥ ε₁n^{1/2}, where ε₁ = ε/M₁^{1/2} does not depend on j or n. Hence A ≤ 1 − Φ(ε₁n^{1/2}) = O(n^{−2}), uniformly in j.

Let us now estimate B. The conditions of Theorem 2.2.2 are satisfied. Applying this theorem to the sequence {Z_l}_{l=j+1}^{j+n} with x = nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2}, we get

B ≤ C(E|Z_{j+1}|³ + ... + E|Z_{j+n}|³) / [ (σ_{j+1}² + ... + σ_{j+n}²)^{3/2} ( nε/(σ_{j+1}² + ... + σ_{j+n}²)^{1/2} )³ ]   (where C is an absolute constant)

= C(E|Z_{j+1}|³ + ... + E|Z_{j+n}|³)/(nε)³ ≤ CnM/(nε)³ = O(n^{−2}),

where the constant in O does not depend on j.

Lemma 3.2.2 is proved. □
Lemma 3.2.3.

For any ε > 0 and j ≥ 0,

(3.2.5) P{ inf_{k≥n} (Z_{j+1} + ... + Z_{j+k})/k < −ε } = O(n^{−2}),

uniformly in j.
Proof.

Denote V_n = −Z_n, n = 1, 2, .... Then (3.2.5) becomes

(3.2.6) P{ sup_{k≥n} (V_{j+1} + ... + V_{j+k})/k > ε } = O(n^{−2}),

uniformly in j. We will prove (3.2.6).

For a given n, choose m = m(n) such that 2^m ≤ n < 2^{m+1}. Then

P{ sup_{k≥n} (V_{j+1} + ... + V_{j+k})/k > ε } ≤ Σ_{i=m}^∞ P{ max_{2^i ≤ k < 2^{i+1}} (V_{j+1} + ... + V_{j+k})/k > ε }

(3.2.7)
≤ Σ_{i=m}^∞ P{ max_{k ≤ 2^{i+1}} (V_{j+1} + ... + V_{j+k}) > 2^i ε }.

According to Petrov (1975) (see Chapter III, Theorem 12 in his book),

(3.2.8) P{ max_{k ≤ 2^{i+1}} (V_{j+1} + ... + V_{j+k}) > 2^i ε } ≤ 2 P{ V_{j+1} + ... + V_{j+2^{i+1}} > 2^i ε − ( 2(σ_{j+1}² + ... + σ_{j+2^{i+1}}²) )^{1/2} },

so the value of the sum of the probabilities in the right hand side of (3.2.7) does not exceed

(3.2.9)
Σ_{i=m}^∞ 2 P{ V_{j+1} + ... + V_{j+2^{i+1}} > 2^i ε/2 }   (for any sufficiently large n)
= Σ_{i=m}^∞ 2 P{ (V_{j+1} + ... + V_{j+2^{i+1}})/2^{i+1} > ε/4 }
≤ Σ_{i=m}^∞ 2c/2^{2i+2}   (by Lemma 3.2.2; the constant c does not depend on j and i)
= ( 2c/2^{2(m+1)} ) Σ_{i=0}^∞ 1/2^{2i} < c₁/n²   (since 2^{m+1} > n),

where c₁ does not depend on j and n.

This completes the proof of Lemma 3.2.3. □
Finally, we need to prove a lemma that will use the result of Lemma 3.2.3 to tie the estimation of P{N₃(t) ≥ k} to that of P{S_k ≤ t}.

Lemma 3.2.4.

If the four conditions (3.2.1) are satisfied, then

(3.2.10) | P{ there exists k ≥ n such that S_k ≤ x } − P{ S_n ≤ x } | = O(n^{−1/2}),

where the constant in O does not depend on n.
Proof.

For any real x,

0 ≤ P{ there exists k ≥ n such that S_k ≤ x } − P{ S_n ≤ x }
= P{ S_n > x, there exists k > n such that S_k ≤ x }
≤ P{ max_{0≤j≤n} S_j > x, there exists k > n such that S_k ≤ x }

(3.2.11)
= Σ_{j=0}^n P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x, there exists k > n such that S_k ≤ x }.

The last equality holds because the first index j at which S_j exceeds x splits the event into disjoint pieces; the independence of X₁, X₂, ... will be used in the estimate below.
Let us show that

(3.2.12)
P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x, there exists k > n such that S_k ≤ x }
≤ c(n−j+1)^{−2} P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x },

where the constant c does not depend on n or j.

Set m = n − j, l = k − j. Then

(3.2.13)
P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x, there exists k > n such that S_k ≤ x }
≤ P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x } · P{ there exists l > m such that X_{j+1} + ... + X_{j+l} < 0 },

because on the event in question S_k − S_j = X_{j+1} + ... + X_{j+l} < 0 for some l > m, and this latter event is independent of X₁, ..., X_j. Since

μ_{j+1} + ... + μ_{j+l} ≥ lμ − 2D   (by condition (3.2.1, ii)),

the second factor in the right hand side of (3.2.13) does not exceed

(3.2.14) P{ inf_{l>m} ( (Z_{j+1} + ... + Z_{j+l})/l − 2D/l ) < −μ }.

For m > 4D/μ we have 2D/l < μ/2 for all l > m, so the value of the probability in (3.2.14) does not exceed

P{ inf_{l≥m+1} (Z_{j+1} + ... + Z_{j+l})/l < −μ/2 }

(by Lemma 3.2.3)

(3.2.15) = O(m^{−2}),

where the constant in O does not depend on n or j. Since the value of the probability is not greater than 1, by choosing the constant in O appropriately, we can assure that (3.2.15) holds for m ≤ 4D/μ too, while that constant in O is independent of n and j. Combining (3.2.13), (3.2.14) and (3.2.15) and substituting n−j for m, we prove that (3.2.12) holds.
Therefore the expression in the right hand side of (3.2.11) does not exceed

(3.2.16) c Σ_{j=0}^n (n−j+1)^{−2} P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x } = I₁ + I₂,

where c does not depend on n or j, and I₁ and I₂ denote the parts of the sum with j ≤ [n/2] and j > [n/2], respectively. We have

(3.2.17) I₁ ≤ c Σ_{j=0}^{[n/2]} (n−j+1)^{−2} = O(n^{−1}).

Now we need to estimate I₂. Ahmad (1981, see formulae (2.14) and (2.15) in his paper) proved correctly that the following estimate holds:

(3.2.18) P{ S_0 ≤ x, ..., S_{j−1} ≤ x, S_j > x } ≤ c j^{−1/2},

where the constant c is independent of x and j. For j > [n/2], the expression in the right hand side of (3.2.18) does not exceed c₂n^{−1/2}, where c₂ is independent of x and j. Therefore

(3.2.19) I₂ ≤ c₂n^{−1/2} Σ_{j=[n/2]+1}^n (n−j+1)^{−2} = O(n^{−1/2}).

By combining (3.2.11), (3.2.16), (3.2.17) and (3.2.19), we complete the proof of the lemma.
3.3. A CENTRAL LIMIT THEOREM FOR THE DIFFERENT TYPES
OF THE RENEWAL PROCESSES

In this section we state and prove a rather general Central Limit Theorem for a renewal process based on a sequence of independent random variables X₁, X₂, ..., not necessarily identically distributed and not necessarily positive. That theorem will cover all three different definitions of the renewal process: N₁(t), N₂(t), and N₃(t). The same approximation and the same remainder term will apply in each case.

We will use the notation consistent with that used in Sections 3.1 and 3.2:

Z_n = X_n − μ_n, n = 1, 2, ...;

also,

k(x,t) = [t/μ + xB_{[t/μ]}/μ] + 1.

The following theorem is the main result of this chapter.
Theorem 3.3.1.

If conditions (3.2.1, (i)–(iv)) are satisfied, then the following three formulae are true:

(3.3.1) P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = Φ(x) + O(t^{−1/2}),

(3.3.2) P{ (N₂(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = Φ(x) + O(t^{−1/2}),

and

(3.3.3) P{ (N₃(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = Φ(x) + O(t^{−1/2}),

uniformly in x, as t → ∞.
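Theorem 3.3.1 can be sanity-checked by simulation. The sketch below uses independent positive steps X_n = μ_n + U(−1/2, 1/2) with alternating means μ_n = 1 + 0.2(−1)^n, so that (3.2.1, ii) holds with μ = 1 and D = 0.2; all concrete choices (the uniform steps, t, the evaluation grid) are ours, not the dissertation's:

```python
import math
import random

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_N(t, rng):
    """N(t) for positive steps X_n = mu_n + U(-1/2, 1/2), mu_n = 1 + 0.2*(-1)^n."""
    s, n = 0.0, 0
    while s <= t:
        n += 1
        s += 1.0 + 0.2 * (-1) ** n + rng.uniform(-0.5, 0.5)
    return n - 1   # largest n with S_n <= t (the steps are positive)

rng = random.Random(3)
t, mu = 900.0, 1.0
scale = math.sqrt(int(t / mu) / 12.0) / mu   # B_[t/mu]/mu; here each sigma_n^2 = 1/12
sample = [(sample_N(t, rng) - t / mu) / scale for _ in range(4000)]
errors = [abs(sum(v <= x for v in sample) / len(sample) - phi(x))
          for x in (-1.0, 0.0, 1.0)]
# the empirical CDF sits within a few percent of Phi, as (3.3.1) predicts
```

The residual discrepancy combines Monte Carlo noise, the lattice nature of N(t), and the genuine O(t^{−1/2}) remainder; all three shrink as t and the number of replications grow.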
Proof.

Since N₁(t) ≤ N₂(t) ≤ N₃(t), it is sufficient to prove (3.3.1) and (3.3.3) only.

Let us evaluate the probability involved in (3.3.1). We will use the following identity:

(3.3.4) P{ N₁(t) ≥ k } = P{ max_{1≤n≤k} S_n ≤ t }.

It generalizes Feller's identity (2.3.1), which was only applicable to positive X_n's. We obtain

(3.3.5)
P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = 1 − P{ N₁(t) ≥ k(x,t) } = 1 − P{ max_{1≤n≤k(x,t)} S_n ≤ t }.

By performing calculations similar to the above and using the identity

(3.3.6) P{ N₃(t) ≥ k } = P{ there exists n ≥ k such that S_n ≤ t },

which follows from the definition of N₃(t), we get

(3.3.7) P{ (N₃(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = 1 − P{ there exists n ≥ k(x,t) such that S_n ≤ t }.
We will consider two main cases.

Case 1: x < −t/(2B_{[t/μ]}), and

Case 2: x ≥ −t/(2B_{[t/μ]}).

In Case 1, we will show that, uniformly in x,

(3.3.8) 1 − P{ N₁(t) ≥ k(x,t) } = O(t^{−1/2})

and

(3.3.9) Φ(x) = O(t^{−1/2})

as t → ∞. That will prove the first part of the theorem, (3.3.1), since as we saw in (3.3.5),

P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = 1 − P{ N₁(t) ≥ k(x,t) }.

Moreover, since N₃(t) ≥ N₁(t), it will follow from (3.3.8) that

1 − P{ N₃(t) ≥ k(x,t) } = O(t^{−1/2}) as t → ∞,

uniformly in x. This result together with (3.3.7) shows that

(3.3.10) P{ (N₃(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = O(t^{−1/2}) as t → ∞,

uniformly in x. Together with (3.3.9), equation (3.3.10) proves the theorem for N₃(t).

Thus by proving (3.3.8) and (3.3.9), we will complete the proof of the theorem when the Case 1 condition applies. First, (3.3.9) holds since

Φ(x) ≤ Φ( −t/(2B_{[t/μ]}) ) ≤ Φ( −t/( 2([t/μ]M₁)^{1/2} ) )   (M₁ is such that σ_n² ≤ M₁ for all n)
≤ Φ(−ct^{1/2})   (for some constant c > 0)
= O(t^{−1/2}).
Let us prove (3.3.8). Under the condition of Case 1, k(x,t) ≤ [t/2μ] + 1, so

(3.3.11) P{ N₁(t) ≥ k(x,t) } = P{ max_{1≤n≤k(x,t)} S_n ≤ t } ≥ P{ max_{1≤n≤[t/2μ]+1} S_n ≤ t }.

According to Lemma 3.2.1, the probability in the right hand side of (3.3.11) is equal to

(3.3.12)
P{ S_{[t/2μ]+1} ≤ t } + O( ([t/2μ]+1)^{−1/2} )
= P{ (1/B_{[t/2μ]+1}) Σ_{j=1}^{[t/2μ]+1} Z_j ≤ ( t − Σ_{j=1}^{[t/2μ]+1} μ_j )/B_{[t/2μ]+1} } + O(t^{−1/2}).

The constant in O may depend only on the parameters μ, σ, D, g and M from conditions (3.2.1). From the theorem proved by Bikelis (1966), it follows that the value of the above expression is equal to

(3.3.13) Φ( ( t − Σ_{j=1}^{[t/2μ]+1} μ_j )/B_{[t/2μ]+1} ) + O( B_{[t/2μ]+1}^{−1} ) + O(t^{−1/2}).

Because of condition (3.2.1, iii),

O( B_{[t/2μ]+1}^{−1} ) = O(t^{−1/2}) as t → ∞.

Also,

( t − Σ_{j=1}^{[t/2μ]+1} μ_j )/B_{[t/2μ]+1} ≥ ( t − {[t/2μ]+1}μ − D )/( M₁([t/2μ]+1) )^{1/2} > c₁t^{1/2}

for some c₁ > 0. Therefore, the value of the expression in (3.3.13) is greater than

Φ(c₁t^{1/2}) + O(t^{−1/2}) = 1 + O(t^{−1/2}).

Hence by combining (3.3.11), (3.3.12) and (3.3.13), we obtain

P{ N₁(t) ≥ k(x,t) } ≥ 1 + O(t^{−1/2}),

and thus

1 − P{ N₁(t) ≥ k(x,t) } = O(t^{−1/2}) as t → ∞.

This proves (3.3.8) and therefore, under the condition of Case 1, the theorem is proved.
Let us now consider Case 2, i.e., when x ≥ −t/(2B_{[t/μ]}). In this case,

k(x,t) = [t/μ + xB_{[t/μ]}/μ] + 1 > [t/2μ],

so {k(x,t)}^{−1/2} = O(t^{−1/2}), uniformly in x.
From (3.3.5) and Lemma 3.2.1 we get

(3.3.14)
P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = 1 − P{ max_{1≤n≤k(x,t)} S_n ≤ t }
= 1 − P{ S_{k(x,t)} ≤ t } + O( {k(x,t)}^{−1/2} )
= 1 − P{ S_{k(x,t)} ≤ t } + O(t^{−1/2})

as t → ∞, uniformly in x.

Similarly, from (3.3.7) and Lemma 3.2.4 we deduce that

(3.3.15)
P{ (N₃(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = 1 − P{ there exists n ≥ k(x,t) such that S_n ≤ t }
= 1 − P{ S_{k(x,t)} ≤ t } + O( {k(x,t)}^{−1/2} )
= 1 − P{ S_{k(x,t)} ≤ t } + O(t^{−1/2})

as t → ∞, uniformly in x.
Formulae (3.3.14) and (3.3.15) show that in order to prove the theorem for N₁(t) and N₃(t) (and hence for N₂(t) too), we only need to show that

(3.3.16) 1 − P{ S_{k(x,t)} ≤ t } = Φ(x) + O(t^{−1/2})

as t → ∞, uniformly in x. Obviously,

(3.3.17) P{ S_{k(x,t)} ≤ t } = P{ (1/B_{k(x,t)}) Σ_{j=1}^{k(x,t)} Z_j ≤ ( t − Σ_{j=1}^{k(x,t)} μ_j )/B_{k(x,t)} }.

According to Theorem 2.2.2, the above probability is equal to

(3.3.18) Φ( ( t − Σ_{j=1}^{k(x,t)} μ_j )/B_{k(x,t)} ) + O( Σ_{j=1}^{k(x,t)} E|Z_j|³ / B_{k(x,t)}³ ),

with an absolute constant in O. Since

(3.3.19) Σ_{j=1}^{k(x,t)} E|Z_j|³ / B_{k(x,t)}³ ≤ M k(x,t)/{g k(x,t)}^{3/2} = O(t^{−1/2}),

uniformly in x, it follows from (3.3.17) and (3.3.18) that

(3.3.20) 1 − P{ S_{k(x,t)} ≤ t } = Φ( ( Σ_{j=1}^{k(x,t)} μ_j − t )/B_{k(x,t)} ) + O(t^{−1/2}).
Hence, in order to obtain a complete proof of the theorem, all we have to do is to show that

(3.3.21) | Φ( ( Σ_{j=1}^{k(x,t)} μ_j − t )/B_{k(x,t)} ) − Φ(x) | = O(t^{−1/2})

as t → ∞, uniformly in x.

By using conditions (3.2.1, (ii) and (iii)), we obtain

(3.3.22)
( Σ_{j=1}^{k(x,t)} μ_j − t )/B_{k(x,t)} = ( k(x,t)μ − t )/B_{k(x,t)} + ( Σ_{j=1}^{k(x,t)} μ_j − k(x,t)μ )/B_{k(x,t)}
= ( k(x,t)μ − t )/B_{k(x,t)} + O(t^{−1/2}),

uniformly in x. Since |Φ(y₁) − Φ(y₂)| ≤ |y₁ − y₂| for any real y₁, y₂, in order to prove (3.3.21) it will suffice to show that under the condition of Case 2,
(3.3.23) | Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) − Φ(x) | = O(t^{−1/2})

as t → ∞, uniformly in x.

We will break Case 2 into the following subcases:

Case 2.1: x > t^{1/6},

Case 2.2: 0 ≤ x ≤ t^{1/6}, and

Case 2.3: −t/(2B_{[t/μ]}) ≤ x < 0.
First, let us consider Case 2.1. Under its condition, one can see that

1 − Φ(x) = O(t^{−1/2}),

uniformly in x. So we need to show that

(3.3.24) 1 − Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) < ct^{−1/2},

for some constant c. Let us evaluate the argument of Φ in (3.3.24). We have

(3.3.25)
( k(x,t)μ − t )/B_{k(x,t)} = ( {[t/μ + xB_{[t/μ]}/μ] + 1}μ − t )/B_{k(x,t)}
≥ xB_{[t/μ]}/B_{k(x,t)}
= x( Σ_{j=1}^{[t/μ]} σ_j² / Σ_{j=1}^{k(x,t)} σ_j² )^{1/2}
= x{ 1/(1 + A₁) }^{1/2},

where

A₁ = Σ_{j=[t/μ]+1}^{k(x,t)} σ_j² / Σ_{j=1}^{[t/μ]} σ_j².

We have

A₁ ≤ M₁{ k(x,t) − [t/μ] }/( g[t/μ] )

(3.3.26)
≤ M₁( xB_{[t/μ]}/μ + 1 )/( g[t/μ] )   (from the definition of k(x,t))
≤ c₁x/t^{1/2}

for some constant c₁ independent of x and t. Hence the value of the expression in the right hand side of (3.3.25) is greater than or equal to

(3.3.27) x{ 1/(1 + c₁x/t^{1/2}) }^{1/2} = 1/( 1/x² + c₁/(xt^{1/2}) )^{1/2}.

Since x > t^{1/6},

1/x² + c₁/(xt^{1/2}) < c₂t^{−1/3},

for some c₂. Therefore,

(3.3.28) 1/( 1/x² + c₁/(xt^{1/2}) )^{1/2} > c₃t^{1/6},

for some c₃ independent of x and t. Combining (3.3.25), (3.3.27) and (3.3.28), we prove that

(3.3.29) ( k(x,t)μ − t )/B_{k(x,t)} > c₃t^{1/6},

and that proves (3.3.24). Thus in Case 2.1, the theorem is proved.
Now, let us consider Case 2.2, i.e., when 0 ≤ x ≤ t^{1/6}. In this case,

(3.3.30)
x − ( k(x,t)μ − t )/B_{k(x,t)} = ( xB_{k(x,t)} − {[t/μ + xB_{[t/μ]}/μ] + 1}μ + t )/B_{k(x,t)}
= x{ 1 − B_{[t/μ]}/B_{k(x,t)} } + O(t^{−1/2})
= x{ 1 − ( 1/(1 + A₂) )^{1/2} } + O(t^{−1/2}),

where

A₂ = Σ_{j=[t/μ]+1}^{k(x,t)} σ_j² / Σ_{j=1}^{[t/μ]} σ_j².

By performing the calculations similar to those in (3.3.26), we find that

(3.3.31) A₂ ≤ { k(x,t) − [t/μ] }M₁/( g[t/μ] ) < c(1 + x)/t^{1/2},

for some c that does not depend on x and t. Hence the right hand side of (3.3.30) does not exceed

(3.3.32) cx(1 + x)/t^{1/2} + O(t^{−1/2}).
Therefore, it follows from (3.3.30) that

(3.3.33) x − ( k(x,t)μ − t )/B_{k(x,t)} ≤ c₁(1 + x²)/t^{1/2},

for some constant c₁. Direct calculations also show that

(3.3.34) x − ( k(x,t)μ − t )/B_{k(x,t)} > −c₂/t^{1/2},

for some c₂ > 0 independent of x and t. Thus, if x ≤ 1 then

(3.3.35)
| Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) − Φ(x) | ≤ | ( k(x,t)μ − t )/B_{k(x,t)} − x |   (according to (3.3.33) and (3.3.34))
= O(t^{−1/2}),

so (3.3.23) holds. Similarly, if x > 1 and x ≤ ( k(x,t)μ − t )/B_{k(x,t)}, then by (3.3.34),

0 ≤ Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) − Φ(x) ≤ ( k(x,t)μ − t )/B_{k(x,t)} − x < c₂t^{−1/2},

so (3.3.23) holds. So we can assume that x > 1 and x > ( k(x,t)μ − t )/B_{k(x,t)}. Suppose that t is large enough so that

( k(x,t)μ − t )/B_{k(x,t)} ≥ x/2 > 0.

(This can be done, by (3.3.33), since 1 < x ≤ t^{1/6}.) Hence

Φ(x) − Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) = (2π)^{−1/2} ∫_{(k(x,t)μ−t)/B_{k(x,t)}}^{x} e^{−y²/2} dy
≤ (2π)^{−1/2} exp{ −(1/2)( ( k(x,t)μ − t )/B_{k(x,t)} )² } ( x − ( k(x,t)μ − t )/B_{k(x,t)} )
≤ (2π)^{−1/2} exp(−x²/8) · c₁(1 + x²)/t^{1/2}

(3.3.36) = O(t^{−1/2})

as t → ∞, uniformly in x.

This proves (3.3.23). So the theorem is proved under the condition of Case 2.2 too.
Finally, consider Case 2.3, i.e., when −t/(2B_{[t/μ]}) ≤ x < 0. In this case, k(x,t) ≤ [t/μ] + 1. If

(3.3.37) k(x,t) = [t/μ] + 1,

then |x|B_{[t/μ]}/μ < 1, so |x| = O(t^{−1/2}). Also, if (3.3.37) holds then

( k(x,t)μ − t )/B_{k(x,t)} = O(t^{−1/2}).

This shows that

| x − ( k(x,t)μ − t )/B_{k(x,t)} | = O(t^{−1/2}) as t → ∞,

uniformly in x. This proves (3.3.23) when condition (3.3.37) holds.

Suppose now that k(x,t) ≤ [t/μ]. Then

(3.3.38)
x − ( k(x,t)μ − t )/B_{k(x,t)} ≥ ( xB_{k(x,t)} − (t/μ + xB_{[t/μ]}/μ)μ − μ + t )/B_{k(x,t)}
= (−x)( B_{[t/μ]} − B_{k(x,t)} )/B_{k(x,t)} − μ/B_{k(x,t)}
≥ −μ/B_{k(x,t)} > −ct^{−1/2},

for some c > 0. So if x ≤ ( k(x,t)μ − t )/B_{k(x,t)}, then

0 ≤ Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) − Φ(x) < ct^{−1/2},

so (3.3.23) holds in this situation too. So we only have to consider the case when x > ( k(x,t)μ − t )/B_{k(x,t)}.
Performing the calculations similar to those in (3.3.25) and (3.3.30), we obtain:

(3.3.39)
0 < x − ( k(x,t)μ − t )/B_{k(x,t)} ≤ x( 1 − B_{[t/μ]}/B_{k(x,t)} ) + O(t^{−1/2})
= |x|{ ( 1/(1 − A₃) )^{1/2} − 1 } + O(t^{−1/2}),

where

A₃ = Σ_{j=k(x,t)+1}^{[t/μ]} σ_j² / Σ_{j=1}^{[t/μ]} σ_j².

We have

A₃ ≤ M₁{ [t/μ] − k(x,t) }/( g[t/μ] ) = M₁{ [t/μ] − [t/μ − |x|B_{[t/μ]}/μ] − 1 }/( g[t/μ] )

(3.3.40)
≤ c(1 + |x|)/t^{1/2}

for some c > 0 that does not depend on x and t. Therefore the right hand side of (3.3.39) does not exceed

(3.3.41) c₁|x|(1 + |x|)/t^{1/2} + O(t^{−1/2}),

for an appropriate c₁ > 0. We showed, therefore, that

0 < x − ( k(x,t)μ − t )/B_{k(x,t)} ≤ c₁|x|(1 + |x|)/t^{1/2} + O(t^{−1/2}).

Hence

Φ(x) − Φ( ( k(x,t)μ − t )/B_{k(x,t)} ) = (2π)^{−1/2} ∫_{(k(x,t)μ−t)/B_{k(x,t)}}^{x} e^{−y²/2} dy
≤ (2π)^{−1/2} exp(−x²/2) ( x − ( k(x,t)μ − t )/B_{k(x,t)} )
≤ (2π)^{−1/2} exp(−x²/2) { c₁|x|(1 + |x|)/t^{1/2} + O(t^{−1/2}) }
= O(t^{−1/2}),

uniformly in x. This proves (3.3.23) under the condition of Case 2.3.

Theorem 3.3.1 is now completely proved. □
3.4. THE NECESSITY OF CONDITIONS IN THE
CENTRAL LIMIT THEOREM

In Section 3.3 we proved the Central Limit Theorem for a renewal process. That renewal process was based on a sequence of independent random variables X₁, X₂, ... with finite absolute third moments. Also, we required conditions (3.2.1, (i)–(iv)) to hold. In this section, we raise the question of the necessity of these conditions.

First, let us note that parts (iii) and (iv) of (3.2.1) are not unusual. Very similar conditions are imposed by Petrov (1960) in his Central Limit Theorem for sequences of independent non-identically distributed random variables with zero means, and by Ahmad (1981) in the Central Limit Theorem for the renewal processes.

Condition (3.2.1, (i)) was also assumed by Ahmad in that portion of his 1981 paper where he proved a lemma for the non-identically distributed X_n's. This condition assures the (a.s.) finiteness of N_i(t), i = 1, 2, 3.

The only new condition that we introduce is (3.2.1, (ii)). It may look too restrictive. For example, Cox and Smith (1953) were able to achieve significant results in renewal theory by assuming that a different condition on the μ_n's is satisfied. They introduced the so-called "stable sequences."
Definition (Cox and Smith, 1953).

A sequence {μ_n} such that

(3.4.1) lim_{p→∞} (1/(p+1)) Σ_{j=n}^{n+p} μ_j = μ, uniformly in n,

is called stable with average μ.

Our condition (3.2.1, (ii)), instead, requires the existence of positive numbers μ and D such that

(3.4.2) | Σ_{j=1}^n μ_j − nμ | < D

for all n ≥ 1.
The following proposition provides a comparison between the stable sequences and those satisfying (3.4.2).

Proposition 3.4.1.

(i) If a sequence {μ_n} satisfies condition (3.4.2), then it is stable.

(ii) There exist stable sequences that do not satisfy (3.4.2).

Proof.

(i) Suppose {μ_n} satisfies (3.4.2) with some D and μ > 0. Then

(3.4.3)
| (1/(p+1)) Σ_{j=n}^{n+p} μ_j − μ | = (1/(p+1)) | ( Σ_{j=1}^{n+p} μ_j − (n+p)μ ) − ( Σ_{j=1}^{n−1} μ_j − (n−1)μ ) | < 2D/(p+1),

hence

lim_{p→∞} (1/(p+1)) Σ_{j=n}^{n+p} μ_j = μ, uniformly in n.

That proves (i).

(ii) Consider the following sequence: μ_n = μ + n^{1/2} − (n−1)^{1/2}, n ≥ 1, where μ > 0 is fixed. The sequence {μ_n} is stable since for any n,

(3.4.4) (1/(p+1)) Σ_{j=n}^{n+p} μ_j = (1/(p+1)) { (p+1)μ + (n+p)^{1/2} − (n−1)^{1/2} } → μ as p → ∞, uniformly in n,

which proves that (3.4.1) holds. Let us now suppose that (3.4.2) also holds for {μ_n}, i.e., there exist μ′, D > 0 such that for all n, |Σ_{j=1}^n μ_j − nμ′| < D. Evidently, by telescoping,

(3.4.5) Σ_{j=1}^n μ_j − nμ′ = n(μ − μ′) + n^{1/2}.

That shows that |Σ_{j=1}^n μ_j − nμ′| → ∞ as n → ∞ for any choice of μ′. Therefore, {μ_n} does not satisfy (3.4.2) and that completes the proof of the proposition. □
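The counterexample in part (ii) is easy to verify numerically: the window averages of μ_n = μ + n^{1/2} − (n−1)^{1/2} settle at μ, while the partial-sum deviation controlled by (3.4.2) equals n^{1/2} exactly. The particular values below are our illustration:

```python
import math

mu = 1.0

def mu_n(n):
    # mu_n = mu + sqrt(n) - sqrt(n-1), n >= 1: the stable sequence of part (ii)
    return mu + math.sqrt(n) - math.sqrt(n - 1)

def window_avg(n, p):
    # (1/(p+1)) * sum_{j=n}^{n+p} mu_j, the quantity appearing in (3.4.1)
    return sum(mu_n(j) for j in range(n, n + p + 1)) / (p + 1)

def deviation(n, mu_prime):
    # |sum_{j=1}^{n} mu_j - n*mu'|, the quantity bounded in (3.4.2)
    return abs(sum(mu_n(j) for j in range(1, n + 1)) - n * mu_prime)
```

By telescoping, Σ_{j≤n} μ_j = nμ + n^{1/2}, so the deviation grows like n^{1/2} when μ′ = μ and linearly for any other μ′, while the window average converges to μ at rate O(1/p), uniformly in n.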
In the paper mentioned above, Cox and Smith achieved some significant results by only assuming that the sequence {μ_n} of means of the random variables X₁, X₂, ... is a stable sequence. In Theorem 3.3.1, we require that {μ_n} satisfies a stronger condition. The following theorem shows that even if we wanted to prove Theorem 3.3.1 for N₁(t) only, and even if we did not want a uniform (in x) bound for the remainder term, we would still need condition (3.2.1, (ii)) (or (3.4.2), as it was restated in this section) to hold.
Theorem 3.4.1.

Suppose that conditions (3.2.1, (i), (iii) and (iv)) are satisfied with some positive σ, g and M, and that for some μ > 0,

(3.4.6) P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } − Φ(x) = O(t^{−1/2}),

where the constant in O may depend on x. Then there exists D > 0 such that for all n ≥ 1,

| Σ_{j=1}^n μ_j − nμ | ≤ D,

i.e., condition (3.2.1, (ii)) holds.
Proof.

It has been shown in the course of the proof of Theorem 3.3.1 that with conditions (3.2.1, (i), (iii) and (iv)) satisfied,

(3.4.7) P{ (N₁(t) − t/μ)/(μ^{−1}B_{[t/μ]}) ≤ x } = Φ( ( Σ_{j=1}^{k(x,t)} μ_j − t )/B_{k(x,t)} ) + O(t^{−1/2}).

From (3.4.6) and (3.4.7) it follows that

(3.4.8) | Φ( ( Σ_{j=1}^{k(x,t)} μ_j − t )/B_{k(x,t)} ) − Φ(x) | < c(x)/t^{1/2},

where c(x) is a constant that may depend on x but not on t.

Set x = 0 and suppose with no loss of generality that t tends to +∞ through such values that t/μ is an integer. Then we have Φ(x) = 1/2 and k(x,t) = t/μ + 1.

Denote z_t = ( Σ_{j=1}^{t/μ+1} μ_j − t )/B_{t/μ+1}. Set c₁ = 2(2π)^{1/2}c(0) and suppose t is large enough so that

(3.4.9) exp(−c₁²/2t) > 1/2.

Let us show that

(3.4.10) |z_t| ≤ c₁t^{−1/2}.

Indeed, suppose (3.4.10) is not true. Set y_t = c₁t^{−1/2}. Evidently,

(3.4.11)
c(0)t^{−1/2} > | Φ(z_t) − Φ(0) |   (by (3.4.8) with x = 0)
≥ | Φ(y_t) − Φ(0) | ≥ y_t(2π)^{−1/2}exp(−y_t²/2)
= c₁t^{−1/2}(2π)^{−1/2}exp(−c₁²/2t)
> c₁t^{−1/2}/( 2(2π)^{1/2} ) = c(0)t^{−1/2}   (according to (3.4.9)).

This contradiction shows that (3.4.10) holds. Hence

(3.4.12) | Σ_{j=1}^{t/μ+1} μ_j − t | ≤ c₁t^{−1/2}B_{t/μ+1} ≤ c₁t^{−1/2}{ M₁(t/μ + 1) }^{1/2} ≤ c₂,

where M₁ is such that σ_j² < M₁ for all j and c₂ is a constant independent of t. Set n = t/μ + 1. Inequalities (3.4.12) show that

c₂ ≥ | Σ_{j=1}^n μ_j − nμ + μ | ≥ | Σ_{j=1}^n μ_j − nμ | − μ,

so condition (3.2.1, (ii)) has to be satisfied (with D = c₂ + μ) for all sufficiently large n. Therefore, it is satisfied for all n ≥ 1 with an appropriate D.

This completes the proof of Theorem 3.4.1. □
CHAPTER IV

CENTRAL LIMIT THEOREMS FOR THE CUMULATIVE PROCESSES

4.1. INTRODUCTION

In this chapter we study properties of a cumulative process W(t) defined in Section 1.1. Instead of referring to F(x,y), the joint distribution function of X₁ and Y₁, we will use the conditional distribution function G_x(y), which uniquely determines the distribution of Y₁ when X₁ is equal to x. The existence of G_x(y) is guaranteed for any joint distribution function F(x,y) (see Doob (1953)).

Throughout this chapter we will assume that X₁ (and hence every X_j, j = 1, 2, ...) satisfies the following so-called Cramér condition: there exists g > 0 such that

(4.1.1) Ee^{gX₁} < ∞.

Condition (4.1.1) assures, in particular, that all moments of X₁ are finite. We also assume that there exists a real number r > 2 such that

(4.1.2) E|Y₁|^r < ∞.

We are interested in obtaining Central Limit Theorems with a remainder term whose order depends on r.

We will use the following notation:

(4.1.3)
ψ(θ) — the characteristic function of Y₁,
κ(θ) — the cumulative function of Y₁ (see Appendix A for the definition).

From the definitions it follows that ψ(θ) = exp(κ(θ)) whenever ψ(θ) ≠ 0.

Define also

(4.1.4) V(t).

V(t) is a normalized cumulative process. Our Central Limit Theorems will be stated for V(t), not for W(t).

In all O( ) expressions the constants are independent of t and θ. The words "for all sufficiently large t" mean "for t > t₀, where t₀ depends on the joint distribution of (X₁, Y₁) only."
In Section 4.2 we examine the behavior of the distribution function of the random variable R(t), where for a given t,

(4.1.5) R(t) = ( N(t) + 1 − t/μ )/( σt^{1/2}/μ^{3/2} ).

It turns out that, because of the Cramér condition (4.1.1), the tails of the distribution function of R(t) can be bounded uniformly in t by an exponentially decreasing function. In Section 4.3 we prove several lemmas that allow us to estimate the cumulative function of Y₁ under an assumption of finiteness of its higher moments.

In Section 4.4, we prove a lemma that later helps to estimate the error in the characteristic function of V(t) caused by an approximation of N(t) by its mean. Then in Section 4.5 we make a necessary evaluation of the behavior of the characteristic function of V(t) as t tends to ∞.

Section 4.6 contains two Central Limit Theorems for V(t) based on Y_n's with zero means. The difference between the two is in the assumptions on the moments of Y₁. The deepest of these results is Theorem 4.6.1. It provides the smallest remainder term. Both theorems in this section constitute a significant improvement over Theorem 2.4.1.

In Section 4.7 we drop the assumption of zero means and prove two CLTs that apply to arbitrary Y's with a finite moment of a certain order.

Section 4.8 is dedicated to the estimation of average remainder terms in the Central Limit Theorems proved in the previous section. The main result of this section can be viewed as an extension of Theorem 2.2.3 into the area of the cumulative processes.

Finally, in Section 4.9 we discuss some further possible improvements in the Central Limit Theorems.

Results of this chapter bring the theory of the Central Limit Theorems for the cumulative processes closer to the level where the theory of the CLTs for the renewal processes and for the sums of independent random variables stands.
4.2. ESTIMATES OF THE TAILS OF THE DISTRIBUTION FUNCTION
OF R(t)

In this section we prove a lemma whose result shows how close, in probabilistic terms, the value of N(t) is to t/μ. While this lemma does not refer to the random variables Y₁, Y₂, ... at all, its result plays a critical role in estimating the characteristic function of V(t).

The random variable R(t) is defined by (4.1.5). Let F_{R(t)} be the distribution function of R(t), i.e., F_{R(t)}(x) = P{R(t) ≤ x}.
Lemma 4.2.1.

There exist positive real constants α and ν that do not depend upon x or t such that for all sufficiently large t, the following inequalities are satisfied:

(i) for any x ≥ 0,

(4.2.1) 1 − F_{R(t)}(x) < αe^{−νx},

and

(ii) for any x ≤ 0,

(4.2.2) F_{R(t)}(x) < αe^{−ν|x|}.
Proof.

We will first find α and ν that satisfy (4.2.1). The following three cases will be considered separately.

Case 1: 0 ≤ x ≤ t^{1/3},

Case 2: t^{1/3} < x ≤ Ct^{1/2}, where C is a constant that will be chosen later, and

Case 3: x > Ct^{1/2}.
We will consider Case 3 first. For all sufficiently large t,

(4.2.3) 1 − F_{R(t)}(x) = P{R(t) > x} ≤ P{ N(t) ≥ [xσt^{1/2}/μ^{3/2}] + 1 }.

Let us show that for any integer k ≥ 0,

(4.2.4) P{ N(t) ≥ k } ≤ e^t ( Ee^{−X₁} )^k.

Write S_k for X₁ + ... + X_k and let F_k be the distribution function of S_k. Evidently,

( Ee^{−X₁} )^k = Ee^{−S_k} = ∫₀^∞ e^{−x} dF_k(x) ≥ ∫₀^t e^{−x} dF_k(x) ≥ e^{−t} ∫₀^t dF_k(x) = e^{−t} P{ S_k ≤ t } = e^{−t} P{ N(t) ≥ k }   (according to (2.3.1)).

That proves (4.2.4). Taking k = [xσt^{1/2}/μ^{3/2}] + 1 in (4.2.4), we deduce from (4.2.3) that

(4.2.5) 1 − F_{R(t)}(x) ≤ e^t ( Ee^{−X₁} )^{[xσt^{1/2}/μ^{3/2}]+1}.

Since 0 < Ee^{−X₁} < 1, there exists δ > 0 such that

(4.2.6) e^{−δ} = Ee^{−X₁}.

Hence

(4.2.7) 1 − F_{R(t)}(x) ≤ e^t e^{−δ{[xσt^{1/2}/μ^{3/2}]+1}} < e^{t − xδσt^{1/2}/μ^{3/2}}.

Fix any ν > 0. Assume that t is large enough so that

(4.2.8) ν < δσt^{1/2}/( 2μ^{3/2} ).

Choose

(4.2.9) C = 2μ^{3/2}/( δσ ).

That will be the choice of C that defines the condition of Case 3. Then

(4.2.10)
xδσt^{1/2}/μ^{3/2} − νx ≥ xδσt^{1/2}/( 2μ^{3/2} )   (because of (4.2.8))
> Ct^{1/2} · δσt^{1/2}/( 2μ^{3/2} )   (since x > Ct^{1/2})
= t   (by the choice of C).

That shows that

(4.2.11) t − δσxt^{1/2}/μ^{3/2} < −νx,

so it follows from (4.2.7) that 1 − F_{R(t)}(x) < e^{−νx}. Hence in Case 3, (4.2.1) is satisfied for any given ν > 0 with α = 1 as long as t is sufficiently large.
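The Chernoff-type bound (4.2.4) can be checked numerically in the Exp(1) special case, where Ee^{−X₁} = 1/2 and P{N(t) ≥ k} is an exact Poisson tail (this specialization is ours, not the dissertation's):

```python
import math

def poisson_tail(t, k):
    # P{N(t) >= k} = P{Poisson(t) >= k} for a rate-1 Poisson process
    return 1.0 - math.exp(-t) * sum(t ** i / math.factorial(i) for i in range(k))

t = 10.0
laplace = 0.5   # Ee^{-X_1} = 1/(1+1) for X_1 ~ Exp(1)
checks = [(poisson_tail(t, k), math.exp(t) * laplace ** k) for k in (30, 40, 50, 60)]
# each exact probability sits below the bound e^t * (Ee^{-X_1})^k of (4.2.4)
```

The bound is crude for k near t/μ (it can exceed 1) but decays geometrically in k, which is all that the Case 3 argument needs.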
Let us now evaluate the left hand side of (4.2.1) when Case 1 or Case 2 applies. Denote Z_j = (X_j − μ)/σ for j = 1, 2, ..., and k(x,t) = [t/μ + xσt^{1/2}/μ^{3/2}]. We have

(4.2.12)
1 − F_{R(t)}(x) = P{ N(t) ≥ k(x,t) }   (by the definition of k(x,t))
= P{ S_{k(x,t)} ≤ t }   (according to (2.3.1))
= P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ ( t − [t/μ + xσt^{1/2}/μ^{3/2}]μ )/( σ{k(x,t)}^{1/2} ) }
≤ P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₁ },

where

(4.2.13) x₁ = ( xσ(t/μ)^{1/2} − μ )/( σ{k(x,t)}^{1/2} ).

We can assume with no loss of generality that x₁ > 0. Indeed, if x₁ ≤ 0 then x is bounded by a constant as t → ∞ (actually, x = O(t^{−1/2}), but we do not even need that), so αe^{−νx} can be made greater than 1 with an appropriate choice of α and ν > 0. Thus (4.2.1) holds in this case.

Suppose the condition of Case 1 applies, i.e., 0 ≤ x ≤ t^{1/3}. Then for all sufficiently large t,

(4.2.14)
x₁ = ( xσ(t/μ)^{1/2} − μ )/( σ{k(x,t)}^{1/2} )
= x(t/μ)^{1/2}/{k(x,t)}^{1/2} + O(t^{−1/2})
≤ x + O(t^{−1/2}).

Hence

(4.2.15) x₁/{k(x,t)}^{1/2} = O(t^{−1/6}) = O( {k(x,t)}^{−1/6} ),

with a constant in O independent of x as well as of t; in particular, x₁ = O( {k(x,t)}^{1/3} ).
Formula (4.2.15) shows that we can use Theorem 2.2.4 to estimate the probability in the right hand side of (4.2.12). That theorem is applicable since (4.1.1) implies that Eexp(gZ₁) < ∞, i.e., the Cramér condition is satisfied for Z₁. Also EZ₁ = 0 and Var Z₁ = 1. From Theorem 2.2.4 (see formula (2.2.11)) we obtain (using r = 4, so that r/2(r+2) = 1/3)

(4.2.16)
P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₁ }
= Φ(−x₁) exp{ −( x₁³/{k(x,t)}^{1/2} ) λ[4]( −x₁/{k(x,t)}^{1/2} ) } ( 1 + O( (x₁+1)/{k(x,t)}^{1/2} ) ).

From the definition of the functions λ[r](s) it follows that λ[4](s) is bounded as s → 0. Hence

(4.2.17) P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₁ } ≤ c₁Φ(−x₁) exp( c₂x₁³/{k(x,t)}^{1/2} ).

Here and later c₁, c₂, ... are some constants that do not depend on x or t. Since for x₁ > 0, Φ(−x₁) is bounded by c₃exp(−x₁²/2), the right hand side of (4.2.17) does not exceed

(4.2.18) c₄ exp( −x₁²/2 + c₂x₁³/{k(x,t)}^{1/2} ).

However,

(4.2.19) x₁³/{k(x,t)}^{1/2} = x₁² · ( x₁/{k(x,t)}^{1/2} ) = x₁² · O( {k(x,t)}^{−1/6} )   (from (4.2.15)),

so the value of the expression in (4.2.18) does not exceed

(4.2.20)
c₄ exp( −x₁²/2 + c₅x₁²/t^{1/6} ) ≤ c₄ exp( −c₆x₁² )
= c₄ exp( −c₆( xσ(t/μ)^{1/2} − μ )²/( σ²k(x,t) ) )
≤ c₇ exp( −c₈x²t/k(x,t) )
= c₇ exp( −c₈x²t/[t/μ + xσt^{1/2}/μ^{3/2}] )
≤ c₇ exp( −c₉x² ) ≤ c₁₀e^{−c₁₁x}.

Combining (4.2.12), (4.2.17), (4.2.18) and (4.2.20), we see that when the conditions of Case 1 apply, there exist α, ν > 0 such that (4.2.1) holds.
Let us now consider Case 2, i.e., when t^{1/3} < x ≤ Ct^{1/2}, where C is the constant defined in (4.2.9). In this case, for all sufficiently large t we have

(4.2.21) x₁ = ( xσ(t/μ)^{1/2} − μ )/( σ{k(x,t)}^{1/2} ) ≥ x(t/μ)^{1/2}/( 2{k(x,t)}^{1/2} ).

Since x > t^{1/3} and k(x,t) = O(t) uniformly in the Case 2 range (because x ≤ Ct^{1/2}), we have

(4.2.22) x(t/μ)^{1/2}/( 2{k(x,t)}^{1/2} ) ≥ c₁{k(x,t)}^{1/2}t^{−1/6}.

(Here c₁, c₂, ... is a new set of constants.) From (4.2.21) and (4.2.22) we conclude that

(4.2.23) x₁ > c₁{k(x,t)}^{1/2}t^{−1/6}.

Denote x₂ = c₁{k(x,t)}^{1/2}t^{−1/6}. Let us estimate the probability in the right hand side of (4.2.12). Since x₁ > x₂,

(4.2.24) P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₁ } ≤ P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₂ }.

Let us show that Theorem 2.2.4 can be used again. We need to verify that x₂ = O( {k(x,t)}^{1/3} ). That holds since

(4.2.25)
x₂/{k(x,t)}^{1/3} = c₁{k(x,t)}^{1/6}/t^{1/6} ≤ c₁( ( t/μ + xσt^{1/2}/μ^{3/2} )/t )^{1/6} = c₁( 1/μ + xσ/(tμ³)^{1/2} )^{1/6} = O(1),

because x ≤ Ct^{1/2}.

Therefore, Theorem 2.2.4 is applicable (with r = 4) and from it we obtain

P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ −x₂ }
≤ c₂Φ(−x₂) exp( c₃x₂³/{k(x,t)}^{1/2} )   (since the argument of λ[4] tends to 0 as t → ∞)

(4.2.26)
≤ c₄ exp( −c₁²k(x,t)/(2t^{1/3}) + c₅k(x,t)/t^{1/2} )   (we used the definition of x₂ here)
≤ c₆ exp( −c₇t^{2/3} ) ≤ αe^{−νx} for appropriate α, ν > 0, because x ≤ Ct^{1/2}.

Hence when the conditions of Case 2 apply, (4.2.1) holds for some α, ν > 0.
We have shown that (4.2.1) is satisfied in each of the three cases (with possibly different α and ν): if 0 ≤ x ≤ t^{1/3} then 1 − F_{R(t)}(x) ≤ α₁e^{−ν₁x}; if t^{1/3} < x ≤ Ct^{1/2} then 1 − F_{R(t)}(x) ≤ α₂e^{−ν₂x}; and if x > Ct^{1/2} then 1 − F_{R(t)}(x) ≤ α₃e^{−ν₃x}. Therefore (4.2.1) holds for all x ≥ 0 with α = max(α₁, α₂, α₃) and ν = min(ν₁, ν₂, ν₃). The statement in part (i) of the lemma is valid for these α and ν.

In order to find constants α and ν that will satisfy part (ii) of the lemma, we will consider the following three cases:

Case 4: −t^{1/3} ≤ x ≤ 0,

Case 5: t^{1/3} < −x ≤ (tμ)^{1/2}/σ, and

Case 6: x < −(tμ)^{1/2}/σ.

Case 6 is trivial since F_{R(t)}(x) = 0 for x < −(tμ)^{1/2}/σ, because N(t) ≥ 0.
When the conditions of Case 4 or Case 5 apply, we can write the following sequence of equalities:

(4.2.27)
F_{R(t)}(x) = P{ R(t) ≤ x } = P{ ( N(t) + 1 − t/μ )/( σt^{1/2}/μ^{3/2} ) ≤ x }
= 1 − P{ N(t) > t/μ − 1 + xσt^{1/2}/μ^{3/2} }
= 1 − P{ N(t) ≥ k(x,t) }   (where k(x,t), as before, is set equal to [t/μ + xσt^{1/2}/μ^{3/2}])
= 1 − P{ S_{k(x,t)} ≤ t } = P{ S_{k(x,t)} > t }.
When Case 4 applies,

(4.2.28)
P{ S_{k(x,t)} > t } = P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j > ( t − k(x,t)μ )/( σ{k(x,t)}^{1/2} ) }
= 1 − P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ x₁ },

where

(4.2.29) x₁ = ( t − k(x,t)μ )/( σ{k(x,t)}^{1/2} ).

Since |x| ≤ t^{1/3}, it follows that for all sufficiently large t we have c₁t ≤ k(x,t) ≤ c₂t for some c₁, c₂ > 0, and hence

(4.2.30) x₁/{k(x,t)}^{1/3} ≤ ( |x|σ(t/μ)^{1/2} + μ )/( σ{k(x,t)}^{5/6} ) ≤ c₃t^{1/3}·t^{1/2}/t^{5/6} + O(t^{−5/6}) ≤ c₄,

i.e., x₁ = O( {k(x,t)}^{1/3} ). Therefore Theorem 2.2.4 is again applicable. It yields (see formula (2.2.10))

(4.2.31)
1 − P{ {k(x,t)}^{−1/2} Σ_{j=1}^{k(x,t)} Z_j ≤ x₁ }
= { 1 − Φ(x₁) } exp{ ( x₁³/{k(x,t)}^{1/2} ) λ[4]( x₁/{k(x,t)}^{1/2} ) } ( 1 + O( (x₁+1)/{k(x,t)}^{1/2} ) )
≤ c₅ exp( −x₁²/2 + c₆x₁³/{k(x,t)}^{1/2} )   (since λ[4](s) = O(1) as s → 0)
≤ c₇ exp( −c₈x² )   (since t/k(x,t) > c₉ > 0 when |x| ≤ t^{1/3})
≤ c₁₀e^{−c₁₁|x|}.

Formulae (4.2.27), (4.2.28) and (4.2.31) show that (4.2.2) holds in Case 4 with some α and ν.
13
Finally, suppose that the condition of Case 5 applies, i.e., t / < -x
~ (tp.)1/2/ u . Denote m(t)
= [t/ p. - ut 5/ 6/ //2].
Then k(x,t) ~ k( _t 1/ 3, t)
Let us now estimate the probability in the right hand side of (4.2.27).
= m(t).
We have

(4.2.32)
P{ S_{k(x,t)} > t } ≤ P{ X₁ + ... + X_{m(t)} > t }      (since X₁, X₂, ... are positive (a.s.) and k(x,t) ≤ m(t))
  = P{ (1/{m(t)}^{1/2}) Σ_{j=1}^{m(t)} Z_j > (t − m(t)μ)/(σ{m(t)}^{1/2}) }
  = 1 − P{ (1/{m(t)}^{1/2}) Σ_{j=1}^{m(t)} Z_j ≤ x₂ },

where

(4.2.33)  x₂ = (t − m(t)μ)/(σ{m(t)}^{1/2}).

Since for all sufficiently large t we have x₂ ≤ c{m(t)}^{1/3}, Theorem 2.2.4 is applicable. Therefore

(4.2.34)
1 − P{ (1/{m(t)}^{1/2}) Σ_{j=1}^{m(t)} Z_j ≤ x₂ }
  = {1 − Φ(x₂)} exp{ (x₂³/{m(t)}^{1/2}) λ[4]( x₂/{m(t)}^{1/2} ) } ( 1 + O( (x₂+1)/{m(t)}^{1/2} ) ).
Combining (4.2.27), (4.2.32) and (4.2.34), we show that (4.2.2) holds with some α and ν under the conditions of Case 5.
Hence (4.2.2) is satisfied (with possibly different α and ν) in each of the three applicable cases (one of which, Case 6, is trivial). As we did at the end of the proof of (4.2.1), we can choose here such α, ν > 0 that (4.2.2) is satisfied for all x and all sufficiently large t. It is clear therefore that there exist α and ν that satisfy both parts (i) and (ii) of the lemma.
Lemma 4.2.1 is proved.  □
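Lemma 4.2.1 concerns the standardized renewal count R(t) = (N(t)+1−t/μ)/(σt^{1/2}/μ^{3/2}). As a concrete illustration of the normal approximation underlying the case analysis above, the following Monte Carlo sketch (the choice X_j ~ Exp(1), so μ = σ = 1, is an assumption made purely for illustration) simulates draws of R(t) and checks that its first two moments are close to those of a standard normal law:

```python
import math
import random

def simulate_R(t, mu, sigma, draw, reps, seed=0):
    """Monte Carlo draws of R(t) = (N(t)+1 - t/mu) / (sigma t^{1/2} / mu^{3/2}),
    where N(t) = sup{n : S_n <= t} for a renewal process whose i.i.d. positive
    increments are produced by draw(rng)."""
    rng = random.Random(seed)
    scale = sigma * math.sqrt(t) / mu ** 1.5
    out = []
    for _ in range(reps):
        s, n = 0.0, 0
        while True:
            x = draw(rng)
            if s + x > t:          # this increment overshoots t, so N(t) = n
                break
            s += x
            n += 1
        out.append((n + 1 - t / mu) / scale)
    return out

# Assumed model: exponential increments with rate 1, hence mu = sigma = 1.
sample = simulate_R(t=400.0, mu=1.0, sigma=1.0,
                    draw=lambda rng: rng.expovariate(1.0), reps=2000)
mean = sum(sample) / len(sample)
var = sum((r - mean) ** 2 for r in sample) / len(sample)
```

With exponential increments N(t) is Poisson(t), so the approximate standard-normal behavior of R(t) for large t can also be verified in closed form.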
4.3. APPROXIMATIONS TO THE CHARACTERISTIC AND CUMULATIVE FUNCTIONS OF Y₁

In order to approximate the distribution function of V(t), we will consider the behavior of its characteristic function. In this section we find a close approximation to ψ(θ), the characteristic function of Y₁. Later we will show that ψ(θ), properly adjusted, under certain conditions does not differ much from the characteristic function of V(t). Together these results will be used in Section 4.6 to prove a Central Limit Theorem with the remainder term for the cumulative processes.
Lemma 4.3.1 and Lemma 4.3.2 provide the tools to approximate κ(θ), the cumulative function of Y₁. Lemma 4.3.3 gives the desired approximation of κ(θ). The estimation of ψ(θ) is done in Lemma 4.3.4 and Lemma 4.3.5. While it is sufficient for our purposes to assume in Lemmas 4.3.1, 4.3.2 and 4.3.3 the existence of a finite absolute moment of Y₁ of the order between 2 and 3 only, we state these results in a more general way. We believe that the lemmas in this section have possible applications beyond those in our dissertation.
Lemma 4.3.1.
Suppose that

(4.3.1)  E|Y₁|^{k+δ} < ∞,

where k ≥ 2 is an integer and 0 ≤ δ < 1. Define for z ≥ 0

(4.3.2)  R(z) = ∫_{|y|≥z} |y|^k dG(y),

where G(y) is the distribution function of Y₁. Then

(4.3.3)  R(z) = o(z^{−δ})   as z → ∞.

Proof.
Evidently,

(4.3.4)  z^δ ∫_{|y|≥z} |y|^k dG(y) ≤ ∫_{|y|≥z} |y|^{k+δ} dG(y) = o(1)   as z → ∞,

since ∫_{−∞}^{+∞} |y|^{k+δ} dG(y) < ∞.
The statement of the lemma follows directly from (4.3.4).
Lemma 4.3.1 is proved.  □
Lemma 4.3.2.
If (4.3.1) holds then

(4.3.5)  ∫_{|y|≤z} |y|^{k+1} dG(y) = o(z^{1−δ})   as z → ∞.

Proof.
Fix ε > 0. Let z₁ be such that

(4.3.6)  ∫_{|y|>z₁} |y|^{k+δ} dG(y) < ε/2.

The existence of z₁ is guaranteed by assumption (4.3.1). For z > z₁ we can write

(4.3.7)
∫_{|y|≤z} |y|^{k+1} dG(y) = ∫_{|y|≤z₁} |y|^{k+1} dG(y) + ∫_{z₁<|y|≤z} |y|^{k+1} dG(y)
  ≤ z₁^{1−δ} ∫_{|y|≤z₁} |y|^{k+δ} dG(y) + z^{1−δ} ∫_{z₁<|y|≤z} |y|^{k+δ} dG(y)
  ≤ z₁^{1−δ} ∫_{−∞}^{+∞} |y|^{k+δ} dG(y) + ε z^{1−δ}/2
  < ε z^{1−δ},   for z sufficiently large.

Since ε is chosen arbitrarily, the statement of Lemma 4.3.2 is now proved.  □
Lemma 4.3.3.
Suppose that EY₁ = 0 and (4.3.1) is satisfied with some k ≥ 2. Then

(4.3.8)  κ(s) = Σ_{m=2}^{k} (γ_m/m!)(is)^m + o(|s|^{k+δ})   as s → 0,

where γ_m is the mth cumulant of Y₁.

Note.
When δ = 0, the result of Lemma 4.3.3 is well-known. See, for example, Chapter I in Petrov's book (1975).
Proof of Lemma 4.3.3.
Ibragimov (1967) showed (see Lemma 3 in his paper) that if E|Y₁|^k < ∞ then

(4.3.9)  κ(s) = Σ_{m=2}^{k} (γ_m/m!)(is)^m + O( | ∫_{−∞}^{+∞} ( e^{isy} − Σ_{m=0}^{k} (isy)^m/m! ) dG(y) | )

as s → 0. According to Heyde and Leslie (1972),

(4.3.10)
|e^{ix} − Σ_{m=0}^{k} (ix)^m/m!| ≤ |x|^{k+1}/(k+1)!,   for |x| ≤ 1,
|e^{ix} − Σ_{m=0}^{k} (ix)^m/m!| ≤ (1+e)|x|^k,         for |x| ≥ 1.

Hence

(4.3.11)
| ∫_{−∞}^{+∞} ( e^{isy} − Σ_{m=0}^{k} (isy)^m/m! ) dG(y) |
  ≤ ∫_{|sy|≤1} |sy|^{k+1}/(k+1)! dG(y) + (1+e) ∫_{|sy|>1} |sy|^k dG(y) = I₁ + I₂.

We will estimate I₁ and I₂ separately:

(4.3.12)
I₁ = (|s|^{k+1}/(k+1)!) ∫_{|y|≤|s|^{−1}} |y|^{k+1} dG(y) = (|s|^{k+1}/(k+1)!) · o(|s|^{δ−1}) = o(|s|^{k+δ})      (by Lemma 4.3.2)

and

(4.3.13)
I₂ = (1+e)|s|^k ∫_{|y|>|s|^{−1}} |y|^k dG(y) = (1+e)|s|^k · o(|s|^δ) = o(|s|^{k+δ})      (by Lemma 4.3.1).

Putting together (4.3.9), (4.3.11), (4.3.12) and (4.3.13), we complete the proof of Lemma 4.3.3.  □
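As a numerical sanity check of the expansion (4.3.8), consider one concrete (assumed) choice: Y₁ uniform on [−1,1], for which ψ(s) = sin s/s and the cumulants are known in closed form (γ₂ = 1/3, γ₃ = 0, γ₄ = −2/15). The sketch below verifies that the remainder after the k = 2 partial sum is of order s⁴, and after k = 4 of smaller order still:

```python
import cmath
import math

def kappa_uniform(s):
    """Cumulative (cumulant generating) function of Y ~ Uniform[-1,1] at a
    small real argument: psi(s) = sin(s)/s > 0, so kappa(s) = log(sin(s)/s)."""
    return math.log(math.sin(s) / s)

def expansion(s, k):
    """Partial sum  sum_{m=2}^{k} gamma_m (is)^m / m!  for Uniform[-1,1]."""
    gamma = {2: 1.0 / 3.0, 3: 0.0, 4: -2.0 / 15.0}
    total = 0 + 0j
    for m in range(2, k + 1):
        total += gamma[m] * (1j * s) ** m / math.factorial(m)
    return total

# Remainders: halving s should shrink the k=2 remainder by about 2**4 = 16.
e2_01 = abs(kappa_uniform(0.1) - expansion(0.1, 2).real)
e2_02 = abs(kappa_uniform(0.2) - expansion(0.2, 2).real)
e4_01 = abs(kappa_uniform(0.1) - expansion(0.1, 4).real)
```

(The closed-form series here is log(sin s/s) = −s²/6 − s⁴/180 − ..., so the k = 4 partial sum already matches through order s⁴.)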
Corollary 4.3.3.
Suppose |θ| ≤ t^β for some β < 1/2. Then

(4.3.14)
(t/μ)κ(θ_t) = −θ²/2 + Σ_{m=3}^{k} (iθ)^m μ^{(m−2)/2} γ_m / (τ^m m! t^{(m−2)/2}) + o( |θ|^{k+δ} / t^{(k−2+δ)/2} )   as t → ∞.

(The sum is set to 0 if k = 2.)

Proof.
Under the conditions of this corollary, θ_t → 0 as t → ∞. Hence Lemma 4.3.3 is applicable. Putting s = θ_t in (4.3.8), we obtain

(4.3.15)
(t/μ)κ(θ_t) = Σ_{m=2}^{k} (iθ)^m μ^{(m−2)/2} γ_m / (τ^m m! t^{(m−2)/2}) + o( |θ|^{k+δ} / t^{(k−2+δ)/2} ).

Since γ₂ = τ², the statement of the corollary follows immediately from (4.3.15).  □
Now we are ready to estimate the closeness of {ψ(θ_t)}^{t/μ} to e^{−θ²/2}. We will use these estimates in Section 4.6. Two cases will be considered: k = 3, δ = 0 and k = 2, 0 < δ < 1. The results in these cases are slightly different, therefore we will need two separate lemmas.
Lemma 4.3.4.
Suppose EY₁ = 0 and E|Y₁|³ < ∞. Let β be any real number such that 0 < β < 1/6. Then for |θ| ≤ t^β,

(4.3.16)  |{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = O( e^{−θ²/2} |θ|³ t^{−1/2} ).

Proof.
Let us estimate the difference between {ψ(θ_t)}^{t/μ} and e^{−θ²/2}. The conditions of the lemma guarantee that ψ(θ_t) ≠ 0. Thus

(4.3.17)
|{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = |exp{−θ²/2 + O(|θ|³t^{−1/2})} − e^{−θ²/2}|      (from Corollary 4.3.3 with k = 3 and δ = 0)
  = e^{−θ²/2} |exp{O(|θ|³t^{−1/2})} − 1|.

The choice of β guarantees that |θ|³t^{−1/2} < 1 for all sufficiently large t. Since e^x − 1 = O(|x|) for |x| < 1, we obtain

(4.3.18)  |exp{O(|θ|³t^{−1/2})} − 1| = O(|θ|³t^{−1/2}).

Together (4.3.17) and (4.3.18) show that (4.3.16) holds.
Lemma 4.3.4 is proved.  □
Lemma 4.3.5.
Suppose EY₁ = 0 and E|Y₁|^{2+δ} < ∞ for some δ such that 0 < δ < 1. Let β be any number within the range 0 < β < δ/(2(2+δ)). Then for |θ| ≤ t^β,

(4.3.19)  |{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = o( e^{−θ²/2} |θ|^{2+δ} t^{−δ/2} ).

Proof.
The proof is similar to that of the previous lemma. We write

(4.3.20)
|{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = e^{−θ²/2} |exp{ o( |θ|^{2+δ}/t^{δ/2} ) } − 1| = o( e^{−θ²/2} |θ|^{2+δ} t^{−δ/2} )

(by Corollary 4.3.3 with k = 2). (The choice of β assures us that |θ|^{2+δ}/t^{δ/2} < 1 for all sufficiently large t.)
Lemma 4.3.5 is proved.  □

Note. If we allowed δ = 1 in Lemma 4.3.5, then Lemma 4.3.4 would follow from Lemma 4.3.5. We chose, however, not to combine these two results together. They are used separately, and also the statement of Lemma 4.3.2 does not hold with δ = 1. That would make a proof of a combined result much more difficult.
4.4. AN ESTIMATE OF A CORRECTION TERM ARISING FROM AN APPROXIMATION OF N(t) WITH t/μ

In this section we prove a lemma the result of which will later be used to approximate the characteristic function of V(t).

Lemma 4.4.1.
Suppose EY₁ = 0 and (4.1.2) holds for some r > 2. Let β be such that 0 < β < 1/4. Then for all sufficiently large t and |θ| ≤ t^β,

(4.4.1)  E|exp{ κ(θ_t)(t/μ − N(t) − 1) } − 1| = O(θ² t^{−1/2}),

where θ_t is defined by (4.1.3).
The proof of Lemma 4.4.1 requires the following proposition.

Proposition 4.4.1.
For any complex number h,

(4.4.2)  |e^h − 1| ≤ e^{|h|} − 1.

Proof of Proposition 4.4.1.
|e^h − 1| = | Σ_{k=1}^{∞} h^k/k! | ≤ Σ_{k=1}^{∞} |h|^k/k! = e^{|h|} − 1.  □

Now we can prove Lemma 4.4.1 itself.
We have

(4.4.3)
E|exp{ κ(θ_t)(t/μ − N(t) − 1) } − 1|
  = E|exp{ −κ(θ_t) · (σt^{1/2}/μ^{3/2}) · (N(t)+1−t/μ)/(σt^{1/2}/μ^{3/2}) } − 1|
  = E|exp{ h(θ,t) R(t) } − 1|,

where

(4.4.4)  h(θ,t) = −κ(θ_t) σ t^{1/2}/μ^{3/2}

and R(t) is defined by (4.1.5). Let us estimate h(θ,t). Assumption (4.1.2) and the fact that θ_t → 0 as t → ∞ (which is true since |θ| ≤ t^β) allow us to apply Lemma 4.3.3 with k = 2. It implies that

(4.4.5)  κ(θ_t) = O(θ_t²) = O(θ²/t)

and hence

(4.4.6)  h(θ,t) = O(θ² t^{−1/2}).

Evidently,

(4.4.7)
E|exp{h(θ,t)R(t)} − 1| = ∫_{−∞}^{+∞} |exp{h(θ,t)x} − 1| dF_{R(t)}(x) = ( ∫₀^{∞} + ∫_{−∞}^{0} ) |exp{h(θ,t)x} − 1| dF_{R(t)}(x) = I₁ + I₂.

According to Proposition 4.4.1,

(4.4.8)
I₁ ≤ ∫₀^{∞} {exp(|h(θ,t)|x) − 1} dF_{R(t)}(x)
   = −∫₀^{∞} {exp(|h(θ,t)|x) − 1} d(1 − F_{R(t)}(x))
   = −m(θ,t,x)|_{x=0}^{∞} + |h(θ,t)| ∫₀^{∞} exp(|h(θ,t)|x)(1 − F_{R(t)}(x)) dx,

where

(4.4.9)  m(θ,t,x) = {exp(|h(θ,t)|x) − 1}(1 − F_{R(t)}(x)).

We will show that

(4.4.10)  ∫₀^{∞} exp(|h(θ,t)|x)(1 − F_{R(t)}(x)) dx = O(1)

and that lim_{x→∞} m(θ,t,x) exists and is equal to 0.
According to Lemma 4.2.1, there exist α, ν > 0 such that 1 − F_{R(t)}(x) ≤ αe^{−νx} for all x ≥ 0. Suppose t is large enough so that |h(θ,t)| < ν/2. (We know that h(θ,t) → 0 as t → ∞, since h(θ,t) = O(θ²t^{−1/2}) and |θ| ≤ t^β, where β < 1/4.) Then the value of the expression in the left hand side of (4.4.10) does not exceed

(4.4.11)  α ∫₀^{∞} exp(|h(θ,t)|x) exp(−νx) dx = α(ν − |h(θ,t)|)^{−1} = O(1).

So (4.4.10) holds. Also for such t,

0 ≤ m(θ,t,x) ≤ α exp{(|h(θ,t)| − ν)x} → 0   as x → ∞.

That proves that lim_{x→∞} m(θ,t,x) = 0 for any θ and t, such that t is sufficiently large and |θ| ≤ t^β. Together with (4.4.8) and (4.4.10), it shows that

(4.4.12)  I₁ = O(|h(θ,t)|) = O(θ² t^{−1/2}).
Similarly,

(4.4.13)
I₂ ≤ ∫_{−∞}^{0} {exp(|h(θ,t)x|) − 1} dF_{R(t)}(x)
   = ∫_{−∞}^{0} {exp(−|h(θ,t)|x) − 1} dF_{R(t)}(x)
   = n(θ,t,x)|_{x=−∞}^{0} + |h(θ,t)| ∫_{−∞}^{0} exp(−|h(θ,t)|x) F_{R(t)}(x) dx,

where

n(θ,t,x) = {exp(−|h(θ,t)|x) − 1} F_{R(t)}(x).

As in the case of I₁, using Lemma 4.2.1 we obtain

(4.4.14)  ∫_{−∞}^{0} exp(−|h(θ,t)|x) F_{R(t)}(x) dx = O(1)

and

lim_{x→−∞} n(θ,t,x) = 0

for all θ and t such that t is sufficiently large and |θ| ≤ t^β. That along with (4.4.13) implies that

I₂ = O(|h(θ,t)|) = O(θ² t^{−1/2}).

Together the estimates of I₁ and I₂ prove (4.4.1).
Lemma 4.4.1 is proved.  □
4.5. THE BEHAVIOR OF THE CHARACTERISTIC FUNCTION OF V(t)

In this section we obtain the estimates for

(4.5.1)  p(t,θ) ≡ E e^{iθV(t)}.

Our goal is to show that when θ is small (as compared to t), p(t,θ) can be closely approximated by a function of the characteristic function of Y₁. This will be done in Lemma 4.5.1. Next, in Lemma 4.5.2 we will demonstrate that for certain large values of θ (again, relative to t), p(t,θ) is sufficiently small as t → ∞. The information on p(t,θ) obtained in this section will be needed to approximate the distribution function of V(t).

Lemma 4.5.1.
If the conditions of Lemma 4.4.1 are met, then

(4.5.2)  |{ψ(θ_t)}^{t/μ} − E exp(iθV(t))| = O(θ² t^{−1/2})   as t → ∞.

Proof.
In Appendix B we prove the following generalization of Wald's Fundamental Identity (Wald, 1944):

(4.5.3)  E[ exp(iθ_t W(t)) / {ψ(θ_t)}^{N(t)+1} ] = 1.

Since

(4.5.4)  θV(t) = θ_t W(t),

we can write

1 − E[ exp(iθV(t)) / {ψ(θ_t)}^{t/μ} ] = E[ exp(iθV(t)) / {ψ(θ_t)}^{N(t)+1} ] − E[ exp(iθV(t)) / {ψ(θ_t)}^{t/μ} ]

and hence

(4.5.5)
{ψ(θ_t)}^{t/μ} − E exp(iθV(t)) = E[ exp(iθV(t)) ( exp{ κ(θ_t)(t/μ − N(t) − 1) } − 1 ) ].

Hence

(4.5.6)
|{ψ(θ_t)}^{t/μ} − E exp(iθV(t))| ≤ E|exp{ κ(θ_t)(t/μ − N(t) − 1) } − 1| = O(θ² t^{−1/2})      (according to Lemma 4.4.1).

Lemma 4.5.1 is proved.  □
Lemma 4.5.2.
Suppose EY₁ = 0 and (4.1.2) is satisfied with some r such that r > 2. Then there exists η > 0 that does not depend on θ and t such that if

(4.5.7)  5(log t)^{1/2} ≤ |θ| ≤ η t^{1/2},

then

(4.5.8)  p(t,θ) = O(t^{−2})   as t → ∞.

(Obviously, there is nothing magic about the constant '5' in (4.5.7). It is large enough to satisfy the inequalities in the proof. Any constant larger than 5 would suffice too.)
Proof.
First, we may assume that η is small enough to ensure that ψ(θ_t) ≠ 0, i.e., that κ(θ_t) exists. Indeed, if |θ| ≤ ηt^{1/2} then |θ_t| ≤ ημ^{1/2}/τ, and we can use the fact that ψ(s) ≠ 0 when |s| is sufficiently small.
Next, let

(4.5.9)  m_t = [t/μ − βt^{1/2} log t]   and   M_t = [t/μ + βt^{1/2} log t],

where

(4.5.10)  β = 2σ/(νμ^{3/2})

and ν is such that (4.2.1) and (4.2.2) are satisfied. Then

(4.5.11)
P{N(t) < m_t} = P{ (N(t) − t/μ)/(σt^{1/2}/μ^{3/2}) < (m_t − t/μ)μ^{3/2}/(σt^{1/2}) }
  ≤ F_{R(t)}( −βμ^{3/2} log t / σ )
  = O( exp( −νβμ^{3/2} log t / σ ) )      (by Lemma 4.2.1)
  = O( exp(−2 log t) ) = O(t^{−2})        (by the choice of β).

Similarly,

(4.5.12)  P{N(t) ≥ M_t} = O(t^{−2}),

so

(4.5.13)  P{N(t) < m_t} + P{N(t) ≥ M_t} = O(t^{−2})

for all sufficiently large t.
Consider now a sequence of random variables Ỹ_j, j = 1,2,..., defined as

(4.5.14)  Ỹ_j = Y_j′ − Y_j″,

where Y_j′ and Y_j″ are independent for all j = 1,2,... and are such that the sequences (X_j, Y_j′) and (X_j, Y_j″), j = 1,2,..., satisfy the same conditions as the original sequence (X_j, Y_j). Thus Ỹ₁, Ỹ₂, ... are independent identically distributed random variables satisfying the following conditions:

(4.5.15)  EỸ₁ = 0,   Var Ỹ₁ = 2τ²,   E|Ỹ₁|^r < ∞,

where r is the same as in (4.1.2).
Let ψ₁(θ) and κ₁(θ) be the characteristic function and the cumulative function of Ỹ₁ correspondingly. From the definition of Ỹ₁ it follows that ψ₁(θ) = |ψ(θ)|², so ψ₁(θ) is real and non-negative.
Define the following cumulative processes:

(4.5.16)
W′(t) = Σ_{j=1}^{N(t)+1} Y_j′,   W″(t) = Σ_{j=1}^{N(t)+1} Y_j″   and   W̃(t) = Σ_{j=1}^{N(t)+1} Ỹ_j.

Clearly, W̃(t) = W′(t) − W″(t).
For any n = 0,1,2,... we will define

(4.5.17)
I_n(t; x₁,...,x_{n+1}) = 1 if x₁+...+x_n ≤ t < x₁+...+x_{n+1}, and I_n(t; x₁,...,x_{n+1}) = 0 otherwise,

and

J_n(θ; x₁,...,x_{n+1}) = ∫_{−∞}^{+∞} ... ∫_{−∞}^{+∞} exp( iθ(y₁+...+y_{n+1}) ) G_{x₁}(dy₁)...G_{x_{n+1}}(dy_{n+1}).

(The distribution function G_x(y) was defined at the beginning of this chapter.) Then

(4.5.18)
E{exp(iθ_t W̃(t)) | N(t)=n} P{N(t)=n} = ∫_{N(t)=n} exp( iθ_t (Ỹ₁+...+Ỹ_{n+1}) ) dP
  = ∫_{−∞}^{+∞} ... ∫_{−∞}^{+∞} I_n(t; x₁,...,x_{n+1}) |J_n(θ_t; x₁,...,x_{n+1})|² F_X(dx₁)...F_X(dx_{n+1})

(where F_X is the distribution function of X₁), and this quantity is real and non-negative. Therefore we can write

(4.5.19)
{ψ₁(θ_t)}^{M_t} = E exp( iθ_t (Ỹ₁+...+Ỹ_{M_t}) ) = Σ_{n=0}^{∞} E{exp(iθ_t (Ỹ₁+...+Ỹ_{M_t})) | N(t)=n} P{N(t)=n}
  ≥ Σ_{n=m_t}^{M_t−1} E{exp(iθ_t W̃(t)) | N(t)=n} {ψ₁(θ_t)}^{M_t−n−1} P{N(t)=n}

(since for j > n+1, Ỹ_j is independent of {N(t)=n} and of Ỹ₁,...,Ỹ_{n+1}, and every term of the sum is non-negative). Hence, since 0 ≤ ψ₁(θ_t) ≤ 1,

(4.5.20)
Σ_{n=m_t}^{M_t−1} E{exp(iθ_t W̃(t)) | N(t)=n} P{N(t)=n} ≤ {ψ₁(θ_t)}^{m_t}

for all sufficiently large t.
Also, by the Cauchy–Schwarz inequality,

(4.5.21)
|E{exp(iθ_t W(t)) | N(t)=n} P{N(t)=n}|²
  = | ∫_{−∞}^{+∞} ... ∫_{−∞}^{+∞} I_n(t; x₁,...,x_{n+1}) J_n(θ_t; x₁,...,x_{n+1}) F_X(dx₁)...F_X(dx_{n+1}) |²
  ≤ ∫_{−∞}^{+∞} ... ∫_{−∞}^{+∞} I_n(t; x₁,...,x_{n+1}) |J_n(θ_t; x₁,...,x_{n+1})|² F_X(dx₁)...F_X(dx_{n+1})
  = E{exp(iθ_t W̃(t)) | N(t)=n} P{N(t)=n}

(according to (4.5.18); here we used I_n² = I_n and ∫...∫ I_n F_X(dx₁)...F_X(dx_{n+1}) = P{N(t)=n} ≤ 1).
Now we can estimate E exp(iθV(t)):

(4.5.22)  |E exp(iθV(t))| = |E exp(iθ_t W(t))| ≤ S₁ + S₂,

where

S₁ = Σ_{n<m_t or n≥M_t} |E{exp(iθ_t W(t)) | N(t)=n}| P{N(t)=n},
S₂ = Σ_{n=m_t}^{M_t−1} |E{exp(iθ_t W(t)) | N(t)=n} P{N(t)=n}|.

According to (4.5.13),

(4.5.23)  S₁ ≤ P{N(t) < m_t} + P{N(t) ≥ M_t} = O(t^{−2}).

This estimate holds for all θ for which κ(θ_t) is defined.
In order to estimate S₂ we note that for any positive integer k and any real x₁,...,x_k, the following inequality holds:

(x₁ + ... + x_k)² ≤ k(x₁² + ... + x_k²).
Therefore,

(4.5.24)
S₂² ≤ (M_t − m_t) Σ_{n=m_t}^{M_t−1} |E{exp(iθ_t W(t)) | N(t)=n} P{N(t)=n}|²
    ≤ (M_t − m_t) Σ_{n=m_t}^{M_t−1} E{exp(iθ_t W̃(t)) | N(t)=n} P{N(t)=n}      (by (4.5.21))
    ≤ t {ψ₁(θ_t)}^{m_t}      (by (4.5.20) and the choice of m_t and M_t)

for all sufficiently large t and all θ such that κ(θ_t) is defined.
According to (4.5.15), for each j = 1,2,..., Ỹ_j has mean 0, variance 2τ² and a finite moment of the order r > 2. Set δ = min(r−2, 1). Then E|Ỹ₁|^{2+δ} < ∞. From Lemma 4.3.3, applied to the random variable Ỹ₁ and its cumulative function κ₁(s), it follows that

(4.5.25)  κ₁(s) = −τ²s² + o(|s|^{2+δ})   as |s| → 0.

Since ψ₁(s) is real-valued, so is κ₁(s). Clearly, κ₁(s) ≤ 0. Let s₁ > 0 be such that κ₁(s) ≤ −τ²s² + |s|^{2+δ} whenever |s| ≤ s₁ (the existence of such s₁ follows from (4.5.25)). Take c₁ = max(τ²/s₁^δ, 1). Then −τ²s² + c₁|s|^{2+δ} ≥ 0 for |s| ≥ s₁, so for all real s we have κ₁(s) ≤ −τ²s² + c₁|s|^{2+δ}. Thus

(4.5.26)  {ψ₁(θ_t)}^{m_t} ≤ exp( −τ²θ_t² m_t + c₁ m_t |θ_t|^{2+δ} ).

Let η be such that

(4.5.27)  c₁ η^δ μ^{δ/2} / τ^{2+δ} ≤ 1/4.

If |θ| ≤ ηt^{1/2} then

(4.5.28)  c₁ m_t |θ_t|^{2+δ} ≤ θ²/4

for all sufficiently large t. It follows therefore from (4.5.26) that

(4.5.29)  {ψ₁(θ_t)}^{m_t} ≤ exp(−θ²/2)

for all sufficiently large t, since τ²θ_t² m_t = θ²μm_t/t ≥ (3/4)θ² for all such t. From (4.5.24) we conclude that

(4.5.30)
S₂ ≤ t exp(−θ²/8) = O( t exp( −(25/8) log t ) ) = O(t^{−2})      (since |θ| ≥ 5(log t)^{1/2})

for all sufficiently large t.
Together (4.5.22), (4.5.23) and (4.5.30) show that (4.5.8) holds.
Lemma 4.5.2 is proved.  □
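The proof above rests on the symmetrization Ỹ_j = Y_j′ − Y_j″, whose characteristic function is ψ₁ = |ψ|² and hence real and non-negative. A small Monte Carlo check of this identity for one assumed distribution (Y uniform on [−1,1], so ψ(θ) = sin θ/θ):

```python
import cmath
import math
import random

def psi1_mc(theta, n=100_000, seed=1):
    """Monte Carlo estimate of the characteristic function of
    Ytilde = Y' - Y'' where Y', Y'' are independent Uniform[-1,1]."""
    rng = random.Random(seed)
    acc = 0 + 0j
    for _ in range(n):
        ytilde = rng.uniform(-1.0, 1.0) - rng.uniform(-1.0, 1.0)
        acc += cmath.exp(1j * theta * ytilde)
    return acc / n

theta = 1.0
est = psi1_mc(theta)
# |psi(theta)|^2 with psi(theta) = sin(theta)/theta for Uniform[-1,1]:
target = (math.sin(theta) / theta) ** 2
```

The estimate should be close to the real, non-negative value (sin θ/θ)², with a negligible imaginary part.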
4.6. ESTIMATES OF THE REMAINDER TERM IN THE CENTRAL LIMIT THEOREMS FOR THE CUMULATIVE PROCESSES

We are now ready to formulate and prove the main results of this chapter. These are the Central Limit Theorems for the cumulative processes that provide the remainder terms. The order of these terms depends on the existence of finite absolute moments of Y₁. We start with Theorem 4.6.1, which requires the imposition of the strongest assumptions on the distribution of the Y's and, in return, allows the remainder term of the smallest order.
Theorem 4.6.1.
Let W(t) be a cumulative process defined by (1.1.2). Let V(t) be defined by (4.1.4). Suppose the distributions of the random variables X₁ and Y₁ (and therefore X_j and Y_j for any j) satisfy the following requirements:
(i) the Cramer condition (4.1.1) holds for X₁,
(ii) EY₁ = 0,
(iii) E|Y₁|³ < ∞.
Then

(4.6.1)  ||P{V(t) ≤ x} − Φ(x)|| = O( log t / t^{1/2} )   as t → ∞.
Proof.
Let φ(θ) = e^{−θ²/2} denote the characteristic function of the standard normal random variable and let p(t,θ) be the characteristic function of V(t) as defined by (4.5.1).
From a version of the Berry–Esseen inequality (see, for example, Petrov (1975), Chapter V, Theorem 2), it follows that there exists an absolute constant c such that for any T > 0 we have

(4.6.2)
sup_x |P{V(t) ≤ x} − Φ(x)| ≤ c ∫_{−T}^{T} ( |p(t,θ) − φ(θ)| / |θ| ) dθ + cT^{−1}.

Take T = ηt^{1/2}, where η is chosen as in Lemma 4.5.2. Then cT^{−1} = O(t^{−1/2}). Hence we only need to estimate the integral term in (4.6.2). We have
(4.6.3)
∫_{−T}^{T} ( |p(t,θ) − φ(θ)| / |θ| ) dθ
  ≤ ∫_{|θ|≤5(log t)^{1/2}} ( |p(t,θ) − φ(θ)| / |θ| ) dθ + ∫_{5(log t)^{1/2}<|θ|≤ηt^{1/2}} ( |p(t,θ)| / |θ| ) dθ + ∫_{|θ|>5(log t)^{1/2}} ( φ(θ) / |θ| ) dθ
  = I₁ + I₂ + I₃.

Let us estimate I₁, I₂ and I₃ separately. First,

(4.6.4)
I₁ ≤ ∫_{|θ|≤5(log t)^{1/2}} ( |p(t,θ) − {ψ(θ_t)}^{t/μ}| / |θ| ) dθ + ∫_{|θ|≤5(log t)^{1/2}} ( |{ψ(θ_t)}^{t/μ} − φ(θ)| / |θ| ) dθ.

According to Lemma 4.5.1,

(4.6.5)
∫_{|θ|≤5(log t)^{1/2}} ( |p(t,θ) − {ψ(θ_t)}^{t/μ}| / |θ| ) dθ ≤ ∫_{|θ|≤5(log t)^{1/2}} ( cθ² t^{−1/2} / |θ| ) dθ = O( log t / t^{1/2} )

(c is a constant that does not depend on θ or t). Further,

(4.6.6)
∫_{|θ|≤5(log t)^{1/2}} ( |{ψ(θ_t)}^{t/μ} − φ(θ)| / |θ| ) dθ ≤ c t^{−1/2} ∫_{|θ|≤5(log t)^{1/2}} e^{−θ²/2} θ² dθ      (by Lemma 4.3.4)
  ≤ c t^{−1/2} ∫_{−∞}^{+∞} e^{−θ²/2} θ² dθ = O(t^{−1/2}).

Hence

(4.6.7)  I₁ = O( log t / t^{1/2} ).

Also, according to Lemma 4.5.2,

(4.6.8)
I₂ ≤ ∫_{5(log t)^{1/2}<|θ|≤ηt^{1/2}} ( c₁ t^{−2} / |θ| ) dθ = O(t^{−1/2})

(c₁ here is a new constant that does not depend on θ or t). Finally,

(4.6.9)
I₃ = ∫_{|θ|>5(log t)^{1/2}} ( e^{−θ²/2} / |θ| ) dθ ≤ ∫_{|θ|>5(log t)^{1/2}} e^{−θ²/2} dθ = O(t^{−1/2}).

Estimates for I₁, I₂ and I₃ show that the expression in the left hand side of (4.6.3) is O(log t / t^{1/2}). That together with (4.6.2) completes the proof of the theorem.
Theorem 4.6.1 is proved.  □
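A Monte Carlo sketch of Theorem 4.6.1 for one assumed model: X_j ~ Exp(1) (so the Cramer condition holds and μ = 1) and, independently of the X's, Y_j uniform on (−√3, √3) (so EY₁ = 0 and τ² = 1, hence V(t) = W(t)/t^{1/2}). The Kolmogorov distance between the empirical distribution of V(t) and Φ should be small for large t:

```python
import math
import random

def sample_V(t, reps, seed=2):
    """Draws of V(t) = W(t)/sqrt(t) for W(t) = sum_{j=1}^{N(t)+1} Y_j with
    X_j ~ Exp(1) and independent Y_j ~ Uniform(-sqrt(3), sqrt(3))."""
    rng = random.Random(seed)
    b = math.sqrt(3.0)
    out = []
    for _ in range(reps):
        s, w = 0.0, 0.0
        while True:
            w += rng.uniform(-b, b)        # Y for the current renewal interval
            s += rng.expovariate(1.0)      # X
            if s > t:                      # this X is X_{N(t)+1}
                break
        out.append(w / math.sqrt(t))
    return out

def ks_vs_normal(xs):
    """Kolmogorov distance between the empirical d.f. of xs and Phi."""
    xs = sorted(xs)
    n = len(xs)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return max(max(abs((i + 1) / n - phi(x)), abs(i / n - phi(x)))
               for i, x in enumerate(xs))

d = ks_vs_normal(sample_V(t=400.0, reps=1500))
```

The observed distance mixes the O(log t / t^{1/2}) bias of (4.6.1) with the O(reps^{−1/2}) sampling error, so only a crude bound is asserted.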
We will now consider a situation when Y₁ has a finite absolute moment of the order (2+δ), where 0 < δ < 1. It turns out that in this case we can obtain a remainder term of the order of o(t^{−δ/2}), which is what we would expect from the finiteness of E|Y₁|^{2+δ}. (A remainder term of a similar order was obtained by Niculescu (1984) in the (non-uniform in x) Central Limit Theorem for the renewal process based on the i.i.d. X's.) The reason that we can not get a similar result for δ = 1 is that there is a correction term caused by the randomness of N(t) (we dealt with this term in Lemma 4.4.1). The order of this term can not be made less than O(log t / t^{1/2}). While such an error can be ignored if δ < 1, i.e., when the desired accuracy is O(t^{−1/2+ε}) for some ε > 0, it becomes the largest component of the remainder term for δ = 1.
The following theorem covers the case of the Y's with a finite absolute moment of the order greater than 2 but less than 3.
Theorem 4.6.2.
Suppose conditions (i) and (ii) of Theorem 4.6.1 are satisfied and also

(4.6.10)  E|Y₁|^{2+δ} < ∞

for some δ such that 0 < δ < 1. Then

||P{V(t) ≤ x} − Φ(x)|| = o(t^{−δ/2})   as t → ∞.
Proof.
As in the proof of Theorem 4.6.1, using the Berry–Esseen inequality (see (4.6.2)) with T = ηt^{1/2}, where η is chosen as in Lemma 4.5.2, we can write

(4.6.11)
||P{V(t) ≤ x} − Φ(x)|| ≤ c ∫_{|θ|≤ηt^{1/2}} ( |p(t,θ) − φ(θ)| / |θ| ) dθ + O(t^{−1/2}).

Let β be any real number such that 0 < β < min( (1−δ)/4, δ/(2(2+δ)) ). Then

(4.6.12)
∫_{|θ|≤ηt^{1/2}} ( |p(t,θ) − φ(θ)| / |θ| ) dθ ≤ A₁ + A₂ + A₃ + A₄,

where

A₁ = ∫_{|θ|≤t^β} ( |p(t,θ) − {ψ(θ_t)}^{t/μ}| / |θ| ) dθ,
A₂ = ∫_{|θ|≤t^β} ( |{ψ(θ_t)}^{t/μ} − φ(θ)| / |θ| ) dθ,
A₃ = ∫_{t^β<|θ|≤ηt^{1/2}} ( |p(t,θ)| / |θ| ) dθ,
A₄ = ∫_{|θ|>t^β} ( φ(θ) / |θ| ) dθ.

From Lemma 4.5.1 it follows that

(4.6.13)  A₁ ≤ c t^{−1/2} ∫_{|θ|≤t^β} |θ| dθ = O(t^{2β−1/2}) = o(t^{−δ/2}),

since β < (1−δ)/4. Also,

(4.6.14)
A₂ ≤ ( o(1)/t^{δ/2} ) ∫_{|θ|≤t^β} e^{−θ²/2} |θ|^{1+δ} dθ = o(t^{−δ/2})

(by Lemma 4.3.5, which is applicable since β < δ/(2(2+δ))),

(4.6.15)
A₃ ≤ ∫_{t^β<|θ|≤ηt^{1/2}} ( c₁ t^{−2} / |θ| ) dθ = O(t^{−1/2})      (according to Lemma 4.5.2)

as t → ∞, and

(4.6.16)  A₄ = O(t^{−1/2}),

as was shown in the course of the proof of Theorem 4.6.1 (see (4.6.9)).
The above estimates for A₁, A₂, A₃ and A₄ combined with (4.6.11) and (4.6.12) complete the proof of the theorem.
Theorem 4.6.2 is proved.  □
4.7. A CASE WHEN Y's HAVE A NON-ZERO MEAN

In this section we drop the assumption EY₁ = 0 and prove two general Central Limit Theorems with the remainder terms. The first of these two theorems follows almost immediately from the results of Section 4.6, while the proof of the second one is much less obvious. We believe, however, that that second result (Theorem 4.7.2) represents the correct approach to the Central Limit Theorem, because, unlike Theorem 4.7.1, its statement does not involve any random variables besides the one, namely W(t), whose distribution is of interest to us.
In the remainder of Chapter IV we will denote

(4.7.1)  λ = EY₁   and   γ² = Var( Y₁ − (λ/μ)X₁ ).

We will assume that γ > 0; λ can be any real number.
We start with the following result.
Theorem 4.7.1.
Let W(t) be a cumulative process defined by (1.1.2) and suppose that the Cramer condition (4.1.1) holds for X₁. Then
(i) if E|Y₁|³ < ∞ then

(4.7.2)
|| P{ (W(t) − λ(N(t)+1)) / (τ(t/μ)^{1/2}) ≤ x } − Φ(x) || = O( log t / t^{1/2} )   as t → ∞;

(ii) if E|Y₁|^{2+δ} < ∞ for some δ such that 0 < δ < 1, then

(4.7.3)
|| P{ (W(t) − λ(N(t)+1)) / (τ(t/μ)^{1/2}) ≤ x } − Φ(x) || = o(t^{−δ/2})   as t → ∞.

Proof.
The proof follows immediately from Theorem 4.6.1 and Theorem 4.6.2 applied to the cumulative processes based on the random variables X_j and Ȳ_j, j = 1,2,..., where Ȳ_j = Y_j − λ, so that EȲ_j = 0.  □

The proof of the next theorem will require two additional lemmas.
Lemma 4.7.1.
Let q_t be any function of t. Then

(4.7.4)  P{ X_{N(t)+1} > q_t } = O( t e^{−g q_t} ),

where g is the constant in the Cramer condition (4.1.1) that applies to the X_n's.

Proof.
Let us introduce the function H(t) = EN(t), the renewal function that corresponds to the renewal process defined by the X_n's. From (2.3.3) it follows that H(t) ≤ ct for some fixed c and all sufficiently large t. Here and later c, c₁, c₂, ... will be some appropriate constants that do not depend on t. Also, the Cramer condition (4.1.1) implies that for a fixed t and any given n,

(4.7.5)  P{ X_{n+1} > q_t } ≤ c₁ e^{−g q_t}.

We can not use (4.7.5) directly to prove the lemma because in (4.7.4) the index of X depends on t. The proof requires a few more steps. We write

(4.7.6)
P{ X_{N(t)+1} > q_t } = Σ_{n=0}^{∞} P{ N(t) = n, X_{n+1} > q_t }
  ≤ Σ_{n=0}^{∞} P{ N(t) ≥ n, X_{n+1} > q_t }
  = Σ_{n=0}^{∞} P{ S_n ≤ t, X_{n+1} > q_t }      (because {N(t) ≥ n} = {S_n ≤ t})
  = Σ_{n=0}^{∞} P{ S_n ≤ t } P{ X_{n+1} > q_t }      (since X_{n+1} and S_n are independent)
  ≤ c₁ e^{−g q_t} Σ_{n=0}^{∞} P{ S_n ≤ t }      (by (4.7.5))
  = c₁ e^{−g q_t} Σ_{n=0}^{∞} P{ N(t) ≥ n } = c₁ e^{−g q_t} (1 + H(t))
  = O( t e^{−g q_t} )      (since, as we showed above, H(t) = O(t)).

Lemma 4.7.1 is proved.  □

For any t > 1 set

(4.7.7)  q_t = (3/(2g)) log t.

According to Lemma 4.7.1,

(4.7.8)  P{ X_{N(t)+1} > q_t } = O( t e^{−g q_t} ) = O( t^{−1/2} ).
Let us now introduce a new cumulative process W₁(t) based on the i.i.d. pairs (X_n, Y_n^{(1)}), where Y_n^{(1)} = Y_n − (λ/μ)X_n, n = 1,2,.... We have W₁(t) = Σ_{j=1}^{N(t)+1} Y_j^{(1)}. Clearly, EY_n^{(1)} = 0. If condition (4.1.2) is satisfied for Y₁ and some real r then (because of (4.1.1)) it is also satisfied for Y₁^{(1)}.
The following lemma allows us to convert a Central Limit Theorem for W(t) into one for W₁(t).
Lemma 4.7.2.
If X₁ satisfies the Cramer condition then for all sufficiently large t,

(4.7.9)
|| P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) ||
  ≤ 2 || P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || + c log t / t^{1/2},

where the constant c depends only on the joint distribution of (X₁, Y₁).

Proof.
If λ = EY₁ = 0 then the result is trivial since W(t) = W₁(t). Suppose now that λ > 0. Obviously,

(4.7.10)  W₁(t) = W(t) − (λ/μ)( X₁ + ... + X_{N(t)+1} ).

Since λ > 0 and X₁ + ... + X_{N(t)+1} > t, formula (4.7.10) implies that

(4.7.11)
P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } ≤ P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x }

for any real x.
Let us now find a lower bound for the probability in the left hand side of (4.7.11). Denote x₁ = x − λq_t/(γ(μt)^{1/2}), where q_t is defined by (4.7.7). We have

(4.7.12)
P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x₁ }
  = P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) − λ( X₁+...+X_{N(t)+1} − t ) / (γ(μt)^{1/2}) ≤ x₁ }
  ≤ P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x₁ + λq_t/(γ(μt)^{1/2}) } + P{ X_{N(t)+1} > q_t }
  = P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } + c₁ t^{−1/2}      (by the definition of x₁)

for some c₁, according to (4.7.8); here we used the fact that 0 ≤ X₁+...+X_{N(t)+1} − t ≤ X_{N(t)+1}.
From (4.7.11) it follows that

(4.7.13)
P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) ≤ P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x).
From (4.7.10) and (4.7.12) we obtain that

(4.7.14)
P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x)
  ≥ P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x₁ } − Φ(x₁) − |Φ(x) − Φ(x₁)| − c₂ t^{−1/2}

for some c₂ that does not depend on x or t.
It is easy to verify that for any real numbers y₁, y₂ and y₃ the following property holds: if y₁ ≤ y₂ ≤ y₃ then |y₂| ≤ |y₁| + |y₃|. Applying it to (4.7.13) and (4.7.14) with

y₂ = P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x),

we obtain

(4.7.15)
| P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) |
  ≤ | P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) | + | P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x₁ } − Φ(x₁) | + |Φ(x) − Φ(x₁)| + c₂ t^{−1/2}.

Hence

(4.7.16)
|| P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) ||
  ≤ || P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || + sup_x | P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x₁ } − Φ(x₁) | + sup_x |Φ(x) − Φ(x₁)| + c₂ t^{−1/2},

since for any functions a(x) and b(x) it is true that ||a(x) + b(x)|| ≤ ||a(x)|| + ||b(x)||.
Consider now the second term in the right hand side of (4.7.16). For any fixed t, it is equal to

(4.7.17)
sup_{−∞<x<∞} | P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x − λq_t/(γ(μt)^{1/2}) } − Φ( x − λq_t/(γ(μt)^{1/2}) ) |
  = sup_{−∞<x<∞} | P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) |.

Finally, sup_x |Φ(x) − Φ(x₁)| ≤ ( λq_t/(γ(μt)^{1/2}) ) sup_x Φ′(x) = O( log t / t^{1/2} ). That, together with (4.7.16), proves formula (4.7.9).
Similarly, (4.7.9) holds when λ < 0.
Lemma 4.7.2 is proved.  □
We are now ready to state the most general Central Limit Theorem for the cumulative processes. It does not involve any random variables other than W(t) and allows Y₁ to have a non-zero mean.
Theorem 4.7.2.
If the conditions of Theorem 4.7.1 are satisfied then
(i) if E|Y₁|³ < ∞ then

(4.7.18)
|| P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || = O( log t / t^{1/2} )   as t → ∞;

(ii) if E|Y₁|^{2+δ} < ∞ for some δ such that 0 < δ < 1, then

(4.7.19)
|| P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || = o(t^{−δ/2})   as t → ∞.

Proof.
The proof follows from Lemma 4.7.2 used together with Theorem 4.6.1 (which demonstrates that (i) holds) and Theorem 4.6.2 (which proves (ii)).  □
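To illustrate Theorem 4.7.2, and the role of γ² = Var(Y₁ − (λ/μ)X₁) rather than Var Y₁ in the normalization, the following sketch assumes X ~ Exp(1) and Y = X + Z with Z ~ N(0,1) independent of X; then μ = 1, λ = EY = 1 and γ = 1, even though Var Y₁ = 2:

```python
import math
import random

def sample_T(t, reps, seed=3):
    """Draws of (W(t) - lambda*t/mu) / (gamma*(t/mu)^{1/2}) for the assumed
    pair X ~ Exp(1), Y = X + Z with Z ~ N(0,1) independent of X.  Here
    mu = 1, lambda = 1 and gamma^2 = Var(Y - (lambda/mu) X) = Var(Z) = 1."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        s, w = 0.0, 0.0
        while True:
            x = rng.expovariate(1.0)
            w += x + rng.gauss(0.0, 1.0)   # Y_j = X_j + Z_j
            s += x
            if s > t:                      # this X is X_{N(t)+1}
                break
        out.append((w - t) / math.sqrt(t))
    return out

ts = sample_T(t=400.0, reps=1500)
mean = sum(ts) / len(ts)
var = sum((v - mean) ** 2 for v in ts) / len(ts)
```

Had one normalized by (Var Y₁ · t/μ)^{1/2} = (2t)^{1/2} instead, the empirical variance would come out near 1/2, not 1: the fluctuation of W(t) contributed by the X's is almost entirely cancelled by the centering λt/μ, which is exactly what the definition of γ captures.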
4.8. ESTIMATES OF AN AVERAGE ERROR
Results proved in the previous section give us the remainder terms in the Central Limit Theorems for the cumulative processes. As was done for the sums of independent random variables, it is interesting to see if more can be said of these remainder terms when they are averaged over all possible values of t. For example, if the conditions of Theorem 4.7.2 (ii) are satisfied then it follows immediately from (4.7.19) that for any ε > 0,

(4.8.1)
∫₁^{∞} t^{−1+δ/2−ε} || P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || dt < ∞.

However, it does not follow from Theorem 4.7.2 that (4.8.1) holds for ε = 0. In this section we will be concerned with advancing Theorem 4.7.2 by obtaining a formula similar to (4.8.1) with ε = 0. This corresponds to how Theorem 2.2.3 improves on Theorem 2.2.1.
Here is the main result of this section.
Theorem 4.8.1.
Let W(t) be defined by (1.1.2). If the Cramer condition (4.1.1) holds for X₁ and E|Y₁|^{2+δ} < ∞ for some δ ∈ (0,1) then

(4.8.2)
∫₀^{∞} t^{−1+δ/2} || P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || dt < ∞.

(Note that the lower limit of integration in (4.8.2) is 0, not 1 as in (4.8.1).)
Before we begin the proof of Theorem 4.8.1, we will state two auxiliary lemmas.
Lemma 4.8.1.
Suppose EY₁ = 0 and E|Y₁|^{2+δ} < ∞ for some δ such that 0 < δ < 1. Let the function a(u) be such that

(4.8.3)  κ(u) = −τ²u²/2 + u² a(u),

where κ(u) is the cumulative function of Y₁ and τ² = Var Y₁. Let A > 0 be such that ψ(u) ≠ 0 (therefore, κ(u) is defined) for |u| ≤ A, where ψ is the characteristic function of Y₁. Then

(4.8.4)  ∫_{−A}^{A} ( |a(u)| / |u|^{1+δ} ) du < ∞.

Proof.
It was shown by Heyde and Leslie (1972, Lemma 2) that if EY₁ = 0 and E|Y₁|^{2+δ} < ∞ for some δ ∈ (0,1), then

(4.8.5)  ∫₀^{Λ} ( |a(u)| / u^{1+δ} ) du < ∞

for any Λ such that 0 < Λ ≤ A. In the same paper (see page 262) Heyde and Leslie noted that |a(u)| is symmetric in u, so the statement of the lemma follows from (4.8.5).
Lemma 4.8.1 is proved.  □
The next result is somewhat similar to Lemma 4.3.5. The difference is that now we want a remainder term that depends on the function a(u) defined by (4.8.3).

Lemma 4.8.2.
Suppose EY₁ = 0 and E|Y₁|^{2+δ} < ∞, where δ ∈ (0,1). Let β be any positive real number such that β < δ/(2(2+δ)). Then for |θ| ≤ t^β,

(4.8.6)  |{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = O( e^{−θ²/2} θ² |a(θ_t)| ).

(Note that the conditions of this lemma are identical to those of Lemma 4.3.5.)

Proof.
We have

(4.8.7)
|{ψ(θ_t)}^{t/μ} − e^{−θ²/2}| = |exp{ (t/μ)κ(θ_t) } − e^{−θ²/2}| = e^{−θ²/2} | exp( θ² a(θ_t)/τ² ) − 1 |

(by the definition of a(u)). From Lemma 4.3.3 it follows that |a(u)| = o(u^δ) as u → 0. Since θ_t → 0 as t → ∞ and |θ| ≤ t^β, we obtain

(4.8.8)  θ² |a(θ_t)| / τ² = o( θ² |θ_t|^δ ) = o(1)   as t → ∞,

because of our choice of β. Since e^x − 1 = O(|x|) as x → 0, the right hand side of (4.8.7) can be written as O( e^{−θ²/2} θ² |a(θ_t)| ).
Lemma 4.8.2 is proved.  □
Now we have all the necessary tools to prove the main result of this section.

Proof of Theorem 4.8.1.
First, we notice that if we prove that for some y that does not depend on t,

(4.8.9)
∫_y^{∞} t^{−1+δ/2} || P{ (W(t) − λt/μ) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || dt < ∞,

then the theorem will be proved, since the norm in the integrand never exceeds 2 and therefore the integral over (0, y) is finite. We will choose an appropriate y later.
Second, let us show that it is possible to assume that the Y's have mean 0. Indeed, if EY₁ = λ then it follows from Lemma 4.7.2 that the value of the integral expression in (4.8.9) does not exceed

(4.8.10)
2 ∫_y^{∞} t^{−1+δ/2} || P{ W₁(t) / (γ(t/μ)^{1/2}) ≤ x } − Φ(x) || dt + c ∫_y^{∞} t^{−3/2+δ/2} log t dt.

The second integral in (4.8.10) is finite since δ < 1. The cumulative process W₁(t) is based on Y's having zero means. Therefore, from now on we can assume that EY₁ = 0. Thus we only need to show that for some y that does not depend on t,

(4.8.11)
∫_y^{∞} t^{−1+δ/2} || P{V(t) ≤ x} − Φ(x) || dt < ∞,

where V(t) is defined by (4.1.4).
Let β and η be defined as in the proof of Theorem 4.6.2. From formula (4.6.11) it follows that it is sufficient to show that

(4.8.12)
∫_y^{∞} t^{−1+δ/2} ∫_{|θ|≤ηt^{1/2}} ( |p(t,θ) − φ(θ)| / |θ| ) dθ dt < ∞,

where φ(θ) = e^{−θ²/2} is the characteristic function of a standard normal random variable.
The inner integral does not exceed A₁ + A₂ + A₃ + A₄, where the A_i's, i = 1,2,3,4, are defined by formula (4.6.12). We saw in (4.6.13), (4.6.15) and (4.6.16) that A₁ + A₃ + A₄ ≤ c₁ t^{−1/2+2β} for all sufficiently large t, so ∫_y^{∞} t^{−1+δ/2}(A₁ + A₃ + A₄) dt is finite for any y > 0, because β < (1−δ)/4.
We now have to take care of the integral that involves A₂, i.e., prove that

(4.8.13)  ∫_y^{∞} t^{−1+δ/2} A₂ dt < ∞.

Lemma 4.8.2 tells us that it would be sufficient to prove the finiteness of

(4.8.14)
∫_y^{∞} t^{−1+δ/2} ∫_{|θ|≤t^β} e^{−θ²/2} |θ| |a(θ_t)| dθ dt.

By changing the variable of integration in the inner integral from θ to u = θ_t = θμ^{1/2}/(τt^{1/2}), we see that the value of the expression in (4.8.14) is equal to

(4.8.15)
(τ²/μ) ∫_y^{∞} t^{δ/2} ∫_{|u| ≤ μ^{1/2}/(τ t^{1/2−β})} exp( −u²τ²t/(2μ) ) |u| |a(u)| du dt.

We can assume that y is large enough so that μ^{1/2}/(τt^{1/2−β}) ≤ A whenever t ≥ y. (A is defined in Lemma 4.8.1.) Then the quantity in (4.8.15) does not exceed

(τ²/μ) ∫_y^{∞} t^{δ/2} ∫_{|u|≤A} exp( −u²τ²t/(2μ) ) |u| |a(u)| du dt
  ≤ (τ²/μ) ∫_{−A}^{A} |u| |a(u)| ∫_y^{∞} t^{δ/2} exp( −u²τ²t/(2μ) ) dt du

(4.8.16)
  ≤ (τ²/μ) ∫_{−A}^{A} |u| |a(u)| ∫₀^{∞} t^{δ/2} exp( −u²τ²t/(2μ) ) dt du
  = (τ²/μ) ∫_{−A}^{A} |u| |a(u)| ∫₀^{∞} ( 2μv/(u²τ²) )^{δ/2} e^{−v} ( 2μ/(u²τ²) ) dv du      (substituting v = u²τ²t/(2μ))
  = c₂ ∫_{−A}^{A} ( |a(u)| / |u|^{1+δ} ) du ∫₀^{∞} v^{δ/2} e^{−v} dv < ∞      (for some constant c₂),

according to Lemma 4.8.1.
That completes the proof of (4.8.13).
Theorem 4.8.1 is proved.  □
4.9. AREAS OF FURTHER RESEARCH

In conclusion, we mention several other interesting problems that we have not yet investigated and indeed may never investigate. One can attempt to improve the error term in (4.7.18) and (4.7.19) when |x| → ∞. For the sequences of random variables such results were obtained by Bikelis (1966), Osipov (1972) and Ahmad and Lin (1977). Also, it would be of great interest to get rid of the Cramer condition (4.1.1) on X₁ and consider instead a cumulative process based on X's and Y's such that the kth moment of X₁ and the mth moment of Y₁ are finite. So far, we could not achieve any improvement in the estimate of the remainder term by assuming that Y₁ has a finite moment of the order greater than 3. It is also of some interest to look into a case when the X_n's and Y_n's are not identically distributed.
APPENDIX A

Let X be a random variable with the characteristic function φ, i.e., φ(θ) = Ee^{iθX}. If φ(θ) ≠ 0 for a given θ, we can define the cumulative function of X as

(A.1)  κ(θ) = log φ(θ).

If X has a moment α_k of some integer order k, then the cumulant of order k is defined by the equality

(A.2)  γ_k = i^{−k} [ d^k κ(θ)/dθ^k ]_{θ=0}.

Write σ² = Var X and let H_m be the Hermite polynomial of order m, that is,

(A.3)  H_m(x) = (−1)^m e^{x²/2} (d^m/dx^m) e^{−x²/2}.

We are now ready to introduce the functions Q_ν(x) that are referred to in the text. Define

(A.4)
Q_ν(x) = −(1/(2π)^{1/2}) e^{−x²/2} Σ H_{ν+2s−1}(x) Π_{m=1}^{ν} (1/k_m!) ( γ_{m+2} / ((m+2)! σ^{m+2}) )^{k_m},

where the summation is taken over all integers k₁,...,k_ν ≥ 0 such that k₁ + 2k₂ + ... + νk_ν = ν, and s = k₁ + ... + k_ν.
Functions Q_ν(x) are needed to obtain the Edgeworth expansion in the Central Limit Theorem.
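The Hermite polynomials of (A.3) satisfy the recurrence H_{m+1}(x) = xH_m(x) − mH_{m−1}(x), which gives a convenient way to evaluate the functions Q_ν(x). The sketch below implements the recurrence and, as an example, the first function Q₁ (for ν = 1 the only admissible choice in (A.4) is k₁ = 1, so s = 1 and the Hermite index is ν + 2s − 1 = 2):

```python
import math

def hermite(m, x):
    """Probabilists' Hermite polynomial H_m of (A.3), evaluated via the
    recurrence H_{k+1}(x) = x H_k(x) - k H_{k-1}(x), H_0 = 1, H_1 = x."""
    h0, h1 = 1.0, x
    if m == 0:
        return h0
    for k in range(1, m):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def q1(x, gamma3, sigma):
    """First Edgeworth correction: Q_1(x) = -(2*pi)^{-1/2} e^{-x^2/2}
    H_2(x) gamma_3 / (3! sigma^3), the nu = 1 case of (A.4)."""
    return (-1.0 / math.sqrt(2.0 * math.pi) * math.exp(-x * x / 2.0)
            * hermite(2, x) * gamma3 / (math.factorial(3) * sigma ** 3))
```

For instance, H₃(x) = x³ − 3x and H₄(x) = x⁴ − 6x² + 3, and Q₁(0) = γ₃/(6σ³√(2π)) since H₂(0) = −1.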
APPENDIX B

Here we prove a generalization of Wald's Fundamental Identity (see Wald, 1944). This proof has been developed by W.L. Smith. Professor Smith presented it in his lectures given at the University of North Carolina. The method of the proof is similar to that used by A. Wald.
Let W(t) be a cumulative process based on the i.i.d. bivariate random variables {(X_n, Y_n)}, n = 1,2,..., as defined in Chapter 1. Let N(t) be a renewal process based on the X_n's and let ψ(θ) be the characteristic function of the Y_n's. We assume that t and θ are fixed and θ is such that ψ(θ) ≠ 0. The main result of this Appendix is the following theorem.

Theorem B.

(B.1)  E[ e^{iθW(t)} / {ψ(θ)}^{N(t)+1} ] = 1.

First we need to prove an auxiliary lemma.

Lemma B.1.
For any q > 0 there exists a positive constant A = A(q) such that for all n ≥ 1,

(B.2)  P{N(t) ≥ n} ≤ A q^n.

Proof.
Since X₁ > 0, there exists v > 0 such that

(B.3)  Ee^{−vX₁} < q.

We have

P{N(t) ≥ n} = P{S_n ≤ t} ≤ e^{vt} Ee^{−vS_n} = e^{vt} (Ee^{−vX₁})^n < e^{vt} q^n.

So (B.2) holds with A = e^{vt} (recall that t is fixed).
Lemma B.1 is proved.  □

We denote a = |ψ(θ)| and will use Lemma B.1 with q = a/2 only.
Proof of Theorem B.
Fix any integer M
~
1.
iOW(t)
E
e
{t/J(O)}
N(t)+l-
L
00
P{N(t)
0=0
(B.4)
M-l
00
L+L
0=0
o=M
= n} E
{
I
iOW(t)
e N(t)+l N(t)
{t/J( O)}
=n }
106
For n
~
M -1, we obtain
(by the definition of \V(t))
(since Y n+2' ... , YM are independent of {N(t) = n}).
Therefore
M-1
L
PM =
P{N(t)=n}E
YM ) I }
{iO(Y1+"'+
e
M
N(t)=n
{t/J( O)}
n=O
where
{iO(Y1 +... +y M) / }
- L
00
P{N(t)=n}E e
M
N(t)=n.
{t/J(O)}
n=M
Evidently,
$$|R_M| \le \frac{1}{|\psi(\theta)|^M} \sum_{n=M}^{\infty} P\{N(t) = n\} = \frac{P\{N(t) \ge M\}}{a^M} < A(q) \cdot \frac{1}{2^M} \to 0 \quad \text{as } M \to \infty,$$
by Lemma B.1 with $q = a/2$.
Also, for the second sum in (B.4),
$$|T_M| = \left| \sum_{n=M}^{\infty} P\{N(t) = n\}\, E\left\{ \frac{e^{i\theta W(t)}}{[\psi(\theta)]^{n+1}} \,\Big|\, N(t)=n \right\} \right| \le \sum_{n=M}^{\infty} \frac{P\{N(t) \ge n\}}{|\psi(\theta)|^{n+1}}$$
$$\le \sum_{n=M}^{\infty} \frac{A(q)\, q^n}{a^{n+1}} \quad \text{(by Lemma B.1)} \quad = \frac{A(q)}{a} \sum_{n=M}^{\infty} \frac{1}{2^n} \to 0 \quad \text{as } M \to \infty.$$
Our estimates of $R_M$ and $T_M$ show that the expression on the left-hand side of (B.4) tends to 1 as $M \to \infty$. Since that expression does not depend on $M$, it is equal to 1.

Theorem B is proved. □
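Theorem B can be checked numerically. The setup below is our own illustrative assumption, not the text's: inter-arrival times $X \sim \mathrm{Exp}(1)$ and increments $Y \sim N(0,1)$ independent of the $X$'s, for which $\psi(\theta) = e^{-\theta^2/2}$. The sample average of $e^{i\theta W(t)} / [\psi(\theta)]^{N(t)+1}$ should then be close to 1.

```python
import cmath
import math
import random

rng = random.Random(1)
t, theta = 2.0, 0.5
psi = math.exp(-theta ** 2 / 2)   # characteristic function of N(0,1) at theta

def one_sample():
    # W(t) = Y_1 + ... + Y_{N(t)+1}; Y_1 is included even when N(t) = 0.
    s, n, w = 0.0, 0, rng.gauss(0.0, 1.0)
    while True:
        s += rng.expovariate(1.0)           # X ~ Exp(1), so N(t) is finite a.s.
        if s > t:                           # S_{n+1} > t, hence N(t) = n
            return cmath.exp(1j * theta * w) / psi ** (n + 1)
        n += 1
        w += rng.gauss(0.0, 1.0)            # append Y_{n+1}

est = sum(one_sample() for _ in range(200000)) / 200000
assert abs(est - 1) < 0.05                  # Theorem B: E[...] = 1, up to MC error
```

With $Y$ independent of the renewal epochs the identity even holds conditionally on each $\{N(t) = n\}$, so the estimate converges to 1 without bias; only Monte Carlo noise remains.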
REFERENCES

Ahmad, I.A. (1981). The Exact Order of Normal Approximation in Bivariate Renewal Theory. Adv. Appl. Prob. 13, 113-128.

Ahmad, I.A. and Lin, P.E. (1977). A Berry-Esseen Type Theorem. Utilitas Math. 11, 153-160.

Bikelis, A. (1966). Estimates of the Remainder Term in the Central Limit Theorem. Litov. Mat. Sb. 6, 323-346.

Blackwell, D. (1948). A Renewal Theorem. Duke Math. J. 15, 145-150.

Cox, D.R. and Smith, W.L. (1953). A Direct Proof of a Fundamental Theorem of Renewal Theory. Skand. Act. 36, 139-150.

Cramér, H. (1937). Random Variables and Probability Distributions. Cambridge University Press.

Csenki, A. (1979). An Invariance Principle in k-dimensional Extended Renewal Theory. J. Appl. Prob. 16, 567-574.

Doob, J.L. (1948). Renewal Theorem from the Point of View of the Theory of Probability. Trans. Amer. Math. Soc. 63, 422-438.

Doob, J.L. (1953). Stochastic Processes. Wiley, New York.

Feller, W. (1941). On the Integral Equation of Renewal Theory. Ann. Math. Stat. 12, 243-267.

Feller, W. (1949). Fluctuation Theory of Recurrent Events. Trans. Amer. Math. Soc. 67, 98-119.

Galstyan, F.N. (1971). On Asymptotic Expansions in the Central Limit Theorem. Theor. Prob. Appl. 16, No. 3, 528-533.

Gnedenko, B.V. and Kolmogorov, A.N. (1954). Limit Distributions for Sums of Independent Random Variables. Addison-Wesley, Reading, MA.

Heyde, C.C. (1967). On the Influence of Moments on the Rate of Convergence to the Normal Distribution. Z. Wahrscheinlichkeitstheorie verw. Geb. 8, 12-18.

Heyde, C.C. and Leslie, J.R. (1972). On the Influence of Moments on Approximations by Portions of a Chebyshev Series in Central Limit Theorems. Z. Wahrscheinlichkeitstheorie verw. Geb. 21, 255-268.

Hunter, J.J. (1974). Renewal Theory in Two Dimensions: Asymptotic Results. Adv. Appl. Prob. 6, 546-562.

Ibragimov, I.A. (1967). On the Chebyshev-Cramér Asymptotic Expansions. Theor. Prob. Appl. 12, No. 3, 455-470.

Murphree, E. and Smith, W.L. (1986). On Transient Regenerative Processes. J. Appl. Prob. 23, 52-70.

Niculescu, S.P. (1984). On the Asymptotic Distribution of Multivariate Renewal Processes. J. Appl. Prob. 21, No. 3, 639-645.

Niculescu, S.P. and Omey, E. (1985). On the Exact Order of Normal Approximation in Multivariate Renewal Theory. J. Appl. Prob. 22, 280-287.

Osipov, L.V. (1967). Asymptotic Expansions in the Central Limit Theorem. Vestnik Leningrad. Univ. No. 19, 45-62.

Osipov, L.V. (1969). Asymptotic Expansions in the Distribution Function of a Sum of Independent Lattice Random Variables. Theor. Prob. Appl. 14, No. 3, 450-457.

Osipov, L.V. (1972). On Asymptotic Expansions of the Distributions of Sums of Random Variables with Non-uniform Bounds on the Remainder Term. Vestnik Leningrad. Univ. No. 1, 51-59.

Petrov, V.V. (1960). Asymptotic Expansions for the Derivatives of the Distribution Function of a Sum of Independent Terms. Vestnik Leningrad. Univ. No. 19, 9-18.

Petrov, V.V. (1968). Asymptotic Behavior of Probabilities of Large Deviations. Theor. Prob. Appl. 13, No. 3, 408-420.

Petrov, V.V. (1975). Sums of Independent Random Variables. Springer-Verlag.

Pipiras, V. (1970). On the Remainder Terms in the Asymptotic Expansion of the Distribution Function of the Sum of Independent Random Variables. Litov. Mat. Sb. 10, No. 1, 135-159. (In Russian).

Smith, W.L. (1954). Asymptotic Renewal Theorems. Proc. Royal Soc. Edinb. A 64, 9-48.

Smith, W.L. (1955). Regenerative Stochastic Processes. Proc. Royal Soc. London A 232, 6-31.

Smith, W.L. (1959). On the Cumulants of Renewal Processes. Biometrika 46, 1-29.

Statulevicius, V.A. (1965). Limit Theorems for Densities and Asymptotic Expansions of Distributions of Sums of Independent Random Variables. Theor. Probab. Appl. 10, No. 4, 582-595.

Wald, A. (1944). On Cumulative Sums of Random Variables. Ann. Math. Stat. 15, 283-296.

Woodroofe, M. and Keener, R. (1987). Asymptotic Expansions in Boundary Crossing Problems. Ann. Probab. 15, No. 1, 102-114.