BIOMATHEMATICS TRAINING PROGRAM

ON THE FUNCTIONAL CENTRAL LIMIT THEOREM FOR MARTINGALES

by

Holger Rootzén*

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series #1043

November, 1975

* Research sponsored in part by the Office of Naval Research, Contract N00014-75-C-0809.
ON THE FUNCTIONAL CENTRAL LIMIT THEOREM FOR MARTINGALES

by

Holger Rootzén*
ABSTRACT: Necessary and sufficient conditions for the functional central limit theorem for a double array of random variables are sought. It is argued that this is a martingale problem only if the variables truncated at some fixed point c are asymptotically a martingale difference array. Under this hypothesis, necessary and sufficient conditions for convergence in distribution to a Brownian motion are obtained when the normalization is given (i) by the sums of squares of the variables, (ii) by the conditional variances, and (iii) by the variances. The results are proved by comparing the various normalizations with a "natural" normalization.
AMS 1970 Subject classifications: Primary 60B10; Secondary 60G45.

Key words and phrases: Functional central limit theorem, martingale.

* Research sponsored in part by the Office of Naval Research, Contract No. N00014-75-C-0809.
1. INTRODUCTION

Since the final solution of the classical central limit problem through the works of Lévy, Lindeberg, and Feller, research on the convergence in distribution of sums of random variables has developed in two directions. Firstly, the classical results have been extended to the random processes obtained by interpolating the partial sums. Secondly, results have been obtained also for dependent summands. In this context perhaps the most important dependence property is that of a martingale.

The purpose of the present paper is to find conditions that are necessary and sufficient for the functional central limit theorem for martingales. Unfortunately it is not possible to give as clearcut a solution to this problem as in the classical situation with independent summands. One reason for this is that a number of different normalizations (or time-scales) all seem reasonable, but that they lead to different results. Another reason is that any array of random variables with finite means can be made a martingale difference array by adding random variables that are zero except on a set of asymptotically negligible probability. This alteration would then not change the convergence or non-convergence of the distribution of the summation processes based on the array, so any such array could be regarded as a martingale difference array (a concrete sketch of this trick is given below), which would make the problem of getting conditions for convergence to a Brownian motion rather meaningless. Hence one has to introduce some restriction to make the problem not only superficially a martingale problem. An appropriate restriction is given as Condition (1) of Section 2. Once this condition is introduced, our approach is very simple. We show that there is a "natural" time-scale that makes the summation process converge in distribution, and then get conditions for convergence when using other time-scales by comparing these time-scales with the natural one.
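To make the recentering trick concrete, here is a minimal sketch (our illustration, not taken from the paper; the auxiliary variables $U_{n,i}$, the rate $p_n$, and the row length $K_n$ are hypothetical choices):

```latex
% Sketch: making an arbitrary array with finite means into an m.d.a.
% Let U_{n,i} be i.i.d. uniform(0,1) variables, independent of the X's,
% let p_n -> 0, and (with respect to the filtration enlarged by the U's) put
\[
  Z_{n,i} \;=\; -\,p_n^{-1}\,E_{i-1}(X_{n,i})\,\mathbf{1}(U_{n,i}\le p_n),
  \qquad\text{so}\qquad
  E_{i-1}(X_{n,i}+Z_{n,i}) \;=\; E_{i-1}(X_{n,i}) - E_{i-1}(X_{n,i}) \;=\; 0 .
\]
% Thus {X_{n,i} + Z_{n,i}} is a martingale difference array, while
\[
  P\bigl(Z_{n,i}\neq 0 \ \text{for some}\ i\le K_n\bigr) \;\le\; K_n\,p_n \;\to\; 0
\]
% if the number K_n of summands used in the n-th row satisfies K_n p_n -> 0,
% so the summation processes are asymptotically unchanged.
```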
We have not attempted to find necessary conditions for the convergence to normality of the one-dimensional distribution of a normalized martingale, and in fact it seems to be difficult to find non-trivial conditions for this (cf. Dvoretzky (1972), Section 6). However, from a pedagogical point of view it may be preferable to get necessary conditions for the functional central limit theorem, since this avoids the extra assumption that the summands are asymptotically small.

Successively weaker sufficient conditions for the central limit theorem for martingales have been obtained by a number of authors. Early important results were proved by P. Lévy (see e.g. his 1937 book; ref. [9]) and much of the subsequent work has relied on methods developed by him. Billingsley (1961) and, independently, Ibragimov (1963) proved convergence to normality when the martingale differences are stationary, ergodic and with finite variance. Further weakening of the conditions was made by Dvoretzky (1972), Brown (1971) and Scott (1973). Drogin (1972) considered random time-scales and also got results on necessity. Our point of view is similar to that of Drogin. The most recent results known to the present author are those of McLeish (1974), and from the sequel it can be seen that his sufficient conditions are rather close to the necessary ones.

The plan of this paper is as follows: in Section 2 the necessary notation is developed and the natural time-scale is found. In Section 3 normalization by means of sums of squares and by means of conditional variances are considered, while in Section 4 the normalization is given by variances. Finally, Section 5 contains a comment on a remaining problem.
2. THE NATURAL TIME-SCALE.

For $n = 1, 2, \ldots$ let $\{X_{n,i}\}_{i=1}^{\infty}$ be a sequence of random variables on a probability space $(\Omega_n, B_n, P_n)$, let $B_{n,i}$ be the sub-sigma-algebra of $B_n$ that is generated by $X_{n,1}, \ldots, X_{n,i}$, and put $S_n(k) = \sum_{i=1}^{k} X_{n,i}$. Furthermore let $\tau_n(t)$, $t \in [0,1]$, be stopping times of $\{B_{n,i}\}_{i=1}^{\infty}$ that are increasing and right continuous in $t$. In the sequel we will without further comment assume that $\tau_n(1) < \infty$ a.s. Then $\{S_n \circ \tau_n(t);\ t \in [0,1]\}_{n=1}^{\infty}$ is a sequence of random variables in $D(0,1)$, the space of functions on $[0,1]$ which are right-continuous and have left-hand limits. We let $D(0,1)$ be endowed with the Skorokhod topology and let $B$ be a Brownian motion on $D(0,1)$. For brevity we abuse notation slightly and write $S \circ \tau_n$ for $S_n \circ \tau_n$, and we write $E_i(\cdot)$ for $E(\cdot \mid B_{n,i})$, $i \geq 1$, when the expectation is taken of variables in the n'th row ($E_0(\cdot) = E(\cdot)$). The object of this paper is to find conditions for $S \circ \tau_n \xrightarrow{d} B$ when $\{X_{n,i}\}$ is a martingale difference array (m.d.a.), i.e. when $E_{i-1}(X_{n,i}) = 0$, $i \geq 1$, $n \geq 1$, so that the partial sums in each row form a martingale.
However, as was noted in the introduction, if the $X_{n,i}$'s have finite means then any array $\{X_{n,i}\}$ can be made a m.d.a. by adding variables which take large values, but with low probabilities, in such a way that the asymptotic distribution of $S \circ \tau_n$ is not changed. For $c > 0$ put $X'_{n,i} = X_{n,i} I(|X_{n,i}| \leq c)$ and $X''_{n,i} = X_{n,i} - X'_{n,i}$. Convergence of the distribution of $S \circ \tau_n$ to a Brownian motion entails that the maximum of the summands tends to zero in probability and hence, with a probability tending to one, all the $X''_{n,i}$, $1 \leq i \leq \tau_n(1)$, are zero. Thus we are essentially concerned only with the distribution of the array $\{X'_{n,i}\}$, and if the problem is to be a martingale one, $\{X'_{n,i}\}$ has to be, at least asymptotically, a martingale difference array. Formally this can be written as

(1)  $\max_{1 \leq k \leq \tau_n(1)} \Bigl| \sum_{i=1}^{k} E_{i-1}(X'_{n,i}) \Bigr| \xrightarrow{P} 0$  as $n \to \infty$.

However, once this condition is assumed to hold there is no need to require that the original array, $\{X_{n,i}\}$, is a m.d.a., and this will not be done unless explicitly stated. Moreover, it is no restriction to assume that e.g. $c = 1$, so that $X'_{n,i} = X_{n,i} I(|X_{n,i}| \leq 1)$ and $X''_{n,i} = X_{n,i} I(|X_{n,i}| > 1)$. Furthermore introduce

$\xi_{n,i} = X'_{n,i} - E_{i-1}(X'_{n,i}).$

Then $|\xi_{n,i}| \leq 2$ a.s. and $\{\xi_{n,i}\}$ is a m.d.a. Our first result is that the sums of squares of $\{\xi_{n,i}\}$ give a natural time-scale for the summation process.
To state the result we need the further notation $M_n = \max_{1 \leq i \leq \tau_n(1)} |X_{n,i}|$.

THEOREM 1. Let $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} \xi_{n,i}^2 > t\}$, $t \in [0,1]$, and assume that (1) is satisfied. Then $S \circ \tau_n \xrightarrow{d} B$ if and only if $M_n \xrightarrow{P} 0$.

PROOF. If $S \circ \tau_n \xrightarrow{d} B$ it follows immediately that $M_n \xrightarrow{P} 0$, so only the reverse implication remains to be proved. Assuming that $M_n \xrightarrow{P} 0$, it follows from (1) that

$\max_{1 \leq k \leq \tau_n(1)} \Bigl| \sum_{i=1}^{k} X_{n,i} - \sum_{i=1}^{k} \xi_{n,i} \Bigr| \xrightarrow{P} 0,$

and thus, putting $S'_n(k) = \sum_{i=1}^{k} \xi_{n,i}$, it is enough to prove that $S'_n \circ \tau_n \xrightarrow{d} B$. Since $|\xi_{n,i}| \leq 2$ a.s., it follows from e.g. Theorem 1 of Drogin (1972) that $S'_n \circ \tau_n \xrightarrow{d} B$. (We might as well have used the results of [3], [11], or [10], the proof in McLeish (1974) perhaps being the easiest one. Furthermore, that proof can be somewhat simplified in the present case.) □
REMARK 2. It is easy to see that an equivalent time-scale is given by $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} E_{i-1}(\xi_{n,i}^2) > t\}$ (cf. Theorem 5 below). Furthermore it should be noted that for these time-scales $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$ as $n \to \infty$, for $t \in [0,1]$.
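As a quick numerical illustration of Theorem 1 (a hedged sketch of ours, not part of the paper; the row, the grid, and all names are invented), one can generate a bounded martingale difference row, run the clock $\tau_n(t) = \inf\{k;\ \sum_{i \leq k} \xi_{n,i}^2 > t\}$, and observe that the time-changed sums look like a Brownian path:

```python
import numpy as np

rng = np.random.default_rng(0)

def time_changed_sums(xi, t_grid):
    """S'_n(tau_n(t)) for tau_n(t) = inf{k : sum_{i<=k} xi_i^2 > t}."""
    clock = np.cumsum(xi ** 2)      # the natural clock: running sum of squares
    sums = np.cumsum(xi)            # partial sums S'_n(k)
    # first (0-based) index k with clock[k] > t, for each t in t_grid
    idx = np.searchsorted(clock, t_grid, side="right")
    return sums[np.minimum(idx, len(sums) - 1)]

# One row of a bounded m.d.a.: many small symmetric +-1/sqrt(n) differences,
# so the sum of squares up to k is exactly k/n and the clock passes t near t*n.
n = 20_000
xi = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
path = time_changed_sums(xi, np.linspace(0.0, 1.0, 201))

# The endpoint S'(tau_n(1)) should be approximately N(0,1):
print(f"S'(tau_n(1)) = {path[-1]:+.3f}")
```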
The main tool for the rest of the paper is Lemma 3 below, which shows that $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} \xi_{n,i}^2 > t\}$ is not only a natural time-scale, but that it is also in a certain sense minimal.

LEMMA 3. Let $\tau_n(t)$ be stopping times of $\{B_{n,i}\}$ that are increasing and right continuous in $t$ a.s., and assume that (1) is satisfied. Then $S \circ \tau_n \xrightarrow{d} B$ if $M_n \xrightarrow{P} 0$ and if furthermore

(2)  $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$  as $n \to \infty$, $t \in [0,1]$.

Conversely, if $S \circ \tau_n \xrightarrow{d} B$ and if furthermore

(3)  $\limsup_{n \to \infty} E\Bigl( \sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \Bigr) \leq t$,  $t \in [0,1]$,*

then $M_n \xrightarrow{P} 0$ and (2) holds, also if convergence in probability is replaced by convergence in the mean.

* Actually it is enough that (3) holds for $t = 1$.
PROOF. For the first part it is, as in the previous proof, sufficient to show that $S'_n \circ \tau_n \xrightarrow{d} B$, where $S'_n(k) = \sum_{i=1}^{k} \xi_{n,i}$. This is easy to do by comparing the time-scale $\tau_n$ of this lemma with the natural time-scale of Theorem 1 above. However, since $S'_n \circ \tau_n \xrightarrow{d} B$ is also implied by Theorem 3.2 of McLeish (1974), we omit the details.
For the converse part we first note that $M_n \xrightarrow{P} 0$ again follows immediately, and that we then also have that $S'_n \circ \tau_n \xrightarrow{d} B$. Next it has to be proved that $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$, $t \in (0,1]$ (that (2) holds for $t = 0$ follows from (3)), and by dividing both members by $t$ it is seen that it suffices to show that

(4)  $\sum_{i=1}^{\tau_n(1)} \xi_{n,i}^2 \xrightarrow{P} 1$  as $n \to \infty$.
Now the functional $x(\cdot) \to \sum_{i=1}^{k} \{x(i/k) - x((i-1)/k)\}^2$ is a.s. $B$-continuous and hence

$\sum_{i=1}^{k} \{S'_n \circ \tau_n(i/k) - S'_n \circ \tau_n((i-1)/k)\}^2 \xrightarrow{d} \sum_{i=1}^{k} \{B(i/k) - B((i-1)/k)\}^2$  as $n \to \infty$.

Furthermore the latter sum has mean 1 and variance $2/k$, so it follows that it is possible to find a sequence $n' = n'(n)$ of integers with $n' \to \infty$ as $n \to \infty$ such that

(5)  $\sum_{i=1}^{n'} \{S'_n \circ \tau_n(i/n') - S'_n \circ \tau_n((i-1)/n')\}^2 \xrightarrow{P} 1$  as $n \to \infty$.

To simplify notation we will for the rest of the proof write $\tau(i)$ for $\tau_n(i/n')$ when dealing with variables from the n'th row. Again suppressing the row index, let $\tau'(i)$ be the minimum of $\tau(i)$ and of

$\inf\Bigl\{ k > \tau(i-1);\ \Bigl| \sum_{j=\tau(i-1)+1}^{k} \xi_{n,j} \Bigr| > 1 \Bigr\}$

($\tau'(0) = 0$), and let $A_n$ denote the event that $\tau(i) = \tau'(i)$ for all $i$.
Let $w_n(\delta) = \inf_{\{t_i\}} \max_{0 < i \leq r} \sup_{t_{i-1} \leq t, s \leq t_i} |S'_n \circ \tau_n(t) - S'_n \circ \tau_n(s)|$ be the $D(0,1)$ modulus of continuity, where the infimum is taken over $\{t_i\}$ satisfying $0 = t_0 < t_1 < \cdots < t_r = 1$ and $t_i - t_{i-1} > \delta$, $i = 1, \ldots, r$, and introduce $Y_n = \max_{0 < i \leq n'} \sup_{(i-1)/n' \leq t, s \leq i/n'} |S'_n \circ \tau_n(t) - S'_n \circ \tau_n(s)|$. Then, for $\delta > 1/n'$,

$Y_n \leq 2 w_n(\delta) + \max_{1 \leq i \leq \tau_n(1)} |\xi_{n,i}|,$

and since $S'_n \circ \tau_n \xrightarrow{d} B$ it follows from Theorem 15.2 of Billingsley (1968) that $w_n(\delta)$ is small in probability for small $\delta$ and large $n$. As also $\max_{1 \leq i \leq \tau_n(1)} |\xi_{n,i}| \xrightarrow{P} 0$, we have that

(6)  $P(A_n^c) \leq P(\{Y_n > 1\}) \to 0$  as $n \to \infty$.

Putting $\zeta_{n,i} = \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}$, it follows from (5) and (6) that

(7)  $\sum_{i=1}^{n'} \zeta_{n,i}^2 \xrightarrow{P} 1$  as $n \to \infty$.

Since $\max_{1 \leq i \leq n'} |\zeta_{n,i}| \leq Y_n$, we have by the reasoning above that $\max_{1 \leq i \leq n'} |\zeta_{n,i}| \xrightarrow{P} 0$ as $n \to \infty$. Moreover, as $|\xi_{n,k}| \leq 2$, it follows from the definition of $\tau'(i)$ that $\max_{1 \leq i \leq n'} |\zeta_{n,i}| \leq 3$.

For $n$ fixed $\{S'_n(k)\}_{k=1}^{\infty}$ is a martingale, and as $\tau'(i)$ is a stopping time, $\{S'_n(k \wedge \tau'(i))\}_{k=1}^{\infty}$ is a martingale also. Since $|S'_n(k \wedge \tau'(i)) - S'_n(\{k-1\} \wedge \tau'(i))| \leq |\xi_{n,k}|$ and since $E(\sum_{k=1}^{\tau_n(1)} \xi_{n,k}^2) < \infty$ for $n$ large enough by (3), the martingale $\{S'_n(k \wedge \tau'(i))\}_{k=1}^{\infty}$ is square integrable for large $n$. Then, by the optional stopping theorem, $\{S'_n(\{\tau(i-1)+k\} \wedge \tau'(i))\}_{k=0}^{\infty}$ is a square integrable martingale and hence mean square convergent. Since $\lim_{k \to \infty} S'_n(\{\tau(i-1)+k\} \wedge \tau'(i)) - S'_n(\tau(i-1) \wedge \tau'(i)) = \zeta_{n,i}$ a.s. by definition, it follows that

(8)  $E(\zeta_{n,i}^2 \mid B_{n,\tau(i-1)}) = E\Bigl( \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 \,\Big|\, B_{n,\tau(i-1)} \Bigr).$

In particular we have from (3) that $E(\sum_{i=1}^{n'} \zeta_{n,i}^2) \leq 1 + o(1)$. Together with (7) this shows that $\{\sum_{i=1}^{n'} \zeta_{n,i}^2\}_{n=1}^{\infty}$ is uniformly integrable (see e.g. Chung (1968), Theorem 4.5.4).
Let $d_n = \sum_{i=1}^{n'} \zeta_{n,i}^2 - \sum_{i=1}^{n'} \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2$ for $n = 1, 2, \ldots$. By (8) the terms of $d_n$ have zero conditional means and hence are orthogonal, so equation (8) and the inequality $(a+b)^2 \leq 2a^2 + 2b^2$ imply that

$E d_n^2 \leq 2 \sum_{i=1}^{n'} E\bigl\{ \zeta_{n,i}^2 - E(\zeta_{n,i}^2 \mid B_{n,\tau(i-1)}) \bigr\}^2 + 2 \sum_{i=1}^{n'} E\Bigl\{ \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 - E\Bigl( \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 \,\Big|\, B_{n,\tau(i-1)} \Bigr) \Bigr\}^2 .$

Here $E\{ \zeta_{n,i}^2 - E(\zeta_{n,i}^2 \mid B_{n,\tau(i-1)}) \}^2 \leq E \zeta_{n,i}^4$, so the first sum is at most $E \sum_{i=1}^{n'} \zeta_{n,i}^4$, and similarly

$E\Bigl\{ \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 - E\Bigl( \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 \,\Big|\, B_{n,\tau(i-1)} \Bigr) \Bigr\}^2 \leq E\Bigl\{ \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 \Bigr\}^2 .$
By a well-known martingale inequality (see Burkholder (1973), Theorem 3.2) there is a universal constant $c$ such that

$E\Bigl\{ \sum_{k=\tau(i-1)+1}^{\tau'(i)} \xi_{n,k}^2 \Bigr\}^2 \leq c\, E \zeta_{n,i}^4 .$

(Remember that $\sum_{k=\tau(i-1)+1}^{\tau'(i) \wedge N} \xi_{n,k}$ converges almost surely as $N \to \infty$ and, since it is bounded by 3, also converges in the 4'th mean.)
Hence

$E d_n^2 \leq 2(1+c) E \sum_{i=1}^{n'} \zeta_{n,i}^4 \leq 2(1+c) E\Bigl\{ \max_{1 \leq i \leq n'} \zeta_{n,i}^2 \sum_{i=1}^{n'} \zeta_{n,i}^2 \Bigr\} \to 0$

as $n \to \infty$, since $\{\sum_{i=1}^{n'} \zeta_{n,i}^2\}_{n=1}^{\infty}$ is uniformly integrable and since $\max_{1 \leq i \leq n'} \zeta_{n,i}^2 \xrightarrow{P} 0$. Thus $d_n \xrightarrow{P} 0$, and combining this with (6) we have that for arbitrary $\varepsilon > 0$

$P\Bigl( \Bigl| \sum_{i=1}^{\tau_n(1)} \xi_{n,i}^2 - \sum_{i=1}^{n'} \zeta_{n,i}^2 \Bigr| > \varepsilon \Bigr) \to 0$  as $n \to \infty$.

By (7) this proves that (4) holds and thus that (2) is satisfied.
It now only remains to be observed that (2) and (3) together give that $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \to t$ in the mean also. □

It should perhaps be noted that in the converse part of the lemma, the condition (3) cannot be deleted entirely, not even if $\{\tau_n(t)\}$ is non-random.
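The variance computation behind (5) in the proof above is easy to check numerically (our sketch, not the paper's; the sample sizes are arbitrary): for a standard Brownian motion the sum of $k$ squared increments over $[0,1]$ has mean 1 and variance $2/k$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sum of squared Brownian increments over a partition of [0,1] into k cells:
# each increment is N(0, 1/k), so each square has mean 1/k and variance 2/k^2,
# and by independence the sum has mean 1 and variance 2/k.
k, reps = 50, 100_000
increments = rng.normal(0.0, np.sqrt(1.0 / k), size=(reps, k))
q = (increments ** 2).sum(axis=1)
print(q.mean(), q.var(), 2.0 / k)   # ~1, ~0.04, 0.04
```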
3. RANDOM TIME-SCALES.

In Section 2 above it is shown that a natural time-scale is given by the sums of the squares of the truncated and recentered summands. In this section some ways of expressing this time-scale more directly in terms of the original array $\{X_{n,i}\}$ will be investigated. The first result shows when it is possible to normalize directly by sums of squares of the $X_{n,i}$'s.

THEOREM 4. Let $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} X_{n,i}^2 > t\}$, $t \in [0,1]$, and assume that (1) is satisfied. Then $S \circ \tau_n \xrightarrow{d} B$ if and only if both $M_n \xrightarrow{P} 0$ and

(9)  $\sum_{i=1}^{\tau_n(t)} E_{i-1}(X'_{n,i}) \bigl\{ 2 X'_{n,i} - E_{i-1}(X'_{n,i}) \bigr\} \xrightarrow{P} 0$  as $n \to \infty$, $t \in [0,1]$.
PROOF. Either by assumption, or else since $S \circ \tau_n \xrightarrow{d} B$, we have that $M_n \xrightarrow{P} 0$, and thus $\sum_{i=1}^{\tau_n(t)} X''^2_{n,i} \xrightarrow{P} 0$. Furthermore, by considering $\tau_n(t) \wedge N$ and letting $N \to \infty$, Fatou's lemma gives that

$E\Bigl\{ \sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \Bigr\} \leq E\Bigl\{ \sum_{i=1}^{\tau_n(t)} X'^2_{n,i} \Bigr\} \leq t + o(1),$

where the last inequality holds since $\sum_{i=1}^{\tau_n(t)-1} X'^2_{n,i} \leq t$ by the definition of $\tau_n(t)$, while $E(X'^2_{n,\tau_n(t)}) \leq E\{\min(M_n^2, 1)\} = o(1)$. Hence for $t \in [0,1]$

$\sum_{i=1}^{\tau_n(t)} X'^2_{n,i} = \sum_{i=1}^{\tau_n(t)} X_{n,i}^2 + r_n = t + r'_n ,$

where $r_n \xrightarrow{P} 0$ and $r'_n \xrightarrow{P} 0$ as $n \to \infty$. Thus

$\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 = \sum_{i=1}^{\tau_n(t)} \bigl( X'_{n,i} - E_{i-1}(X'_{n,i}) \bigr)^2 = \sum_{i=1}^{\tau_n(t)} X'^2_{n,i} - \sum_{i=1}^{\tau_n(t)} E_{i-1}(X'_{n,i}) \bigl\{ 2 X'_{n,i} - E_{i-1}(X'_{n,i}) \bigr\} \xrightarrow{P} t$

for all $t \in [0,1]$ if and only if (9) holds. Now $E \sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \leq t + o(1)$, so Lemma 3 is applicable and the theorem follows. □
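A hypothetical numerical illustration (ours, not the paper's; all names and numbers are made up): one large summand inflates the raw sums-of-squares clock and stops it immediately, while the natural clock built from the truncated row is unaffected, which is why $M_n \xrightarrow{P} 0$ appears in Theorem 4.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau(sq_increments, t):
    """tau(t) = inf{k : sum_{i<=k} sq_increments[i] > t}, as a 1-based index."""
    return int(np.searchsorted(np.cumsum(sq_increments), t, side="right")) + 1

n = 10_000
x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)   # a well-behaved m.d.a. row
x[100] = 5.0                                       # one huge summand: M_n is not small

# Truncating at c = 1 removes the big value (here E_{i-1}(X') = 0, so xi = X'):
xi = np.where(np.abs(x) <= 1.0, x, 0.0)

print(tau(x ** 2, 0.5))    # raw clock: stops at the spike, k = 101
print(tau(xi ** 2, 0.5))   # natural clock: passes 0.5 near k = n/2
```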
As seen above, under weak conditions the sums of squares of the $X_{n,i}$'s give the natural time-scale. This normalization has the further advantage that it is readily available and that it does not depend on the underlying probability measure. However, in analogy with the case of independent summands, it might also be interesting to normalize by the conditional variances (cf. Remark 2). This possibility was introduced by Lévy, and is the random normalization that has been most investigated.
THEOREM 5. Let $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} E_{i-1}(X_{n,i}^2) > t\}$ and assume that (1) is satisfied. Then $S \circ \tau_n \xrightarrow{d} B$ if and only if both $M_n \xrightarrow{P} 0$ and

(10)  $\sum_{i=1}^{\tau_n(1)-1} \bigl\{ E_{i-1}(X'_{n,i})^2 + E_{i-1}(X''^2_{n,i}) \bigr\} + r_n \xrightarrow{P} 0$  as $n \to \infty$,

where $r_n = 1 - \sum_{i=1}^{\tau_n(1)-1} E_{i-1}(X_{n,i}^2)$.

REMARK 6. A simpler condition that together with $M_n \xrightarrow{P} 0$ implies (10) is

$\sum_{i=1}^{\tau_n(1)} \bigl\{ E_{i-1}(X'_{n,i})^2 + E_{i-1}(X''^2_{n,i}) \bigr\} \xrightarrow{P} 0 .$

If already the original array $\{X_{n,i}\}$ is a m.d.a., then $E_{i-1}(X'_{n,i}) = -E_{i-1}(X''_{n,i})$ and $|E_{i-1}(X''_{n,i})| \leq E_{i-1}(X''^2_{n,i})$, so in that case (10) is equivalent to

(10')  $\sum_{i=1}^{\tau_n(1)-1} E_{i-1}(X''^2_{n,i}) + r_n \xrightarrow{P} 0$  as $n \to \infty$.

Moreover, together with $M_n \xrightarrow{P} 0$ this simpler condition also implies (1).
PROOF. We first prove that (1), (10), and $M_n \xrightarrow{P} 0$ are sufficient to ensure $S \circ \tau_n \xrightarrow{d} B$. By definition

$E_{i-1}(X_{n,i}^2) = E_{i-1}(X'^2_{n,i}) + E_{i-1}(X''^2_{n,i})$  and  $E_{i-1}(\xi_{n,i}^2) = E_{i-1}(X'^2_{n,i}) - E_{i-1}(X'_{n,i})^2 ,$

and hence for $t \in [0,1]$

(11)  $\sum_{i=1}^{\tau_n(t)-1} E_{i-1}(\xi_{n,i}^2) = t - \Bigl( t - \sum_{i=1}^{\tau_n(t)-1} E_{i-1}(X_{n,i}^2) \Bigr) - \sum_{i=1}^{\tau_n(t)-1} \bigl\{ E_{i-1}(X'_{n,i})^2 + E_{i-1}(X''^2_{n,i}) \bigr\} .$

It follows that if $M_n \xrightarrow{P} 0$ and (10) hold then $\sum_{i=1}^{\tau_n(t)-1} E_{i-1}(\xi_{n,i}^2) \xrightarrow{P} t$, and thus, since (1) and $M_n \xrightarrow{P} 0$ together imply that $\max_{1 \leq i \leq \tau_n(1)} |\xi_{n,i}| \xrightarrow{P} 0$, also

(12)  $\sum_{i=1}^{\tau_n(t)} E_{i-1}(\xi_{n,i}^2) \xrightarrow{P} t$  as $n \to \infty$.

Let the natural time-scale be $\tau'_n(t) = \inf\{k;\ \sum_{i=1}^{k} \xi_{n,i}^2 > t\} \wedge \tau_n(1)$, $t \in [0,1]$, and observe that $\sum_{i=1}^{\tau'_n(t)} \xi_{n,i}^2 \leq t + \max_{1 \leq i \leq \tau_n(1)} \xi_{n,i}^2 \xrightarrow{P} t$. Now
(13)  $E\Bigl\{ \sum_{i=1}^{\tau'_n(t)} \bigl( \xi_{n,i}^2 - E_{i-1}(\xi_{n,i}^2) \bigr) \Bigr\}^2 \leq E \sum_{i=1}^{\tau'_n(t)} \xi_{n,i}^4 \leq E\Bigl\{ \max_{1 \leq i \leq \tau_n(1)} \xi_{n,i}^2 \sum_{i=1}^{\tau'_n(t)} \xi_{n,i}^2 \Bigr\} .$

Since $\max_{1 \leq i \leq \tau_n(1)} \xi_{n,i}^2 \xrightarrow{P} 0$ and the sums $\sum_{i=1}^{\tau'_n(t)} \xi_{n,i}^2$ are bounded, the right-hand member tends to zero, so also

$P\Bigl( \Bigl| \sum_{i=1}^{\tau'_n(t)} E_{i-1}(\xi_{n,i}^2) - \sum_{i=1}^{\tau'_n(t)} \xi_{n,i}^2 \Bigr| > \delta \Bigr) \to 0$  as $n \to \infty$,

for any $\delta > 0$, and hence $\sum_{i=1}^{\tau'_n(t)} E_{i-1}(\xi_{n,i}^2) \xrightarrow{P} t$. Combining this with (12) we have, for $\varepsilon > 0$, that
13
Tn(t)
T~(t-E)
P(T (t) $ T'(t-E)) $ p( I E'_ l (~ .) $
I
n
n
i=I 1
. ~1
i=I
In particular P(T i (t-E) < T (1)) -+- 1 and thus
T~(t-E) 2 P n
n
Li=l
~]i -+ t-E so
T~(t-E)
Tn(t)
(14) p( L [2. < t - 2E) $ p( I
[2. < t
i=I ~,1
i=l ""I1,1
as n-+<».
(15)
Furthennore
Tn(t)-l
Tn(t)
E I [2. = E
I E'_ l ([2,1')
i=l ~,1
i=I
1
~
+
-
0(1)
E.1- 1 ([2
.))
~,1
-+-
2E) + 0(1)
0
$
t +
-+-
0 as ~.
0(1),
and since
E>O is arbitrary, it follows from (14) and (15) that
Tn(t) 2 P
Ii=l ~,i -+ t for tE[O,l] (cL e.g. Lemma 2.11 of Mcleish (1974)), which
by Lemma 3 proves that SOTn ~ B.
Conversely, suppose $S \circ \tau_n \xrightarrow{d} B$ as $n \to \infty$. Then $M_n \xrightarrow{P} 0$, and since (15) holds, Lemma 3 is again applicable and thus, in particular, $\sum_{i=1}^{\tau_n(1)} \xi_{n,i}^2 \xrightarrow{P} 1$. Moreover, (13) then holds also if $\tau'_n(t)$ is replaced by $\tau_n(1)$, and it follows that $\sum_{i=1}^{\tau_n(1)} E_{i-1}(\xi_{n,i}^2) \xrightarrow{P} 1$ as $n \to \infty$, which by (11) proves that (10) holds. □
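Before turning to deterministic clocks, here is a small simulation of the Theorem 5 normalization (our sketch, not from the paper; the volatility recursion and all constants are invented): when the conditional variances $E_{i-1}(X_{n,i}^2)$ are genuinely random, their running sum still tracks the natural sums-of-squares clock.

```python
import numpy as np

rng = np.random.default_rng(2)

# A row whose conditional variance depends on the past: sigma2_i is known at
# time i-1, so E_{i-1}(X_i^2) = sigma2_i while X_i^2 itself fluctuates.
n = 20_000
eps = rng.standard_normal(n)
x = np.empty(n)
cond_var = np.empty(n)
sigma2 = 1.0 / n
for i in range(n):
    cond_var[i] = sigma2
    x[i] = np.sqrt(sigma2) * eps[i]
    sigma2 = (0.5 + (x[i] > 0)) / n     # variance reacts to the last sign

def tau(increments, t):
    """tau(t) = inf{k : sum_{i<=k} increments[i] > t}, 1-based."""
    return int(np.searchsorted(np.cumsum(increments), t, side="right")) + 1

# Theorem 5's conditional-variance clock vs. the natural sums-of-squares clock:
print(tau(cond_var, 0.6), tau(x ** 2, 0.6))   # close when condition (10) holds
```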
4. NON-RANDOM TIME-SCALES.

If the summands are independent it is sufficient to consider deterministic time-scales, the most obvious one being given by the variances. In this section we will treat non-random time-scales. First, in Theorem 6 below, we will assume that the original array $\{X_{n,i}\}$ is a m.d.a., and then a more general case is treated in Theorem 8.

THEOREM 6. Let $\{X_{n,i}\}$ be a m.d.a., put $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} E(X_{n,i}^2) > t\}$, and assume that (1) is satisfied. Then $S \circ \tau_n \xrightarrow{d} B$ if and only if both $M_n \xrightarrow{P} 0$ and

(16)  $\sum_{i=1}^{\tau_n(t)} X'^2_{n,i} \xrightarrow{P} t$  as $n \to \infty$, $t \in [0,1]$.
REMARK 7. If $\{X_{n,i}\}$ is a m.d.a. and $\tau_n(t)$ is as above, then $M_n \xrightarrow{P} 0$ and (16) together imply (1), so for sufficiency it is not necessary to assume that (1) holds. A number of equivalent sufficient conditions are given by Scott (1973).
PROOF. It is easy to show that if $M_n \xrightarrow{P} 0$ and (16) hold then the conditions of the first part of Lemma 3 are satisfied, and hence $S \circ \tau_n \xrightarrow{d} B$. However, since the result is well known we omit the details.
For the reverse implication, assume that $S \circ \tau_n \xrightarrow{d} B$. Then $M_n \xrightarrow{P} 0$, and moreover $E \sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \leq t + o(1)$. Hence, from the converse part of Lemma 3, $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$ as $n \to \infty$, $t \in [0,1]$, also with convergence in the mean. Now

$t \geq E \sum_{i=1}^{\tau_n(t)-1} X_{n,i}^2 = E \sum_{i=1}^{\tau_n(t)-1} \xi_{n,i}^2 + E \sum_{i=1}^{\tau_n(t)-1} \bigl\{ E_{i-1}(X'_{n,i})^2 + E_{i-1}(X''^2_{n,i}) \bigr\},$

so

$E \sum_{i=1}^{\tau_n(t)-1} \bigl\{ E_{i-1}(X'_{n,i})^2 + E_{i-1}(X''^2_{n,i}) \bigr\} \leq t - E \sum_{i=1}^{\tau_n(t)-1} \xi_{n,i}^2 = o(1)$  as $n \to \infty$.

Proceeding as in the proof of Theorem 4, it is enough to note that hence also

$E \sum_{i=1}^{\tau_n(1)} \xi_{n,i}^2\, E_{i-1}(\xi_{n,i}^2) \leq E\Bigl\{ \max_{1 \leq i \leq \tau_n(1)} E_{i-1}(\xi_{n,i}^2) \sum_{i=1}^{\tau_n(1)} \xi_{n,i}^2 \Bigr\} \to 0$

as $n \to \infty$, and thus that (16) holds. □
THEOREM 8. Let $\tau_n(t) = \inf\{k;\ \sum_{i=1}^{k} E(\xi_{n,i}^2) > t\}$ and assume that (1) holds. Then $S \circ \tau_n \xrightarrow{d} B$ if and only if $M_n \xrightarrow{P} 0$ and furthermore

$\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$  as $n \to \infty$, $t \in [0,1]$.

PROOF. Theorem 8 is just a special case of Lemma 3. □
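As with the earlier theorems, the deterministic clock of Theorem 8 is easy to experiment with numerically (our sketch; the variance profile and all constants are arbitrary): the clock is computed once from the known variances, and the condition $\sum_{i=1}^{\tau_n(t)} \xi_{n,i}^2 \xrightarrow{P} t$ can be checked by simulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Deterministic clock: the variances E(xi_i^2) are known in advance
# (here an arbitrary smooth profile, normalized so the row sums to 1).
n = 50_000
var = (1.0 + np.sin(np.linspace(0.0, 6.0, n))) / n
var = var / var.sum()
clock = np.cumsum(var)

def tau(t):
    """tau_n(t) = inf{k : sum_{i<=k} E(xi_i^2) > t}, 1-based, computed once."""
    return int(np.searchsorted(clock, t, side="right")) + 1

# Check the condition of Theorem 8 at t = 0.5 over a few simulated rows:
t, k = 0.5, tau(0.5)
samples = [np.sum((rng.standard_normal(k) * np.sqrt(var[:k])) ** 2)
           for _ in range(5)]
print(k, samples)   # each sample of sum xi_i^2 up to tau(t) is close to t
```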
5. A CONCLUDING REMARK.

We have argued that the condition (1) is needed to make the martingale property come into play. However, the necessary and sufficient conditions of Theorem 6 and, if it is assumed that $\{X_{n,i}\}$ is a m.d.a., of Theorem 5 imply that (1) holds. Hence one might hope to obtain the results of those theorems without assuming (1). Sufficiency is of course obvious, but as regards necessity the best the present author has thus far been able to do is to replace (1) by the requirement that $\{M_n^2\}_{n=1}^{\infty}$ is uniformly integrable.
REFERENCES

[1] Billingsley, P.: The Lindeberg-Lévy theorem for martingales. Proc. Amer. Math. Soc. 12, 788-792 (1961).

[2] Billingsley, P.: Convergence of Probability Measures. New York: Wiley, 1968.

[3] Brown, B. M.: Martingale central limit theorems. Ann. Math. Statist. 42, 59-66 (1971).

[4] Burkholder, D. L.: Distribution function inequalities for martingales. Ann. Probability 1, 19-42 (1973).

[5] Chung, K. L.: A Course in Probability Theory. New York: Harcourt, Brace & World, 1968.

[6] Drogin, R.: An invariance principle for martingales. Ann. Math. Statist. 43, 602-620 (1972).

[7] Dvoretzky, A.: Central limit theorems for dependent random variables. Proc. Sixth Berkeley Symp. Math. Statist. Probability. Berkeley: Univ. of Calif. Press, 1972.

[8] Ibragimov, I. A.: A central limit theorem for a class of dependent random variables. Theor. Prob. Appl. 8, 83-89 (1963).

[9] Lévy, P.: Théorie de l'Addition des Variables Aléatoires. Paris: Gauthier-Villars, second edition 1954.

[10] McLeish, D. L.: Dependent central limit theorems and invariance principles. Ann. Probability 2, 620-628 (1974).

[11] Scott, D. J.: Central limit theorems for martingales and for processes with stationary increments using a Skorokhod representation approach. Adv. Appl. Prob. 5, 119-137 (1973).