WEAK CONVERGENCE OF MULTIDIMENSIONAL EMPIRICAL PROCESSES
FOR STATIONARY φ-MIXING PROCESSES
By
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina, Chapel Hill, N. C.
Institute of Statistics Mimeo Series No. 846
OCTOBER 1972
WEAK CONVERGENCE OF MULTIDIMENSIONAL EMPIRICAL PROCESSES FOR
STATIONARY φ-MIXING PROCESSES 1)

BY PRANAB KUMAR SEN
University of North Carolina, Chapel Hill

For a stationary φ-mixing sequence of stochastic p(≥1)-vectors, weak convergence of the empirical process (in the J_1-topology on D^p[0,1]) to an appropriate Gaussian process is established under a simple condition on the mixing constants {φ_n}. Weak convergence for random number of stochastic vectors is also studied.
AMS 1970 Classification: 60F05, 60B10.

Key Words and Phrases: D^p[0,1] space, empirical processes, Gaussian process, random sample size, Skorokhod J_1-topology, tightness, weak convergence.
1) Research sponsored by the Aerospace Research Laboratories, Air Force Systems Command, U. S. Air Force, Contract F33615-71-C-1927. Reproduction in whole or in part permitted for any purpose of the U. S. Government.
1. INTRODUCTION. Let {X_i = (X_{i1}, ..., X_{ip})', -∞<i<∞} be a stationary φ-mixing sequence of stochastic vectors defined on a probability space (Ω,A,P), with each X_i having (marginally) a continuous distribution function (df) F(x), x ∈ R^p, the p(≥1)-dimensional Euclidean space.
Thus, if M^k_{-∞} and M^∞_{k+n} are respectively the σ-fields generated by {X_i, i ≤ k} and {X_i, i ≥ k+n}, and if A ∈ M^k_{-∞} and B ∈ M^∞_{k+n}, then for all k (-∞<k<∞) and non-negative n,

(1.1)   |P(A∩B) - P(A)P(B)| ≤ φ_n P(A),   φ_n ≥ 0,

where φ_n is non-increasing in n and lim_{n→∞} φ_n = 0.
We denote the marginal df of X_{ij} by F_[j], let Y_{ij} = F_[j](X_{ij}), 1 ≤ j ≤ p, Y_i = (Y_{i1}, ..., Y_{ip})', -∞<i<∞, and denote the df of Y_i by

(1.2)   G(t) = P{Y_i ≤ t},   t ∈ E^p,

where E^p = {t: 0 ≤ t ≤ 1} is the unit p-dimensional cube, 0 = (0,...,0)', 1 = (1,...,1)', and a ≤ b means that a_j ≤ b_j, 1 ≤ j ≤ p. Note that all the p univariate marginals of G are rectangular [0,1] dfs, while the bivariate and higher order dfs depend on F, unless X_{i1}, ..., X_{ip} are independent. Also, note that G(t) = 0 if at least one of its coordinates is zero. For a sample X_1, ..., X_n of size n, we define the empirical df for Y_1, ..., Y_n by
(1.3)   G_n(t) = n^{-1} Σ_{i=1}^n c(t - Y_i),   t ∈ E^p,

where c(u) = 1 iff u ≥ 0 (coordinatewise), and = 0 otherwise. Also, G_n(t) = 0 when at least one of the coordinates t_1, ..., t_p of t is equal to 0. The empirical process W_n = {W_n(t), t ∈ E^p} is then defined by

(1.4)   W_n(t) = n^{1/2}[G_n(t) - G(t)],   t ∈ E^p.
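To fix ideas, the quantities in (1.3) and (1.4) can be computed directly; the following sketch (not part of the original development, with an assumed sample of iid uniform vectors standing in for the Y_i) evaluates G_n(t) and W_n(t) at a point of E^p.

```python
import random

def c(u):
    # c(u) = 1 iff every coordinate of u is >= 0, and 0 otherwise, as in (1.3)
    return 1.0 if all(x >= 0 for x in u) else 0.0

def G_n(t, Y):
    # empirical df of Y_1, ..., Y_n at the point t in E^p, per (1.3)
    return sum(c(tuple(tj - yj for tj, yj in zip(t, y))) for y in Y) / len(Y)

def W_n(t, Y, G):
    # empirical process W_n(t) = n^{1/2}[G_n(t) - G(t)], per (1.4)
    return len(Y) ** 0.5 * (G_n(t, Y) - G(t))

random.seed(0)
n = 1000
Y = [(random.random(), random.random()) for _ in range(n)]  # assumed iid uniforms, p = 2
G = lambda t: t[0] * t[1]  # true df under the (assumed) independence of coordinates
print(round(G_n((0.5, 0.5), Y), 3), round(W_n((0.5, 0.5), Y, G), 3))
```

By the classical CLT heuristics, W_n(t) at a fixed t is approximately centered normal for large n, which is what the weak-convergence results below make uniform in t.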
For every n ≥ 1, the process W_n belongs to the space D^p[0,1] of all real valued functions on E^p with no discontinuities of the second kind, and with D^p[0,1] we associate the (extended) Skorokhod J_1-topology. For an excellent exposition of weak convergence of processes on D^p[0,1], we may refer to Neuhaus (1971). Also, for p=1, a detailed account is given in Billingsley (1968).
When the X_i are independent and identically distributed (iid), i.e., φ_n = 0, ∀ n ≥ 1, W_n converges in distribution (in the J_1-topology on D^p[0,1]) to an appropriate Gaussian process (Brownian bridge for p=1); we may again refer to Neuhaus (1971), who also cites other references. For φ-mixing processes satisfying (1.1), Billingsley (1968, p. 197) established the weak convergence of W_n to a Gaussian function when p=1 and Λ_2(φ) < ∞, where

(1.5)   Λ_k(φ) = Σ_{n=0}^∞ (n+1)^k φ_n^{1/2},   k ≥ 0.
Later, Sen (1971) showed that for p=1, Λ_1(φ) < ∞ insures the weak convergence of W_n. Our first objective is to show that for general p ≥ 1, the weak convergence of W_n to an appropriate Gaussian process holds under Λ_1(φ) < ∞.
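For intuition about the condition Λ_1(φ) < ∞, the partial sums of the series in (1.5) are easy to examine numerically; the sketch below assumes a hypothetical geometrically decaying mixing sequence φ_n = ρ^n (an assumption made purely for illustration), for which every Λ_k(φ) is finite.

```python
def Lambda(k, phi, terms=5000):
    # partial sum of Lambda_k(phi) = sum_{n>=0} (n+1)^k * phi_n^{1/2}, per (1.5)
    return sum((n + 1) ** k * phi(n) ** 0.5 for n in range(terms))

rho = 0.5
phi = lambda n: rho ** n  # hypothetical geometric mixing rates (an assumption)
L1 = Lambda(1, phi)
# with phi_n^{1/2} = r^n and r = rho^{1/2}, the series sums to 1/(1 - r)^2
print(round(L1, 4))
```

Geometric decay is far stronger than needed: any φ_n = O(n^{-2k-2-delta}) already makes Λ_k(φ) finite.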
Let now {N_ν, ν ≥ 1} be a sequence of positive integer valued random variables, such that

(1.6)   ν^{-1} N_ν → λ, in probability, as ν → ∞,

where λ is a positive random variable defined on the same probability space (Ω,A,P). For independent {X_i} and p=1, Pyke (1968) has shown that W_{N_ν} converges in law to a Brownian bridge as ν → ∞.
Using certain sub-martingale properties of the Kolmogorov supremum of W_n, Sen (1972b) has shown that for general p ≥ 1 and iid vectors, W_{N_ν} weakly converges to an appropriate Gaussian process. Our second objective is to show that for φ-mixing processes satisfying (1.1), Λ_1(φ) < ∞ again insures the weak convergence of W_{N_ν} to an appropriate Gaussian process, for every p ≥ 1.
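Condition (1.6) asks only that ν^{-1}N_ν converge in probability to a positive random variable; a small simulation (with a purely illustrative choice of N_ν, not taken from the paper) shows the ratio stabilizing.

```python
import random

def sample_N(nu, lam, rng):
    # an illustrative N_nu: lam * nu plus an O(nu^{1/2}) fluctuation, so that
    # N_nu / nu -> lam in probability as nu -> infinity, as required by (1.6)
    return max(1, int(lam * nu + rng.gauss(0.0, nu ** 0.5)))

rng = random.Random(1)
lam = 2.0
ratios = [sample_N(nu, lam, rng) / nu for nu in (10 ** 2, 10 ** 4, 10 ** 6)]
print([round(r, 3) for r in ratios])
```

Since the fluctuation is of order ν^{1/2}, the ratio N_ν/ν deviates from λ by O(ν^{-1/2}), comfortably within the in-probability convergence that (1.6) demands.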
2. WEAK CONVERGENCE OF {W_n}. Define, along with (1.5),

(2.1)   Λ_k(φ²) = Σ_{n=0}^∞ (n+1)^k φ_n,   k ≥ 0.

Then, [Λ_k(φ) < ∞] ⇒ [Λ_ℓ(φ) < ∞], ∀ ℓ ≤ k, and [Λ_ℓ(φ²) < ∞], ∀ ℓ ≤ 2k. Consider now a p-dimensional Gaussian process W = {W(t), t ∈ E^p}, where E[W(t)] = 0, t ∈ E^p, and for every s, t ∈ E^p,

(2.2)   E[W(s)W(t)] = G(s∧t) - G(s)G(t) + Σ_{i=2}^∞ {E[c(s - Y_1)c(t - Y_i)] + E[c(t - Y_1)c(s - Y_i)] - 2G(s)G(t)},

where s∧t = (s_1∧t_1, ..., s_p∧t_p). Note that by Theorem 20.1 of Billingsley (1968, p. 174), the series on the right hand side (rhs) of (2.2) converges when Λ_0(φ) < ∞.
LEMMA 2.1. Under (1.1) and Λ_0(φ) < ∞, the finite dimensional distributions of {W_n} converge (as n → ∞) to those of W.

Proof. For any (fixed) m(≥1) and given t_1, ..., t_m (∈ E^p), consider an arbitrary linear compound Z_n(λ) = Σ_{j=1}^m λ_j W_n(t_j), where λ = (λ_1, ..., λ_m)' ≠ 0. Then, by (1.3) and (1.4),

(2.3)   Z_n(λ) = n^{-1/2} Σ_{i=1}^n {Σ_{j=1}^m λ_j [c(t_j - Y_i) - G(t_j)]} = n^{-1/2} Σ_{i=1}^n U_i(λ; t_1, ..., t_m), say,

where the U_i are bounded random variables (with zero expectations), and by (1.1), {U_i, -∞<i<∞} is also a φ-mixing sequence. Therefore, by Theorem 20.1 of Billingsley (1968, p. 174), under (1.1) and Λ_0(φ) < ∞, Z_n(λ) is asymptotically normally distributed with 0 mean and variance

(2.4)   σ²(λ) = Σ_{j=1}^m Σ_{j'=1}^m λ_j λ_{j'} E[W(t_j)W(t_{j'})].
Since this holds for all λ ≠ 0, [W_n(t_1), ..., W_n(t_m)] has asymptotically an m-variate normal distribution with null mean vector and dispersion matrix ((E[W(t_j)W(t_{j'})])), j, j' = 1, ..., m. Since m and t_1, ..., t_m are arbitrarily fixed, the lemma follows.
By virtue of Lemma 2.1, we need to establish only the tightness of {W_n}, which will then ensure the weak convergence of {W_n} to W. In this context, the following lemma will be useful. Let {T_i = T(X_i), -∞<i<∞} be a φ-mixing sequence of bounded random variables, such that (1.1) holds and

(2.5)   ET_i = 0,   ET_i² = τ: 0 < τ ≤ 1,   P{|T_i| > 1} = 0   and   E|T_i| ≤ cτ,

where 0 < c < ∞. In particular, for centered Bernoullian random variables, c = 2. Let then S_n = T_1 + ... + T_n, n ≥ 1.

LEMMA 2.2. Under (1.1), (2.5) and Λ_k(φ) < ∞ for some k ≥ 0, for every n ≥ 1,

(2.6)   E(S_n^{2(k+1)}) ≤ K_φ {nτ + ... + (nτ)^{k+1}},   K_φ < ∞,

where K_φ depends only on {φ_n}.
Proof. Note that for k ≥ 0,

(2.7)   E(S_n^{2(k+1)}) ≤ [(2k+2)!] n Σ_{n,2k+1} |E(T_1 T_{i_1} ··· T_{i_{2k+1}})|,

where the summation Σ_{n,2k+1} extends over all 1 ≤ i_1 ≤ ... ≤ i_{2k+1} ≤ n. Also, note that if ξ and η are M^k_{-∞} and M^∞_{k+n} measurable, E|ξ| < ∞ and P{|η| > 1} = 0, then [cf. Billingsley (1968, p. 171)]

(2.8)   |E(ξη) - E(ξ)E(η)| ≤ 2φ_n E|ξ|.

Proceeding as in the proof of Lemma 2.1 of Sen (1971) and using (2.8), we obtain that if Λ_0(φ) < ∞, under (1.1) and (2.5),
(2.9)   n Σ_{n,1} |E(T_1 T_{i_1})| ≤ [2c Λ_0(φ²)](nτ),

(2.10)  n Σ_{n,2} |E(T_1 T_{i_1} T_{i_2})| ≤ K'_φ (nτ),   K'_φ < ∞,

(2.11)  n Σ_{n,3} |E(T_1 T_{i_1} T_{i_2} T_{i_3})| ≤ K''_φ [nτ + (nτ)²],   K''_φ < ∞.
Let us now assume that for 1 ≤ a ≤ 2k-1, k ≥ 1, n ≥ 1,

(2.12)  n Σ_{n,a} |E(T_1 T_{i_1} ··· T_{i_a})| ≤ K_{φ,a} {nτ + ... + (nτ)^{a*}},   K_{φ,a} < ∞,

where a* = t for a = 2t or 2t-1, t ≥ 1. Then, we shall show that (2.12) also holds for a = 2k and 2k+1. We consider only the case of a = 2k+1 (as the other case follows similarly). For this, we let i_0 = 1, i_j = i_{j-1} + r_j, r_j ≥ 0, 1 ≤ j ≤ 2k+1, and let Σ^{(j)}_{n,2k+1} be the summation over all 1 ≤ i_1 ≤ ... ≤ i_{2k+1} ≤ n for which r_j = max{r_1, ..., r_{2k+1}}, for j = 1, ..., 2k+1. Then

(2.13)  n Σ_{n,2k+1} |E(T_1 T_{i_1} ··· T_{i_{2k+1}})| ≤ Σ_{j=1}^{2k+1} {n Σ^{(j)}_{n,2k+1} |E(T_1 ··· T_{i_{2k+1}})|},

and, for 1 ≤ j ≤ 2k+1,

(2.14)  n Σ^{(j)}_{n,2k+1} |E(T_1 ··· T_{i_{2k+1}})| ≤ n Σ^{(j)}_{n,2k+1} |E(T_1 ··· T_{i_{j-1}}) E(T_{i_j} ··· T_{i_{2k+1}})| + 2n Σ^{(j)}_{n,2k+1} φ_{r_j} E|T_1 ··· T_{i_{j-1}}|,

where by (2.5) and (2.8), for each j, the second term on the rhs of (2.14) is bounded by

(2.15)  2n E|T_1| Σ^{(j)}_{n,2k+1} φ_{r_j} ≤ 2ncτ Σ_{r=0}^{n-1} (r+1)^{2k} φ_r ≤ 2c Λ_{2k}(φ²)(nτ) < ∞,

since E|T_1| ≤ cτ and Λ_{2k}(φ²) < ∞. For j = 1 or 2k+1, the first term on the rhs of (2.14) vanishes [by (2.5)], while for 2 ≤ j ≤ 2k, we have

(2.16)  n Σ^{(j)}_{n,2k+1} |E(T_1 ··· T_{i_{j-1}}) E(T_{i_j} ··· T_{i_{2k+1}})| ≤ n Σ_{i_j=1}^n {Σ_{i_j, j-1} |E(T_1 ··· T_{i_{j-1}})|}{Σ_{n-i_j+1, 2k+1-j} |E(T_{i'_1} ··· T_{i'_{2k+1-j}})|},

where i'_ℓ = i_{j+ℓ} - i_j + 1, ℓ = 0, ..., 2k+1-j. Since for 2 ≤ j ≤ 2k, 1 ≤ j-1, 2k+1-j ≤ 2k-1, and by assumption (2.12) holds for a ≤ 2k-1, we obtain from (2.12), (2.16) and the inequality that for a ≥ 0, b ≥ 0,

(2.17)  Σ_{i=1}^n i^a (n-i+1)^b ≤ c* n^{a+b+1},   c* < ∞,

that the rhs of (2.16) is bounded by

(2.18)  K*_{φ,j} [nτ + ... + (nτ)^{k*}],   K*_{φ,j} < ∞,

where

(2.19)  k* = k for j odd, and k* = k+1 for j even.

Thus, from (2.13) through (2.19) we conclude that (2.12) holds for a = 2k+1. Using then (2.9)-(2.11) and the method of induction, the proof follows for general k ≥ 1. Q.E.D.
Let now {T*_i = T*(X_i), -∞<i<∞} be another φ-mixing sequence satisfying (1.1) and (2.5) with τ being replaced by τ*: 0 < τ* ≤ 1, and let S*_n = T*_1 + ... + T*_n, n ≥ 1.

LEMMA 2.3. Under (1.1), (2.5) and Λ_1(φ) < ∞, for every n ≥ 1,

(2.20)  E(S_n² S*_n²) ≤ K_φ {n²ττ* + nτ + nτ*},

where K_φ (<∞) depends only on {φ_n}.
The proof follows along the lines of Lemma 2.2, and hence, is omitted. In passing, we may remark that Lemma 2.2 extends Lemma 5.2 of Neuhaus (1971) to more general random variables and to φ-mixing processes. For independent processes (i.e., for φ_n = 0, ∀ n ≥ 1), the rhs of (2.20) can be replaced by 3n²ττ*, for every 0 ≤ τ ≤ 1, 0 ≤ τ* ≤ 1. On the other hand, for a general φ-mixing sequence, the rhs of (2.20) can not be replaced by K_φ n²ττ* for every 0 ≤ τ, τ* ≤ 1. This replacement is possible for more restricted φ-mixing processes, where in (1.1) the rhs is φ_n P(A)P(B), i.e., for the so called *-mixing sequences; we may refer to Lemma 3.1 of Sen (1973).
For iid stochastic vectors, the tightness of {W_n} has been studied in Section 5 of Neuhaus (1971). On replacing his Lemma 5.2 by our Lemma 2.2 and then using his Lemma 5.1, we are able to prove the tightness of {W_n} in a similar way. However, in view of the fact that his Lemma 5.2 involves the central moment of order 2(p+1), we end up with the condition Λ_p(φ) < ∞, which for p > 1 appears to be more stringent than our desired condition Λ_1(φ) < ∞.
Let now B = {t: a ≤ t ≤ b} be a p-dimensional block contained in E^p (i.e., 0 ≤ a ≤ b ≤ 1), μ(B) = P{Y_1 ∈ B}, and for every n ≥ 1, let μ_n(B) = n^{-1}[# of Y_i ∈ B, 1 ≤ i ≤ n]. Let us then define

(2.21)  V_n(B) = n^{1/2}[μ_n(B) - μ(B)].

Now, by definition, G(t) and G_n(t) are equal to 0 if at least one of the p coordinates of t is equal to 0. Thus, W_n(t) is equal to 0 when t_j = 0 for some 1 ≤ j ≤ p. Thus, using a direct multiparameter generalization of Theorem 15.6 of Billingsley (1968, p. 128) [viz., Theorem 3 of Bickel and Wichura (1971)], for the tightness of {W_n}, it suffices to show that for every pair (B,C) of neighbouring blocks (in E^p) and every λ > 0,

(2.22)  P{|V_n(B)| ≥ λ, |V_n(C)| ≥ λ} ≤ K λ^{-γ} [μ(B∪C)]^β,

where K < ∞, γ > 0 and β > 1. Now, as mentioned earlier, for iid stochastic vectors [or for a *-mixing (stationary) process], E[V_n²(B)V_n²(C)] ≤ K μ(B)μ(C), so that (2.22) holds with γ = 4, β = 2.
But, for a general φ-mixing process, by Lemma 2.3,

(2.23)  E[V_n²(B)V_n²(C)] ≤ K_φ {Q(B)Q(C) + n^{-1}[Q(B) + Q(C)]},

where 0 ≤ Q(A) = μ(A)[1 - μ(A)] ≤ μ(A) ≤ 1, ∀ A ∈ E^p. The last two terms on the rhs of (2.23) relate to a value of β = 1 in (2.22), so that we can not claim β > 1 for every B, C ∈ E^p. A similar problem arises if one uses the inequality E[V_n²(B)V_n²(C)] ≤ {E[V_n⁴(B)] E[V_n⁴(C)]}^{1/2} and then uses Lemma 2.2 (for k=1). Thus, presumably, a modified approach is needed.
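The block quantities μ_n(B) and V_n(B) of (2.21) are elementary counting statistics; a minimal sketch (with assumed iid uniform vectors, so that μ(B) reduces to the volume of B, an assumption made only for illustration) is:

```python
import random

def V_n(B, Y):
    # V_n(B) = n^{1/2}[mu_n(B) - mu(B)] for the block B = (lower, upper), per (2.21);
    # here mu(B) is taken as the volume of B, valid under the iid-uniform assumption only
    lo, hi = B
    inside = sum(all(l <= yj <= h for l, yj, h in zip(lo, y, hi)) for y in Y)
    mu_n = inside / len(Y)
    mu = 1.0
    for l, h in zip(lo, hi):
        mu *= h - l
    return len(Y) ** 0.5 * (mu_n - mu)

random.seed(2)
Y = [(random.random(), random.random()) for _ in range(2000)]
B = ((0.2, 0.2), (0.7, 0.7))
print(round(V_n(B, Y), 3))
```

Note that V_n of the whole cube is identically 0, mirroring the fact that the fluctuation Q(A) = μ(A)[1 - μ(A)] in (2.23) vanishes at μ(A) = 0 and μ(A) = 1.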
We may remark that for general φ-mixing processes, it has been observed in Sen (1972a) that empirical processes behave quite smoothly for large n. This suggests the following approach. First, using the basic inequality between the moduli of continuity for the C^p[0,1] and D^p[0,1] spaces [cf. Billingsley (1968, p. 110) and Neuhaus (1971, p. 1288)] and the fact that W_n(t) = 0 with probability one when t_j = 0 for some 1 ≤ j ≤ p, it suffices to show that for every ε > 0 and η > 0, there exist a δ > 0 and an integer n_0, such that

(2.24)  P{w_δ(W_n) ≥ ε} ≤ η,   n ≥ n_0,

where for every 0 < δ < 1 and n ≥ 1,

(2.25)  w_δ(W_n) = sup{|W_n(s) - W_n(t)|: s, t ∈ E^p, |s_j - t_j| ≤ δ, 1 ≤ j ≤ p}.
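The modulus of continuity w_δ(W_n) entering (2.24)-(2.25) can be approximated by brute force on a finite grid; the sketch below (an iid uniform sample and a coarse grid, both assumptions made purely for illustration) shows its computation.

```python
import itertools
import random

random.seed(3)
n = 500
Y = [(random.random(), random.random()) for _ in range(n)]  # assumed iid uniforms, p = 2

def W(t):
    # empirical process of (1.4), with G(t) = t_1 t_2 under the independence assumption
    Gn = sum(all(yj <= tj for yj, tj in zip(y, t)) for y in Y) / n
    return n ** 0.5 * (Gn - t[0] * t[1])

def w_delta(values, delta):
    # approximate w_delta(W) = sup |W(s) - W(t)| over grid pairs with max_j |s_j - t_j| <= delta
    best = 0.0
    for (s, vs), (t, vt) in itertools.combinations(values.items(), 2):
        if max(abs(a - b) for a, b in zip(s, t)) <= delta:
            best = max(best, abs(vs - vt))
    return best

pts = [i / 10 for i in range(11)]
grid = list(itertools.product(pts, pts))
values = {t: W(t) for t in grid}  # cache W on the grid before the pairwise scan
print(round(w_delta(values, 0.1), 3))
```

The modulus is nondecreasing in δ, which is the monotonicity that makes "choose δ small, then n_0 large" arguments like (2.24) work.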
Second, by a direct multiparameter extension of Theorem 8.3 of Billingsley (1968, p. 56), for (2.24), it suffices to show that for every 0 ≤ b_0 < 1, ε > 0 and η > 0, there exist a δ: 0 < δ < 1 and an integer n_0, such that for B = {t: b_0 ≤ t ≤ b_0 + δ1} and n ≥ n_0,

(2.26)  P{sup_{t∈B} |W_n(t) - W_n(b_0)| > ε} ≤ η[μ(B) + δ^p],

where μ(B) is defined before (2.21). [Note that ½[μ(A) + ||A||] (where ||A|| is the Euclidean volume of A) is always ≤ 1, ∀ A ∈ E^p, and as μ(A) is bounded from above by any side of A, we have δ^p ≤ μ(B) + δ^p ≤ δ + δ^p → 0 as δ → 0.]
For a given ε > 0 and δ > 0 (to be chosen later on), select n_0 so large that δ > n_0^{-1/2} ε/2p. Let then (for n ≥ n_0)

(2.27)  b_n(i) = b_0 + (n^{-1/2} ε/2p) i,   0 ≤ i ≤ m_n,

where m_n = m_n 1 and

(2.28)  m_n = the smallest positive integer for which b_n(m_n) ≥ b_0 + δ1.

Also, let

(2.29)  B(i,n) = {t: b_n(i) ≤ t ≤ b_n(i+1)},   0 ≤ i ≤ m_n - 1.

Note that, if G_[j](t) (= t) is the marginal df of Y_{ij}, 1 ≤ j ≤ p, then by (2.27), ∀ i ≥ 0,

(2.30)  n^{1/2}[G(b_n(i+1)) - G(b_n(i))] ≤ Σ_{j=1}^p n^{1/2}[G_[j](b_{nj}(i_j+1)) - G_[j](b_{nj}(i_j))] = Σ_{j=1}^p (ε/2p) = ε/2.

Hence, on using the fact that for t ∈ B(i,n), G_n(b_n(i)) ≤ G_n(t) ≤ G_n(b_n(i+1)) and G(b_n(i)) ≤ G(t) ≤ G(b_n(i+1)), we obtain by (1.4), (2.30) and a few routine steps that

(2.31)  sup_{t∈B(i,n)} |W_n(t) - W_n(b_0)| ≤ max_{j=i,i+1} |W_n(b_n(j)) - W_n(b_0)| + ε/2,   ∀ i ≥ 0.
Consequently,

(2.32)  sup_{t∈B} |W_n(t) - W_n(b_0)| ≤ max_{0≤i≤m_n} |W_n(b_n(i)) - W_n(b_0)| + ε/2.

Thus, it suffices to show that for every ε > 0 and η > 0, there exist a δ: 0 < δ < 1 and an integer n_0, such that for n ≥ n_0 and every b_0 ∈ E^p,

(2.33)  P{max_{0≤i≤m_n} |W_n(b_n(i)) - W_n(b_0)| > ε/2} ≤ η[μ(B) + δ^p].
Now, by (1.4), (2.21) and (2.27), ∀ i ≥ 0,

(2.34)  W_n(b_n(i)) - W_n(b_0) = n^{-1/2} Σ_{α=1}^n {[c(b_n(i) - Y_α) - c(b_0 - Y_α)] - [G(b_n(i)) - G(b_0)]} = S_n(i), say,

where by Lemma 2.2, under Λ_1(φ) < ∞,

(2.35)  E|S_n(i+1) - S_n(i)|⁴ ≤ K_φ {n^{-1} μ(B(i,n)) + μ²(B(i,n))}.

Unfortunately, μ(B(i,n)), though bounded from above by ε/2√n, can be arbitrarily close to 0. For example, if G(t) is degenerate on a lower dimensional space, then μ(B) may be equal to 0 for some B ∈ E^p. To overcome this difficulty, we define

(2.36)  λ_n(i) = max{μ(B(i,n)), (ε/2p√n)^p},   ∀ i ≥ 0.

Now, (2.36) implies that λ_n(i) ≥ (ε/2p√n)^p, i.e., n^{-1} ≤ (2p/ε)² λ_n^{2/p}(i). Hence, from (2.35), we have under Λ_1(φ) < ∞,

(2.37)  E|S_n(i+1) - S_n(i)|⁴ ≤ K_φ {(2p/ε)² λ_n^{1+2/p}(i) + λ_n²(i)}.

Since for p=1 the proof is considered in Billingsley (1968, p. 198) and Sen (1971), we confine ourselves to p ≥ 2, so that 1 + 2/p ≤ 2. Then, from the above, we have,
(2.38)  E|S_n(i+1) - S_n(i)|⁴ ≤ K_{φ,ε} λ_n^β(i),   β = 1 + 2/p,

where

(2.39)  K_{φ,ε} = K_φ (1 + 4p²/ε²).

Now Bickel and Wichura (1971, Theorem 1) have considered a multiparameter extension of Theorem 12.5 of Billingsley (1968), under the assumption that (2.22) holds for every (B,C) ∈ E^p. By the same method of induction, it follows, on pretty much the same lines, that under (2.38), Theorem 12.2 of Billingsley (1968, p. 94) extends to a multiple array, as in (2.34).
Thus, we have for every ε > 0,

(2.40)  P{max_{0≤i≤m_n-1} |S_n(i)| > ε/2} ≤ (16 K_{φ,ε}/ε⁴)(Σ_{0≤i≤m_n-1} λ_n(i))^β,   β = 1 + 2/p,

where K_{φ,ε} (<∞) depends on ε through (2.39). Now, λ_n(i) ≤ μ(B(i,n)) + (ε/2p√n)^p, ∀ i ≥ 0, so that the rhs of (2.40) is bounded by
(2.41)  (16 K_{φ,ε}/ε⁴)[μ(B_n) + δ_n^p]^β,

where B_n = {t: b_0 ≤ t ≤ b_0 + (ε/2p√n)m_n} and δ_n = m_n(ε/2p√n). By (2.27), 0 ≤ μ(B_n) - μ(B) ≤ ε/2√n and δ ≤ δ_n ≤ δ + ε/2p√n. Thus, using the fact that δ^p ≤ μ(B_n) + δ_n^p ≤ δ_n + δ_n^p, we obtain that for every η > 0, there exist a δ > 0 and an n_0, such that for n ≥ n_0,

(2.42)  μ(B_n) + δ_n^p ≤ 2[μ(B) + δ^p],

and hence, since β > 1,

(2.43)  (16 K_{φ,ε}/ε⁴)[μ(B_n) + δ_n^p]^β ≤ η[μ(B) + δ^p],

which completes the proof of (2.33). Hence, we have the following.

LEMMA 2.4. Under (1.1) and Λ_1(φ) < ∞, {W_n} is tight, i.e., (2.24) holds.
From Lemmas 2.1 and 2.4, we arrive at the main theorem of this section.

THEOREM 1. Under (1.1) and Λ_1(φ) < ∞, as n → ∞, W_n converges in law (in the Skorokhod J_1-topology on D^p[0,1]) to a Gaussian process W for which (2.2) holds.
For later use in Section 3, we consider the following.

LEMMA 2.5. Under (1.1) and Λ_1(φ) < ∞,

(2.44)  sup_n E{sup_{t∈E^p} W_n²(t)} < ∞.

Proof: For a non-negative random variable Y and every k > 0, E{Y I(Y>k)} = kP{Y>k} + ∫_k^∞ P{Y>t} dt, where I(·) is the indicator function. So, to prove (2.44), it suffices to show that for every λ > 1,

(2.45)  sup_n P{sup_{t∈E^p} |W_n(t)| > λ} ≤ K λ^{-γ},   γ > 2,   K < ∞.

To prove (2.45), we virtually repeat the steps (2.27) through (2.41), where (i) in (2.27)-(2.28), we take both ε and δ equal to 1, (ii) in (2.36), we replace ε by 1 (so that K_{φ,1} ≤ K_φ(1 + 4p²) in (2.39)), and (iii) in (2.40), we replace ε/2 by λ/2, λ > 1. Since then λ_n(i) ≤ 1/p√n, the rhs of (2.40) reduces to

(2.46)  (16 K_{φ,1})(1 + (1 + 1/p√n)^p)/λ⁴ ≤ C_φ λ^{-4},   C_φ < ∞,   ∀ n.
Actually, it follows that under (1.1) and Λ_1(φ) < ∞,

(2.47)  sup_n E{sup_{t∈E^p} |W_n(t)|^{2+δ}} < ∞,   for every 0 ≤ δ < 1,

and in general, if Λ_k(φ) < ∞ for some k ≥ 1, then

(2.48)  sup_n E{sup_{t∈E^p} |W_n(t)|^{2k+δ}} < ∞,   ∀ δ: 0 ≤ δ < 1.   Q.E.D.
3. WEAK CONVERGENCE OF {W_{N_ν}}. For two real valued functions Z(t) and Z*(t), defined on E^p, we let

(3.1)  ρ(Z, Z*) = sup{|Z(t) - Z*(t)|: t ∈ E^p}.

Then let

(3.2)  W*_n = ρ(W_n, 0).
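The distance ρ and the Kolmogorov supremum W*_n = ρ(W_n, 0) of (3.1)-(3.2) admit a direct grid approximation (a sketch assuming an iid uniform sample, purely for illustration):

```python
import itertools
import random

def rho(Z, Z_star, grid):
    # rho(Z, Z*) = sup_{t in E^p} |Z(t) - Z*(t)|, approximated over a finite grid, per (3.1)
    return max(abs(Z(t) - Z_star(t)) for t in grid)

random.seed(4)
n = 400
Y = [(random.random(), random.random()) for _ in range(n)]  # assumed iid uniforms, p = 2

def W_n(t):
    # empirical process of (1.4), with G(t) = t_1 t_2 under the independence assumption
    Gn = sum(all(yj <= tj for yj, tj in zip(y, t)) for y in Y) / n
    return n ** 0.5 * (Gn - t[0] * t[1])

pts = [i / 20 for i in range(21)]
grid = list(itertools.product(pts, pts))
W_star = rho(W_n, lambda t: 0.0, grid)  # the Kolmogorov supremum W*_n of (3.2)
print(round(W_star, 3))
```

Since W_n is a step function, the grid approximation undershoots the true supremum slightly; refining the grid toward the sample points would close the gap.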
LEMMA 3.1. Under (1.1) and Λ_1(φ) < ∞, for every positive ε, there exists a K_ε (<∞), such that for every n,

(3.3)  P{max_{1≤k≤n} (k/n)^{1/2} W*_k > K_ε} < ε.

Proof. Without any loss of generality, we take K_ε > 1, as otherwise a smaller value of K_ε can always be replaced by a value greater than one. As by definition of W_n, with probability one, ρ(W_n, 0) ≤ n^{1/2}, for every n, we need to prove (3.3) only for n > K_ε² (>1). Let then

(3.4)  n_s = s[n^{1/2}],   s = 0, 1, ..., m_n,

where m_n is the smallest integer for which n_{m_n} ≥ n, and note that by (1.4), (3.2) and (3.4),

(3.5)  1 ≤ (n_s - n_{s-1}) ≤ n^{1/2},   ∀ s ≥ 1,
(3.5)
so that
1
(3.6)
max (k/n)~W~ 2 max (n
l<k<n
l<s<m
--n
-1
k
*
n) 2 W +1-
s
n
s
Thus, it suffices to show that for every O<E<l, there exists a positive
that
•
(3.7)
p{ max
l<s<m
--n
> K*} < E.
E
K~«oo),
such
•
Let us then define S_{ns} = ξ*_1 + ... + ξ*_s, s ≥ 1, where

(3.8)  ξ*_s = n^{-1/2}(n_s^{1/2} W*_{n_s} - n_{s-1}^{1/2} W*_{n_{s-1}}),   s ≥ 1.

Now, by (1.4), (3.2), (3.4) and (3.8), for every i: 1 ≤ i ≤ m_n - s, s ≥ 1,

(3.9)  |S_{n,s+i} - S_{n,s}| = n^{-1/2} |n_{s+i}^{1/2} W*_{n_{s+i}} - n_s^{1/2} W*_{n_s}| ≤ ((n_{s+i} - n_s)/n)^{1/2} W*_{n_s → n_{s+i}},

where W*_{k→q} is the Kolmogorov supremum [cf. (3.2)] for the empirical process based on X_{k+1}, ..., X_q, q > k. Now, on W*_{k→q}, we may use (2.47), so that by (3.9), when Λ_1(φ) < ∞,

(3.10)  E|S_{n,s+i} - S_{n,s}|^{2+δ} ≤ ([n_{s+i} - n_s]/n)^{1+δ/2} E[W*_{n_s → n_{s+i}}]^{2+δ} ≤ C(i/√n)^{1+δ/2},   C < ∞,   ∀ i = 0, 1, ..., m_n - s,   s ≥ 1.
Therefore, if we use Theorem 12.2 of Billingsley (1968, p. 94), we have by (3.4) and (3.10),

(3.11)  P{max_{1≤s≤m_n} (n_s/n)^{1/2} W*_{n_s} > K*_ε} = P{max_{1≤s≤m_n} |S_{ns}| > K*_ε} ≤ K/(K*_ε)^{2+δ} < ε,

by proper choice of K*_ε, as K (<∞) depends on δ (>0) only. Q.E.D.
As a direct consequence of Lemma 2.5 (or of the weak convergence of {W_n} to W), it follows that for every ε > 0, there exists a positive K_ε (<∞), such that

(3.12)  P{ρ(W_n, 0) > K_ε} < ε.

From (3.3) and (3.12), and proceeding as in the proof of Theorem 2.1 of Pyke (1968), we arrive at the following.

LEMMA 3.2. (Uniform continuity in probability). For every ε > 0 and η > 0, there exist a δ (>0) and an integer n_0, such that for n ≥ n_0,

(3.13)  P{max_{k: |k-n|<δn} ρ(W_k, W_n) > ε} < η.
We now consider the main theorem of this section.

THEOREM 2. Under (1.1), (1.6) and Λ_1(φ) < ∞, {W_{N_ν}} converges in law (as ν → ∞) in the Skorokhod J_1-topology on D^p[0,1] to a Gaussian process W for which (2.2) holds.

Proof. When in (1.6), λ = c (a constant > 0) with probability one, the proof of the theorem follows readily from Theorem 1 and (3.13). When λ is a positive
random variable, we introduce a sequence {k_n} of positive integers, such that

(3.14)  k_n → ∞ but n^{-1/2} k_n → 0, as n → ∞.

Let then

(3.15)  W'_n(t) = n^{-1/2} Σ_{i=k_n+1}^n [c(t - Y_i) - G(t)],   t ∈ E^p.

Then, by (1.4) and (3.14),

(3.16)  ρ(W_n, W'_n) ≤ k_n n^{-1/2} → 0, as n → ∞.

Since λ is defined on the same probability space (Ω,A,P) as the {X_i}, W'_n (and hence, W_n) is a mixing sequence in the sense of Rényi (1958). Consequently, by Lemma 3 of Blum, Hanson and Rosenblatt (1963), we obtain from Lemma 3.2 the following: if A ∈ A, then for every ε > 0 and η > 0, there exists a δ > 0, such that as n → ∞,

(3.17)  P{max_{k: |k-n|<δn} ρ(W_k, W_n) > ε | A} < η.

Similarly, by Lemma 2.4 and the Blum et al. lemma,

(3.18)  lim_{n→∞} P{w_δ(W_n) > ε | A} ≤ η,   ∀ A ∈ A.

The last two inequalities insure that the proof of the theorem for independent vectors, considered in Sen (1972b), holds for general φ-mixing processes satisfying (1.1) and Λ_1(φ) < ∞. For brevity, the details are therefore omitted.
REMARK. Kiefer (1961) has shown that for iid stochastic vectors, for every ε > 0, there exists a positive c_p(ε) (<∞), such that for p ≥ 1,

(3.19)  P{ρ(W_n, 0) > λ} ≤ c_p(ε) exp{-(2-ε)λ²},

for every λ > 0. For φ-mixing processes, such a strong result is not known. However, (2.48) shows that if Λ_k(φ) < ∞ for some k ≥ 1, then P{ρ(W_n, 0) > λ} is of the order λ^{-2k-δ}, δ > 0, for every λ > 0.
REFERENCES

[1] BICKEL, P. J., AND WICHURA, M. J. (1971). Convergence criteria for multiparameter stochastic processes and some applications. Ann. Math. Statist. 42, 1656-1670.

[2] BILLINGSLEY, P. (1968). Convergence of Probability Measures. John Wiley, New York.

[3] BLUM, J. R., HANSON, D. L., AND ROSENBLATT, J. I. (1963). On the central limit theorem for the sum of a random number of independent random variables. Zeit. für Wahrsch. Verw. Geb. 1, 389-393.

[4] KIEFER, J. (1961). On large deviations of the empiric D.F. of vector chance variables and a law of the iterated logarithm. Pacific Jour. Math. 11, 649-660.

[5] NEUHAUS, G. (1971). On weak convergence of stochastic processes with multi-dimensional time parameter. Ann. Math. Statist. 42, 1285-1295.

[6] PYKE, R. (1968). The weak convergence of the empirical process with random sample sizes. Proc. Cambridge Phil. Soc. 64, 155-160.

[7] RÉNYI, A. (1958). On mixing sequences of sets. Acta Math. Acad. Sci. Hungar. 9, 215-228.

[8] SEN, P. K. (1971). A note on weak convergence of empirical processes for sequences of φ-mixing random variables. Ann. Math. Statist. 42, 2131-2133.

[9] SEN, P. K. (1972a). On the Bahadur representation of sample quantiles for sequences of φ-mixing random variables. Jour. Mult. Anal. 2, 77-95.

[10] SEN, P. K. (1972b). On weak convergence of empirical processes for random number of independent stochastic vectors. Proc. Cambridge Phil. Soc. (in press).

[11] SEN, P. K. (1973). Limiting behaviour of regular functionals of empirical distributions for stationary *-mixing processes. Zeit. für Wahrsch. Verw. Geb. (in press).