LIMITING BEHAVIOR OF REGULAR FUNCTIONALS OF EMPIRICAL
DISTRIBUTIONS FOR STATIONARY *-MIXING PROCESSES

By

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina, Chapel Hill, N. C.

Institute of Statistics Mimeo Series No. 804

February 1972
LIMITING BEHAVIOR OF REGULAR FUNCTIONALS OF EMPIRICAL DISTRIBUTIONS
FOR STATIONARY *-MIXING PROCESSES¹

By Pranab Kumar Sen
University of North Carolina, Chapel Hill

SUMMARY. For a stationary *-mixing stochastic process, the law of the iterated logarithm, asymptotic normality and weak convergence to Brownian motion processes are established for von Mises' (1947) differentiable statistical functionals and Hoeffding's (1948) U-statistics. A few applications are also sketched.
1. INTRODUCTION

Let {X_i, −∞ < i < ∞} be a stationary *-mixing stochastic process defined on a probability space (Ω, 𝒜, P). Thus, if M_{−∞}^{k} and M_{k+n}^{∞} are respectively the σ-fields generated by {X_i, i ≤ k} and {X_i, i ≥ k+n}, and if A ∈ M_{−∞}^{k} and B ∈ M_{k+n}^{∞}, then for every k (−∞ < k < ∞) and n ≥ 1,

    |P(AB) − P(A)P(B)| ≤ ψ_n P(A)P(B),                                   (1.1)

where ψ_n ↓ 0 as n → ∞. Further conditions on {ψ_n} will be stated as and when necessary. We may refer to Blum, Hanson and Koopmans (1963) and Philipp (1969a,b,c) for detailed treatment of *-mixing processes in the context of the limiting behavior of sums of the X_i.
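As a concrete instance of (1.1), a stationary two-state Markov chain mixes at a geometric rate. The following Python sketch (illustrative only, not part of the original development; the transition probability p = 0.3 is an arbitrary choice) checks the ratio |P(AB) − P(A)P(B)|/[P(A)P(B)] for the single-coordinate events A = {X_0 = i}, B = {X_n = j}, for which it equals |1 − 2p|^n exactly:

```python
# Two-state stationary Markov chain with symmetric transition probability p:
# an elementary mixing process; for the single-coordinate events checked
# below, the ratio in (1.1) equals |1 - 2p|^n (constants are illustrative).
p = 0.3                      # P(X_{i+1} != X_i)
lam = 1.0 - 2.0 * p          # second eigenvalue of the transition matrix
n = 5

def step_n(i, j, n):
    # n-step transition probability P(X_n = j | X_0 = i), symmetric chain
    same = 0.5 + 0.5 * lam**n
    return same if i == j else 1.0 - same

# stationary law: P(X_0 = i) = 1/2; check (1.1) for A = {X_0=i}, B = {X_n=j}
worst = 0.0
for i in (0, 1):
    for j in (0, 1):
        pa, pb = 0.5, 0.5
        pab = pa * step_n(i, j, n)
        worst = max(worst, abs(pab - pa * pb) / (pa * pb))
print(abs(worst - abs(lam) ** n) < 1e-12)
```

Any ψ_n ≥ |1 − 2p|^n that decreases to zero then witnesses (1.1) for these events.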
We denote the marginal distribution function (d.f.) of X_i by F(x), x ∈ R^p, the p(≥1)-dimensional Euclidean space. Consider then a functional

    θ(F) = ∫···∫_{R^{pm}} g(x_1, …, x_m) dF(x_1)···dF(x_m),              (1.2)

_____________
¹Work sponsored by the Aerospace Research Laboratories, Air Force Systems Command, U. S. Air Force, Contract F33615-71-C-1927. Reproduction in whole or in part permitted for any purpose of the U. S. Government.
defined over 𝔉 = {F: |θ(F)| < ∞}, where g(x_1, …, x_m) is symmetric in its m(≥1) arguments. We consider here the following two estimators of θ(F). For a sample {X_1, …, X_n}, the empirical d.f. is defined as

    F_n(x) = n^{−1} Σ_{i=1}^{n} c(x − X_i), x ∈ R^p,                     (1.3)

where c(u) is equal to one only when all the p components of u are non-negative; otherwise, c(u) = 0. Then, in the same fashion as in von Mises (1947), a differentiable statistical functional θ(F_n) is defined as

    θ(F_n) = ∫···∫_{R^{pm}} g(x_1, …, x_m) dF_n(x_1)···dF_n(x_m)
           = n^{−m} Σ_{i_1=1}^{n} ··· Σ_{i_m=1}^{n} g(X_{i_1}, …, X_{i_m}), n ≥ 1,    (1.4)

which is the corresponding functional of the empirical d.f. Also, as in Hoeffding (1948), we define a U-statistic

    U_n = \binom{n}{m}^{−1} Σ*_n g(X_{i_1}, …, X_{i_m}), n ≥ m,          (1.5)

where the summation Σ*_n extends over all possible 1 ≤ i_1 < ··· < i_m ≤ n.
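For intuition, both estimators can be evaluated by brute force when m is small. The following Python sketch (not from the paper; the kernel g(x_1, x_2) = x_1x_2, for which θ(F) = (EX_1)², and the sample size are arbitrary illustrative choices) computes (1.4) and (1.5) and shows that they nearly coincide, as (2.9) below formalizes:

```python
import itertools
import random

def g(x1, x2):
    # illustrative symmetric kernel of degree m = 2; theta(F) = (E X_1)^2
    return x1 * x2

def von_mises(sample, kernel, m=2):
    # theta(F_n) of (1.4): average of g over ALL m-tuples (with repetition)
    n = len(sample)
    return sum(kernel(*t) for t in itertools.product(sample, repeat=m)) / n**m

def u_statistic(sample, kernel, m=2):
    # U_n of (1.5): average of g over the (n choose m) increasing index sets
    combos = list(itertools.combinations(sample, m))
    return sum(kernel(*t) for t in combos) / len(combos)

random.seed(1)
x = [random.gauss(1.0, 1.0) for _ in range(200)]
# the two estimates differ only through the O(1/n) diagonal terms of (1.4)
print(abs(von_mises(x, g) - u_statistic(x, g)) < 0.05)
```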
Under suitable moment conditions on g and on {ψ_n}, to be stated in section 2, the following three problems are studied here: (i) asymptotic normality of n^{1/2}[θ(F_n) − θ(F)] and n^{1/2}[U_n − θ(F)], (ii) the law of the iterated logarithm for θ(F_n) and U_n, and (iii) weak convergence of continuous sample path versions of the processes {n^{−1/2}[θ(F_k) − θ(F)], k ≥ 1} and {n^{−1/2}[U_k − θ(F)], k ≥ 1} to processes of Brownian motion. It may be noted that (i) is a special case of (iii), and is established under less stringent conditions.

The main results along with the preliminary notions are presented in section 2. Certain useful lemmas are considered in section 3, and with the aid of these, the proofs of the main results are outlined in section 4. The last section deals with a few applications.
We may note that for m = 1, θ(F_n) = U_n = n^{−1} Σ_{i=1}^{n} g(X_i), and the corresponding results have already been studied by Ibragimov (1962), Billingsley (1968), Reznik (1968), and Philipp (1969a,b,c), among others. Hence, in the sequel, we shall exclusively consider the case of m ≥ 2. We may also remark that the above mentioned authors have considered the general φ-mixing processes [where the right hand side of (1.1) is φ_n P(A), and φ_n ↓ 0 as n → ∞], which contain *-mixing processes as special cases. The simple proof to be considered in the current paper does not go through for a general φ-mixing process. Also, the reverse martingale property of U_n [cf. Berk (1966)] or related properties for θ(F_n) do not hold for φ-mixing (or *-mixing) processes, so that an alternative approach of Miller and Sen (1972), studied for independent processes, does not seem to be readily adaptable. An altogether different and presumably more involved proof seems to be needed for a general φ-mixing process.
2. STATEMENT OF THE RESULTS

For every c: 0 ≤ c ≤ m, we let

    g_c(x_1, …, x_c) = ∫···∫_{R^{p(m−c)}} g(x_1, …, x_m) dF(x_{c+1})···dF(x_m),    (2.1)

so that g_0 = θ(F) and g_m = g. Also, let

    ζ_{1,h} = E[g_1(X_1)g_1(X_{1+h})] − θ²(F), h ≥ 0;                    (2.2)

    σ²(F) = ζ_{1,0} + 2 Σ_{h=1}^{∞} ζ_{1,h}.                             (2.3)

Then, we assume that (i)

    σ² = σ²(F) > 0,                                                      (2.4)

and (ii) for some r(≥2),

    ν_r = ∫···∫_{R^{pm}} |g(x_1, …, x_m)|^r dF(x_1)···dF(x_m) < ∞.      (2.5)

Finally, we define for every non-negative d,

    A_d(ψ) = Σ_{k=0}^{∞} (k+1)^{2d} ψ_k.                                 (2.6)

Note that

    A_d(ψ) < ∞ implies A_{d'}(ψ) < ∞ for all 0 ≤ d' ≤ d.                (2.7)
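The condition A_d(ψ) < ∞ is mild whenever ψ_n decays geometrically. A small Python check (illustrative; the choice ψ_k = ρ^k with ρ = 0.5 is not from the paper) sums (2.6) numerically; for d = 0 the series is geometric with sum 1/(1 − ρ):

```python
def A(d, psi, terms=10_000):
    # partial sum of (2.6): A_d(psi) = sum_{k>=0} (k+1)^{2d} psi_k
    return sum((k + 1) ** (2 * d) * psi(k) for k in range(terms))

rho = 0.5
psi = lambda k: rho**k           # illustrative geometric mixing rate
# d = 0: geometric series, A_0(psi) = 1/(1 - rho) = 2
print(abs(A(0, psi) - 2.0) < 1e-9)
# larger d still converges, only with a larger constant
print(A(2, psi) < 301.0)
```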
Then, we have the following two theorems.

Theorem 1. If A_{m/2}(ψ) < ∞, and (2.4) and (2.5) (with r = 2) hold, then

    lim_{n→∞} P{n^{1/2}[θ(F_n) − θ(F)] ≤ xmσ} = (2π)^{−1/2} ∫_{−∞}^{x} e^{−t²/2} dt,    (2.8)

for all x: −∞ < x < ∞, and

    n^{1/2}|θ(F_n) − U_n| → 0, in probability, as n → ∞.                 (2.9)

Hence, (2.8) also holds for θ(F_n) being replaced by U_n.
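Theorem 1 can be sanity-checked by simulation for a process whose mixing coefficients vanish after finitely many lags: a 1-dependent moving average has ψ_n = 0 for n ≥ 2, so A_d(ψ) < ∞ for every d. The Python sketch below (illustrative choices throughout: the kernel g(x_1, x_2) = x_1x_2 with θ(F) = (EX_1)², Gaussian innovations, and arbitrary sample sizes; none of this is from the paper) checks that U_n is centered at θ(F):

```python
import random
import statistics

def u_stat(xs):
    # U_n of (1.5) for the illustrative kernel g(x1, x2) = x1 * x2,
    # for which theta(F) = (E X_1)^2; computed via the pairwise-sum identity
    n = len(xs)
    s, q = sum(xs), sum(x * x for x in xs)
    return (s * s - q) / (n * (n - 1))

random.seed(2)
n, reps, theta = 100, 300, 1.0
vals = []
for _ in range(reps):
    # stationary 1-dependent sequence: X_i = 1 + 0.5*(eps_i + eps_{i+1}),
    # so sigma-fields more than one lag apart are independent
    eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
    xs = [1.0 + 0.5 * (eps[i] + eps[i + 1]) for i in range(n)]
    vals.append(u_stat(xs))

print(abs(statistics.mean(vals) - theta) < 0.1)  # centered at theta(F)
```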
Theorem 2. If, for m* = max(2, m/2), A_{m*}(ψ) < ∞, and (2.4) and (2.5) (with r = 4) hold, then

    P{lim sup_{n→∞} n^{1/2}[θ(F_n) − θ(F)]/m[2σ² log log n]^{1/2} = 1} = 1,     (2.10)

    P{lim inf_{n→∞} n^{1/2}[θ(F_n) − θ(F)]/m[2σ² log log n]^{1/2} = −1} = 1,    (2.11)

    P{lim sup_{n→∞} n^{1/2}|θ(F_n) − U_n|/m[2σ² log log n]^{1/2} = 0} = 1,      (2.12)

and hence, (2.10) and (2.11) also hold for U_n.
Consider now the space C[0,1] of all continuous real valued functions X(t), 0 ≤ t ≤ 1, and associate with it the uniform topology

    ρ(X, Y) = sup_{0≤t≤1} |X(t) − Y(t)|,                                 (2.13)

where both X and Y belong to C[0,1]. For every n ≥ 1, define then θ(F_0) = 0, and

    Y_n(t) = {[nt]θ(F_{[nt]}) + (nt − [nt])[([nt] + 1)θ(F_{[nt]+1}) − [nt]θ(F_{[nt]})]
              − ntθ(F)}/[mσn^{1/2}], t ∈ I = [0,1],                      (2.14)

where [s] denotes the largest integer contained in s. Similarly, replacing θ(F_k) by U_k for k ≥ m and by θ(F) for k ≤ m − 1, we define Y*_n(t) as in (2.14); thus, Y*_n(t) = 0 for 0 ≤ t < (m−1)/n. Also, let

    Y_n = {Y_n(t), t ∈ I}, Y*_n = {Y*_n(t), t ∈ I} and W = {W(t), t ∈ I},    (2.15)

where W is a standard Brownian motion. Then, we have the following.

Theorem 3. If, for m* = max(2, m/2), A_{m*}(ψ) < ∞, and (2.4) and (2.5) (with r = 4) hold, then both Y_n and Y*_n converge weakly in the uniform topology on C[0,1] to W, and

    ρ(Y_n, Y*_n) → 0, in probability, as n → ∞.                          (2.16)

In fact, (2.16) holds even if (2.5) holds for r = 2 and A_{m/2}(ψ) < ∞.
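The random functions in (2.14) are the usual piecewise-linear interpolations of normalized partial sums, rescaled to the unit interval. As a sketch of the construction (not part of the paper; the Gaussian steps below merely stand in for the increments of k[θ(F_k) − θ(F)]/(mσ)):

```python
import random

def y_process(s, t):
    # piecewise-linear interpolation of the partial sums s[0..n], scaled by
    # n^{-1/2}, mirroring (2.14): s[k] plays the role of
    # k[theta(F_k) - theta(F)]/(m*sigma), with time rescaled to [0, 1]
    n = len(s) - 1          # s[0] = 0, s[1], ..., s[n]
    u = n * t
    k = int(u)              # [nt]
    if k >= n:
        return s[n] / n**0.5
    return (s[k] + (u - k) * (s[k + 1] - s[k])) / n**0.5

random.seed(3)
n = 400
s = [0.0]
for _ in range(n):
    s.append(s[-1] + random.gauss(0.0, 1.0))   # illustrative partial sums

# at the grid points t = k/n the process equals s[k]/sqrt(n)
print(abs(y_process(s, 0.5) - s[n // 2] / n**0.5) < 1e-12)
```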
The proofs of the theorems are postponed to section 4.
3. CERTAIN USEFUL LEMMAS

Let {X_i, −∞ < i < ∞} be a stationary *-mixing process, and for each j (= 1, 2, …), let Z_{ji} = h_j(X_i), −∞ < i < ∞, be zero-one valued random variables, where h_1(u), h_2(u), … are not identical, and

    P{Z_{ji} = 1} = 1 − P{Z_{ji} = 0} = p_j, j ≥ 1.                      (3.1)
Lemma 3.1. If for some k ≥ 1, A_{k/2}(ψ) < ∞, then

    |E{Π_{j=1}^{2k} [Σ_{i=1}^{n} (Z_{ji} − p_j)]}| ≤ K_ψ n^k Π_{j=1}^{2k} p_j,    (3.2)

where K_ψ(< ∞) depends only on {ψ_n}.

Proof. We sketch the proof only for k = 1 and 2; for k ≥ 3, the same proof (but, evidently, requiring more tedious steps) holds. For k = 1, we have

    |E{[Σ_{i=1}^{n}(Z_{1i} − p_1)][Σ_{j=1}^{n}(Z_{2j} − p_2)]}| ≤ Σ_{i=1}^{n} Σ_{j=1}^{n} |E(Z_{1i} − p_1)(Z_{2j} − p_2)|.    (3.3)

Now, by Lemma 1 of Philipp (1969c), under (1.1) and (3.1),

    |E(Z_{1i} − p_1)(Z_{2j} − p_2)| ≤ ψ_{|j−i|} E|Z_{1i} − p_1| E|Z_{2j} − p_2| ≤ 4p_1p_2 ψ_{|j−i|}.    (3.4)

Hence, (3.3) is bounded above by

    4p_1p_2 Σ_{i=1}^{n} Σ_{j=1}^{n} ψ_{|j−i|} ≤ 8np_1p_2 A_0(ψ),         (3.5)

and therefore, the proof follows by using (2.7). Actually, for k = 1, we may replace the condition A_{1/2}(ψ) < ∞ by A_0(ψ) < ∞. For k = 2, we have

    |E{Π_{j=1}^{4} [Σ_{i=1}^{n}(Z_{ji} − p_j)]}| ≤ Σ* Σ_{1≤i≤j≤k≤ℓ≤n} |E(Z_{αi} − p_α)(Z_{βj} − p_β)(Z_{γk} − p_γ)(Z_{δℓ} − p_δ)|,    (3.6)
where (α, β, γ, δ) is a permutation of (1, 2, 3, 4), and the summation Σ* extends over all 4! permutations of this type. For simplicity, consider the particular permutation α = 1, β = 2, γ = 3 and δ = 4. Then, we have

    Σ_{1≤i≤j≤k≤ℓ≤n} |E(Z_{1i} − p_1)(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ (Σ_n^{(1)} + Σ_n^{(2)} + Σ_n^{(3)}) |E(Z_{1i} − p_1)(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|,    (3.7)

where the summations Σ_n^{(1)}, Σ_n^{(2)} and Σ_n^{(3)} extend respectively over all 1 ≤ i ≤ j ≤ k ≤ ℓ ≤ n for which j − i = max(j−i, k−j, ℓ−k), k − j = max(j−i, k−j, ℓ−k) and ℓ − k = max(j−i, k−j, ℓ−k).
Again, by Lemma 1 of Philipp (1969c),

    Σ_n^{(1)} |E(Z_{1i} − p_1)(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ Σ_n^{(1)} ψ_{j−i} E|Z_{1i} − p_1| E|(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ 2p_1 Σ_n^{(1)} ψ_{j−i} E|(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|,    (3.8)

as E|Z_{ji} − p_j| = 2p_j(1 − p_j), j ≥ 1. Also, by a few straightforward steps,

    E|(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ E{|(Z_{2j} − p_2)(Z_{3k} − p_3)| E[|Z_{4ℓ} − p_4| | M_{−∞}^{k}]}
    ≤ E{|(Z_{2j} − p_2)(Z_{3k} − p_3)| [(1 − p_4)p_4(1 + ψ_{ℓ−k}) + p_4(1 − p_4)(1 + ψ_{ℓ−k})]}
    ≤ 2p_4(1 + ψ_{ℓ−k}) E|(Z_{2j} − p_2)(Z_{3k} − p_3)|
    ≤ 8p_2p_3p_4(1 + ψ_{k−j})(1 + ψ_{ℓ−k}).                              (3.9)

Hence, by (3.9), (3.8) is bounded above by
    16p_1p_2p_3p_4 Σ_n^{(1)} ψ_{j−i}(1 + ψ_{k−j})(1 + ψ_{ℓ−k})
    ≤ 16p_1p_2p_3p_4(1 + ψ_0)² Σ_n^{(1)} ψ_{j−i}
    ≤ 16p_1p_2p_3p_4(1 + ψ_0)² n Σ_{k≥0} (k+1)² ψ_k
    ≤ 16p_1p_2p_3p_4 n(1 + ψ_0)² A_1(ψ),                                 (3.10)

where, by (2.7), A_0(ψ) < ∞ whenever A_1(ψ) < ∞. Similarly,

    Σ_n^{(3)} |E(Z_{1i} − p_1)(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)| ≤ 16p_1p_2p_3p_4 n(1 + ψ_0)² A_1(ψ).    (3.11)
Finally, by Lemma 1 of Philipp (1969c), and a few steps similar to those in (3.9) and (3.10), we have

    Σ_n^{(2)} |E(Z_{1i} − p_1)(Z_{2j} − p_2)(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ Σ_n^{(2)} |[E(Z_{1i} − p_1)(Z_{2j} − p_2)][E(Z_{3k} − p_3)(Z_{4ℓ} − p_4)]|
      + Σ_n^{(2)} ψ_{k−j} E|(Z_{1i} − p_1)(Z_{2j} − p_2)| E|(Z_{3k} − p_3)(Z_{4ℓ} − p_4)|
    ≤ Σ_n^{(2)} ψ_{j−i}ψ_{ℓ−k} E|Z_{1i} − p_1| E|Z_{2j} − p_2| E|Z_{3k} − p_3| E|Z_{4ℓ} − p_4|
      + Σ_n^{(2)} ψ_{k−j}(1 + ψ_{j−i})(1 + ψ_{ℓ−k}) E|Z_{1i} − p_1| p_2 E|Z_{3k} − p_3| p_4
    ≤ 16p_1p_2p_3p_4 Σ_n^{(2)} ψ_{j−i}ψ_{ℓ−k} + 4p_1p_2p_3p_4(1 + ψ_0)² Σ_n^{(2)} ψ_{k−j}
    ≤ 16p_1p_2p_3p_4 n²(Σ_{k≥0} ψ_k)² + 4p_1p_2p_3p_4(1 + ψ_0)² n Σ_{k≥0} (k+1)² ψ_k
    ≤ 4np_1p_2p_3p_4 [4n{A_0(ψ)}² + (1 + ψ_0)² A_1(ψ)].                  (3.12)

Thus, by (2.7), (3.10), (3.11) and (3.12), whenever A_1(ψ) < ∞, (3.7) is bounded above by

    K'_ψ n² p_1p_2p_3p_4.                                                (3.13)
Since (3.13) does not depend on the order of the subscripts 1, 2, 3, 4 of the p_j, repeating the steps for each permutation (α, β, γ, δ) of (1, 2, 3, 4) and choosing K_ψ = 24K'_ψ, it follows that (3.6) is bounded above by K_ψ n² p_1p_2p_3p_4, which completes the proof for k = 2.

Lemma 3.2. Let s_i(≥ 0), i = 1, …, r(≥ 1) be such that Σ_{i=1}^{r} s_i = 2k, k ≥ 1. Then A_{k/2}(ψ) < ∞ implies that

    |E{Π_{i=1}^{r} [Σ_{j=1}^{n} (Z_{ij} − p_i)]^{s_i}}| ≤ K''_ψ n^k Π_{i=1}^{r} p_i,    (3.14)

where K''_ψ(< ∞) depends only on {ψ_n}. The proof is similar to that of Lemma 3.1, and hence, is omitted.
Let us now define for every c: 1 ≤ c ≤ m,

    V_n^{(c)} = ∫···∫_{R^{cp}} g_c(x_1, …, x_c) Π_{j=1}^{c} d[F_n(x_j) − F(x_j)].    (3.15)

Then, upon writing dF_n = dF + d[F_n − F], we have from (1.4) and (3.15) that

    θ(F_n) − θ(F) = Σ_{c=1}^{m} \binom{m}{c} V_n^{(c)}.                  (3.16)

Note that, by definition,

    V_n^{(1)} = n^{−1} Σ_{i=1}^{n} [g_1(X_i) − θ(F)].                    (3.17)
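For m = 2 and the illustrative kernel g(x_1, x_2) = x_1x_2 with known marginal mean μ (choices not from the paper), the terms of (3.16)-(3.17) have closed forms: g_1(x) = μx, θ(F) = μ², V_n^{(1)} = μ(x̄ − μ) and V_n^{(2)} = (x̄ − μ)², so that θ(F_n) − θ(F) = 2V_n^{(1)} + V_n^{(2)} exactly. A Python check of this identity:

```python
import random

random.seed(4)
mu = 1.0                          # known marginal mean (illustrative)
x = [random.gauss(mu, 1.0) for _ in range(50)]
xbar = sum(x) / len(x)

theta_fn = xbar**2                # theta(F_n) of (1.4) for g(x1, x2) = x1*x2
theta_f = mu**2                   # theta(F)
v1 = mu * (xbar - mu)             # V_n^{(1)} = n^{-1} sum [g_1(X_i) - theta(F)]
v2 = (xbar - mu) ** 2             # V_n^{(2)}, the quadratic remainder term
# (3.16) with m = 2: theta(F_n) - theta(F) = binom(2,1) V^{(1)} + binom(2,2) V^{(2)}
print(abs((theta_fn - theta_f) - (2 * v1 + v2)) < 1e-12)
```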
Lemma 3.3. If (2.5) holds for r = 2, then for every c: 1 ≤ c ≤ m, A_{c/2}(ψ) < ∞ implies that

    E[V_n^{(c)}]² ≤ K_ψ ν_2 n^{−c},                                      (3.18)

where K_ψ depends only on {ψ_n}.
Proof. By (3.15) and the Fubini theorem,

    E[V_n^{(c)}]² = ∫···∫_{R^{2cp}} g_c(x_1, …, x_c) g_c(x_{c+1}, …, x_{2c}) E{Π_{j=1}^{2c} d[F_n(x_j) − F(x_j)]}
                  = n^{−2c} ∫···∫_{R^{2cp}} g_c(x_1, …, x_c) g_c(x_{c+1}, …, x_{2c}) E{Π_{j=1}^{2c} (Σ_{i=1}^{n} d[c(x_j − X_i) − F(x_j)])}.    (3.19)

Thus, if we let Z_{ji} = d[c(x_j − X_i)], i = 1, 2, …, n, j = 1, 2, …, 2c, so that

    P{Z_{ji} = 1} = 1 − P{Z_{ji} = 0} = dF(x_j), j = 1, …, 2c,           (3.20)

we obtain from Lemmas 3.1 and 3.2 that

    |E{Π_{j=1}^{2c} (Σ_{i=1}^{n} d[c(x_j − X_i) − F(x_j)])}| ≤ K_ψ n^c dF(x_1)···dF(x_{2c}),    (3.21)

when x_1, …, x_{2c} are all distinct; otherwise, the bound is n^c K_ψ dF(x_1)···dF(x_r), where x_1, …, x_r, r ≥ 1, are the distinct set of values of x_1, …, x_{2c}. Hence, by (3.19) and (3.21),

    E[V_n^{(c)}]² ≤ K_ψ n^{−c} ∫···∫_{R^{2cp}} |g_c(x_1, …, x_c) g_c(x_{c+1}, …, x_{2c})| Π_{j=1}^{2c} dF(x_j)
                  = K_ψ n^{−c} [∫···∫_{R^{cp}} |g_c(x_1, …, x_c)| dF(x_1)···dF(x_c)]² ≤ K_ψ ν_2 n^{−c}.    (3.22)
Lemma 3.4. If (2.5) holds for r = 4 and A_1(ψ) < ∞, then

    E|V_n^{(2)}|⁴ ≤ K_ψ ν_4 n^{−4}, K_ψ < ∞,                             (3.23)
where K_ψ depends only on {ψ_n}.
The proof is similar to that of Lemma 3.3, and hence, is omitted.
We may rewrite (1.5) as

    U_n − θ(F) = Σ_{h=1}^{m} \binom{m}{h} U_n^{(h)},                     (3.24)

where

    U_n^{(h)} = n^{−[h]} Σ_{P_{n,h}} ∫···∫_{R^{hp}} g_h(x_1, …, x_h) Π_{j=1}^{h} d[c(x_j − X_{i_j}) − F(x_j)], h = 1, …, m,    (3.25)

n^{[h]} = n(n−1)···(n−h+1), and the summation Σ_{P_{n,h}} extends over all n^{[h]} sets (i_1, …, i_h) of distinct indices from (1, …, n). Note that V_n^{(1)} = U_n^{(1)}, so that by (3.16) and (3.24),

    θ(F_n) − U_n = Σ_{h=2}^{m} \binom{m}{h} [V_n^{(h)} − U_n^{(h)}].     (3.26)

Now, by the same technique as in Lemma 3.3, for every h > 1, A_{h/2}(ψ) < ∞ implies that

    E[U_n^{(h)}]² ≤ K_ψ ν_2 n^{−h}.                                      (3.27)
Also, writing Q_n = n² V_n^{(2)} − n^{[2]} U_n^{(2)} and rewriting it as

    Q_n = Σ_{i=1}^{n} ∫∫_{R^{2p}} g_2(x_1, x_2) d[c(x_1 − X_i) − F(x_1)] d[c(x_2 − X_i) − F(x_2)],    (3.28)

we obtain by the same technique as in Lemma 3.3 that A_1(ψ) < ∞ implies that

    E[Q_n]² ≤ K_ψ ν_2 n.                                                 (3.29)
Thus, from (3.18), (3.26), (3.27), (3.28), (3.29) and the c_r-inequality, we obtain that if ν_2 < ∞ and A_{m/2}(ψ) < ∞, then

    E[θ(F_n) − U_n]² ≤ K''_ψ ν_2 n^{−3}, K''_ψ < ∞.                      (3.30)

These results are used in the next section, in the proofs of the theorems.
4. OUTLINE OF THE PROOFS OF THE THEOREMS

Let us first consider Theorem 1. By virtue of (3.16) and Lemma 3.3, whenever A_{m/2}(ψ) < ∞,

    nE[θ(F_n) − θ(F) − mV_n^{(1)}]² = nE{Σ_{h=2}^{m} \binom{m}{h} V_n^{(h)}}²
    ≤ n(m−1) Σ_{h=2}^{m} [\binom{m}{h}]² E[V_n^{(h)}]² ≤ n^{−1} K'_ψ ν_2,    (4.1)

where K'_ψ(< ∞) depends only on {ψ_n}. Thus, by (4.1) and the Chebyshev inequality,

    n^{1/2}|θ(F_n) − θ(F) − mV_n^{(1)}| → 0, in probability, as n → ∞,   (4.2)

which implies that n^{1/2}[θ(F_n) − θ(F)] and mn^{1/2}V_n^{(1)} both have the same limiting distribution, if they have one at all. Now, by (3.17) and the central limit theorem for φ-mixing (and hence, *-mixing) processes [cf. Billingsley (1968, p. 174) and Philipp (1969a)], mn^{1/2}V_n^{(1)} converges in law (whenever ν_2 < ∞) to a normal distribution with zero mean and variance m²σ², where σ² is defined by (2.3) and it is assumed that (2.4) holds. This completes the proof of (2.8). By virtue of (3.30) and the Chebyshev inequality, (2.9) follows directly, and this, in turn, implies that (2.8) also holds for U_n.
Let us now consider Theorem 2. Here ν_4 < ∞ and A_{m*}(ψ) < ∞ imply, by virtue of (3.23), that for every ε > 0,

    P{n^{1/2}|V_n^{(2)}| > ε for at least one n ≥ n_0} ≤ Σ_{n≥n_0} P{n^{1/2}|V_n^{(2)}| > ε}
    ≤ (ν_4 K_ψ/ε⁴) Σ_{n≥n_0} n^{−2},                                     (4.3)

which converges to 0 as n_0 → ∞. Also, by Lemma 3.3, ν_2 < ∞ and A_{m*}(ψ) < ∞ (⟹ A_{m/2}(ψ) < ∞) imply that

    P{n^{1/2}|Σ_{h=3}^{m} \binom{m}{h} V_n^{(h)}| > ε for at least one n ≥ n_0}
    ≤ Σ_{n≥n_0} P{n^{1/2}|Σ_{h=3}^{m} \binom{m}{h} V_n^{(h)}| > ε} ≤ C Σ_{n≥n_0} n^{−2} → 0    (4.4)

as n_0 → ∞. Thus, for every ε > 0,

    P{n^{1/2}|θ(F_n) − θ(F) − mV_n^{(1)}|/(2 log log n)^{1/2} > ε for at least one n ≥ n_0} → 0    (4.5)

as n_0 → ∞. Consequently, it suffices to prove (2.10) and (2.11) for [θ(F_n) − θ(F)] being replaced by mV_n^{(1)}. Since V_n^{(1)} involves an average over a stationary *-mixing (and hence, φ-mixing) sequence of random variables, it follows by Theorem 1 of Reznik (1968) [see also Philipp (1969c)] that under conditions even less stringent than the hypothesized ones, the law of the iterated logarithm holds for {V_n^{(1)}}, i.e., (2.10) and (2.11) hold. Again, by (3.30) and the Bonferroni inequality, for every ε > 0,

    P{n^{1/2}|U_n − θ(F_n)| > ε for at least one n ≥ n_0} ≤ Σ_{n≥n_0} P{n^{1/2}|U_n − θ(F_n)| > ε}
    ≤ C_ψ ε^{−2} Σ_{n≥n_0} n^{−2} → 0 as n_0 → ∞.                        (4.6)

Thus, n^{1/2}|U_n − θ(F_n)| → 0 a.s. as n → ∞, which implies (2.12), and that in turn implies that (2.10) and (2.11) hold for {U_n}.
We now proceed to the proof of Theorem 3. Let us define on C[0,1] a sequence of processes {Y_n^0 = [Y_n^0(t), t ∈ I], n ≥ 1} by

    Y_n^0(t) = {[nt]V_{[nt]}^{(1)} + (nt − [nt])[([nt] + 1)V_{[nt]+1}^{(1)} − [nt]V_{[nt]}^{(1)}]}/(σn^{1/2}), t ∈ I; V_0^{(1)} = 0.    (4.7)

Since {g_1(X_i), −∞ < i < ∞} is stationary *-mixing (and hence, φ-mixing), and by (2.4) and (2.5), 0 < σ² < ∞, it follows by Theorem 20.1 of Billingsley (1968, p. 174) that under A_0(ψ) < ∞,

    Y_n^0 → W, weakly, as n → ∞.                                         (4.8)

We complete the proof of the theorem by showing that, as n → ∞,

    ρ(Y_n, Y_n^0) → 0, in probability, and ρ(Y*_n, Y_n^0) → 0, in probability.    (4.9)

Now, by (2.14), (3.16) and (4.7),

    ρ(Y_n, Y_n^0) ≤ max_{1≤k≤n} \binom{m}{2} k|V_k^{(2)}|/(mσn^{1/2}) + max_{1≤k≤n} k|Σ_{h=3}^{m} \binom{m}{h} V_k^{(h)}|/(mσn^{1/2}).    (4.10)
By Lemma 3.4 and the Bonferroni inequality, under (2.5) for r = 4, for every ε > 0,

    P{max_{1≤k≤n} \binom{m}{2} k|V_k^{(2)}| > εmσn^{1/2}}
    ≤ Σ_{k=1}^{n} P{\binom{m}{2} k|V_k^{(2)}| > εmσn^{1/2}}
    ≤ Σ_{k=1}^{n} [\binom{m}{2}]⁴ k⁴ E|V_k^{(2)}|⁴/(ε⁴m⁴σ⁴n²) ≤ K*_ψ ν_4 (εσ)^{−4} n^{−1} → 0 (K*_ψ < ∞)    (4.11)

as n → ∞. Similarly, under (2.5) for r = 2, by Lemma 3.3 and the c_r-inequality, for every ε > 0,

    P{max_{1≤k≤n} k|Σ_{h=3}^{m} \binom{m}{h} V_k^{(h)}| > εmσn^{1/2}} → 0 as n → ∞.    (4.12)

Thus, (4.10) converges in probability to zero as n → ∞. Hence, by (4.8),

    Y_n → W, weakly, as n → ∞.                                           (4.13)

Since W is a standard Brownian motion and m/n → 0 as n → ∞,

    sup_{0≤t≤m/n} |Y_n(t)| → 0, in probability, as n → ∞.                (4.14)

Hence, to show that ρ(Y*_n, Y_n^0) → 0, in probability, we use the triangular inequality

    ρ(Y*_n, Y_n^0) ≤ ρ(Y*_n, Y_n) + ρ(Y_n, Y_n^0),                       (4.15)

and for the first term on the right hand side of (4.15), we have by (4.14) that we are only to show that

    max_{m≤k≤n} k|U_k − θ(F_k)|/(mσn^{1/2}) → 0, in probability, as n → ∞.    (4.16)

By (3.30) and the Bonferroni inequality, for every ε > 0,
    P{max_{m≤k≤n} k|U_k − θ(F_k)| > εmσn^{1/2}} ≤ Σ_{k=m}^{n} P{k|U_k − θ(F_k)| > εmσn^{1/2}}
    ≤ Σ_{k=m}^{n} {C_ψ/(ε²m²σ²nk)} = (C_ψ/m²ε²σ²) n^{−1} Σ_{k=m}^{n} k^{−1}
    ≤ (C_ψ/m²ε²σ²)(n^{−1} log n) → 0 as n → ∞.                           (4.17)

Thus, ρ(Y*_n, Y_n) → 0, in probability, as n → ∞, and hence, by (4.15), ρ(Y*_n, Y_n^0) → 0, in probability, as n → ∞, and thereby (4.8) implies that

    Y*_n → W, weakly, as n → ∞,                                          (4.18)

which completes the proof. Note that in (4.17), we have made use only of (2.5) with r = 2 and A_{m/2}(ψ) < ∞, which are less restrictive than the hypothesized conditions.
5. A FEW APPLICATIONS

For illustration, we consider the following functional. Let X_i = (X_i^{(1)}, X_i^{(2)}) have the d.f. F(x,y), and define

    θ(F) = 12 ∫_{−∞}^{∞} ∫_{−∞}^{∞} [F(x,∞) − ½][F(∞,y) − ½] dF(x,y),    (5.1)

which is known as the grade correlation of X^{(1)} and X^{(2)}. We have then

    θ(F_n) = 12 ∫_{−∞}^{∞} ∫_{−∞}^{∞} [F_n(x,∞) − ½][F_n(∞,y) − ½] dF_n(x,y)
           = 12n^{−3} Σ_{i=1}^{n} (R_i − n/2)(S_i − n/2)
           = (1 − n^{−2})R_g + 3n^{−2},                                   (5.2)

(R_i and S_i being respectively the ranks of X_i^{(1)} among the X^{(1)}'s and of X_i^{(2)} among the X^{(2)}'s), where R_g is the classical Spearman rank correlation, i.e.,

    R_g = [12/n(n² − 1)] Σ_{i=1}^{n} (R_i − (n+1)/2)(S_i − (n+1)/2).      (5.3)

Thus, for large n, both θ(F_n) and R_g have the same properties.
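The identity in (5.2) between θ(F_n) and the Spearman coefficient R_g can be verified numerically. A Python sketch (illustrative; it assumes no ties, so that the ranks R_i, S_i form a permutation of 1, …, n, and the simulated bivariate sample is an arbitrary choice):

```python
import random

def ranks(v):
    # 1-based rank of each entry within v (assumes no ties)
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

random.seed(5)
n = 30
x = [random.random() for _ in range(n)]
y = [0.7 * xi + 0.3 * random.random() for xi in x]   # correlated pair
R, S = ranks(x), ranks(y)

# middle line of (5.2): theta(F_n) = 12 n^{-3} sum (R_i - n/2)(S_i - n/2)
theta_fn = 12.0 * sum((r - n / 2) * (s - n / 2) for r, s in zip(R, S)) / n**3
# classical Spearman rank correlation R_g of (5.3)
r_g = 12.0 * sum((r - (n + 1) / 2) * (s - (n + 1) / 2)
                 for r, s in zip(R, S)) / (n * (n * n - 1))
# last line of (5.2): theta(F_n) = (1 - n^{-2}) R_g + 3 n^{-2}
print(abs(theta_fn - ((1 - n**-2) * r_g + 3 * n**-2)) < 1e-12)
```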
Now, as in Hoeffding (1948), we have

    R_g = [3/n^{[3]}] Σ_{α≠β≠γ=1}^{n} g(X_α, X_β, X_γ) + O(n^{−1}),       (5.4)

where

    g(x_1, x_2, x_3) = (1/6) Σ_{(α,β,γ)} s(x_α^{(1)} − x_β^{(1)}) s(x_α^{(2)} − x_γ^{(2)}),    (5.5)

the summation extending over the 3! permutations (α, β, γ) of (1, 2, 3), and s(u) = 1, 0 or −1 according as u is >, = or < 0. Since m = 3 and g is a bounded kernel, (2.5) holds for every r ≥ 0. Hence, under (2.4) and the stated conditions on {ψ_n}, all the three theorems of section 2 hold. Other examples are easy to construct.
Let us now consider the case of random sample sizes. For every r, let N_r be a positive integer valued random variable such that there is a sequence {n_r} of positive numbers for which

    N_r/n_r → 1, in probability, as r → ∞.                               (5.6)

Then, using Lemmas 3.3 and 3.4, it can be shown that under (5.6), (4.2) readily extends to n being replaced by N_r. Also, (4.6) insures that N_r^{1/2}|U_{N_r} − θ(F_{N_r})| → 0 a.s. as r → ∞. Further, (4.11), (4.12) and (4.17) can be easily adjusted to random sample sizes. Consequently, using Theorem 20.3 of Billingsley (1968, p. 180) for {V_{N_r}^{(1)}}, we conclude that both Theorems 1 and 3 remain valid for random sample sizes satisfying (5.6).
The theory developed here is of interest in the developing area of asymptotic sequential inference procedures based on {θ(F_n)} or {U_n}.
REFERENCES

[1] BERK, R. H. (1966). Limiting behavior of posterior distributions when the model is incorrect. Ann. Math. Statist. 37, 51-58.

[2] BILLINGSLEY, P. (1968). Convergence of Probability Measures. John Wiley, New York.

[3] BLUM, J. R., HANSON, D. L. and KOOPMANS, L. H. (1963). On the strong law of large numbers for a class of stochastic processes. Zeit. Wahrsch. Verw. Geb. 2, 1-11.

[4] HOEFFDING, W. (1948). A class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19, 293-325.

[5] IBRAGIMOV, I. A. (1962). Some limit theorems for stationary processes. Teor. Prob. Appl. 7, 349-382.

[6] MILLER, R. G., JR. and SEN, P. K. (1972). Weak convergence of U-statistics and von Mises' differentiable statistical functions. Ann. Math. Statist. 43, in press.

[7] MISES, R. V. (1947). On the asymptotic distribution of differentiable statistical functions. Ann. Math. Statist. 18, 309-348.

[8] PHILIPP, W. (1969a). The central limit theorem for mixing sequences of random variables. Zeit. Wahrsch. Verw. Geb. 12, 155-171.

[9] PHILIPP, W. (1969b). The remainder in the central limit theorem for mixing stochastic processes. Ann. Math. Statist. 40, 601-609.

[10] PHILIPP, W. (1969c). The law of the iterated logarithm for mixing stochastic processes. Ann. Math. Statist. 40, 1985-1991.

[11] REZNIK, M. KH. (1968). The law of the iterated logarithm for some classes of stationary processes. Teor. Prob. Appl. 13, 606-621.