AN ALMOST SURE INVARIANCE PRINCIPLE FOR THE
EXTREMA OF CERTAIN SAMPLE FUNCTIONS
By
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 906
FEBRUARY 1974
AN ALMOST SURE INVARIANCE PRINCIPLE FOR THE EXTREMA OF
CERTAIN SAMPLE FUNCTIONS*
BY PRANAB KUMAR SEN
University of North Carolina, Chapel Hill
ABSTRACT
For a general class of statistics expressible as extrema of certain sample
functions, an almost sure invariance principle, particularly useful in the context of the law of iterated logarithm and the probabilities of moderate deviations, is established, and its applications are stressed.
AMS 1970 Classification Numbers: 60F10, 60F15.
Key Words and Phrases: Almost sure invariance principle, bundle strength of filaments, extrema of sample functions, law of iterated logarithm, multivariate Kolmogorov-Smirnov statistics, probability of moderate deviations.
*Work supported by the Aerospace Research Laboratories, Air Force Systems Command, U.S. Air Force, Contract No. F33615-71-C-1927. Reproduction in whole or in part permitted for any purpose of the U.S. Government.
1. Introduction. Sen et al. (1973) established the asymptotic normality of a general class of extrema of sample functions expressible as

(1.1)  $Z_n = \sup\{L(x, F_n(x)) : x \in A\}$, for some $A \subseteq E^p$,

where $F_n$ is the empirical distribution function (df) of a random sample of size $n$ from a continuous df $F(x)$, $x \in E^p$, the $p(\geq 1)$-dimensional Euclidean space, and $L(x,y)$ satisfies certain regularity conditions. An important member of this class relates to the bundle strength of filaments, where $p = 1$, the basic random variables are non-negative, and $nZ_n = \max_{1 \leq i \leq n}[(n-i+1)X_{n,i}]$, where $X_{n,1} \leq \cdots \leq X_{n,n}$ are the ordered random variables. Here, $F$ is defined on $A = [0, \infty)$ and $L(x,y) = x(1-y)$ for $x \in A$ and $0 \leq y \leq 1$; see Bhattacharyya et al. (1970) for a detailed exposition.
In the context of sequential estimation and testing of the mean (per unit) bundle strength [i.e., $\theta = \sup\{x[1-F(x)] : 0 \leq x < \infty\}$], Sen (1973a,c) has considered certain almost sure (a.s.) representations for $\{Z_n\}$, and has shown that the usual a.s. invariance principle [viz., the embedding of Wiener processes] for sums of independent random variables or martingales [cf. Strassen (1967)] also holds for $\{Z_n\}$, though $Z_n$ is neither a martingale nor a sum of independent random variables. In fact, a little deeper examination reveals that the same proof goes through for general $L(x,y)$ satisfying certain mild regularity conditions; we shall comment on it in Section 5.
The object of the present investigation is to show that, parallel to Theorem 1.4 of Strassen (1967), an a.s. invariance principle (particularly useful in the context of the law of iterated logarithm and the probabilities of moderate deviations) holds for $\{Z_n\}$ under essentially the same conditions as pertaining to its asymptotic normality. The main theorem, along with the preliminary notions, is presented in Section 2. Section 3 deals with certain basic lemmas which are needed for the proof of the main theorem, and the latter is sketched in Section 4. In this context, some results on (multivariate) Kolmogorov-Smirnov statistics considered earlier by Kiefer (1961), Sethuraman (1964) and Sen (1973b) are made use of. The last section deals with certain concluding remarks and a few applications of the main theorem.
2. The main theorem. Let $\{X_i, i \geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) rv's defined on a probability space $(\Omega, \mathcal{A}, P)$, where $X_i$ has a continuous df $F(x)$, $x \in E^p$, for some $p \geq 1$, and our desired $A \subseteq E^p$. We assume that $L(x, F(x))$ assumes a unique maximum $\theta$ at a unique point $x_0 \in A$, so that

(2.1)  $\theta = \sup\{L(x, F(x)) : x \in A\} = L(x_0, F(x_0))$,

and for every $\epsilon > 0$, there exists an $\eta > 0$, such that

(2.2)  $L(x, F(x)) < \theta - \epsilon$ for every $x$: $|x - x_0| > \eta$.

Further, we assume that for some $\delta(>0)$ sufficiently small, there exist four positive constants $C_i$, $k_i(\geq 1)$, $i = 1, 2$, such that

(2.3)  $C_1|x - x_0|^{k_1} \leq \theta - L(x, F(x)) \leq C_2|x - x_0|^{k_2}$,

for all $x$: $|x - x_0| \leq \delta$, where in (2.2)-(2.3), $|u|$ stands for the Euclidean norm of the vector $u(\in E^p)$. Also, we assume that $F$ admits of a continuous (joint) density function $f(x)$ for all $x$: $|x - x_0| < \delta$, and

(2.4)  $0 < p_0 = F(x_0) < 1$,  $0 < f(x_0) < \infty$.

Let then $A^* = \{(x,y) : x \in A \subseteq E^p,\ 0 \leq y \leq 1\}$, and for every $\delta \in [0,1]$, let

(2.5)  $A^*_\delta = \{(x,y) : x \in A,\ \max(0, F(x) - \delta) \leq y \leq \min(1, F(x) + \delta)\}$.
We assume that $L(x,y)$ is defined for every $(x,y) \in A^*$, and that for some $\delta > 0$, $L(x,y)$ possesses, for $(x,y) \in A^*_\delta$, a continuous partial (with respect to $y$) derivative $L_{01}(x,y)$ which satisfies the following conditions:

(2.6)  $|L_{01}(x,y)| \leq g(x)$, $\forall (x,y) \in A^*_\delta$;

(2.7)  $\int_A g^2(x)\,dF(x) < \infty$, $\forall A \subseteq E^p$;

(2.8)  $L_{01}(x_0, F(x_0)) = \xi_0$, where $0 < |\xi_0| < \infty$.

These conditions are slightly less restrictive than the ones in Sen et al. (1973), where some additional conditions were needed to suit the non-stationarity of the basic process.
Let us then define

(2.9)  $\sigma^2 = \xi_0^2 p_0(1 - p_0)$, so that $0 < \sigma^2 < \infty$.

From the results of Sen et al. (1973) it follows that under (2.1)-(2.8),

(2.10)  $n^{1/2}(Z_n - \theta)/\sigma$ is asymptotically normally distributed with zero mean and unit variance.
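As an illustrative aside (an assumption of this note, not part of the paper), the quantities entering (2.9) are easy to evaluate in closed form for the bundle-strength example with a standard exponential df: there $L(x,y) = x(1-y)$, so $L_{01}(x,y) = -x$, the maximizer is $x_0 = 1$, and $\theta = e^{-1}$.

```python
import math

# Bundle-strength example with F = Exp(1):  L(x, y) = x(1 - y).
x0 = 1.0                                  # maximizer of x(1 - F(x)) = x e^{-x}
theta = x0 * math.exp(-x0)                # theta = e^{-1}
p0 = 1.0 - math.exp(-x0)                  # p0 = F(x0), as in (2.4)
xi0 = -x0                                 # xi0 = L_01(x0, F(x0)) = -x0, as in (2.8)
sigma2 = xi0**2 * p0 * (1.0 - p0)         # sigma^2 from (2.9)

# x0 is a critical point: d/dx [x e^{-x}] = (1 - x) e^{-x} vanishes at x = 1.
h = 1e-6
deriv = ((x0 + h) * math.exp(-(x0 + h)) - (x0 - h) * math.exp(-(x0 - h))) / (2 * h)
print(theta, sigma2, deriv)
```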
Let $\psi = \{\psi(t) : 0 \leq t < \infty\}$ be a positive function with a continuous derivative $\psi'(t)$ such that (i) as $t \to \infty$ with $s/t \to 1$, $\psi'(s)/\psi'(t) \to 1$; (ii) for some $h$, $1/2 < h < 3/5$,

(2.11)  $t^{-h}\psi(t)$ is nonincreasing in $t$, but $\psi(t)$ is nondecreasing in $t$;

and (iii) the Kolmogorov-Petrovski-Erdos criterion holds, i.e.,

(2.12)  $\int_n^\infty t^{-1}\psi(t)\exp\{-\tfrac{1}{2}k^2\psi^2(t)\}\,dt < \infty$, $\forall k > 1$ and $n \geq 1$.
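As a quick numerical sanity check (an aside, not part of the paper), the choice $\psi^2(t) = 2(1+\epsilon)\log\log t$ used in Section 5 for the law of iterated logarithm satisfies the monotonicity requirements of (2.11) on any grid of large $t$, and the integrand of (2.12) then decays like a negative power of $\log t$.

```python
import math

eps, h, k = 0.01, 0.55, 1.1

def psi(t):
    # LIL choice from Section 5: psi^2(t) = 2(1 + eps) log log t, for t >= 3
    return math.sqrt(2.0 * (1.0 + eps) * math.log(math.log(t)))

grid = [10.0 * 2.0**j for j in range(40)]           # t from 10 up to ~5.5e12
psis = [psi(t) for t in grid]
scaled = [t**(-h) * p for t, p in zip(grid, psis)]  # t^{-h} psi(t)
integrand = [p / t * math.exp(-0.5 * k**2 * p**2) for t, p in zip(grid, psis)]

nondecreasing = all(a <= b for a, b in zip(psis, psis[1:]))
nonincreasing = all(a >= b for a, b in zip(scaled, scaled[1:]))
print(nondecreasing, nonincreasing, integrand[-1] < integrand[0])
```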
Finally, we let $\nu_k(n)$, $k > 0$, be defined by

(2.13)

and assume that for every $\eta > 0$, there exists an $\epsilon > 0$, such that

(2.14)  $|\nu_1(n)/\nu_k(n) - 1| < \eta$ for every $1 \leq k \leq 1 + \epsilon$, uniformly in $n$.

Let then

(2.15)  $P_n(\psi) = P\{m^{1/2}|Z_m - \theta| > \sigma\psi(m)$ for some $m \geq n\}$,

and, on replacing $|Z_m - \theta|$ by $(Z_m - \theta)$ and $(\theta - Z_m)$, we define the corresponding probabilities by $P_n^+(\psi)$ and $P_n^-(\psi)$, respectively. Then, along the lines of Theorem 1.4 of Strassen (1967), our main theorem may be presented as follows.
Theorem 1. Under (2.1)-(2.8) and (2.11)-(2.14),

(2.16)  $\lim_{n\to\infty}\{[\log P_n(\psi)]/\nu_1(n)\} = -1$,

and, if, in addition, $\lim_{n\to\infty}(\log\log n)/\psi^2(n) = 0$, then

(2.17)  $\lim_{n\to\infty}\{[\log P_n(\psi)]/\psi^2(n)\} = -\tfrac{1}{2}$.

The same results hold for $\{P_n^+(\psi)\}$ and $\{P_n^-(\psi)\}$.

The proof of the theorem is sketched in Section 4; certain other results required in this context are presented in Section 3.
3. Some useful lemmas. We begin with the following lemma, which follows as a corollary to Theorem 1 of Sethuraman (1964).

Lemma 3.1. For every $B \subseteq E^p$ and $\epsilon > 0$, there exists a $\rho = \rho(B, \epsilon)$, such that $0 < \rho(B, \epsilon) \leq \rho(E^p, \epsilon) < 1$, and

(3.1)  $\lim_{n\to\infty} n^{-1}\log P\{\sup_{x \in B}|F_n(x) - F(x)| > \epsilon\} = \log\rho$.
Let us now define

(3.2)  $Z_n^* = n^{1/2}[L(x_0, F_n(x_0)) - \theta]/\sigma$,

where $\sigma$ is defined by (2.9) and $x_0$ by (2.1). Then, we have the following.
Lemma 3.2. Under (2.4)-(2.8) and (2.11)-(2.14),

(3.3)  $\lim_{n\to\infty}[\{\nu_1(n)\}^{-1}\log P\{Z_m^* > \psi(m)$ for some $m \geq n\}] = -1$,

and, in (3.3), $Z_m^*$ may also be replaced by $-Z_m^*$ or $|Z_m^*|$.

Proof. By (2.1), (2.4)-(2.9) and (3.2), we have

(3.4)  $Z_n^* = \gamma_n\{n^{1/2}[F_n(x_0) - F(x_0)]/[p_0(1-p_0)]^{1/2}\} = \gamma_n U_n$, say,

where $\gamma_n = L_{01}(x_0, uF_n(x_0) + (1-u)F(x_0))/L_{01}(x_0, F(x_0))$, $0 < u < 1$, and $U_n$ is the standardized form of a binomial random variable.
Thus, for every $\epsilon > 0$, we have

(3.5)  $P\{Z_m^* \geq (1+\epsilon)\psi(m)$ for some $m \geq n\} \leq P\{U_m \geq (1+\epsilon')\psi(m)$ for some $m \geq n\} + P\{\gamma_m > 1 + \tfrac{1}{2}\epsilon$ for some $m \geq n\}$,

where $1 < (1+\epsilon)/(1+\tfrac{1}{2}\epsilon) = 1+\epsilon' < 1 + \epsilon/2$. By Theorem 1.4 of Strassen (1967), we obtain that as $n \to \infty$,

(3.6)  $P\{U_m \geq (1+\epsilon')\psi(m)$ for some $m \geq n\} \leq (1+\epsilon')(2\pi)^{-1/2}\int_n^\infty \psi'(t)\,t^{-1/2}\exp\{-\tfrac{1}{2}(1+\epsilon')\psi^2(t)\}\,dt = I_n(1+\epsilon')$, say.

Also, by Lemma 3.1 of Sen (1973b), we have

(3.7)  $\lim_{n\to\infty}\{[\log I_n(1+\epsilon')]/\nu_{1+\epsilon'}(n)\} = -1$.
Further, by virtue of the assumed continuity of $L_{01}(x,y)$ (in $A^*_\delta$), for every $\eta > 0$ (sufficiently small), there exists a $\delta > 0$, such that $|L_{01}(x_0, y) - L_{01}(x_0, F(x_0))| < \eta$ whenever $|y - F(x_0)| < \delta$. Since $L_{01}(x_0, F(x_0)) = \xi_0 \neq 0$, $\eta$ can always be so chosen that $\eta/|\xi_0| < \tfrac{1}{2}\epsilon$. Hence, $\gamma_m < 1 + \tfrac{1}{2}\epsilon$ whenever $|F_m(x_0) - F(x_0)| < \delta$. Thus,

(3.8)  $P\{\gamma_m > 1 + \tfrac{1}{2}\epsilon$ for some $m \geq n\} \leq P\{|F_m(x_0) - F(x_0)| > \delta$ for some $m \geq n\} \leq C_\delta[\rho(\delta)]^n$,

where $C_\delta = C[1 - \rho(\delta)]^{-1}$ depends on $\delta$, and the last step follows from Theorem 1 of Hoeffding (1963). Note that by (2.11) and (2.13), $\nu_k(n)$ is bounded by $n^{1/5}$ for large $n$, while $[\rho(\delta)]^n = \exp\{n\log\rho(\delta)\}$ decreases exponentially with $n$. Consequently, for $n$ sufficiently large,

(3.9)  $I_n(1+\epsilon') + C_\delta[\rho(\delta)]^n = I_n(1+\epsilon')\{1 + o(1)\}$,

and hence, by (3.5), (3.6), (3.7), (3.8) and (3.9), we obtain that

(3.10)  $\limsup_n[\{\nu_{1+\epsilon'}(n)\}^{-1}\log P\{Z_m^* \geq (1+\epsilon)\psi(m)$ for some $m \geq n\}] \leq -1$.
Now, we may write $\{\nu_{1+\epsilon}(n)\}^{-1}\log P\{Z_m^* \geq (1+\epsilon)\psi(m)$ for some $m \geq n\} = [\nu_{1+\epsilon'}(n)/\nu_{1+\epsilon}(n)]\cdot[\{\nu_{1+\epsilon'}(n)\}^{-1}\log P\{Z_m^* \geq (1+\epsilon)\psi(m)$ for some $m \geq n\}]$. Also, $\epsilon' < \epsilon/2$. Hence, by letting $\epsilon$ be arbitrarily small, we obtain from (2.14) and (3.10) that for every $\eta > 0$,

(3.11)  $\limsup_n[\{\nu_1(n)\}^{-1}\log P\{Z_m^* > \psi(m)$ for some $m \geq n\}] \leq -1 + \eta$.
On the other hand, by (3.4),

(3.12)  $P\{Z_m^* > \psi(m)$ for some $m \geq n\} \geq P\{U_m \geq (1+\epsilon'')\psi(m)$ for some $m \geq n\} - P\{\gamma_m < 1 - \tfrac{1}{2}\epsilon$ for some $m \geq n\}$,

where $1 < (1 - \tfrac{1}{2}\epsilon)^{-1} = 1 + \epsilon'' < 1 + \tfrac{1}{2}\epsilon(1+\epsilon)$ for every $0 < \epsilon < 1$. Thus, proceeding as in (3.6) through (3.9), it follows that for sufficiently large $n$, the right hand side of (3.12) can be written as

(3.13)  $I_n(1+\epsilon'')\{1 + o(1)\}$.

Hence, by (3.7), (3.12) and (3.13), we obtain that

(3.14)  $\liminf_n[\{\nu_{1+\epsilon''}(n)\}^{-1}\log P\{Z_m^* > \psi(m)$ for some $m \geq n\}] \geq -1$.

Also, $\epsilon'' < \tfrac{1}{2}\epsilon(1+\epsilon)$ can be made small by letting $\epsilon$ be small. So, on using (2.14), we obtain from (3.14) that for every $\eta > 0$,

(3.15)  $\liminf_n[\{\nu_1(n)\}^{-1}\log P\{Z_m^* > \psi(m)$ for some $m \geq n\}] \geq -1 - \eta$.

Since $\eta(>0)$ is arbitrary, the proof of (3.3) follows from (3.11) and (3.15). The other two cases follow on parallel lines. Q.E.D.

Note that in (2.3), $k_2 \geq k_1 \geq 1$.
We write $d_n = \max(\log n, \psi^2(n))$ and $a = (2k_2)^{-1}(\leq\tfrac{1}{2})$, and let, for $c > 0$,

(3.16)  $B_n = B_n(x_0, a) = \{x : |F(x) - F(x_0)| < cn^{-a}d_n\}$,

(3.17)  $G_n^* = \sup\{n^{1/2}|L(x, F_n(x)) - L(x, F(x)) - L(x_0, F_n(x_0)) + L(x_0, F(x_0))| : x \in B_n\}$.

Lemma 3.3. Under (2.3)-(2.11), for every $\epsilon > 0$, as $n \to \infty$,

(3.18)  $P\{G_m^* > \epsilon\psi(m)$ for some $m \geq n\} = o(\exp\{-\psi^2(n)\})$.
Proof. On writing $\gamma_n(x) = L_{01}(x, uF_n(x) + (1-u)F(x))$, $0 < u < 1$, $x \in B_n$, we have

(3.19)  $G_n^* = \sup\{n^{1/2}|\gamma_n(x)\{F_n(x) - F(x) - F_n(x_0) + F(x_0)\} + [\gamma_n(x) - \gamma_n(x_0)]\{F_n(x_0) - F(x_0)\}| : x \in B_n\}$
$\leq \sup\{n^{1/2}|F_n(x) - F(x) - F_n(x_0) + F(x_0)| : x \in B_n\}\cdot\sup\{|\gamma_n(x)| : x \in B_n\}$
$+ n^{1/2}|L(x_0, F_n(x_0)) - L(x_0, F(x_0))|\cdot[\sup\{|\gamma_n(x)/\gamma_n(x_0) - 1| : x \in B_n\}]$.

By virtue of the assumed continuity of $L_{01}(x,y)$ [for $(x,y) \in A^*_\delta$], (2.6)-(2.8), Lemma 3.1 and Lemma 3.2, on denoting by

(3.20)  $G_n = \sup\{n^{1/2}|F_n(x) - F_n(x_0) - F(x) + F(x_0)| : x \in B_n\}$,

it suffices to show that for every $\epsilon > 0$, as $n \to \infty$,

(3.21)  $P\{G_m > \epsilon\psi(m)$ for some $m \geq n\} = o(\exp\{-\psi^2(n)\})$.

We consider the two cases: (i) $\psi^2(n) \leq C\log n$ for some $C > 1$, and (ii) $C\log n \leq \psi^2(n) = o(n^{1/5})$, separately.
In case (i), basically following the proof of Lemma 1 of Bahadur (1966), with modifications as in Theorem 3.2 of Sen (1973a), and extending the proof to the $p(\geq 1)$-variate case in a straightforward manner, it follows that for every $s(>0)$ there exist a constant $c_s$ and an integer $n_s$, such that

(3.22)  $P\{G_n > c_s n^{-a/2}(\log n)\} \leq n^{-s-1}$, for every $n \geq n_s$.

We note that for every $a > 0$, $n^{-a/2}\log n$ is decreasing in $n$ (for large $n$), while $\psi(n)$ is nondecreasing in $n$; hence, there exists an integer $n_\epsilon$, such that $c_s n^{-a/2}\log n \leq \epsilon\psi(n)$ for $n \geq n_\epsilon$. As a result, by (3.22), for $n > \max(n_s, n_\epsilon)$,

(3.23)  $P\{G_m > \epsilon\psi(m)$ for some $m \geq n\} \leq \sum_{m \geq n}P\{G_m > c_s m^{-a/2}\log m\} \leq \sum_{m \geq n}m^{-s-1} = O(n^{-s}) = o(\exp\{-C\log n\}) = o(\exp\{-\psi^2(n)\})$,

by choosing $s > C$. Thus, (3.21) holds for case (i).

For case (ii), we consider first the case of $p = 1$. We define a set of points $\{x_{n,j}\}$ by

(3.24)  $F(x_{n,j}) = p_0 + j\epsilon n^{-1/2}\psi(n)/2$,  $j = 0, \pm 1, \ldots, \pm N_n$,
where $N_n$ is the least positive integer $k$ for which $F(x_{n,k}) > p_0 + cn^{-a}d_n$, so that $N_n = O(n^{1/2-a}d_n)$. Using the monotonicity of the df's $F$ and $F_n$, we obtain by a few simple steps that for $x \in [x_{n,j}, x_{n,j+1}]$,

(3.25)  $n^{1/2}|F_n(x) - F_n(x_0) - F(x) + F(x_0)| \leq \max_{\ell=j,j+1}\{n^{1/2}|F_n(x_{n,\ell}) - F_n(x_0) - F(x_{n,\ell}) + F(x_0)|\} + \epsilon\psi(n)/2$,

for $j = -N_n, \ldots, N_n - 1$. Consequently,

(3.26)  $P\{G_m > \epsilon\psi(m)$ for some $m \geq n\} \leq \sum_{m \geq n}\sum_{j=-N_m}^{N_m}P\{m^{1/2}|F_m(x_{m,j}) - F_m(x_0) - F(x_{m,j}) + F(x_0)| > \tfrac{1}{2}\epsilon\psi(m)\}$.
Since, for $x < y$, $m[F_m(y) - F_m(x)]$ has the binomial distribution with parameters $(m, F(y) - F(x))$, by Theorem 1 of Hoeffding (1963) [viz., his (2.2) and (2.4)], we obtain, on noting that $|F(x_{m,j}) - F(x_0)| \leq cm^{-a}d_m$ for all $|j| \leq N_m$, that

(3.27)  $P\{m^{1/2}|F_m(x_{m,j}) - F_m(x_0) - F(x_{m,j}) + F(x_0)| > \tfrac{1}{2}\epsilon\psi(m)\} \leq 2\exp\{-\tfrac{1}{4}\epsilon^2\psi^2(m)[a\log m - C^*]\}$,

whenever $m^{-a}$ is sufficiently small, and $C^*$ is a positive constant. Thus, by (3.24), the right hand side of (3.26) is bounded by

(3.28)  $2\sum_{m \geq n}(2N_m + 1)\exp\{-\tfrac{1}{4}\epsilon^2\psi^2(m)[a\log m - C^*]\}$.

As $\log(2N_m) = O(\log m)$ while $\psi^2(m) \geq C\log m$, $C > 1$, for every $\epsilon > 0$, $a > 0$, there exists an $n_0 = n_0(\epsilon, a)$, such that for $n \geq n_0$,

(3.29)  $\tfrac{1}{4}\epsilon^2\psi^2(m)[a\log m - C^*] - \log(2N_m + 1) > (1+\alpha)\psi^2(m)$, where $C\alpha = 1 + \eta$, $\eta > 0$.

Hence, by (3.28) and (3.29), the right hand side of (3.26) is bounded (for $n \geq n_0$) by

(3.30)  $\sum_{m \geq n}\exp\{-(1+\alpha)\psi^2(m)\} = \sum_{m \geq n}\exp\{-\psi^2(m)\}\exp\{-\alpha\psi^2(m)\} \leq \exp\{-\psi^2(n)\}\sum_{m \geq n}\exp\{-\alpha\psi^2(m)\} \leq \exp\{-\psi^2(n)\}\sum_{m \geq n}m^{-C\alpha} = o(\exp\{-\psi^2(n)\})$  (as $C\alpha = 1 + \eta$, $\eta > 0$).

Thus, (3.21) holds for $p = 1$. For $p > 1$, we need to replace the $(2N_n + 1)$ points in (3.24) by $(2N_n + 1)^p$ blocks and $\epsilon/2$ by $\epsilon/2p$, so that the increment of the df $F$ over the two corners of a block is $\leq \epsilon\psi(n)/2$. The rest of the proof follows on parallel lines. Hence, the proof of the lemma is complete.
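The exponential rate used above comes from Hoeffding's inequality for the binomial distribution; as an aside (not from the paper), one can verify numerically that the two-sided bound $P\{|F_m(x) - p| \geq t\} \leq 2\exp\{-2mt^2\}$ dominates the exact binomial tail.

```python
import math

def binom_two_sided_tail(m, p, t):
    """Exact P{|S/m - p| >= t} for S ~ Binomial(m, p), by direct summation."""
    total = 0.0
    for k in range(m + 1):
        if abs(k / m - p) >= t:
            total += math.comb(m, k) * p**k * (1.0 - p)**(m - k)
    return total

m, p, t = 50, 0.3, 0.1
exact = binom_two_sided_tail(m, p, t)
hoeffding = 2.0 * math.exp(-2.0 * m * t * t)  # two-sided Hoeffding (1963) bound
print(exact, hoeffding)
```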
Let us now define

(3.31)  $Z_n^{(1)} = \sup\{L(x, F_n(x)) : x \in B_n\}$.

Note that $P\{m^{1/2}(Z_m^{(1)} - \theta) > \sigma\psi(m)$ for some $m \geq n\} \leq P\{\sup[L(x, F_m(x)) - L(x_0, F(x_0)) : x \in B_m] > m^{-1/2}\sigma\psi(m)$ for some $m \geq n\} \leq P\{\sigma Z_m^* + G_m^* > \sigma\psi(m)$ for some $m \geq n\}$, where $G_m^*$ is defined by (3.17). Hence, by Lemma 3.2 and Lemma 3.3, we arrive at the following.

Lemma 3.4. Under (2.3)-(2.9) and (2.11)-(2.14),

(3.32)  $\lim_{n\to\infty}[\{\nu_1(n)\}^{-1}\log P\{m^{1/2}|Z_m^{(1)} - \theta| > \sigma\psi(m)$ for some $m \geq n\}] = -1$,

and, in (3.32), $|Z_m^{(1)} - \theta|$ may be replaced by $(Z_m^{(1)} - \theta)$ or by $(\theta - Z_m^{(1)})$.
Finally, let us denote by

(3.33)  $B_n(\delta) = \{x : |x - x_0| \leq \delta$ but $x \notin B_n(x_0, a)\}$,

(3.34)  $G_n^{**} = \sup\{n^{1/2}|L(x, F_n(x)) - L(x, F(x))| : x \in B_n(\delta)\}$,

where $\delta$ satisfies (2.3). Also, let $\{a_n\}$ be any sequence of positive numbers. Then, by virtue of (2.5)-(2.8) and Theorem 1-m of Kiefer (1961), we obtain the following.

Lemma 3.5. Under (2.5)-(2.8), for every $n(\geq 1)$,

(3.35)  $P\{G_n^{**} \geq a_n\} \leq c_1\exp\{-c_2 a_n^2\}$,

where $c_1$ and $c_2$ are both finite positive numbers and, for large $n$, $c_2$ may be replaced by $(2-\epsilon)/\xi_0^2$, for some $\epsilon > 0$.
4. Proof of the main theorem. We prove (2.16)-(2.17) only for $\{P_n^+(\psi)\}$; the proof for $\{P_n^-(\psi)\}$ follows on parallel lines, while, by noting that $P_n^+(\psi) \leq P_n(\psi) \leq P_n^+(\psi) + P_n^-(\psi)$ for all $n$, the proof for $\{P_n(\psi)\}$ follows immediately. Note that, by definition,

(4.1)  $P_n^+(\psi) = P\{m^{1/2}(Z_m - \theta) > \sigma\psi(m)$ for some $m \geq n\} \geq 1 - P\{Z_m^* \leq \psi(m),\ \forall\, m \geq n\} = P\{Z_m^* > \psi(m)$ for some $m \geq n\}$,

as $Z_m \geq L(x_0, F_m(x_0)) = \theta + \sigma m^{-1/2}Z_m^*$, with probability one. Hence, by (4.1) and Lemma 3.2, we obtain that

(4.2)  $\liminf_n\{[\log P_n^+(\psi)]/\nu_1(n)\} \geq -1$.
Let us define $B_n$ and $B_n(\delta)$ as in (3.16) and (3.33), and let

(4.3)  $A_0 = \{x \in A : |x - x_0| > \delta\}$.

Note that, by virtue of (2.6) and (2.7), for every $(x,y) \in A^*_\delta$,

(4.4)  $|L(x,y) - L(x, F(x))| \leq g(x)|y - F(x)|$,

so that whenever $\sup_x|F_n(x) - F(x)| < \delta$, by (4.4),

(4.5)

As such, by a straightforward generalization of Theorem 2.1 of Sen (1971a) [under our (2.7)], it follows that for every $\epsilon > 0$, there exist positive constants $C(<\infty)$, $\rho^*(\epsilon)$: $0 < \rho^*(\epsilon) < 1$, and an integer $n_0(\epsilon)$, such that for $n \geq n_0(\epsilon)$,

(4.6)  $P\{\sup_{x \in A_0}|L(x, F_n(x)) - L(x, F(x))| > \tfrac{1}{2}\epsilon\} \leq C[\rho^*(\epsilon)]^n$.

As a result,

(4.7)  $P\{\sup_{x \in A_0}L(x, F_m(x)) > \theta - \tfrac{1}{2}\epsilon$ for some $m \geq n\} \leq \sum_{m \geq n}P\{\sup_{x \in A_0}L(x, F_m(x)) > \theta - \tfrac{1}{2}\epsilon\}$
$= \sum_{m \geq n}P\{\sup_{x \in A_0}[L(x, F(x)) + \{L(x, F_m(x)) - L(x, F(x))\}] > \theta - \tfrac{1}{2}\epsilon\}$
$\leq \sum_{m \geq n}P\{\sup_{x \in A_0}\{L(x, F_m(x)) - L(x, F(x))\} > \tfrac{1}{2}\epsilon\}$  (by (2.2))
$\leq C^*[\rho^*(\epsilon)]^n$, where $C^* = C\{1 - \rho^*(\epsilon)\}^{-1}(<\infty)$.
Let us then define $Z_n^{(1)}$ as in (3.31), and let

(4.8)  $Z_n^{(2)} = \sup\{L(x, F_n(x)) : x \in B_n(\delta)\}$,  $Z_n^{(3)} = \sup\{L(x, F_n(x)) : x \in A_0\}$,

so that

(4.9)  $Z_n = \max\{Z_n^{(1)}, Z_n^{(2)}, Z_n^{(3)}\}$.

Then, by (2.3), (3.16) and (3.33),

$\sup\{L(x, F_n(x)) : x \in B_n(\delta)\} \leq \theta - c_2 n^{-1/2}d_n^{k_2} + \sup\{|L(x, F_n(x)) - L(x, F(x))| : x \in B_n(\delta)\}$,

so that, by (4.9) and (3.34)-(3.35), we have

(4.10)  $P\{Z_m^{(2)} > \theta - \tfrac{1}{2}c_2 m^{-1/2}d_m^{k_2}$ for some $m \geq n\} \leq P\{\sup\{m^{1/2}|L(x, F_m(x)) - L(x, F(x))| : x \in B_m(\delta)\} \geq \tfrac{1}{2}c_2 d_m^{k_2}$ for some $m \geq n\} \leq \sum_{m \geq n}c_1\exp\{-\tfrac{1}{4}c_2^2 d_m^{2k_2}\} = o(\exp\{-\psi^2(n)\})$, as $n \to \infty$,

since $d_m^{2k_2} \geq d_m^2 \geq \psi^2(m)\log m$ dominates $\psi^2(m) + \log m$ for large $m$. Finally, by (3.2) and (3.31),

(4.11)  $P\{Z_m^{(1)} \leq \theta - \tfrac{1}{2}c_2 m^{-1/2}d_m^{k_2}$ for some $m \geq n\} \leq P\{Z_m^* \leq -\tfrac{1}{2}c_2 d_m^{k_2}/\sigma$ for some $m \geq n\} = o(\exp\{-\psi^2(n)\})$, as $n \to \infty$,

where the last step follows from Lemma 3.2 by noting that, by (2.11), $d_m^{k_2} \geq [\psi^2(m)]^{k_2}$ is large compared to $\psi(m)$. Hence, from (4.7), (4.10) and (4.11), we have

(4.12)  $P\{Z_m \neq Z_m^{(1)}$ for some $m \geq n\} = o(\exp\{-\psi^2(n)\})$, as $n \to \infty$,

as $\psi^2(n)/n \to 0$, so that $[\rho^*(\epsilon)]^n = \exp\{n\log\rho^*(\epsilon)\} = o(\exp\{-\psi^2(n)\})$.
Then, we have

(4.13)  $P_n^+(\psi) \leq P\{m^{1/2}(Z_m^{(1)} - \theta) > \sigma\psi(m)$ for some $m \geq n\} + P\{Z_m \neq Z_m^{(1)}$ for some $m \geq n\}$,

so that, by Lemma 3.4, (4.12) and (4.13), we have

(4.14)  $\limsup_n[\{\nu_1(n)\}^{-1}\log P_n^+(\psi)] \leq -1$,

and hence (2.16) follows from (4.2) and (4.14). Also, by (2.13), $\lim_{n\to\infty}(\log\log n)/\psi^2(n) = 0$ implies that $\lim_{n\to\infty}\{\nu_1(n)/\psi^2(n)\} = \tfrac{1}{2}$, so that (2.17) follows from (2.16). Q.E.D.
5. Some applications of the main theorem.

(i) The law of iterated logarithm. We let $\psi^2(t) = 2(1+\epsilon)\log\log t$ for $t \geq 3$, and $\psi^2(t) = 1$ for $0 \leq t < 3$, where $\epsilon > 0$. Then, for large $n$, $\nu_1(n) = \epsilon\log\log n - O(\log\log\log n) \to \infty$ as $n \to \infty$, so that, by (2.16),

(5.1)  $\lim_{n\to\infty}P\{m^{1/2}(Z_m - \theta) > \sigma[2(1+\epsilon)\log\log m]^{1/2}$ for some $m \geq n\} = 0$, $\forall \epsilon > 0$.

On the other hand, $Z_n \geq L(x_0, F_n(x_0))$, $\forall n \geq 1$, with probability 1, so that, for every $\epsilon > 0$,

(5.2)  $P\{m^{1/2}(Z_m - \theta) > \sigma[2(1-\epsilon)\log\log m]^{1/2}$ for some $m \geq n\} \geq P\{Z_m^* > [2(1-\epsilon)\log\log m]^{1/2}$ for some $m \geq n\} \to 1$, as $n \to \infty$,

where the last step follows from (i) (2.6)-(2.8) and Lemma 3.1, implying that for every $u \in (0,1)$, $L_{01}(x_0, uF_n(x_0) + (1-u)F(x_0)) \to \xi_0(\neq 0)$ a.s., as $n \to \infty$, and (ii) $\{m^{1/2}|F_m(x_0) - F(x_0)|\}$ is attracted by the usual law of iterated logarithm. Hence, by (5.1) and (5.2), we have, on letting $\epsilon(>0)$ be arbitrarily small,

(5.3)  $P\{\limsup_n n^{1/2}(Z_n - \theta)/[2\sigma^2\log\log n]^{1/2} = 1\} = 1$.

Similarly, it can be shown that

(5.4)  $P\{\liminf_n n^{1/2}(Z_n - \theta)/[2\sigma^2\log\log n]^{1/2} = -1\} = 1$.

Thus, $\{Z_n\}$ obeys the law of iterated logarithm.
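As a numerical aside (not part of the paper), one can watch the LIL normalization at work for the bundle-strength statistic under an assumed Exp(1) df, where $\theta = e^{-1}$ and, by (2.9), $\sigma^2 = e^{-1}(1 - e^{-1})$: along a simulated path, the standardized statistic stays well inside a generous multiple of $[2\log\log n]^{1/2}$.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
theta = math.exp(-1.0)
sigma = math.sqrt(math.exp(-1.0) * (1.0 - math.exp(-1.0)))

def bundle_strength(xs_sorted):
    # Z_n = n^{-1} max_i (n-i+1) X_{n,i} for an ascending-sorted sample
    n = xs_sorted.size
    return np.max((n - np.arange(n)) * xs_sorted) / n

sizes = (1_000, 10_000, 100_000)
standardized = []
for n in sizes:
    x = np.sort(rng.exponential(size=n))
    z_n = bundle_strength(x)
    standardized.append(math.sqrt(n) * (z_n - theta) / sigma)

envelopes = [math.sqrt(2.0 * math.log(math.log(n))) for n in sizes]
print(standardized, envelopes)
```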
(ii) Probability of moderate deviations. Here, we let $\psi^2(n) = c^2\log n$ for some $0 < c < \infty$. Then, by (2.17), we obtain that

(5.5)  $\lim_{n\to\infty}[(\log n)^{-1}\log P\{m^{1/2}(Z_m - \theta)/\sigma > c(\log m)^{1/2}$ for some $m \geq n\}] = -\tfrac{1}{2}c^2$,

which is a stronger version of the usual result

(5.6)  $\lim_{n\to\infty}[(\log n)^{-1}\log P\{n^{1/2}(Z_n - \theta)/\sigma > c(\log n)^{1/2}\}] = -\tfrac{1}{2}c^2$,

and the latter is classically known as the probability of moderate deviations.
(iii) A.s. convergence to Wiener processes. Let us now impose another condition on $L(x,y)$, namely, that the first (partial) derivative $L_{01}(x,y)$ satisfies a local Lipschitz condition for all $(x,y)$: $|x - x_0| < \delta_1$, $|y - F(x_0)| < \delta_2$, where $\delta_1$, $\delta_2$ are sufficiently small. Specifically, we assume that

(5.7)  $|L_{01}(x,y) - L_{01}(x_0, F(x_0))| \leq K[|x - x_0|^{d_1} + |y - F(x_0)|^{d_2}]$,  $K < \infty$,

where $d_1$ and $d_2$ are positive numbers. Then, by virtue of Lemma 3.1, Lemma 3.2, (3.16), (3.17) and (5.7), we conclude that there exists a $d > 0$, such that for $n$ sufficiently large, the second term on the right hand side of (3.19) is

(5.8)  $o(n^{-d}\log n)$ almost surely,

and, by Lemma 1 of Bahadur (1966) along with Theorem 3.2 of Sen (1973a), the first term is

(5.9)  $O(n^{-(4k_2)^{-1}}(\log n))$ almost surely, as $n \to \infty$.

So that, under (5.7), for some $d(\leq (4k_2)^{-1})$, by (3.17) and (3.19),

(5.10)  $G_n^* = O(n^{-d}(\log n)) = o(1)$ almost surely, as $n \to \infty$.

As a result, from (4.12), (3.2), (3.17) and (5.10),

(5.11)  $n^{1/2}(Z_n - \theta)/\sigma = Z_n^* + O(n^{-d}\log n)$ a.s., as $n \to \infty$.

Let now $W = \{W(t) : 0 \leq t < \infty\}$ be a standard Wiener process on $[0,\infty)$. Then, using well-known results on $\{n^{1/2}[F_n(x_0) - F(x_0)], n \geq 1\}$, along with the a.s. convergence of $L_{01}(x_0, uF_n(x_0) + (1-u)F(x_0))$ to $\xi_0(\neq 0)$, for $0 < u < 1$, we claim that as $n \to \infty$,

(5.12)  $n^{1/2}Z_n^* = W(n) + O(n^a\log n)$ a.s., for some $0 < a < \tfrac{1}{2}$.

From (5.11) and (5.12), we claim that, as $n \to \infty$,

(5.13)  $n^{1/2}(Z_n - \theta)/\sigma = n^{-1/2}W(n) + O(n^{-d}\log n)$ a.s., for some $d > 0$.

Thus, the main results in Sen (1973a,c), deduced for the particular case of the bundle strength of filaments, also hold for general $L(x, F(x))$, $x \in A$, provided (5.7) holds.
REFERENCES

[1] BAHADUR, R.R. (1966). A note on quantiles in large samples. Ann. Math. Statist. 37, 577-580.

[2] BHATTACHARYYA, B.B., GRANDAGE, A., and SUH, M.W. (1970). On the distribution and moments of the strength of a bundle of filaments. J. Appl. Prob. 7, 712-720.

[3] HOEFFDING, W. (1963). Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58, 13-30.

[4] KIEFER, J. (1961). On large deviations of the empiric d.f. of vector chance variables and a law of iterated logarithm. Pacific J. Math. 11, 649-660.

[5] SEN, P.K. (1973a). On fixed size confidence bands for the bundle strength of filaments. Ann. Statist. 1, 526-537.

[6] SEN, P.K. (1973b). An almost sure invariance principle for multivariate Kolmogorov-Smirnov statistics. Ann. Prob. 1, 488-496.

[7] SEN, P.K. (1973c). An asymptotically efficient test for the bundle strength of filaments. J. Appl. Prob. 10, 586-596.

[8] SEN, P.K., BHATTACHARYYA, B.B., and SUH, M.W. (1973). Limiting behavior of the extremum of certain sample functions. Ann. Statist. 1, 297-311.

[9] SETHURAMAN, J. (1964). On the probability of large deviations of families of sample means. Ann. Math. Statist. 35, 1304-1316.

[10] STRASSEN, V. (1967). Almost sure behavior of sums of independent random variables and martingales. Proc. 5th Berkeley Symp. Math. Statist. Prob. 2, 315-343.