AN ALMOST SURE INVARIANCE PRINCIPLE FOR MULTIVARIATE
KOLMOGOROV-SMIRNOV STATISTICS
By
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina, Chapel Hill, N. C.
Institute of Statistics Mimeo Series No. 806
February 1972
AN ALMOST SURE INVARIANCE PRINCIPLE FOR MULTIVARIATE
KOLMOGOROV-SMIRNOV STATISTICS 1
By PRANAB KUMAR SEN
University of North Carolina, Chapel Hill
An almost sure invariance principle for Kolmogorov-Smirnov statistics for
vector chance variables is established along the lines of Theorems 1.4 and 4.9
of Strassen [Proc. Fifth Berkeley Symp. Math. Statist. Prob. (1967), 1, 315-343].
This strengthens certain asymptotic expressions on the probability of moderate
deviations for Kolmogorov-Smirnov statistics, obtained earlier by Gnedenko,
Koroluk and Skorokhod, and by Kiefer and Wolfowitz, among others.
AMS 1970 Subject classification: Primary 60B10, 60F15; Secondary 62E20.
Key Words and Phrases: Almost sure invariance principle, multivariate Kolmogorov-Smirnov statistics, probability of moderate deviations and reverse sub-martingales.
1) Work sponsored by the Aerospace Research Laboratories, Air Force Systems Command, U. S. Air Force, Contract No. F33615-71-C-1927.
Reproduction in whole or in part
permitted for any purpose of the U. S. Government.
1. Introduction. Let $\{X_i = (X_{i1}, \ldots, X_{ip})'; i \ge 1\}$ be a sequence of independent and identically distributed random vectors (iidrv) defined on a probability space $(\Omega, \mathcal{A}, P)$, where $X_i$ has a continuous distribution function $F(x)$, $x \in R^p$, the $p(\ge 1)$-dimensional Euclidean space. Define the empirical df's by

(1.1)  $F_n(x) = n^{-1}\sum_{i=1}^{n} c(x - X_i)$, $x \in R^p$, $n \ge 1$,

where $c(u) = 1$ if all the $p$ components of $u$ are $\ge 0$; otherwise $c(u) = 0$. Consider then the general p-variate Kolmogorov-Smirnov statistics

(1.2)  $D_n^+ = \sup\{F_n(x) - F(x): x \in R^p\}$, $n \ge 1$;

(1.3)  $D_n^- = \sup\{F(x) - F_n(x): x \in R^p\}$, $n \ge 1$;

(1.4)  $D_n = \sup\{|F_n(x) - F(x)|: x \in R^p\} = \max\{D_n^+, D_n^-\}$, $n \ge 1$;

these are all non-negative random variables.
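As a concrete illustration of (1.1)-(1.4), the statistics can be computed numerically along the following lines (a minimal sketch, not part of the paper: it assumes independent Uniform(0,1) marginals, so that $F(x) = \prod_j x_j$, and it evaluates the suprema over the grid of observed coordinate values, which is exact for $D_n^+$ but only approximate for $D_n^-$):

```python
import numpy as np

def ks_statistics(X, F):
    """Approximate the p-variate Kolmogorov-Smirnov statistics (1.2)-(1.4).

    X : (n, p) array of observations X_1, ..., X_n.
    F : callable returning the true df F at each row of an (m, p) array.

    The suprema are taken over the grid spanned by the observed coordinate
    values: this grid contains the maximizer of F_n - F (so D_n^+ is exact),
    while F - F_n may approach its supremum just below a grid point, so the
    value returned for D_n^- is an approximation from below.
    """
    n, p = X.shape
    axes = [np.unique(X[:, j]) for j in range(p)]
    mesh = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([m.ravel() for m in mesh], axis=1)            # evaluation grid

    # Empirical df (1.1): F_n(x) = n^{-1} #{i : X_i <= x componentwise}.
    Fn = (X[None, :, :] <= pts[:, None, :]).all(axis=2).mean(axis=1)
    F0 = F(pts)

    d_plus = np.max(Fn - F0)                                     # (1.2)
    d_minus = np.max(F0 - Fn)                                    # (1.3), approximate
    return d_plus, d_minus, max(d_plus, d_minus)                 # (1.4)

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                                   # iid U(0,1)^2 sample
F_unif = lambda x: np.prod(np.clip(x, 0.0, 1.0), axis=1)         # F(x) = x_1 x_2 on [0,1]^2
print(ks_statistics(X, F_unif))
```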
In the particular case of p=1, Gnedenko, Koroluk and Skorokhod (1961, pp. 154-155) reported the results of Karplevskaia and of Li-Tsian that if $\{\lambda_n\}$ be a sequence of positive numbers such that $n\lambda_n^3 = O(1)$, then as $n \to \infty$,

(1.5)  $P\{D_n^+ \ge \lambda_n\} = P\{D_n^- \ge \lambda_n\} = \tfrac{1}{2}P\{D_n \ge \lambda_n\}[1 + o(1)] = \exp\{-2n\lambda_n^2\}[1 + o(1)]$.
Kiefer and Wolfowitz (1958) have shown that for $p \ge 1$, there exist two positive constants $c_p^{(1)}$ and $c_p^{(2)}$, such that for every $n \ge 1$, $\lambda_n \ge 0$,

(1.6)  $P\{D_n^+ \ge \lambda_n\} \le P\{D_n \ge \lambda_n\} \le c_p^{(1)}\exp\{-c_p^{(2)}n\lambda_n^2\}$.

That $c_p^{(2)}$ cannot, in general, be equal to 2 has been proved by Kiefer (1961),
who shows that for every $\varepsilon > 0$ and for every $n \ge 1$, $\lambda_n \ge 0$, there exists a positive $C(p,\varepsilon)$ such that

(1.7)  $P\{D_n \ge \lambda_n\} \le C(p,\varepsilon)\exp\{-(2-\varepsilon)n\lambda_n^2\}$.
Since $D_n$ (or $D_n^+$ or $D_n^-$) can not be less than the corresponding statistic for the p univariate marginals, we obtain from (1.5) and (1.7), by letting $\varepsilon(>0)$ be arbitrarily small, that if $n\lambda_n^2 \to \infty$ as $n \to \infty$ and $n\lambda_n^3 = O(1)$, then for every $p \ge 1$,

(1.8)  $[\log P\{D_n \ge \lambda_n\}]/(n\lambda_n^2) \to -2$  as  $n \to \infty$,

and the same result holds for $\{D_n^+\}$ and $\{D_n^-\}$.
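In outline, the two bounds combine as follows (a sketch only; $D_n^{(1)}$ denotes the statistic (1.4) computed from the first coordinates $X_{11}, \ldots, X_{n1}$ alone, a notation introduced here for the illustration, so that $D_n \ge D_n^{(1)}$):

```latex
% Upper bound from (1.7), lower bound from (1.5) applied to the first marginal.
\begin{align*}
P\{D_n \ge \lambda_n\} &\le C(p,\varepsilon)\exp\{-(2-\varepsilon)n\lambda_n^2\}
   && \text{by (1.7)},\\
P\{D_n \ge \lambda_n\} &\ge P\{D_n^{(1)} \ge \lambda_n\}
   = \exp\{-2n\lambda_n^2\}[1+o(1)]
   && \text{by (1.5) with } p=1,
\end{align*}
% so that, when n\lambda_n^2 -> infinity and n\lambda_n^3 = O(1), dividing by
% n\lambda_n^2 and letting \varepsilon -> 0 gives
% [\log P\{D_n \ge \lambda_n\}]/(n\lambda_n^2) \to -2.
```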
For a positive function $\phi(t)$ increasing at a faster (slower) rate than $t^{1/2}$ ($t^{3/5}$) and satisfying certain regularity conditions, Theorems 1.4 and 4.9 of Strassen (1967) relate to the asymptotic (as $n \to \infty$) expression for

(1.9)  $P\{S_m \ge \phi(m) \text{ for some } m \ge n\}$,

where $S_m = \sum_{i=1}^{m}(X_i - \mu)/\sigma$, $\mu = EX_i$ and $0 < \sigma^2 = V(X_i) < \infty$, and this relates to his (third) almost sure invariance principle for cumulative sums.
The object of the present investigation is to show that this almost sure invariance principle also holds for $\{nD_n^+\}$, $\{nD_n^-\}$ and $\{nD_n\}$. This shows that under certain mild regularity conditions, in the asymptotic expression in (1.8), $[D_n \ge \lambda_n]$ can be replaced by $[D_m \ge \lambda_m$ for some $m \ge n]$.

The main theorems along with the basic regularity conditions are formulated in section 2, where a few remarks are also appended. The proofs of the theorems, based on a reverse sub-martingale property of $\{D_n^+\}$ and Theorem 1 of Kiefer (1961), are considered in section 3.
2. The main theorems. Let $\phi = \{\phi(t): 0 < t < \infty\}$ be a positive function, defined on $[0,\infty)$, with a continuous derivative $\phi'(t)$, such that (i)

(2.1)  $\psi(t) = t^{-1/2}\phi(t)$ is $\uparrow$ in $t$ but $t^{-h}\phi(t)$ is $\downarrow$ in $t$, $\tfrac{1}{2} < h < 3/5$;

(ii) as $t \to \infty$, with $s/t \to 1$,

(2.2)  $\psi'(s)/\psi'(t) \to 1$  ($\phi'(s)/\phi'(t) \to 1$);

and (iii) the Kolmogorov-Petrovski-Erdos criterion holds for every $n \ge 1$ and $k \ge 2$, i.e.,

(2.3)

Note that by (2.1),

(2.4)  $\psi'(t) = t^{-1/2}\phi'(t) - \tfrac{1}{2}t^{-3/2}\phi(t)$ ($>0$) is continuous in $t$;

(2.5)

Also, by (2.3) and a few routine steps it follows that there exists a positive $t_0$ ($<\infty$), such that for all $t \ge t_0$ and $k \ge 2$,

(2.6)

Thus, for large $t$, $\psi^2(t) > \tfrac{1}{2}\log\log t$ when (2.3) holds. We further assume that for large $n$, $\nu_k(n)/\nu_2(n)$ is a continuous function of $k \in [2, 2+\delta)$, for some $\delta > 0$. Thus for every $\delta'$ ($0 < \delta' \le \delta$), there exists an $n_0 > 0$, such that for $n > n_0$,

(2.7)

In fact, if for some $\varepsilon > 0$, $\overline{\lim}_{t\to\infty}[(\log\log t)/\psi^2(t)] \le 2 - \varepsilon$, then (2.1)-(2.3) imply (2.7). A counterexample where (2.1)-(2.3) hold but not (2.7) is $\psi^2(t) = \log\log t + A\log\log\log t$, where $A(>0)$ is a positive number. Let us now define

(2.8)  $P_n(\phi) = P\{mD_m \ge \phi(m) \text{ for some } m \ge n\}$,
and in (2.8), on replacing $D_m$ by $D_m^+$ and $D_m^-$, we define $P_n^+(\phi)$ and $P_n^-(\phi)$, respectively. Then, we have the following.

Theorem 1. Under (2.1), (2.2), (2.3) and (2.7),

(2.9)  $\lim_{n\to\infty}\{[\log P_n(\phi)]/\nu_2(n)\} = -1$,

and, if, in addition, $\lim_{n\to\infty}\{(\log\log n)/\psi^2(n)\} = 0$, then

(2.10)  $\lim_{n\to\infty}\{[\log P_n(\phi)]/\psi^2(n)\} = -2$.

Both (2.9) and (2.10) hold for $\{P_n^+(\phi)\}$ and $\{P_n^-(\phi)\}$.

Upon letting $\psi(n) = n^{1/2}\lambda_n$, i.e., $\phi(n) = n\lambda_n$, we note that in (2.1) we limit ourselves to $\lambda_n = O(n^{h-1})$ where $\tfrac{1}{2} < h < 3/5$. On the other hand, in (1.5), we had $\lambda_n = O(n^{-1/3})$. This leads us to inquire whether (2.9) [or (2.10)] holds for $\lambda_n$ satisfying (1.5) but not (2.1).
Towards this end, we may remark that whenever $\psi^2(n)$ increases with $n$ in such a way that $(\log\log n)/\psi^2(n) \to 0$ as $n \to \infty$, we do not require (2.2), (2.3) and (2.7), and we may also extend the range of $\psi(n)$ to $O(n^{1/6})$. We have the following.

Theorem 2. If for some $C$: $0 < C < \infty$, $\psi(n) \le Cn^{1/6}$ and $\lim_{n\to\infty}(\log\log n)/\psi^2(n) = 0$, then

(2.11)  $\lim_{n\to\infty}\{[\log P_n(\phi)]/\psi^2(n)\} = -2$,

and the same result holds for $\{P_n^+(\phi)\}$ and $\{P_n^-(\phi)\}$.
We postpone the proof of the theorems to section 3. The following remarks illustrate a few applications of the theorems:

(I)  If we let $\psi^2(t) = (\tfrac{1}{2}+\varepsilon)\log\log t$, $\varepsilon > 0$, it follows that (2.1), (2.2), (2.3) and (2.7) hold, where

(2.12)  $\nu_2(t) = 2\varepsilon\log\log t - \tfrac{1}{2}\log(\tfrac{1}{2}+\varepsilon) - \tfrac{1}{2}\log\log\log t$  ($\to \infty$ as $t \to \infty$).

Therefore, by (2.9), $P_n(\phi) \to 0$ as $n \to \infty$, i.e.,

(2.13)  $P\{\limsup_{n}\,(2n)^{1/2}(\log\log n)^{-1/2}D_n = 1\} = 1$,

and the same result holds for $\{D_n^+\}$ and $\{D_n^-\}$. Thus, in the general p-variate case, the law of iterated logarithm holds for Kolmogorov-Smirnov statistics; this was proved in Theorem 2 of Kiefer (1961) by a different approach.
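As a purely numerical illustration of the scaling in (2.13) (a sketch outside the paper: it takes the univariate case p = 1 with Uniform(0,1) observations, for which $D_n$ reduces to the usual order-statistics formula, and it only prints the normalized statistic along one sample path; the almost sure lim sup is of course not observable from finitely many n):

```python
import numpy as np

rng = np.random.default_rng(1)

def dn_uniform(u):
    """One-dimensional Kolmogorov-Smirnov statistic D_n for a Uniform(0,1) sample u."""
    u = np.sort(u)
    n = u.size
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)          # D_n^+
    d_minus = np.max(u - (i - 1) / n)   # D_n^-
    return max(d_plus, d_minus)         # D_n

# Normalized statistic (2n / log log n)^{1/2} D_n along one growing sample path;
# by the law of iterated logarithm (2.13) its lim sup equals 1 almost surely.
u = rng.uniform(size=10**6)
for n in (10**3, 10**4, 10**5, 10**6):
    d = dn_uniform(u[:n])
    print(n, np.sqrt(2 * n / np.log(np.log(n))) * d)
```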
(II)  If we let $\psi(n) = n^{1/2}\lambda_n$, we observe that under the condition that $(\log\log n)/\psi^2(n) \to 0$ with $n \to \infty$, (2.11) extends (1.8) in the sense that $[D_n \ge \lambda_n]$ is replaced by $[D_m \ge \lambda_m$ for some $m \ge n]$. This extension is comparable to Theorem 1.4 of Strassen (1967) which provides a similar extension of the probability of moderate deviations for sample cumulative sums, studied earlier by Cramer (1938), Linnik (1961), Rubin and Sethuraman (1965), and others.
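For orientation, the identification used in this remark can be written out explicitly (a restatement only, using the definition (2.8) and $\psi(t) = t^{-1/2}\phi(t)$):

```latex
% With \psi(n) = n^{1/2}\lambda_n, i.e. \phi(n) = n\lambda_n:
\begin{align*}
\{m D_m \ge \phi(m)\ \text{for some}\ m \ge n\}
  &= \{D_m \ge \lambda_m\ \text{for some}\ m \ge n\},\\
\psi^2(n) &= n\lambda_n^2,
\end{align*}
% so that normalizing \log P_n(\phi) by \psi^2(n) is the same as normalizing
% by n\lambda_n^2, the scaling appearing in (1.8).
```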
3. The proofs. For a positive $k$, we define

(3.1)

Note that by (2.3), (2.5) and (3.1), for every $n \ge 1$, $k \ge 2$,

(3.2)  $0 < \tfrac{1}{2}J_n(k\phi) \le I_n(k\phi) \le \tfrac{3}{5}J_n(k\phi)$.

We consider first the following.

Lemma 3.1. Under (2.1), (2.2), (2.3) and (2.7), as $n \to \infty$,

(3.3)

Proof. By virtue of (3.2), it suffices to show that as $n \to \infty$,

(3.4)  $\{\log I_n(k\phi)\}/\nu_k(n) \to -1$, for every $k \ge 2$.
Since $u\exp\{-\tfrac{1}{2}u^2\}$ is $\downarrow$ in $u$ ($\ge 1$), on denoting $t_0$ by $\psi^2(t_0) = k^{-2}$, we obtain that for all $n > t_0$,

(3.5)  $I_n(k\phi) \le \sum_{m=n}^{\infty} k\,m^{-1}\psi(m)\exp\{-\tfrac{1}{2}k^2\psi^2(m)\}$;  $k \ge 2$.

Let us now define a set of points $\{n_s, s = 0, 1, \ldots\}$ by $n_0 = n$, and

(3.6)  $n_s = [\exp\{(\log n)^{1+\varepsilon s}\}] + 1$, $s = 0, 1, \ldots$, $\varepsilon > 0$,

where $[k]$ denotes the integral part of $k(\ge 0)$, and $\varepsilon$ is arbitrarily small. Then,
the right hand side of (3.5) is bounded from above by

(3.7)  $\le [(\log n)^{\varepsilon} - 1]\sum_{s=0}^{\infty} k\psi(n_s)\exp\{-\tfrac{1}{2}k^2\psi^2(n_s)\}\log n_s$,

where $\chi_n(0) = 1$, and for $s \ge 1$,

(3.8)

We prove the lemma first for $k > 2$, i.e., $k = 2 + \delta$, $\delta > 0$. Since $\nu_k(n) - \nu_2(n) = (\tfrac{1}{2}k^2 - 2)\psi^2(n) > 2\delta\psi^2(n)$, by (2.6) and the remark made thereafter, for large $n$ and every $s \ge 1$,

(3.9)
Hence, for every $\varepsilon > 0$, $\delta > 0$, for large $n$,

(3.10)  $\sum_{s=0}^{\infty}\chi_n(s) \le \sum_{s=0}^{\infty}[(\log n)^{-\delta\varepsilon}]^{s} = \{1 - (\log n)^{-\delta\varepsilon}\}^{-1} < K_{\varepsilon,\delta} < \infty$.

Therefore, for $k = 2 + \delta$, $\delta > 0$, we have by (3.7) and (3.8) that

(3.11)

Now, by the remark following (2.6), for $k = 2 + \delta$, $\nu_k(n) > \nu_2(n) + 2\delta\psi^2(n) > 2\delta\psi^2(n) > \delta\log\log n$, so that the right hand side of (3.11) is bounded above by $-1 + \varepsilon/\delta$. Thus, for every $\delta > 0$ and $\eta > 0$, there exists an $\varepsilon > 0$, such that $\varepsilon/\delta < \eta$ and

(3.12)

In a similar way, by working with the left hand side of (3.7), it follows that for every $\eta > 0$,

(3.13)  $\liminf_n\{[\log I_n(k\phi)]/\nu_k(n)\} \ge -1 - \eta$ for every $k = 2 + \delta$, $\delta > 0$.

Thus, for every $k > 2$,

(3.14)

Now to consider the case of $k = 2$, we note that, by definition in (2.3), $J_n(k\phi)$ is $\downarrow$ in $k$, and hence, for every $\delta > 0$,

(3.15)

Thus, from (2.7), (3.13) and (3.15), it follows that for every $\eta > 0$,

(3.16)  $\liminf_n\{[\log I_n(2\phi)]/\nu_2(n)\} \ge -1 - \eta$.

Also, by (2.3) and a few simple steps, we obtain on using (2.7) that for every $\eta > 0$, there exists a $\delta > 0$, such that as $n \to \infty$,
(3.17)  $J_n(2\phi) = (2\pi)^{-1/2}\int_{t=n}^{\infty}\exp\{-\nu_2(t)\}(\log t)^{-1}\,d(\log t)$
$< (2\pi)^{-1/2}\int_{t=n}^{\infty}\exp\{-\nu_{2+\delta}(t)(1-\eta)\}(\log t)^{-1}\,d(\log t)$
$= (2\pi)^{-1/2}\int_{t=n}^{\infty}\exp\{-\tfrac{1}{2}(1-\eta)(2+\delta)^2\psi^2(t) + (1-\eta)\log\psi(t)\}(\log t)^{-\eta}t^{-1}\,dt$
$< (2\pi)^{-1/2}(\log n)^{-\eta}\int_{n}^{\infty}\exp\{-\tfrac{1}{2}(1-\eta)(2+\delta)^2\psi^2(t)\}[\psi(t)]^{1-\eta}t^{-1}\,dt$.

Therefore, by the same technique as in (3.7) through (3.12), we obtain on choosing $\delta(>0)$ sufficiently small that

(3.18)  $\limsup_n\{[\log I_n(2\phi)]/\nu_2(n)\} \le -1 + \eta$,  $\eta > 0$.

By letting $\eta$ in (3.16) and (3.18) to be arbitrarily small, we conclude that

(3.19)  $\lim_{n\to\infty}\{[\log I_n(2\phi)]/\nu_2(n)\} = -1$.  Q.E.D.
Lemma 3.2. Under (2.1), (2.2), (2.3) and (2.7), for every $x \in R^p$,

(3.20)   $= -1$, where $k(x) = \{2F(x)[1-F(x)]\}^{-1}$ ($\ge 2$).

Proof. Since $F_m(x)$ involves an average of iidrv's which assume only the values 0 and 1, the existence of the moment generating function is insured, and $mV[F_m(x) - F(x)] = F(x)[1-F(x)]$ ($\le \tfrac{1}{4}$ for all $x \in R^p$). Thus, by Theorems 1.4 and 4.9 of Strassen (1967), as $n \to \infty$,
(3.21)  $P\{m[F_m(x) - F(x)] \ge \phi(m) \text{ for some } m > n\} \sim I_n(k(x)\phi)$,

where $I_n(k\phi)$ is defined by (3.1) and $k(x)$ by (3.20), and $\sim$ indicates that the ratio of the two sides converges to 1 as $n \to \infty$. The rest of the proof follows from Lemma 3.1 and (3.21). Q.E.D.

Lemma 3.3. Under (2.1), (2.2), (2.3) and (2.7),

(3.22)  $\liminf_n\{[\log P_n(\phi)]/\nu_2(n)\} \ge -1$.

Proof. By (1.2), for every $n \ge 1$,

(3.23)

where $x_0$ is a point (in $R^p$) for which $F(x_0) = \tfrac{1}{2}$. Then, by Lemma 3.2, we obtain that

(3.24)   $\to -1$, as $k(x_0) = 2$.

In a similar way, it follows that

(3.25)  $\liminf_n\{[\log P_n^-(\phi)]/\nu_2(n)\} \ge -1$,

and hence, the lemma follows from (3.24) and (3.25). Q.E.D.
For every $n \ge 1$, let $\mathcal{C}_n$ be the $\sigma$-field generated by the unordered $X_1, \ldots, X_n$ and by $X_{n+1}, X_{n+2}, \ldots$; $\mathcal{C}_n$ is $\downarrow$ in $n$. Then we have the following.
Lemma 3.4. $\{D_n^+, \mathcal{C}_n; n \ge 1\}$ and $\{D_n^-, \mathcal{C}_n; n \ge 1\}$ are both non-negative reverse sub-martingales for every $p \ge 1$.

Proof. For every $n \ge 1$, let $Z_n$ (a random vector) be a point in $R^p$ where $F_n(x) - F(x)$ attains a maximum, i.e.,

(3.26)  $D_n^+ = \sup\{F_n(x) - F(x): x \in R^p\} = F_n(Z_n) - F(Z_n)$;

$Z_n$ need not be unique. Also, by (1.2), $D_n^+ \ge 0$ for $n \ge 1$. Therefore, using the fact that

(3.27)

we obtain that

(3.28)  $E(D_n^+ \mid \mathcal{C}_{n+1}) \ge D_{n+1}^+$ ($\ge 0$), for every $n \ge 1$.

In a similar way, it follows that for every $n \ge 1$, $D_n^- \ge 0$, and

(3.29)  $E(D_n^- \mid \mathcal{C}_{n+1}) \ge D_{n+1}^-$, for every $n \ge 1$.

Hence the lemma follows.
Lemma 3.5. For each $p(\ge 1)$ and every $\varepsilon > 0$, there exists a positive $C(p,\varepsilon)$ ($<\infty$), such that for every $n \ge 1$ and $k \ge 0$,

(3.30)  $E[(n^{1/2}D_n^+)^k] \le C(p,\varepsilon)(2-\varepsilon)^{-k/2}\,\Gamma(\tfrac{1}{2}k+1)$, for all $F$,

and the same result holds for $\{D_n^-\}$ and $\{D_n\}$.

Proof. It follows from Theorem 1 of Kiefer (1961) that for each $p(\ge 1)$ and every $\varepsilon > 0$, there exists a positive $C(p,\varepsilon)$ ($<\infty$), such that for every $n \ge 1$, $r > 0$ and $F$,

(3.31)  $P\{n^{1/2}D_n^+ > r\} \le C(p,\varepsilon)\exp\{-(2-\varepsilon)r^2\}$.

Therefore, by routine steps,

(3.32)  $E[(n^{1/2}D_n^+)^k] = \int_0^{\infty} x^k\,dP\{n^{1/2}D_n^+ \le x\} = k\int_0^{\infty} x^{k-1}P\{n^{1/2}D_n^+ > x\}\,dx \le kC(p,\varepsilon)\int_0^{\infty} x^{k-1}\exp\{-(2-\varepsilon)x^2\}\,dx = C(p,\varepsilon)\,\Gamma(\tfrac{1}{2}k+1)(2-\varepsilon)^{-k/2}$.

The other two cases follow similarly. Q.E.D.
Lemma 3.6. If $\{c_m\}$ be non-decreasing, then for every $N \ge n \ge 1$, $t > 0$,

(3.33)  $P\{\max_{n \le m \le N} c_m D_m^+ \ge t\} \le t^{-k}C(p,\varepsilon)\,\Gamma(\tfrac{1}{2}k+1)(2-\varepsilon)^{-k/2}\{c_n^k n^{-k/2} + \sum_{s=n+1}^{N}(c_s^k - c_{s-1}^k)s^{-k/2}\}$.

The proof is a direct consequence of Lemmas 3.4 and 3.5 and of Theorem 1 of Chow (1960) which extends the Hájek-Rényi inequality for sub-martingales.

Lemma 3.7. Under (2.3) and (2.7), for all non-decreasing $\{\psi(t)\}$ such that $\psi^2(t) \le ct^{1/3}$, $c < \infty$,

(3.34)

and the same result holds for $\{P_n^-(\phi)\}$ and $\{P_n(\phi)\}$.

Proof. We only consider the proof for $\{P_n^+(\phi)\}$; the case of $\{P_n^-(\phi)\}$ follows on identical lines. Also, noting that $P_n^+(\phi) \le P_n(\phi) \le P_n^+(\phi) + P_n^-(\phi)$, the case of $\{P_n(\phi)\}$ follows trivially.
follows trivially.
A
(3.35)
n
n
!L+
(A~)
{m-um ->
We consider the events
A~(m)
for some m>n},
n>l,
A>O,
-
(3.36)
where n
o=
later on.
nand {n } is an increasing sequence of positive integers to be chosen
s
Since
~(t)
is t in t, we have
peA (A~»
n
(3.37)
= P+(A~) < P(Ao(A~»
n
n
for all A>O, n>l.
-
Then, by Lemma 3.6 and a few simple steps, we obtain that
(3.38)
~ e(p,£)
Ls=OOO[A~(n)]
s
n
{l +
Lj : :1
-k
-k
/2
n,s(2_£) n,s
I~k
+1.
n, s
~k
-1
[l_(l_j-1)
n,s]}, £>0,
s
where we let
(3.39)
By Sterling's approximations, for large n, for every s=O,l, ... ,
(3.40)
logr~k
n,s
+1 = -~log(2TI) + ~(k
Also, by (3.39) and (2.1),
(3.41)
n,s
+l)log(~k
n,s
)-~k
n,s
+0(1)
so that for every $s \ge 0$,

(3.42)  $\sum_{j=n_s+1}^{n_{s+1}}[1 - (1-j^{-1})^{\frac{1}{2}k_{n,s}}] \le \tfrac{1}{2}k_{n,s}(\log n_{s+1} - \log n_s) + O(1)$, by (2.1) and (3.39).

Let us now set $\lambda = 1 + \tfrac{1}{4}\varepsilon$, so that $\tfrac{1}{2}k_{n,s} = (2+\varepsilon')\psi^2(n_s)$, where $\varepsilon' = \varepsilon - \tfrac{1}{4}\varepsilon^2 - \tfrac{1}{4}\varepsilon^4 > 0$ for every $0 < \varepsilon < \varepsilon_0$ ($\le \tfrac{1}{2}$). We consider first the case of $\psi(t)$ satisfying (2.1), (2.2), (2.3) and (2.7), such that $\psi^2(t) \le C\log t$, $0 < C < \infty$. We set then

(3.43)  $\log n_s \approx (\log n)^{1+\varepsilon s/3}$, $s = 0, 1, \ldots$,

where $\approx$ indicates that $n_s$ is the least positive integer for which the left hand side is $\ge$ the right hand side. Then, for every $s \ge 0$,

(3.44)  $\log n_{s+1} - \log n_s \le (\log n_s)\{(\log n)^{\varepsilon/3} - 1\} < (\log n)^{\varepsilon/3}(\log n_s)$,

so that by (3.38) through (3.44), we have for large $n$,

(3.45)  $P(A_0((1+\tfrac{1}{4}\varepsilon)\phi)) \le K_{\varepsilon}\Big[\sum_{s=0}^{\infty}\psi^2(n_s)\exp\{-\nu_2(n_s) - \varepsilon'\psi^2(n_s) + (\varepsilon/3)\log\log n_s\}\Big]$,

where $K_{\varepsilon}$ ($<\infty$) depends only on $\varepsilon$ ($>0$), $\nu_2(n)$ is defined by (2.6), and

(3.46)  $\chi_n(s) = \exp\{\cdots + (\varepsilon/3)[\log\log n_s - \log\log n] + 2\log[\psi(n_s)/\psi(n)]\}$, $s > 0$.

By the remark made after (2.6) and (3.44), it follows that for large $n$,

(3.47)  $\chi_n(s) \le \exp\{-[\tfrac{1}{2}\varepsilon\varepsilon' s\log\log n - (\varepsilon^2 s/9)\log\log n]\} \le \exp\{-(\varepsilon^2 s/18)\log\log n\}$,

as $\varepsilon' = \varepsilon - \tfrac{1}{4}\varepsilon^2 - \tfrac{1}{4}\varepsilon^4 > 23\varepsilon/32 > (2/3)\varepsilon$ for all $0 < \varepsilon < \varepsilon_0$ ($\le \tfrac{1}{2}$), and for large $n$, $(\varepsilon'/12)[\psi^2(n_s) - \psi^2(n)]$ can be made larger than $\log[\psi^2(n_s)/\psi^2(n)]$. Therefore, for $n$ sufficiently large,

(3.48)  $1 \le \sum_{s=0}^{\infty}\chi_n(s) \le \{1 - (\log n)^{-\varepsilon^2/18}\}^{-1} \to 1$ as $n \to \infty$.

In a similar way, it follows that for every $\varepsilon > 0$, as $n \to \infty$,

(3.49)  $\varepsilon'\psi^2(n) > (\varepsilon'/3)\log\log n + 2\log\psi(n)$, when (2.3) holds.

Therefore, from (3.45) through (3.49), it follows that for every $\varepsilon$ ($0 < \varepsilon < \varepsilon_0 \le \tfrac{1}{2}$),

(3.50)

and (3.34) follows from (2.7), (3.37) and (3.50) by letting $\varepsilon(>0)$ be arbitrarily small.

We next consider the case when $\psi(t)$ is $\uparrow$ in $t$ such that $(\log t)/\psi^2(t) \to 0$ and $t^{-1/3}\psi^2(t) \le c < \infty$ as $t \to \infty$. In this case, we let

(3.51)  $n_s = [n(1+\varepsilon)^s]$, $s = 0, 1, \ldots$; $\varepsilon > 0$,

and virtually repeat the steps (3.44)-(3.50), with some further simplifications; for brevity, the details are omitted. Q.E.D.
Returning now to the proof of Theorem 1, we note that (2.9) follows directly from Lemmas 3.3 and 3.7, and (2.10) follows from (2.6), (2.9) and the fact that $\psi^2(n)/\log\log n \to \infty$ with $n \to \infty$ implies that $\nu_2(n)/\psi^2(n) \to 2$ as $n \to \infty$.

For the proof of Theorem 2, we define by $\tilde{D}_n^+$, $\tilde{D}_n^-$ and $\tilde{D}_n$ the Kolmogorov-Smirnov statistics, defined by (1.2), (1.3) and (1.4), but based on the observations $X_{11}, \ldots, X_{n1}$; all these statistics are based on univariate observations and thereby are strictly distribution-free when the first marginal df of $F$ is continuous. Then,

(3.52)  $D_n^+ \ge \tilde{D}_n^+$, $D_n^- \ge \tilde{D}_n^-$ and $D_n \ge \tilde{D}_n$ for all $n \ge 1$.

Also, as in Gnedenko, Koroluk and Skorokhod (1961), for every $0 \le x < cn^{1/6}$,

(3.53)  $P\{n^{1/2}\tilde{D}_n^+ \ge x\} = P\{n^{1/2}\tilde{D}_n^- \ge x\} = \exp\{-2x^2\}\{1 + 2x/(3n^{1/2}) + O(1/n)\}$;

(3.54)

Therefore, by (1.7), (3.52) and (3.54) we have

(3.55)

for all $0 \le \psi^2(n) < Cn^{1/3}$, $0 < C < \infty$. Further, by (2.8),

(3.56)

and hence, by (3.55),

(3.57)

Since $\nu_2(n)/\psi^2(n) \to 2$ as $n \to \infty$ (under the hypothesis of Theorem 2), we obtain from Lemma 3.7 that

(3.58)

which completes the proof for $\{P_n(\phi)\}$; the cases of $\{P_n^+(\phi)\}$ and $\{P_n^-(\phi)\}$ follow similarly.
REFERENCES

[1] CHOW, Y. S. (1960). A martingale inequality and the law of large numbers. Proc. Amer. Math. Soc. 11, 107-111.

[2] CRAMER, H. (1938). Sur un nouveau théorème-limite de la théorie des probabilités. Actualités Sci. Indust. 736, 5-23.

[3] GNEDENKO, B. V., KOROLUK, V. S., AND SKOROKHOD, A. V. (1961). Asymptotic expressions in probability theory. Proc. 4th Berkeley Symp. Math. Statist. Prob., 153-170.

[4] KIEFER, J. (1961). On large deviations of the empiric D.F. of vector chance variables and a law of iterated logarithm. Pacific Jour. Math. 11, 649-660.

[5] KIEFER, J., AND WOLFOWITZ, J. (1958). On the deviations of the empiric distribution function of vector chance variables. Trans. Amer. Math. Soc., 173-186.

[6] LINNIK, YU. V. (1961). On the probability of large deviations for sums of independent variables. Proc. 4th Berkeley Symp. Math. Statist. Prob., 289-306.

[7] RUBIN, H., AND SETHURAMAN, J. (1965). Probabilities of moderate deviations. Sankhya, Ser. A, 325-356.

[8] STRASSEN, V. (1967). Almost sure behavior of sums of independent random variables and martingales. Proc. Fifth Berkeley Symp. Math. Statist. Prob. 1, 315-343.