Mucci, Anthony (1978). "Inequalities for Power One Tests for Sums of Dependent Variables."

INEQUALITIES FOR POWER ONE TESTS
FOR SUMS OF DEPENDENT VARIABLES
by
Anthony Mucci
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 1183
July 1978
INEQUALITIES FOR POWER ONE TESTS
FOR SUMS OF DEPENDENT VARIABLES
ABSTRACT
Let $S_n$ be the sum of $n$ bounded dependent variables, $L_n$ the sum of squares of these variables, and $\phi$ a positive increasing concave function. We determine explicit and tight upper bounds on the probability that $S_n \ge \phi(L_n)$ under the hypothesis that $S_n$ is a martingale. We further determine explicit upper bounds for the mean intrinsic time for $S_n$ to pass $\phi(L_n)$ under the hypothesis that $S_n$ has positive drift. These inequalities can be used in power one tests for dependent variables in the manner proposed by Darling, Robbins, and Siegmund for independent variables.
AMS Classification: Primary 62F05, 62L99; Secondary 60G45, 60G40
Key Words & Phrases: Power One Tests, Sequential Methods, Martingales
This work was supported in part by Lipids Grant No.
5-T32-HL07005-03.
(I) Introduction
Let $\phi : [0,\infty) \to [0,\infty)$ satisfy

(1.1) $\phi(t) \uparrow \infty$ as $t \uparrow \infty$;

(1.2) $\phi(t)/t \downarrow 0$ as $t \uparrow \infty$;

(1.3) $\phi$ has a continuous derivative $\phi'$ such that $\rho(t) := t\phi'(t)/\phi(t) \uparrow \rho \in [\tfrac{1}{2}, 1)$ as $t \uparrow \infty$;

(1.4) $\phi(t) < t$ and $\phi'(t) \le \phi(t)/t$, all $t \ge t^* > 0$.

Let $(\Omega, B, P)$ be a fixed probability space supporting the adapted sequence of random variables $\{f_n, B_n\}$. We assume throughout that

(1.5) $|f_n| \le 1$, all $n \ge 1$.

We fix some notation:

(1.6) $\displaystyle S_n = \sum_1^n f_k, \qquad T_n = \sum_1^n E\{f_k^2 \mid B_{k-1}\}, \qquad L_n = \sum_1^n f_k^2, \qquad R_n = \sum_1^n E\{f_k \mid B_{k-1}\}.$
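As an aside (not part of the original argument), conditions (1.1)-(1.4) are easy to probe numerically for a concrete boundary. The sketch below does so for a candidate of iterated-logarithm type, $\phi(t) = \sqrt{c\,t\,\log\log t}$; this particular $\phi$, the constant $c$, and the grid are assumptions made here purely for illustration.

```python
import numpy as np

def phi(t, c=3.0):
    # Candidate boundary of iterated-logarithm type (an illustrative assumption,
    # not prescribed by the paper): phi(t) = sqrt(c * t * log log t), for t > e.
    return np.sqrt(c * t * np.log(np.log(t)))

def phi_prime(t, c=3.0, h=1e-4):
    # Numerical derivative of phi.
    return (phi(t + h, c) - phi(t - h, c)) / (2 * h)

t = np.linspace(20.0, 1e6, 200_000)
rho = t * phi_prime(t) / phi(t)            # rho(t) = t phi'(t) / phi(t), as in (1.3)

print("phi increasing:      ", np.all(np.diff(phi(t)) > 0))          # (1.1)
print("phi(t)/t decreasing: ", np.all(np.diff(phi(t) / t) < 0))      # (1.2)
print("rho(t) in [1/2, 1):  ", rho.min() > 0.49, rho.max() < 1.0)    # (1.3), roughly
print("phi'(t) <= phi(t)/t: ", np.all(phi_prime(t) <= phi(t) / t))   # (1.4)
print("phi(t) < t:          ", np.all(phi(t) < t))                   # (1.4)
```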
We fix two hypotheses:

(1.7) $H_0$: $E\{f_n \mid B_{n-1}\} = 0$, all $n \ge 1$; $\qquad H_1$: $E\{f_n \mid B_{n-1}\} \ge \theta\,\sigma^2\{f_n \mid B_{n-1}\}$, all $n \ge 1$, some $\theta > 0$.

We define, for each $\theta > 0$ and small enough, a unique positive value $t_\theta$ where

(1.8) $\displaystyle \frac{\phi(t_\theta)}{t_\theta} = \frac{\theta}{1+\theta}.$
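Again purely for illustration (and with the same assumed candidate boundary as above, which is not prescribed by the paper), $t_\theta$ in (1.8) can be obtained by a one-dimensional root search, since $\phi(t)/t$ is decreasing.

```python
import numpy as np
from scipy.optimize import brentq

def phi(t, c=3.0):
    # Illustrative iterated-logarithm boundary (an assumption, not from the paper).
    return np.sqrt(c * t * np.log(np.log(t)))

def t_theta(theta, lo=20.0, hi=1e12, c=3.0):
    # Solve phi(t)/t = theta/(1+theta) for t, as in (1.8).
    # Since phi(t)/t is decreasing, the root is unique once it is bracketed.
    target = theta / (1.0 + theta)
    return brentq(lambda t: phi(t, c) / t - target, lo, hi)

for theta in (0.5, 0.2, 0.1, 0.05):
    t = t_theta(theta)
    print(f"theta = {theta:5.2f}  ->  t_theta ~ {t:12.1f}   check phi(t)/t = {phi(t)/t:.4f}")
```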
We will prove the following two theorems:
Theorem (1). Under $H_0$, for all $t_0 \ge t^{**}$, where $t^{**} (\ge t^*)$ is determined by $\phi$:

(A) $\displaystyle P\{S_n \ge \phi(T_n), \text{ some } T_n \ge t_0\} \;\le\; 3\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, \exp\Big\{-\Big(1 - \frac{\phi(t)}{t}\Big)\frac{\phi^2(t)}{2t}\Big\}\, dt$;

(B) $\displaystyle P\{S_n \ge \phi(L_n), \text{ some } L_n \ge t_0\} \;\le\; A\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, \exp\Big\{-\Big(1 - \frac{\phi(t)}{t}\Big)\frac{\phi^2(t)}{2t}\Big\}\, dt$, for an absolute constant $A$.
Theorem (2). Assume $H_1$. Define

$$\tau_\theta = \inf\{n : S_n \ge \phi(T_n)\} \qquad\text{and}\qquad \tau_\theta' = \inf\{n : S_n \ge \phi(L_n)\}.$$

Then $\tau_\theta$ and $\tau_\theta'$ are proper, and explicit upper bounds hold for the mean intrinsic times $E\,T_{\tau_\theta}$ and $E\,L_{\tau_\theta'}$; in particular,

(A) $\displaystyle \limsup_{\theta \downarrow 0}\, t_\theta^{-1}\, E\,T_{\tau_\theta} \le 1$, $\qquad$ (B) $\displaystyle \limsup_{\theta \downarrow 0}\, t_\theta^{-1}\, E\,L_{\tau_\theta'} \le 1$.
Remarks

(1.9) These theorems are analogues of results in the literature for sequences of i.i.d. random variables. For instance, when $\{f_n\}$ is i.i.d., with $\int f_1 = 0$, $\int f_1^2 = 1$ and a suitable higher moment of $f_1$ finite, then, modulo some further regularity conditions and assuming

$$\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt < \infty,$$

we have (Darling-Robbins [3]):

$$P\{S_n \ge \phi(n), \text{ some } n \ge t_0\} \;\le\; A\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt, \qquad \text{some positive } A.$$
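As a rough numerical illustration of the size of such rates, the fragment below evaluates the Darling-Robbins integral for the same assumed iterated-logarithm boundary; the boundary, the constant $c$, and the starting points $t_0$ are assumptions of this sketch, not choices made in the paper.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative iterated-logarithm boundary (an assumption, not from the paper):
#   phi(t) = sqrt(c * t * log log t),  c > 2 so the integral converges.
# With the substitution t = e^u, the Darling-Robbins integrand
#   phi(t) t^{-3/2} exp(-phi(t)^2 / (2t)) dt
# reduces, for this particular phi, to  sqrt(c * log u) * u^{-c/2} du.
C = 3.0

def dr_integrand_logspace(u, c=C):
    return np.sqrt(c * np.log(u)) * u ** (-c / 2.0)

def dr_integral(t0, c=C):
    val, _ = quad(dr_integrand_logspace, np.log(t0), np.inf, args=(c,), limit=200)
    return val

for t0 in (1e2, 1e3, 1e4):
    print(f"t0 = {t0:8.0f}   Darling-Robbins tail integral ~ {dr_integral(t0):.3e}")
```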
Strassen [10], Theorem (1.4): for $\rho$ as in (1.3),

$$\lim_{t_0 \uparrow \infty}\; P\{S_n \ge \phi(n), \text{ some } n \ge t_0\} \Big/ \int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt \;=\; c_\rho,$$

where $c_\rho$ is an explicit constant depending only on $\rho$ (it involves $\sqrt{2\pi}$ and $1-\rho$).

Strassen's result establishes that $\int_{t_0}^{\infty} \phi(t)\,t^{-3/2}\, e^{-\phi^2(t)/2t}\, dt$ is the rate at which the probabilities under consideration tend to zero. Darling-Robbins bound all such probabilities by a fixed constant times such rates. As our Theorem (1) reveals, we bound the probabilities of the conditional versions of the crossings $\{S_n \ge \phi(n), \text{ some } n \ge t_0\}$ by a small perturbation of such rates.
Our second theorem establishes that $\limsup_{\theta \downarrow 0}\, t_\theta^{-1}\, E\,T_{\tau_\theta} \le 1$ and gives rates for these lim sups.

If $\{f_n\}$ is i.i.d., with $\int f_1 = 0$, $\int f_1^2 = 1$ and with other regularity conditions, very general but too numerous to mention here, then, for $t_\theta$ defined through $\phi(t_\theta)/t_\theta = \theta$ and for $T_\theta$ defined through

(*) $\qquad T_\theta = \inf\{n : S_n + n\theta \ge \phi(n)\},$

Lai [7], Corollary (1), gives the precise asymptotic behaviour of $E\,T_\theta$ as $\theta \downarrow 0$. Our second theorem is the closest we've been able to come to a conditional analogue of Lai's result. We mention here that Gut [6] has produced limit theorems related to (*) for dependent sums.
(1.10) Consider again the hypotheses of (1.7): $H_0$: $E\{f_n \mid B_{n-1}\} = 0$, all $n$, versus $H_1$. One statistical test of these hypotheses consists in deciding for $H_1$ if $S_n \ge \phi(L_n)$ for some $L_n \ge t_0$. Using our theorems, and assuming $t_0$ large and $\theta$ small, we have a small probability of ever deciding for $H_1$ when $H_0$ obtains, and a finite expected decision time when $H_1$ obtains. Darling-Robbins [2], [3] have considered such "tests of power one" for sums of i.i.d. variables. There has been subsequent work in this area, [8], [9], with the focus on i.i.d.'s also. Our theorems are an attempt at extending such tests to dependent sums.
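The decision rule just described is straightforward to prototype. The sketch below applies it to simulated bounded increments; the boundary $\phi$, the size of the conditional drift used to mimic $H_1$, and all other parameter values are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np

def phi(t, c=3.0):
    # Illustrative iterated-logarithm boundary (an assumption, not from the paper).
    return np.sqrt(c * t * np.log(np.log(t)))

def power_one_test(f, t0=50.0):
    """Decide for H1 at the first n with S_n >= phi(L_n) and L_n >= t0.

    f : sequence of bounded increments (|f_n| <= 1).
    Returns the deciding index n, or None if the boundary is never crossed.
    """
    f = np.asarray(f, dtype=float)
    S = np.cumsum(f)
    L = np.cumsum(f ** 2)                      # intrinsic time L_n = sum of f_k^2
    ok = L >= t0                               # only test once L_n >= t0
    crossed = np.zeros_like(ok)
    crossed[ok] = S[ok] >= phi(L[ok])
    hit = np.nonzero(crossed)[0]
    return int(hit[0]) + 1 if hit.size else None

rng = np.random.default_rng(0)
n_max = 200_000

# Under H0: mean-zero bounded increments (a martingale-difference sequence).
f_null = rng.uniform(-1.0, 1.0, size=n_max)
# Under H1: the same increments given a small positive conditional drift.
f_alt = np.clip(f_null + 0.05, -1.0, 1.0)

print("decision time under H0:", power_one_test(f_null))   # typically None
print("decision time under H1:", power_one_test(f_alt))    # typically finite and small
```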
This extension does not seem statistically unrealistic.
Consider the following contexts:
(A) A drug level, $d_n$, is administered at time $n$ to a subject whose condition, $C_{n-1}$, depends on past conditions and dosage levels received. The objective of the treatment is the stabilization of the subject's response, where this response, $f_n$, is a random function of $C_{n-1}$ and $d_n$, and where stabilization is interpreted as $E\{f_n \mid B_{n-1}\} = 0$.

(B) We wish to test, through discrete sampling, if a diffusion $Z_t$ has positive drift. Under reasonable assumptions on the variance coefficient, and with $f_n = Z_n - Z_{n-1}$ suitably truncated, the test of $H_0$ vs $H_1$ as defined above will constitute a test of power one.

(C) To decide, as quickly as possible, that a game is not fair, i.e., that the win or loss increment, $f_n$, obtained at time $n$ by a gambling house, favors the house in the sense that $H_1$ obtains.
(1.11) Under the mild hypothesis $\int f_n^2 \ge a > 0$, the upper bounds on the mean intrinsic times in Theorem (2) translate into upper bounds on the expected (calendar) times $E\,\tau_\theta$ and $E\,\tau_\theta'$.
(1.12) The proofs of Theorems (1) and (2) are almost entirely dependent on the use of three supermartingales devised by Freedman [4], [5]. We introduce these here.

(a) Let $\{g_n, B_n\}$ be adapted, with $0 \le g_n \le 1$. Put

$$C_n = \sum_1^n g_k, \qquad M_n = \sum_1^n E\{g_k \mid B_{k-1}\}, \qquad G(\lambda) = e^{\lambda} - 1, \qquad F(\lambda) = 1 - e^{-\lambda}.$$

Then

$$e^{\lambda C_n - G(\lambda) M_n} \qquad\text{and}\qquad e^{F(\lambda) M_n - \lambda C_n}$$

are supermartingales.

(b) Let $K(\lambda) = e^{\lambda} - 1 - \lambda$. Then, with (1.5), the notation (1.6), and under $H_0$,

$$e^{\lambda S_n - K(\lambda) T_n}$$

is a supermartingale.
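As a quick sanity check (again not part of the paper), the simulation below examines the one-step inequality behind the supermartingale in (b), namely $E\{e^{\lambda f}\} \le e^{K(\lambda)\,E\{f^2\}}$ for a bounded, mean-zero increment $f$ and $\lambda \ge 0$; the two increment distributions and the values of $\lambda$ are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def K(lam):
    # K(lambda) = e^lambda - 1 - lambda, as in Freedman's supermartingale (b).
    return np.exp(lam) - 1.0 - lam

def one_step_check(sample, lam, reps=1_000_000):
    """Estimate E[exp(lam * f)] against exp(K(lam) * E[f^2]) for one increment f.

    The supermartingale property of exp(lam*S_n - K(lam)*T_n) follows from the
    one-step inequality  E[e^{lam f} | past] <= e^{K(lam) E[f^2 | past]}
    for a bounded increment with zero conditional mean and lam >= 0.
    """
    f = sample(reps)
    lhs = np.exp(lam * f).mean()
    rhs = np.exp(K(lam) * np.mean(f ** 2))
    return lhs, rhs

# Two illustrative mean-zero increments with |f| <= 1 (assumptions of this sketch).
uniform = lambda m: rng.uniform(-1.0, 1.0, size=m)
skewed  = lambda m: rng.choice([1.0, -0.5], size=m, p=[1/3, 2/3])   # mean 0, |f| <= 1

for lam in (0.25, 0.5, 1.0):
    for name, sample in [("uniform", uniform), ("skewed", skewed)]:
        lhs, rhs = one_step_check(sample, lam)
        print(f"lambda={lam:4.2f}  {name:7s}  E e^(lam f) ~ {lhs:.4f}  <=  e^(K E f^2) ~ {rhs:.4f}")
```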
Proposition (1.1). Let $\{g_n, B_n\}$ be adapted with $0 \le g_n \le 1$, let $C_n$, $M_n$ be as in (1.12a), and let $C_\infty = \sum_1^\infty g_k$. For all $\theta \ge 0$, all $t \ge 0$, and provided $C_\infty = \infty$ a.e.:

(1) $P\{M_n \ge (1+\theta)C_n, \text{ some } C_n \ge t\} \;\le\; e^{-\theta^2 t / 2(1+\theta)}$;

(2) $P\{M_n \le \frac{1}{1+\theta}C_n, \text{ some } C_n \ge t\} \;\le\; e^{-\theta^2 t / 2(1+\theta)^2}$;

(3) $P\{M_n \notin [\frac{1}{1+\theta}C_n,\,(1+\theta)C_n], \text{ some } C_n \ge t\} \;\le\; 2\,e^{-\theta^2 t / 2(1+\theta)^2}$.

Proof: Let $0 \le t < \beta$ and define

$$\tau_1 = \inf\{k : C_k \ge t\}, \qquad \tau_2 = \inf\{k : C_k \ge \beta\},$$

and let $\tau$ be the first $k \in [\tau_1, \tau_2)$ with $M_k \ge (1+\theta)C_k$, if such a $k$ occurs, and $\tau_2$ if no such $k$ occurs. Then

$$1 \;\ge\; \int_{\{M_\tau \ge (1+\theta)C_\tau\}} e^{F(\lambda)M_\tau - \lambda C_\tau} \;\ge\; \int_{\{M_\tau \ge (1+\theta)C_\tau\}} e^{(F(\lambda)(1+\theta) - \lambda)C_\tau} \;\ge\; e^{(F(\lambda)(1+\theta) - \lambda)t}\, P\{M_\tau \ge (1+\theta)C_\tau\},$$

provided $F(\lambda)/\lambda \ge 1/(1+\theta)$. Under these circumstances, letting $\beta \uparrow \infty$:

$$P\{M_n \ge (1+\theta)C_n, \text{ some } C_n \ge t\} \;\le\; \inf_\lambda e^{(\lambda - F(\lambda)(1+\theta))t}.$$

Now the exponent minimizes for $\lambda = \ln(1+\theta)$; we can use this value since then $F(\lambda)/\lambda = \theta / ((1+\theta)\ln(1+\theta)) \ge 1/(1+\theta)$. Using $\ln(1+\theta) - \theta \le -\theta^2/2(1+\theta)$, we have our first result.

For the second part, let $0 \le t < \beta$ as before, let $\tau_1$, $\tau_2$ be as before, and define $\tau$ to be the first $k \in [\tau_1, \tau_2)$ with $M_k \le \frac{1}{1+\theta}C_k$, if such a $k$ occurs, and $\tau_2$ otherwise. Then

$$1 \;\ge\; \int_{\{M_\tau \le \frac{1}{1+\theta}C_\tau\}} e^{\lambda C_\tau - G(\lambda)M_\tau} \;\ge\; \int_{\{M_\tau \le \frac{1}{1+\theta}C_\tau\}} e^{(\lambda - \frac{1}{1+\theta}G(\lambda))C_\tau} \;\ge\; e^{(\lambda - \frac{1}{1+\theta}G(\lambda))t}\, P\{M_\tau \le \tfrac{1}{1+\theta}C_\tau\},$$

provided $\lambda \ge \frac{1}{1+\theta}G(\lambda)$. Letting $\beta \uparrow \infty$ we have

$$P\{M_n \le \tfrac{1}{1+\theta}C_n, \text{ some } C_n \ge t\} \;\le\; \inf_\lambda e^{(\frac{1}{1+\theta}G(\lambda) - \lambda)t},$$

where $\lambda$ is subject to $\lambda \ge \frac{1}{1+\theta}G(\lambda)$. The best choice is $\lambda = \ln(1+\theta)$, which satisfies this constraint. Using $\ln(1+\theta) - \frac{\theta}{1+\theta} \ge \frac{\theta^2}{2(1+\theta)^2}$, we have the second case. From these two cases the third case follows straightforwardly. Q.E.D.
Proposition (1.2). Let $\{f_n, B_n\}$ satisfy (1.5) and $\sum_1^\infty f_n^2 = \infty$ a.e. Then, with $L_n$ and $T_n$ as in (1.6), for all $\theta \ge 0$ and all $t \ge 0$,

$$P\{T_n \notin [\tfrac{1}{1+\theta}L_n,\,(1+\theta)L_n], \text{ some } L_n \ge t\} \;\le\; 2\,e^{-\theta^2 t / 2(1+\theta)^2}.$$

Proof: Let $g_n = f_n^2$, so that $C_n = L_n$ and $M_n = T_n$, and use the previous proposition. We've replaced the exponent $\theta^2 t/2(1+\theta)$ by $\theta^2 t/2(1+\theta)^2$ for later technical reasons. Q.E.D.
(II) Proof of Theorem (1)

Fix $0 < t_0 < t_1$. Define

$$\tau_1 = \inf\{n : T_n \ge t_0\}, \qquad \tau_2 = \inf\{n : T_n \ge t_1\},$$

and let $\tau$ be the first $n \in [\tau_1, \tau_2)$ with $S_n \ge \phi(T_n)$, if such an $n$ occurs, and $\tau_2$ otherwise. Then, from Optional Stopping,

$$1 \;\ge\; \int_{\{S_\tau \ge \phi(T_\tau)\}} e^{\lambda S_\tau - K(\lambda)T_\tau},$$

and hence, writing $P_0 = P\{S_n \ge \phi(T_n), \text{ some } n \text{ with } T_n \in [t_0, t_1)\}$, we have

$$P_0 \;\le\; \inf_{\lambda \ge 0} e^{K(\lambda)t_1 - \lambda\phi(t_0)}.$$

The infimum occurs for $\lambda = \ln\{1 + \phi(t_0)/t_1\}$. Using

$$(1+x)\ln(1+x) \;\ge\; x + \frac{x^2}{2} - \frac{x^3}{6}, \qquad x \ge 0,$$

we have

$$P_0 \;\le\; \exp\Big\{-\frac{\phi^2(t_0)}{2t_1} + \frac{\phi^3(t_0)}{6t_1^2}\Big\}.$$

From the Mean Value Theorem and $\phi'(t) \le \phi(t)/t$, the right side may be compared with $\exp\{-(1 - \phi(t_1)/t_1)\,\phi^2(t_1)/2t_1\}$, at the cost of factors controlled by $t_1 - t_0$ and $\phi(t_1)/t_1$. Suppose we now repeat this argument over intervals $[t_{n-1}, t_n]$, where $t_n \uparrow \infty$ and the increments $t_n - t_{n-1}$ are chosen small enough for this comparison; we obtain bounds $P_{n-1}$ for the crossing probabilities over the successive intervals. Thus, dominating the sum of these bounds by an integral,

$$P\{S_n \ge \phi(T_n), \text{ some } T_n \ge t_0\} \;\le\; 3\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, \exp\Big\{-\Big(1 - \frac{\phi(t)}{t}\Big)\frac{\phi^2(t)}{2t}\Big\}\, dt,$$

which is (1A).

Next, for $\theta$ momentarily unspecified, and using Proposition (1.2),

$$P\{S_n \ge \phi(L_n), \text{ some } L_n \ge t_0\} \;\le\; 2e^{-\theta^2 t_0 / 2(1+\theta)^2} + P\Big\{S_n \ge \phi_0(T_n), \text{ some } T_n \ge \frac{t_0}{1+\theta}\Big\},$$

where $\phi_0(t) := \phi(t/(1+\theta)) \ge \frac{1}{1+\theta}\phi(t)$. Applying (1A) with the boundary $\phi_0$, substituting $(1+\theta)t$ for $t$ in the resulting integral, and noting that the integral decreases in $t_0$, we have

$$P\{S_n \ge \phi(L_n), \text{ some } L_n \ge t_0\} \;\le\; 2e^{-\theta^2 t_0 / 2(1+\theta)^2} + 3(1+\theta)\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, \exp\Big\{-\Big(1 - \frac{\phi(t)}{t}\Big)\frac{\phi^2(t)}{2t}\Big\}\, dt.$$

Our choice of $\theta$ will be made so that, as $t_0 \uparrow \infty$, the exponential term is itself dominated by a constant multiple of the integral, so that the right side in the inequality above is dominated by the integral multiplied by some constant. We reason as follows. Letting $x = \phi(t)/\sqrt{t}$, we have

$$\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt \;\ge\; \int_{\phi(t_0)/\sqrt{t_0}}^{\infty} e^{-x^2/2}\, dx\,;$$

this inequality follows from the conditions on $\phi$. By a classical inequality,

$$\int_x^\infty e^{-u^2/2}\,du \;\ge\; \frac{x}{1+x^2}\, e^{-x^2/2},$$

so that, for $\phi(t_0)/\sqrt{t_0} \ge 1$,

$$\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt \;\ge\; \frac{\sqrt{t_0}}{2\phi(t_0)}\, e^{-\phi^2(t_0)/2t_0}.$$

But then, choosing $\theta = \theta(t_0)$ so that $\theta^2 t_0/2(1+\theta)^2$ exceeds $\phi^2(t_0)/2t_0$ by a suitable margin, we have

$$e^{-\theta^2 t_0 / 2(1+\theta)^2}\Big/\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, e^{-\phi^2(t)/2t}\, dt \;\to\; 0 \qquad\text{as } t_0 \to \infty.$$

Staying with this value of $\theta$, we have

$$P\{S_n \ge \phi(L_n), \text{ some } L_n \ge t_0\} \;\le\; 2e^{-\theta^2 t_0 / 2(1+\theta)^2} + 3(1+\theta)\int_{t_0}^{\infty} \frac{\phi(t)}{t^{3/2}}\, \exp\Big\{-\Big(1 - \frac{\phi(t)}{t}\Big)\frac{\phi^2(t)}{2t}\Big\}\, dt,$$

from which Theorem (1B) follows. Q.E.D.
(III) Proof of Theorem (2)
Lemma. Under $H_1$,

$$P\{S_n \ge \phi(T_n) \text{ eventually}\} = 1 \qquad\text{and}\qquad P\{S_n \ge \phi(L_n) \text{ eventually}\} = 1.$$

Proof of Lemma. A special case of Brown [1], Theorem (1), shows that, under conditions which clearly hold in the present context, $S_n/R_n \to 1$ a.e. Given this result and the hypotheses on the function $\phi$, we have, for all $k > 0$, eventually on a set of measure one,

$$S_n \;\ge\; \tfrac{1}{2}R_n \;\ge\; \frac{\theta}{2(1+\theta)}\,T_n \;\ge\; k ;$$

since $\phi(t)/t \downarrow 0$, it follows that $S_n \ge \phi(T_n)$ eventually. Next, since, from Freedman [5], $L_n/T_n \to 1$ a.e., and since $\phi(st)/\phi(t) \to 1$ as $s \to 1$, $t \uparrow \infty$, we have $S_n \ge \phi(L_n)$ eventually. Q.E.D.
Now define

$$\tilde f_n = \frac{E\{f_n \mid B_{n-1}\} - f_n}{2}, \qquad \tilde S_n = \sum_1^n \tilde f_k, \qquad \tilde T_n = \sum_1^n E\{\tilde f_k^2 \mid B_{k-1}\}.$$

Then $e^{\lambda \tilde S_n - K(\lambda)\tilde T_n}$ is a positive supermartingale. Define $\tau = \inf\{n \ge 0 : S_n \ge \phi(T_n)\}$. Our lemma tells us that $\tau$ is proper, so that

$$1 \;\ge\; \int_{\{T_\tau \ge t\}} e^{\lambda \tilde S_\tau - K(\lambda)\tilde T_\tau} \;\ge\; \int_{\{T_\tau \ge t\}} e^{\frac{\lambda}{2}(R_\tau - S_\tau) - \frac{K(\lambda)}{4}T_\tau} \;\ge\; \int_{\{T_\tau \ge t\}} \exp\Big\{\Big[\frac{\lambda}{2}\,\frac{\theta}{1+\theta} - \frac{\lambda}{2}\,\frac{\phi(T_\tau)+1}{T_\tau} - \frac{K(\lambda)}{4}\Big]T_\tau\Big\},$$

since, under $H_1$, $R_\tau \ge \frac{\theta}{1+\theta}T_\tau$, while $S_\tau \le \phi(T_\tau) + 1$ by (1.5). Suppose

(*) $\qquad \displaystyle \frac{K(\lambda)}{2\lambda} \;\le\; \frac{\theta}{1+\theta} - \frac{\phi(t)+1}{t}.$

Then the integrand increases in $T_\tau$, so that

(**) $\qquad \displaystyle P\{T_\tau \ge t\} \;\le\; \exp\Big\{\frac{K(\lambda)}{4}t + \frac{\phi(t)+1}{2}\lambda - \frac{\theta t \lambda}{2(1+\theta)}\Big\}.$

It is easily seen that the right side minimizes for

$$\lambda_0 \;=\; \ln\Big\{1 + 2\Big(\frac{\theta}{1+\theta} - \frac{\phi(t)+1}{t}\Big)\Big\}.$$

We have this $\lambda_0$ available to us if it satisfies (*); this follows immediately from $\lambda(e^\lambda - 1) \ge K(\lambda)$, valid for all $\lambda \ge 0$. Let us use $\lambda_0$ in (**) and also

$$(1+x)\ln(1+x) \;\ge\; x + \frac{x^2}{2(1+x)}, \qquad \text{all } x \ge 0,$$

to arrive at

$$P\{T_\tau \ge t\} \;\le\; \exp\Big[-\,\frac{t\,\big(\frac{\theta}{1+\theta} - \frac{\phi(t)+1}{t}\big)^2}{2\big(1 + 2\big(\frac{\theta}{1+\theta} - \frac{\phi(t)+1}{t}\big)\big)}\Big].$$

Defining $t_\theta$ through $\phi(t_\theta)/t_\theta = \theta/(1+\theta)$, as in (1.8), we have

$$E\,T_\tau \;=\; \int_0^\infty P\{T_\tau \ge t\}\,dt \;\le\; t_\theta + \int_{t_\theta}^\infty P\{T_\tau \ge t\}\,dt.$$

For the remaining integral, using the Mean Value Theorem on the exponent (note that $\frac{d}{dt}\frac{\phi(t)}{t} = -(1-\rho(t))\frac{\phi(t)}{t^2}$) and the substitution $t = (1+z)t_\theta$, one is led to Gaussian-type integrals, and obtains

$$\frac{1}{t_\theta}\,E\,T_\tau \;\le\; 1 + \frac{C(t_\theta)}{(1-\rho(t_\theta))^{2}},$$

where $C(t_\theta)$ is explicit (it involves $\sqrt{t_\theta}/\phi(t_\theta)$); this is the bound announced in Theorem (2A).
We turn next to the consideration of upper bounds for $E\,L_{\tau'}$, where $\tau' = \inf\{n : S_n \ge \phi(L_n)\}$. Arguing as before,

$$1 \;\ge\; \int_{\{L_{\tau'} \ge t\}\,\cap\,\{T_{\tau'} \in [(1-\varepsilon)L_{\tau'},\,(1+\varepsilon)L_{\tau'}]\}} e^{\frac{\lambda}{2}(R_{\tau'} - S_{\tau'}) - \frac{K(\lambda)}{4}T_{\tau'}}.$$

For notational convenience, let the set in the integral above be denoted $A(t, \varepsilon)$. Continuing to mimic the previous argument, we choose

$$\lambda \;=\; \ln\Big\{1 + \frac{2}{1+\varepsilon}\Big(\frac{\theta(1-\varepsilon)}{1+\theta} - \frac{\phi(t)+1}{t}\Big)\Big\},$$

and obtain an exponential bound for $P\{A(t,\varepsilon)\}$; defining $t_\varepsilon$ through $\frac{\phi(t_\varepsilon)+1}{t_\varepsilon} = \frac{\theta(1-\varepsilon)}{1+\theta}$, this bound holds for $t \ge t_\varepsilon$ and integrates to an explicit quantity involving $\varepsilon$ and $(1-\rho(t_\varepsilon))^{-2}$.

Next, for $\varepsilon \le \tfrac{1}{2}$, and using Proposition (1.2), the probability that $T_{\tau'} \notin [(1-\varepsilon)L_{\tau'},\,(1+\varepsilon)L_{\tau'}]$ while $L_{\tau'} \ge t$ is also exponentially small in $t$. Combining the two estimates, replacing $\varepsilon$ with $(1-\rho(t_\theta))\varepsilon$, and finally setting $\varepsilon$ appropriately, we arrive at an explicit bound of the form

$$\frac{1}{t_\theta}\int_{t_\theta}^{\infty} P\{L_{\tau'} \ge t\}\,dt \;\le\; \frac{C'(t_\theta)}{(1-\rho(t_\theta))^{2}}\,\Big\{1 + \big[\,8\sqrt{3\pi} + 12\,\big]\frac{\sqrt{t_\theta}}{\phi(t_\theta)}\Big\},$$

where $C'(t_\theta)$ collects explicit terms involving $\phi^2(t_\theta)/t_\theta$ and its reciprocal. Together with the bound obtained for $E\,T_\tau$, this yields Theorem (2B) and completes the proof. Q.E.D.
REFERENCES

[1] Brown, B.M. (1971). A Conditional Setting for Some Theorems Associated with the Strong Law. Zeit. Wahr.

[2] Darling, D.A., Robbins, H. (1967). Iterated Logarithm Inequalities. Proc. N.A.S., Vol. 57, No. 5, 1188-1192.

[3] Darling, D.A., Robbins, H. (1968). Some Further Remarks on Inequalities for Sample Sums. Proc. N.A.S., Vol. 60, No. 4, 1175-1182.

[4] Freedman, D.A. (1975). On Tail Probabilities for Martingales. Annals of Prob., Vol. 3, No. 1, 100-118.

[5] Freedman, D.A. (1973). Another Note on the Borel-Cantelli Lemma and the Strong Law, with the Poisson Approximation as a By-Product. Annals of Prob., Vol. 1, No. 6, 910-925.

[6] Gut, A. (1974). On the Moments of Some First Passage Times for Sums of Dependent Random Variables. Stochastic Processes and Their Applications, Vol. 2, No. 1, 115-126.

[7] Lai, T.L. (1977). Power One Tests Based on Sample Sums. Annals of Stat., Vol. 5, No. 5, 866-880.

[8] Lai, T.L. (1976). Boundary Crossing Probabilities for Sample Sums and Confidence Sequences. Annals of Prob., Vol. 4, No. 2, 299-312.

[9] Robbins, H., Siegmund, D. (1968). Iterated Logarithm Inequalities and Related Statistical Procedures. Mathematics of the Decision Sciences, A.M.S., 267-282.

[10] Strassen, V. (1965). Almost Sure Behavior of Sums of Independent Random Variables and Martingales. Fifth Berkeley Symposium on Mathematical Statistics and Probability, 315-344.