
ACTA MATHEMATICA VIETNAMICA
Volume 30, Number 3, 2005, pp. 225-232
STRONG LAW OF LARGE NUMBERS AND $L_p$-CONVERGENCE
FOR DOUBLE ARRAYS OF INDEPENDENT RANDOM VARIABLES

LE VAN THANH
Abstract. For a double array of independent random variables $\{X_{mn}, m \ge 1, n \ge 1\}$, a strong law of large numbers and the $L_p$-convergence are established for the double sums $\sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij}$, $m \ge 1$, $n \ge 1$.
1. Introduction and notations
Pyke and Root [9] proved that if $\{X_n, n \ge 1\}$ is a sequence of independent
identically distributed random variables with $E|X_1|^p < \infty$ $(1 \le p < 2)$, then
$$
\frac{E\left|\sum_{i=1}^{n} X_i - nEX_1\right|^p}{n} \to 0 \quad \text{as } n \to \infty.
$$
By using an inequality due to von Bahr and Esseen [1], Chatterji [3] extended
the result of Pyke and Root [9] to the case where {Xn , n ≥ 1} is dominated in
distribution by a random variable X with E|X|p < ∞ (1 ≤ p < 2). Later, using
Burkholder’s inequality (see [2]), Chow [4] strengthened the result of Chatterji
[3] by relaxing the domination condition of [3] to uniform integrability.
The aim of this paper is to establish a version of the strong law of large numbers
and the Lp -convergence for double arrays of independent random variables. From
this, we obtain the result of Gut [6, Theorem 3.2]. We also generalize an earlier
result of Smythe [11] for arrays of independent identically distributed random
variables.
Let $\{X_{mn}, m \ge 1, n \ge 1\}$ be an array of independent random variables. Our
main result provides conditions for
$$
\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij}}{m^{\alpha} n^{\beta}} \to 0 \quad \text{almost surely (a.s.) and in } L_p \text{ as } \max\{m, n\} \to \infty,
$$
where $\alpha > 0$, $\beta > 0$.
Received August 2, 2004.
Key words and phrases. Double array of independent random variables, double sums, strong
law of large numbers, almost sure convergence, Lp -convergence, dominated in distribution.
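To give a concrete feel for these normalized double sums, here is a brief numerical sketch (not part of the original paper; the choice of i.i.d. standard normal summands, the exponents $\alpha = \beta = 1$, and all variable names are assumptions made only for illustration):

```python
import numpy as np

# Monte Carlo sketch (illustrative only): the normalized double sum
# S_mn / (m^alpha n^beta) with alpha = beta = 1 shrinks as max{m, n} grows,
# here for i.i.d. standard normal X_ij.
rng = np.random.default_rng(0)

N = 2048                                 # largest index in each direction
X = rng.standard_normal((N, N))
S = X.cumsum(axis=0).cumsum(axis=1)      # S[m-1, n-1] = sum_{i<=m, j<=n} X_ij

for m, n in [(8, 8), (64, 32), (512, 512), (2048, 1024)]:
    print(f"m={m:4d} n={n:4d}  |S_mn|/(m n) = {abs(S[m-1, n-1]) / (m * n):.5f}")
```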
Random variables $\{X_{mn}, m \ge 1, n \ge 1\}$ are said to be dominated in distribution
by a random variable $X$ if for some constant $C$ it holds that
$$
P\{|X_{mn}| > t\} \le C P\{|X| > t\}, \quad t \ge 0, \ m \ge 1, \ n \ge 1.
$$
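For instance (an illustrative observation added here, not taken from the original text), an identically distributed array satisfies this condition trivially:

```latex
% Illustration (not in the original paper): if every X_{mn} has the same
% distribution as X, then for all t >= 0, m >= 1, n >= 1,
\[
P\{|X_{mn}| > t\} = P\{|X| > t\} \le 1 \cdot P\{|X| > t\},
\]
% so an identically distributed array is dominated in distribution by X with C = 1.
```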
For $a, b \in \mathbb{R}$, $\min\{a, b\}$ and $\max\{a, b\}$ will be denoted, respectively, by $a \wedge b$
and $a \vee b$. The number of divisors of a positive integer $k$ will be denoted by
$d_k$. Throughout this paper, the symbol $C$ will denote a generic positive constant
which is not necessarily the same in each appearance. The logarithms are to
base 2.
2. Main results
We now present some lemmas which will be needed in the sequel.
Lemma 2.1. Let $\{X_{mn}, m \ge 1, n \ge 1\}$ be a double array of random variables. If
$$
(1) \qquad \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} E|X_{mn}|^p < \infty \quad \text{for some } p > 0,
$$
then
$$
(2) \qquad X_{mn} \to 0 \ \text{a.s. and in } L_p \ \text{as } m \vee n \to \infty.
$$
Proof. The $L_p$-convergence follows immediately from (1). For an arbitrary $\varepsilon > 0$
and for all $k \ge 1$,
$$
P\Big\{\sup_{m \vee n \ge k} |X_{mn}| > \varepsilon\Big\}
\le \sum_{m \vee n \ge k} P\{|X_{mn}| > \varepsilon\}
\le \frac{1}{\varepsilon^p} \sum_{m \vee n \ge k} E|X_{mn}|^p \quad \text{(by Markov's inequality)}
\to 0 \ \text{as } k \to \infty \quad \text{(by (1))}.
$$
This proves the almost sure convergence.
Lemma 2.2. If $\{X_{kl}, \mathcal{F}_l, l \ge 1\}$, $k = 1, 2, \dots, m$, are nonnegative submartingales,
then $\{\max_{1 \le k \le m} X_{kl}, \mathcal{F}_l, l \ge 1\}$ is a nonnegative submartingale.

Proof. For $L > l \ge 1$,
$$
E\Big(\max_{1 \le k \le m} X_{kL} \,\Big|\, \mathcal{F}_l\Big) \ge \max_{1 \le k \le m} E(X_{kL} \mid \mathcal{F}_l) \ge \max_{1 \le k \le m} X_{kl}.
$$
The next lemma is due to von Bahr and Esseen [1].
Lemma 2.3. Let $\{X_i, 1 \le i \le n\}$ be random variables such that $E\{X_{k+1} \mid S_k\} = 0$
for $0 \le k \le n-1$, where $S_0 = 0$ and $S_k = \sum_{i=1}^{k} X_i$ for $1 \le k \le n$. Then
$$
E|S_n|^p \le 2 \sum_{i=1}^{n} E|X_i|^p \quad \text{for all } 1 \le p \le 2.
$$
Note that Lemma 2.3 holds when {Xi , 1 ≤ i ≤ n} are independent random
variables with EXi = 0 for 1 ≤ i ≤ n.
Lemma 2.4. Let $\{X_{ij}, 1 \le i \le m, 1 \le j \le n\}$ be a collection of $mn$ independent
random variables. If $EX_{ij} = 0$ for all $1 \le i \le m$, $1 \le j \le n$, then
$$
(3) \qquad E \max_{1 \le k \le m, 1 \le l \le n} |S_{kl}|^p \le C \sum_{i=1}^{m}\sum_{j=1}^{n} E|X_{ij}|^p \quad \text{for all } 0 < p \le 2,
$$
where $S_{kl} = \sum_{i=1}^{k}\sum_{j=1}^{l} X_{ij}$; the constant $C$ is independent of $m$ and $n$. In the case
$0 < p \le 1$, the independence hypothesis and the hypothesis that $EX_{ij} = 0$, $1 \le i \le m$, $1 \le j \le n$, are superfluous.
Proof. If $E|X_{ij}|^p = \infty$ for some $1 \le i \le m$ and $1 \le j \le n$, then (3) is immediate.
Thus, we can assume that $E|X_{ij}|^p < \infty$, $1 \le i \le m$, $1 \le j \le n$.

First, suppose that $1 < p \le 2$ and $m \wedge n \ge 2$. Set
$$
Y_l = \max_{1 \le k \le m} |S_{kl}| \quad \text{and} \quad \mathcal{F}_l = \sigma(X_{ij}, 1 \le i \le m, 1 \le j \le l), \quad 1 \le l \le n.
$$
For each $1 \le k \le m$ and $2 \le l \le n$, we have
$$
E(S_{kl} \mid \mathcal{F}_{l-1}) = E(S_{k,l-1} + X_{1l} + \cdots + X_{kl} \mid \mathcal{F}_{l-1})
= E(S_{k,l-1} \mid \mathcal{F}_{l-1}) + E(X_{1l} \mid \mathcal{F}_{l-1}) + \cdots + E(X_{kl} \mid \mathcal{F}_{l-1})
= S_{k,l-1} \quad \text{a.s.}
$$
So $\{S_{kl}, \mathcal{F}_l, 1 \le l \le n\}$ is a martingale for each $k = 1, \dots, m$. As in Scalora [10],
$\{|S_{kl}|, \mathcal{F}_l, 1 \le l \le n\}$ is a nonnegative submartingale for each $k = 1, 2, \dots, m$.
Then, by Lemma 2.2, $\{Y_l, \mathcal{F}_l, 1 \le l \le n\}$ is a nonnegative submartingale. By
Doob's inequality (see, e.g., Chow and Teicher [5], p. 255),
$$
(4) \qquad E \max_{1 \le k \le m, 1 \le l \le n} |S_{kl}|^p = E \max_{1 \le l \le n} Y_l^p \le \Big(\frac{p}{p-1}\Big)^p E Y_n^p.
$$
Set $\mathcal{G}_k = \sigma(X_{ij}, 1 \le i \le k, 1 \le j \le n)$, $1 \le k \le m$. Since $\{|S_{kn}|, \mathcal{G}_k, 1 \le k \le m\}$
is a submartingale, applying Doob's inequality once more, we have
$$
(5) \qquad E Y_n^p = E\Big(\max_{1 \le k \le m} |S_{kn}|\Big)^p \le \Big(\frac{p}{p-1}\Big)^p E|S_{mn}|^p \le 2\Big(\frac{p}{p-1}\Big)^p \sum_{i=1}^{m}\sum_{j=1}^{n} E|X_{ij}|^p \quad \text{(by Lemma 2.3)}.
$$
The conclusion (3) follows immediately from (4) and (5).

Next, if $1 < p \le 2$ and $m \wedge n = 1$, then (3) is obtained similarly as in the case
$m \wedge n \ge 2$.
Finally, if $0 < p \le 1$, then we have
$$
E \max_{1 \le k \le m, 1 \le l \le n} |S_{kl}|^p
\le E\Big(\max_{1 \le k \le m, 1 \le l \le n} \sum_{i=1}^{k}\sum_{j=1}^{l} |X_{ij}|^p\Big)
= E\Big(\sum_{i=1}^{m}\sum_{j=1}^{n} |X_{ij}|^p\Big)
= \sum_{i=1}^{m}\sum_{j=1}^{n} E|X_{ij}|^p,
$$
which establishes (3).
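For orientation (this is only an observation read off from the argument above, with no claim of optimality), tracking the constants through (4), (5) and the case $0 < p \le 1$ gives an explicit admissible value of $C$ in (3):

```latex
% Constant implied by the proof of Lemma 2.4 (illustrative, not claimed optimal):
\[
C =
\begin{cases}
1, & 0 < p \le 1,\\[2pt]
2\left(\dfrac{p}{p-1}\right)^{2p}, & 1 < p \le 2,
\end{cases}
\]
% obtained by substituting (5) into (4) when 1 < p <= 2, and from the
% subadditivity bound used in the case 0 < p <= 1.
```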
Lemma 2.5. Let $\{X_{mn}, m \ge 1, n \ge 1\}$ be a double array of random variables.
Suppose that $\{X_{mn}, m \ge 1, n \ge 1\}$ is dominated in distribution by a random
variable $X$. If
$$
(6) \qquad E\big(|X|^p \log^+ |X|\big) < \infty \quad \text{for some } p > 0,
$$
then

(i) $\displaystyle\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E\big(|X_{mn}|^q I(|X_{mn}| \le (mn)^{1/p})\big)}{(mn)^{q/p}} < \infty$ for all $q > p$,

(ii) $\displaystyle\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E\big(|X_{mn}|^r I(|X_{mn}| > (mn)^{1/p})\big)}{(mn)^{r/p}} < \infty$ for all $0 < r < p$.
Proof. Let $F$ be the distribution function of $X$. By using the fact that
$$
\sum_{k=j}^{\infty} \frac{d_k}{k^{q/p}} = O\Big(\frac{\log j}{j^{q/p-1}}\Big),
$$
we obtain
$$
\begin{aligned}
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E\big(|X_{mn}|^q I(|X_{mn}| \le (mn)^{1/p})\big)}{(mn)^{q/p}}
&\le C \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{(mn)^{q/p}} \int_0^{(mn)^{1/p}} x^q \, dF(x)\\
&= C \sum_{k=1}^{\infty} \frac{d_k}{k^{q/p}} \int_0^{k^{1/p}} x^q \, dF(x)\\
&= C \sum_{k=1}^{\infty} \frac{d_k}{k^{q/p}} \sum_{j=1}^{k} \int_{(j-1)^{1/p}}^{j^{1/p}} x^q \, dF(x)\\
&= C \sum_{j=1}^{\infty} \sum_{k=j}^{\infty} \frac{d_k}{k^{q/p}} \int_{(j-1)^{1/p}}^{j^{1/p}} x^q \, dF(x)\\
&\le C \sum_{j=1}^{\infty} \frac{\log j}{j^{q/p-1}} \int_{(j-1)^{1/p}}^{j^{1/p}} x^q \, dF(x)\\
&\le C \sum_{j=2}^{\infty} \int_{(j-1)^{1/p}}^{j^{1/p}} x^p \log x \, dF(x)\\
&\le C E\big(|X|^p \log^+ |X|\big),
\end{aligned}
$$
which proves (i). Noting that
$$
\sum_{k=1}^{n} \frac{d_k}{k^{r/p}} = O\Big(\frac{\log n}{n^{r/p-1}}\Big) \quad (0 < r < p),
$$
we can obtain (ii) by the same method.
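Both divisor-sum estimates invoked in this proof follow from the classical Dirichlet bound $\sum_{k \le x} d_k = x \log x + O(x)$; the display below is a brief standard sketch of that step, added here for completeness rather than taken from the original paper (write $D(x) = \sum_{k \le x} d_k$):

```latex
% Standard sketch (not from the paper), with D(x) = \sum_{k \le x} d_k = x \log x + O(x).
% For s = q/p > 1, using k^{-s} = s \int_k^{\infty} x^{-s-1} dx,
\[
\sum_{k \ge j} \frac{d_k}{k^{s}}
  \le s \int_{j}^{\infty} \frac{D(x)}{x^{s+1}}\,dx
  = O\!\Big(\frac{\log j}{j^{\,s-1}}\Big);
\]
% for s = r/p in (0,1), Abel summation gives
\[
\sum_{k \le n} \frac{d_k}{k^{s}}
  = \frac{D(n)}{n^{s}} + s \int_{1}^{n} \frac{D(x)}{x^{s+1}}\,dx
  = O\big(n^{1-s}\log n\big)
  = O\!\Big(\frac{\log n}{n^{\,s-1}}\Big).
\]
```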
We are now in a position to establish the main result, which provides conditions
for almost sure convergence and $L_p$-convergence of double sums of independent
random variables. This theorem in the particular case $\alpha = \beta = 1$ and $p = 2$ is the
two-dimensional version of Kolmogorov's theorem (see, e.g., Chow and Teicher
[5], p. 121).
Theorem 2.1. Let $\{X_{mn}, m \ge 1, n \ge 1\}$ be a double array of independent random
variables with $EX_{mn} = 0$, $m \ge 1$, $n \ge 1$. If
$$
(7) \qquad \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E|X_{mn}|^p}{m^{\alpha p} n^{\beta p}} < \infty \quad \text{for some } 0 < p \le 2 \text{ and } \alpha > 0, \ \beta > 0,
$$
then
$$
(8) \qquad \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij}}{m^{\alpha} n^{\beta}} \to 0 \ \text{a.s. and in } L_p \ \text{as } m \vee n \to \infty.
$$
Proof. Since
$$
(9) \qquad \sum_{k=1}^{\infty}\sum_{l=1}^{\infty} E\left|\frac{\sum_{i=1}^{2^k}\sum_{j=1}^{2^l} X_{ij}}{2^{\alpha k} 2^{\beta l}}\right|^p
\le C \sum_{k=1}^{\infty}\sum_{l=1}^{\infty} \frac{\sum_{i=1}^{2^k}\sum_{j=1}^{2^l} E|X_{ij}|^p}{(2^{\alpha k} 2^{\beta l})^p} \quad \text{(by Lemma 2.4)}
\le C \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} \frac{E|X_{ij}|^p}{(i^{\alpha} j^{\beta})^p} < \infty \quad \text{(by (7))},
$$
Lemma 2.1 ensures that
$$
(10) \qquad \frac{\sum_{i=1}^{2^k}\sum_{j=1}^{2^l} X_{ij}}{2^{\alpha k} 2^{\beta l}} \to 0 \ \text{a.s. and in } L_p \ \text{as } k \vee l \to \infty.
$$
Set
$$
S_{mn} = \sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij}, \quad m \ge 1, \ n \ge 1,
$$
and
$$
T_{kl} = \max_{2^k \le m < 2^{k+1},\, 2^l \le n < 2^{l+1}} \left|\frac{S_{mn}}{m^{\alpha} n^{\beta}} - \frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right|, \quad k \ge 1, \ l \ge 1.
$$
By Lemma 2.4, for all $k \ge 1$ and $l \ge 1$, we have
$$
\begin{aligned}
E|T_{kl}|^p &\le C\left(E\left|\frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right|^p + E \max_{2^k \le m < 2^{k+1},\, 2^l \le n < 2^{l+1}} \left|\frac{S_{mn}}{m^{\alpha} n^{\beta}}\right|^p\right)\\
&\le C\left(E\left|\frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right|^p + \frac{1}{2^{\alpha k p} 2^{\beta l p}} E \max_{1 \le m \le 2^{k+1},\, 1 \le n \le 2^{l+1}} |S_{mn}|^p\right)\\
&\le C E\left|\frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right|^p + C \frac{\sum_{i=1}^{2^{k+1}}\sum_{j=1}^{2^{l+1}} E|X_{ij}|^p}{2^{(k+1)\alpha p} 2^{(l+1)\beta p}},
\end{aligned}
$$
whence $\sum_{k=1}^{\infty}\sum_{l=1}^{\infty} E T_{kl}^p < \infty$ by (9). Then by Lemma 2.1,
$$
(11) \qquad T_{kl} \to 0 \ \text{a.s. and in } L_p \ \text{as } k \vee l \to \infty.
$$
Note that for $2^k \le m < 2^{k+1}$ and $2^l \le n < 2^{l+1}$ it holds that
$$
\frac{|S_{mn}|}{m^{\alpha} n^{\beta}} \le \left|\frac{S_{mn}}{m^{\alpha} n^{\beta}} - \frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right| + \left|\frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right| \le T_{kl} + \left|\frac{S_{2^k 2^l}}{2^{\alpha k} 2^{\beta l}}\right|,
$$
so the conclusion (8) follows from (10) and (11).
Remark 2.1. The argument used in proving Theorem 2.1 reveals that if
$0 < p \le 1$, then the independence hypothesis and the hypothesis that the random
variables $\{X_{mn}, m \ge 1, n \ge 1\}$ have mean 0 are not needed for the validity of the
conclusion of the theorem.
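As a simple worked illustration (added here and not part of the original paper), take $\alpha = \beta = 1$, $p = 2$ and an independent array with mean zero and uniformly bounded variances; condition (7) then holds automatically and Theorem 2.1 yields a two-dimensional Kolmogorov-type law of large numbers:

```latex
% Worked example (illustrative): assume EX_{mn} = 0 and \sup_{m,n} E X_{mn}^2 = \sigma^2 < \infty.
% With \alpha = \beta = 1 and p = 2, condition (7) reads
\[
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E X_{mn}^2}{m^{2} n^{2}}
  \le \sigma^{2} \sum_{m=1}^{\infty} \frac{1}{m^{2}} \sum_{n=1}^{\infty} \frac{1}{n^{2}} < \infty,
\]
% so Theorem 2.1 gives
\[
\frac{1}{mn} \sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij} \to 0
  \quad \text{a.s. and in } L_2 \text{ as } m \vee n \to \infty.
\]
```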
Corollary 2.1. Let $\{X_{mn}, m \ge 1, n \ge 1\}$ be a double array of independent random
variables. Suppose that $\{X_{mn}, m \ge 1, n \ge 1\}$ are dominated in distribution
by a random variable $X$. If
$$
(12) \qquad E\big(|X|^p \log^+ |X|\big) < \infty \quad \text{for some } 1 \le p < 2,
$$
then
$$
(13) \qquad \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (X_{ij} - EX_{ij})}{(mn)^{1/p}} \to 0 \ \text{a.s. and in } L_p \ \text{as } m \vee n \to \infty.
$$

Proof. For $m \ge 1$ and $n \ge 1$, set
$$
X'_{mn} = X_{mn} I\big(|X_{mn}| \le (mn)^{1/p}\big) \quad \text{and} \quad X''_{mn} = X_{mn} I\big(|X_{mn}| > (mn)^{1/p}\big).
$$
By Lemma 2.5,
$$
(14) \qquad \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E(X'_{mn} - EX'_{mn})^2}{(mn)^{2/p}} \le \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E(X'_{mn})^2}{(mn)^{2/p}} < \infty
$$
and
$$
(15) \qquad \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E|X''_{mn} - EX''_{mn}|^r}{(mn)^{r/p}} \le C \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{E|X''_{mn}|^r}{(mn)^{r/p}} < \infty
$$
for all $0 < r < p$. Then by Theorem 2.1,
$$
\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (X'_{ij} - EX'_{ij})}{(mn)^{1/p}} \to 0 \ \text{a.s. and in } L_2 \ \text{as } m \vee n \to \infty
$$
and
$$
\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (X''_{ij} - EX''_{ij})}{(mn)^{1/p}} \to 0 \ \text{a.s. as } m \vee n \to \infty.
$$
Since $E|X|^p < \infty$ (by (12)),
$$
(16) \qquad E|X''_{mn}|^p \le C \int_{(mn)^{1/p}}^{\infty} x^{p-1} P\{|X_{mn}| > x\}\,dx
\le C \int_{(mn)^{1/p}}^{\infty} x^{p-1} P\{|X| > x\}\,dx \to 0 \ \text{as } m \vee n \to \infty.
$$
This implies that
$$
(17) \qquad \frac{E\left|\sum_{i=1}^{m}\sum_{j=1}^{n} (X''_{ij} - EX''_{ij})\right|^p}{mn}
\le C \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} E|X''_{ij} - EX''_{ij}|^p}{mn} \quad \text{(by Lemma 2.4)}
\le C \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} E|X''_{ij}|^p}{mn} \to 0 \ \text{as } m \vee n \to \infty \quad \text{(by (16))}.
$$
Combining (14), (15) and (17) we get (13).
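To complement the proof, here is a hedged numerical sketch of Corollary 2.1 (not from the paper; the symmetric Pareto-type distribution, the exponent $p = 1.5$, and all names are assumptions chosen so that $E|X|^p \log^+ |X| < \infty$ and $EX = 0$):

```python
import numpy as np

# Illustrative Monte Carlo check of Corollary 2.1 (assumptions for demo only):
# X has a symmetric Pareto-type law with P(|X| > t) = t^{-2.5} for t >= 1,
# so E|X|^p log^+|X| < infinity for p = 1.5, and EX = 0 by symmetry
# (hence no explicit centering is needed).
rng = np.random.default_rng(1)
p, N = 1.5, 1024

U = rng.random((N, N))
X = rng.choice([-1.0, 1.0], size=(N, N)) * (1.0 - U) ** (-1.0 / 2.5)

S = X.cumsum(axis=0).cumsum(axis=1)      # partial double sums S_mn

for m in (16, 64, 256, 1024):
    print(f"m=n={m:5d}  |S_mn|/(mn)^(1/p) = {abs(S[m-1, m-1]) / (m * m) ** (1.0 / p):.4f}")
```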
Remark 2.2. The generalization to $d$-dimensional arrays of random variables can
be obtained by the same method under the condition $E\big(|X|^p (\log^+ |X|)^{d-1}\big) < \infty$.
Remark 2.3. A part of Corollary 2.1 is due to Smythe [11], who proved that
if $\{X_k, k \in \mathbb{N}^d\}$ is a $d$-dimensional array of independent identically distributed
random variables with zero mean and $E\big(|X_k|(\log^+ |X_k|)^{d-1}\big) < \infty$, then
$$
\frac{\sum_{j \le k} X_j}{|k|} \to 0 \ \text{a.s. as } |k| \to \infty,
$$
where $k = (k_1, k_2, \dots, k_d) \in \mathbb{N}^d$ and $|k| = k_1 k_2 \cdots k_d$.
Remark 2.4. When $\{X_{mn}, m \ge 1, n \ge 1\}$ are pairwise independent random
variables which are dominated in distribution by a random variable $X$, the almost
sure convergence of
$$
\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (X_{ij} - EX_{ij})}{(mn)^{1/p}}
$$
was obtained by Hong and Hwang [7, Theorem 2.3] under the stronger condition that
$E\big(|X|^p (\log^+ |X|)^3\big) < \infty$ $(1 < p < 2)$. More general results were proved by Hong and Volodin [8].
Acknowledgments
The author is grateful to Professor Andrew Rosalsky (University of Florida,
USA) for helpful remarks and the references [6] and [10]. He also wishes to thank
Professor Andrei Volodin (University of Regina, Canada) for the reference [11].
References
[1] B. von Bahr and C. G. Esseen, Inequalities for the rth absolute moment of a sum of random
variables, 1 ≤ r ≤ 2, Ann. Math. Statist. 36 (1965), 299-303.
[2] D. L. Burkholder, Martingale transforms, Ann. Math. Statist. 37 (1966), 1494-1504.
[3] S. D. Chatterji, An Lp -convergence theorem, Ann. Math. Statist. 40 (1969), 1068-1070.
[4] Y. S. Chow, On the Lp-convergence for n^{-1/p} Sn, 0 < p < 2, Ann. Math. Statist. 42 (1971),
393-394.
[5] Y. S. Chow and H. Teicher, Probability Theory: Independence, Interchangeability, Martingales, 3rd Edition, Springer-Verlag, New York, 1997.
[6] A. Gut, Marcinkiewicz laws and convergence rates in the law of large numbers for random
variables with multidimensional indices, Ann. Probability 6 (1978), 469-482.
[7] D. H. Hong and S. Y. Hwang, Marcinkiewicz-type strong law of large numbers for double
arrays of pairwise independent random variables, Int. J. Math. Math. Sci. 22 (1999), 171-177.
[8] D. H. Hong and A. I. Volodin, Marcinkiewicz-type law of large numbers for double arrays,
J. Korean Math. Soc. 36 (1999), 1133-1143.
[9] R. Pyke and D. Root, On convergence in r-mean of normalized partial sums, Ann. Math.
Statist. 39 (1968), 379-381.
[10] F. S. Scalora, Abstract martingale convergence theorems, Pacific J. Math. 11 (1961), 347-374.
[11] R. T. Smythe, Strong laws of large numbers for r-dimensional arrays of random variables,
Ann. Probability 1 (1973), 164-170.
Department of Mathematics
Vinh University
Nghe An Province, Vietnam