
On the Toeplitz Lemma, Convergence in Probability, and Mean Convergence
ANTONIO LINERO AND ANDREW ROSALSKY
Department of Statistics, University of Florida,
Gainesville, Florida, USA
Abstract Three examples are provided which demonstrate that “convergence in probability”
versions of the Toeplitz lemma, the Cesàro mean convergence theorem, and the Kronecker lemma
can fail. “Mean convergence” versions of the Toeplitz lemma, Cesàro mean convergence theorem,
and the Kronecker lemma are presented and a general mean convergence theorem for a normed
sum of independent random variables is established. Some additional problems are posed.
Keywords Toeplitz lemma, Cesàro mean convergence theorem, Kronecker lemma, convergence
in probability, mean convergence, moment generating function, continuity theorem, almost sure
convergence.
Mathematics Subject Classification (2000) 60F05 · 60F25 · 40A05 · 60F15
Running Head:
Toeplitz Lemma
Address correspondence to Andrew Rosalsky, Department of Statistics, University of Florida,
Gainesville, FL 32611-8545, USA; E-mail: [email protected]
1 Introduction
The Toeplitz lemma is a result in mathematical analysis which is a useful tool for proving a wide
variety of probability limit theorems. It is stated as follows and its proof may be found in Loève
(1977, p. 250).
Theorem 1.1 (Toeplitz Lemma). Let {a_{nk}, 1 ≤ k ≤ k_n, n ≥ 1} be a double array of real numbers
such that lim_{n→∞} a_{nk} = 0 for all k ≥ 1 and sup_{n≥1} ∑_{k=1}^{k_n} |a_{nk}| < ∞. Let {x_n, n ≥ 1} be a sequence
of real numbers.

(i) If lim_{n→∞} x_n = 0, then lim_{n→∞} ∑_{k=1}^{k_n} a_{nk} x_k = 0.

(ii) If lim_{n→∞} x_n = x finite and lim_{n→∞} ∑_{k=1}^{k_n} a_{nk} = 1, then lim_{n→∞} ∑_{k=1}^{k_n} a_{nk} x_k = x.
The Toeplitz lemma contains, as corollaries, the following well-known and important results.
Corollary 1.1 (Cesàro Mean Convergence Theorem). Let {x_n, n ≥ 1} be a sequence of real numbers
and let x̄_n = ∑_{k=1}^{n} x_k/n, n ≥ 1. If lim_{n→∞} x_n = x finite, then lim_{n→∞} x̄_n = x.
Corollary 1.2 (Kronecker Lemma). Let {x_n, n ≥ 1} and {b_n, n ≥ 1} be sequences of real numbers
with 0 < b_n ↑ ∞. If the series ∑_{k=1}^{∞} x_k/b_k converges, then lim_{n→∞} ∑_{k=1}^{n} x_k/b_n = 0.
The proof of the Cesàro mean convergence theorem follows immediately from the Toeplitz
lemma (ii) by taking k_n = n, n ≥ 1 and a_{nk} = n^{−1}, 1 ≤ k ≤ n, n ≥ 1. See Loève (1977, p. 250) for
a proof of the Kronecker lemma, which also follows from the Toeplitz lemma (ii).
It is clear that the Toeplitz lemma and its corollaries are valid when the numerical sequence
{x_n, n ≥ 1} and real number x are replaced, respectively, by a sequence of random variables
{X_n, n ≥ 1} and a random variable X, provided the convergence statements involving the random
variables are couched in terms of almost sure (a.s.) convergence.
It is natural to inquire as to whether or not the Toeplitz lemma and its corollaries hold when
the mode of convergence is changed from a.s. convergence to convergence in probability or to mean
convergence of some order. In Section 2, we demonstrate by three examples that both corollaries of
the Toeplitz lemma fail when a.s. convergence is replaced by convergence in probability, and that a
variety of possible limiting behaviors can prevail. Some open problems are also posed. In Section 3
we present several “mean convergence” versions of the Toeplitz lemma, Cesàro mean convergence
theorem, and the Kronecker lemma. A general mean convergence theorem for a normed sum of
independent random variables is established in Section 4.
Dugué (1957) investigated the “convergence in probability” problem for the sequence of Cesàro
means {X̄_n = ∑_{k=1}^{n} X_k/n, n ≥ 1} where {X_n, n ≥ 1} is a sequence of independent random variables
and proved that if X̄_n →_P c for some constant c, then (1/n) min_{1≤k≤n} X_k →_P 0 and (1/n) max_{1≤k≤n} X_k →_P 0.
Then, for a sequence of independent random variables {X_n, n ≥ 1} where X_n has distribution
function

    F_n(x) = 1 − (1/(x + n)) I_{(0,∞)}(x),  x ∈ R, n ≥ 1,

Dugué showed X_n →_P 0, (1/n) max_{1≤k≤n} X_k ↛_P 0, and, consequently, X̄_n ↛_P 0. However, for his example,
Dugué did not actually characterize the weak limiting behavior of X̄_n. In our Examples 2.1 and 2.2,
we present sequences of independent random variables {X_n, n ≥ 1} wherein X_n →_P 0 and X̄_n ↛_P 0,
and we characterize the weak limiting behavior of X̄_n.
2 Three Counterexamples
We present three counterexamples in this section. In the first example, X_n →_P 0 yet the
corresponding sequence of Cesàro means X̄_n has a nondegenerate limiting distribution. The following
two lemmas are used in the verification of Example 2.1.
Lemma 2.1. For x > −1,

    x/(x + 1) ≤ log(1 + x) ≤ x.
Proof. This is well known; see Bartle (1976, p. 238).
Lemma 2.2. For all t ∈ R,

    lim_{n→∞} max_{1≤k≤n} |e^{kt/n} − 1|/k = 0.        (2.1)
Proof. For t ≥ 0 and 2 ≤ m ≤ n,

    0 ≤ max_{1≤k≤n} |e^{kt/n} − 1|/k
      ≤ max_{1≤k≤m−1} |e^{kt/n} − 1|/k + max_{m≤k≤n} |e^{kt/n} − 1|/k
      ≤ e^{(m−1)t/n} − 1 + e^t/m → e^t/m  (as n → ∞) → 0  (as m → ∞).

A similar argument works for t < 0 with the right-hand side of the last inequality replaced by
1 − e^{(m−1)t/n} + m^{−1}, and so (2.1) holds for all t ∈ R.
Example 2.1. Let {X_n, n ≥ 1} be a sequence of independent random variables with P(X_n = 0) =
1 − n^{−1} and P(X_n = n) = n^{−1}, n ≥ 1. It is clear that X_n →_P 0. Set X̄_n = ∑_{k=1}^{n} X_k/n, n ≥ 1. For
k ≥ 1, the moment generating function of X_k is

    m_{X_k}(t) = 1 + (e^{kt} − 1)/k,  t ∈ R

and so the cumulant generating function of X̄_n is

    κ_n(t) = ∑_{k=1}^{n} log(1 + (e^{kt/n} − 1)/k),  t ∈ R, n ≥ 1.
Let t ∈ R and n ≥ 1. By Lemma 2.1,

    κ_n(t) ≤ ∑_{k=1}^{n} (e^{kt/n} − 1)/k = (1/n) ∑_{k=1}^{n} (e^{tk/n} − 1)/(k/n) → ∫_0^1 (e^{tx} − 1)/x dx  as n → ∞,

where we have used the fact that the upper bound for κ_n(t) is a Riemann sum. Hence,

    lim sup_{n→∞} κ_n(t) ≤ ∫_0^1 (e^{tx} − 1)/x dx.        (2.2)
Again let t ∈ R and fix 0 < ε < 1. By Lemmas 2.1 and 2.2, for all large n,

    κ_n(t) ≥ ∑_{k=1}^{n} [(e^{kt/n} − 1)/k] / [1 + (e^{kt/n} − 1)/k]
           = (1/n) ∑_{k=1}^{n} [(e^{tk/n} − 1)/(k/n)] / [1 + (e^{kt/n} − 1)/k]
           ≥ (1/(1 ± ε)) (1/n) ∑_{k=1}^{n} (e^{tk/n} − 1)/(k/n),

where ± is taken to be + when t ≥ 0 and − when t < 0. Thus

    lim inf_{n→∞} κ_n(t) ≥ (1/(1 ± ε)) ∫_0^1 (e^{tx} − 1)/x dx,

and since 0 < ε < 1 is arbitrary, we have

    lim inf_{n→∞} κ_n(t) ≥ ∫_0^1 (e^{tx} − 1)/x dx.        (2.3)
Combining (2.2) and (2.3) gives

    lim_{n→∞} κ_n(t) = ∫_0^1 (e^{tx} − 1)/x dx,  t ∈ R.

Thus the sequence of moment generating functions {m_{X̄_n}(·), n ≥ 1} corresponding to {X̄_n, n ≥ 1}
satisfies

    lim_{n→∞} m_{X̄_n}(t) = lim_{n→∞} e^{κ_n(t)} = exp(∫_0^1 (e^{tx} − 1)/x dx),  t ∈ R.

Then by the continuity theorem for moment generating functions (Curtiss 1942, Theorem 3), the
function

    m(t) = exp(∫_0^1 (e^{tx} − 1)/x dx),  t ∈ R

is the moment generating function of a random variable, and the sequence of Cesàro means X̄_n
has a limiting distribution with moment generating function m(·). It is clear that the limiting
distribution of X̄_n is nondegenerate.
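The limit of κ_n(t) can also be checked numerically, since m_{X̄_n}(t) factors over the independent X_k. The following Python sketch (our illustration, not part of the original argument) compares κ_n(t) for large n with the series form of the limit, ∫_0^1 (e^{tx} − 1)/x dx = ∑_{j≥1} t^j/(j · j!):

```python
import math

def kappa_n(t, n):
    """Cumulant generating function of X_bar_n in Example 2.1:
    kappa_n(t) = sum_{k=1}^{n} log(1 + (e^{kt/n} - 1)/k)."""
    return sum(math.log1p(math.expm1(k * t / n) / k) for k in range(1, n + 1))

def limit_cgf(t, terms=60):
    """Series form of the limit: integral_0^1 (e^{tx} - 1)/x dx
    = sum_{j>=1} t^j / (j * j!)."""
    return sum(t ** j / (j * math.factorial(j)) for j in range(1, terms + 1))

# kappa_n(t) approaches the integral as n grows, matching (2.2) and (2.3).
for t in (-2.0, 0.5, 1.0):
    print(t, kappa_n(t, 200_000), limit_cgf(t))
```

The agreement to several decimal places reflects both the Riemann-sum structure of the upper bound and the O(1/n) size of the logarithmic correction.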
In the next example, X_n →_P 0 yet the corresponding sequence of Cesàro means X̄_n approaches
∞ in probability.
Example 2.2. Let α > 1 and let {X_n, n ≥ 1} be a sequence of independent random variables
with P(X_n = 0) = 1 − n^{−1} and P(X_n = n^α) = n^{−1}, n ≥ 1. It is clear that X_n →_P 0. Set
X̄_n = ∑_{k=1}^{n} X_k/n, n ≥ 1. Let M ≥ 1 be arbitrary. Let k_n be the smallest integer greater than or
equal to (Mn)^{1/α}, n ≥ 1. Then for all large n,

    P(X̄_n ≥ M) = P(∑_{k=1}^{n} X_k ≥ Mn) ≥ P(∪_{k=k_n}^{n} [X_k = k^α])
                = 1 − P(∩_{k=k_n}^{n} [X_k ≠ k^α]) = 1 − ∏_{k=k_n}^{n} (k − 1)/k
                = 1 − (k_n − 1)/n > 1 − (Mn)^{1/α}/n → 1  as n → ∞.

Thus X̄_n →_P ∞ since M ≥ 1 is arbitrary.
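The divergence in Example 2.2 shows up readily in simulation. The following sketch (our illustration; the choices α = 2 and M = 5 are arbitrary) estimates P(X̄_n ≥ M) for increasing n:

```python
import random

def cesaro_mean_sample(n, alpha, rng):
    """One draw of the Cesaro mean X_bar_n of Example 2.2, where the independent
    X_k satisfy P(X_k = k^alpha) = 1/k and P(X_k = 0) = 1 - 1/k."""
    total = 0.0
    for k in range(1, n + 1):
        if rng.random() < 1.0 / k:
            total += float(k) ** alpha
    return total / n

rng = random.Random(12345)
alpha, M, reps = 2.0, 5.0, 400
freq = {}
for n in (100, 1_000, 10_000):
    freq[n] = sum(cesaro_mean_sample(n, alpha, rng) >= M for _ in range(reps)) / reps
    print(n, freq[n])  # the frequency of {X_bar_n >= M} climbs toward 1
```

The empirical frequencies track the lower bound 1 − (Mn)^{1/α}/n derived above.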
Remark 2.1. The previous two examples demonstrate the failure of the Toeplitz lemma (i) and
(ii) when the mode of convergence is “convergence in probability”, taking a_{nk} = n^{−1}, 1 ≤ k ≤ k_n =
n, n ≥ 1.
The next example demonstrates that a convergence in probability version of the Kronecker
lemma also fails; we note, however, that for the Kronecker lemma to fail we must have dependence
among the X_k since otherwise convergence in probability of ∑_{k=1}^{n} X_k/b_k to a random variable
implies convergence a.s. (see, e.g., Theorem 3.3.1 of Chow and Teicher 1997, p. 72), in which case
the traditional Kronecker lemma yields ∑_{k=1}^{n} X_k/b_n → 0 a.s. and hence in probability.
Example 2.3. Let {Y_n, n ≥ 1} be a sequence of independent random variables with P(Y_n = 0) =
1 − 1/(2n − 1) and P(Y_n = 16^{n−1}) = 1/(2n − 1), n ≥ 1. For n ≥ 1 define X_{2n−1} = Y_n and X_{2n} = −2Y_n. Then

    X_{2n−1}/2^{2n−1} + X_{2n}/2^{2n} = 0,  n ≥ 1

and so for n ≥ 1

    ∑_{k=1}^{n} X_k/2^k = (X_n/2^n) I(n is odd).

Then, since it is clear that X_n →_P 0, we have that ∑_{k=1}^{n} X_k/2^k →_P 0 as well.

To show that ∑_{k=1}^{n} X_k/2^n ↛_P 0, we show that the convergence fails along a subsequence, namely
that

    ∑_{k=1}^{4n} X_k/2^{4n} ↛_P 0.        (2.4)

Note that for all odd positive integers k,

    X_k + X_{k+1} = X_k − 2X_k = −X_k

and consequently for all n ≥ 1,

    ∑_{k=1}^{2n} X_k/2^{2n} = −∑_{k=1}^{n} X_{2k−1}/2^{2n}.

Thus (2.4) will hold if we can show that

    ∑_{k=1}^{2n} X_{2k−1}/16^n ↛_P 0.        (2.5)

To this end,

    P(|∑_{k=1}^{2n} X_{2k−1}|/16^n ≥ 1) ≥ P(∑_{k=n+1}^{2n} X_{2k−1} ≥ 16^n)
        ≥ P(∪_{k=n+1}^{2n} [X_{2k−1} ≠ 0])
        = 1 − ∏_{k=n+1}^{2n} P(X_{2k−1} = 0)
        = 1 − ∏_{k=n+1}^{2n} (1 − 1/(2k − 1))
        ≥ 1 − (1 − 1/(4n))^n → 1 − e^{−1/4} > 0,

proving (2.5).
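The failure in (2.4) can also be seen by simulation. The sketch below (our illustration) draws ∑_{k=1}^{4n} X_k/2^{4n} using the cancellation identity of Example 2.3 and estimates P(|∑_{k=1}^{4n} X_k|/2^{4n} ≥ 1), which should stay near or above 1 − e^{−1/4} ≈ 0.22:

```python
import random

def scaled_partial_sum(n, rng):
    """One draw of sum_{k=1}^{4n} X_k / 2^{4n} from Example 2.3.  By the pairwise
    cancellation noted above, this equals -sum_{k=1}^{2n} X_{2k-1} / 16^n, where
    X_{2k-1} = Y_k = 16^{k-1} with probability 1/(2k - 1) and = 0 otherwise."""
    total = 0.0
    for k in range(1, 2 * n + 1):
        if rng.random() < 1.0 / (2 * k - 1):
            total += 16.0 ** (k - 1 - n)  # Y_k / 16^n, kept on a manageable scale
    return -total

rng = random.Random(2023)
reps = 2000
freq = {}
for n in (5, 10, 20):
    freq[n] = sum(abs(scaled_partial_sum(n, rng)) >= 1.0 for _ in range(reps)) / reps
    print(n, freq[n])  # stays bounded away from 0, in line with (2.4)
```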
Remark 2.2. In view of Examples 2.1–2.3, it is natural to pose the following problems. Find
conditions under which “convergence in probability” versions of the Toeplitz lemma, the Cesàro
mean convergence theorem, and the Kronecker lemma hold for a sequence of random variables.
A sufficient condition for Cesàro means to converge in probability is given in Remark 3.1 below,
but it is not clear whether that result is optimal. Apropos of the Kronecker lemma, independence
of the sequence {X_n, n ≥ 1} is sufficient to ensure that a “convergence in probability” version of
the Kronecker lemma holds (see the discussion prior to Example 2.3). Thus, we seek conditions
under which a “convergence in probability” version of the Kronecker lemma will hold without the
independence assumption being imposed on {X_n, n ≥ 1}.
3 Mean Convergence Versions of the Toeplitz Lemma, the Cesàro
Mean Convergence Theorem, and the Kronecker Lemma
In this section, we present “mean convergence” versions of the Toeplitz lemma (Theorem 3.1),
the Cesàro mean convergence theorem (Corollary 3.1), and the Kronecker lemma (Theorems 3.2
and 3.3). In Theorems 3.1–3.3 and Corollary 3.1, no independence conditions are imposed on the
random variables {X_n, n ≥ 1}.
For p ≥ 1, the space L_p of absolute pth power integrable random variables is a Banach
space with norm ‖X‖_p = (E|X|^p)^{1/p}, X ∈ L_p. Now the proof of the Toeplitz lemma in Loève
(1977, p. 250) carries over to a Banach space setting, mutatis mutandis. We thus immediately
obtain the following “mean convergence” version of the Toeplitz lemma.
Theorem 3.1. Let {a_{nk}, 1 ≤ k ≤ k_n, n ≥ 1} be a double array of real numbers such that
lim_{n→∞} a_{nk} = 0 for all k ≥ 1 and sup_{n≥1} ∑_{k=1}^{k_n} |a_{nk}| < ∞. Let {X_n, n ≥ 1} be a sequence of
L_p random variables for some p ≥ 1.

(i) If X_n →_{L_p} 0, then ∑_{k=1}^{k_n} a_{nk} X_k →_{L_p} 0.

(ii) If there exists a random variable X such that X_n →_{L_p} X and lim_{n→∞} ∑_{k=1}^{k_n} a_{nk} = 1, then
∑_{k=1}^{k_n} a_{nk} X_k →_{L_p} X.
Letting k_n = n, n ≥ 1 and a_{nk} = n^{−1}, 1 ≤ k ≤ k_n, n ≥ 1 in Theorem 3.1 (ii) yields the following
“mean convergence” version of the Cesàro mean convergence theorem.
Corollary 3.1. Let {X_n, n ≥ 1} be a sequence of L_p random variables where p ≥ 1 and let
X̄_n = ∑_{k=1}^{n} X_k/n, n ≥ 1. If there exists a random variable X such that X_n →_{L_p} X, then X̄_n →_{L_p} X.
Remark 3.1. It follows from the L_p-convergence theorem (see, e.g., Loève 1977, p. 165) and
Corollary 3.1 that if X_n →_P X and if, for some p ≥ 1, E|X_n|^p → E|X|^p < ∞ or {|X_n|^p, n ≥ 1} is
uniformly integrable, then X̄_n →_{L_p} X and, a fortiori, X̄_n →_P X.
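As a quick numerical sanity check on Corollary 3.1 (a hypothetical setup of our own, not from the paper): take X_n = X + Z_n with independent Z_n ~ N(0, 1/n), so that X_n →_{L_2} X; the corollary then forces E(X̄_n − X)² → 0.

```python
import random

def mse_of_cesaro(n, reps, rng):
    """Monte Carlo estimate of E(X_bar_n - X)^2 when X_n = X + Z_n with
    independent Z_n ~ N(0, 1/n), a hypothetical setup in which X_n -> X in L_2."""
    total = 0.0
    for _ in range(reps):
        x = rng.gauss(0.0, 1.0)  # the limit variable X
        xbar = sum(x + rng.gauss(0.0, 1.0 / k ** 0.5) for k in range(1, n + 1)) / n
        total += (xbar - x) ** 2
    return total / reps

rng = random.Random(7)
mse = {n: mse_of_cesaro(n, 2000, rng) for n in (10, 100, 1000)}
print(mse)  # decreases toward 0, as Corollary 3.1 predicts
```

Here X̄_n − X is the average of the Z_k, whose variance (∑_{k=1}^{n} 1/k)/n² → 0, matching the simulation.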
By employing the Banach space version of the traditional Kronecker lemma (see, e.g., Taylor
1978, p. 101), the following “mean convergence” version of the Kronecker lemma is immediate.
Theorem 3.2. Let {X_n, n ≥ 1} be a sequence of L_p random variables for some p ≥ 1 and let
{b_n, n ≥ 1} be a sequence of real numbers with 0 < b_n ↑ ∞. If there exists a random variable S
such that

    ∑_{k=1}^{n} X_k/b_k →_{L_p} S,        (3.1)

then

    ∑_{k=1}^{n} X_k/b_n →_{L_p} 0.        (3.2)
Remark 3.2. For a sequence of independent mean 0 random variables {X_n, n ≥ 1} and a sequence
of real numbers {b_n, n ≥ 1} with 0 < b_n ↑ ∞, a sufficient condition for the existence of a random
variable S satisfying (3.1) with p ∈ [1, 2] is that

    ∑_{n=1}^{∞} E|X_n|^p / b_n^p < ∞.        (3.3)

This follows readily from Theorem 2 of von Bahr and Esseen (1965) and the Cauchy convergence
criterion (see, e.g., Chow and Teicher 1997, p. 99). However, for p ≥ 2, a necessary and sufficient
condition for the existence of a random variable S satisfying (3.1) is that

    ∑_{n=1}^{∞} E|X_n|^p / b_n^p < ∞  and  ∑_{n=1}^{∞} E X_n^2 / b_n^2 < ∞.

This follows readily from Theorem 2 of Rosenthal (1970) and the Cauchy convergence criterion.
It is easy to construct an example showing that when p ∈ [1, 2), the condition (3.3) is not
necessary for the existence of a random variable S satisfying (3.1); that is, (3.1) can hold when
(3.3) fails.
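To see Remark 3.2 and Theorem 3.2 in action numerically (the concrete choices below are ours): let X_n be iid Rademacher variables, b_n = n, and p = 2. Then ∑_n E X_n²/b_n² = ∑_n 1/n² < ∞, so (3.3) holds, (3.1) follows, and Theorem 3.2 yields E(∑_{k=1}^{n} X_k/n)² → 0; in this case the second moment equals 1/n exactly.

```python
import random

def second_moment_normed_sum(n, reps, rng):
    """Monte Carlo estimate of E((X_1 + ... + X_n)/n)^2 for iid Rademacher X_k,
    i.e. the L_2 quantity in (3.2) with b_n = n; its exact value is 1/n."""
    total = 0.0
    for _ in range(reps):
        s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
        total += (s / n) ** 2
    return total / reps

rng = random.Random(99)
est = {}
for n in (10, 100, 1000):
    est[n] = second_moment_normed_sum(n, 2000, rng)
    print(n, est[n], 1.0 / n)  # Monte Carlo estimate vs the exact value 1/n
```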
We now establish a version of Theorem 3.2 for a sequence of nonnegative random variables
{X_n, n ≥ 1}. In view of Theorem 3.2, Theorem 3.3 is of interest only when 0 < p < 1.

Theorem 3.3. Let {X_n, n ≥ 1} be a sequence of nonnegative L_p random variables for some
0 < p < ∞ and let {b_n, n ≥ 1} be a sequence with 0 < b_n ↑ ∞. If there exists a random variable S
such that (3.1) holds, then (3.2) holds as well.
Proof. The L_p convergence in (3.1) implies convergence in probability which, in turn, implies
a.s. convergence since ∑_{k=1}^{n} X_k/b_k is nondecreasing. Then by the traditional Kronecker lemma,
∑_{k=1}^{n} X_k/b_n → 0 a.s. On the other hand, L_p convergence of ∑_{k=1}^{n} X_k/b_k implies that the sequence
{|∑_{k=1}^{n} X_k/b_k|^p, n ≥ 1} is uniformly integrable by the mean convergence criterion (see, e.g., Chow
and Teicher 1997, p. 99). By nonnegativity,

    0 ≤ ∑_{k=1}^{n} X_k/b_n ≤ ∑_{k=1}^{n} X_k/b_k

and so the sequence {|∑_{k=1}^{n} X_k/b_n|^p, n ≥ 1} is uniformly integrable as well. Combining this with
∑_{k=1}^{n} X_k/b_n → 0 a.s. and employing the mean convergence criterion gives (3.2).
Remark 3.3. For {X_n, n ≥ 1} and {b_n, n ≥ 1} as above and 0 < p ≤ 1, a sufficient condition for
verifying (3.1) is that

    ∑_{n=1}^{∞} E X_n^p / b_n^p < ∞

in view of

    E(∑_{k=n+1}^{m} X_k/b_k)^p ≤ E(∑_{k=n+1}^{m} X_k^p/b_k^p) = ∑_{k=n+1}^{m} E X_k^p / b_k^p,  m > n ≥ 1,

and the Cauchy convergence criterion.
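The displayed bound rests on the elementary inequality (∑_k a_k)^p ≤ ∑_k a_k^p for nonnegative a_k and 0 < p ≤ 1, applied termwise with a_k = X_k/b_k. A quick randomized check of that pointwise inequality (our sketch):

```python
import random

def subadditive_gap(p, values):
    """Return sum(a^p) - (sum a)^p, which is nonnegative for 0 < p <= 1 and
    nonnegative a_k: the pointwise inequality used in Remark 3.3."""
    return sum(a ** p for a in values) - sum(values) ** p

rng = random.Random(1)
worst = float("inf")
for _ in range(1000):
    values = [rng.expovariate(1.0) for _ in range(20)]
    for p in (0.25, 0.5, 0.9):
        worst = min(worst, subadditive_gap(p, values))
print(worst)  # remains nonnegative over all random trials
```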
4 A General Mean Convergence Theorem
We close by establishing in Theorem 4.1 a general mean convergence theorem for a normed sum
of independent random variables. Its proof uses three results which will now be stated for the
convenience of the reader.
The following result is the famous de La Vallée Poussin criterion for uniform integrability; a
proof of it may be found in Meyer (1966, p. 19).
Proposition 4.1. A sequence of random variables {U_n, n ≥ 1} is uniformly integrable if and only
if there exists a nondecreasing convex function G defined on [0, ∞) with

    G(0) = 0,  lim_{u→∞} G(u)/u = ∞,  and  sup_{n≥1} E(G(|U_n|)) < ∞.        (4.1)
The next result is a so-called “contraction principle” and is due to Jain and Marcus (1975).
Proposition 4.2. Let ϕ be a nonnegative nondecreasing convex function defined on [0, ∞), let
n ≥ 1, let {λ_1, ..., λ_n} ⊂ R, and let Y_1, ..., Y_n be independent symmetric random variables. Then

    E(ϕ(|∑_{k=1}^{n} λ_k Y_k|)) ≤ E(ϕ((max_{1≤k≤n} |λ_k|) |∑_{k=1}^{n} Y_k|)).
The next result is referred to as a “symmetrization moment inequality” and its proof may be
found in Loève (1977, p. 258).
Proposition 4.3. Let U* be a symmetrized version of a random variable U and let m be any
median of U. Then for all p > 0,

    E|U − m|^p ≤ 2 E|U*|^p.
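Proposition 4.3 is easy to probe by simulation. In the sketch below (our illustration), U is taken to be Exp(1), whose median is log 2, and U* = U − U′ for an independent copy U′:

```python
import math
import random

def symmetrization_check(p, reps, rng):
    """Monte Carlo comparison of the two sides of Proposition 4.3 for U ~ Exp(1):
    lhs = E|U - m|^p with median m = log 2, rhs = 2 E|U*|^p with U* = U - U'."""
    m = math.log(2.0)
    lhs = sum(abs(rng.expovariate(1.0) - m) ** p for _ in range(reps)) / reps
    rhs = 2.0 * sum(
        abs(rng.expovariate(1.0) - rng.expovariate(1.0)) ** p for _ in range(reps)
    ) / reps
    return lhs, rhs

rng = random.Random(42)
results = {p: symmetrization_check(p, 20_000, rng) for p in (0.5, 1.0, 2.0)}
for p, (lhs, rhs) in results.items():
    print(p, lhs, rhs)  # lhs stays below rhs, as the proposition guarantees
```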
Theorem 4.1. Let {X_n, n ≥ 1} be a sequence of independent L_p random variables for some p ≥ 1
and let {b_n, n ≥ 1} and {B_n, n ≥ 1} be sequences of real numbers with 0 < B_n ↑ ∞. Suppose that
the sequence

    {|∑_{k=1}^{n} X_k/b_k|^p, n ≥ 1}  is uniformly integrable.        (4.2)

(i) If b_n = O(B_n) and

    ∑_{k=1}^{n} X_k/B_n →_P 0,        (4.3)

then

    ∑_{k=1}^{n} X_k/B_n →_{L_p} 0.        (4.4)

(ii) If b_n = o(B_n) and m_n = o(B_n), where m_n is any median of ∑_{k=1}^{n} X_k, n ≥ 1, then

    ∑_{k=1}^{n} X_k/B_n →_{L_p} 0.        (4.5)
Proof. Let {X′_n, n ≥ 1} be an independent copy of {X_n, n ≥ 1} and consider the sequence of
symmetrized random variables {X*_n = X_n − X′_n, n ≥ 1}. Now (4.2) holds with X_k replaced by X′_k,
k ≥ 1, and since

    |∑_{k=1}^{n} X*_k/b_k|^p ≤ 2^{p−1} (|∑_{k=1}^{n} X_k/b_k|^p + |∑_{k=1}^{n} X′_k/b_k|^p),  n ≥ 1,

the sequence

    {|∑_{k=1}^{n} X*_k/b_k|^p, n ≥ 1}  is uniformly integrable.        (4.6)

By (4.6) and Proposition 4.1, there exists a nondecreasing convex function G defined on [0, ∞)
satisfying (4.1) with U_n = |∑_{k=1}^{n} X*_k/b_k|^p, n ≥ 1. Set b*_n = max_{1≤k≤n} |b_k|, n ≥ 1. Now the function
ϕ(u) = G(u^p), u ≥ 0 is a nonnegative nondecreasing convex function and so by Proposition 4.2

    sup_{n≥1} E(G(|∑_{k=1}^{n} X*_k/b*_n|^p)) = sup_{n≥1} E(ϕ(|∑_{k=1}^{n} (b_k/b*_n)(X*_k/b_k)|))
        ≤ sup_{n≥1} E(ϕ((max_{1≤k≤n} |b_k|/b*_n) |∑_{k=1}^{n} X*_k/b_k|))
        = sup_{n≥1} E(ϕ(|∑_{k=1}^{n} X*_k/b_k|))
        = sup_{n≥1} E(G(|∑_{k=1}^{n} X*_k/b_k|^p)) < ∞

recalling (4.1). Then since G(0) = 0 and lim_{u→∞} G(u)/u = ∞, again by applying Proposition 4.1 with
the same function G we get that the sequence

    {|∑_{k=1}^{n} X*_k/b*_n|^p, n ≥ 1}  is uniformly integrable.        (4.7)

We first prove part (i). It follows from b_n = O(B_n) and B_n ↑ ∞ that b*_n = O(B_n) and so by
(4.7) the sequence

    {|∑_{k=1}^{n} X*_k/B_n|^p, n ≥ 1}  is uniformly integrable.        (4.8)

Now (4.3) also holds with X_k replaced by X′_k, k ≥ 1, and so

    ∑_{k=1}^{n} X*_k/B_n = ∑_{k=1}^{n} X_k/B_n − ∑_{k=1}^{n} X′_k/B_n →_P 0.        (4.9)

Then by (4.8), (4.9), and the mean convergence criterion,

    ∑_{k=1}^{n} X*_k/B_n →_{L_p} 0.        (4.10)

Let m_n be any median of ∑_{k=1}^{n} X_k, n ≥ 1. It follows from (4.3) that

    m_n = o(B_n).        (4.11)

Then by Proposition 4.3, (4.10), and (4.11),

    E|∑_{k=1}^{n} X_k/B_n|^p ≤ 2^{p−1} (E|∑_{k=1}^{n} X_k − m_n|^p/B_n^p + |m_n|^p/B_n^p)
        ≤ 2^{p−1} (2 E|∑_{k=1}^{n} X*_k|^p/B_n^p + |m_n|^p/B_n^p) → 0,

proving (4.4) thereby completing the proof of part (i).

Next, we prove part (ii). It follows from b_n = o(B_n) and B_n ↑ ∞ that b*_n = o(B_n). Then

    E|∑_{k=1}^{n} X*_k/B_n|^p = (b*_n^p/B_n^p) E|∑_{k=1}^{n} X*_k/b*_n|^p = o(1)O(1) = o(1),

recalling (4.7). Since m_n = o(B_n) by hypothesis, the conclusion (4.5) follows by the same argument
used to complete the proof of part (i).
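As a concrete illustration of part (i) (the choices below are ours, not from the paper): let X_k be iid Rademacher variables, b_k = k, B_n = n, and p = 1. Then ∑_{k=1}^{n} X_k/k is bounded in L_2, so (4.2) holds by Proposition 4.1 with G(u) = u²; b_n = O(B_n); and (4.3) holds by the weak law of large numbers. Theorem 4.1(i) then gives E|∑_{k=1}^{n} X_k|/n → 0, which a simulation sketch can display:

```python
import math
import random

def mean_abs_normed_sum(n, reps, rng):
    """Monte Carlo estimate of E|S_n/n| for S_n a sum of n iid Rademacher
    variables: the L_1 convergence claimed by Theorem 4.1(i) in this setup."""
    total = 0.0
    for _ in range(reps):
        s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
        total += abs(s) / n
    return total / reps

rng = random.Random(314)
est = {n: mean_abs_normed_sum(n, 2000, rng) for n in (10, 100, 1000)}
for n, e in est.items():
    print(n, e, math.sqrt(2.0 / (math.pi * n)))  # estimate vs asymptotic value
```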
REFERENCES
[1] von Bahr, B., and Esseen, C.-G. 1965. Inequalities for the rth absolute moment of a sum of
random variables, 1 ≤ r ≤ 2. Annals of Mathematical Statistics 36:299-303.
[2] Bartle, R.G. 1976. The Elements of Real Analysis, 2nd ed. John Wiley, New York.
[3] Chow, Y.S., and Teicher, H. 1997. Probability Theory: Independence, Interchangeability, Martingales, 3rd ed. Springer-Verlag, New York.
[4] Curtiss, J.H. 1942. A note on the theory of moment generating functions. Annals of Mathematical Statistics 13:430-433.
[5] Dugué, D. 1957. Sur la convergence stochastique au sens de Cesàro et sur des différences
importantes entre la convergence presque certaine et les convergences en probabilité et presque
complète. Sankhyā 18:127-138.
[6] Jain, N.C., and Marcus, M.B. 1975. Integrability of infinite sums of independent vector-valued
random variables. Transactions of the American Mathematical Society 212:1-36.
[7] Loève, M. 1977. Probability Theory I, 4th ed. Springer-Verlag, New York.
[8] Meyer, P.A. 1966. Probability and Potentials. Blaisdell Publishing Company; Ginn and Company, Waltham, Mass.
[9] Rosenthal, H.P. 1970. On the subspaces of Lp (p > 2) spanned by sequences of independent
random variables. Israel Journal of Mathematics 8:273-303.
[10] Taylor, R.L. 1978. Stochastic Convergence of Weighted Sums of Random Elements in Linear
Spaces. Lecture Notes in Mathematics No. 672. Springer-Verlag, Berlin.