On Complete Convergence of Moving Average Processes for NSD Sequences*

ISSN 1055-1344, Siberian Advances in Mathematics, 2015, Vol. 25, No. 1, pp. 11–20. © Allerton Press, Inc., 2015.

M. Amini¹**, A. Bozorgnia¹***, H. Naderi¹****, and A. Volodin²*****
¹ Ferdowsi University of Mashhad, Mashhad, Iran
² University of Regina, Regina, Canada

Abstract—We study the complete convergence of moving-average processes based on an identically distributed doubly infinite sequence of negatively superadditive-dependent random variables. As a corollary, the Marcinkiewicz–Zygmund strong law of large numbers is obtained.

DOI: 10.3103/S1055134415010022

Keywords: moving average process, complete convergence, slowly varying function, negatively superadditive-dependent, Marcinkiewicz–Zygmund strong law of large numbers.
1. INTRODUCTION
We assume that {Y_i; i = 0, ±1, ±2, ...} is a doubly infinite sequence of identically distributed random variables with E|Y_1| < ∞. Let {a_i; i = 0, ±1, ±2, ...} be an absolutely summable sequence of real numbers and let
$$X_n = \sum_{i=-\infty}^{\infty} a_i Y_{n+i}, \quad n \ge 1, \tag{1}$$
be the moving average process based on the sequence {Y_i; i = 0, ±1, ±2, ...}. As usual,
$$S_n = \sum_{i=1}^{n} X_i, \quad n \ge 1,$$
denotes the sequence of partial sums.
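For readers who want to experiment numerically, the process (1) can be approximated by truncating the coefficient sequence to a finite window. The following Python sketch uses Gaussian innovations and geometrically decaying coefficients purely as illustrative assumptions; neither choice comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absolutely summable coefficients a_i, truncated to |i| <= M.
M = 50
idx = np.arange(-M, M + 1)
a = 0.5 ** np.abs(idx)                 # sum_i |a_i| = 3 < infinity

# Doubly infinite innovations {Y_i}, realized on a finite window:
# the array stores Y_{-M}, ..., Y_{N+M}, with Y_j at position j + M.
N = 1000
Y = rng.standard_normal(N + 2 * M + 1)

def X(n):
    """Truncated version of (1): X_n = sum_{i=-M}^{M} a_i * Y_{n+i}."""
    return np.dot(a, Y[(n + idx) + M])

Xs = np.array([X(n) for n in range(1, N + 1)])
S = np.cumsum(Xs)                      # partial sums S_n = X_1 + ... + X_n
print(S[:5])
```

The truncation level M only controls the quality of the approximation to the doubly infinite sum; absolute summability of the coefficients guarantees the tail contribution is small.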
Under the assumption that {Y_i; i = 0, ±1, ±2, ...} is a sequence of independent identically distributed random variables, several authors have studied limit properties of the moving average process {X_n; n ≥ 1}. In particular, Ibragimov [13] established the central limit theorem, Burton and Dehling [5] obtained a large deviation principle, and Li et al. [16] gave a complete convergence result for {X_n; n ≥ 1}.
Many authors have extended the complete convergence of moving average processes to the case of dependent sequences: for example, Zhang [22] for ϕ-mixing sequences, Liang et al. [18] for NA sequences, Li and Zhang [17] for NA sequences, Chen et al. [6] for ϕ-mixing sequences, Amini et al. [2] for negatively dependent sub-Gaussian sequences, and Yang et al. [21] for AANA sequences.
The concept of negatively associated (NA) random variables was introduced by Alam and Saxena [1] and carefully studied by Joag-Dev and Proschan [14]. As pointed out and proved in [14], a number of well-known multivariate distributions possess the NA property. Negative association has found important and wide applications in multivariate statistical analysis and reliability, and many investigators have discussed applications of NA in probability, stochastic processes, and statistics.
* The text was submitted by the authors in English.
** E-mail: [email protected]
*** E-mail: [email protected]
**** E-mail: [email protected]
***** E-mail: [email protected]
Definition 1.1. Random variables {X_i, 1 ≤ i ≤ n} are said to be NA if, for every pair of disjoint subsets A_1 and A_2 of {1, 2, ..., n},
$$\mathrm{Cov}\big(f_1(X_i;\, i \in A_1),\, f_2(X_j;\, j \in A_2)\big) \le 0,$$
where f_1 and f_2 are measurable functions, increasing in each variable (or decreasing in each variable), such that this covariance exists. An infinite family of random variables is NA if every finite subfamily is NA.
The next dependence notion is negatively superadditive dependence, which is weaker than NA. The concept of negatively superadditive-dependent (NSD) random variables was introduced by Hu [12] as follows.
Definition 1.2 (Kemperman [15]). A function $\varphi : \mathbb{R}^n \to \mathbb{R}$ is called superadditive if
$$\varphi(x \vee y) + \varphi(x \wedge y) \ge \varphi(x) + \varphi(y) \quad \text{for all } x, y \in \mathbb{R}^n,$$
where ∨ denotes the componentwise maximum and ∧ the componentwise minimum.
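To make Definition 1.2 concrete, here is a small Python check of the superadditivity inequality at randomly sampled points for the textbook example φ(x₁, x₂) = x₁x₂; the choice of φ, the sample size, and the numerical tolerance are our illustrative assumptions, not the paper's.

```python
import numpy as np

def is_superadditive_at(phi, x, y):
    """Check phi(x ∨ y) + phi(x ∧ y) >= phi(x) + phi(y) at one pair of points."""
    hi = np.maximum(x, y)   # componentwise maximum x ∨ y
    lo = np.minimum(x, y)   # componentwise minimum x ∧ y
    return phi(hi) + phi(lo) >= phi(x) + phi(y) - 1e-12   # small float tolerance

# phi(x1, x2) = x1 * x2 is a standard example of a superadditive
# (supermodular) function on R^2.
phi = lambda v: v[0] * v[1]

rng = np.random.default_rng(1)
checks = [is_superadditive_at(phi, rng.normal(size=2), rng.normal(size=2))
          for _ in range(10_000)]
print(all(checks))   # → True
```

A pointwise check like this can of course only refute superadditivity, never prove it; for this φ the inequality reduces to (x₁ − y₁)(y₂ − x₂) ≥ 0 in the mixed-ordering case and equality otherwise.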
Definition 1.3 (Hu [12]). A random vector X = (X_1, X_2, ..., X_n) is said to be NSD if
$$E\varphi(X_1, X_2, \ldots, X_n) \le E\varphi(X_1^*, X_2^*, \ldots, X_n^*),$$
where X_1^*, X_2^*, ..., X_n^* are independent random variables such that X_i^* and X_i have the same distribution for each i, and φ is any superadditive function such that the expectations above exist.
A sequence {X_n, n ≥ 1} of random variables is said to be NSD if, for all n ≥ 1, the random vector (X_1, X_2, ..., X_n) is NSD.
Hu [12] gave an example illustrating that NSD does not imply NA and posed the open problem of whether NA implies NSD. In addition, he provided some basic properties and three structural theorems for NSD. Christofides and Vaggelatou [7] solved this open problem and showed that NA implies NSD. The NSD structure thus extends the NA structure; it is sometimes more useful and can be used to obtain many important probability inequalities. Eghbal et al. [8] derived two maximal inequalities and a strong law of large numbers for quadratic forms of NSD random variables, and Eghbal et al. [9] provided some Kolmogorov inequalities for quadratic forms of nonnegative uniformly bounded NSD random variables.
This paper is organized as follows. In Section 2, some preliminary lemmas and inequalities for NSD random variables are provided. In Section 3, complete convergence for moving-average processes based on an identically distributed doubly infinite sequence and the Marcinkiewicz–Zygmund strong law of large numbers for NSD random variables are presented. These results improve the corresponding results of Amini et al. [2] and Sadeghi and Bozorgnia [19].
Throughout the paper, C denotes a positive constant which may differ from one appearance to the next. The notation a_n ≪ b_n (respectively, a_n ≫ b_n) means that there exists a constant C > 0 such that a_n ≤ C b_n (respectively, a_n ≥ C b_n), and a_n = O(b_n) means that there exists a constant C > 0 such that a_n ≤ C b_n.
2. PRELIMINARIES
We provide some preliminary facts needed to prove our main results. The first two lemmas were proved
by Hu [12] and Wang et al. [20], respectively.
Lemma 2.1. If (X_1, X_2, ..., X_n) is NSD and g_1, g_2, ..., g_n are nondecreasing functions, then (g_1(X_1), g_2(X_2), ..., g_n(X_n)) is NSD.
Lemma 2.2 (Rosenthal-type maximal inequality). Let {X_n; n ≥ 1} be a sequence of NSD random variables with E|X_i| < ∞ for each i ≥ 1, and let {X_n^*; n ≥ 1} be a sequence of independent random variables such that X_i^* and X_i have the same distribution for each i ≥ 1. Then, for all n ≥ 1 and p ≥ 1,
$$E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^p \le E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i^*\Big|^p, \tag{2}$$
$$E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^p \le 2^{3-p}\sum_{i=1}^{n} E|X_i|^p \quad \text{for } 1 < p \le 2, \tag{3}$$
and
$$E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^p \le 2\Big(\frac{15p}{\ln p}\Big)^p\Bigg[\sum_{i=1}^{n} E|X_i|^p + \Big(\sum_{i=1}^{n} EX_i^2\Big)^{p/2}\Bigg] \quad \text{for } p > 2. \tag{4}$$
Definition 2.3. A real-valued function l(x), positive and measurable on [A, ∞) for some A > 0,
is said to be slowly varying if limx→∞ (l(λx)/l(x)) = 1 for each λ > 0.
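Definition 2.3 can be illustrated numerically by contrasting a slowly varying function with one that is not. In the hypothetical Python sketch below, l(x) = log x is slowly varying while l(x) = √x is not; both example functions are ours, chosen only for illustration.

```python
import math

def ratio(l, lam, x):
    """l(lam * x) / l(x); tends to 1 as x -> infinity iff l is slowly varying at lam."""
    return l(lam * x) / l(x)

log = lambda x: math.log(x)    # l(x) = log x: slowly varying
power = lambda x: x ** 0.5     # l(x) = sqrt(x): NOT slowly varying

for x in (1e2, 1e4, 1e8):
    print(ratio(log, 2.0, x), ratio(power, 2.0, x))
# the log-ratios approach 1 as x grows, while the sqrt-ratios
# stay at sqrt(2) ≈ 1.414 for every x
```

The convergence for log x is slow (the ratio at x = 10⁸ is still about 1.04), which is exactly why dyadic blocking arguments, as in Lemma 2.4, are the standard tool for handling the factor l(n).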
We have the following facts for slowly varying functions.
Lemma 2.4 (Zhidong and Chun [23]). Let l(x) > 0 be a slowly varying function as x → ∞. Then:
(i) $\lim_{k\to\infty} \sup_{2^k \le x \le 2^{k+1}} \dfrac{l(x)}{l(2^k)} = 1$;
(ii) $C 2^{kr} l(\eta 2^k) \le \sum_{i=1}^{k} 2^{ir} l(\eta 2^i) \le C 2^{kr} l(\eta 2^k)$ for every r > 0, η > 0, and positive integer k;
(iii) $C 2^{kr} l(\eta 2^k) \le \sum_{i=k}^{\infty} 2^{ir} l(\eta 2^i) \le C 2^{kr} l(\eta 2^k)$ for every r < 0, η > 0, and positive integer k;
(iv) $\lim_{x\to\infty} x^{\delta} l(x) = \infty$ and $\lim_{x\to\infty} x^{-\delta} l(x) = 0$ for every δ > 0.
3. COMPLETE CONVERGENCE
The concept of complete convergence was introduced by Hsu and Robbins [11]. A sequence of random variables {X_n; n ≥ 1} is said to converge completely to a constant θ if
$$\sum_{n=1}^{\infty} P(|X_n - \theta| > \varepsilon) < \infty \quad \text{for all } \varepsilon > 0.$$
Hsu and Robbins [11] proved that the sequence of arithmetic means of i.i.d. random variables converges completely to the expected value if the variance of the summands is finite. Erdős [10] proved the converse. The Hsu–Robbins–Erdős result is a fundamental theorem of probability theory and has been generalized and extended in several directions by many authors.
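The summability behind the Hsu–Robbins theorem can be explored by Monte Carlo. The sketch below estimates p_n = P(|sample mean of n summands| > ε) for i.i.d. standard normal summands at a few values of n; the distribution, ε, and sample sizes are toy assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, reps = 0.5, 20_000

# Estimate p_n = P(|mean of n i.i.d. N(0,1) variables| > eps); finite variance
# makes the series sum_n p_n converge, which is the Hsu-Robbins theorem.
est = {}
for n in (10, 40, 160):
    means = rng.standard_normal((reps, n)).mean(axis=1)
    est[n] = (np.abs(means) > eps).mean()
print(est)   # the estimated probabilities drop off rapidly as n grows
```

For Gaussian summands the decay is in fact exponential in n, far faster than the mere summability that complete convergence requires.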
Now we state the main results of this article.
Theorem 3.1. Let l(x) > 0 be a slowly varying function at infinity, 1 < p ≤ 2, 0 < α < 1, and αp > 1. Suppose that {X_n, n ≥ 1} is a moving average process based on a sequence {Y_i, i = 0, ±1, ±2, ...} of identically distributed centered NSD random variables. If E|Y_1|^p l(|Y_1|^{1/α}) < ∞ then, for all ε > 0,
$$\sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Big(\max_{1\le k\le n}|S_k| \ge \varepsilon n^{\alpha}\Big) < \infty \tag{5}$$
and
$$\sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Big(\sup_{k\ge n}|S_k/k^{\alpha}| \ge \varepsilon\Big) < \infty. \tag{6}$$
Proof. Recall that
$$\sum_{k=1}^{n} X_k = \sum_{k=1}^{n}\sum_{i=-\infty}^{\infty} a_i Y_{i+k} = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j, \quad n \ge 1, \qquad \text{and} \qquad \sum_{i=-m}^{m} |a_i| = O(1) \ \text{ as } m \to \infty.$$
Without loss of generality, we assume that a_i > 0 for all i (otherwise, we use a_i^+ and a_i^- instead of a_i and note that a_i = a_i^+ - a_i^-). Denote
$$Y_i^{(n)} = -n^{\alpha} I(Y_i < -n^{\alpha}) + Y_i I(|Y_i| \le n^{\alpha}) + n^{\alpha} I(Y_i > n^{\alpha}), \quad n \ge 1.$$
So, for n ≥ 1,
$$Y_i \le n^{\alpha} I(|Y_i| > n^{\alpha}) + Y_i I(|Y_i| > n^{\alpha}) + Y_i^{(n)}.$$
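The truncation Y_i^{(n)} is exactly a clip of Y_i to the interval [−n^α, n^α] and, being a nondecreasing function of Y_i, it preserves the NSD property by Lemma 2.1. A minimal Python sketch (the level c = 2.0 stands in for n^α and is an arbitrary illustrative value):

```python
import numpy as np

def truncate(y, c):
    """Y^{(n)} with level c = n**alpha:
       -c if y < -c, y if |y| <= c, c if y > c (a nondecreasing map of y)."""
    return np.clip(y, -c, c)

# Monotonicity is what lets Lemma 2.1 carry NSD over to the truncated sequence.
ys = np.linspace(-5, 5, 1001)
out = truncate(ys, 2.0)
print(np.all(np.diff(out) >= 0))   # → True: the map is nondecreasing
```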
Then, for any ε > 0,
$$\sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Big(\max_{1\le k\le n}|S_k| > n^{\alpha}\varepsilon\Big)$$
$$\le \sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Bigg(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \big[n^{\alpha} I(|Y_j| > n^{\alpha}) + Y_j I(|Y_j| > n^{\alpha})\big]\Big| > n^{\alpha}\varepsilon/2\Bigg)$$
$$+ \sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Bigg(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j^{(n)}\Big| > n^{\alpha}\varepsilon/2\Bigg) =: I_1 + I_2. \tag{7}$$
Hence, in order to prove (5), it suffices to establish that I_1 < ∞ and I_2 < ∞. Applying Lemma 2.4 (iv) and taking the condition E|Y_1|^p l(|Y_1|^{1/α}) < ∞ into account, we conclude that
$$E|Y_1|^{p-\gamma} < \infty \quad \text{for any } \gamma > 0. \tag{8}$$
Since p > 1, Markov's inequality yields
$$I_1 \ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha-2}\, l(n)\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k}\big[n^{\alpha} I(|Y_j| > n^{\alpha}) + Y_j I(|Y_j| > n^{\alpha})\big]\Big|$$
$$\le \sum_{n=1}^{\infty} n^{\alpha p-\alpha-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \big[n^{\alpha+1} P(|Y_1| > n^{\alpha}) + n\, E|Y_1| I(|Y_1| > n^{\alpha})\big]$$
$$\ll 2\sum_{n=1}^{\infty} n^{\alpha p-\alpha-1}\, l(n)\, E|Y_1| I(|Y_1| > n^{\alpha})$$
$$= \sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1} n^{\alpha p-\alpha-1}\, l(n)\, E|Y_1| I(|Y_1| > n^{\alpha})$$
$$\ll \sum_{j=1}^{\infty} 2^{\alpha(p-1)j}\, l(2^j)\, E|Y_1| I(|Y_1| > 2^{\alpha j}) \quad \text{(by Lemma 2.4 (i))}$$
$$= \sum_{j=1}^{\infty} 2^{\alpha(p-1)j}\, l(2^j) \sum_{k=j}^{\infty} E|Y_1| I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)})$$
$$= \sum_{k=1}^{\infty} E|Y_1| I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) \sum_{j=1}^{k} 2^{\alpha(p-1)j}\, l(2^j)$$
$$\ll \sum_{k=1}^{\infty} 2^{\alpha(p-1)k}\, l(2^k)\, E|Y_1| I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) \quad \text{(by Lemma 2.4 (ii))}$$
$$\ll \sum_{k=1}^{\infty} 2^{\alpha p k}\, l(2^k)\, P(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) \ll E\{|Y_1|^p l(|Y_1|^{1/\alpha})\} < \infty.$$
To prove that I_2 < ∞, we first show that
$$n^{-\alpha} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} EY_j^{(n)}\Big| \to 0 \quad \text{as } n \to \infty. \tag{9}$$
In the case 0 < α < 1, p > 1, and 0 < γ < min{(αp−1)/α, p−1}, we obtain by (8) the following estimate:
$$n^{-\alpha} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} EY_j^{(n)}\Big| \le n^{-\alpha} \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} \big|EY_j^{(n)}\big|$$
$$\le n^{-\alpha} \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} \big\{E|Y_1| I(|Y_1| > n^{\alpha}) + n^{\alpha} P(|Y_1| > n^{\alpha})\big\}$$
$$\le C n^{1-\alpha}\big\{E|Y_1| I(|Y_1| > n^{\alpha}) + n^{\alpha} P(|Y_1| > n^{\alpha})\big\} \le C n^{1-\alpha}\, E|Y_1| I(|Y_1| > n^{\alpha})$$
$$= C n^{1-\alpha}\, E|Y_1|^{p-\gamma} |Y_1|^{1-p+\gamma} I(|Y_1| > n^{\alpha}) \le C n^{1+\alpha\gamma-\alpha p}\, E|Y_1|^{p-\gamma} I(|Y_1| > n^{\alpha}) \to 0 \quad \text{as } n \to \infty.$$
Hence, from (9) it follows that, for n large enough and every ε > 0,
$$n^{-\alpha} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} EY_j^{(n)}\Big| < \varepsilon/4.$$
This implies that
$$I_2 \le \sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Bigg(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \big(Y_j^{(n)} - EY_j^{(n)}\big)\Big| > n^{\alpha}\varepsilon/4\Bigg).$$
By Lemma 2.1, for every fixed n ≥ 1, the sequence {Y_j^{(n)}, j ≥ 1} consists of NSD random variables. Thus, by Markov's inequality and Part J of Theorem 1 in [6], it follows from (4) that, for q > 2,
$$I_2 \ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n)\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \big(Y_j^{(n)} - EY_j^{(n)}\big)\Big|^q$$
$$\le \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \Big(\sum_{i=-\infty}^{\infty} a_i\Big)^{q-1} \sum_{i=-\infty}^{\infty} a_i\, E\max_{1\le k\le n}\Big|\sum_{j=i+1}^{i+k} \big(Y_j^{(n)} - EY_j^{(n)}\big)\Big|^q$$
$$\le C \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \Bigg\{\sum_{j=i+1}^{i+n} E\big|Y_j^{(n)} - EY_j^{(n)}\big|^q + \Big(\sum_{j=i+1}^{i+n} E\big(Y_j^{(n)} - EY_j^{(n)}\big)^2\Big)^{q/2}\Bigg\} =: I_3 + I_4.$$
To estimate I_3, by the C_r inequality, we have
$$I_3 \ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} \Big\{E\big|Y_j^{(n)}\big|^q + \big|EY_j^{(n)}\big|^q\Big\}$$
$$\le 2\sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} E\big|Y_j^{(n)}\big|^q \quad \text{(by Jensen's inequality)}$$
$$= 2\sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} E\big|{-n^{\alpha}} I(Y_1 < -n^{\alpha}) + Y_1 I(|Y_1| \le n^{\alpha}) + n^{\alpha} I(Y_1 > n^{\alpha})\big|^q$$
$$\le C \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-1}\, l(n)\, E\big\{|Y_1| I(|Y_1| \le n^{\alpha}) + n^{\alpha} I(|Y_1| > n^{\alpha})\big\}^q$$
$$\ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-1}\, l(n)\, E|Y_1|^q I(|Y_1| \le n^{\alpha}) + \sum_{n=1}^{\infty} n^{\alpha p-1}\, l(n)\, P(|Y_1| > n^{\alpha}) \quad \text{(by the } C_r \text{ inequality)} =: I_5 + I_6.$$
To estimate I_5, we have for p ≤ 2:
$$I_5 = \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-1}\, l(n)\, E|Y_1|^q I(|Y_1| \le n^{\alpha}) = \sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1} n^{\alpha p-\alpha q-1}\, l(n)\, E|Y_1|^q I(|Y_1| \le n^{\alpha})$$
$$\ll \sum_{j=1}^{\infty} 2^{\alpha(p-q)j}\, l(2^j)\, E|Y_1|^q I(|Y_1| \le 2^{\alpha(j+1)}) \quad \text{(by Lemma 2.4 (i))}$$
$$\ll \sum_{j=1}^{\infty} 2^{\alpha(p-q)j}\, l(2^j) \sum_{k=1}^{j} E|Y_1|^q I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) + \sum_{j=1}^{\infty} 2^{\alpha(p-q)j}\, l(2^j)$$
$$\le \sum_{k=1}^{\infty} E|Y_1|^q I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) \sum_{j=k}^{\infty} 2^{\alpha(p-q)j}\, l(2^j) + C$$
$$\ll \sum_{k=1}^{\infty} 2^{\alpha(p-q)k}\, l(2^k)\, E|Y_1|^q I(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) + C \quad \text{(by Lemma 2.4 (iii))}$$
$$\ll \sum_{k=1}^{\infty} 2^{\alpha p k}\, l(2^k)\, P(2^{\alpha k} < |Y_1| \le 2^{\alpha(k+1)}) + C \ll E\{|Y_1|^p l(|Y_1|^{1/\alpha})\} + C < \infty.$$
To estimate I_6, by Lemma 2.4 (ii) and arguments similar to those used in proving the finiteness of I_5, we can establish that I_6 < ∞.
Now we prove that I_4 < ∞. By the C_r inequality, we obtain
$$I_4 \ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \sum_{i=-\infty}^{\infty} a_i \Big(\sum_{j=i+1}^{i+n} E\big|Y_j^{(n)}\big|^2\Big)^{q/2}$$
$$\ll \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2}\, l(n) \big\{n^{2\alpha+1} P(|Y_1| > n^{\alpha}) + n\, E|Y_1|^2 I(|Y_1| \le n^{\alpha})\big\}^{q/2}$$
$$\ll \sum_{n=1}^{\infty} n^{\alpha p-2+\frac{q}{2}}\, l(n)\, \{P(|Y_1| > n^{\alpha})\}^{q/2} + \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2+\frac{q}{2}}\, l(n)\, \big\{E|Y_1|^2 I(|Y_1| \le n^{\alpha})\big\}^{q/2} =: I_7 + I_8.$$
To evaluate I_7, we choose q > 2 large enough such that (αp−2) + (q/2)(1−αp+αγ) < −1. Hence, in the case 0 < α < 1, p > 1, and 0 < γ < min{(αp−1)/α, p−1}, we obtain
$$I_7 \le \sum_{n=1}^{\infty} n^{\alpha p-2+\frac{q}{2}-\frac{\alpha q}{2}}\, l(n)\, \{E|Y_1| I(|Y_1| > n^{\alpha})\}^{q/2}$$
$$= \sum_{n=1}^{\infty} n^{\alpha p-2+\frac{q}{2}-\frac{\alpha q}{2}}\, l(n)\, \big\{E|Y_1|^{p-\gamma} |Y_1|^{1-p+\gamma} I(|Y_1| > n^{\alpha})\big\}^{q/2}$$
$$\le \sum_{n=1}^{\infty} n^{(\alpha p-2)+\frac{q}{2}(1-\alpha p+\alpha\gamma)}\, l(n)\, \big\{E|Y_1|^{p-\gamma} I(|Y_1| > n^{\alpha})\big\}^{q/2} < \infty.$$
Finally, to evaluate I_8, with the same choice of q and γ, we have
$$I_8 = \sum_{n=1}^{\infty} n^{\alpha p-\alpha q-2+\frac{q}{2}}\, l(n)\, \big\{E|Y_1|^{p-\gamma} |Y_1|^{2-p+\gamma} I(|Y_1| \le n^{\alpha})\big\}^{q/2}$$
$$\le \sum_{n=1}^{\infty} n^{(\alpha p-2)+\frac{q}{2}(1-\alpha p+\alpha\gamma)}\, l(n)\, \big\{E|Y_1|^{p-\gamma} I(|Y_1| \le n^{\alpha})\big\}^{q/2} < \infty.$$
Relation (5) is proved, and we now prove (6). By Lemma 2.4 and Lemma 3 in [6], for r = αp and ε* = ε/4^α,
$$\sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Big(\sup_{k\ge n}|S_k/k^{\alpha}| > \varepsilon\Big) = \sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1} n^{\alpha p-2}\, l(n)\, P\Big(\sup_{k\ge n}|S_k/k^{\alpha}| > \varepsilon\Big)$$
$$\ll \sum_{j=0}^{\infty} 2^{(\alpha p-1)j}\, l(2^j)\, P\Big(\sup_{k\ge 2^j}|S_k/k^{\alpha}| > \varepsilon\Big) \ll \sum_{n=1}^{\infty} n^{\alpha p-2}\, l(n)\, P\Big(\max_{1\le k\le n}|S_k| > \varepsilon^* n^{\alpha}\Big) < \infty.$$
This completes the proof of the theorem. □
Theorem 3.2. Let l(x) > 0 be a slowly varying and nondecreasing function. Suppose that {X_n, n ≥ 1} is a moving average process based on a sequence {Y_i; i = 0, ±1, ±2, ...} of identically distributed NSD random variables. If EY_1 = 0 and E|Y_1|^{1/α} l(|Y_1|^{1/α}) < ∞, with 1/2 < α < 1, then, for all ε > 0,
$$\sum_{n=1}^{\infty} \frac{l(n)}{n}\, P\Big(\max_{1\le k\le n}|S_k| \ge \varepsilon n^{\alpha}\Big) < \infty. \tag{10}$$
Proof. Without loss of generality, we assume that a_i > 0 for all i. Recall that
$$Y_i^{(n)} = -n^{\alpha} I(Y_i < -n^{\alpha}) + Y_i I(|Y_i| \le n^{\alpha}) + n^{\alpha} I(Y_i > n^{\alpha}), \quad n \ge 1.$$
Then, for any ε > 0,
$$\sum_{n=1}^{\infty} n^{-1}\, l(n)\, P\Big(\max_{1\le k\le n}|S_k| > n^{\alpha}\varepsilon\Big)$$
$$\le \sum_{n=1}^{\infty} n^{-1}\, l(n)\, P\Bigg(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \big[n^{\alpha} I(|Y_j| > n^{\alpha}) + Y_j I(|Y_j| > n^{\alpha})\big]\Big| > n^{\alpha}\varepsilon/2\Bigg)$$
$$+ \sum_{n=1}^{\infty} n^{-1}\, l(n)\, P\Bigg(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j^{(n)}\Big| > n^{\alpha}\varepsilon/2\Bigg) =: I_1^* + I_2^*. \tag{11}$$
Hence, in order to prove (10), it suffices to show that I_1^* < ∞ and I_2^* < ∞. The finiteness of I_1^* can be proved similarly to that of I_1.
We now prove the finiteness of I_2^*. We have
$$n^{-\alpha} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} EY_j^{(n)}\Big| \le n^{-\alpha} \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} \big|EY_j^{(n)}\big|$$
$$\le 2n^{1-\alpha}\, E|Y_1| I(|Y_1| > n^{\alpha}) \quad \text{(since } EY_1 = 0\text{)}$$
$$= 2n^{1-\alpha}\, E|Y_1|^{1/\alpha} |Y_1|^{1-1/\alpha} I(|Y_1| > n^{\alpha}) \le 2E\{|Y_1|^{1/\alpha} I(|Y_1| > n^{\alpha})\} \to 0 \quad \text{as } n \to \infty. \tag{12}$$
By Lemma 2.1, it is easy to show that, for every fixed n ≥ 1, the sequence {Y_j^{(n)}; j ≥ 1} consists of NSD random variables. By analogy with the proof of the relation I_2 < ∞, we have
$$I_2^* \ll \sum_{n=1}^{\infty} n^{-2\alpha-1}\, l(n) \sum_{i=-\infty}^{\infty} a_i\, E\max_{1\le k\le n}\Big|\sum_{j=i+1}^{i+k} \big(Y_j^{(n)} - EY_j^{(n)}\big)\Big|^2$$
$$\ll \sum_{n=1}^{\infty} n^{-2\alpha-1}\, l(n) \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} E\big|Y_j^{(n)}\big|^2 \quad \text{(by (3))}$$
$$\ll \sum_{n=1}^{\infty} n^{-2\alpha-1}\, l(n) \big\{n^{2\alpha+1} P(|Y_1| > n^{\alpha}) + n\, E|Y_1|^2 I(|Y_1| \le n^{\alpha})\big\} =: I_3^* + I_4^*.$$
To estimate I_3^* for α < 1, by Lemma 2.4 (i), (ii), we can write
$$I_3^* \le \sum_{n=1}^{\infty} n^{-\alpha}\, l(n)\, E|Y_1| I(|Y_1| > n^{\alpha}) = \sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1} n^{-\alpha}\, l(n)\, E|Y_1| I(|Y_1| > n^{\alpha}) \ll E\{|Y_1|^{1/\alpha} l(|Y_1|^{1/\alpha})\} < \infty.$$
To estimate I_4^* for α > 1/2, by Lemma 2.4 (i), (iii), we obtain
$$I_4^* \le \sum_{n=1}^{\infty} n^{-2\alpha}\, l(n)\, E|Y_1|^2 I(|Y_1| \le n^{\alpha}) = \sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1} n^{-2\alpha}\, l(n)\, E|Y_1|^2 I(|Y_1| \le n^{\alpha}) \ll E\{|Y_1|^{1/\alpha} l(|Y_1|^{1/\alpha})\} < \infty.$$
This completes the proof. □
As an application of Theorem 3.2, we can deduce the Marcinkiewicz–Zygmund strong law of large
numbers as follows.
Corollary 3.3. Let {X_n, n ≥ 1} be a moving average process based on a sequence {Y_i; i = 0, ±1, ±2, ...} of identically distributed NSD random variables. If EY_1 = 0 and E|Y_1|^{1/α} < ∞, with 1/2 < α < 1, then
$$\frac{S_n}{n^{\alpha}} \to 0 \quad \text{a.s.}$$
Proof. By applying Theorem 3.2 with l(x) = 1, we can write
$$\sum_{n=1}^{\infty} n^{-1}\, P\Big(\max_{1\le k\le n}|S_k| > \varepsilon n^{\alpha}\Big) < \infty \quad \text{for all } \varepsilon > 0.$$
Hence, for all ε > 0,
$$\frac{1}{2}\sum_{r=1}^{\infty} P\Big(\max_{1\le k\le 2^{r-1}}|S_k| > \varepsilon 2^{r\alpha}\Big) \le \sum_{r=1}^{\infty}\sum_{n=2^{r-1}}^{2^r-1} n^{-1}\, P\Big(\max_{1\le k\le n}|S_k| > \varepsilon n^{\alpha}\Big) = \sum_{n=1}^{\infty} n^{-1}\, P\Big(\max_{1\le k\le n}|S_k| > \varepsilon n^{\alpha}\Big) < \infty.$$
So the Borel–Cantelli lemma yields that, as r → ∞,
$$2^{-r\alpha} \max_{1\le k\le 2^r}|S_k| \to 0 \quad \text{a.s.}$$
For every positive integer n, there exists a nonnegative integer r such that 2^{r−1} ≤ n < 2^r. Thus,
$$n^{-\alpha}|S_n| \le \max_{2^{r-1}\le m<2^r} m^{-\alpha}|S_m| \le 2^{-\alpha(r-1)} \max_{2^{r-1}\le k<2^r}|S_k| \to 0 \quad \text{a.s.}$$
This implies that S_n/n^α → 0 a.s. □
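Corollary 3.3 can be illustrated by simulation. The sketch below uses independent Gaussian innovations (independence is a special case of NSD), a truncated coefficient window, and α = 0.75; all numeric choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.75                     # 1/2 < alpha < 1 as in Corollary 3.3
M, N = 20, 200_000
idx = np.arange(-M, M + 1)
a = 0.5 ** np.abs(idx)           # illustrative absolutely summable coefficients

# Centered innovations with E|Y|^{1/alpha} < infinity (Gaussian suffices).
Y = rng.standard_normal(N + 2 * M)

# Truncated moving average X_n and its partial sums S_n; the symmetric kernel
# makes np.convolve match the moving-average form.
X = np.convolve(Y, a, mode="valid")[:N]
S = np.cumsum(X)

ns = np.array([10 ** k for k in range(2, 6)])
ratios = np.abs(S[ns - 1]) / ns.astype(float) ** alpha
print(ratios)   # should shrink toward 0 along the sequence, as |S_n|/n^alpha -> 0
```

Since |S_n| grows on the order of √n for these innovations while the normalization is n^0.75, the printed ratios decay roughly like n^{−1/4}, in line with the almost sure convergence asserted by the corollary.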
REFERENCES
1. K. Alam and K. M. L. Saxena, "Positive dependence in multivariate distributions," Comm. Statist. Theory Methods 10, 1183 (1981).
2. M. Amini, H. Nili, and A. Bozorgnia, "Complete convergence of moving-average processes under negative dependent sub-Gaussian assumptions," Bull. Iran. Math. Soc. 38, 843 (2012).
3. H. W. Block, T. H. Savits, and M. Shaked, "Some concepts of negative dependence," Ann. Probab. 10, 765 (1982).
4. A. V. Bulinski and A. Shashkin, Limit Theorems for Associated Random Fields and Related Systems (Fizmatlit, Moscow, 2008) [in Russian].
5. R. M. Burton and H. Dehling, "Large deviations for some weakly dependent random processes," Statist. Probab. Lett. 9, 397 (1990).
6. P. Chen, T. C. Hu, and A. Volodin, "Limiting behavior of moving average processes under ϕ-mixing assumption," Statist. Probab. Lett. 79, 105 (2009).
7. T. C. Christofides and E. Vaggelatou, "A connection between supermodular ordering and positive/negative association," J. Multivariate Anal. 88, 138 (2004).
8. N. Eghbal, M. Amini, and A. Bozorgnia, "Some maximal inequalities for quadratic forms of negative superadditive dependence random variables," Statist. Probab. Lett. 80, 587 (2010).
9. N. Eghbal, M. Amini, and A. Bozorgnia, "On the Kolmogorov inequalities for quadratic forms of dependent uniformly bounded random variables," Statist. Probab. Lett. 81, 1112 (2011).
10. P. Erdős, "On a theorem of Hsu and Robbins," Ann. Math. Statist. 20, 286 (1949).
11. P. L. Hsu and H. Robbins, "Complete convergence and the law of large numbers," Proc. Nat. Acad. Sci. USA 33, 25 (1947).
12. T. Z. Hu, "Negatively superadditive dependence of random variables with applications," Chin. J. Appl. Probab. Statist. 16, 133 (2000).
13. I. A. Ibragimov, "Some limit theorems for stationary processes," Theory Probab. Appl. 7, 349 (1962).
14. K. Joag-Dev and F. Proschan, "Negative association of random variables with applications," Ann. Statist. 11, 286 (1983).
15. J. H. B. Kemperman, "On the FKG-inequalities for measures on a partially ordered space," Nederl. Akad. Wetensch. Proc. Ser. A 80, 313 (1977).
16. D. Li, M. B. Rao, and X. C. Wang, "Complete convergence of moving average processes," Statist. Probab. Lett. 14, 111 (1992).
17. Y. X. Li and L. Zhang, "Complete moment convergence of moving-average processes under dependence assumptions," Statist. Probab. Lett. 70, 191 (2004).
18. H. Y. Liang, T. S. Kim, and J. I. Baek, "On the convergence of moving average processes under negatively associated random variables," Indian J. Pure Appl. Math. 34, 461 (2003).
19. H. Sadeghi and A. Bozorgnia, "Moving average and complete convergence," Bull. Iranian Math. Soc. 20, 37 (2012).
20. X. Wang, X. Deng, L. L. Zheng, and S. H. Hu, "Complete convergence for arrays of rowwise negatively superadditive-dependent random variables and its applications," Statistics 48, 834 (2014).
21. W. Yang, X. Wang, N. Ling, and S. Hu, "On complete convergence of moving average processes for AANA sequence," Discrete Dyn. Nat. Soc. (2012).
22. L. Zhang, "Complete convergence of moving average processes under dependence assumptions," Statist. Probab. Lett. 30, 165 (1996).
23. B. Zhidong and S. Chun, "The complete convergence for partial sums of i.i.d. random variables," Sci. Sinica 28, 1261 (1985).