STRONG APPROXIMATION OF THE QUANTILE PROCESSES AND
ITS APPLICATIONS UNDER STRONG MIXING PROPERTIES
by
*S.B. Fotopoulos
*S.K. Ahn
and **S. Cho
*Washington State University
**Seoul National University
S.B. Fotopoulos
Department of Management and Systems
Washington State University
Pullman, WA 99164-4726
U.S.A.
STRONG APPROXIMATION OF THE QUANTILE PROCESSES AND
ITS APPLICATIONS UNDER STRONG MIXING PROPERTIES
S.B. Fotopoulos, S.K. Ahn, and S. Cho
Abstract. Given some regularity conditions on the distribution F(·) of a random sample X_1, ..., X_n emanating from a strictly stationary sequence of random variables satisfying a strong mixing condition, it is shown that the sequence of quantile processes {n^{1/2} f(F^{-1}(s))(F_n^{-1}(s) - F^{-1}(s)) : 0 < s < 1} behaves like a sequence of Brownian bridges {B_n(s) : 0 < s < 1}. The latter is then utilized to construct (i) simultaneous confidence bounds for the unknown quantile function F^{-1}(s), and (ii) a tolerance interval for predicting a future observation. Some numerical investigation of the results is also discussed.
Key Words and Phrases: Strong Mixing, Empirical Process, Quantile Process, Prediction
Interval.
Abbreviated title: Prediction Interval for Strong Mixing Processes
1. Introduction
Let {X_n : n ∈ Z} be a real-valued strictly stationary sequence of random variables defined on a probability space (Ω, F, P) with common distribution F(x). The nth empirical and quantile measures are given by

F_n(x) = n^{-1} Σ_{i=1}^{n} 1_{(-∞, x]}(X_i),  -∞ < x ≤ ∞,

and

F_n^{-1}(s) = inf{x : F_n(x) ≥ s},

respectively, and their processes are defined as follows:

C_n(x) = n^{1/2}(F_n(x) - F(x)),  -∞ < x ≤ ∞,

and

Q_n(s) = n^{1/2}(F_n^{-1}(s) - F^{-1}(s)),  0 < s < 1.
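As a purely illustrative aside (not part of the original development), the empirical measure F_n, the quantile measure F_n^{-1}, and the processes C_n and Q_n can be computed directly from a sample; the sketch below uses a standard exponential F, an arbitrary choice made only for this illustration:

```python
import numpy as np

def empirical_cdf(x, sample):
    # F_n(x) = n^{-1} * #{i : X_i <= x}
    x = np.atleast_1d(x)
    return np.mean(sample[:, None] <= x[None, :], axis=0)

def empirical_quantile(s, sample):
    # F_n^{-1}(s) = inf{x : F_n(x) >= s}, i.e. the order statistic X_(ceil(ns))
    srt = np.sort(sample)
    n = len(srt)
    idx = np.ceil(np.atleast_1d(s) * n).astype(int) - 1
    return srt[np.clip(idx, 0, n - 1)]

# illustrative sample: F(x) = 1 - exp(-x), so F^{-1}(s) = -log(1 - s)
rng = np.random.default_rng(0)
n = 500
X = rng.exponential(size=n)

x_grid = np.linspace(0.1, 3.0, 30)
s_grid = np.linspace(0.05, 0.95, 19)
C_n = np.sqrt(n) * (empirical_cdf(x_grid, X) - (1.0 - np.exp(-x_grid)))
Q_n = np.sqrt(n) * (empirical_quantile(s_grid, X) - (-np.log(1.0 - s_grid)))
```

Both processes fluctuate on the O(1) scale, as the Gaussian approximations discussed below predict.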
We will investigate how well C_n(x) and Q_n(s) can be approximated by Gaussian processes. The literature on the strong approximation of C_n(x) and Q_n(s) for independently and identically distributed sequences of random variables is extensive, with prominent contributions from Csaki (1977), Csorgo and Revesz (1978, 1981), and Csorgo (1983), to name a few. For the dependent case, one needs to define a mixing coefficient as follows. For any collection X of random variables, let B(X) denote the Borel field generated by X. Thus, for -∞ ≤ m ≤ n ≤ ∞, define F_m^n = B(X_k : m ≤ k < n). Hence, for each n ≥ 1, define

(1.1) α_{r,s}(n) = α_{r,s}(F_{-∞}^0, F_n^∞) = sup{ |P(A ∩ B) - P(A)P(B)| / (P(A)^r P(B)^s) : A ∈ F_{-∞}^0, B ∈ F_n^∞, P(A)P(B) ≠ 0 } → 0,

where 0 ≤ r, s ≤ 1. Since the process is stationary, it yields α_{r,s}(n) = α_{r,s}(F_{-∞}^j, F_{j+n}^∞) for any integer j. With r = s = 0, a process satisfying condition (1.1) is called a strong mixing process, and α(n) = α_{0,0}(n) is called a strong mixing coefficient; with r = 1 and s = 0, one obtains the uniform mixing process, and φ(n) = α_{1,0}(n) is the uniform mixing coefficient. For the sake of completeness, we also designate the absolute regular process to be one whose absolute regular coefficient β(n) = E{sup_{A ∈ F_n^∞} |P(A | F_{-∞}^0) - P(A)|} ↓ 0.
Extensive information about conditions of type (1.1) and absolute regular processes can
be found in Bradley (1986). Philipp (1977), Berkes and Philipp (1977), and Yoshihara
(1979) have studied the large sample strong approximation properties for the empirical
process Cn(x) for strictly stationary and strong mixing sequences of random variables with
a strong mixing coefficient decreasing at polynomial order.
We have two main objectives in this work. The first is to establish strong approximation results for the nth quantile process under strict stationarity with a strong mixing coefficient decaying at a polynomial rate. The second, and more important, aim will be to try to understand what these results represent; this is achieved by demonstrating how to construct simultaneous confidence intervals for the quantile function and how to obtain one-step-ahead prediction intervals for future observations. In addition to the fact that these results are of considerable intrinsic interest, they are included here to show how much one can do from the understanding we shall develop. The results are attained by relating the empirical processes with those of the quantile. The link between the empirical process and the quantile process is treated here only in the strong mixing case. It is not hard to see that most of the argument also extends to other types of mixing, i.e., absolute regular and uniform mixing, although there are some nontrivial technical problems to overcome on the way. (One needs to establish lemmas similar to Lemmas 1, 2, and 4 below for absolute regular and uniform mixing.) The important fact to know, however, is that although some details change, the same intuition developed for the strong mixing case carries over qualitatively to uniform mixing and absolute regular processes.
2. Background and Results
In this section, our focus is on introducing some further notation, defining two types of Gaussian processes which play an important role in our approximation, and presenting the main results of the study.
We commence with notation. The symbol « denotes that the left-hand side is bounded by an unspecified constant times the right-hand side; it is used instead of the O(·) notation. In exactly the same way as in the i.i.d. case, we shall redefine the space on which the sequence {X_n : n ∈ Z} is generated to a space which is rich enough in the sense that a separable Gaussian process can be defined on it. Now, a separable Gaussian process {B(s) : 0 ≤ s ≤ 1} will be called a Brownian bridge if B(0) = B(1) = 0, E(B(s)) = 0, and it has covariance function E(B(s)B(s')) = Γ(s, s') for 0 ≤ s, s' ≤ 1. To define Γ(s, s'), we write

(2.1) g_n(s) = 1_{[0,s]}(U_n) - s,  n ≥ 1,

where {U_n : n ∈ Z} is a uniform-[0, 1] strictly stationary strong mixing sequence of random variables. Then, for 0 < s, s' ≤ 1,

(2.2) Γ(s, s') = E(g_1(s)g_1(s')) + Σ_{n=2}^{∞} {E(g_1(s)g_n(s')) + E(g_n(s)g_1(s'))},

provided the series on the right-hand side of Γ(s, s') is absolutely convergent.
The second separable Gaussian process which will be utilized here is a Kiefer-type process {K(s, t) : 0 ≤ s ≤ 1, t ≥ 0} with K(s, 0) = K(0, t) = K(1, t) = 0, E(K(s, t)) = 0, and covariance function Γ*(s, s', t, t') = min(t, t') Γ(s, s'), for t, t' ≥ 0 and 0 ≤ s, s' ≤ 1.
For convenient reference, the basic conditions on F(·), from which the various results are obtained, are gathered together here.

F1: F(x) is twice differentiable on (a, b), where a = sup{x : F(x) = 0} ≥ -∞ and b = inf{x : F(x) = 1} ≤ ∞,
F2: F' = f ≠ 0 on (a, b),
F3: sup_{0<s<1} |f'(F^{-1}(s))| < ∞,
F4: sup_{0<s<1} |f''(F^{-1}(s))| < c, for some constant c > 0,
F5: F(x) is strictly increasing on [a, b],
F6: f(x) is unimodal,
F7: sup_{a<x<b} F(x)(1 - F(x)) |f'(x)| / f^2(x) ≤ γ, for some γ > 0,
F8: A = lim sup_{x ↓ a} f(x) < ∞ and B = lim sup_{x ↑ b} f(x) < ∞,
F9: min(A, B) > 0,
F10: if A = 0 (resp. B = 0), then f is non-decreasing (resp. non-increasing) on an interval to the right of a (resp. to the left of b), and
F11: for sufficiently large n, f(F^{-1}(n^{-1} ℓ(n))) > c, where c > 0 and ℓ(n) is a slowly varying function of n with ℓ(n)^{1/λ} < log n.
Theorem 1. Let {X_i : i ∈ Z} be a strictly stationary real-valued sequence of random variables satisfying the strong mixing property with strong mixing coefficient α(n) « n^{-8}. Suppose that F1, F2 and F3 are satisfied. Then there exists a Brownian bridge defined on the same probability space as the above sequence with covariance function Γ(s, s'), 0 < s, s' ≤ 1, and a positive constant λ such that with probability one

sup_{0<s<1} |f(F^{-1}(s)) Q_n(s) - B_n(s)| « (log n)^{-λ}.  •
If we replace F3 by milder conditions, then a Theorem 6 version of Csorgo and Revesz (1978) can be formulated as follows.
Theorem 2. Let {X_i : i ∈ Z} be a strictly stationary real-valued sequence of random variables satisfying the strong mixing property with strong mixing coefficient α(n) « n^{-8}. Suppose that F1, F2 and F7 are satisfied. Then there exists a Kiefer process defined on the same probability space as the above sequence with covariance function min(n, n') Γ(s, s'), for 0 < n, n' < ∞ and 0 ≤ s, s' ≤ 1, such that

sup_{n^{-μ} ≤ s ≤ 1-n^{-μ}} |f(F^{-1}(s)) Q_n(s) - K(s, n)/n^{1/2}| « (log n)^{-λ},

for some 0 < μ < dα, d ∈ (0, 1/4), α ∈ (0, 1/120) and λ = 1/3840.

If, in addition to F1, F2 and F7, we assume that F8 and F9 hold, then

sup_{0<s<1} |f(F^{-1}(s)) Q_n(s) - K(s, n)/n^{1/2}| « (log n)^{-λ}.  •

Remark. If, in addition to F1, F2 and F7, we assume that F8, F10 and F11 hold, then

sup_{0<s<1} |f(F^{-1}(s)) Q_n(s) - K(s, n)/n^{1/2}| « (log n)^{-λ},

where λ is as above. The proof of this will be seen in the next section.
The next two theorems exploit the above results in showing how confidence contours for the quantile measures are obtained. As they stand, Theorems 1 and 2 do not provide immediate confidence bounds for the quantile function; this is because they depend on the unknown function 1/f(F^{-1}(s)). For this reason, an estimator of 1/f(F^{-1}(s)) is required. The estimator proposed in this study is shown to be strongly consistent. But first, two sets of assumptions are needed.
The proposed estimator is of kernel type, similar to that suggested for the independence case by Csorgo and Revesz (1984):
(2.3) ρ_n(s) = (1/h_n) ∫_0^1 K((u - s)/h_n) dF_n^{-1}(u).
Here K(·) is a measurable kernel function with the following assumptions:

K1: K(·) is a density function, which is absolutely continuous on (-∞, ∞), vanishing outside of the interval (-1/2, 1/2),
K2: ∫_{-1/2}^{1/2} x K(x) dx = 0,
K3: ∫_{-1/2}^{1/2} x^2 K(x) dx < ∞, and
K4: sup_{-∞<x<∞} |K'(x)| < ∞.
For the sequence of constants {h_n : n ∈ N}, we assume the following:

H1: h_n « (log log n / n)^{1/6}, and
H2: (log n)^{-λ} « h_n^{1/2} / log log n, where λ is defined in Theorem 1.
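To make (2.3) concrete, here is a minimal numerical sketch of a kernel-type estimator of 1/f(F^{-1}(s)). The rescaled Epanechnikov kernel, the bandwidth h = 0.1, and the uniform test sample are illustrative assumptions; the integral against dF_n^{-1} is discretized by placing the spacing X_(i+1) - X_(i) at u = i/n:

```python
import numpy as np

def K(x):
    # rescaled Epanechnikov kernel: a density vanishing outside (-1/2, 1/2),
    # symmetric (so K2 holds) and absolutely continuous (K1)
    x = np.asarray(x)
    return np.where(np.abs(x) < 0.5, 1.5 * (1.0 - (2.0 * x) ** 2), 0.0)

def rho_n(s, sample, h):
    # (1/h) * integral of K((u - s)/h) dF_n^{-1}(u): the measure dF_n^{-1}
    # places the spacing X_(i+1) - X_(i) at the point u = i/n
    srt = np.sort(sample)
    n = len(srt)
    u = np.arange(1, n) / n
    return np.sum(K((u - s) / h) * np.diff(srt)) / h

# uniform sample: f = 1 on (0, 1), so the target 1/f(F^{-1}(s)) equals 1
rng = np.random.default_rng(0)
X = rng.uniform(size=4000)
est = rho_n(0.5, X, h=0.1)
```

With n = 4000 and this bandwidth the estimate at s = 0.5 comes out close to the true value 1.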
With this preparation in mind, the second result is now in order.
Theorem 3. Let {X_i : i ∈ Z} be a strictly stationary real-valued sequence of random variables satisfying the strong mixing property with strong mixing coefficient α(n) « n^{-8}. Suppose that F1-F4, K1-K4, H1 and H2 are satisfied. Then there exists a Brownian bridge defined on the same probability space with E(B_n(s)) = 0 and E(B_n(s) B_n(s')) = Γ(s, s'), for 0 ≤ s, s' ≤ 1, such that

lim_{n↑∞} P(F_n^{-1}(s) - n^{-1/2} ρ_n(s) c ≤ F^{-1}(s) ≤ F_n^{-1}(s) + n^{-1/2} ρ_n(s) c; 0 < s < 1) = K(c),

where K(c) = P(sup_{0≤s≤1} |B_n(s)| < c).
•
The above result is an analog of Kolmogorov's classical theorem on the empirical distribution function. It should be pointed out that one-sided intervals can also be deduced from the above result. Theorems 1 and 2 may produce more direct and simpler confidence intervals for the quantile measure F^{-1}(s) (the kernel-type estimator of 1/f(F^{-1}(s)) is then unnecessary), as can be seen below.
The following theorem is a Csorgo and Horvath (1989) analog for the stationary case.
Theorem 4. Let {X_i : i ∈ Z} be a strictly stationary real-valued sequence of random variables satisfying the strong mixing property with mixing coefficient α(n) « n^{-8}. Then, for the sequence {δ_n = n^{-μ} : μ ∈ (0, dα), d ∈ (0, 1/4), α ∈ (0, 1/120), n ∈ N} of positive constants, the following statements

and

where B(s) is a Gaussian process with covariance function Γ(s, s'), 0 < s, s' < 1, hold true without assuming any further conditions on F.
•
The next result discussed here, which ties in with the proposed method, is the randomly-located tolerance interval. The general theory of constructing prediction intervals for future observations was developed and expounded in detail by Butler (1981). To give a fully detailed, self-contained version of some weak convergence results similar to lemmas found in Butler's work (2.1, 2.3, etc.), now under dependence, we would have to copy a few pages of detailed and essentially uninteresting calculations. Also, Cho and Miller (1987) have produced some of these results under strictly stationary uniform mixing sequences. Since this seems to be a somewhat unjustified addition, we shall assume that the reader is familiar with the Butler (1981) and Cho and Miller (1987) studies, and merely point out some results and ideas developed there and omit the proofs.
The class of 100α% tolerance intervals for X_{n+1} is given by
(2.4) {I(δ) = [F^{-1}(δ), F^{-1}(δ + α)] : 0 ≤ δ ≤ 1 - α}.

The member of (2.4) which supports the smallest trimmed variance, σ^2(δ*) say, i.e., I(δ*), is the chosen interval for predicting the one-step-ahead future observation X_{n+1}. Since F(·) is unknown, an estimator of δ*, denoted δ*_n, is obtained by substituting F^{-1}(·) with F_n^{-1}(·). The 100α% prediction interval for X_{n+1} is then given by

(2.5) I(δ*_n) = [F_n^{-1}(δ*_n), F_n^{-1}(δ*_n + α)].
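A minimal sketch of the selection rule behind (2.4)-(2.5): the grid over δ and the use of the sample variance of the observations covered by I(δ) as the "trimmed variance" are illustrative simplifications of the construction, not the paper's exact definitions:

```python
import numpy as np

def prediction_interval(sample, alpha=0.9):
    # choose delta on the grid {0, 1/n, ...} minimizing the sample variance
    # of the observations covered by [F_n^{-1}(delta), F_n^{-1}(delta + alpha)]
    srt = np.sort(sample)
    n = len(srt)
    m = int(np.ceil(alpha * n))           # points covered by I(delta)
    best = None
    for start in range(n - m + 1):        # delta = start / n
        window = srt[start:start + m]
        v = window.var()                  # "trimmed variance" of I(delta)
        if best is None or v < best[0]:
            best = (v, window[0], window[-1])
    return best[1], best[2]

rng = np.random.default_rng(0)
x = rng.normal(size=500)
lo, hi = prediction_interval(x, alpha=0.9)
```

By construction the returned interval covers at least a fraction α of the observed sample.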
The property, which can be verified by using Theorem 1 as in Cho and Miller (1987), is
expressed in the following corollary.
Corollary 1. Let {X_i : i ∈ Z} be a strictly stationary strong mixing sequence of real-valued random variables with strong mixing coefficient α(n) « n^{-8}. Suppose that F1-F6 are satisfied; then

1. δ*_n → δ* in probability, and
2. P_n(α) = P(X_{n+1} ∈ I(δ*_n)) → α.  •

Some numerical investigation of the coverage probability P_n(α) is discussed in the next section.
The above corollary enables us to obtain nonparametric prediction intervals without fitting parametric models. Therefore, we do not have to worry about specification and estimation of the marginal distribution of the process; in particular, when we have non-Gaussian marginals such as exponential and Laplacian, we cannot obtain the maximum likelihood estimator because of the boundary problem; see Smith (1986). The use of prediction intervals instead of point prediction is advocated in Keyfitz (1972), Butler (1982), and Cho and Miller (1987), among others.
3. Numerical Investigation in Prediction
The class of strictly stationary processes is very broad. Obviously, the one-sided linear processes expressed by the form

(3.1) X_t = Σ_{k=0}^{∞} g_k Z_{t-k}

are included in this class if Σ_k |g_k| < ∞ and the sequence {Z_i : i ∈ Z} is i.i.d. with EZ_i = 0 and EZ_i^2 < ∞. Further, by Corollary 4 in Withers (1981), it can be shown that if the common density p(x), say, of the independently and identically distributed sequence {Z_i : i ∈ Z} satisfies

a) ∫ |p(x) - p(x + y)| dx ≤ c|y|, where c > 0, and
b) EZ_1 = 0, EZ_1^2 = 1,

then {X_t : t ∈ Z} is also a strong mixing sequence with α(n) « n^{-θ}, where θ depends upon the moment condition of the Z_i's and the decay of the g_k's. Similarly, if, in addition to the above conditions, g_k « e^{-ck} (instead of polynomial order), then the conclusion remains the same, but now α(n) « e^{-λn} and λ depends upon the moment conditions. However, if some of the conditions stated above are violated, then there are counterexamples showing that even if the process {X_t : t ∈ Z} satisfies the one-sided linear form, it need not be strong mixing; e.g.,

X_t = (1/2) X_{t-1} + Z_t = Σ_{i=0}^{∞} 2^{-i} Z_{t-i}, where P(Z_t = 1) = P(Z_t = -1) = 1/2

(see, e.g., Bradley, 1987), is not a strong mixing process. It is apparent that when the innovation process {Z_t : t ∈ Z} is an i.i.d. Gaussian process satisfying the above conditions, one can show that the sequence {X_t : t ∈ Z} is strictly stationary strong mixing. Obviously, the ARMA(p, q) process, p ≥ 0, q > 0, is a particular case of (3.1).
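The algebra of the counterexample above can be checked numerically: the recursion X_t = (1/2)X_{t-1} + Z_t and the one-sided linear representation Σ 2^{-i} Z_{t-i} agree up to the truncation error of the series. This is only a sanity check of the representation, not of the (non-)mixing property itself:

```python
import numpy as np

rng = np.random.default_rng(1)
T, burn = 200, 60
Z = rng.choice([-1.0, 1.0], size=T + burn)   # P(Z_t = 1) = P(Z_t = -1) = 1/2

# recursion X_t = (1/2) X_{t-1} + Z_t, started at X_0 = Z_0
X = np.zeros(T + burn)
X[0] = Z[0]
for t in range(1, T + burn):
    X[t] = 0.5 * X[t - 1] + Z[t]

# truncated one-sided linear form sum_{i=0}^{burn-1} 2^{-i} Z_{t-i}
i = np.arange(burn)
series = np.array([np.sum(0.5 ** i * Z[t - i]) for t in range(burn, T + burn)])

# the truncation error is at most 2^{-(burn-1)}, so the two should coincide
max_gap = np.max(np.abs(X[burn:] - series))
```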
To see how this confidence interval for the one-step-ahead new observation works, we performed a simulation by generating observations from the AR(2) model

X_t = φ_1 X_{t-1} + φ_2 X_{t-2} + Z_t

and the ARMA(2,1) model

X_t = φ_1 X_{t-1} + φ_2 X_{t-2} + Z_t - θ Z_{t-1},

respectively, where φ_1, φ_2 and θ are the autoregressive and moving average parameters and Z_t is a Gaussian innovation with mean 0 and constant variance. The parameter values are selected within the triangular region so that they satisfy the stationarity condition given in Box and Jenkins (1976).

Following the same procedure as in Cho and Miller (1987), we generated 151 observations using the RNNOR routine of IMSL and discarded the first 50 observations to minimize the effect of starting values. Using 100 of the remaining observations, we constructed the 90% prediction interval, leaving the last observation to check the coverage. The coverage percentages were obtained from 400 replications and are reported in Tables 1 and 2. It is observed that most of the coverage percentages ranged from 87% to 92%, which is similar to the result reported in Cho and Miller. Although we did the simulation only for the AR(2) and ARMA(2,1) processes, considering that most time series data can be approximated by low-order ARMA models, e.g., ARMA(p, q), p ≤ 2 and q ≤ 2, we arrived at the same conclusions as did Cho and Miller for ARMA(1, q), q > 0.
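A condensed version of the experiment can be reproduced with any Gaussian generator in place of IMSL's RNNOR. The sketch below is an illustrative simplification (numpy-based, fewer replications, and the symmetric choice δ = (1 - α)/2 rather than the variance-minimizing δ*_n):

```python
import numpy as np

def simulate_ar2(phi1, phi2, n, rng):
    # AR(2): X_t = phi1 * X_{t-1} + phi2 * X_{t-2} + Z_t, Gaussian Z_t
    x = np.zeros(n)
    z = rng.normal(size=n)
    for t in range(2, n):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + z[t]
    return x

def coverage(phi1, phi2, reps=200, alpha=0.9, seed=2):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = simulate_ar2(phi1, phi2, 151, rng)[50:]   # discard 50 start-up values
        hist, future = x[:100], x[100]                # 100 obs for the P.I., 1 to check
        lo, hi = np.quantile(hist, [(1 - alpha) / 2, (1 + alpha) / 2])
        hits += bool(lo <= future <= hi)
    return hits / reps
```

For stationary parameter choices the empirical coverage lands near the nominal 90%, consistent with the range reported above.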
4. Proofs
Since {Uj =
F(~)
: i E Z} forms a uniform on [0, 1] sequence of strictly
stationary random variables, and since the problem under uniformity can be handled
much more easily, the use of the following sequences are now adopted.
-12-
{E,,(s) : ~s~l, n E N}
is the uniform empirical distribution function,
{E,,-I(S) : O<s<l, n E N}
{U,,(s)
os
is the uniform quantile function,
n 1~(E,,(s) -s) : ~s~l, n E N}
is the uniform empirical process,
{V,,(s) = n I~(E,,-I(S) -s) : ~s~l, n E N}
and
{R,,'"(s)
os
is the uniform quantile process,
U,,(s) + V,,(s) : O<s<l, n E N}
is the uniform Bahadur-Kiefer process.
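These uniform processes, and the identity defining the Bahadur-Kiefer process R_n*(s) = U_n(s) + V_n(s), can be illustrated numerically; the i.i.d. uniform sample below is an illustrative stand-in for the mixing sequence {U_i}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
U = rng.uniform(size=n)
srt = np.sort(U)
s = np.linspace(0.01, 0.99, 99)

E_n = np.mean(U[:, None] <= s[None, :], axis=0)          # uniform empirical d.f.
idx = np.minimum(np.ceil(s * n).astype(int) - 1, n - 1)
E_inv = srt[idx]                                         # uniform quantile function

U_n = np.sqrt(n) * (E_n - s)       # uniform empirical process
V_n = np.sqrt(n) * (E_inv - s)     # uniform quantile process
R_n = U_n + V_n                    # uniform Bahadur-Kiefer process
```

As the theory below suggests, R_n is an order of magnitude smaller than either U_n or V_n individually.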
Proof of Theorem 1. In establishing Theorems 1-4, we were aided by some ideas found in Berkes and Philipp (1977), Csorgo (1983), Csorgo and Revesz (1979), Philipp (1977) and Philipp and Pinzur (1980), but first we shall start with some standard results. From the mean value theorem, it is obvious that under F1 the following result is in order:

(4.1) Q_n(s) = n^{1/2}(F_n^{-1}(s) - F^{-1}(s)), s ∈ (0, 1),
 = n^{1/2}(F^{-1}(E_n^{-1}(s)) - F^{-1}(s))
 = n^{1/2}(E_n^{-1}(s) - s) / f(F^{-1}(θ_{s,n}))
 = V_n(s) / f(F^{-1}(θ_{s,n}))
 = V_n(s)/f(F^{-1}(s)) + V_n(s){f(F^{-1}(s)) - f(F^{-1}(θ_{s,n}))} / {f(F^{-1}(θ_{s,n})) f(F^{-1}(s))}
 = V_n(s)/f(F^{-1}(s)) + V_n(s)(s - θ_{s,n}) f'(F^{-1}(ω_{s,n})) / {f(F^{-1}(ω_{s,n})) f(F^{-1}(θ_{s,n})) f(F^{-1}(s))}
 = (V_n(s) + ε_n(s)) / f(F^{-1}(s)),

where θ_{s,n} ∈ (E_n^{-1}(s) ∧ s, E_n^{-1}(s) ∨ s), ω_{s,n} ∈ (θ_{s,n} ∧ s, θ_{s,n} ∨ s), and

ε_n(s) = V_n(s)(s - θ_{s,n}) f'(F^{-1}(ω_{s,n})) / {f(F^{-1}(ω_{s,n})) f(F^{-1}(θ_{s,n}))}.

Here, the conventions a ∧ b = min(a, b) and a ∨ b = max(a, b) are used.
From the definition of R_n*(s), it then follows that

(4.2) f(F^{-1}(s)) Q_n(s) = V_n(s) + ε_n(s) = -U_n(s) + R_n*(s) + ε_n(s),

which, in turn, yields

(4.3) sup_{0<s<1} |f(F^{-1}(s)) Q_n(s) - B_n*(s)| ≤ sup_{0≤s≤1} |U_n(s) - B_n(s)| + sup_{0<s<1} |ε_n(s)| + sup_{0<s<1} |R_n*(s)|,

where B_n*(s) = -B_n(s) is a Brownian bridge.

It remains to show that under the conditions stated in Theorem 1, each of the terms on the right-hand side of (4.3) is, at most, of order (log n)^{-λ}, for some λ > 0. For the first term, the result by Philipp and Pinzur (1980) (d = 1) is needed. It is stated as follows.
Lemma 1: Let {U_i : i ∈ Z} be a uniform-[0, 1] strictly stationary sequence of random variables satisfying the strong mixing condition with mixing coefficient

α(n) « n^{-8-ε},

for some ε ∈ (0, 1/4]. Let Γ(s, s') be the covariance function. Then, without changing its distribution, we may redefine the empirical process {U_k(s) : s ∈ [0, 1], k > 0} of {U_n : n ∈ Z} on a richer probability space on which there exists a Kiefer process {K(s, k) : s ∈ [0, 1], k > 0} with covariance function Γ*(s, s', t, t') = min(t, t') Γ(s, s') and a constant λ > 0 depending only on ε, such that with probability one

sup_{k ≤ n} sup_{s ∈ [0,1]} |k^{1/2} U_k(s) - K(s, k)| « n^{1/2} (log n)^{-λ}.  •
From the latter result, it follows that

(4.4) sup_{0≤s≤1} |U_n(s) - K(s, n)/n^{1/2}| = sup_{0≤s≤1} |U_n(s) - B_n(s)| « (log n)^{-λ} a.s.,

where B_n(s) = n^{-1/2} K(s, n) is a Brownian bridge. This completes the proof of the first term being bounded above by (log n)^{-λ}.
For the following result, we first define f : R^∞ → R to be measurable, with Y_n = f(U_n, U_{n+1}, ...), n ≥ 1. Then, defining U_n^{(1)}(s) to be the uniform empirical process formed by the sequence {Y_n : n ∈ N}, similar to U_n(s), Berkes and Philipp (1977) have shown the following result.

Lemma 2. Under α(j) « j^{-8}, the sequence {(2 log log n)^{-1/2} U_n^{(1)}(s), n ≥ 1} of functions on [0, 1] × [0, 1] is, with probability one, relatively compact in the supremum norm, and has the unit ball B of the reproducing kernel Hilbert space H(Γ) as its set of limits, where Γ*(s, s', t, t') = min(t, t') Γ(s, s'), for 0 ≤ s, s' ≤ 1 and t, t' ≥ 0.  •

An implication of Lemma 2 is the fact that

(4.5) lim sup_{n↑∞} sup_{0≤s≤1} |U_n^{(1)}(s)| / (2 log log n)^{1/2} ≤ c a.s.,

for some constant c > 0. For f(U_1, U_2, ...) = U_1, (4.5) reduces to

(4.6) lim sup_{n↑∞} sup_{0≤s≤1} |U_n(s)| / (2 log log n)^{1/2} ≤ c a.s.

The following lemma is due to Babu and Singh (1978).
Lemma 3: If (4.6) holds, then

lim sup_{n↑∞} sup_{0≤s≤1} |V_n(s)| / (2 log log n)^{1/2} ≤ c a.s.,

for the same constant c defined in (4.6).  •

Now, under F1, F2 and F3, there exists γ > 0 such that the factor multiplying V_n(s)(s - θ_{s,n}) in ε_n(s) is bounded by γ. Thus, since |s - θ_{s,n}| ≤ n^{-1/2} |V_n(s)|, using Lemma 3 it yields

(4.8) sup_{0<s<1} |ε_n(s)| « n^{-1/2} log log n a.s.
It remains to evaluate how R_n*(s) behaves. We observe that

R_n*(s) = U_n(s) + V_n(s) = U_n(s) - U_n(E_n^{-1}(s)) + n^{1/2}(E_n(E_n^{-1}(s)) - s).

Hence, from the definition of E_n^{-1}(s) and Lemma 3, it follows that

(4.9) sup_{0<s<1} |R_n*(s)| ≤ sup_{0≤s≤1} sup_{|t-s| ≤ cλ_n} |U_n(s) - U_n(t)| + n^{-1/2} a.s.,

where λ_n = n^{-1/2} (log log n)^{1/2}.
For the coming lemmas, the following definitions, notations and results are required. Set

n_k = [2^{k^{1-ε}} log k], for some 0 < ε < 1,

and let N_k = {n : n = n_k, n_k + 1, ..., n_{k+1} - 1; k = 1, 2, ...}. It is clear that

λ_{n_k}^{-1} = n_k^{1/2} / (log log n_k)^{1/2} « 2^{k^{1-ε}/2}.
Set

S_k^{(c)} = {j / (c 2^{[k^{1-ε}/2]}) : j = 0, 1, 2, ..., c 2^{[k^{1-ε}/2]} - 1}, for any k ∈ N.

Now, for k ∈ N sufficiently large, there exists n ∈ N_k such that n_k ≤ n < n_{k+1}. For |t - s| < cλ_n, it follows that |t - s| < c 2^{-[k^{1-ε}/2]}; there exists s_j ∈ S_k^{(c)} such that |t - s_j| < c 2^{-[k^{1-ε}/2]} and |s - s_j| < c 2^{-[k^{1-ε}/2]}.
By setting χ_i(s, t) = g_i(t) - g_i(s) = 1_{(s,t]}(U_i) - (t - s), it is clear that

(4.10) n^{1/2} |U_n(s) - U_n(t)| ≤ n^{1/2} |U_n(s) - U_n(s_j)| + n^{1/2} |U_n(t) - U_n(s_j)|, and

(4.11) n^{1/2} |U_n(s) - U_n(s_j)| = |Σ_{i≤n} χ_i(s_j, s)| ≤ |Σ_{i≤n_k} χ_i(s_j, s)| + |Σ_{n_k<i≤n} χ_i(0, s)| + |Σ_{n_k<i≤n} χ_i(0, s_j)|.

In conjunction with (4.10) and (4.11), it can be shown that

(4.12) n^{1/2} sup_{0≤s≤1} sup_{|t-s| < cλ_n} |U_n(s) - U_n(t)| ≤ 2 max_{1≤j≤c2^{[k^{1-ε}/2]}} sup_{s_j≤s<s_{j+1}} |Σ_{i≤n_k} χ_i(s_j, s)| + 4 max_{n_k≤n<n_{k+1}} sup_{0≤s≤1} |Σ_{n_k<i≤n} χ_i(0, s)|.
The following lemma is the key result in determining the order of R_n*(s). This is due to Philipp (1977). In this work, the function χ will be used in the restrictive form χ_i(s, t), for 0 < s < t < 1.

Lemma 4. Let H ≥ 0, N ≥ 1 be integers and let R ≥ 1. Suppose that t - s ≥ N^{-2} and that α(n) « n^{-8}. Then, as N ↑ ∞,

where A (> 1), and α and β are positive constants (α = 1/120, β = .03).  •
The next two lemmas are similar to Lemmas 5.1 and 5.2 in Berkes and Philipp (1977).

Lemma 5. For α(j) « j^{-8} and for any k sufficiently large, there exists n ∈ N_k such that

P(max_{1≤j≤c2^{[k^{1-ε}/2]}} sup_{s_j≤s<s_{j+1}} |Σ_{i≤n_k} χ_i(s_j, s)| > n_k^{1/2-λ}) « exp(-k^α),

where λ is a strictly positive constant depending only on ε.  •
Proof: For the most part, the analysis of the proof is based on arguments similar to those presented in Lemma 5.1 of Berkes and Philipp (1977). We thus prefer to proceed with the same notation.

For 0 < s < s' ≤ 1 and integers P ≥ 0, Q ≥ 1, call F(P, Q, s, s') = max_{1≤m≤Q} |Σ_{i=P+1}^{P+m} χ_i(s, s')|. Clearly,

(4.13) F(P, Q, s, s'') ≤ F(P, Q, s, s') + F(P, Q, s', s''), for 0 ≤ s < s' < s'' ≤ 1.

Put m = [k^{1-ε}(1/2 + γ)] for some γ ∈ (0, 1/4]. Then, for any s ∈ [s_j, s_{j+1}),

s = s_j + (1/c){Σ_{ν=r+1}^{m} t_ν 2^{-ν} + θ 2^{-m}}, for r = [k^{1-ε}/2],

where t_ν = 0 or 1 and θ ∈ [0, 1). Since χ_n(s, s') < χ_n(s, s'') + (s'' - s'), for any h = 0, 1, ..., 2^m - 1, we have that

(4.14) F(P, Q, h2^{-m}/c, (h + θ)2^{-m}/c) ≤ F(P, Q, h2^{-m}/c, (h + 1)2^{-m}/c) + Q 2^{-m}/c.
By repeating (4.13), it follows that

(4.15) F(0, n_k, s_j, s) ≤ Σ_{ν=r+1}^{m+1} F(0, n_k, s_{α,ν}, s_{α,ν+1}),

for any s_{α,ν}, s_{α,ν+1} ∈ S_ν^{(c2)} and ν = r + 1, ..., m + 1.

In exactly the same way as Berkes and Philipp (1977), define

φ(n) = 2A(n log log n)^{1/2},

and define the events E_k(p, a) accordingly. Hence, with H = 0, N = n_k, R = 2^c and t = 2^{-p}, according to Lemma 4,

(4.16) P(E_k(p, a)) « exp(-θ 2^p log log n_k) + n_k^{-(1+β)} « exp(-3·2^p log k) + n_k^{-1}.

This implies that

(4.17) P(E_k) « Σ_{p≤m+1} 2^p exp(-3·2^p log k) + 2^m n_k^{-1} « Σ_{p>r} exp(-2^p log k) + n_k^{-(1/2-γ)} « 2^{-k^{1-ε}(1/2-γ)}.

Thus, on the complement of E_k, it follows that with probability one

F(0, n_k, s_j, s) ≤ 2A φ(n_k) Σ_{ν>r} 2^{-ν} + n_k 2^{-m} « φ(n_k) 2^{-r} + n_k^{1/2-γ} « n_k^{1/2-γ}.

This completes the proof of the lemma.  •
For the proceeding result, define

S_ν = {k/2^ν : k = 0, ..., 2^ν - 1, ν ≥ 1}.

Next, we have the following result.
Lemma 6: For α(j) « j^{-8} and for any k sufficiently large, there exists n ∈ N_k such that

P(max_{n_k≤n<n_{k+1}} sup_{0≤s≤1} |Σ_{n_k<i≤n} χ_i(0, s)| > n_k^{1/2-λ}) « exp(-k^α),

where λ is a strictly positive constant depending only on ε.  •
Proof. The proof of this lemma is also highly dependent upon arguments similar to those in Lemma 5.2 of Berkes and Philipp (1977). So, for these reasons, we proceed with the same notation.

As before, we can express any n : n_k ≤ n < n_{k+1} in dyadic form as follows:

n = n_k + Σ_{j=p+1}^{q} h_j 2^j + θ 2^p,

where θ ∈ [0, 1), h_j = 0 or 1, p = [(1/2 - γ)k^{1-ε}] and q = [log(n_{k+1} - n_k)/log 2]. For any P ≥ 0, 0 < Q < R (integers) and s, s' : 0 ≤ s < s' ≤ 1, the following subadditive property is satisfied:

(4.18) F(P, R, s, s') ≤ F(P, Q, s, s') + F(P + Q, R - Q, s, s').

Hence, for 0 ≤ s, s' < 1 and n such that n_k ≤ n < n_{k+1}, applying (4.18), F(n_k, n - n_k, s, s') decomposes over the dyadic blocks of length 2^j, where h_j = 0, 1, ..., 2^{q-j} - 1, for j = p + 1, ..., q. Since s may again be expanded dyadically, where m is similar to that in Lemma 5, but now is related to j (for the exact definition, see below), as in Lemma 5,

(4.19) F(P, Q, 0, s) ≤ Σ_{ν=1}^{m+1} F(P, Q, s_{α,ν}, s_{α,ν+1}) + Q 2^{-m},

where s_{α,ν}, s_{α,ν+1} ∈ S_ν, ν = 1, 2, ..., m + 1 and 0 ≤ α < 2^ν.
Now define the following events

H_k(ν, α, j, h) = {F(n_k + h 2^{j+1}, 2^j, α 2^{-ν}, (α + 1) 2^{-ν}) > 2^{-ν/2} 2^{-(q-j)(1-14β)/2} φ(2^j)},

and H_k = ∪_{p<j≤q} ∪_{ν} ∪_{α} ∪_{h} H_k(ν, α, j, h). By Lemma 4, with R = 2·2^{(q-j)(1-14β)/2}, H = n_k + h 2^{j+1}, N = 2^j and t = 2^{-ν},

P(H_k(ν, α, j, h)) « exp(-θ 2^{-ν} 2^{(q-j)(1-14β)} log log 2^j) + 2^{-(q-j)(1-14β) - j(1+β)}.

Moreover,

(4.20) P(H_k) « Σ_{p<j≤q} Σ_{ν≤(1/2+γ)j} 2^ν 2^{q-j} exp(-θ 2^{-ν} 2^{(q-j)(1-14β)} log log 2^j) + Σ_{p<j≤q} Σ_{ν≤(1/2+γ)j} 2^ν 2^{q-j} 2^{-(q-j)(1-14β) - j(1-β)} « exp(-5 log p) + 2^{-p(1/4 - 5β/4 - γ)} q p « k^{-2}.

Hence, on the complement of H_k, we have that with probability one each dyadic block obeys the corresponding bound, which yields

(4.21) F(n_k, n - n_k, 0, s) « φ(2^q) Σ_{p<j≤q} 2^{-(q-j)β} + 2^{q(1/2-γ)} + n_k^{1/2-γ} « n_k^{1/2} (log n_k)^{-λ},

because φ(2^q) « (n_{k+1} - n_k)^{1/2} log q « n_k^{1/2} (log n_k)^{-λ}. This completes the proof of Lemma 6.  •
In connection with (4.9), (4.12) and Lemmas 5 and 6, it is shown that

(4.22) sup_{0<s<1} |R_n*(s)| « (log n)^{-λ} a.s.

Combining (4.4), (4.8) and (4.22), the proof of Theorem 1 is now completed.  •
Proof of Theorem 2. The following preliminary discussion is necessary for the establishment of the theorem. First, some strong law behavior of the weighted uniform-[0, 1] empirical process under stationarity and strong mixing is required. Let ψ denote a preassigned non-negative function on [0, 1]. The functional form considered here is ψ(s) = (s(1 - s))^{-1/2}. The weighted empirical process is then defined by

D_n(s) = n^{1/2} ψ(s)(E_n(s) - s), s ∈ [0, 1].

In Lemma 2 of Berkes and Philipp (1977) it is established that for ψ(s) = 1, on a set of probability one, the sequence {(2 log log n)^{-1/2} U_n(s)} is relatively compact in the topology of uniform convergence on [0, 1], with limit set the unit ball B of the reproducing kernel Hilbert space H(Γ), where Γ*(s, s', n, n') = min(n, n') Γ(s, s'). If ψ(·) is bounded on [0, 1], then the result remains unchanged. On the other hand, if ψ is not bounded, we should expect that relative compactness may occur only for those ψ's which are bounded on every interior interval of [0, 1]. We, therefore, have to focus our attention on the behavior of ψ near 0 and 1. We consider the weight ψ_δ defined by

ψ_δ(s) = (s(1 - s))^{-1/2} if s ∈ [δ, 1 - δ], and ψ_δ(s) = 0 otherwise.
Since ψ_δ is bounded, the sequence {(2 log log n)^{-1/2} ψ_δ(s) U_n(s), n > 3} is relatively compact in the topology of uniform convergence on [0, 1 - δ]. We then ask what happens if δ is replaced by {δ_n, n > 1} with δ_n → 0 as n ↑ ∞. The following proposition is then in order.

Proposition. Under α(j) « j^{-8}, the sequence {(2 log log n)^{-1/2} ψ_{n^{-μ}}(s) U_n(s), n ≥ 3}, for some 0 < μ < dα, d ∈ (0, 1/4) and α ∈ (0, 1/120), of random functions on [0, 1]^2 is, with probability one, relatively compact in the supremum norm.  •

Remark: The Proposition implies the following:

(4.23) lim sup_{n↑∞} sup_{s∈[0,1]} ψ_{n^{-μ}}(s)(log log n)^{-1/2} |U_n(s)| = lim sup_{n↑∞} sup_{n^{-μ}≤s≤1-n^{-μ}} [s(1 - s) log log n]^{-1/2} |U_n(s)| ≤ c a.s.,

for some μ ∈ (0, 1/480).
Discussion. Choose an increasing sequence {n_k; k > 1} of positive integers such that n_k ↑ ∞ almost exponentially. Next, for fixed k, we partition the unit interval s ∈ [0, 1] into d_k intervals [s_j, s_{j+1}) (1 ≤ j ≤ d_k), each of length 1/d_k, where d_k = [n_k^τ] for some positive small τ. Then,

Z(j, k) = ψ_{δ_{n_k}}(s_j) U_{n_k}(s_j), 1 ≤ j ≤ d_k,

may be considered as the component Z(j, k) of a random vector Z_k ∈ R^{d_k}. The sequence {Z_k, k ≥ 1} is the skeleton process of ψ_{δ_n}(s) U_n(s), where δ_n = n^{-μ}.

We shall show, as in Berkes and Philipp (1977), that the oscillation of ψ_{δ_n}(s) U_n(s) over the rectangles {(s, n) : s_j ≤ s ≤ s_{j+1}, n_k ≤ n ≤ n_{k+1}} is with probability one « (log n)^{-λ} (λ = 1/3840), uniformly for all j with 1 ≤ j ≤ d_k. The latter statement shows that in order to prove (4.23) we should investigate the behavior of the skeleton process of ψ_{δ_n}(s) U_n(s), since it contains all the needed information about the process.

Set D(s, n) = n^{1/2} ψ_{δ_n}(s) U_n(s) = ψ_{δ_n}(s) Σ_{i≤n} χ_i(0, s). We choose n_k as follows: n_k = [2^{k^{1-ε}}] for some small ε > 0. Put r_k = [d k^{1-ε}], d ∈ (0, 1/4), and define s_j = (j - 1) 2^{-r_k} for j = 1, 2, ..., 2^{r_k}. It can be seen that, for s ∈ [s_j, s_{j+1}) and n_k ≤ n ≤ n_{k+1}, we need to show that the oscillation of D(s, n) is bounded above a.s. by n_k^{1/2} (log n_k)^{-λ} for some λ > 0. We choose δ in ψ_δ(s) such that δ = 2^{-βr} for some β < α. It is required that s_j ∈ [δ, 1 - δ]; this implies that j ∈ [t_r, u_r] ∩ Z, where t_r = 2^{(1-β)r}.
Lemma 7. Under α(n) « n^{-8}, and for k sufficiently large,

P(max_{t_r≤j≤u_r} sup_{s_j≤s<s_{j+1}} ψ_{δ_{n_k}}(s_j) |Σ_{i≤n_k} χ_i(s_j, s)| > n_k^{1/2} (log n_k)^{-λ}) « k^{-2},

for some λ > 0, independent of k.  •

Proof. For sufficiently large k, set m = [(1/2 + γ)k^{1-ε}], γ ∈ (0, 1/4]; then, in conjunction with (4.13) and (4.14), we observe that the decomposition (4.15) applies for α_ν ∈ [t_ν, u_ν] ∩ Z, ν = r + 1, ..., m + 1, P ≥ 0 and Q ≥ 1. Define, as before,

φ(n) = 2A(n log log n)^{1/2},

E_k(ν, α) = {F(0, n_k, α 2^{-ν}, (α + 1) 2^{-ν}) > 2^{-β_1 ν} 2^{-βr} φ(n_k)}
 ⊂ {F(0, n_k, α 2^{-ν}, (α + 1) 2^{-ν}) > 2^{-(β_1 + β)ν} φ(n_k)} = E'_k(ν, α), where β_1 + β = α, the same α as in Lemma 4,

and

E_k = ∪_{r<ν≤m+1} ∪_{t_ν<α<u_ν} E'_k(ν, α) ⊆ E'_k.

Now, as in Berkes and Philipp (1977), for P ≥ 0, N = n_k, R = 2 and t = 2^{-ν},

P(E'_k(ν, α)) « exp(-3·2^ν log k) + n_k^{-(1+β)},

and

P(E'_k) « Σ_{r<ν≤m+1} 2^ν exp(-3·2^ν log k) + δ_k^{-1} 2^m n_k^{-(1+β)} « k^{-2},

for m = [(1/2 + γ)k^{1-ε}]. Hence, by the Borel-Cantelli lemma, the assertion follows.  •
Lemma 8. Under α(n) « n^{-8}, and for sufficiently large k,

P(max_{n_k≤n<n_{k+1}} sup_{s} ψ_{δ_n}(s) |Σ_{n_k<i≤n} χ_i(0, s)| > n_k^{1/2} (log n_k)^{-λ}) « k^{-2},

for some λ independent of k.  •

Proof. For p = [(1/2 - γ)k^{1-ε}] and q = [log_2(n_{k+1} - n_k)], and in exactly the same way as 5.15-5.17 in Berkes and Philipp (1977), we may write that for fixed k

s = Σ_{ν≥1} α_ν 2^{-ν} = Σ_{ν≤m} α_ν 2^{-ν} + θ 2^{-m},

for m = [(1/2 + γ)k^{1-ε}], and for P ≥ 0, Q ≥ 1 and 0 < β < α, where α_ν ∈ [t_ν, u_ν] ∩ Z. As before, we define the following events

H_k(ν, α, j, h) = {F(n_k + h 2^{j-1}, 2^j, α 2^{-ν}, (α + 1) 2^{-ν}) > 2^{-ν/2} 2^{(j-q)λ} φ(2^j)}
 ⊆ {F(n_k + h 2^{j-1}, 2^j, α 2^{-ν}, (α + 1) 2^{-ν}) > 2^{-αν} 2^{(j-q)λ} φ(2^j)} = H'_k(ν, α, j, h),

and

H_k = ∪_{j∈[p,q]∩Z} ∪_{ν∈[1,(1/2+γ)j]∩Z} ∪_{α∈[t_ν,u_ν]∩Z} ∪_{h∈[0,2^{q-j}]∩Z} H'_k(ν, α, j, h) ⊆ H'_k,

which is exactly as in Lemma 6. This completes the proof of the lemma.  •
Lemma 9. Under α(n) « n^{-8}, and for sufficiently large n, the oscillation term max_j sup_{s∈[s_j, s_{j+1})} |ψ_{δ_n}(s) - ψ_{δ_n}(s_j)| |Σ_{i≤n_k} χ_i(0, s_j)| is bounded a.s. by n_k^{1/2} (log n_k)^{-λ}, for some λ > 0, independent of k.  •

Proof. It is clear that, for m = [(1/2 + γ)k^{1-ε}],

|ψ_{δ_n}(s) - ψ_{δ_n}(s_j)| |Σ_{i≤n_k} χ_i(0, s_j)| ≤ 2 ψ_{δ_n}(s) F(0, n_k, 0, s) ≤ 2 ψ_{δ_n}(s_j) {Σ_{ν=r+1}^{m+1} F(0, n_k, α_ν 2^{-ν}, (α_ν + 1) 2^{-ν}) + n_k 2^{-m}},

where r = [d k^{1-ε}]. The rest of the proof follows from Lemma 7.  •
Combining Lemmas 7-9, the proof of (4.23), and hence of the Proposition, is now in order. An implication of (4.23) is the following result, whose proof can be carried out using arguments similar to Lemma 2.3 in Babu and Singh (1978).

Corollary 2. If (4.23) holds, then

lim sup_{n↑∞} sup_{n^{-μ}≤s≤1-n^{-μ}} [s(1 - s) log log n]^{-1/2} |V_n(s)| ≤ c a.s.,

for some 0 < μ < dα, d ∈ (0, 1/4) and α ∈ (0, 1/120).  •
With this preparation in mind, the proof of Theorem 2 follows. As in Csorgo and Revesz (1978),

(4.24) sup_{n^{-μ}≤s≤1-n^{-μ}} |f(F^{-1}(s)) Q_n(s) - V_n(s)| ≤ sup_{n^{-μ}≤s≤1-n^{-μ}} |V_n(s)| |f(F^{-1}(s))/f(F^{-1}(θ_{s,n})) - 1| a.s.

In view of Corollary 2, (4.24) can be majorized by (4.25). Arguing as in Csorgo and Revesz (1978), all the bracketed terms in (4.25) are bounded above by a constant. Since

(4.26) sup_{n^{-μ}≤s≤1-n^{-μ}} |f(F^{-1}(s)) Q_n(s) + K(s, n)/n^{1/2}|
 ≤ sup_{n^{-μ}≤s≤1-n^{-μ}} |f(F^{-1}(s)) Q_n(s) - V_n(s)| + sup_{n^{-μ}≤s≤1-n^{-μ}} |R_n*(s)| + sup_{n^{-μ}≤s≤1-n^{-μ}} |U_n(s) - K(s, n)/n^{1/2}| a.s.,

and since the second and third terms in (4.26) are bounded above by (log n)^{-λ} (see, e.g., (4.22) and (4.4), respectively), the result follows immediately.
For the second part of the Theorem, it is sufficient to show that

(4.27)  sup_{s∈[0,δ_n]∪[1−δ_n,1]} |f(F^{-1}(s))Q_n(s) − V_n(s)| « (log n)^{−λ}  a.s.

We shall show the result only for s ∈ [0, δ_n]; similar arguments apply for the other tail.
As in 3.2.14 in Csörgő (1983),

where s ∈ [0, δ_n], |s − ξ| ≤ n^{−1/2} |V_n(s)|, and

Obviously, |1 − f(F^{-1}(s))/f(F^{-1}(ξ))| is bounded because of the additional conditions. As far as V_n(s) is concerned, we have that

Applying Lemmas 5 and 6 with Δ_n replaced by δ_n, it follows that

(4.30)  sup_{0<s<δ_n} |V_n(s)| « (log n)^{−λ}  a.s.

This completes the proof of Theorem 2.
Proof of the Remark. As in the second part of the proof of Theorem 2, we need to show that

It is clear that

(4.31)  f(F^{-1}(s))Q_n(s) − V_n(s) = n^{1/2} ∫_{U_{k:n}}^{s} [f(F^{-1}(s))/f(F^{-1}(u)) − 1] du,

where U_{k:n} is the kth order statistic of the uniform variates U_1, ..., U_n, s ∈ ((k−1)/n, k/n] ∩ [ε_n, δ_n], and ε_n = n^{−1}(log n)^{λ}.

If s < U_{k:n}, then the right-hand side of (4.31) is bounded above, for s ∈ [0, δ_n], by

n^{1/2} ∫_{s}^{U_{k:n}} [1 + f(F^{-1}(s))/f(F^{-1}(u))] du ≤ 2 |V_n(s)| « (log n)^{−λ},

since f is nondecreasing to the right of a, and by (4.30).

On the other hand, if s > U_{k:n}, then the right-hand side of (4.31) is bounded by

Substituting u = n^{1/2}(log n)^{−λ}, and since f is now decreasing to the right of a, the result follows at once.
Proof of Theorem 3. The sketch of the proof of this theorem is based on three simple lemmas. Lemma 1 shows that the law of the iterated logarithm holds for the quantile process; this, in turn, plays an auxiliary role in establishing the strong consistency of the kernel-type estimator of 1/f(F^{-1}(s)), which is shown in Lemma 3. Lemma 2 indicates how fast the bias of this estimator approaches zero. We proceed as follows.

Calling upon Lemma 3, (4.8) and (4.2), the following lemma is easily formed.

Lemma 10. Under the same provisos as Lemma 2, there exists c > 0 such that

lim sup_n sup_{0<s<1} |f(F^{-1}(s))Q_n(s)| / (log log n)^{1/2} ≤ c  a.s.
The next result is a modified version of Csörgő and Révész (1984). Since the proof of the following lemma does not differ much from that of Lemma 3 in Csörgő and Révész (1984), it is omitted.

Lemma 11. Under K_1-K_2, H_1, and F_1-F_4, the following result holds:

for some constant c > 0.
Next, we have the following.

Lemma 12. Under K_1-K_2, H_1, H_2 and F_1-F_4, it follows that

lim sup_n (h_n n^{1/2}/(log log n)^{1/2}) sup_{0<s<1} |φ_n(s) − 1/f(F^{-1}(s))| ≤ c  a.s.,

where φ_n(s) = (1/h_n) ∫ K((s−u)/h_n) dF_n^{-1}(u) > 0, and c is a positive constant.
"
-31-
Proof. Via the triangle inequality, it follows that
[IOglOogn) i
Invoking the fact that h.«
and Lemma 11, the second term on the R.H.S. of
the latter inequality tends to zero for sufficiently large n. It remains to show that the
other term is approaching zero with probability one.
In connection with Lemma 10, it follows that for sufficiently large n,
I-.!.
sup
0<1<1 h"
1
f K(~)l(F-I(S)
-t
h"
~
«
"
- F-1(s»
I=
'n
(loglogn) sup
I
h;n'.-i
0<1<1
1 f(F- 1( »0 ( )
u " u K' (~)du I
f(F-1(u»
h"
f
-t
(loglogn)'.-i
f(F-1(s»0,,(s)
I ' I'
-I
sUPo< <I
sUPo< <I K (s) /info< <If(F (s»
h"n'.'i
1
(loglogn) 'n
1
1
(loglogn)'.-i
a.s.
hn'.'i
"
•
This completes the proof of Lemma 12.
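The key estimate above can be sketched as an integration by parts. The display below is our recap, assuming K has a bounded, integrable derivative K′, and is consistent with the bound just obtained rather than a display from the original:

```latex
\frac{1}{h_n}\int K\!\Bigl(\tfrac{s-u}{h_n}\Bigr)\,
  d\bigl(F_n^{-1}(u) - F^{-1}(u)\bigr)
= \frac{1}{h_n^{2}\, n^{1/2}} \int f\bigl(F^{-1}(u)\bigr) Q_n(u)\,
  \frac{K'\!\bigl(\tfrac{s-u}{h_n}\bigr)}{f\bigl(F^{-1}(u)\bigr)}\, du ,
```

and since ∫ |K′((s−u)/h_n)| du = h_n ∫ |K′(v)| dv, while Lemma 10 bounds sup_s |f(F^{-1}(s))Q_n(s)| by c(log log n)^{1/2}, the rate (log log n)^{1/2}/(h_n n^{1/2}) follows.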
The rest of Theorem 3 follows from the triangle inequality:

(4.32)  |f(F^{-1}(s))Q_n(s) − B_n(s)| ≤ |φ_n^{-1}(s)Q_n(s) − B_n(s)| + f(F^{-1}(s)) |φ_n(s) − 1/f(F^{-1}(s))| |Q_n(s)|/φ_n(s).
This completes the proof of the Theorem.
Proof of Theorem 4. The steps required to prove this theorem are exactly the same as in Csörgő and Horváth (1989). The additional information needed here is to show that

sup_{s∈[0,δ_n]∪[1−δ_n,1]} |B(s)| → 0  as n → ∞,

where B(s) is a Gaussian process with covariance function Γ(s, s′). This will be accomplished below, but first some discussion.

For any 0 < n_1 < n_2 < ..., define

The sequence {B_k(s), k ∈ N, s ∈ [0, 1]} is a Gaussian process with independent increments and covariance function Γ(s, s′) (see, e.g., Lemma 6.1 in Berkes and Philipp, 1977).

We need to show that, for s ∈ [0, δ_n] ∪ [1 − δ_n, 1], and for δ_n = n^{−μ} (μ introduced above), the process K(s, n)/n^{1/2} is bounded above almost surely by a sequence of constants tending to zero uniformly in s as n → ∞.
We may write that for n_k ≤ n ≤ n_{k+1} and s ∈ [0, δ_n] ∪ [1 − δ_n, 1],

(4.33)  K(s, n) = Σ_{i=0}^{k−1} (n_{i+1} − n_i)^{1/2} B_i(s) + (n − n_k)^{1/2} B̄_k(s),

where B̄_k(s) = (K(s, n) − K(s, n_k))(n − n_k)^{−1/2}.
Lemma 13. For sufficiently large n,

sup_{s∈[0,δ_n]∪[1−δ_n,1]} |K(s, n)|/n^{1/2} « (log n)^{−λ}  a.s.
Proof. We shall show the validity of this result only for s ∈ [0, δ_n]; the same argument applies for s ∈ [1 − δ_n, 1]. It is easy to check that for 0 ≤ s < s′ ≤ 1,

Γ(s, s′) = E ξ_1^2(s) + 2 Σ_{i≥2} E ξ_1(s)ξ_i(s) + E ξ_1^2(s′) + 2 Σ_{i≥2} E ξ_1(s′)ξ_i(s′) − E ξ_1^2(s, s′) − 2 Σ_{i≥2} E ξ_1(s, s′)ξ_i(s, s′)
= σ^2(0, s) + σ^2(0, s′) − σ^2(s, s′),

where ξ_n(s, s′) = 1_{(s, s′]}(U_n) − (s′ − s).

In conjunction with α(n) « n^{-8} and Lemma 6 in Davydov (1970), for 0 ≤ s < s′ < 1,

σ^2(s, s′) ≤ (s′ − s)(1 − (s′ − s)) + 10 Σ_{n≥2} α(n)^{1/r_3} ((s′ − s)(1 − (s′ − s)))^{1/r_1 + 1/r_2} « (s′ − s)^{3/4}.

The last statement follows by taking r_3 = 4 and 1/r_1 + 1/r_2 = 3/4.
Let {n_k = [2^{k^{1−ε}}], k ∈ N, 0 < ε < 1/8} be a sequence of integers, and let s_j = j 2^{−r_k}, j = 0, 1, ..., 2^{r_k}, where r_k = [dk^{1−ε}]. It is clear that

{B(s) : 0 < s < δ} =_D {X(u) = B*(u) − B*(0), for u ∈ [0, 1]},

where B*(u) = B(s + u(s′ − s)) and s′ − s = δ. By setting s_j and s_{j+1} in the process X(u) instead of s and s′, it follows that for 0 ≤ u, u′ < 1,
Now, applying the same arguments as in Lemma 6.2 in Berkes and Philipp (1977), it follows that

(4.34)  P(sup_{0<s<2^{−r_k}} |B(s)| > (log n_k)^{−λ}) = P(sup_{s_j ≤ s ≤ s_{j+1}} |B(s) − B(s_j)| > (log n_k)^{−λ}) = P(sup_{0≤u≤1} |X(u)| > (log n_k)^{−λ}) « k^{−3}.

Thus

Next, applying Lemma 6.3 in Berkes and Philipp (1977), for s ∈ [0, δ_n], we have that

(4.36)  P(sup_{n_k ≤ n ≤ n_{k+1}} sup_{s∈[0,δ_n]} |K(s, n) − K(s, n_k)| > n_k^{1/2}(log n_k)^{−λ}) « k^{−2}.

Hence, applying the Borel-Cantelli lemma to both (4.35) and (4.36), the proof of the lemma follows immediately. This proves Theorem 4.
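For clarity, the summability behind the Borel-Cantelli step may be recorded as follows (a sketch of ours, with λ, r_k, δ_n, and the blocking sequence n_k as above):

```latex
\sum_{k \ge 1} P\Bigl(\sup_{0 < s < 2^{-r_k}} |B(s)| > (\log n_k)^{-\lambda}\Bigr)
  \ll \sum_{k \ge 1} k^{-3} < \infty, \qquad
\sum_{k \ge 1} P\Bigl(\sup_{n_k \le n \le n_{k+1}}\ \sup_{s \in [0,\delta_n]}
  |K(s,n) - K(s,n_k)| > n_k^{1/2}(\log n_k)^{-\lambda}\Bigr)
  \ll \sum_{k \ge 1} k^{-2} < \infty,
```

so each family of events occurs only finitely often with probability one, and combining the two bounds with the decomposition (4.33) yields Lemma 13.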
Acknowledgement. The authors are extremely thankful to Dr. G. Kallianpur and the
referees for their extensive comments and suggestions, which helped to improve the
presentation of the article. Theorems 2 and 4 were obtained while the first author was
visiting the University of North Carolina at Chapel Hill, and he would like to express his
appreciation for the support and hospitality he received from the Department of
Statistics.
References

[1] BABU, G.J. and SINGH, K. (1978). On deviations between empirical and quantile processes for mixing random variables. J. Mult. Anal., 8, 532-549.
[2] BERKES, I. and PHILIPP, W. (1977). An almost sure invariance principle for the empirical distribution function of mixing random variables. Z. Wahrs. V. Geb., 41, 115-137.
[3] BOX, G.E.P. and JENKINS, G.M. (1976). Time Series Analysis: Forecasting and Control. 2nd Ed. Holden-Day, San Francisco.
[4] BRADLEY, R.C. (1986). Long Range Dependence. Progress in Prob. and Statistics, Vol. II, Birkhäuser, Boston.
[5] BUTLER, R.W. (1981). Basic properties of strong mixing conditions. Progress in Probab. and Statistics, 11, 165-192.
[6] CHO, S. and MILLER, R.B. (1987). Model-free one-step-ahead prediction intervals: Asymptotic theory and small sample simulations. Ann. Statist., 15, 1064-1078.
[7] CSÁKI, E. (1977). The law of iterated logarithm for normalized empirical distribution functions. Z. Wahrs. V. Geb., 38, 147-167.
[8] CSÖRGŐ, M. (1983). Quantile Processes with Statistical Applications. S.I.A.M., Philadelphia.
[9] CSÖRGŐ, M. and RÉVÉSZ, P. (1978). Strong approximation of the quantile process. Ann. Statist., 6, 882-894.
[10] CSÖRGŐ, M. and RÉVÉSZ, P. (1981). Strong Approximation in Probability and Statistics. Academic Press, New York.
[11] CSÖRGŐ, M. and RÉVÉSZ, P. (1984). Two approaches to constructing simultaneous confidence bounds for quantiles. Prob. and Math. Stat., 4, 221-236.
[12] DAVYDOV, Y.A. (1970). The invariance principle for stationary processes. Theor. Probab. Appl., 15, 487-498.
[13] KEYFITZ, N. (1972). On future population. J. Amer. Statist. Assoc., 67, 347-363.
[14] PHILIPP, W. (1977). A functional law of the iterated logarithm for empirical distribution functions of weakly dependent random variables. Ann. Prob., 5, 319-350.
[15] PHILIPP, W. and PINZUR, L. (1980). Almost sure approximation theorems for the multivariate empirical process. Z. Wahrs. V. Geb., 54, 1-13.
[16] SMITH, R.L. (1986). Maximum likelihood estimation for the NEAR(2) model. J. Royal Statist. Soc., Series B, 48, 251-257.
[17] WITHERS, C.S. (1981). Conditions for linear processes to be strong mixing. Z. Wahrs. V. Geb., 57, 477-480.
[18] YOSHIHARA, K. (1979). Note on an almost sure invariance principle for some empirical processes. Yokohama Math. J., 27, 105-110.
Table 1
Coverage Percentage of One-Step-Ahead P.I., AR(2)
[Table entries: coverage percentages between 0.832 and 0.917, indexed by φ1 ∈ {−1.8, −1.5, ..., 1.8} and φ2 ∈ {0.9, 0.6, ..., −0.9}.]
Table 2
Coverage Percentage of One-Step-Ahead P.I., ARMA(2,1), θ = 0.4
[Table entries: coverage percentages between 0.852 and 0.920, indexed by φ1 ∈ {−1.8, −1.5, ..., 1.8} and φ2 ∈ {0.9, 0.6, ..., −0.9}.]
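The qualitative behavior in Tables 1 and 2, empirical coverage near but somewhat below the nominal level, can be illustrated with a small Monte Carlo sketch. The design below is ours for illustration only: a Gaussian AR(2) series, a nominal 90% one-step-ahead interval formed from the empirical quantiles of the observed sample, and sample and replication sizes that do not reproduce the original experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar2(phi1, phi2, n, burn=200):
    """Simulate a strictly stationary Gaussian AR(2) series (strong mixing
    for stationary parameter values); burn-in discards start-up effects."""
    x = np.zeros(n + burn)
    e = rng.standard_normal(n + burn)
    for t in range(2, n + burn):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
    return x[burn:]

def coverage(phi1, phi2, n=200, reps=400, level=0.90):
    """Fraction of replications in which the next observation falls inside
    the interval given by the empirical quantiles of the observed sample."""
    a = (1 - level) / 2
    hits = 0
    for _ in range(reps):
        x = ar2(phi1, phi2, n + 1)
        sample, future = x[:n], x[n]
        lo, hi = np.quantile(sample, [a, 1 - a])
        hits += lo <= future <= hi
    return hits / reps

print(f"estimated coverage: {coverage(0.3, 0.3):.3f}")
```

Because the interval is built from marginal empirical quantiles of a dependent sample, its coverage is close to, but typically slightly below, the nominal level, matching the pattern of the tabulated values.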