Chanda, K.C. and Ruymgaart, F.H. (1987). "General Linear Processes: A Property of the Empirical Process Applied to Density and Mode Estimation."

GENERAL LINEAR PROCESSES:
A PROPERTY OF THE EMPIRICAL PROCESS
APPLIED TO DENSITY AND MODE ESTIMATION

by

K. C. Chanda
Department of Mathematics
Texas Tech University
Lubbock, Texas 79409
U.S.A.

F. H. Ruymgaart*
Department of Mathematics
Kath. Un. Nijmegen
6525 ED Nijmegen
HOLLAND

* Part of this research was done while the author was visiting the University of North Carolina, Chapel Hill.
SUMMARY

General linear processes do not in general satisfy strong mixing conditions (Bradley (1986)). Therefore, we investigate the empirical process based on samples from such a general linear process by using a truncation argument and derive a local fluctuation inequality. It is well-known that such a fluctuation inequality is of basic importance in the study of the empirical process (see e.g. Einmahl (1987)). Here it is applied to obtain a rate of a.s. convergence for certain density estimators in the supremum norm. This extends a result by Chanda (1983). As a direct corollary a rate of a.s. convergence for a mode estimator is obtained.
AMS 1980 subject classification: primary 62M10, secondary 62G99.
Key words and phrases: linear processes, empirical processes, density and mode estimation.
1. INTRODUCTION, NOTATION, ASSUMPTIONS
The class of linear processes contains many important examples of time series models like e.g. moving average (MA), autoregressive (AR), and, more generally, ARMA-processes; see e.g. Anderson (1971) or Hannan (1970). In his review paper on strong mixing Bradley (1986) refers to an interesting counterexample by Rosenblatt (1980, p. 267) which shows that even decent processes like AR(1)-processes need not always satisfy a strong mixing condition. Hence, statistical procedures for mixing sample elements do not in general immediately apply to samples constituting a linear process.

This is in particular true for the empirical process and its ramifications like density estimators. Properties of empirical processes under mixing conditions may be found in Mehra and Rao (1975) and Basawa and Prakasa Rao (1980, Chapter 11) for the univariate case and in Harel and Puri (1987) for multivariate sample elements. In this note we give a probability inequality on the local fluctuations of the empirical process based on a vector-valued linear process (Section 2). This inequality is of independent interest and typically provides the most important tool in the study of weak and strong convergence properties of empirical processes; see e.g. Einmahl (1987) for the i.i.d. case.

Restricting ourselves to the univariate case for convenience, we apply this inequality to obtain a speed of strong uniform convergence for a class of density estimators, from which strong consistency of the mode estimator as defined in Chernoff (1964) is derived (Section 3). Also in the univariate case we give a discussion of our assumptions and some examples of processes that are covered by our set-up (Section 4). The present result on density estimation extends the local strong convergence of density estimators in Chanda (1983).
The remainder of this section is devoted to a specification of the model, the assumptions we need and some notation. For each j ∈ Z, the set of all integers, Z_j is a d-dimensional random vector (d ∈ N) defined on a probability space (Ω, F, P), and A_j is a given non-random d×d-matrix. The Z_j are referred to as the error terms.

ASSUMPTION 1.1. The Z_j (j ∈ Z) are independent and identically distributed. For i = 1,...,n (n ∈ N) the series on the right below

(1.1)    X_i = Σ_{j=0}^∞ A_j Z_{i-j}

converges in probability. Hence the sample elements X_1,...,X_n are well-defined random vectors in R^d and form a stationary general linear process. The common distribution function F of the X_i has marginals with derivatives f_j (j = 1,...,d) that are continuous and uniformly bounded by M ∈ (0,∞) on their respective supports in R.
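A truncated version of the series in Assumption 1.1 is easy to simulate; the following minimal Python sketch (the helper name and parameter values are ours, not the paper's) generates a d-dimensional sample with geometrically decaying matrix coefficients:

```python
import numpy as np

def linear_process(Z, A):
    """Truncated linear process X_i = sum_{j=0}^{m} A[j] @ Z[i-j].

    Z : array of shape (n + m, d), i.i.d. error-term rows
    A : array of shape (m + 1, d, d), non-random coefficient matrices
    Returns X of shape (n, d).
    """
    m = A.shape[0] - 1
    n = Z.shape[0] - m
    # X_i uses the errors Z_{i-m}, ..., Z_i (indices shifted to start at 0)
    return np.stack([
        sum(A[j] @ Z[m + i - j] for j in range(m + 1))
        for i in range(n)
    ])

rng = np.random.default_rng(0)
d, m, n = 2, 50, 1000
A = np.stack([0.7 ** j * np.eye(d) for j in range(m + 1)])  # geometric decay
Z = rng.normal(size=(n + m, d))
X = linear_process(Z, A)
print(X.shape)  # (1000, 2)
```

With coefficients decaying this fast the truncation error beyond m = 50 lags is negligible, which is exactly the situation the truncation argument of Section 2 exploits.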
For x = (x_1,...,x_d), y = (y_1,...,y_d) ∈ R^d we define x ≤ y to mean x_j ≤ y_j for all j = 1,...,d, and x < y is defined by the requirement x_j ≤ y_j for all j = 1,...,d with strict inequality for at least one j. For t ∈ R we write t* = (t,...,t) ∈ R^d; in particular ∞* = (∞,...,∞) and -∞* = (-∞,...,-∞). For m ∈ N we write

(1.2)    X_{i,m} = Σ_{j=0}^m A_j Z_{i-j},    X̃_{i,m} = X_i - X_{i,m}.

The distribution function of the X_i is denoted by F and that of the X_{i,m} by F_m. For arbitrary ε > 0 define

(1.3)    Ω(m,ε,i) = {-ε* ≤ X̃_{i,m} ≤ ε*},    Ω(m,ε) = ∩_{i=1}^n Ω(m,ε,i),

(1.4)    P(Ω^c(m,ε,i)) = γ(m,ε),    2^d γ(m,ε) + 2^d (1+2d) M ε = δ(m,ε).
It is clear that

(1.5)    P(Ω^c(m,ε)) ≤ n γ(m,ε).

In the example in Rosenblatt (1980) that we just mentioned, the dimension is d = 1, F is the uniform (0,1) distribution and the error terms are discrete. Therefore, we preferred not to impose any smoothness condition on the F_m. The smoothness of F as required by Assumption 1.1, however, entails

(1.6)    |F(x) - F_m(x)| = |F(x) - [P({X_i ≤ x + X̃_{i,m}} ∩ Ω(m,ε,i)) + P({X_i ≤ x + X̃_{i,m}} ∩ Ω^c(m,ε,i))]|
             ≤ max{F(x) - F(x - ε*) + γ(m,ε), F(x + ε*) - F(x) + γ(m,ε)}
             ≤ γ(m,ε) + 2 d M ε,    for all x ∈ R^d.

From this we obtain the useful relation

(1.7)    sup_{x ∈ R^d} |F(x) - F_m(x)| ≤ γ(m,ε) + 2 d M ε ≤ δ(m,ε).
In asymptotic situations we shall typically let m = m_n → ∞ as the sample size n → ∞, and ε = ε_n → 0, and accordingly write

(1.8)    γ_n = γ(m_n, ε_n),    δ_n = δ(m_n, ε_n),    Ω_n = Ω(m_n, ε_n).
The following assumption on the orders of magnitude will be needed.

ASSUMPTION 1.2. As n → ∞ there exist ε_n = O(n^{-a}) for some a > 1/2 and m_n = O(n^β) for some 0 < β < 1/2, such that

(1.9)    γ_n = O(n^{-ρ}),    for some ρ > 2,

and hence δ_n = O(n^{-a}).
Let us conclude with some more notation that is used throughout. For any W: R^d → R and x < y we define

(1.10)    W{x,y} = Δ_x^y W,

where Δ_x^y is the usual difference operator. Note that in this notation we may e.g. write P(x < X_i ≤ y) = F{x,y}. Throughout this paper the numbers A, B, C ∈ (0,∞) will be used as generic constants that may only depend on the dimension d. Hence these numbers are in fact independent of all the relevant parameters like in particular the distribution functions F and F_m (m ∈ N), and the sample size n.
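The difference-operator notation W{x,y} can be made concrete: for x < y in R^d, Δ_x^y W is the inclusion-exclusion sum of W over the 2^d corners of the box (x,y]. A small Python sketch (the function names are ours):

```python
import itertools
import numpy as np

def box_diff(W, x, y):
    """Difference operator Delta_x^y W: inclusion-exclusion over the 2^d
    corners of the box (x, y]; for a d.f. F this equals P(x < X <= y)."""
    d = len(x)
    total = 0.0
    for corner in itertools.product((0, 1), repeat=d):
        z = [y[j] if corner[j] else x[j] for j in range(d)]
        sign = (-1) ** (d - sum(corner))  # + when all endpoints are upper ones
        total += sign * W(z)
    return total

# Check with the d.f. of the uniform distribution on the unit square:
F = lambda z: np.prod(np.clip(z, 0.0, 1.0))
p = box_diff(F, [0.2, 0.1], [0.7, 0.6])
print(round(p, 4))  # (0.7-0.2)*(0.6-0.1) = 0.25
```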
2. A FUNDAMENTAL FLUCTUATION INEQUALITY
The empirical distribution function F̂_n based on the X_1,...,X_n will as usual be defined by

(2.1)    F̂_n(x) = n^{-1} #{1 ≤ i ≤ n : X_i ≤ x},    x ∈ R^d,

and the corresponding empirical process by

(2.2)    U_n = n^{1/2} (F̂_n - F).
An important role in our inequality is played by the function

(2.3)    ψ(λ) = 2 λ^{-2} ∫_0^λ log(1+x) dx,    λ > 0;    ψ(0) = 1.

This function is continuous on [0,∞) and ψ(λ) ↓ 0, as λ ↑ ∞.
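The integral in (2.3) has the closed form (1+λ)log(1+λ) - λ, which makes ψ easy to evaluate; a short numerical sketch (ours) of its continuity at 0 and its monotone decrease:

```python
import math

def psi(lam):
    """psi(lam) = 2 * lam**-2 * integral_0^lam log(1+x) dx, psi(0) = 1.
    The integral evaluates in closed form to (1+lam)*log(1+lam) - lam."""
    if lam == 0.0:
        return 1.0
    return 2.0 * ((1.0 + lam) * math.log1p(lam) - lam) / lam ** 2

# psi is continuous at 0 (values near 1 for small lam) and
# decreases to 0 as lam -> infinity
vals = [psi(lam) for lam in (0.0, 1e-6, 1.0, 10.0, 1000.0)]
print([round(v, 4) for v in vals])
```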
THEOREM 2.1. Let Assumption 1.1 be satisfied. For arbitrary m ∈ N and ε > 0 we have

(2.4)    P(sup_{a ≤ x < y ≤ b} |U_n{x,y}| ≥ λ) ≤ m C exp[-(A λ² / (m F{a,b})) ψ(B λ / (n^{1/2} F{a,b}))] + n γ(m,ε),

for any -∞* ≤ a < b ≤ ∞*, provided that

(2.5)    λ ≥ 4 n^{1/2} δ(m,ε),

(2.6)    F{a,b} ≥ 2 δ(m,ε).

PROOF. Without loss of generality we may and will assume that v = n/(2m+1) ∈ N, so that 2m+1 = n/v.
It is immediate from the definition of the X_{i,m} that, for each j ∈ {1,...,n/v}, the random vectors

(2.7)    X_{j+(i-1)n/v, m},    i ∈ {1,...,v},

are i.i.d. Let F̂^{(j)}_{v,m} denote the empirical distribution function based on the sample of size v in (2.7), F̂_{n,m} the empirical distribution function based on all of the X_{1,m},...,X_{n,m}, and

(2.8)    U^{(j)}_{v,m} = v^{1/2} (F̂^{(j)}_{v,m} - F_m),

(2.9)    U_{n,m} = n^{1/2} (F̂_{n,m} - F_m).

The following relation is obvious

(2.10)    U_{n,m} = (n/v)^{-1/2} Σ_{j=1}^{n/v} U^{(j)}_{v,m}.

It follows that

(2.11)    P(sup_{a ≤ x < y ≤ b} |U_{n,m}{x,y}| ≥ λ) ≤ Σ_{j=1}^{n/v} P(sup_{a ≤ x < y ≤ b} |U^{(j)}_{v,m}{x,y}| ≥ λ (n/v)^{-1/2}).
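The blocking device behind (2.7)-(2.11) is elementary to illustrate: X_{i,m} depends only on the errors Z_{i-m},...,Z_i, so indices spaced 2m+1 apart give independent truncated variables. A sketch in Python (the names are ours):

```python
def interlaced_blocks(n, m):
    """Split indices 0..n-1 into 2m+1 interlaced subsamples of size
    v = n/(2m+1). Within a subsample consecutive indices differ by
    2m+1 > m, so the truncated variables X_{i,m} (which depend only on
    the errors Z_{i-m},...,Z_i) are independent there, hence i.i.d."""
    step = 2 * m + 1
    v, rem = divmod(n, step)
    assert rem == 0, "the proof assumes v = n/(2m+1) is an integer"
    return [list(range(j, n, step)) for j in range(step)]

blocks = interlaced_blocks(n=30, m=2)
print(len(blocks), len(blocks[0]))  # 2m+1 = 5 subsamples, each of size v = 6
print(blocks[0])                    # [0, 5, 10, 15, 20, 25]
```

Each subsample is i.i.d., so classical fluctuation inequalities apply blockwise; (2.10) then recombines the blocks.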
To each of the probabilities on the right in (2.11) we may apply e.g. Ruymgaart and Wellner (1984, Theorem 1.1), since this inequality is well-known to remain true for independent and identically distributed d-dimensional random vectors with arbitrary range and arbitrary distribution function (see also e.g. Einmahl (1987, Inequality 2.5 and Section 6.3.c)). Application yields

(2.12)    P(sup_{a ≤ x < y ≤ b} |U_{n,m}{x,y}| ≥ λ) ≤ m C exp[-(A λ² / (m F_m{a,b})) ψ(B λ / (n^{1/2} F_m{a,b}))].
We may now pass from the U_{n,m}-process to the U_n-process, provided that we restrict the outcomes to the subset Ω(m,ε) in (1.3). Let us note that on Ω(m,ε)

(2.13)    |U_n{x,y}| ≤ max_± |U_{n,m}{x ∓ ε*, y ± ε*}| + n^{1/2} max_± |F_m{x ∓ ε*, y ± ε*} - F{x,y}|
              ≤ max_± |U_{n,m}{x ∓ ε*, y ± ε*}| + n^{1/2} δ(m,ε),    for all x < y,

in view of Assumption 1.1 and (1.7). It is immediate from (2.13) that

(2.14)    sup_{a ≤ x < y ≤ b} |U_n{x,y}| 1_{Ω(m,ε)} ≤ sup_{a-ε* ≤ x < y ≤ b+ε*} |U_{n,m}{x,y}| + n^{1/2} δ(m,ε),
and application of (2.12) yields (note (2.5))

(2.15)    P(sup_{a ≤ x < y ≤ b} |U_n{x,y}| ≥ λ)
              ≤ m C exp[-(A (λ - n^{1/2} δ(m,ε))² / (m F_m{a-ε*, b+ε*})) ψ(B (λ - n^{1/2} δ(m,ε)) / (n^{1/2} F_m{a-ε*, b+ε*}))] + P(Ω^c(m,ε)).

Let us now recall (1.7), note that F{a,b} - δ(m,ε) ≤ F_m{a-ε*, b+ε*} ≤ F{a,b} + δ(m,ε) and that v/n = 1/(2m+1), use the fact that ψ is decreasing on [0,∞) and exploit the generic character of the numbers A, B, C ∈ (0,∞) to arrive at

(2.16)    P(sup_{a ≤ x < y ≤ b} |U_n{x,y}| ≥ λ)
              ≤ m C exp[-(A (λ - n^{1/2} δ(m,ε))² / (m (F{a,b} + δ(m,ε)))) ψ(B λ / (n^{1/2} F{a,b}))] + n γ(m,ε).

We finally obtain (2.4) by observing that conditions (2.5) and (2.6) imply (λ - n^{1/2} δ(m,ε))² ≥ (1/4) λ², F{a,b} + δ(m,ε) ≤ (3/2) F{a,b} and F{a,b} - δ(m,ε) ≥ (1/2) F{a,b}. Q.E.D.
Let us now specialize to d = 1, introduce the uniform (0,1) random variables F(X_1) = ξ_1,...,F(X_n) = ξ_n and write Ĝ_n for their empirical distribution function. The corresponding reduced empirical process is written

(2.17)    Ũ_n = {Ũ_n(t) = n^{1/2} (Ĝ_n(t) - t), t ∈ [0,1]}.

It is clear that we have the relation

(2.18)    U_n(x) = Ũ_n(F(x)),    x ∈ R.

Although the ξ_1,...,ξ_n do not form a linear process, the following result is nevertheless immediate from Theorem 2.1 and (2.18).
COROLLARY 2.1. Take d = 1 and let Assumption 1.1 be fulfilled. For arbitrary m ∈ N and ε > 0 the reduced empirical process in (2.17) satisfies

(2.19)    P(sup_{a ≤ s < t ≤ b} |Ũ_n{s,t}| ≥ λ) ≤ m C exp[-(A λ² / (m (b-a))) ψ(B λ / (n^{1/2} (b-a)))] + n γ(m,ε),

for any 0 ≤ a < b ≤ 1, provided that

(2.20)    λ ≥ 4 n^{1/2} δ(m,ε),    b - a ≥ 2 δ(m,ε).
If in addition Assumption 1.2 is fulfilled and if we also choose a = a_n, b = b_n and λ = λ_n, the bound in (2.19) often can be simplified.

COROLLARY 2.2. Take d = 1 and let both Assumption 1.1 and Assumption 1.2 be satisfied. Let us choose a_n, b_n ∈ [0,1] such that b_n - a_n = n^{-γ} for some 0 < γ < a, and λ_n = c n^θ for some c ∈ (0,∞) and θ ≥ 0 with θ + γ < 1/2. Then there exists n(a,ρ,γ,θ) ∈ N such that

(2.21)    P(sup_{a_n ≤ s < t ≤ b_n} |Ũ_n{s,t}| ≥ λ_n) ≤ C n^β exp[-A c² n^{2θ+γ-β}] + C n^{1-ρ},

provided that n ≥ n(a,ρ,γ,θ).
PROOF. Let us just note that the conditions on the parameters entail that B λ_n / (n^{1/2} (b_n - a_n)) = B c n^{θ+γ-1/2} → 0, as n → ∞, so that ψ(B λ_n / (n^{1/2} (b_n - a_n))) → 1, as n → ∞, and hence may be absorbed in the generic constant A for n sufficiently large. It is, moreover, clear that condition (2.20) is automatically fulfilled for n sufficiently large. Q.E.D.
3. DENSITY AND MODE ESTIMATION

Throughout this section we take d = 1. We will consider so called naive estimators of F' = f, defined by

(3.1)    f̂_n(x) = (2 ℓ_n)^{-1} F̂_n{x - ℓ_n, x + ℓ_n},    x ∈ R,

for some bandwidth ℓ_n > 0. For the expected values we write

(3.2)    E f̂_n(x) = (2 ℓ_n)^{-1} F{x - ℓ_n, x + ℓ_n} = f̄_n(x),    x ∈ R.
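The estimator (3.1) is a moving-window (uniform-kernel) estimator and can be sketched directly from the definition. The Python below uses illustrative names of our own, with an i.i.d. normal sample standing in for the linear-process sample:

```python
import numpy as np

def naive_density(sample, x, bandwidth):
    """Naive estimator (3.1): f_hat(x) = F_hat{x - l, x + l} / (2 l),
    i.e. the fraction of observations in (x - l, x + l] divided by 2 l."""
    sample = np.asarray(sample)
    x = np.atleast_1d(x)
    counts = np.array([
        np.sum((sample > xi - bandwidth) & (sample <= xi + bandwidth))
        for xi in x
    ])
    return counts / (2.0 * bandwidth * sample.size)

rng = np.random.default_rng(1)
X = rng.normal(size=5000)          # i.i.d. stand-in for the sample X_1,...,X_n
grid = np.linspace(-3, 3, 61)
f_hat = naive_density(X, grid, bandwidth=0.3)
# near 0 the standard normal density is about 0.3989
print(round(float(f_hat[30]), 2))
```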
THEOREM 3.1. Let Assumptions 1.1 and 1.2 be satisfied, choose ℓ_n = n^{-γ} with γ = (1-β)/5, and suppose in addition that F' = f is twice continuously differentiable with |f''| bounded by M̃ ∈ (0,∞) on R. Then for any 0 < ξ < 2(1-β)/5 we have

(3.3)    n^ξ sup_{x ∈ R} |f̂_n(x) - f(x)| → 0 a.s., as n → ∞.

PROOF. The present smoothness conditions entail that

(3.4)    sup_{x ∈ R} |f̄_n(x) - f(x)| = O(ℓ_n²) = O(n^{-2(1-β)/5}),

so that it remains to consider f̂_n - f̄_n. Let us write

(3.5)    t^±_{x,n} = F(x ± ℓ_n),

and note that

(3.6)    |t^+_{x,n} - t^-_{x,n}| ≤ 2 M ℓ_n,    for all x ∈ R.

Choosing k_n = [1/(2 M ℓ_n)] ∈ N, where [z] is the greatest integer ≤ z ∈ R, let us partition [0,1] into the subintervals

(3.7)    (t_{j-1}, t_j],    t_j = j/k_n,    j = 1,...,k_n.

Let us note that the length of the intervals in (3.7) is of order O(n^{-(1-β)/5}) but at least 2 M ℓ_n, so that any interval (t^-_{x,n}, t^+_{x,n}] intersects at most two adjacent subintervals in (3.7). In terms of the reduced empirical process in (2.17) we have

(3.8)    P_n(c) = P(n^ξ sup_{x ∈ R} |f̂_n(x) - f̄_n(x)| ≥ c) ≤ Σ_{j=1}^{k_n} P(sup_{t_{j-1} ≤ s < t ≤ t_{j+1}} |Ũ_n{s,t}| ≥ 2 c ℓ_n n^{1/2-ξ}),

for arbitrary c ∈ (0,∞). Because t_j - t_{j-1} ≥ 2 M ℓ_n = 2 M n^{-γ} for all j = 1,...,k_n, Corollary 2.2 applies and yields for n sufficiently large

(3.9)    P_n(c) ≤ (C/M) n^{β+γ} exp[-A c² n^{1-2ξ-γ-β}] + C n^{1-ρ}.

Because 1-2ξ-γ-β > 1 - 4(1-β)/5 - (1-β)/5 - β = 0 and ρ > 2 by Assumption 1.2, it is clear that Σ_{n=1}^∞ P_n(c) < ∞, and (3.3) follows from the Borel-Cantelli lemma. Q.E.D.
In Chanda (1985) a class of linear estimators of the form Σ_{i=1}^n a_i X_i for estimating the symmetry point of a symmetric density is considered. The asymptotics of location estimators that are linear combinations of order statistics of the form Σ_{i=1}^n a_i X_{i:n} might be investigated in a forthcoming paper. The previous theorem enables us, however, to simply prove a speed of a.s. convergence of the mode estimator that was first considered in Chernoff (1964) and based on naive density estimators.
THEOREM 3.2. In addition to the conditions of Theorem 3.1 let us assume that f has a unique maximum at θ. Let f̂_n have a maximum at θ̂_n. Then we have

(3.10)    n^{ξ/2} (θ̂_n - θ) → 0 a.s., as n → ∞.

PROOF. Let us choose ξ as in (3.10) and let Ω_0 be a subset of Ω with P(Ω_0) = 1 on which (3.3) holds true. For any ω ∈ Ω_0 and e > 0 there exists n_1 = n_1(ω,e) such that

(3.11)    f(θ̂_n) ≥ f̂_n(θ̂_n) - e n^{-ξ} ≥ f̂_n(θ) - e n^{-ξ} ≥ f(θ) - 2 e n^{-ξ},    for n ≥ n_1.

This entails that

(3.12)    θ̂_n(ω) ∈ {x ∈ R : f(x) ≥ f(θ) - 2 e n^{-ξ}}.

Moreover, there exist δ = δ(θ) > 0 such that f''(x) ≤ -c < 0 for all x ∈ (θ-δ, θ+δ), and n_2 such that the set on the right in (3.12) is contained in (θ-δ, θ+δ) for n ≥ n_2. Hence for n ≥ n_2 we have

θ̂_n(ω) ∈ (θ - 2 (e/c)^{1/2} n^{-ξ/2}, θ + 2 (e/c)^{1/2} n^{-ξ/2}).

Since e > 0 is arbitrary this proves (3.10). Q.E.D.
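A maximizer of the naive estimate over a grid gives a Chernoff-type mode estimate. A minimal sketch (names and parameter values ours, with an i.i.d. normal sample used purely for illustration):

```python
import numpy as np

def mode_estimate(sample, bandwidth, grid):
    """Chernoff-type mode estimate: a maximizer over `grid` of the naive
    density estimate f_hat(x) = #{i : x-l < X_i <= x+l} / (2 n l)."""
    sample = np.asarray(sample)
    f_hat = np.array([
        np.sum((sample > x - bandwidth) & (sample <= x + bandwidth))
        for x in grid
    ]) / (2.0 * bandwidth * sample.size)
    return grid[np.argmax(f_hat)]

rng = np.random.default_rng(2)
X = rng.normal(loc=1.5, size=4000)   # true mode theta = 1.5
grid = np.linspace(-2, 5, 701)
theta_hat = mode_estimate(X, bandwidth=0.4, grid=grid)
print(round(float(theta_hat), 1))
```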
4. DISCUSSION AND EXAMPLES

Also in this section we choose d = 1. The assumptions in Section 1 will be discussed by showing that they are satisfied for two examples of subclasses of the class of linear processes.
AR(1)-PROCESSES. For 0 < r < 1 let us consider

(4.1)    X_i = Σ_{j=0}^∞ r^j Z_{i-j}.

It is usually assumed that the error terms Z_j have E(Z_j) = 0 and Var(Z_j) = σ² ∈ (0,∞), so that the convergence in (4.1) is in quadratic mean and consequently in probability as we require. Let us write φ(t) = E(exp(i t Z_j)), t ∈ R. Assuming that

(4.2)    ∫_{-∞}^∞ |φ(t)| dt < ∞

entails that the distribution of the Z_j has a continuous density bounded by the number on the left in (4.2). It is easily seen that (4.2) also implies that the X_i have a distribution with bounded continuous density. Hence Assumption 1.1 is satisfied.
Let us next note that, for any ε > 0 and m ∈ N, Chebyshev's inequality gives

(4.3)    γ(m,ε) = P(|Σ_{j=m+1}^∞ r^j Z_{i-j}| > ε) ≤ ε^{-2} σ² r^{2(m+1)} / (1 - r²).

Choosing ε = ε_n = n^{-a} and m = m_n = [n^β], for any a > 1/2 and 0 < β < 1/2, we find that

(4.4)    γ_n ≤ c_1 n^{2a} r^{2 n^β},

for some number c_1 ∈ (0,∞). It is obvious that for each ρ > 2 there exists a number c_2 = c_2(ρ) ∈ (0,∞) such that

(4.5)    γ_n ≤ c_2 n^{-ρ},    for all n ∈ N.

Hence Assumption 1.2 is amply fulfilled.
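That the fulfillment is "ample" is easy to check numerically: a bound of the form in (4.4) decays faster than any power n^{-ρ}. A quick sketch (the parameter values are ours, chosen only for illustration):

```python
import math

# Illustrative check of (4.4)-(4.5): the bound c1 * n**(2a) * r**(2 * n**b)
# eventually falls below any power n**(-rho).
r, a, b, c1, rho = 0.5, 0.6, 0.4, 1.0, 3.0

def gamma_bound(n):
    return c1 * n ** (2 * a) * r ** (2 * n ** b)

# ratio of the geometric-type bound to n**(-rho): tends to 0 as n grows
ratios = [gamma_bound(n) / n ** (-rho) for n in (10, 100, 1000, 10000)]
print(["%.3e" % x for x in ratios])
```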
The counterexample by Rosenblatt (1980), considered in Bradley (1986) and mentioned in the introduction, is the special case where r = 1/2 and the Z_j are {0,1}-variables with P(Z_j=0) = P(Z_j=1) = 1/2. Although the Z_j are discrete the X_i turn out to have the uniform distribution on (0,1), so that the assumptions are still fulfilled. It is interesting that this process is not strongly mixing.
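The Rosenblatt example can also be checked numerically: with r = 1/2 the recursion X_i = (X_{i-1} + Z_i)/2 amounts to reading the Z's as binary digits, so the marginals are uniform on (0,1). A truncated simulation (parameter values ours; it compares the first two moments with those of the uniform (0,1) law):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20000, 40                      # truncating at m = 40 lags: error < 2**-40
Z = rng.integers(0, 2, size=n + m)    # fair {0,1} error terms
w = 0.5 ** np.arange(1, m + 2)        # weights 1/2, 1/4, ... from X_i = (X_{i-1} + Z_i)/2
X = np.array([w @ Z[i + m - np.arange(m + 1)] for i in range(n)])

# marginals are (numerically) uniform on (0,1): mean 1/2, variance 1/12
print(round(float(X.mean()), 2), round(float(X.var()), 3))
```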
ERROR TERMS WITH STABLE D.F. Let us now assume that the Z_j have a symmetric stable distribution with scale parameter 1 and index 0 < μ < 1, so that the first moment doesn't even exist. According to Leadbetter, Lindgren and Rootzen (1983, p. 73) the series

(4.6)    X_i = Σ_{j=0}^∞ a_j Z_{i-j}

converges a.s. if and only if

(4.7)    Σ_{j=0}^∞ |a_j|^μ < ∞.

Since condition (4.2) is obviously fulfilled in the stable case, Assumption 1.1 is satisfied provided that (4.7) holds true.
Let us now assume that

(4.8)    |a_k| ≤ c k^{-τ},    for some τ > 1/μ and c ∈ (0,∞).

Using Leadbetter, Lindgren and Rootzen (1983, p. 74) we see that for some numbers c_1 = c_1(μ), c_2 = c_2(μ) ∈ (0,∞), and any ε > 0 and m ∈ N, we have

(4.9)    γ(m,ε) ≤ c_1 ε^{-μ} Σ_{k=m+1}^∞ |a_k|^μ ≤ c_2 ε^{-μ} m^{1-τμ}.

Choosing ε = ε_n = n^{-a} and m = m_n = [n^β], for any a > 1/2 and 0 < β < 1/2, we obtain

(4.10)    γ_n ≤ c_3 n^{aμ + β(1-τμ)},

where c_3 = c_3(μ) ∈ (0,∞). It follows that

(4.11)    γ_n = O(n^{-ρ}),    for each ρ > 2,

for τ satisfying

(4.12)    τ > (ρ + a μ + β)/(β μ).

(Note that (4.12) implies τ > 1/μ, so that (4.7) is automatically satisfied.) Relation (4.12) also shows that for any

(4.13)    τ > 1 + 5/μ

we can find ρ > 2, a > 1/2 and 0 < β < 1/2 such that (4.12) is fulfilled. Hence we have established simple conditions under which Assumption 1.2 is satisfied.
The purpose of this example was to provide a useful situation in which the assumptions are easily seen to be satisfied, although first moments do not even exist. It is not excluded, however, that in this case a strong mixing condition may be fulfilled.
REFERENCES

[1] ANDERSON, T.W. (1971). The Statistical Analysis of Time Series. Wiley, New York.

[2] BASAWA, I.V. and PRAKASA RAO, B.L.S. (1980). Statistical Inference for Stochastic Processes. Ac. Press, New York.

[3] BRADLEY, R.C. (1986). Basic properties of strong mixing conditions. In: Dependence in Probability and Statistics (E. Eberlein and M.S. Taqqu, Eds.), pp. 165-192. Birkhauser, Boston.

[4] CHANDA, K.C. (1983). Density estimation for linear processes. Ann. Inst. Statist. Math. 35, 439-446.

[5] CHANDA, K.C. (1985). Sampling distribution for a class of estimators for nonregular linear processes. Statist. Probab. Letters 3, 261-268.

[6] CHERNOFF, H. (1964). Estimation of the mode. Ann. Inst. Statist. Math. 16, 31-41.

[7] EINMAHL, J.H.J. (1987). Multivariate Empirical Processes. CWI tract 32, Amsterdam.

[8] HANNAN, E.J. (1970). Multiple Time Series. Wiley, New York.

[9] HAREL, M. and PURI, M.L. (1987). Convergence faible de la statistique serielle lineaire de rang en condition de dependance avec applications aux series chronologiques et processus de Markov. C.R. Acad. Sci. Paris, t.304, serie 1, no. 19, 583-586.

[10] LEADBETTER, M.R., LINDGREN, G. and ROOTZEN, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer, New York.

[11] MEHRA, K.L. and RAO, M.S. (1975). Weak convergence of generalized empirical processes relative to d_q under strong mixing. Ann. Probability 3, 979-991.

[12] ROSENBLATT, M. (1980). Linear processes and bispectra. J. Appl. Probability 17, 265-270.

[13] RUYMGAART, F.H. and WELLNER, J.A. (1984). Some properties of weighted multivariate empirical processes. Statist. Decisions 2, 199-223.